The Dark Mod Forums


Search results for '/tags/forums/images/' or tags 'forums/images/q=/tags/forums/images/&'.

  1. I just read that @motorsep discovered that you are able to create a brush, then select it and right-click "create light". Now you have a light that has the radius of the former brush. I just read it on Discord and thought it may be of use for some people on the forums here too.
  2. I think the reason the dev forums exist is to provide a place where the implementation of features can be discussed without getting mixed up with other debates when someone believes what the devs are doing is wrong. We often post public discussion threads for features with subjective elements, like the frob outline, because community feedback is very important. But there will always be vocal defenders with strong views for or against certain features, or about how exactly they should be implemented. At some point a decision has to be made and carried through, which is what the dev forums are for. Almost all of the threads are very technical, basically explaining and discussing recent or potential code changes with other devs. It's hard to say. It's a hobby the devs do in their spare time, so people come and go when they're in the mood and when they have the time. The team page is mostly accurate except for some relatively newer additions like myself.
  3. Agree with this; it's risky for a forum to publish downloadable copyrighted content, because the person responsible is the owner/admin of the forum who allows it, not the user who posted the link. The regulations of each country apply here, though. German forums, in my experience, take this to an extreme and do not even allow images, unless they are the poster's own, without checking beforehand whether they are copyrighted. I certainly find this somewhat exaggerated. In any case, it is a good habit to cite the sources of content that is posted. Nothing bad should happen if someone posts a link to a page, for example one dedicated to downloading games, provided it can be verified that the downloads are virus-free; the user does not have to know whether the game is legal or not. The forum's responsibility in this case can only extend to checking security, not whatever rights the games on the list may carry. It is also necessary to differentiate between a download for private use, which is practically always legitimate, and one for commercial use, which is very different and where copyright becomes relevant. Even there the nuances are sometimes absurd: the Eiffel Tower can be photographed during the day without any problem, but photographing it lit up at night is illegal because the lighting is copyrighted, which can be a problem when posting a photo of the Eiffel Tower at night.
    1. Obsttorte
    2. Bikerdude

      Bikerdude

      He changed it a long while back; it was so that he would be using the same name as he uses on other forums.

  4. Are you referring to the image quality? It's because of the file size limitation of the forum. I could probably make the images better than just scaling them down to 25% and saving them as GIFs, but I figured that you've seen crates and stairs before, right?
  5. This is already possible to a degree. It takes two things: first you use the language model of the AI to "interrogate" the image and find what text description would lead to generating something close to it. Then you basically run the diffusion algorithm, which normally generates images from noise, in reverse, to gradually generate noise from an image. This gives you the seed and description, and the result is often very close to the original image. I was playing with it yesterday because it then allows you to change some details without changing the whole image, and that allows you to take the portraits of your women friends and transform them into being old and unattractive, which, I found, they all universally appreciate and find kind and funny. The early implementation in Stable Diffusion is just a test and it's quite imperfect (it produces noisy images with overblown colours, among other things), plus it's limited by the quality of normal SD image generation, but as a proof of concept it obviously works and will probably get better soon. However, I don't see this happening any time soon. The AIs work with bitmaps and are trained on a specific image size (with Stable Diffusion it's 512 x 512 px at the moment), and making anything significantly smaller or larger completely breaks the process. Various AI upsampling algorithms exist, but they never work as well as generating the image straight up at the resolution the neural net was optimized for. And I don't know about any practical solutions to this yet.
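A minimal sketch of the "change some details without regenerating the whole image" step mentioned in the post above, using the Hugging Face diffusers img2img pipeline. The checkpoint name, prompt, file names, and strength value are illustrative assumptions, not details from the post, and the interrogation/inversion step the post describes is a separate feature not shown here.

```python
# Sketch: guide Stable Diffusion with an existing image (img2img).
# Assumptions: a CUDA GPU, the diffusers library, and an input.png on disk.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))

# A fixed seed makes the run reproducible; "strength" controls how far the
# result may drift from the original image (0 = keep it, 1 = ignore it).
generator = torch.Generator("cuda").manual_seed(1234)
result = pipe(
    prompt="portrait of an elderly woman, oil painting",
    image=init_image,
    strength=0.4,
    guidance_scale=7.5,
    generator=generator,
).images[0]
result.save("output.png")
```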
  6. MidJourney does some really mind-blowingly phenomenal creations, stuff that can be Escher-esque, Dali impressionistic, Thomas Kinkade landscapes, etc. However it's only free for making around 20 or so images and has a tiered payment system for more. The Discord channel does let subscribers post their own creations, and many of them are quite creative at getting the MidJourney engine to render some awesome stuff. Definitely worth the scrolling if you are like me and like to download high quality images for a revolving screen saver.
  7. I just found this thread on ttlg listing Immersive Sims: https://www.ttlg.com/forums/showthread.php?t=151176
  8. That moment you log into TDM forums and suddenly feel nostalgic...

    1. Sotha

      Sotha

      Protip: if you never log off and stay for ever, there is no nostalgia when you visit.

    2. Melan

      Melan

      Welcome back!

    3. RPGista

      RPGista

      Haha yeah, I feel like that from time to time. Good to see you around.

  9. Actually, Nvidia has had image models for generating photorealistic faces for years now. They have gotten better over the years. https://developer.nvidia.com/blog/generating-photorealistic-fake-celebrities-with-artificial-intelligence/ Applying 2D text-to-image algorithms to modern games seems unlikely, with the exception of making lots of textures and maybe 2D portraits for UI/character creation, but there are many other algorithms being worked on. Maybe a similar approach could be used to replace procedural generation techniques. Like making a cave/dungeon in Skyrim, or that thing Tels was working on a decade ago. Rather than making a raster image, it could make geometry, place textures, and design a whole city. Bring on the negative societal implications of bots invading art, I'm all for it. But one thing to watch out for is the copyright question. These image models can be trained on a superset of copyrighted images or a smaller focused subset (to mimic an artist's style) and produce images that could lead to novel legal questions and expensive copyright lawsuits. This is not a problem for people making memes and shitposting online, but it could be a massive problem for game developers, big or indie. Save a few bucks on art, get sued into oblivion. Maybe we'll see Business Software Alliance style shakedowns of game developers? "Where'd you get these sprites, EH?" Sounds like vozka has made some textures with it? These can be scaled up from small 512x512 sizes to higher resolution with a separate upscaling algorithm, that's what people have been doing to make stuff presentable along with touchups in Photoshop/GIMP. Whether the results are any good is another story, maybe vozka should post their results.
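As one concrete way to do the "separate upscaling algorithm" step mentioned above, here is a minimal sketch using the diffusers x4 upscaler. The checkpoint, prompt, and file names are assumptions for illustration only; classic upscalers such as ESRGAN are a common alternative.

```python
# Sketch: upscale a small 512x512 generation with a dedicated upscaling model.
# Note: upscaling a full 512x512 image this way needs a lot of VRAM; people
# often tile the image or fall back to a classic upscaler instead.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("texture_512.png").convert("RGB")      # e.g. a 512x512 SD output
upscaled = pipe(prompt="seamless stone wall texture", image=low_res).images[0]
upscaled.save("texture_2048.png")                            # 4x the input resolution
```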
  10. Yes, I used control nets with a prompt describing the subject and style. There's also a seed number, which by default is randomized, so each time you get slightly different results. When you get something that looks OK you can make some changes in an editing program and then run it through img2img, again with a prompt. You can do inpainting, where you mask the parts you want to alter. You can set weights that tell the program how strictly it should stick to the prompt or to the images that are used as the input. There are negative prompts too. Here are Cyberpunk concept art pieces that I converted using Stable Diffusion. It took quite a lot of work and manual editing. Original artwork by Marta Detlaff and Lea Leonowicz.
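A minimal sketch of the masked inpainting pass with a fixed seed and a negative prompt, as described in the workflow above, using the diffusers library. The checkpoint, prompts, and file names are assumptions for illustration, not taken from the post.

```python
# Sketch: repaint only the masked region of an image (inpainting).
# Assumptions: diffusers, a CUDA GPU, and concept.png / mask.png on disk.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("concept.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))   # white = repaint

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed -> repeatable result
result = pipe(
    prompt="stone archway, dark fantasy concept art",
    negative_prompt="blurry, oversaturated, text, watermark",
    image=image,
    mask_image=mask,
    guidance_scale=7.5,   # how strictly to follow the prompt
    generator=generator,
).images[0]
result.save("concept_inpainted.png")
```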
  11. Recently revisiting the forums after a longer period of time, I wanted to check the unread content. I don't know if I have been doing this wrong since... ever... but on mobile (visiting the unread content page on my smartphone) you have to click on that tiny speech bubble to go to the most recent post in a thread. If you don't click correctly you'll hit the headline and end up at post 1 at the beginning of the thread. It's terrible on mobile: not only is the speech bubble really small and easy to miss, but the thread headline is just millimeters away from it, so you go right to the first post that was ever made instead of the most recent ones. Am I doing it wrong? I just want to go through unread content and then to the newest post from that topic.
  12. I didn't want to spam this thread even more since I wasn't the one who was asked, but since you already replied: img2img does not work well for this use case, as it only takes the colors of the original image as a starting point. Therefore it would either stay black & white and sketch-like, or deviate significantly from the sketch in every way. Sometimes it's possible to find a balance, but it's time consuming and it doesn't always work. This probably used control nets: specialized add-on neural nets that are trained to guide the diffusion process using auxiliary images such as normal maps, depth maps, edge detection results and others. There's also a control net trained on scribbles, which is what I assume Arcturus used. It still needs a text prompt, and the control net functions as an added element to the standard image generation process, but it allows you to extract the shapes and concepts from the sketch without also using its colors.
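A minimal sketch of the scribble-guided generation described above, using the diffusers library: a control net extracts shapes from a rough sketch while the text prompt supplies colour and style. The checkpoint names, prompt, and file names are assumptions, not details from the post.

```python
# Sketch: scribble ControlNet on top of a standard Stable Diffusion pipeline.
# Assumptions: diffusers, a CUDA GPU, and sketch.png (white lines on black,
# the format the scribble control net is typically trained on).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = Image.open("sketch.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="medieval street at night, warm lantern light, oil painting",
    image=scribble,                 # guides shapes and composition, not colours
    num_inference_steps=30,
).images[0]
result.save("from_sketch.png")
```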
  13. @datiswous Ah yeah, well, sorry, I was quite busy and only visiting Discord. First time here on the forums in months now, I think. Thank you for the subtitles. I encourage everyone who is interested in using them to download them from here, as I'm not sure when I'll be able to implement them myself into the mission. Again, thank you for your work.
  14. Curious, I was wanting to make an animated avatar but does our site accept gif images for profiles as some do?

    1. Show previous comments  8 more
    2. Anderson

      Anderson

      That is offensive. Also, the site isn't nerdy. Hit me with a stone, gouge out my eyes, strangle me and hang me upside down if there are more than 5 gothic games still being developed. GOOD ONES.

    3. Airship Ballet

      Airship Ballet

      Look at you, being all passionately nerdy about nerd things.

    4. Anderson

      Anderson

      How rude of you. I hope you're happy now that my week was ruined by your tasteless humour.

  15. https://wiki.thedarkmod.com/index.php?title=Briefing#Controlling_Where_the_Player_Starts In the section about "Controlling Where the Player Starts" there is a download link, but it's a dead link. Does somebody have this altered mainmenu_briefing.gui? Or otherwise, is there a mission where it's used? I thought maybe it could be added to core, to the same file, but have that section edited out by default, or make a renamed version? Same for Button Controlled Animated Briefing and Timed Flowing Briefing (although maybe without the example images); I think they should be things in core. They don't take up much space and then they never get lost. Edit: Test map: https://drive.google.com/file/d/1YQQknGlVJE9TJyItE_H2KjggvrIAuhnf/view?usp=sharing (with a working self-made test mainmenu_briefing.gui file). When you start normally, the start location is near the sandbag. When you click on the text in the last frame of the briefing, the mission starts near the bucket on the opposite side of the room instead. Currently only works up to TDM 2.09.
    1. demagogue
    2. jaxa

      jaxa

      I've found it difficult to find where TDM is listed as #1 on Greenlight. This page ( https://steamcommunity.com/greenlight/ ) has no ranked listing. This one ( https://steamcommunity.com/sharedfiles/filedetails/?id=858048394 ) has no visible rank or stats page. Is it my script blocker?

  16. DarkRadiant 3.7.0 is ready for download. What's new:
      • Feature: Skin Editor (see video)
      • Improvement: Script Window usability improvements
      • Fixed: Hitting escape while autosaving crashes to desktop
      • Fixed: Def parsing problem in tdm_playertools_lockpicks.def
      • Fixed: DR hangs if selecting a lot of entities with the entity list open
      • Fixed: Float Property Editor's entry box is sticking around after selecting a float key
      • Fixed: Spline entities without a model spawnarg are unselectable
      • Fixed: Entity window resets interior sizing, forcing a resize each time it is opened
      • Fixed: Spline curves should not be created with a model spawnarg
      • Fixed: Newly appended curve control vertices aren't shown at first
      • Fixed: Light entities are zoomed out in the preview window
      • Fixed: Entity Inspector spawnarg fields not always updated by UI windows such as the Model Chooser
      Windows and Mac downloads are available on GitHub: https://github.com/codereader/DarkRadiant/releases/tag/3.7.0 and of course linked from the website https://www.darkradiant.net. Thanks to all the awesome people who keep using DarkRadiant to create Fan Missions - they are the main reason for me to keep going. Please report any bugs or feature requests here in these forums, following these guidelines: Bugs (including steps for reproduction) can go directly on the tracker. When unsure about a bug/issue, feel free to ask. If you run into a crash, please record a crashdump: Crashdump Instructions. Feature requests should be suggested (and possibly discussed) here in these forums before they may be added to the tracker. The list of changes can be found in our bugtracker changelog. Keep on mapping!
  17. Yes, it does. Which makes it interesting that you yourself explicitly said that it's interesting nobody had complained here on the official forums: I did, which is why it stood out to me so much that, even though you yourself had personally been involved, you would reply claiming nobody had complained here on the official forums. I'm not colorblind at all. Does that make people pointing out that almost no modern games have proper colorblindness support hyperbole? Just because it doesn't affect you, or you choose not to pay attention to the discussion of something, doesn't make it hyperbole. Pick pretty much any modern FPS and you will find plenty of discussion about the near universal disregard for FOV and camera movement as accessibility issues. Denigrating those as hyperbole because you personally don't feel the effects is as bad a look as demeaning people who bring up the importance of valid allergen warnings like gluten, or colorblindness and deafness support.
  18. Ignoring is somewhat inadequate, as you still see other members engaging in a discussion with the problematic user, and as Wellingtoncrab says such discussions displace all other content within that channel. Moderation is also imperfect, as being unpleasant to engage with is not in itself banworthy, so there is nothing more to be done if such people return to their old behaviour after a moderator has had a talk with them, except live with it or move away. I'd be more willing to deal with it if it felt like there were more on-topic discussion, i.e. thoughts about recently played fan missions or mappers showcasing their progress, rather than a stream of consciousness about a meta topic that may or may not have to do with TDM. I guess the forums already serve the desired purpose, or they just compartmentalise discussions better.
  19. Currently the fieldwidths are a fixed proportion of the screen width (e.g., 90%). In the example images, the suggestion is that the fieldwidth would instead be adjusted for the aspect ratio. So a fieldwidth of 90% on a 4:3 screen becomes, on a 16:9 screen (with a 0.75 reduction), a fieldwidth of 67.5%. That is, about two thirds of the screen.
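The arithmetic behind that example, as a small sketch. The 90% base fieldwidth and the 4:3 and 16:9 ratios are the values given above; the function name is just for illustration.

```python
# Fieldwidth adjusted for aspect ratio: scale the 4:3 proportion by
# (4/3) / (actual aspect ratio), so wider screens get a narrower field.
def adjusted_fieldwidth(base_fraction: float, aspect_w: int, aspect_h: int) -> float:
    return base_fraction * (4 / 3) / (aspect_w / aspect_h)

print(adjusted_fieldwidth(0.90, 4, 3))    # 0.90  -> unchanged on a 4:3 screen
print(adjusted_fieldwidth(0.90, 16, 9))   # 0.675 -> 67.5% on 16:9 (a 0.75 reduction)
```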