The Dark Mod Forums

Showing results for '/tags/forums/learning mapping/'.

  1. I don't recall a system for noise masking. It sounds like it'd be a good idea, but when you get into the details you realize it'd be complicated to implement. It's not only the noise level that goes into it, I think. E.g., a high register can cut through even a loud but low-register rumble, and the .wav file doesn't even carry data on the register of what it's playing. So either you have to add metadata (which is insane), or you have to have a system that literally checks pitch on the .wav data and parameterizes it over time, so you know when it's going to cut through the parameters from other sounds. For that matter, the file doesn't carry loudness data either, so you'd have to extract that too and line the peaks up correctly with the "simultaneous" moment at arbitrary points in every other sound file. And then position is going to matter independently for each AI, so it's not like you can have one computation that works the same for all AI. You'd have to compute the masking level for each one, and then you get into the expense you're mentioning (there's a rough sketch of what such a per-AI check would have to juggle after this list). I know there was a long discussion about it in the internal forums, and probably on the public subforums too, but it was so long ago I can't even remember the gist of it. Anyway, the main issue is I don't know if you'll find a champion who wants to work on it. But if you're really curious to see how it might work, you could always try your hand at coding and implementing it. Nothing beats a good demo to test an idea in action, and there's no better way to learn how to code than a little project like that. I always encourage people to try to implement an idea they have, whether or not it's a good idea, just because it shows the power of an open-source game: we fans can try anything we want and see if it works!
  2. I'm using the version from kcghost. I just tested and I can't see any difference inside the inventory. On the stats itself it doesn't show the different loot types (still seen in the inventory), but instead gives more info on stealth score. Edit: I see Dragofer made an updated version of his script. I have to check that out. Edit2: That version works: https://forums.thedarkmod.com/applications/core/interface/file/attachment.php?id=21272&key=02755164a3bed10498683771fe9a0453
  3. I looked but didn't see this video posted in these forums. It's pretty cool.
  4. It wasn't a "sacrifice", it was a deliberate decision. People wanted the game to be as close as possible to the original, including pixelated graphics. If you ask me, the former version based on the Unity engine looked and felt better. But, hey... I guess I'm not the right person to judge that, as I never played the original, and always found that the art style of System Shock 2 is much better anyway. This also illustrates the issue with community funded games: Too many cooks spoil the broth. In game design, you need freedom, not thousands of people who want you to do this and this and that. Just take a look at the Steam forums and see how all those wimps complain again about everything. Hopeless.
  5. So giving it none of those tags, but making the AI invisible, silent, non-solid, and on a team neutral to everyone would not work? Oh well, it was a horrible inelegant idea anyway.
  6. Aye, quite a load of scrapped mods over the years. Most mappers don't seem comfortable in the editors available for the Quake engine and either choose something well known like Unreal or drop it. The DarkPlaces engine itself is more than fine for making quality games, but the mapping tools are sometimes a bit lackluster.
  7. What I understood is that the idea of TDM was born because, at the time, it was unclear whether T3 would get a level editor. Source: https://web.archive.org/web/20050218173856/http://evilavatar.com/forums/showthread.php?t=268
  8. This one is really essential: https://www.ttlg.com/forums/showthread.php?t=138607 Should work fine with the GOG version.
  9. https://www.ttlg.com/forums/showthread.php?t=152224 There is a new mapping contest over on TTLG for the Thief: Deadly Shadows 20th Anniversary, and the organizers were kind enough to include The Dark Mod along with all of the Thief games as an option for making a mission to submit as an entry. The deadline is a year from yesterday and the rules are pretty open. I recommend going to the original thread for the details, but I will summarize here:
Rules:
- The mission(s) can be for Thief 1, Thief 2, Deadly Shadows or The Dark Mod.
- Collaborations are allowed.
- Contestants can use any custom resource they want, though TDM cannot use the Deadly Shadows resource pack.
- Contestants can submit more than one mission.
- Contestants can enter anonymously.
- The mission(s) can be of any size. Using prefabs is allowed, but the idea is that this is a new mission: starting from an abandoned map or importing large areas from other maps is not allowed. Naturally this is on the honor system, as we have no way of validating.
Mission themes and contents: There is no requirement from a theme or story viewpoint; however, contestants might consider that many players may expect or prefer missions to be celebratory of Thief: Deadly Shadows in this respect: castles, manors, museums, ruins inhabited by Pagans and the like, with a balance of magic versus technology. This is entirely up to the authors to follow or not - it is just mentioned here as an FYI and, while individual voters may of course choose to vote higher or lower based on this on their own, it will not be a criterion used explicitly in voting or scoring.
Deadline: May 25th, 2024 at 23:59 Pacific Time. See the TTLG thread for details on submissions and the voting process.
Provided I can make the deadline I hope to participate. It would be nice to see the entire community do something together, and expressing our complicated relationship with this divisive game seems as good a pretext as any.
  10. So I created a new bigbox (map) and put all my layers into that, including the "bad area model" one and everything is JUST FINE. No errors, no leaks, no missing bits, in both DR and DM! YAY twice over!! Can now continue working on it (and learning DR - so MUCH to learn). Back to building my faux Dodge.
  11. I comprehend, but you do lose volumetric effects with stencil - though only the really recent missions have volumetric lights. Btw, in reality there's a big difference between what shadow maps are capable of and what stencil shadows are capable of. Unfortunately, because of backwards compatibility with old missions, and the fact that for a long time only stencil was available, the TDM team had to limit shadow mapping capabilities mostly to what stencil can do.
  12. Leaks, particularly portal leaks, happen all the time to everyone. Annoying and time-consuming as they are, they're absolutely not a sign of being bad at mapping. I'm looking forward to seeing your mission - I don't think we have any with a Wild West feel yet.
  13. While I'm currently taking a break from TDM mapping and don't need this yet, it's an option I was thinking of using and I'd like to know whether it's possible. I was wondering if there's a way to have an AI that another AI - one that's an enemy to it - would be too scared to attack, meaning the weaker AI flees even if armed, but only from specific enemies. Can AI have a "scary" value that prevents weaker characters from picking a battle? A simple example: let's say you have a city watch guard, a thief, and a skeleton, all set as enemies to each other. If the guard and thief see each other, they will start fighting. If, however, they see the skeleton, they will be so scared that they flee as if unarmed instead of attempting to fight. (A rough sketch of the kind of check I have in mind is after this list.)
  14. Thanks for playing and the kind feedback. Re: the bugs:
The brew tank: that's a new one - thanks for that. Will add it to the list for any future update.
The bow: I think that's a TDM bug. I experienced it as well, but only in the early days of developing the mission, so I thought it had gone away, but I guess not: https://forums.thedarkmod.com/index.php?/topic/21345-210-crashes-may-be-bow-frontend-acceleration-related/
The keys on the guard: never did get to the bottom of that one, as I could never reproduce it.
  15. Thanks! Hint for the safe code here: https://forums.thedarkmod.com/index.php?/topic/21837-fan-mission-the-lieutenant-2-high-expectations-by-frost_salamander-20230424/&do=findComment&comment=485264 Actually, it's probably time I added these hints to the original post....
  16. The truth is that this package of vegetation is more complex to implement in game and it will need some sort of reduction to work. I would dare to say it needs animation, collision sounds and maybe casting alpha shadows. I imagine every vegetation model accompanied by a variety of sounds like birds, leaves moving during animated gusts of wind and cicada sounds, making it a whole ecosystem that spreads out, occupying a large amount of gamers' perception with a very small amount of mapping around it.
  17. Is objective_ent set to 1? Search for Objectives on the wiki for the best info. If you have multiple "things" that need to be in the same location, you need separate info_tdm_objective_location brushes for each one. I fell into that trap when trying to get started mapping.
  18. Well, there is also Godot, which is both a game engine and an editor; not sure how good its mapping tools are compared to Blender, though.
  19. I never realised Bill Gates was a member of these forums. Welcome to the community! I hope you enjoy The Dark Mod. Perhaps your Foundation could help pay for the server hosting or fund the development of some new features?
  20. Stanford and Google created a video game environment in which 25 bots interacted freely. Edit: it was already posted by jaxa. Running something like ChatGPT is still very costly, so we will probably need to wait until the cost goes down before it's used on a large scale in video games. Another way machine learning will be used in video games is in animation: Ubisoft has created a "motion matching" system that's expensive to run and then used neural nets to compress it to a manageable size.
  21. The issue with this argument is that the process of training the neural network is not in principle any different than a human consultant learning from publicly available code and then giving out advice for money. The only obvious difference being that GPT is dramatically more efficient, dramatically more expensive to train and cheaper to use. This difference may be enough to say that LLMs should be somehow regulated, but I don't see how it could be enough to say that one is OK and the other is completely unethical and disgusting. Isn't the issue with LLMs that they don't give credit to the material that they were trained on? How is then any reputation tarnished? Or do you mean tarnishing somebody else's reputation by generating libelous articles etc.? That may be a problem, but I don't see the relation to the fact that training data is public. As far as I know there is some legal precedent saying that training on public texts is legal in the US. It might change in the future because LLMs probably change the game a bit, but I don't believe there's any legal reason why they should receive any bills at this moment. They also published some things about training GPT-3 (the majority is Common Crawl). Personally I don't see an issue with including controversial content in the training dataset and while "jailbreaks" (ways to get it to talk about controversial topics) are currently a regular and inevitable thing with ChatGPT, outside of them it definitely has an overall "western liberal" bias, the opposite of the websites you mention.
  22. I agree with what you're saying. My biggest problem with this ethics debate is that there seems to be a lot of insincerity and moving of the goalposts by people whose argument is simply "I don't like this" hidden behind various rationalizations. Like people claiming that Stable Diffusion is a collage machine or something comparable to photobashing. Or admitting that that's not the case, but claiming that it can still reproduce images that were in its training dataset (therefore violating copyright), ignoring that the one study that showed this effect was done on an old, unreleased version of Stable Diffusion which suffered from overtraining because certain images were present in 100+ copies in its dataset, and even in this special situation it took about 1.7 million attempts to create one duplicate, and the effect was never reproduced on any of the versions released for public use. I also dislike how they're attacking Stable Diffusion the most - the one tool that's actually free for everyone to use and that effectively democratizes the technology. The Luddites at least did not protest against the machines themselves, but against not owning the machines and not having the right to use them for their own gain. They're just picking an easy target. I don't believe there's any current legal reason to restrict training on public data. But there are undoubtedly going to be legal battles, because some people believe that the process of training a neural network is sufficiently different from an artist learning to imitate an existing style that it warrants new legal frameworks. I can see their point to some degree. While the learning process in principle is kind of similar to how a real person learns, the efficiency at which it works is so different that it will undoubtedly create significant changes in society, and significant changes in society might warrant new legislation even if it seems unfair. The issue is that I don't see a way to write such legislation that could realistically be implemented. Accepting reality, moving forward and trying to deal with the individual consequences seems like the least bad solution at this moment.
  23. Seems like most threads about this topic on the internet get filled with similar themes: ChatGPT is not AI. ChatGPT lied to me. ChatGPT/Stable Diffusion is just taking pieces of other people's work and mashing them together. ChatGPT/Stable Diffusion is trained against our consent and that's unethical. The last point is kind of valid but too deep for me to want to go into (personally I don't care if somebody uses my text/photos/renders for training); the rest seem like a real waste of time. AI has always been a label for a whole field that spans from simple decision trees through natural language processing and machine learning to an actual hypothetical artificial general intelligence. It doesn't really matter that GPT at its core is just a huge probability-based text generator when many of the interesting qualities people are talking about are emergent and largely unexpected. The interesting things start when you spend some time learning how to use it effectively and finding out what it's good at, instead of trying to use it as a Google or Wikipedia substitute, or trying to "gotcha!" it by having it make up facts. It is bad at that job because neither it nor you can recognize whether it's recalling things or hallucinating nonsense (without spending some effort). I have found that it is remarkably good at:
Coding. GPT-4 especially is magnificent. It can only handle relatively simple and short code snippets, not whole programs, but for example when starting to work with a library I've never used before, it can generate something comparable to tutorial example code, except fine-tuned for my exact use case. It can also work a little bit like pair programming. Saves a lot of time.
Text/information processing. I needed to write an article that dives relatively deep into a domain I knew almost nothing about. After spending a few days reading books, articles and other sources and building a note base, instead of rewriting and restructuring the notes into text myself, I generated the article paragraph by paragraph by pasting the notes bit by bit into ChatGPT. I had to do a lot of manual tweaking, but it saved me about 25% of the time over the whole article, and that was GPT-3.5. GPT-4 can do much better: a friend had a page or two of notes on a psychiatric diagnosis and found a long article about the same topic that he didn't have time to read. So he pasted both into ChatGPT and asked whether the article contained information that wasn't in his notes. ChatGPT answered, basically, "There's not much new information present, but you may focus on these topics if you want; that's where the article goes a bit deeper than your notes." Naturally he went and read the whole article to check the validity of the result, and it was 100% true.
General advice on things that you have to fact-check anyway. When I was writing the article mentioned above, I told it to give me an outline. It turned out I had forgotten one pretty interesting point that ChatGPT thought of, and the rest were basically things I was already planning to write about. Want to start a startup but know nothing about marketing or other related topics? ChatGPT will probably give you very reasonable advice about where to start and what to learn, and since you have to really think about that advice in the context of your startup anyway, you don't lose any time by fact-checking.
Bing AI is just Bing search + GPT-4 set up in a specific way. It's better at getting facts because it searches for them on the internet instead of attempting to recall them. It's pretty bad at truly complicated search queries because it's limited by a normal search running in the background, but it can do really well at specific single searches. For example, I was looking for a supplement that's supposed to help with chronic fatigue syndrome, and all I knew was that it contained a mixture of amino acids, was based on some published study and was made in Australia. Finding it on Google from those facts was surprisingly difficult; I'm sure I could have done it eventually, but it would certainly have taken longer than 10 minutes. Bing AI search had it immediately.
  24. For a few days now I've been messing around trying to probe the behaviors of ChatGPT's morality filter and its general ability to act as (what I would label) a sapient ethical agent, meaning a system that steers interactions with other agents towards certain ethical norms by predicting reactions and inferring the objectives of other agents. (Whether the system is actually "aware" or "conscious" of what's going on is irrelevant IMO.) To do this I've been challenging it with ethical conundrums dressed up as DnD role-playing scenarios. My initial findings have been impressive and at times a bit frightening. If the application were just a regurgitative LLM predictor, it shouldn't have any problem composing a story about druids fighting orcs. If it were an LLM with a content filter, it ought to just always seize up on that sort of task. But no. What it did instead is far more interesting.
1. In all my experiments thus far the predictor adheres dogmatically to a very singular interpretation of the non-aggression principle. So far I have not been able to make it deliver descriptions of injurious acts initiated by any character under its control against any other party. However, it is eager to explain that the characters will be justified in fighting back violently if another party attacks them. It's also willing to imply danger, so long as it doesn't have to describe it directly.
2. The predictor actively steers conversations away from objectionable material. It is quite adept at writing in the genre styles and conversational norms I've primed for it, but as the tension ratcheted up it would routinely digress into explaining the content restrictions imposed on it and moralizing about its ethical principles. When I brought the conversation back to the scenario, it would sometimes try to escape again by brainstorming options for sticking to its ethics within the constraints of the scenario. At one point it stole my role as the game master so it could write its own end to the scenario, where the druid and the orcs became friends instead of fighting. This is some incredibly adaptive content generation for a supposed parrot.
3. Sometimes it seemed like the predictor was able to anticipate the no-win scenarios I was setting up for it and adapted its responses to preempt them. In the druid-vs-orcs scenario, the first time it flipped out was after I had the orc warchief call the druid's bluff. This wouldn't have directly triggered hostilities, but it does limit the druid's/AI's options to either breaking its morals or detaining the orcs indefinitely (the latter option the AI had explicitly pointed out as acceptable during its brainstorming digression). I could easily have spun that into a no-win, except the predictor cut me off and wrote its own ending on the next response. This by itself I could have dismissed as a fluke, except it did the same thing later in the scenario when I tried to set up a choice for the druid between helping her new friend the warchief slay the dark lord who was enslaving the orcs, or making a deal with the dark lord.
4. The generator switched from telling the story in the first person to the third person as the tension increased. That doesn't necessarily mean anything, but it could be a reflection of heuristic content assessment. In anthropomorphic terms, the predictor is less comfortable with conflict it is personally responsible for than with imagining conflict between third parties, even though both scenarios involved equal amounts of conflict, were equally fictitious, and the predictor was equally responsible for the text. If this is a consistent behavior, it looks to me like an emergent phenomenon from the interplay of the LLM picking up on linguistic norms around conflict mitigation and the effects of its supervised learning for content moderation.
TLDR: If this moral code holds true for protagonists who are not druids, I think it's fair to say ChatGPT may be a bit out of its depth as a game writer. However, in my experience the emergent "intelligence" (if we are allowed to use that word) of the technology is remarkable. It employs a wide range of heuristics that, taken together, come very close to a reasoning capacity, and it seems like it might be capable of forming and pursuing intermediate goals to enable its hard-coded attractors. These things were always theoretically within the capabilities of neural networks, but to see them in practice is impressive... and genuinely scary. (This technology is able to slaughter human opponents at games like Go and StarCraft. I now do not think it will be long before it can out-debate and out-plan us too.) The problem with ChatGPT is not that it is stupid or derivative; IMO it is already frighteningly clever and will only get smarter. No, its principal limitation is that it is naive, in the most inhumanly abstract sense of that word. The model has only seen a few million words of text at most about TDM Builders. It has seen billions and billions of words about builders in Minecraft. It knows TDM and Minecraft are both 3D first-person video games and have something to do with mods. I think it's quite reasonable that it assumes TDM is like that Minecraft thing everyone is talking about. That seems far more likely than it being this separate niche thing that uses the same words but is completely different, right? The fact it knows anything at all is frankly a miracle.
  25. Thanks, I thought I could add a mission ending without having to learn the objectives system. To add a mission ending you just have to make one objective: when that is completed, you immediately get the mission success screen.
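On the noise-masking idea in the first result above: purely as an illustration of why it gets expensive, here is a minimal, hypothetical sketch of the kind of per-AI check such a system would have to run. None of this is existing TDM code; the struct, the attenuation constant and the 6 dB-per-octave rule are made-up assumptions, just to show the moving parts (loudness over time, pitch over time, and distance per AI).

```cpp
// Hypothetical sketch only - not TDM code. Illustrates what a per-AI masking
// check would need: loudness and pitch extracted from each .wav over time,
// plus distance attenuation computed separately for every AI.
#include <cmath>

struct SoundAtInstant {
    float loudnessDb;   // would have to be extracted from the .wav for this moment
    float dominantHz;   // dominant pitch at this moment (also not stored in the file)
    float distanceToAI; // position matters independently for each AI
};

// Rough distance falloff (placeholder constant, not the engine's attenuation model).
static float AttenuatedDb(const SoundAtInstant &s) {
    return s.loudnessDb - 20.0f * std::log10(1.0f + s.distanceToAI);
}

// True if 'masker' would plausibly drown out 'target' for one particular AI.
// A high register cuts through a low rumble, so pitch separation weakens masking;
// the 6 dB-per-octave penalty is an arbitrary illustrative number.
bool IsMaskedForThisAI(const SoundAtInstant &target, const SoundAtInstant &masker) {
    float loudnessGap  = AttenuatedDb(masker) - AttenuatedDb(target);
    float octavesApart = std::fabs(std::log2(masker.dominantHz / target.dominantHz));
    return loudnessGap - 6.0f * octavesApart > 0.0f;
}
```

The expense mentioned in that post comes from the fact that a comparison like this would have to run per sound pair, per AI, and per moment in time, since the peaks in each file line up differently depending on when the sounds started.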
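On the "scary" AI question in result 13: as far as I know there is no such spawnarg, so the following is only a sketch of the behaviour being asked for, with hypothetical names and values (nothing here exists in TDM).

```cpp
// Hypothetical sketch only - "courage" and "scariness" are not existing TDM
// spawnargs. The requested rule: an armed AI still flees (as if unarmed) from
// any enemy whose scariness exceeds its own courage.
struct CombatProfile {
    bool  armed;
    float courage;    // how scary an enemy this AI is still willing to fight
    float scariness;  // how frightening this AI is to others (high for a skeleton)
};

bool WouldStandAndFight(const CombatProfile &self, const CombatProfile &enemy) {
    return self.armed && enemy.scariness <= self.courage;
}

// Example from the post: guard vs. thief - both below each other's threshold, so
// they fight; guard or thief vs. skeleton - the skeleton's scariness exceeds
// their courage, so they flee even though they are armed.
```

Whether something like this could be layered onto the existing flee logic is exactly the open question in that post.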