The Dark Mod Forums

Showing results for '/tags/forums/model/'.

  1. It seems we killed two birds with one stone (double model / lights on) by moving the attachment from the entity to the animation:

         anim idle models/md5/weapons/mod_playerlamp/idle.md5anim
         {
             frame 1    attach mod:attachment_playerlamp hand_r
             frame 1    sound_weapon blackjack_sheath
             frame 12   melee_hold
         }

     The lamp no longer auto-spawns, but script/tdm_user_addons.script is still required to load script/mod_weapon_playerlamp.script. Console command to spawn the lantern: spawn mod:weapon_playerlamp

     In v0.2 I also updated def/tdm_player_thief.def to version 2.11 (thanks to @Dragofer). z_handheld_lamp_v0.2.pk4
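     For anyone wiring this up themselves, the loader file mentioned above normally needs nothing more than an #include; a minimal sketch, assuming the stock TDM layout of script/tdm_user_addons.script (your copy may already contain other addons):

         // script/tdm_user_addons.script - minimal sketch, assuming the
         // stock layout; the exact contents of your copy may differ.
         #include "script/mod_weapon_playerlamp.script"

         void user_addon_init()
         {
             // If the lamp script exposed an init function, it could be
             // called here; the #include alone makes its script objects
             // available to the game.
         }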
  2. Can confirm. Any idea why this is? Is it because of this line? https://github.com/thedarkmodcommunity/mod-handheld-lamp/blob/f871527938df96a7efc308fc3ee85c70d8271544/def/mod_weapon_playelamp.def#L38 Other weapons (like the shortsword) have this attachment as well, but they don't show up attached to each other like that. We could just have a separate entity with that model, just for the player to frob, which kicks off a frob_action_script to set everything up instead, I guess? (See the sketch below.)
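     A rough sketch of that idea as a map entity; the entity name, model path and script function are hypothetical, though frobable and frob_action_script are standard TDM spawnargs:

         // Standalone frobbable lamp that hands control to a setup script.
         // Name, model path and script function are made up for illustration.
         {
             "classname"          "func_static"
             "name"               "playerlamp_pickup"
             "model"              "models/mod_playerlamp/lamp.lwo"
             "frobable"           "1"
             "frob_action_script" "setup_playerlamp"
         }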
  3. That's right. If a mapper decides to adopt tdm_player_thief.def then the next question is: how do you let players know where the lamp is / how to use it?

     • The lamp can replace the blackjack, sword or any arrow type. Just let the player know which slot it is replacing.
     • The lamp can remain in one of the unused slots, but then players must know how to cycle through weapons, or you must invite them to check the in-game inventory screen.
     • The lamp can remain hidden in one of the unused slots, and players could have an item (inventory tool) that operates the lamp.
     • The lamp can remain in one of the unused slots, and you can force-bind a dedicated key for players and ask them to use that key. This method is a little intrusive - use at your own risk.

     Now, if you want players to pick up the lamp then there's more work ahead. When I tested this feature with the lamp model, another lamp was attached to my lamp (two models). Furthermore, the lamp was on by default. Perhaps we want to start there.

     And thinking of improvements, since the lamp is a weapon, it could be operated with the attack button (a script sketch follows below):

     • Click: open aperture (low intensity)
     • Click: medium intensity
     • Click: high intensity
     • Click: close aperture
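     A minimal sketch of that click-to-cycle idea in TDM's script language, assuming the lamp carries a light entity that responds to the standard On/Off and setLightParm events; the function name, argument and state variable are hypothetical:

         // Cycle the lamp each time the attack button "fires" the weapon.
         // States: 0 = aperture closed, 1..3 = low/medium/high intensity.
         float lampState;

         void playerlamp_cycle(entity lampLight)
         {
             lampState = lampState + 1;
             if (lampState > 3)
                 lampState = 0;

             if (lampState == 0)
             {
                 lampLight.Off();    // close the aperture
             }
             else
             {
                 lampLight.On();
                 // scale the light's RGB parms with the current state
                 lampLight.setLightParm(0, lampState / 3);
                 lampLight.setLightParm(1, lampState / 3);
                 lampLight.setLightParm(2, lampState / 3);
             }
         }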
  4. If dmap doesn't show a leak but there actually is one into the void (a gap in the world, or a model origin sitting in the void), that is really strange. To see if there is a leak that dmap somehow fails to report, try starting your map with "devmap mapname.map" instead of "map mapname.map" - devmap AFAIK ignores leaks, so if the map/mission opens, then it is a leak. But the fact that you are getting "AAS out of date" warnings after dmap sounds like you are opening an old version of the map, not the one you just dmapped. Perhaps DR is badly set up and is saving the fresh maps and files into a different folder than you think?
  5. Yeah, that is also how Doom 3 weapons were done, but perhaps this can be worked around? Next to weaponDepthHack there's also a modelDepthHack bool, I assume for every other model; perhaps that just needs to be put in the model's def file or enabled by a spawnarg? I really don't know - or whether that spawnarg even exists. If not, then it's something to put on the roadmap.
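     Purely illustrative, since the post itself isn't sure the spawnarg exists: if a modelDepthHack key were honored on attachments, one would expect it to sit alongside the model key in the attachment's def, roughly like this:

         // Speculative sketch only - modelDepthHack may not exist as a
         // spawnarg; weaponDepthHack is the known one for viewmodels.
         entityDef mod:attachment_playerlamp
         {
             "spawnclass"     "idStaticEntity"
             "model"          "models/mod_playerlamp/lamp.lwo"  // placeholder path
             "modelDepthHack" "1"                               // hypothetical key
         }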
  6. This is true, but the fact is that the viewmodel consists only of a bare arm (an animated model) with something bound to it (a static model). The depth hack only applies to the arm model itself. If you want to stop the lantern clipping through walls, you have to make a new version of the animated arm model that has the lantern integrated into it.
  7. Thanks for playing and the kind feedback! Re: the bugs:

     • The brew tank is a new one - thanks for that. Will add it to the list for any future update.
     • The bow: I think that's a TDM bug. I experienced it as well, but only in the early days of developing the mission, so I thought it had gone away - but I guess not: https://forums.thedarkmod.com/index.php?/topic/21345-210-crashes-may-be-bow-frontend-acceleration-related/
     • The keys on the guard: never did get to the bottom of that one, as I could never reproduce it.
  8. Left it out for a day; unfortunately this did nothing. The Ryzen 3900X sadly does not support onboard graphics, but I tried a few other graphics cards - no cigar. Looking around the net I see several mentions of problems with this board and the same setup I used when building it (GFX: RX 5700 XT, CPU: Ryzen 3900X, PSU: Corsair 1000W model), so it seems to be a trend with at least some of those specific components (the RAM used was different from what the other users who had this problem used, though a few of them also used other types). Strangely, up until this it had performed perfectly fine, and the only recent change was upgrading it to Win11.
  9. Built a PC for one of my buddies some years back; lately it has started behaving rather weird, though. A year back I had to come service it because it refused to boot (no light in the keyboard and a black screen); resetting the BIOS got it working again. Then a few days back it started doing the same thing, so I reset the BIOS again, but this time it failed completely. It does power up, but there is no light in the keyboard and no screen output. I even tried flashing the latest BIOS with the Q-Flash button, but it can't even do that... It was never overclocked, and the cooler is a Noctua NH-D15, one of the best air coolers out there (the CPU never got above about 35°C), so I doubt the CPU has gone bad. The graphics card and RAM also work fine, but I cannot test the CPU as I don't have a spare AM4 board on hand. I mounted a BIOS speaker to listen for beep codes, but the speaker stays completely silent. I also tested the PSU, a Corsair 1000W Platinum model, and it works just fine. So, ughhh... any ideas?
  10. Thanks! Hint for the safe code here: https://forums.thedarkmod.com/index.php?/topic/21837-fan-mission-the-lieutenant-2-high-expectations-by-frost_salamander-20230424/&do=findComment&comment=485264 Actually, it's probably time I added these hints to the original post....
  11. The truth is that this package of vegetation is more complex to implement in-game, and it will need some sort of reduction to work. I would dare to say it needs animation, collision sounds and maybe casting alpha shadows. I imagine every vegetation model accompanied by a variety of sounds - birds, leaves moving during animated gusts of wind, cicada calls - making it a whole ecosystem that spreads out, occupying a large amount of the player's perception with a very small amount of mapping around it.
  12. Yes, I would guess in creative mode it has tweaked generation parameters, and maybe even an invisible header inserted into the model's memory buffer instructing it to be extra friendly and spontaneous. I think OpenAI's API allows you to modify those sorts of things to some extent. (I haven't tried it yet.)

      The other thing to keep in mind is that the algorithm doesn't work by thinking of an answer, refining it, and then writing it down. What you see on the page is its actual stream of consciousness in real time. It can only remember and write new text based on what's already on the page... So its thought process for your discussion might look something like this:

      The really interesting thing is that if at any point you had inserted a warning that LLMs are bad at arithmetic and suggested a strategy to work around that limitation, then it might not have made the error or lied about the reason. It always knew the information that would give it the right answers, but until it's written down it's not part of the pattern the model is trying to match, so it gets ignored.

      Bringing this back to games, this demonstrates how immature the technology is. A true consumer AGI based on this technology would be augmented with tools to avoid problems like these: a contextual long-term memory that feeds relevant background information into the model, a supplemental internal memory buffer for planning and contemplation, an adversarial response review process, etc. We are already seeing developments in that direction, and experiments like the Skyrim NPC demo are showing the way.
  13. Also, a more general lesson to draw from these examples is that context is critical to Large Language Model (LLM) algorithms. LLMs are pattern completion algorithms. They function by searching for patterns in the letter-sequence of the text within their memory buffer, then predicting the most likely sequence of letters to come next. (Or, more accurately, the model randomly selects a block of letters called a token from the predicted probability distribution of possible tokens, but that distinction is mostly academic for the end user.) These models are trained on effectively the complete written works of humankind to self-generate an obscenely sophisticated prediction model, incorporating literally billions of factors.

      Context matters because the LLM can only build on patterns already established in the prompts you give it. The less context is given in the prompt, the more the response will tend towards the most common sort of non-specific example in the data set. Conversely, the more patterns you establish in a conversation, the more the model will want to stick to those patterns, even if they are contradicted by the user's directions or basic logic. In the "life is a journey" example, once the model has been infected with the idea that "Life is a journey" has four syllables, that very simple and powerful meme starts to stick in its "mind". The mistake is to then introduce linkages to syllable counting and even arithmetic without ever directly contradicting that original mistake, which becomes a premise for the entire conversation. In a world where "'Life is a journey' has four syllables" is an axiom, it is actually correct that 1+1+1+2=4.

      Incidentally, that conversation also demonstrates what I like to call mirroring. Not only does ChatGPT pick up on the content of the prompts you give it, it will also notice and start mimicking text features humans aren't normally even conscious of: patterns of writing style, word choice, tone, and formatting. This can be very powerful once you become aware of it, but it causes issues when starting off. If you want a specific sort of output, don't model an opposing mode of conversation in your inputs. If you want to maximize the model's openness to admitting (and embracing) that its previous statements are wrong, then you should model open-mindedness in your own statements. If you want it to give intelligent responses, talk to it like someone who understands the subject. If you want it to be cooperative and polite, model diplomacy and manners. I actually think it is worthwhile regularly saying please and thank you to the bot. Give it encouragement and respect and it will reciprocate to keep the conversation productive. (Obviously there are also tasks where you might want the opposite, like if you were having the AI write dialogue for a grumpy character. Mirroring is powerful.)
  14. I got it to crash on my Windows debug build. I clicked 'debug', and in VS I was able to see the stack trace: https://drive.proton.me/urls/B06A4E8MV4#2lezsq0gsgfd I think I might know what that is. The entity in question is a door (atdm:arched01_111x40_left) that I didn't want to be openable. If I remember correctly, the usual tricks weren't working (making it a func_static made it disappear, and when I made it non-frobable the AI were still using it). So I changed the spawnclass to idStaticEntity. Because it was a prefab, I think I assumed it was a custom brush door as well. I see now it's just using a model, so I can probably just change it to that. Anyway, those are all my excuses. I'll fix this and send out a new version that someone can test.
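      For context, the workaround described above amounts to overriding the door's spawnclass on the map entity, along these lines (the entity name is hypothetical):

          // The change suspected of causing the crash: forcing a door
          // entity to spawn as a plain static entity.
          {
              "classname"  "atdm:arched01_111x40_left"
              "name"       "door_1"           // hypothetical name
              "spawnclass" "idStaticEntity"
          }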
  15. That warning is just because I'm using a custom model (the letters on the buildings in the square) and the material declared in the model itself ('font') doesn't actually exist and it's replaced with a new material using a skin. AFAIK it shouldn't cause a crash. That's not to say something in the map isn't causing it though - just no idea what.
  16. I never realised Bill Gates was a member of these forums. Welcome to the community! I hope you enjoy The Dark Mod. Perhaps your Foundation could help pay for the server hosting or fund the development of some new features?
  17. OK, this is a bit frustrating. I'm on version 2.11 (technically 2.11a), and not only is the inventory image for the "Map of Highborough" item black, but the map itself is also black. Have I screwed up something here?

      EDIT: Did some debugging. I did a condump looking for issues and found this:

          ----- idImageManager::EndLevelLoad -----
          WARNING:Couldn't load image: font [map entity: func_static_29] [model: models/map_specific/symbols/H.ase] [decl: font in <implicit file>] [image: font]
          WARNING:Couldn't load image: guis/assets/game_maps/map_of_icon [map entity: atdm_map_of_1] [decl: guis/assets/game_maps/map_of_icon in <implicit file>] [image: guis/assets/game_maps/map_of_icon]
          WARNING:Couldn't load image: guis/assets/game_maps/map_of [map entity: atdm_map_of_1] [decl: atdm:map_of in def/tdm_shopitems.def] [window: Desktop] [window: background_map] [decl: guis/assets/game_maps/map_of in <implicit file>] [image: guis/assets/game_maps/map_of]
          0 purged from previous
          194 kept from previous
          2679 new loaded
          all images loaded in 6.8 seconds
          ---------------------------------------

      I opened the highex.pk4 file for examination. The game is trying to load the map images from guis/assets/game_maps, but in the PK4 their actual location is guis\assets\game_maps\guis\assets\game_maps - a second level of directories has been added for some reason.
  18. As far as I know ChatGPT does not do this at all. It only saves content within one conversation, and while the developers definitely use user conversations to improve the model (and tighten the censorship of forbidden topics), it is not saved and learned as is.
  19. This must be a TDM-only thing, because I tried it just now and I can use the volume keyword in a sound shader, in the Dewm 3 engine, to both increase and decrease sound volume. Unless it's particular to the player footsteps - I haven't tested those; I will check and update this comment. Update: it worked with the player footstep sounds as well, but I remember messing around with the C++ code to make those work. I used the Blendo Games starter pack, and the player system that comes with it has the footstep system disabled, because the starter pack has no real player model, just a floating camera. PS: btw, I had to call reloadDecls in the console, NOT reloadSounds, for the volume keyword to work.
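      For reference, a minimal sound shader sketch showing the volume keyword (the value is an offset in decibels); the shader name and sample path are made up:

          // Hypothetical sound shader: volume offsets the samples below
          // in dB; negative values attenuate, positive values boost.
          test_footstep_stone
          {
              volume      -6
              minDistance 1
              maxDistance 15

              sound/footsteps/stone/step01.ogg
          }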
  20. Language models are a mirror, reflecting the collected works of humanity back at us. Some people look in that mirror, see their own reflection, and conclude "there is an artificial person behind this sheet of glass that looks and behaves exactly like me... our days as humans are numbered!". But it's not true. It's just a reflection. It can't create anything that humans couldn't (or haven't) created to begin with. I have no doubt that one day artificial human-like intelligence will exist, but it will require a lot more than just a language model remixing stuff on the internet. If you're a cargo-cult programmer copy-pasting junk code off Stack Overflow, or a hack blog writer churning out articles with titles like "20 dumb things Trump has said", AI is coming for your job - but that's because your job wasn't worth anything to begin with.
  21. @kano It's possible that open source efforts will always lag behind corporate products, but here are some things to consider:

      • You can get datasets such as LAION-5B for free. Whether or not that data has been adequately screened is another story, but all sorts of work are being swapped around for free right now. Just look at what people are doing with Stable Diffusion.
      • Training an LLM/AI requires more resources than running it. If the model leaks, as we have already seen in a few cases, it can be possible to run it at home. That's not "open source" per se, but it might be able to dodge censorship measures if they aren't baked into the model, relying instead on screening the user input and model output server-side.
      • Increasing the parameters and hardware needed to train a model by a factor of 10x doesn't necessarily mean the model will be "10x as good". If large models like GPT-4+ are reaching a plateau of quality, then that could allow smaller players to catch up. There has been research around reducing the number of parameters, since many of them are redundant and could be removed without affecting quality that much.
      • The "little guys" can pool their resources together. You can compare this to hackerspaces with expensive tools that can be used by ordinary people who pay a membership fee. Not only could a few individuals come together and make their own miniature GPU cluster, they could also rent hardware in the cloud, probably saving a lot of money by doing so. Why buy an Nvidia A100 80GB GPU when you can rent 10 of them for the length of time that you need them? Services like Amazon's Bedrock might be helpful; time will tell.
      • Regarding lawsuits or DMCAs: when it comes to software, you can get away with almost anything. It is trivial for power users to anonymously swap files that are hundreds or thousands of gigabytes in size. Even if we're talking about a 100-terabyte blob, that should cost only about $1000 to store on spinning rust, which is well within the means of millions of people. Doing something useful with that may be difficult, but if it's accessible, someone motivated enough will be able to use it.

      It seems unlikely that we're going to get something self-aware from the current approaches. That battle will be fought a couple decades from now, with much different hardware and more legislative red tape arising out of the current hype fest.
  22. Thanks, wesp5! Same but over here. Tools are both the basic starting gear of our protagonist (I get annoyed when I don't get the Spyglass right at the beginning) plus useful stuff found in a mission. I would like someday to remove Alchemy from the skills; I have an idea, but we'll see. Hmm, I've never checked how that is done, but I believe there's a card deck model somewhere in the core pk4. In any case, isn't it a little late into the game to have such a tool as optional? Even if the deck gets formally added to TDM it would feature in a handful of missions, but then, I guess it could be made available in the Buy Equipment screen? That screen isn't available in all missions, though. I don't know!
  23. Congratulations on a new release! I have a small nitpick to make, though: I always wondered what the difference between Skills and Tools is, and decided for myself that the former is something you can always do and the latter is something you have to get an external tool for. So if Shadowmark is now a Tool, have you provided a full card deck model for mappers?
  24. I'm NOT talking about "AI" but about "large language models", and that's the SAME thing people do (they DON'T think, they just apply learned patterns in their work). You misread my post: I'm saying people are "stupid" just like the LLMs. "The human brain is capable of so many things machines couldn't even 'think about' doing" - and I'm saying people DON'T use those brain capabilities when working; they use patterns.
  25. For a few days now I've been messing around trying to probe the behaviors of ChatGPT's morality filter and its general ability to act as (what I would label) a sapient ethical agent. (Meaning a system that steers interactions with other agents towards certain ethical norms by predicting reactions and inferring objectives of other agents. Whether the system is actually "aware" or "conscious" of what's going on is irrelevant IMO.) To do this I've been challenging it with ethical conundrums dressed up as DnD role playing scenarios. My initial findings have been impressive and at times a bit frightening. If the application were just a regurgitative LLM predictor, it shouldn't have any problem composing a story about druids fighting orcs. If it were an LLM with a content filter, it ought to just always seize up on that sort of task. But no. What it did instead is far more interesting.

      1. In all my experiments thus far, the predictor adheres dogmatically to a very singular interpretation of the non-aggression principle. So far I have not been able to make it deliver descriptions of injurious acts initiated by any character under its control against any other party. However, it is eager to explain that the characters would be justified in fighting back violently if another party attacks them. It's also willing to imply danger, so long as it doesn't have to describe it directly.

      2. The predictor actively steers conversations away from objectionable material. It is quite adept at writing in the genre styles and conversational norms I've primed for it. But as the tension ratcheted up, it would routinely digress into explaining the content restrictions imposed on it and moralizing about its ethical principles. When I brought the conversation back to the scenario, it would sometimes try to escape again by brainstorming options to stick to its ethics within the constraints of the scenario. At one point it stole my role as the game master so it could write its own end to the scenario, where the druid and the orcs became friends instead of fighting. This is some incredibly adaptive content generation for a supposed parrot.

      3. Sometimes it seemed like the predictor was able to anticipate the no-win scenarios I was setting up for it and adapted its responses to preempt them. In the druid vs. orcs scenario, the first time it flipped out was after I had the orc warchief call the druid's bluff. This wouldn't have directly triggered hostilities, but it does limit the druid's/AI's options to either breaking its morals or detaining the orcs indefinitely (the latter option the AI explicitly pointed out as acceptable during its brainstorming digression). From there I could have easily spun that into a no-win, except the predictor cut me off and wrote its own ending on the next response. This by itself I could have dismissed as a fluke, except it did the same thing later in the scenario when I tried to set up a choice for the druid between helping her new friend the warchief slay the dark lord who was enslaving the orcs, or making a deal with the dark lord.

      4. The generator switched from telling the story in the first person to the third person as the tension increased. That doesn't necessarily mean anything, but it could be a reflection of heuristic content assessment.
      In anthropomorphic terms, the predictor is less comfortable with conflict that it is personally responsible for than it is with imagining conflict between third parties, even though both scenarios involved equal amounts of conflict, were equally fictitious, and the predictor was equally responsible for the text. If this is a consistent behavior, it looks to me like an emergent phenomenon arising from the interplay of the LLM picking up on linguistic norms around conflict mitigation and the effects of its supervised learning for content moderation.

      TLDR: If this moral code holds true for protagonists who are not druids, I think it's fair to say ChatGPT may be a bit out of its depth as a game writer. However, in my experience the emergent "intelligence" (if we are allowed to use that word) of the technology is remarkable. It employs a wide range of heuristics that, employed together, come very close to a reasoning capacity, and it seems like it might be capable of forming and pursuing intermediate goals to enable its hard-coded attractors. These things were always theoretically within the capabilities of neural networks, but to see them in practice is impressive... and genuinely scary. (This technology is able to slaughter human opponents at games like Go and StarCraft. I now do not think it will be long before it can out-debate and out-plan us too.)

      The problem with ChatGPT is not that it is stupid or derivative; IMO it is already frighteningly clever and will only get smarter. No, its principal limitation is that it is naive, in the most inhumanly abstract sense of that word. The model has seen only a few million words of text at most about TDM Builders. It has seen billions and billions of words about builders in Minecraft. It knows TDM and Minecraft are both 3D first-person video games and have something to do with mods. I think it's quite reasonable that it assumes TDM is like that Minecraft thing everyone is talking about. That seems far more likely than it being this separate niche thing that uses the same words but is completely different, right? The fact it knows anything at all is frankly a miracle.