The Dark Mod Forums

Search results for tag 'model'.
  1. In my mission, I have a round door that rotates AND translates. I started this 4 years ago before I got cancer, and back then the door worked. Not anymore! I finally got a ROUND to testing this. It doesn't matter what type of door it is: whether the door is a round brush OR a round model, in game the door is non-solid and non-frobbable. I can, however, trigger it from the console and it works as it should. I'm going to try making a solid (rectangular) door in front of it as an overlay and use a script to achieve my goal. Will let you know how it goes. Update: Works great. I did a screen grab of the circular door + wall at the size of the rectangular door, created a DDS image and material file, and textured the rectangular door with it so you can't see the difference in game. Frob the rectangle and it vanishes while the circular one rolls out of the way. YAY
  2. First of all, ChatGPT, regardless of version, is a language model built to interact with the user by imitating intelligence. Its knowledge base dates back to 2021, plus whatever users contribute in their chats. This means, first of all, that it is not reliable if you are looking for correct answers: if it does not find the answer in its base, it tends to invent one, with approximations or with outright false or obsolete answers. The future will not change with ChatGPT; it will come from AI of a different nature. On one hand, search engines with AI, since they have access to information in real time without needing such complex language models; for this reason search engines are gradually adding AI, not only Bing or Google, but before those there were Andisearch (the first of them all), Perplexity.ai, Phind.com, and You.com. Soon there will also be DuckDuckGo AI. On the other hand, generative AI to create images, videos, and even applications, music, and more, like game assets or 3D models. The risk with AI came up with Auto-GPT, initially a tool that seemed useful but can be highly dangerous: on one hand it has full access to the network, and on the other it is capable of learning on its own initiative to carry out tasks that are entered as simply as in a Text2Image app. This was demonstrated with ChaosGPT, the result of an order entered into Auto-GPT to destroy humanity, which it immediately began to pursue with extraordinary efficiency: first trying to access nuclear missile silos and, luckily, failing; then trying to gain followers on Twitter with a fake account it created, where it got more than 6,000 followers, later hiding once it realized it could be blocked or deactivated on the network. Currently nothing is known about it, but it is still a danger that can't exactly be ruled out; it could really become Skynet.
AI is going to change the future, but not ChatGPT, which is nothing more than a nice toy.
  3. The new update sounds very exciting! Already got it and will probably play a new FM again soon. I wasn't sure if lights make full use of portals in order to ditch even more calculations; awesome to hear that's now on the list too. Reminds me of something I asked a while ago that I don't think anyone could answer with certainty: when using a targeted light / spotlight, does it improve performance compared to omni lights? I wasn't sure if the engine knows to calculate only inside the cone they're pointing at, or if behind the curtains it still treats them as 360° projections and just visually makes them a cone. If there's an improvement, I was wondering if hooded lights could be made like that by default. Visually there should be no difference: I experimented with the outdoor lamp once, and by default it still shines in all directions... you don't see anything above anyway since the model self-shadows, so we could likely get it looking almost identical with a wide cone. Just thought it might be a good idea to bring this up now that I remembered. For now, a nice new optimization to enjoy
  4. FEATURE REQUESTS: 1. Array tool - to duplicate a selected model, entity, or brush via the UI on XYZ with spacing parameters (kinda like Blender's array modifier). 2. One-click surface/material copy to either a face or an entire brush. Currently I have to set up one face using the Surface dialog and then copy/paste it (using a hotkey combo) to the desired faces (for which I still have to deselect the selected face and then select the new one). A very, very tedious process. It would be a lot smoother if the copied surface parms (material, tiling, etc.) could be applied in one click in the 3D view. 3. Ability to set tiling on an entire brush numerically. Currently the numerical entry fields are grayed out in the Surface UI when a whole brush is selected. Thanks beforehand
  5. I would not yet regard GPT as general intelligence, but it certainly has aspects of it, and more will emerge in this model or the next, I'm sure. Exciting and scary times! Meanwhile, another idea occurred to me regarding continuity. I'm going to do a test where I create a conversation with just my fiction rules and possibly the plot summary, etc., then share it publicly. I can then start a new conversation for the story proper and refer it to that shared URL every few messages to refresh its memory of the core essentials. I'm not sure the plot is essential, actually, because even if it deviates from the original plot idea, it could still work. What needs to be constant is the rules, and significant story progress (e.g., Mr Johnson died in Chapter 7, so no, he can't be baking bread in Chapter 12!)
  6. I know what you mean. The things the algorithm can do once it's warmed up are astounding, and the endless list of applications to try out is mesmeric. I overdid it early on and actually gave myself a bit of tendonitis from spending every spare waking moment experimenting with it. I'm trying to pace myself better now. But I'm right there with you as regards the philosophical implications. GPT-4 has some legit weaknesses as a logic engine, but its abilities of inference and deduction are no joke, even when you strip away its overwhelming advantage of knowing everything humanity has ever uploaded to the internet pre-September 2021. It can see conceptual connections that most people would not pick up on, and it can act on them. That sounds to me like general intelligence; and it's already near or exceeding typical human level! Without trying to sound alarmist, this is not something this type of model should be able to do based on the training data available to it. There are no examples for these sorts of highly specific original deductions for it to regurgitate. The general intelligence is some sort of new emergent phenomenon, and it's got quite a lot of people in the machine learning research community equal parts excited and spooked. I don't see any new comments on either the public link or my private copy of the conversation. Maybe continuing just makes a new instance for that user?
  7. Cheers. Initially I was thinking of this for lights... later I thought to include animated models too, but mesh deformation isn't that expensive, so I can see why there's little benefit. Especially as I realized per-pixel lighting would still be recalculated each frame, and specularity also depends on camera angle, not just model movement... technically we could frameskip that too, but I'm getting way ahead of myself for what would likely be a tiny benefit. Could this still work for lights though? Recalculating shadows when something moves in the radius of a light is a big cost, even if it's gotten much better with the latest changes. A shadow recalculation LOD may give a nice boost. We could test the benefit with an even simpler change: a setting to cap all shadow updates to a fixed FPS. This would probably be a few lines of code, so if you can give me a pointer I may be able to modify my local engine clone to try it. If it offers a benefit it can be made distance-based later. Another way would be to make a light's number of samples slowly decrease with distance, the furthest lights dropping to just one sample like sharp / stencil shadows: shadow samples also have a big impact. What do you think of this solution as a form of light LOD, maybe mixed with just a shadow update LOD? These actually sound like they make sense; if you think it's worth it I can post those two on the tracker so they're not forgotten.
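The fixed-FPS shadow cap suggested above could be as simple as an accumulator checked once per frame. Here is a minimal Python sketch of the idea; `ShadowUpdateLimiter` and its methods are illustrative names, not actual TDM engine symbols:

```python
# Hypothetical sketch of the "cap all shadow updates to a fixed FPS"
# idea. Nothing here is real engine code.

class ShadowUpdateLimiter:
    def __init__(self, max_updates_per_second):
        self.interval = 1.0 / max_updates_per_second
        self.accumulator = 0.0

    def should_update(self, frame_delta):
        """Called once per rendered frame with the frame time in
        seconds; returns True when a shadow recalculation may run."""
        self.accumulator += frame_delta
        if self.accumulator >= self.interval:
            # Keep the remainder so updates stay evenly spaced.
            self.accumulator -= self.interval
            return True
        return False

# Rendering at 60 FPS with a 30 Hz cap: shadows recalculate on
# every other frame, halving the recalculation cost.
limiter = ShadowUpdateLimiter(30)
updates = sum(limiter.should_update(1.0 / 60) for _ in range(60))
```

The remainder-keeping subtraction (rather than resetting the accumulator to zero) is what keeps the capped updates evenly distributed instead of drifting.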
  8. Ooh! We should compare notes in a few weeks. I've been trying for a while now to find tricks for re-establishing continuity between conversations. I've had some success, but nothing yet I would call satisfactory. For instance, with the Adventures of Thrumm RP game, I had to start a new session because the ChatGPT client was taking on the order of 20s per token to generate its responses at the end and was crashing every 2-3 minutes. I felt like I successfully got it back into the character and story for the new session, but it took something like 2 pages of text and over 40 minutes of work on my part. Judge for yourself how well I did: https://chat.openai.com/share/f14f77f7-2b49-497a-990a-b8ee6f405fb1 I'm envisioning an ultimate solution in the form of AI "personas" with associated memories and biographical information in a searchable database, which the chatbot can interact with through an API based on some minimal leading prompts. Unfortunately that is still a bit beyond my depth as an engineer and AI whisperer... but I am making slow progress. Thanks! You are correct that these were each one continuous conversation (minus a few false-start branches where I submitted incomplete prompts by mistake or tried things that didn't work). I probably would not recommend going that long again. I really only did it in those examples as an experiment to see what would happen. I'd say beyond about 8 rounds of lengthy prompt-response, the model's amnesia problem completely erases any benefit it gets from the extra context of the longer conversation. Plus, in long conversations it sometimes develops pathologies like linguistic tics or personality quirks. Starting new conversations periodically is a pain, but probably still best practice. It's a new feature! This is actually the first time I've used it, so I'm not 100% clear how it works when you send it to someone with their own account.
The controls are on the left next to the chat session title in the chat list: the icons from left to right are to change the conversation title, share the conversation, and delete the conversation. If you'd like to try adding to another person's thread, here's a false start of mine you could try it on. I'll tell you if it works. (Turns out ChatGPT is chronically bad at anagrams, so vandalize away.) https://chat.openai.com/share/8d7227ab-3905-4bf1-82a3-12be4899d48f
  9. A few additions and observations: We may get even better results using not just distance but also the entity's size, given the rate should probably depend on how much of your view the entity is covering at that moment. As this shouldn't need much accuracy, we can just throw in the average bounding-box size as an offset to distance to estimate the entity's total screen space. A small candle can decrease its update rate even closer to the camera, while a larger torch will retain a slightly higher rate for longer. To prevent noticeable sudden changes, the way LOD models can be seen snapping between states, the effect can be applied gradually without artificial steps, given it's just a number and may take any value. It might be best to have a multiplier acting on top of the player's maximum or average FPS: if your top is 60 FPS, the lowest update rate beyond the maximum distance would be 30 FPS for a 0.5 minimum setting... along the way one entity may be at 0.9, meaning it ticks at 54 FPS, a further one at 0.75, meaning 45 FPS, etc. Internally there should probably be different settings for model animations and lights: a low FPS may be obvious on AI or moving objects, so you probably don't want to go lower than half the max (eg: 30 FPS for 60 Hz)... for lights the effect can be more aggressive on soft shadows without noticeable ugliness (eg: 15 FPS for 60 Hz). In the menu this can probably be tied to the existing LOD option, which can control both model and frameskip LODs.
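The size-offset, smoothly interpolated multiplier described above can be sketched as one small function. This is a purely illustrative Python sketch; the parameter names and values are assumptions for demonstration, not engine code:

```python
# Illustrative sketch of a size-adjusted update-rate multiplier.
# All names and numbers are assumptions, not TDM engine symbols.

def update_multiplier(distance, bbox_size, start_dist, end_dist, min_mult):
    """Return a per-entity update-rate multiplier in [min_mult, 1.0].

    Larger entities keep a higher rate for longer, because their
    average bounding-box size is subtracted from the distance."""
    effective = distance - bbox_size
    if effective <= start_dist:
        return 1.0
    if effective >= end_dist:
        return min_mult
    # Linear, continuous falloff between the two distances, so the
    # rate never visibly "snaps" the way discrete LOD models do.
    t = (effective - start_dist) / (end_dist - start_dist)
    return 1.0 - t * (1.0 - min_mult)

# With a 60 FPS cap and a 0.5 minimum: an entity halfway through the
# falloff band gets a 0.75 multiplier, i.e. 60 * 0.75 = 45 FPS.
```

Because the output is continuous, a large torch (big `bbox_size`) naturally holds a higher rate than a small candle at the same distance, exactly as proposed.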
  10. I hope I'm not proposing some unfeasible idea that was already imagined before; this stuff is fun to discuss, so no loss either way. Riding the wave of recent optimizations, I keep thinking about what more could be done to reach a round 144 FPS compatible with today's monitors. An intriguing optimization came to mind which I felt I had to share: could we gain something if we had distance-based LOD for entity updates, encompassing everything visual from models to lights? How it would work: new settings allow you to set a start distance, end distance, and minimum rate. The further away an entity gets, the lower its individual update rate, slowly decreasing from updating each frame (start distance and closer) to updating at the minimum rate (end distance and further). This means any visual change is performed with frame skips on any entity: for models such as characters, animations are updated at the lower rate; for lights it means shadows are recalculated less often... even changes in the position and rotation of an entity may follow it for consistency, which would especially benefit lights with a moving origin, like fireplaces or torches held by guards, which recalculate per-frame. Reasoning: light recalculation, and even animated models or individual particles, can be significant contributors to performance drain. We know the further something is from the camera, the less detail it requires; this is why we have a level-of-detail system with lower-polygon LOD models for characters and even mapmodels. Thus we can go even further and extend the concept to visual updates: similar to how you don't care if a faraway guard has a low-poly helmet you won't notice, you won't care if that guard is being animated at 30 FPS out of your maximum of 60, nor if the shadow of a small distant light is being updated at 15 FPS when an AI passes in front of it.
This is especially useful if you own a 144 Hz monitor and expect 144 FPS: I want to see a character in front of me move at 144 FPS, but may not even notice if a guard far away is animating at 60 FPS... I want the shadows of the light from the nearby torch to animate smoothly, but couldn't care less if a lamp meters away updates its shadows at 30 FPS instead. The question is whether this is easy to implement in a way that offers the full benefit. If we use GPU skinning, for instance, the graphics card has to be told to animate the model at a lower FPS in order to actually preserve cycles... does OpenGL (and in the future Vulkan) let us do this per individual model? I know the engine has control over light recalculations, which would probably yield the biggest benefit. Might add more points later so as not to make the post too big; for now, what are your thoughts?
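The frame-skipping half of the proposal can also be sketched: given a rate multiplier, decide whether an entity ticks on a given render frame. This is an assumed approach (integer "error accumulation", similar to line-drawing algorithms), not a description of how the TDM engine actually schedules updates:

```python
# Hedged sketch: map an update-rate multiplier in (0, 1] to a
# per-frame tick decision, spreading skipped frames evenly.

def should_tick(frame_index, multiplier):
    """True when this entity should update on this render frame.

    A 0.5 multiplier ticks every other frame, 0.75 ticks 3 out of
    every 4 frames, 1.0 ticks every frame."""
    return int(frame_index * multiplier) != int((frame_index - 1) * multiplier)

# Over 60 rendered frames, a 0.75 multiplier yields 45 updates -
# the "guard animating at 45 FPS out of your 60" case.
```

Spreading skips evenly rather than dropping them in bursts matters: a burst of skipped frames on a moving guard would read as stutter, while an even pattern just looks like a lower animation rate.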
  11. @Fidcal I know where you're coming from. GPT-4's continuity can sometimes falter over long stretches of text. However, I've found that there are ways to guide the model to maintain a more consistent narrative. I've not yet tried fully giving GPT-4 free rein to write its own long-format fiction, but I co-wrote a short story with GPT-4 that worked really well. I provided an outline, and we worked on the text piece by piece. In the end, approximately two-thirds of the text was GPT-4's original work. The story was well received by my writing group, showing that GPT-4 can indeed be a valuable contributor in creative endeavors. Building on my previously described experiments, I also ran GPT-4 through an entire fantasy campaign that eventually got so long the ChatGPT interface stopped working. It did forget certain details along the way, but (because the game master + player dynamic let me give constant reinforcement) it never lost the plot or the essential personality of its character (Thrumm Stoneshield: a dwarven barbarian goat herder who found a magic ring, fought a necromancer, and became temporary king of the Iron Home Dwarves). For maintaining the story's coherence, I've found it helpful to have GPT-4 first list out the themes of the story and generate an outline. From there, I have it produce the story piece by piece, while periodically reminding the model of its themes and outlines. This seems to help the AI stay focused and maintain better continuity. Examples: The adventure of Thrumm Stoneshield part 1: https://chat.openai.com/share/b77439c1-596a-4050-a018-b33fce5948ef Short story writing experiment: https://chat.openai.com/share/1c20988d-349d-4901-b300-25ce17658b5d
  12. I don't recall a system for noise masking. It sounds like it'd be a good idea, but when you get into the details you realize it'd be complicated to implement. It's not only noise that goes into it, I think. E.g., a high register can cut through even a loud but low-register rumble. And it's not like the .wav file even has data on the register of what it's playing. So either you have to add metadata (which is insane), or you have to have a system to literally check pitch on the .wav data and parameterize it in time to know when it's going to cut through what other parameters from other sounds. For that matter, it doesn't even have the data on the loudness either, so you'd have to get that off the file too and time the peaks with the "simultaneous" moment at arbitrary places in every other sound file correctly. And then position is going to matter independently for each AI. So it's not like you can have one computation that works the same for all AI. You'd have to compute the masking level for each one, and then you get into the expense you're mentioning. I know there was a long discussion about it in the internal forums, and probably on the public subforums too, but it's been so long ago now I can't even remember the gist of them. Anyway, the main issue is I don't know if you'll find a champion who wants to work on it. But if you're really curious to see how it might work, you could always try your hand at coding & implementing it. Nothing beats a good demo to test an idea in action. And there's no better way to learn how to code than a little project like that. I always encourage people to try to implement an idea they have, whether or not it may be a good idea, just because it shows the power of an open source game. We fans can try anything we want and see if it works!
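As a rough illustration of the "get the loudness off the file" step described above: a windowed RMS profile over raw PCM samples gives exactly the kind of time-aligned loudness data a masking check would need. Purely illustrative Python; a real masking system would also need pitch/register analysis, which this omits:

```python
import math

# Sketch: derive a per-window loudness profile from mono PCM samples
# normalized to [-1.0, 1.0]. Illustrative only, not engine code.

def rms_loudness(samples, window, rate):
    """Return (time_seconds, rms) pairs, one per analysis window.

    The time stamps let two profiles be compared at the same
    "simultaneous" moment, as the masking idea requires."""
    profile = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        profile.append((start / rate, rms))
    return profile
```

Such a profile could be computed once per sound shader at load time rather than per playback, which sidesteps part of the runtime expense concern, though the per-AI positional attenuation would still have to be applied on top.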
  13. Cannons actually EXIST in TDM; they show up in about 3 missions. One I recall actually gets FIRED - as part of the objectives (nothing animated, a blast-a-hole-in-a-wall gimmick). More obviously there are MINES too... so B.P. (or something LIKE it) exists in-world. So either the Empire has banned all civilian firearms, or more likely: nobody cared to create one, as it's not part of the classic Thief 1/2/3 series gameplay. Or worse; I don't think any modders here even possess the knowledge to model, animate, script, code, texture, sound-engineer, script some more, and create all the associated kinematic animations as well... you'd need a real Doom 3 expert for that. Also, everyone forgets old air guns exist: https://en.wikipedia.org/wiki/Girardoni_air_rifle The early ones even LOOK 100% steampunk, with weird iron & brass spheres for pressure - fancy nickel-iron skeletal frames galore.
  14. I'm using the version from kcghost. I just tested it and I can't see any difference inside the inventory. The stats screen itself doesn't show the different loot types (still seen in the inventory), but instead gives more info on the stealth score. Edit: I see Dragofer made an updated version of his script. I have to check that out. Edit2: That version works: https://forums.thedarkmod.com/applications/core/interface/file/attachment.php?id=21272&key=02755164a3bed10498683771fe9a0453
  15. This may have been discussed long ago on Discord and I have since forgotten the details. It's an option that seems so simple and effective it kept itching me to ask about in more detail. The latest dev version increases performance by +20 FPS, which has me excited to know more about what seems like it could be a final huge optimization. At the moment we have view frustum and visportal culling, but no form of occlusion culling. I wonder how much FPS we'd gain if we also used world geometry to derender what the player can't see. Would it be worth the effort to add this, even as a hidden setting to experiment with? Given it was never attempted in all those years (to my knowledge) I imagine there's a reason and I may be excited for nothing; I'm sure @stgatilov and other devs can offer more insight, but I'm happy to hear what anyone thinks. Here's my exact proposal: occlusion culling would be done after portal culling (which wouldn't change in any form), ensuring only entities and geometry in the same room are compared. Only world geometry (solid brushes) is used to mask. A counter-argument was that calculating this mask will be costly... but world geometry is almost always very simple, and checking a few boxes should cost almost nothing compared to the gain of hiding every light / model / portal behind any wall.
We can probably iterate through all world surfaces facing the camera within a distance limit if necessary, then use the resulting rectangles to perform the same overlap calculation as portal faces but in reverse (they close behind them instead of opening). Especially now that we have entity scissors and efficient 2D detection of 3D projections, this could be huge. My reasoning is that no matter how well you portal your map, you'll always have many entities hidden behind a wall but not a portal, so the engine still renders so much stuff you can't see. To even try getting close to this level of efficiency, every single edge and corner would need to radiate portals in each complementary direction, an impossible nightmare for the mapper to even attempt, which would destroy dmap if they tried (I got close with limited success). Visportals will always be the simplest and most effective form of culling, but they're ultimately markers to separate rooms and represent openings, thus they can't cover all situations. If on top of that we also masked by world brushes, the gains could be remarkable.
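The "overlap calculation in reverse" could start from a conservative screen-space containment test: cull an entity only when a single occluder rectangle fully contains its projected rectangle. A hypothetical Python sketch; it deliberately ignores depth ordering (the occluder must actually lie nearer than the entity) and unions of partial occluders, both of which a real implementation would have to handle:

```python
# Conservative occlusion sketch. Rectangles are (x0, y0, x1, y1) in
# screen space, e.g. from projecting a brush face or an entity's
# bounding box. Illustrative only, not engine code.

def fully_occluded(entity_rect, occluder_rects):
    """True if any single occluder rectangle fully contains the
    entity's screen rectangle; otherwise keep rendering it."""
    ex0, ey0, ex1, ey1 = entity_rect
    for ox0, oy0, ox1, oy1 in occluder_rects:
        if ox0 <= ex0 and oy0 <= ey0 and ox1 >= ex1 and oy1 >= ey1:
            return True  # hidden behind this wall rectangle
    return False
```

A test this conservative never culls something visible (no popping), it can only miss culling opportunities, which is the safe direction for an experimental hidden setting.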
  16. I looked but didn't see this video posted in these forums. It's pretty cool.
  17. It wasn't a "sacrifice", it was a deliberate decision. People wanted the game to be as close as possible to the original, including pixelated graphics. If you ask me, the former version based on the Unity engine looked and felt better. But, hey... I guess I'm not the right person to judge that, as I never played the original, and always found that the art style of System Shock 2 is much better anyway. This also illustrates the issue with community funded games: Too many cooks spoil the broth. In game design, you need freedom, not thousands of people who want you to do this and this and that. Just take a look at the Steam forums and see how all those wimps complain again about everything. Hopeless.
  18. So giving it none of those tags, but making the AI invisible, silent, non-solid, and on a team neutral to everyone would not work? Oh well, it was a horribly inelegant idea anyway.
  19. I don't think I've ever seen TDM drop a portal without a .lin file. You sure? Support for the OBJ model format was just added in TDM 2.11. You can import the model into DR (File -> Import/Convert Model...), or I think you should just be able to place the model in your FM folder (under models/) and DR should see it. If you have DR open you will need to use File -> Reload Models...
  20. xash3d, a Half-Life clone engine, was also based on darkplaces, though on a much, much older version than what is currently up for grabs. Had they used the newer engine, xash would be close to Source engine levels of photorealism, but the newer engine sources are pretty hard to work with if you are not into toying with the darkplaces source code regularly, mostly because the code no longer resembles the Quake source code. It was also the first Quake engine to support portal culling, though not the first to support real-time lighting and bumpmapping; that honor goes to Tenebrae. Havoc did catch on quickly though, and his implementation was far superior to the Tenebrae model, which used hacked entity lights (sprites, basically). darkplaces uses rtlight, a variant of the old lighting tool from Quake supporting real-time light sources. Quake itself does not support real-time lighting, so the light sources are parsed from an external rtlight file containing the positions of light sources in the map; if you don't have these it can approximate the light sources much the same way as Tenebrae did, but it is very, very, VERY slow. In mods it can be compiled into the map. It also supports bsp2, a new map format allowing much more complex levels, and a skeletal model format instead of the blocky mdl1 format. Havoc has not had much time to work on darkplaces in later years (he works for id Software now) and got married some years back to one of the other devs from the now defunct inside3d, which I used to frequent, but I heard she would probably take up work on it again shortly. Would be rather cool to see where that might lead; having worked with the id dev'ils she might actually make an engine that becomes a serious contender to them. Here's a shot from Quake 1 with all the mod bells and whistles on: skeletal models, real-time lights, HD textures, you name it, it is probably there.
  21. What I understood is that the idea of TDM was born from the fact that it was unclear at the time whether T3 would get a level editor. Source: https://web.archive.org/web/20050218173856/http://evilavatar.com/forums/showthread.php?t=268
  22. This one is really essential: https://www.ttlg.com/forums/showthread.php?t=138607 Should work fine with the GOG version.
  23. Thanks... that's awesome, will gladly keep it in mind. Can't avoid needing a custom script, but I cannot complain: I'll likely write a custom map entity for this, which I can use to do both the storing and the triggering based on circumstance. Since I already asked, I kinda had a part two to my question: is it possible to change AI definitions in realtime, so for minor changes you don't need to register a different AI altogether? Namely the model, skin, head definition, and voice; can a script replace them? For the body model / skin I think that would work like on a func_static, but def_head and def_vocal_set are probably read once on map load and not updated in realtime. It would also break precaching and cause a jitter. The problem is that if I leave the unused AI in a hidden box on the map, it's still loaded in memory and thinking, thus wasting CPU. Can I at least safely delete an entity I don't want? The difficulty filter does that, entities not corresponding to a given difficulty are erased... this however is likely decided during loading, which wouldn't work here.
  24. Vivaldi has its own inbuilt ad- and tracker-blocker, customizable with the filters you want. It does not use surveillance advertising or sell user data: zero tracking, no ads. Its business model is the links and search engines it ships by default (f.ex. DDG, Ecosia, and some others) when you download the browser; it receives commissions when you use them, but you are free to delete them if you prefer. That is a fair solution, along with donations and a store with merch. Recently maybe also commissions from Mercedes, Renault, and VAG, because they use Vivaldi in their in-car navigation systems; Vivaldi is the only browser which works on those devices, where not even Google has succeeded. Not bad for such a small company. The ethics of a company matter too.
  25. https://www.ttlg.com/forums/showthread.php?t=152224 There is a new mapping contest over on TTLG for the Thief: Deadly Shadows 20th Anniversary, and the organizers were kind enough to include The Dark Mod along with all of the Thief games as an option for making a mission to submit as an entry. The deadline is a year from yesterday and the rules are pretty open. I recommend going to the original thread for the details, but I will summarize here:
Rules:
- The mission(s) can be for Thief 1, Thief 2, Deadly Shadows, or The Dark Mod.
- Collaborations are allowed.
- Contestants can use any custom resource they want, though TDM missions cannot use the Deadly Shadows resource pack.
- Contestants can submit more than one mission.
- Contestants can enter anonymously.
- The mission(s) can be of any size. Using prefabs is allowed, but the idea is that this is a new mission: starting from an abandoned map or importing large areas from other maps is not allowed. Naturally this is on the honor system, as we have no way of validating it.
Mission themes and contents: There is no requirement from a theme or story viewpoint; however, contestants might consider that many players may expect or prefer missions celebratory of Thief: Deadly Shadows in this respect: castles, manors, museums, ruins inhabited by Pagans and the like, with a balance of magic versus technology. This is entirely up to the authors to follow or not - it is just mentioned here as an FYI and, while individual voters may of course choose to vote higher or lower based on this on their own, it will not be a criterion used explicitly in voting or scoring.
Deadline: May 25th, 2024 at 23:59 Pacific Time. See the TTLG thread for details on submissions and the voting process. Provided I can make the deadline I hope to participate. It would be nice to see the entire community do something together, and expressing our complicated relationship with this divisive game seems as good a pretext as any.