The Dark Mod Forums

Everything posted by stgatilov

  1. I think we should deprecate parallel lights. The way they are implemented in Doom 3 only works intuitively if the light is fully contained in a single visportal area, which is rarely the case. Otherwise, there are huge issues with the area-portal graph and shadows. For something global like moonlight, the new parallelSky light should be used. If it contains the whole level in its light volume, then the behavior should be well-defined as long as the portalsky material is used to seal the level from the sky (not caulk or something similar). The light is supposed to come through the portalsky surfaces into all areas in this case. Are there any other usages of parallel lights?
  2. Maybe this: https://bugs.thedarkmod.com/view.php?id=6244 ? If yes, please leave a comment there about how badly it breaks existing missions. Such circumstances greatly raise the significance of an issue, at least in my eyes.
  3. Well, the previous dev build introduced a bug with noshadows lights: their lighting can be optimized away in some cases, giving faster performance but wrong results. The upcoming fix will probably take back a bit of that performance, although the cost seems to be rather small.
  4. Every light volume is represented as a frustum. Indeed, a spotlight has a smaller frustum than an ordinary shadowed point light, so it generates fewer interactions. It should be faster in most cases. Currently shadow maps always use a cubemap, even for spotlights, but I hope we'll change that too in the future (see the sketch below).
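As a minimal illustration of the cubemap point (names made up; this is not TDM code): a spotlight's single frustum would need only one shadow map face, while a point light needs all six cubemap faces.

```cpp
// Illustrative only, not TDM source: rendering a full cubemap for a
// spotlight wastes work, since one face along the spot direction suffices.
enum LightType { LIGHT_POINT, LIGHT_SPOT };

int ShadowFacesNeeded(LightType type) {
    return (type == LIGHT_SPOT) ? 1   // single frustum, single shadow face
                                : 6;  // full cubemap around a point light
}
```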
  5. dev16814-10408 is released. It completes the optimization of shadow-casting lights, which might give some more performance.
  6. Another issue is that sound mixing sometimes changes. The lockpicking section of the training mission includes a case where the player needs to pick a lock with a noisy machine nearby. I remember that when I played it the first time, the generator noise made the lockpicking sounds totally inaudible. But since then sound mixing has moved to OpenAL, and maybe something else changed too; now this section looks stupid, because I can pick the lock as usual and the generator does not block the lockpicking sounds for me.

     Perhaps this can be implemented. Of course, it would require more sound propagation processing: right now we only trace sound from everything to the player, and from the player (maybe something else?) to the AIs. With this proposal, sound has to be traced from everything to all AIs. The masking computation can be done based on overall amplitude, or on the sound sample plus its final volume. That is not correct, but it should probably be fine as long as we stay on the conservative side (see the sketch after this post).

     On the other hand, it is hardly possible to deal with the multiple-footsteps issue. There are too many cues that the human brain uses here. For instance, a thief with special boots would have footsteps very different from the footsteps of guards, especially if the guards carry armor. Also, a guard might know where his buddies can/should walk and where they should not, and can get suspicious depending on that. Also, the thief probably has a very different tempo of footsteps: he moves in a more irregular fashion, and never talks while doing so.
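As a rough illustration of the amplitude-based masking idea, here is a minimal sketch. Everything in it (SoundEvent, IsMasked, the margin and overlap window) is hypothetical, not TDM code; it only shows what "staying on the conservative side" could look like.

```cpp
#include <cstdlib>

// Hypothetical sketch, not TDM code: masking based on overall amplitude.
struct SoundEvent {
    float volumeAtListener;  // final volume at the AI's ear after propagation losses
    int   timestampMs;       // game time when the sound arrived
};

// A loud sound masks a quiet one only if both overlap in time AND the
// loud one exceeds the quiet one by a safety margin. Erring towards
// "not masked" is the conservative side: the AI may hear slightly too
// much, but never misses a sound it realistically should have heard.
bool IsMasked(const SoundEvent &quiet, const SoundEvent &loud,
              float marginDb = 10.0f, int overlapWindowMs = 500) {
    bool overlapping = std::abs(quiet.timestampMs - loud.timestampMs) < overlapWindowMs;
    return overlapping && loud.volumeAtListener > quiet.volumeAtListener + marginDb;
}
```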
  7. In the case of stencil shadows, there are two parts:
     - computing the shadow volume (on CPU)
     - rendering the shadow volume (on GPU)
     Shadow volume computation is cached in the interactions table for each light+entity pair; it is recomputed only when at least one of them changes (see the sketch after this post). Rendering the shadow volume cannot be cached because it depends on the camera too much. So with stencil shadows the situation is already good. Yeah, it is possible to skip shadow volume computation for distant lights.

     In the case of shadow maps, there is also some CPU filtering and the actual rendering on GPU. The CPU filtering is also cached and only recomputed on light or entity change, but on GPU the shadow maps are rendered from scratch every frame. And this is not good: at the very least, a static light with a static set of shadow casters should not have its shadow map recomputed every frame. Perhaps it is also good to have the shadow map of static entities saved persistently, then every frame copy it and render only the dynamic entities on top. I have had this issue in my mind for some time already. There are some problems here: shadow maps which are not recomputed must not change their location in the shadow map atlas, and thus they can block other shadow maps from taking proper space (basically, typical memory allocation issues). Also, if we try caching only the static portion of a shadow map, then we have to create another shadow atlas for it, which raises memory requirements dramatically. Right now it is not possible to skip anything for shadow maps at all; it might become possible after I implement reusing shadow maps in static cases.

     Overall, I can't say I like this idea. Even if a light is distant, it is still ugly when a shadow does not move in sync with its shadow caster. But shadow maps should definitely be optimized for static cases.
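Here is a minimal sketch of the per-pair caching scheme described above, with made-up names (nothing below is actual TDM source): shadow geometry is keyed by the light+entity pair and rebuilt only when either side's change counter moves.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

// Hypothetical sketch, not TDM source: shadow volume geometry cached
// per (light, entity) pair, recomputed only when one of them changes.
struct ShadowVolumeCache {
    struct Key {
        int lightId, entityId;
        bool operator==(const Key &o) const {
            return lightId == o.lightId && entityId == o.entityId;
        }
    };
    struct KeyHash {
        size_t operator()(const Key &k) const {
            return std::hash<uint64_t>()(
                (uint64_t(uint32_t(k.lightId)) << 32) | uint32_t(k.entityId));
        }
    };
    struct Entry {
        int lightChanges = -1;           // change counters seen at build time
        int entityChanges = -1;
        std::vector<float> shadowVerts;  // cached shadow volume geometry
    };
    std::unordered_map<Key, Entry, KeyHash> entries;

    // Placeholder for the expensive CPU-side silhouette extraction.
    std::vector<float> BuildShadowVolume(int /*lightId*/, int /*entityId*/) {
        return {};
    }

    // Returns cached geometry, rebuilding only if the light or the entity
    // changed since the last build; camera motion never invalidates it,
    // matching the behavior described in the post.
    const std::vector<float> &Get(int lightId, int lightChanges,
                                  int entityId, int entityChanges) {
        Entry &e = entries[Key{lightId, entityId}];
        if (e.lightChanges != lightChanges || e.entityChanges != entityChanges) {
            e.lightChanges = lightChanges;
            e.entityChanges = entityChanges;
            e.shadowVerts = BuildShadowVolume(lightId, entityId);
        }
        return e.shadowVerts;
    }
};
```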
  8. FXAA is cheap but looks awful. Supersampling AA is strictly more expensive than multisampling, so there is no reason to use it if multisampling looks OK. Temporal AA requires major changes in the engine in order to be used, plus it kinda requires motion blur to hide its uselessness on fast camera motions. So don't expect multisampling to be replaced soon.
  9. In order to better discuss performance, please look into this article: https://wiki.thedarkmod.com/index.php?title=Tracy:_timeline_profiler#Gameplay Perhaps inspect some Tracy records too. The game consists of four main parts, most of which run in parallel:
     1) game modelling
     2) renderer frontend
     3) renderer backend
     4) GPU
     Suppose the n-th part consumes Tn milliseconds of time on some interval; then the interval takes max(T1 + T2, T3, T4) of time in total (game modelling and frontend run one after the other, the rest are pipelined). For example, if T1 + T2 = 8 ms, T3 = 5 ms, and T4 = 12 ms, then the interval takes 12 ms and the GPU is the bottleneck. Usually either p.1 + p.2 or p.4 is the bottleneck. If you try doing fewer updates, you cannot make the renderer backend and GPU faster: they still have to render all the stuff every frame, except for something like reusing shadow maps between frames, which is not implemented yet.

     You can make game modelling faster if some entities skip thinking. The most time-consuming entities are AIs, and this kind of optimization has been applied to them since the very beginning of TDM, known as the "interleaved thinking optimization" (Doom 3 had a similar but stronger "dormancy" concept). If an AI is behind closed doors and far from the player, then he thinks rarely, e.g. 3 times per second; see the sketch after this post. You can use the cv_ai_opt_forceopt cvar to experiment with it. It took much effort to make this work reliably, but without it TDM would be very slow on anything but small levels. And this optimization caused some headache later; for instance, it caused distant AIs to randomly die with uncapped and low FPS. All the other entities think every frame, as far as I know. Some entities are known to be dangerous to think rarely: that's mostly physics-related things like ropes and ragdolls. Another thing is that many entities spend negligible time in their Think methods, and most entities don't think at all (only "active" entities think, see the listActiveEntities command).

     Speaking of the renderer frontend, you can update entities less often. Then you reduce the time spent on determining which areas entities belong to, and on generating interactions (that's the main part, I think). However, there are parts of the frontend which are view-dependent, so they need to run every frame. You can experiment with the cvar r_skipUpdates. As far as I understand, it disables all updates of render entities/lights from game code. The game modelling still runs, guards still move around, see you, speak, chase you, and kill you. But the renderer displays the obsolete state of the world from the moment when you set the cvar.
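A minimal sketch of the interleaved-thinking idea, with made-up names and thresholds (the real TDM implementation is far more involved): a far-away, occluded AI runs its expensive Think only a few times per second.

```cpp
// Hypothetical sketch, not TDM code: interleaved thinking for AIs.
struct AI {
    int lastThinkTimeMs = -1000000;  // game time when Think last ran

    // The real engine uses much richer criteria (doors, areas, visibility);
    // this predicate is purely illustrative.
    static bool FarAndOccluded(float distanceToPlayer, bool behindClosedDoors) {
        return behindClosedDoors && distanceToPlayer > 2000.0f;
    }

    void MaybeThink(int gameTimeMs, float distanceToPlayer, bool behindClosedDoors) {
        // Think every frame when relevant, ~3 times per second otherwise.
        int intervalMs = FarAndOccluded(distanceToPlayer, behindClosedDoors) ? 333 : 0;
        if (gameTimeMs - lastThinkTimeMs >= intervalMs) {
            lastThinkTimeMs = gameTimeMs;
            Think();
        }
    }

    void Think() { /* expensive AI update: senses, pathing, decisions */ }
};
```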
  10. I guess the difference is because of a long-standing issue with stencil shadows: https://bugs.thedarkmod.com/view.php?id=5851 Other than that, multisampling is a big waste of memory bandwidth, so antialiasing will never become totally free in TDM.
  11. I can reply with the very same text: why fix tiny issues of developer/cheat commands in any version if they get broken anyway a year later? We usually support and fix the commands/cvars that mappers use regularly when working on missions, but that's it. I don't have that much time for fixing all the minor stuff. It does not pay off to fix some noclip issue that you can a) avoid getting into, b) fix with another console command. When game developers follow the idea that "all console commands must work flawlessly", they usually close off the console so that players cannot get into it. And yes, that's how testing works: the software is tested constantly, again and again, and the very same bugs are discovered again and again. There was definitely a case when we aimed for a stability release (like 2.07), but I don't feel 2.11 is anything close to that.
  12. I investigated the topic, and I still think it is too hard. Precomputed visibility is perhaps the best thing for us. So we can split space into cells, and precompute whether cell A and cell B have an unoccluded straight line connecting them. We can limit occlusion to brushes only: there is no need to take models/patches into account. Precomputed visibility should be done on a per-area basis: when we compute the visibility data for one area, we consider all visportals and all other areas opaque. In other words, we only check for direct visibility within the area. If such information is available, it can be combined with the existing visportal & area traversal code (see the sketch after this post). The main problem is how to precompute visibility on a per-cell basis. A solution must:
      - Be conservative: you don't want to occasionally see small holes into nowhere.
      - Do perfect occluder fusion: otherwise a big house would not occlude most of the stuff behind it.
      - Have sane build times for brush geometry of our scale.
      This inevitably leads to pretty complex algorithms. If a mapper can add a special brush and say "this is a major occluder in visarea N", then we can probably (not sure yet) verify that he is correct in saying that, and simply raytrace this occluder during visportal traversal. But realistically... I don't think mappers would really use this technique, except maybe a very few people/missions.
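To make the cell idea concrete, here is a minimal sketch of the per-area data this would produce (all names hypothetical, not TDM code): the offline build pass sets a bit for every cell pair it cannot prove occluded, so queries can only ever cull what is provably hidden.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch, not TDM code: per-area precomputed visibility
// stored as a packed symmetric bit matrix over cells.
struct CellVisibility {
    int numCells = 0;
    std::vector<uint8_t> bits;  // numCells * numCells bits, row-major

    void Init(int n) {
        numCells = n;
        bits.assign((size_t(n) * n + 7) / 8, 0);
    }
    // Called by the offline build pass for every pair it cannot prove
    // occluded; marking both directions keeps the matrix symmetric.
    void SetVisible(int a, int b) {
        SetBit(size_t(a) * numCells + b);
        SetBit(size_t(b) * numCells + a);
    }
    // Conservative query: an unset bit means "provably occluded", so a
    // cautious builder can never introduce small holes into nowhere.
    bool MayBeVisible(int a, int b) const {
        size_t i = size_t(a) * numCells + b;
        return (bits[i / 8] >> (i % 8)) & 1u;
    }

private:
    void SetBit(size_t i) { bits[i / 8] |= uint8_t(1u << (i % 8)); }
};
```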
  13. The problem I have with antiportals is that using them is too hard. If the obstacle is small, then an antiportal won't help much. If it is huge, then it's probably some building and you are trying to optimize an outdoors area. An antiportal over the whole building makes sense, but:
      - A rectangular antiportal is not enough: you need something that fills as much of the building's volume as possible, at least a box or two crossing antiportals.
      - Are you really 100% sure the player cannot open doors and look through the whole building?
      - Are you sure you would set the antiportal in such a way that it won't block something visible?
      - Your antiportals cannot affect all areas like visportals do, because when the player is indoors, the antiportal will block his visibility. You want the antiportal to work only when the player is outdoors, so you need some filter with locations (see the sketch after this post).
      It becomes so messy that if a mapper wants to optimize his building, he'd better just place all the portals around and hope for the best.
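For illustration, the location filter mentioned in the last point could look like this tiny sketch (names invented; this is not an existing TDM feature): the antiportal occludes only when the viewer stands in one of the areas the mapper listed.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch, not an existing TDM feature: an antiportal that
// is honored only for viewers in mapper-chosen areas (e.g. outdoors).
struct Antiportal {
    std::vector<int> activeInAreas;  // area numbers where occlusion applies

    bool ActiveFor(int viewerArea) const {
        return std::find(activeInAreas.begin(), activeInAreas.end(),
                         viewerArea) != activeInAreas.end();
    }
};
```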
  14. Occlusion culling is not as easy as it looks. For instance, if you simply check an entity's bbox against individual brushes, then most of the entities behind walls would pass the check, because no individual brush covers a whole entity completely (see the sketch after this post). Also, there are dozens of thousands of brushes on large maps; you cannot iterate through them naively. On the other hand, this is close to the concept of "antiportals": the mapper puts an "antiportal", ensuring that everything behind it is occluded, and the engine takes it into account whenever it works with portals. But this requires work from the mapper, and I don't think it would provide much help. To get serious benefits, you need to recognize a whole wall consisting of many brushes as a single impenetrable surface, at which point you necessarily have something really complicated. I thought about doing Umbra-like occlusion culling on brushes (with automatic portals and conservative rasterization inside), but I realized it's a ton of work. There is always something else to do. By the way, the recent change is not about what the player does not see, it's about what the light doesn't see/hit. Just rendering a surface is very cheap because of the depth prepass, but light interactions are costly with all the textures, soft shadows, etc. Realizing that you don't need a light interaction on something you do see is quite beneficial. The problem with the original Doom 3 engine is that it basically lights up all objects within a light volume: with the exception of static shadow-casting lights, portals are just ignored! I have expanded the usage of portals to dynamic lights. It might sound funny, but there is still some room for improvement in this area...
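To see why the naive check fails, consider this sketch (hypothetical names, stubbed geometry; not TDM code): a single brush occludes the entity only if it hides the entire bounding box by itself, which a wall built from many small brushes never does.

```cpp
#include <vector>

// Hypothetical sketch, not TDM code. Each brush is tested in isolation,
// so a wall made of many small brushes fails to occlude anything even
// though the wall as a whole hides the entity; fixing that requires
// occluder fusion, which is where the real complexity starts.
struct Box { float mins[3], maxs[3]; };

// Stub: real code would test the entity's bbox against the shadow
// frustum cast by the brush from the viewpoint.
bool BrushFullyOccludes(const Box & /*brush*/, const Box & /*entity*/,
                        const float /*viewPoint*/[3]) {
    return false;
}

bool NaiveOcclusionCull(const std::vector<Box> &brushes, const Box &entity,
                        const float viewPoint[3]) {
    // Tens of thousands of brushes on large maps: too slow to iterate
    // naively, and almost no single brush covers the entity anyway.
    for (const Box &b : brushes)
        if (BrushFullyOccludes(b, entity, viewPoint))
            return true;
    return false;
}
```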
  15. dev16809-10394 is released. This version might give a noticeable performance boost depending on the mission. Let's hope it does not bring along more bugs.
  16. Maybe you can check whether you can reproduce it without Frontend Acceleration. And if you can, then also try com_smp 0.
  17. As a matter of fact, I cannot reproduce the issue on High Expectations either.
  18. How do you cancel the shot? Does the same issue happen to you on other maps?
  19. "player1_weapon" entity usually has two attachments: so-called aimer, and arrow. In this crashdump, the aimer is here, but arrow attachment is dead: it was deleted beforehand, without clearing the attachment reference. And since this code does not check e for NULL, it calls a method on NULL object, which certainly crashes. I'll of course add a check for NULL here, but it would be interesting to understand what kills arrow attachment, and why this problem does not happen every time. UPDATE: Since everyone reported the arrow to be missing just before the crash, I suppose the real issue is that arrow entity is occasionally lost...
  20. It can be easier to trigger on a Debug build. Also, the ability to record crashdumps might depend on some Windows OS settings. You can first try to execute the "crash" console command and see if you can record a crashdump from it, just to be sure everything is prepared properly.
  21. Yes. I think it would be even easier if you take a clean 2.11 installation and record a crashdump on it. This way our release PDB would match your crashdump automatically.
  22. Here you can select the configuration. The default one is "Debug" for some reason, which is interesting, because we almost never use pure Debug these days. The problem is that "Debug", "Debug Editable", and "Debug Fast" all use the FASTLINK setting for debug information, which makes the PDB file useless. Only the "Release" configuration can generate crashdumps that can be passed to someone else for analysis (with the PDB file, of course). It looks like I can see some stack traces, and the main thread seems to be "waiting for frontend". It would be interesting to see what is wrong with the e->spawnArgs value, but my debugger does not show anything.
  23. Well, this is not a Release build, so it is unlikely I can use my PDB. Which build is it, at least? The PDB file is near the executable; it has the .pdb extension.
  24. I suppose you built your own executable, so crashdumps generated on it can only be opened with the PDB file that was generated alongside it. It is possible to hack a PDB file to "match" an EXE file (this works fine if you were using a Release build of 2.11), but unfortunately that does not work for a crashdump, and I don't have the executable your crashdump was generated from. So either attach the PDB or at least the EXE. If you rebuilt the TDM executable after recording the crashdump, then I guess the crashdump is no longer openable.