The Dark Mod Forums

stgatilov

Active Developer
  • Content Count: 2821
  • Joined
  • Last visited
  • Days Won: 46

stgatilov last won the day on May 20

stgatilov had the most liked content!

Community Reputation

809 Legendary

About stgatilov

Contact Methods

  • Website URL
    http://dirtyhandscoding.github.io
  • Jabber
    stgatilov@gmail.com

Profile Information

  • Gender
    Male
  • Location
    Novosibirsk, Russia


  1. I think it is possible, but it would require writing one more shader, so not for 2.08... No idea which rescaling is best, though. Nbohr1more has complained that even with a natural scale like r_fboResolution 0.5 the picture looks blurred, and votes for "nearest" filtering. So opinions differ here...
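     For context, a minimal sketch (not TDM code) of what the "nearest" vs. smooth choice boils down to: the sampling filter set on the FBO color texture before it is upscaled to the screen. The helper name and parameters here are illustrative only.

        #include <GL/gl.h>   // on Windows, include <windows.h> first; loader setup varies by platform

        // Hypothetical helper: choose how the half-resolution FBO color texture
        // is sampled when it gets drawn to the full-size default framebuffer.
        void SetUpscaleFilter(GLuint fboColorTexture, bool useNearest) {
            glBindTexture(GL_TEXTURE_2D, fboColorTexture);
            const GLint filter = useNearest ? GL_NEAREST : GL_LINEAR;   // blocky vs. blurred
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);
        }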
  2. Here is another announcement, arguably more important than what will happen to the 32-bit executable. Starting from version 2.09, TDM will not run on Intel HD 3000 and below. The reason is that TDM 2.09 will require OpenGL 3.3 to run. OpenGL 3.3 is the final version of GL 3, and both NVIDIA and AMD have updated their drivers to support it on all hardware that supports GL 3 (e.g. a GeForce 8600 from 2006 is OK). Although Intel HD 3000 is only 9 years old, Intel has already discontinued it (last driver update in 2015, no Win10 support) and never upgraded its driver to OpenGL 3.3. Going with GL 3.3 will greatly simplify the rendering code and will finish the effort of removing deprecated stuff. There is no plan for further increases in the required OpenGL version: OpenGL 3.3 will be enough to play TDM for a long time. More recent OpenGL features will be used via extensions, and only if your hardware supports them. Speaking of TDM 2.08, we did our best to support HD 3000, leaving some hacks around. Despite that, we ran into problems running the game on this GPU when we happened to get hold of one. If you are trying to play TDM 2.08 on Intel HD 3000, please set the cvar: r_uniformTransforms 0
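     To illustrate the "extensions only when supported" point: in a GL 3.3 core context an optional feature can be gated on a runtime check like the sketch below. This is not engine code; it assumes a loader such as GLAD has already set up the GL function pointers.

        #include <glad/glad.h>   // assumption: any loader providing core 3.3 declarations works
        #include <cstring>

        bool HasExtension(const char *name) {
            GLint count = 0;
            glGetIntegerv(GL_NUM_EXTENSIONS, &count);
            for (GLint i = 0; i < count; i++) {
                const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
                if (ext && std::strcmp(ext, name) == 0)
                    return true;
            }
            return false;
        }

        // e.g. enable persistently mapped buffers only on capable hardware:
        // if (HasExtension("GL_ARB_buffer_storage")) { ... }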
  3. beta208-07 has been released. The list of changes is provided in its usual place. Remember that you must download tdm_mirrors.txt again from the link in the original post before running tdm_update. Please pay attention to AI behavior, in particular sitting and sleeping.
  4. I think a separate thread would be better. Why would someone want more than 10K events per frame? The main question is: what are all these events, and why are there so many of them? The problem is that there is no limit that would be enough for everyone: someone could always write a loop over all entities and exceed any preset limit.
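     Purely for illustration, the kind of per-frame guard being discussed might look like the sketch below; the names, the limit, and the drop-vs-warn policy are all assumptions, not the actual TDM implementation.

        // Hypothetical guard on script events posted within a single game frame.
        static int eventsThisFrame = 0;
        static const int kMaxEventsPerFrame = 10000;   // any fixed value can be exceeded by a script loop

        void OnFrameStart() {
            eventsThisFrame = 0;                        // reset the counter each frame
        }

        bool TryPostScriptEvent(/* event description, target entity */) {
            if (++eventsThisFrame > kMaxEventsPerFrame) {
                // warn the mapper or drop the event -- the policy is the open question
                return false;
            }
            return true;
        }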
  5. OK, I tried the material that you cited. It has the turbulent deform; that's the reason it is excluded. Something to think about. Maybe continue the discussion elsewhere?
  6. This is the material for the water overlay, not for the water surface. That is, it is the material intended to be shown all over the screen when the player dives in, not the one used on the water surface itself. If you look into tdm_water.mtr, most of the materials there have the "water" keyword. The rare ones which don't should be fixed, I think. The translucent keyword should not do any harm.
  7. You have to add the water keyword to the material of the surface which should block the rain. All proper water materials should have this keyword, since without it the player won't be able to dive in. I know that one brush side is usually enough, but why not expect it on all water surfaces?
  8. The reason this water is ignored is that its material does not have the "water" content flag. It is merely "translucent", so it is skipped because it could be a light flare or light rays. There are two levels of determining whether an object is a blocker or not. The first one checks whether the entity should be added at all, and this one can be forced with a spawnarg. The second (internal) level happens during collision detection; it skips 1) deformed materials, 2) dynamic models, 3) materials without the solid or water content flag, and 4) all entities, in case you have set the collisionStaticWorlsOnly spawnarg on the emitter.
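     A rough paraphrase of that second-level check, as a self-contained sketch; the function name and flag bit values are placeholders, not the engine's actual constants or code.

        enum { SOLID_FLAG = 1 << 0, WATER_FLAG = 1 << 1 };   // placeholder bits, not the real content flag values

        bool SurfaceCanBlockParticles(bool hasDeform, bool isDynamicModel, int contentFlags) {
            if (hasDeform)                                   // 1) deformed materials (e.g. turbulent) are skipped
                return false;
            if (isDynamicModel)                              // 2) dynamic models are skipped
                return false;
            if ((contentFlags & (SOLID_FLAG | WATER_FLAG)) == 0)
                return false;                                // 3) neither solid nor water, e.g. a translucent flare
            return true;                                     // 4) collisionStaticWorlsOnly on the emitter is checked separately
        }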
  9. The liquid case is present in the test map. Does rain work properly there? I think the problem is not the fact that it is liquid, but something else. Could you send some sort of map to me? Privately, I guess...
  10. This is just a minor detail. Yes, you can enable PAE and use 32 GB on a 32-bit Windows server OS, as long as you are ready to live with segmented addressing (like in the 16-bit era).
  11. It already works like that, because idlib.natvis has a rule for it. Judging from your screenshot, the natvis rule doesn't work for you: plane data must be inside parentheses, but in your case it is surrounded by braces.
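      For reference, a natvis rule in this spirit might look like the snippet below. This is a sketch only; the actual rule in idlib.natvis may differ. It assumes idPlane stores its coefficients as a, b, c, d, and it belongs inside the AutoVisualizer root element of the .natvis file.

         <!-- Sketch of an idPlane visualizer: prints the coefficients in parentheses. -->
         <Type Name="idPlane">
           <DisplayString>({a}, {b}, {c}, {d})</DisplayString>
         </Type>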
  12. So many misconceptions here, I guess I should say something.

      Yes, 32-bit is perfectly OK in general. There is a limitation of 2 GB of memory in 32-bit mode, and that's a hard limit: a process cannot use more than 2 GB even if you have 16 GB of RAM physically installed and a 32 GB pagefile for swap. When trying to allocate more, the process will simply crash on a 32-bit platform. If your workload fits well into this limit, then there is no harm from it. If it does not, then it becomes an unavoidable blocker. If you have a very big map, you simply cannot dmap it in 32-bit, but if your map is small or medium size, it does not matter to you whether the executable is 32-bit or 64-bit.

      Speaking of variables in code and the switch to 64-bit mode, some of them stay the same size, but some of them grow from 32-bit to 64-bit. All pointers inevitably become twice as large, and all sizes in STL types do the same. Also, a 64-bit process has more overhead for heap allocations. It means that the process consumes more memory in 64-bit mode, and consequently can work slower, since the cache size does not change. You can probably see the increased memory consumption yourself: take a big brushy map and dmap it both in 32-bit and in 64-bit mode, watching memory consumption in Process Explorer. While this point is not very strong for native applications and games, it is a major problem for managed languages like Java, where pointers are everywhere. For instance, a modern JVM still uses 32-bit pointers internally as long as you don't ask for more than 32 GB of heap memory; they say it is 30% faster.

      64-bit mode does not offer any new capabilities except for more-than-2GB memory. There are many other differences between 32-bit and 64-bit, but they are all about performance. 64-bit mode does not offer better precision: floating point numbers are 32-bit and 64-bit on both platforms, working with the same performance. In fact, 32-bit mode also offers 80-bit floating point numbers. Moreover, before something like TDM 2.06, this awful 80-bit arithmetic was used everywhere for intermediate values when computing complicated expressions. We got many precision problems when we disabled it, and we are still fixing them (e.g. there is a bunch of such fixes for dmap in the upcoming 2.08). Now 80-bit floats are disabled both for 32-bit and 64-bit builds of TDM. So it turns out that 32-bit mode offers more floating point precision, although these 80-bit numbers cannot be used universally in Visual Studio anyway. (Assembly geeks can also notice that 32-bit mode without 80-bit arithmetic has a tiny bit of overhead for passing floats into functions, due to the old calling convention.)

      Speaking of integers, both 32-bit and 64-bit modes offer integers of size up to 64-bit. The 64-bit integers are emulated on a 32-bit platform and are pretty slow, but TDM never uses them. 64-bit GCC also offers a builtin 128-bit integer type, but Visual Studio does not, so TDM does not use it either. Anyway, I can hardly imagine what we could use it for. (Assembly geeks can also notice that using 32-bit integers in 64-bit mode occasionally has a bit of overhead, due to additional extending instructions.) And of course the bitness of the executable does not affect the GPU.

      The main performance advantage of 64-bit mode is having 2x more registers in the CPU: both GP registers and SSE/AVX registers. When the compiler runs out of registers in a tight loop, it has to "spill" into memory. This memory is surely in L1 cache, but L1 cache accesses are still slower than registers, plus there are more dependencies between instructions. In TDM, there are many computationally heavy routines written in SSE/AVX (aka SIMD), which we had to rewrite for 64-bit mode, because the original implementation by id was written in inline assembly, and 1) you cannot use 32-bit assembly in a 64-bit build, and 2) VC does not allow inline assembly at all in a 64-bit build. So there are two completely different implementations of the SIMD routines now: the first one (by id) is used in 32-bit mode, and the second one (by us) is used in 64-bit mode. As far as I remember, the second one often runs pretty low on registers, so if we start using it in 32-bit mode, it will work a bit slower. Some of the new routines written by us are already used in 32-bit mode, e.g. the AVX code.

      Quite unlikely, I would say. At most, the 32-bit build will start working slower than the 64-bit build. People with low-end machines should use the 64-bit build, so it's not a big problem.

      As you see, the performance difference is a very complicated question. Out of all I wrote, I think most of the difference comes from:
        • the 64-bit build wastes more memory (pointers, heap overhead)
        • the 64-bit build has more registers
        • the 64-bit and 32-bit builds have different SIMD implementations
      To be honest, I have no idea which build wins now.
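      As a small, self-contained illustration of the size differences mentioned above (not TDM code), compiling the program below once as 32-bit (e.g. g++ -m32) and once as 64-bit and comparing the output shows which types grow and which stay the same:

         #include <cstdio>
         #include <cstdint>
         #include <vector>

         int main() {
             std::printf("void*       : %zu\n", sizeof(void *));             // 4 in 32-bit, 8 in 64-bit
             std::printf("size_t      : %zu\n", sizeof(size_t));             // 4 vs 8
             std::printf("vector<int> : %zu\n", sizeof(std::vector<int>));   // typically 12 vs 24 (three pointers)
             std::printf("float       : %zu\n", sizeof(float));              // 4 in both modes
             std::printf("double      : %zu\n", sizeof(double));             // 8 in both modes
             std::printf("long double : %zu\n", sizeof(long double));        // 8 on MSVC; 12/16 on GCC, holding the 80-bit x87 format
             std::printf("int64_t     : %zu\n", sizeof(int64_t));            // 8 in both, but emulated by instruction pairs in 32-bit code
             return 0;
         }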
  13. Well, the image is the same in 2.06 and 2.07, but different in 2.08. It does not depend on shadows; you can even disable them. Hence, it is not related to the changelog entry that you referenced. Perhaps someone should bisect the SVN history and find the culprit. Could I ask @duzenko to do this?
  14. Did you try it in 2.06? Did you try it with shadow maps? Could you share a test map?
  15. It was pretty hard work to make the engine run in 64 bits, because id wrote the code in a very platform-specific manner and never expected it to become 64-bit. However, all that work was finished by 2.06, and no more effort has been spent on it since then. It would be great if people did independent performance testing. To be honest, I have not checked it for a long time already, and one guy (nbohr1more) is not enough to draw a final conclusion. I believe the performance difference is not very high, though.