The Dark Mod Forums

cabalistic

Development Role
  • Posts

    1579
  • Joined

  • Last visited

  • Days Won

    47

Posts posted by cabalistic

  1. 23 minutes ago, Araneidae said:

    I have noticed one problem since 2.07: I can't edit the brightness settings anymore. To be precise, the Brightness and Gamma sliders are there under Video Settings, but they don't do anything when I move them; they used to work in 2.07. Otherwise it looks OK so far. Pleased to say that sound now works out of the box (actually, maybe it already did in 2.07, I can't quite remember now).

    System is Fedora 32 XFCE.

    They don't do anything in the menu, unfortunately, but they should have a clear effect in-game.

    • Like 1
  2. If we enable depth testing and disable the custom depth culling, it looks like this:

    newjob_2020-05-04_19_04_13.thumb.jpg.9db4c425a793186a60b88d522d2307ce.jpg

    Unfortunately, while that works fine for this particular particle effect, it makes many others look very ugly (particularly steam). There is no easy and quick solution to this, I'm afraid. At least nothing that could be squeezed into 2.08.

    • Like 2
  3. Yes, it's definitely soft particles. They are drawn with depth func 'always', i.e. ignoring the depth buffer, and then do their own depth test inside the shader against the depth copy, which is not anti-aliased -> there's your problem.

    • Like 1
  4. 1 minute ago, stgatilov said:

    Does not help. I even tried to replace this pseudorandom with another one which I used for dither yesterday (sin + fract); it looks pretty much the same.

    This fixes the problem for me.

    I see no error messages in debug context, no errors when reloading programs. The issue happens on objects which are far enough from the player, so I guess mipmapping is the culprit.

    Should I capture a trace and look at GL states during SSAO rendering?

    Thanks! Yeah, that might be a good start. I'll also try to think about what could possibly be going wrong and what else we could test to narrow it down.

    One thing you might try: in the shader, after the #ifdef block from line 80 to 84, you could insert another line 'mipLevel = 1;'. This would force the shader to always look into mip level 1, and you could go through the levels individually to see if there's a specific one that looks obviously wrong, or if they are all bad.

  5. 6 minutes ago, stgatilov said:

    Yes, I reproduce it on RX 550.

    Any plan?

    Can you try to play with the random factor in ssao.frag.glsl (line 168, randomPatternRotationAngle) and e.g. modify the factor 3 at the beginning, see if that makes any difference to the pattern (even if it doesn't immediately fix it)? This is one aspect from the original Nvidia implementation that I've had trouble with before. If the patterns are caused by that, we might need to find a different random pattern or even reintroduce a noise texture to get consistent behaviour on all vendors...

    If that's not it, you can try to change line 74 to 'const int u_maxMipLevel = 0;' as that will disable the most immediate difference between ssao levels 1 and 2/3.

    If it's not that either, then it's going to be more tricky since I can't reproduce it...

  6.  

    20 minutes ago, lowenz said:

    Of course you don't see it, in "Low" mode there's no issue.

    The issue is only with "Medium" and "High". You don't see the blocks in the SECOND screenshot? 😐

    Take a look at the wall and the rooftops.

     

    Thanks for the tip about the bloom (it's unrelated to SSAO, just asking).

    I meant, I don't have the issue and cannot reproduce it, my shot is with ssao level 3. Unfortunately, even with your darkmod.cfg it doesn't happen for me (and yes, I have set r_ssao to 2 and 3, respectively). Are there any other AMD users who can reproduce this?

    • Like 1
  7. I'm not seeing it:

    stlucia_2020-05-04_14_21_43.thumb.jpg.aa2c40194e0f3f15f73729a329e6e28c.jpg

    I'll need your darkmod.cfg and console dump.

    11 minutes ago, lowenz said:

    * Why r_showFBO 5 crashes the game?

    Probably because you have bloom deactivated? Although obviously it still shouldn't crash.

  8. 1 hour ago, duzenko said:

    That should explain it

    @nbohr1more

    No, multidraws aren't worth it the way I implemented them

    You need to clip to the void to start getting a 3% difference

    It seems the driver is doing a good job of batching draw calls already

    Let's see how @cabalistic will do it in 2.08, but generally there is little chance of getting capped by draw calls/driver in 2.08 already

    Hm, actually draw calls are very clearly starving the GPU, because the CPU overhead per draw call is way too high. It's just that, if you have com_smp enabled, it is almost completely hidden by the fact that the frontend takes a very similar amount of CPU time. But since the frontend also has some clear potential for improvement (mainly stronger parallelization), optimizing the draw call overhead is definitely worth our time :) But yeah, I think I know how to approach this, so let's postpone it until after 2.08.

    • Like 1
  9. 5 hours ago, stgatilov said:

    The cost of bloom itself is pretty low, but the cost of HDR is not so evident. It increases size of framebuffer color image by 100%, size of the whole framebuffer by 50%, and total memory consumed by framebuffers by 33% or 25%. If a particular scene is limited by memory bandwidth or size on particular hardware, it might be noticeable. Don't forget about integrated GPUs, which use ordinary RAM.

    Yeah, that's fair enough, although so far I haven't found an instance where HDR actually has a significant overhead. Just for fun, I tested it on my GPD Win 2 with its dual-core m3 and an HD 615, and the framerate difference between 32 bit color precision without bloom and 64 bit color precision with bloom was something like 35 to 32 fps at the beginning of "A New Job". Not that that's an exhaustive test or anything, and as you say, there could be scenes that react more strongly. But my impression is that even the low end of the current hardware generation is pretty well optimized for handling floating point buffers.

    • Like 2
  10. Bloom is a screen-space effect, it has a constant cost independent of the number of lights (or bright spots). That cost is very low; I think it was below 0.5ms on a GTX 1050Ti mobile. I don't know how many fake halos you can render in that time (probably quite a few), but compared to the total cost of scene rendering, you are unlikely to notice this one, except on really old hardware perhaps :)

    • Like 3
  11. On 4/28/2020 at 8:02 PM, joebarnin said:

    Nice! How do you actually enable this on a specific light?

    In the material for your light source, just add a new blend stage, like this:

    	{
    		blend	add
    		map		textures/models/light_emissive
    		rgb     10
    	}

    Use a texture which highlights the emissive parts of your light source (i.e. the areas that shine light), then use the "rgb" factor to scale the light to your desired brightness.

    • Like 2
    • Thanks 1
  12. One thing to note in general: no matter whether you're integrating MP into TDM or starting a new project, it's going to take a lot (I mean, seriously, a lot) of work to get something playable. And judging from the activity in this forum, I think there is a high probability that you'd end up with fairly empty servers - which is the death of any MP game. While I'm sure the idea of an MP thief game is appealing, I'm not sure the player base is there to justify the enormous effort it'd take :)

  13. Sort of. What you describe is closer to super-sampling, i.e. rendering everything at a 16x increased resolution and then scaling down the final result. MSAA is a bit smarter and only uses the 16 samples along edges. But still, high resolution and high AA is a deadly combination for even the most powerful of GPUs :) (not to mention it uses insane amounts of GPU memory)

    So yeah, the immediate issue here is that the settings are way too high and need to be lowered. That being said, I do feel I should warn that the TDM engine is not the most efficient engine out there, at least not for modern-day hardware. The underlying Doom3 engine was written years ago with very different requirements in mind, and the consequence is that TDM does not efficiently make use of modern multi-core CPUs and GPUs. So if you feel that the quality-per-performance ratio you get from TDM on your hardware is not quite up to par with modern AAA games, you are not wrong. We do try to improve things with time, but it'll never quite reach top-tier performance :)

    • Like 1