cabalistic
-
Posts posted by cabalistic
-
-
Yes. Very likely a compiler optimization bug...
-
If we enable depth testing and disable the custom depth culling, it looks like this:
Unfortunately, while that works fine for this particular particle effect, it makes many others look very ugly (particularly steam). There is no easy and quick solution to this, I'm afraid. At least nothing that could be squeezed into 2.08.
-
Yes, it's definitely soft particles. They are drawn with depth func 'always', i.e. ignoring the depth buffer, and then do their own depth test inside the shader against a copy of the depth buffer, which is not anti-aliased -> there's your problem.
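To illustrate the mechanism, here is a minimal sketch of such an in-shader depth test (all names are hypothetical; this is not TDM's actual soft_particle shader):

```glsl
#version 330

// Sketch of a "soft particle" fragment shader that does its own depth test.
// u_depthCopy is a single-sampled copy of the scene depth buffer, so the
// comparison happens against non-anti-aliased depth values.
uniform sampler2D u_depthCopy;
uniform vec2 u_invViewport;   // 1.0 / viewport size
uniform float u_fadeRange;    // depth range over which the particle fades out

in vec4 particleColor;
in float fragViewDepth;       // linearized view-space depth of this fragment
out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy * u_invViewport;
    // One sample, no MSAA resolve (a real shader would linearize this first).
    float sceneDepth = texture(u_depthCopy, uv).r;
    // Fade out near scene geometry instead of hard-clipping against it.
    float fade = clamp((sceneDepth - fragViewDepth) / u_fadeRange, 0.0, 1.0);
    fragColor = vec4(particleColor.rgb, particleColor.a * fade);
}
```

Because the depth copy carries no MSAA sample information along edges, the comparison can't match the anti-aliased scene, which is what produces the artefacts discussed here.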
-
No, it looks like it's the soft_particle shader. It's operating with a copy of the depth buffer, so that may well be the problem. Not an easy fix, unfortunately.
-
@lowenz stgatilov may have found a fix for the SSAO issue, could you confirm if this fixes the issue for you, as well? Please download the attached shader file and put it into your darkmod\glprogs directory (create the glprogs folder if it doesn't exist): ssao.frag.glsl
-
1 minute ago, stgatilov said:
Does not help. I even tried to replace this pseudorandom with another one which I used for dither yesterday (sin + fract), it looks pretty the same.
This fixes the problem for me.
I see no error messages in debug context, no errors when reloading programs. The issue happens on objects which are far enough from the player, so I guess mipmapping is the culprit.
Should I capture a trace and look at GL states during SSAO rendering?
Thanks! Yeah, that might be a good start. I'll also try to think about what could possibly be going wrong and what else we could test to narrow it down.
One thing you might try: in the shader, after the #ifdef block from lines 80 to 84, insert another line 'mipLevel = 1;'. This forces the shader to always sample mip level 1; you could then step through the levels individually to see whether there's a specific one that looks obviously wrong, or whether they are all bad.
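For clarity, a hedged reconstruction of what that debug edit would look like (the #ifdef block is paraphrased from the SAO reference implementation this shader is based on, not copied from TDM's source):

```glsl
#ifdef USE_MIPMAPS
    // Pick a mip level based on the screen-space sample radius ssR
    // (paraphrased from the SAO reference implementation).
    int mipLevel = clamp(findMSB(int(ssR)) - LOG_MAX_OFFSET, 0, u_maxMipLevel);
#else
    int mipLevel = 0;
#endif
    mipLevel = 1;   // debug override: force every lookup to mip level 1
```

Changing the forced constant from 0 up to u_maxMipLevel lets you inspect each level of the depth mip chain in isolation.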
-
6 minutes ago, stgatilov said:
Yes, I reproduce it on RX 550.
Any plan?
Can you try playing with the random factor in ssao.frag.glsl (line 168, randomPatternRotationAngle), e.g. modify the factor 3 at the beginning, and see if that makes any difference to the pattern (even if it doesn't immediately fix it)? This is one aspect of the original Nvidia implementation that I've had trouble with before. If the patterns are caused by that, we might need to find a different random pattern or even reintroduce a noise texture to get consistent behaviour across all vendors...
If that's not it, you can try to change line 74 to 'const int u_maxMipLevel = 0;' as that will disable the most immediate difference between ssao levels 1 and 2/3.
If it's not that either, then it's going to be more tricky since I can't reproduce it...
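For reference, in the SAO implementation this shader derives from, the per-pixel rotation angle comes from an integer hash along these lines (reconstructed from the published sample code, not from TDM's exact source; the leading factor 3 is the one mentioned above):

```glsl
// Integer screen-space pixel coordinate of the current fragment.
ivec2 ssC = ivec2(gl_FragCoord.xy);
// Hash from the SAO sample code: mixes x and y with XOR and
// multiplication, then scales the result into an angle.
float randomPatternRotationAngle = float((3 * ssC.x ^ ssC.y + ssC.x * ssC.y)) * 10.0;
```

Because the multiplication overflows for large pixel coordinates, different driver compilers may legitimately produce different noise patterns from this expression, which would be consistent with the vendor-specific artefacts discussed here.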
-
20 minutes ago, lowenz said:
Of course you don't see it, in "Low" mode there's no issue.
The issue is only with "Medium" and "High". You don't see the blocks in the SECOND screenshot?
Take a look at the wall and the rooftops.
Thanks for the tip about the bloom (it's unrelated to SSAO, just asking).
I meant, I don't have the issue and cannot reproduce it, my shot is with ssao level 3. Unfortunately, even with your darkmod.cfg it doesn't happen for me (and yes, I have set r_ssao to 2 and 3, respectively). Are there any other AMD users who can reproduce this?
-
-
What do you mean, have it back? r_showfbo 4 has not been disabled...
-
1 hour ago, duzenko said:
That should explain it
No, multidraws aren't worth it the way I implemented
You need to clip to void to start getting 3% difference
It seems the driver is doing a good job batching draw calls already
Let's see how @cabalistic will do it in 2.08, but generally there is little chance of getting capped by draw calls/driver in 2.08 already
Hm, actually draw calls are very clearly starving the GPU, because the CPU overhead per draw call is way too high. It's just that, with com_smp enabled, this is almost completely hidden by the fact that the frontend takes a very similar amount of CPU time. But since the frontend also has clear potential for improvement (mainly stronger parallelization), optimizing the draw call overhead is definitely worth our time. But yeah, I think I know how to approach this, so let's postpone it until after 2.08.
-
5 hours ago, stgatilov said:
The cost of bloom itself is pretty low, but the cost of HDR is not so evident. It increases the size of the framebuffer color image by 100%, the size of the whole framebuffer by 50%, and the total memory consumed by framebuffers by 33% or 25%. If a particular scene is limited by memory bandwidth or size on particular hardware, it might be noticeable. Don't forget about integrated GPUs, which use ordinary RAM.
Yeah, that's fair enough, although so far I haven't found an instance where HDR actually has a significant overhead. Just for fun, I tested it on my GPD Win 2 with its dual-core M3 and an HD615, and the framerate difference between 32 bit color precision and no bloom to 64 bit color precision with bloom was something like 35 to 32 fps in the beginning of "A new job". Not that that's an exhaustive test or anything, and as you say, there could be scenes that react more strongly. But my impression is that even the low end of the current hardware generation is pretty well optimized to handle floating point buffers.
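The quoted arithmetic can be made concrete under simple assumptions (RGBA8 at 4 bytes/pixel vs. RGBA16F at 8, plus a 4-byte depth/stencil attachment; my illustration, not measured numbers):

```latex
\underbrace{\frac{8}{4} = 2}_{\text{color image: } +100\%}
\qquad
\underbrace{\frac{8 + 4}{4 + 4} = 1.5}_{\text{color + depth: } +50\%}
```

The totals across all framebuffers grow less than 50% because some auxiliary buffers (depth copies, shadow maps, etc.) are unaffected by the color precision setting.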
-
Bloom is a screen-space effect: it has a constant cost independent of the number of lights (or bright spots). That cost is very low; I think it was below 0.5 ms on a mobile GTX 1050 Ti. I don't know how many fake halos you could render in that time (probably quite a few), but compared to the total cost of scene rendering, you are unlikely to notice it, except perhaps on really old hardware.
-
I copied that from the original cubemap reflection shader, no clue about its origin or purpose. To be honest, I had forgotten about that line, or I would have tried to see what it looks like without it.
-
Depending on your needs, you could also look into doctest: https://github.com/onqtam/doctest
It's a single-header testing library for C++, and I found it quite pleasant to work with.
-
On 4/28/2020 at 8:02 PM, joebarnin said:
Nice! How do you actually enable this on a specific light?
In the material for your light source, just add a new blend stage, like this:
{
    blend add
    map textures/models/light_emissive
    rgb 10
}
Use a texture which highlights the emissive parts of your light source (i.e. the areas that shine light), then use the "rgb" factor to scale the light to your desired brightness.
-
You'll have to decide that for yourself. This is what the above screen looks like with bloom disabled:
-
21 minutes ago, nbohr1more said:
No, I tested with the one from the in-game download list. That one does show the issue, but it's also broken in 2.07 for me. It looks like that soft particle has no texture, so it's getting rendered with _default - which is why it's black.
-
7 hours ago, lowenz said:
Water sprays texture issue (Siege Shop)
Hm, looks fine on my end. Can you provide your hardware specs and darkmod.cfg?
-
- Popular Post
The next 2.08 beta update is going to include a new Bloom effect and the ability to render to a floating-point (HDR) buffer. While that may sound very technical, it gives you new options to represent bright light sources in your maps. As an example, look at this screenshot courtesy of @peter_spy:
No particle effects are involved. The lamps have a simple blend add stage with a texture and an 'rgb 20' factor, which means that the color values of the texture are multiplied by 20. They therefore exceed the normal color range and go into HDR territory. And while they are eventually clamped back down to the standard [0,1] range and thus appear white on screen, the new Bloom effect collects these values, blurs them and adds them to the final image - that's what gives the lamps their glow and makes them look bright.
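A minimal sketch of the bright-pass idea (hypothetical names and threshold; not TDM's actual bloom shader): only the over-bright part of the HDR buffer survives this pass, and it is then blurred and added back onto the final image:

```glsl
#version 330

uniform sampler2D u_hdrColor;  // floating-point (e.g. RGBA16F) scene buffer
uniform float u_threshold;     // e.g. 1.0 = displayable white

in vec2 texCoord;
out vec4 fragColor;

void main() {
    vec3 hdr = texture(u_hdrColor, texCoord).rgb;
    // An 'rgb 20' blend stage easily pushes values above 1.0 here, while
    // normally lit surfaces contribute nothing to the bloom.
    vec3 bright = max(hdr - vec3(u_threshold), vec3(0.0));
    fragColor = vec4(bright, 1.0);
}
```

This is also why the effect needs the 64-bit buffer: with 32-bit color precision, everything above 1.0 is clamped away before the bright pass ever sees it.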
So, if you'd like to play with that, with the next 2.08 beta you'll be able to enable Bloom in the advanced settings under experimental features and also set the color precision to 64 bits (float). Hope this will give you some new options to accentuate your lights. Or if you're impatient, you can try this build: https://ci.appveyor.com/api/buildjobs/at3a0wyvy3kgt0ep/artifacts/TheDarkMod.7z
(The downside, of course, is that both Bloom and 64 bit color precision are not going to be on by default in 2.08. If they are off, the lamps will just appear white without glow.)
Btw, aside from that nice glow around over-bright light sources, the higher color precision won't really make much of a difference. But it does help with banding artefacts - fog and similar effects will look noticeably smoother.
-
- Popular Post
Sorry for resurrecting an old thread, but I thought this might interest a few people.
(I accidentally applied the wrong normal map to the floor, so ignore those artefacts you may see.)
This works by means of a new custom shader, which I've attached to this post. I don't know if it'll land in the upcoming 2.08 release, but you could also bundle the shader with your map, and it should even work in 2.07.
How to use it? Well, it's admittedly not super user-friendly. You'll need the regular cubemap capture as before. But in addition you'll also need the world position from where the cubemap capture was taken, and you need to specify an axis-aligned bounding box that approximates the captured geometry in the cubemap. This means that this technique works best in rectangular-shaped areas (like in Epifire's test map from this video), and that rectangular shape should be axis-aligned. You can of course still define an axis-aligned bounding box for non-rectangular or rotated geometry, but it'll probably not look as good.
Since there's currently no support from DR or the game to get those parameters, you'll have to measure those three positions yourself (cubemap capture position and AABB min and max corners), either in DR or in the game with noclip and getviewpos. They don't have to be exact, but it'll look better if they are not totally off.
Now, in your material, you'll need to replace the default cubemap reflection stage, i.e. this part:
{
    blend gl_dst_alpha, gl_one
    cameraCubeMap env/cubetest
    texgen reflect
}
Replace it with the following:
{
    blend gl_dst_alpha, gl_one
    program parallaxCubeReflect
    vertexParm 0 0, 40, 100, 0                 // cubemap capture position
    vertexParm 1 -130, -240, 0, 0              // proxy AABB min
    vertexParm 2 130, 320, 200, 0              // proxy AABB max
    fragmentMap 0 cameraCubeMap env/cubetest   // reflection cube map
    fragmentMap 1 _flat                        // normal map
}
vertexParm 0,1,2 are the cubemap capture position and the AABB min and max corners in world space, respectively. fragmentMap 0 is the actual captured cubemap, and fragmentMap 1 is the normal map for your surface (or '_flat', if you don't want one).
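The core of the technique (commonly called parallax-corrected cubemapping) can be sketched like this; a hedged reconstruction with assumed uniform names, not the shipped parallaxCubeReflect source:

```glsl
// Instead of sampling the cubemap with the raw reflection vector, intersect
// that vector with the proxy AABB and sample in the direction from the
// capture position to the intersection point.
uniform samplerCube u_envCube;   // fragmentMap 0: captured cubemap
uniform vec3 u_capturePos;       // vertexParm 0: cubemap capture position
uniform vec3 u_aabbMin;          // vertexParm 1: proxy AABB min corner
uniform vec3 u_aabbMax;          // vertexParm 2: proxy AABB max corner

vec3 parallaxCorrectedReflection(vec3 worldPos, vec3 reflDir) {
    // Slab-method ray/AABB intersection; assumes worldPos lies inside the box.
    vec3 t1 = (u_aabbMin - worldPos) / reflDir;
    vec3 t2 = (u_aabbMax - worldPos) / reflDir;
    vec3 tFar = max(t1, t2);
    float t = min(min(tFar.x, tFar.y), tFar.z);   // nearest exit of the box
    vec3 hitPos = worldPos + t * reflDir;
    return normalize(hitPos - u_capturePos);
}

// usage: vec3 refl = texture(u_envCube, parallaxCorrectedReflection(P, R)).rgb;
```

This is also why the AABB quality matters: the closer the box hugs the real geometry, the closer the corrected lookup direction matches what the surface would actually reflect.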
-
One thing to note in general: no matter whether you're integrating MP into TDM or starting a new project, it's going to take a lot (I mean, seriously, a lot) of work to get something playable. And judging from the activity in this forum, I think there is a high probability that you'd end up with fairly empty servers - which is the death of any MP game. While I'm sure the idea of an MP thief game is appealing, I'm not sure the player base is there to justify the enormous effort it'd take.
-
I think it might also have to do with the total number of reviews: 96% positive out of 42,079 reviews is more statistically significant than 99% out of 275, and the score may be weighted to express that to a certain extent.
-
Sort of. What you describe is closer to super-sampling, i.e. rendering everything at 16x the resolution and then scaling down the final result. MSAA is a bit smarter and only uses the 16 samples along edges. But still, high resolution plus high AA is a deadly combination for even the most powerful GPUs (not to mention that it uses insane amounts of GPU memory).
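To put a rough number on the memory cost (my back-of-the-envelope figure, assuming a 4K RGBA8 color attachment that stores all 16 samples per pixel):

```latex
3840 \times 2160 \times 4\,\mathrm{B} \times 16 \approx 531\,\mathrm{MB}
```

And that is the color attachment alone, before the depth/stencil attachment and any post-processing buffers are counted.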
So yeah, the immediate issue here is that the settings are way too high and need to be lowered. That being said, I do feel I should warn that the TDM engine is not the most efficient engine out there, at least not for modern-day hardware. The underlying Doom3 engine was written years ago with very different requirements in mind, and the consequence is that TDM does not efficiently make use of modern multi-core CPUs and GPUs. So if you feel that the quality-per-performance ratio you get from TDM on your hardware is not quite up to par with modern AAA games, you are not wrong. We do try to improve things with time, but it'll never quite reach top-tier performance.
Beta Testing 2.08
in The Dark Mod
Posted
They don't do anything in the menu, unfortunately, but they should have a clear effect in-game.