The Dark Mod Forums

TDM Engine Development Page



Fog has a few issues, the indoor/outdoor transition being chief among them. There have been a number of clever tricks to address this, but it should probably have a defined zone behavior like info_location. Beyond that, I think a number of fog problems would be fixed simply by ensuring that it's one of the last things rendered. For example, in low-light conditions with bloom enabled you get horrible banding; the fix would be to render the fog after the bloom post-process. The same would fix the problems with SSAO and fog: just render it on top. Hmm, I wonder if there's a sort number that can already accomplish this, so we can just update the materials.

Edited by nbohr1more

Please visit TDM's IndieDB site and help promote the mod:

 

http://www.indiedb.com/mods/the-dark-mod

 

(Yeah, shameless promotion... but traffic is traffic folks...)


Looks good :)

 

I would have thought it would have a slight impact on fps, but no change it seems. Maybe I need to try this out on something that affects the whole scene, like SSAO, to see any difference?

If it has no effect on SSAO or soft shadows fps-wise, then ouch :S.

 

Also have to take into account that vanilla is hard-locked at 60 fps unless you turn off vsync.

Btw, I've also got AMD graphics: 2x R270X in CrossFire. Cheap cards (at least until the coin miners started buying them all), but pretty good performers too.


Don't you just hate it when...

 

You spend 12 hours googling for methods to figure out what zFar distance your engine is using only to find that your second tactic would have given you the answer if only you'd noticed a certain hard-coded matrix needed transposing?

 

Just wanted to let people know I haven't downed tools. I've just been working on a tricky follow-up. It's all very well being able to draw the depth buffer but for some of the purposes people have been talking about above, we need to be able to compare it precisely with the depth of the next shader pass. You can't rely on the stored depth value alone for that. Precision varies enormously over different distances. At the near extreme, between 3 and 4 doom units from the eye, the value in the 24-bit depth buffer goes up by over 4 million. But by the time you get beyond 7387 units from the eye, surfaces that are a whole doom unit apart will start to z-fight.
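To put numbers on that precision falloff, here's a quick Python sketch. It assumes the standard OpenGL window-depth mapping and the zNear=3 / zFar=10503 constants that turn up later in this thread; it's an illustration, not engine code.

```python
# Sketch (not engine code): how 24-bit depth precision varies with eye
# distance for a standard OpenGL-style projection.
Z_NEAR, Z_FAR = 3.0, 10503.0
DEPTH_STEPS = (1 << 24) - 1  # distinct values in a 24-bit depth buffer

def window_depth(z):
    """Map eye-space distance z to the [0,1] window depth the buffer stores."""
    return Z_FAR * (z - Z_NEAR) / (z * (Z_FAR - Z_NEAR))

def buffer_ticks(z1, z2):
    """How many 24-bit buffer values separate two eye-space distances."""
    return (window_depth(z2) - window_depth(z1)) * DEPTH_STEPS

print(buffer_ticks(3.0, 4.0))        # ~4.2 million values between 3 and 4 units
print(buffer_ticks(7387.0, 7388.0))  # < 1 value: surfaces a unit apart z-fight
```

Run with those constants, the first doom unit of depth eats about a quarter of the whole buffer, while at ~7400 units adjacent surfaces can no longer be told apart, matching the figures above.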

 

It was tricky because the engine doesn't use an explicit zFar value. It specifies the distances using a quirky projection matrix that's a hybrid between having a finite zFar and having an infinite zFar. The standard formulae for calculating zFar from the matrix give you two answers: in our case those two answers are 5997 and infinity. None of the 3000 sites I've searched handle that.
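For illustration, here's a sketch of why that recovery is ambiguous. The matrix slots below are the textbook OpenGL ones for a finite and an infinite-far projection, not the engine's actual hybrid matrix:

```python
# Sketch: recovering zFar from the projection matrix entries C = m[2][2]
# and D = m[2][3] (standard OpenGL layout, not the engine's actual matrix).
def proj_slots(n, f):
    """C and D entries of a standard finite-zFar OpenGL projection."""
    return -(f + n) / (f - n), -2.0 * f * n / (f - n)

def infinite_slots(n):
    """Same entries when zFar -> infinity (the Doom 3 style matrix)."""
    return -1.0, -2.0 * n

def recover_far(C, D):
    """Standard recovery formula; degenerates when C == -1 (infinite zFar)."""
    return D / (C + 1.0) if C != -1.0 else float('inf')

C, D = proj_slots(3.0, 5997.0)
print(recover_far(C, D))                  # recovers 5997 for a finite matrix
print(recover_far(*infinite_slots(3.0)))  # infinity for the infinite matrix
```

A hybrid matrix that mixes the two conventions gives you both answers at once depending on which slot you trust, which is presumably how 5997 and infinity both fell out.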

 

Anyway I've got it laid bare in a spreadsheet now and I just need to find a single-figure approximation that fits the curve well out to a distance of several thousand units, then I can get on with trying a real use case :) And on the plus side, I've learned a crapload about depth handling and projection matrices, which can only come in handy.

 

 

 

EDIT: *drumroll* Our zNear and zFar can be constants after all. 3.0 and 10503.0 are the winners, accurate to within 1/5000th of a doom unit all the way up to 40k units distance.

 

After all that, it turns out that zFar just doesn't matter very much, provided it's a lot bigger than zNear and that you stick with whatever you choose. Every number from 500 to infinity seems to produce a consistent result to within 1% once you've converted the result back to linear coordinates. We could have plucked any old number out of thin air without any research and got as good a result on the first try. We should probably choose a lower constant than my "optimal" 10503 because that results in quite a dark image for anyone wanting to display the depth render, and no-one's going to notice the 1000th of a doom unit that might get shaved off a fake shadow if we choose a smaller, brighter number.
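A quick sketch of that consistency property (illustrative only, using the standard depth mapping): as long as the encode and decode share the same constants, the round trip recovers the distance no matter which zFar you pick.

```python
# Sketch: the exact zFar constant barely matters so long as both sides of
# the conversion use the same one. zNear=3 as quoted above.
def encode(z, n, f):
    """Eye distance -> [0,1] window depth for a given (zNear, zFar)."""
    return f * (z - n) / (z * (f - n))

def decode(d, n, f):
    """Window depth -> eye distance, inverting encode for the same constants."""
    return f * n / (f - d * (f - n))

for f in (500.0, 10503.0, 1e9):  # wildly different zFar choices
    z = decode(encode(100.0, 3.0, f), 3.0, f)
    print(z)  # ~100 for every choice: the round trip is self-consistent
```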

 

Just a quick question: http://www.renderguild.com/gpuguide.pdf specifies state.depth.range to contain the near, far and range (far-near) - are these values unusable, or different, or actually the ones you want to use?

 

Kudos for figuring it all out, btw, love that :wub:

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax


New video below.

 

Slick!

 

Tels, feel free to optimize this for me as we're getting close to proper uses :)

 

I'm honored that you think I can :)

 

What caught my eye was that you do all the calculations in tmp, then as the last step move them to the result_color. Is it possible to do away with tmp (and does that even make a difference? my experience with ARB is quite limited, so I'm not sure if the compiler will remove it anyway. But if not, it might save one MOV).

 

As for the rest of the code, it might help to spell out exactly what it tries to compute. Maybe we can either combine statements, or remove a few. F.i. in a few places zNear is added - but if this is a constant, it will be always added. Maybe this can somehow be skipped?

 

Anyway, outstanding work!

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

Link to post
Share on other sites

The issue was that global ambient lights turn the map into a big, dimly lit world with no variation. The ambient light value can vary between locations, but that's not often used, as far as I know. Still, you always get a single light level across the entire world/location. The only way to have truly dark areas is to disable the ambient light completely. The feature that was being discussed would be a way of creating a radius around the player beyond which detail starts to fade. A thin black fog would create this effect: everything that is distant would become darker and less detailed. I would do that in my mission, but the performance impact was heavy, and I was also using parallel lights outside (the moonlight). So Tels, and I think also Obs, were experimenting with attaching a big dim light to the player, which would de facto make an ambient light redundant (and thus eliminate it from the map). That approach would however have to be studied, because AI also use the light level set by the ambient to notice things. Anyway, this is a short summary. Your depth rendering looks promising because it might help with this. But it's a secondary feature, I guess, since apparently you can in fact use fogs, within reason.

 

I'm using a combination: ambient light (per location), which suffers from the "stand in location A, look over into location B, and see that B is now shaded using the light color from A" problem. However, with a few transitional areas and not too much difference, the "fade one global ambient light" approach works quite well.

 

In addition to that, I bound a dim blend light to the player, which lights up the player's surroundings, simulating the world going "dark" in the distance. That gives the scene depth.

 

In addition, there is also a fog light which fades between locations and is switched on and off, so you can have foggy locations and if you enter a different location, the fog disappears.

 

F.i. if the player comes in from outside and enters a house, the fog dims and then switches off while in the hallway; by the time the player makes it to the main throne room, the fog is gone. As with the ambient fade, this needs transitional areas to hide the transition better (it also works without them, but if you know what to look for, you can still see it).

 

I can provide a demo map, if someone really wants it.

 

 

However, a "global" fog would stil be really cool, as I have addd

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

Link to post
Share on other sites

Yeah, the player light looks really good and seems to approximate some sort of natural depth attenuation effect for how much light reaches you. One thing I also notice when strolling in the dark is that actual feature detail becomes harder to distinguish the darker the area. I hesitate to call it blur; perhaps it's more like how you lose subtle details when you crank up the contrast levels (which is how you'd approximate it, I reckon: only preserve details for very high contrast changes and average out other areas, with some Gouraud shading between averaged areas, etc.). Hmm, I guess that could also be an optimization, because you could throw out normal map processing in these "low contrast, low detail" regions.

 

A TDM demo map would be cool Tels. :)


Just a quick question: http://www.rendergui...om/gpuguide.pdf specifies state.depth.range to contain the near, far and range (far-near) - are these values unusable, or different, or actually the ones you want to use?

 

They're different, unfortunately. They hold the range that the distances get mapped to in view space, not the original distances -- so always -1, 1 unless you tell it to use a narrower range.

 

I'm honored that you think I can :)

 

What caught my eye was that you do all the calculations in tmp, then as the last step move them to the result_color. Is it possible to do away with tmp (and does that even make a difference? my experience with ARB is quite limited, so I'm not sure if the compiler will remove it anyway. But if not, it might save one MOV).

 

Another good point but we do have to use a temp variable of some kind. result.color is write-only, so it's no use for storing intermediate results.

 

As for the rest of the code, it might help to spell out exactly what it tries to compute. Maybe we can either combine statements, or remove a few. F.i. in a few places zNear is added - but if this is a constant, it will be always added. Maybe this can somehow be skipped?

 

Anyway, outstanding work!

 

Thanks! Yes I think we can definitely get rid of some of the calculations. My brother took a look at it last night and said we only need 2 operations to solve that formula for a constant zNear and zFar. One reciprocal and one multiplication-addition by constants tweaked for our constant distances. And we might even be able to get away without linearizing the distance at all. You need to do that to get a nice color scale when drawing the zBuffer, but for soft particle shading it's enough to know when the particle is "quite close" to the geometry and which side it's on. We can probably find that out to a good enough precision by comparing the original depth values without doing any divisions.
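For the record, here's a sketch of that two-op version, using the standard depth mapping and the zNear=3 / zFar=10503 constants above. A and B would be baked in as precomputed program constants, leaving one multiply-add and one reciprocal in the shader:

```python
# Sketch: with constant zNear/zFar, eye distance is 1 / (A*d + B),
# i.e. one MAD plus one RCP. Illustration, not the actual shader.
Z_NEAR, Z_FAR = 3.0, 10503.0
A = -(Z_FAR - Z_NEAR) / (Z_FAR * Z_NEAR)  # precomputed constant
B = 1.0 / Z_NEAR                          # precomputed constant

def linearize(d):
    """MAD then RCP: window depth d in [0,1] -> eye-space distance."""
    return 1.0 / (A * d + B)

def window_depth(z):
    """The forward mapping, used here only to check the inverse."""
    return Z_FAR * (z - Z_NEAR) / (z * (Z_FAR - Z_NEAR))

print(linearize(window_depth(250.0)))  # ~250: two ops recover the distance
```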

 

I'm going to work on optimising the shader well beyond where it's needed, cos I want the learning :)

 

On fog, binding a fog to the player seems like it's pretty much the same as having a global fog stretched over the entire map. In both cases, it's a light the player can always see and you fade it out in your transition zones. I do like the bound blendlight suggestion for adding depth though. Very nice idea.


Would this depth info also allow us to solve that infamous water refraction problem that distorts the player's weapons?


I think that's another sort order problem. We need something analogous to _currentRender that grabs everything rendered behind x-distance. The z-buffer would help but we'd also need the frame buffer?


Thanks guys, and sorry about the quality. I couldn't see the vid at all on my work laptop! Must remember to use lights, not film in the dark and try to boost it :)

 

Would this depth info also allow us to solve that infamous water refraction problem that distorts the player's weapons?

I think that's another sort order problem. We need something analogous to _currentRender that grabs everything rendered behind x-distance. The z-buffer would help but we'd also need the frame buffer?

The thing is, anything behind the weapon wouldn't have been drawn in the first place. But it's worth a shot I think. Just had a peek and the water distortion is a post-process effect, so a distortion applied over the fully rendered image, water, weapon and all. It distorts that picture. I think the problem is when a bit of water that's just to the left of the weapon shifts left, it can copy some weapon pixels over to the left too. I could try blocking that and leaving the original pixel in place where it tries to take a sample from a spot closer than the water surface. It might make an overly flat water border just round the weapon instead, but worth trying.
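As a sketch of that blocking idea (function and variable names here are hypothetical, not the actual heat-haze shader): reject the distorted sample whenever its source pixel is nearer the eye than the water surface, and keep the undistorted pixel instead.

```python
# Sketch of the proposed fix: if the distorted sample would come from a
# pixel in front of the water surface (e.g. the weapon), fall back to the
# original, undistorted pixel. Names are illustrative.
def refract_sample(scene, depth, x, y, dx, dy, water_depth):
    """Pick the distorted colour only if its source lies behind the water."""
    sx, sy = x + dx, y + dy
    if depth[sy][sx] < water_depth:   # sample is in front of the surface
        return scene[y][x]            # keep the original, undistorted pixel
    return scene[sy][sx]              # safe to use the distorted sample

# Toy 1x2 scene: pixel 0 is background seen through water, pixel 1 is the weapon.
scene = [["water", "weapon"]]
depth = [[0.9, 0.2]]                  # weapon is much nearer than the water
print(refract_sample(scene, depth, 0, 0, 1, 0, water_depth=0.5))  # "water"
```

The cost, as noted, is a flatter-looking water border just around the weapon, since those pixels lose their distortion.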


Heh, there was actually a similar bug that was patched by copying some pixels in Quake, although in Quake it was BSP box models that showed a few pixels at the edge of the models as all white. Oo I find it rather funny that something similar also happens in vanilla.

 

Oh, and it's probably a good idea getting FBOs into vanilla as well; it should make it a bit easier to implement more modern code.


Oh, and it's probably a good idea getting FBOs into vanilla as well; it should make it a bit easier to implement more modern code.

Do you happen to have any handy links for how that might be done?

 

I'm having a mess round with the water shader. I thought at first that we might not even need _currentDepth to fix it, because it's a PP effect and it already picks up the depth value from the sampled pixel. So we could compare that with fragment.position.z. I tried tinting the water surface red or blue according to whether the depth value of the sampled pixel was above or below the water surface, but it's always showing blue (deeper) even where it picks up a weapon pixel, so I'm not sure the sample is actually picking up anything useful. That value doesn't get used, just passed through. The weapon does show up on _currentDepth ok.

 

No, I'm an idiot. I'm comparing depth with the blue channel from _currentRender because this shader uses xyzw instead of rgba even when handling colours. Try again...


I think nbohr1more has a link to a heavily modified vanilla source from Justin Marshall with FBO support.

It's a bit of a mess to read, unfortunately, and I never managed to get it to compile because Justin used managed extensions from MSVC, but the code helps a bit. If I find anything else I'll let you know :)


@SteveL

 

That fix is quite a bit better than what we have now. I'll take it :)

 

@revelator: Yeah, here's the jmarshall link, though it's quite a revision compared to vanilla as I recall:

 

https://bmgame.googlecode.com/svn/


Hmm... I'll bet if you sampled the boundary pixels outward 10 or so pixels from the center of the occluder (essentially mirroring in 2d) it would look good enough as a substrate to continue the distortion. Again, this is already a massive leap in quality as is.

 

Edit:

 

On consideration, that would also risk picking up artifacts from other nearby occluders. The only other option would be to have the shader keep sampling the silhouette pixels and produce a smear substrate to distort. Probably would be a negligible improvement if any.

Edited by nbohr1more


Sometimes even a small pebble can cause a great avalanche of progress: a feature frequently seen in science.

 

Good job, everyone, Steve especially. Looks like there is a lot of nice stuff coming along the next few TDM updates!

Clipper

-The mapper's best friend.


Thanks all. @revelator: I'll list you as co-author of the depth enablement when I commit it, hopefully later this week after I've tested whether the precision of the capture is affected by the different image initialisation options. That's the last thing on my to-do list for the base feature.

 

Hmm... I'll bet if you sampled the boundary pixels outward 10 or so pixels from the center of the occluder (essentially mirroring in 2d) it would look good enough as a substrate to continue the distortion. Again, this is already a massive leap in quality as is. Edit: On consideration, that would also risk picking up artifacts from other nearby occluders. The only other option would be to have the shader keep sampling the silhouette pixels and produce a smear substrate to distort. Probably would be a negligible improvement if any.

 

I do have one idea for trying to make it blend better. The heathaze shader takes samples from 3 different positions for the red, green, and blue pixel components. In the test above I blocked the distortion if any of those samples came from a foreground object. But perhaps I can selectively block the distortion only for individual colors that come from the wrong place. That might look weird or it might fix it completely :)
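A sketch of that per-channel variant (again with illustrative names, not the real shader code): since each colour channel samples from its own offset, only the channels whose sample lands on a foreground object need to fall back to the undistorted pixel.

```python
# Sketch: block the distortion channel by channel instead of rejecting the
# whole pixel. Hypothetical names; the real heat-haze shader differs.
def haze_rgb(scene, depth, x, y, offsets, water_depth):
    """Build the output colour one channel at a time, one offset per channel."""
    out = []
    for ch, (dx, dy) in enumerate(offsets):
        sx, sy = x + dx, y + dy
        if depth[sy][sx] < water_depth:       # this sample hit foreground
            out.append(scene[y][x][ch])       # undistorted channel
        else:
            out.append(scene[sy][sx][ch])     # distorted channel
    return tuple(out)

scene = [[(10, 10, 10), (200, 200, 200)]]     # background, bright weapon
depth = [[0.9, 0.2]]
# Red and blue offsets stay on the background; green strays onto the weapon.
print(haze_rgb(scene, depth, 0, 0, [(0, 0), (1, 0), (0, 0)], 0.5))
# -> (10, 10, 10): only the offending green sample was blocked
```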

