The Dark Mod Forums

Newbie OpenGL Questions


ungoliant


I've been looking for a place to ask lots of dumb questions about OpenGL usage in D3, and how I can tweak it to actually do what I want in material shaders. Maybe you have been too. Well, here's the thread to ask in.

 

First topic: blend. I understand that it uses a source and destination 'layer', the source being multiplied by a source blend, the destination multiplied by a destination blend, and the two results added together to form a result for each pixel.
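 

In pseudocode, the per-pixel operation I'm picturing is something like this (my own sketch, not quoted from anywhere official):

	// per pixel, per colour channel:
	// result = (source * srcBlend) + (destination * dstBlend)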

 

But what are these source and destination layers it speaks of?

Here's an example:

textures/darkmod/misc/junk
{
	nonsolid
	noshadows
	otherglobalstuff

	qer_editorimage	textures/stuff/preview
	{
		blend	gl_dst_color, gl_zero
		map	textures/stuff/image
	}
}

 

The srcblend/destblend are gl_dst_color and gl_zero, OK. But what are the source and dest layers?

 

modwiki says:

source - The source layer is usually the RGB color data in a texture image file. This texture is gathered from the material shader applied to the current polygon. The actual image used in the source layer is generally defined through use of the map keyword.

 

destination - The destination layer is the color data currently existing in the frame buffer.

 

OK, so the source sounds like it's the diffuse map (in this case), but it doesn't have to be. If I left the "map" part blank, would it use the image data from the previous stages (if any)? If there are already images in previous stages, how does this work in conjunction with the new textures/stuff/image diffuse I'm blending right now, along with the frame buffer? That sounds like three layers, not two.

 

And what is this frame buffer? What color data is in it?

 

And these source/dest blends: I totally cannot wrap my head around them. I can almost get it, and then a bunch of stuff about alpha gets thrown in and I get completely lost. Is there a general rule of thumb for each of these blend types that I can use to predict or plan the results of the blend function?


From my understanding, the source is whatever color is mapped to a vertex via a texture (etc.).

 

The frame buffer is a temporary storage space that holds what will be drawn in the next frame.

 

The idea is that each "pass" paints the whole frame buffer with some effect, then the next pass paints over it or combines with it.

 

If you use too many passes, it takes too long to create the final image, and therefore you get low FPS.

 

The vfp code allows you to pull off multiple effects in the same pass using mathematical operations.

 

The confusing concept is that you have to imagine each pass being painted pixel by pixel or quad by quad...

Edited by nbohr1more


I guess you don't need to deal with source and destination a lot. I'd like to add something to what NB said about the destination: the framebuffer stores the rasterized scene, the final image that is to be shown on the screen. The explanation you read is just a little confusing, I think. This is the normal way to go, independent of OpenGL.

 

I do think, however, that NB mixed something up with the explanation of the source. The vertex color has nothing to do with it, and 'the color that is mapped to vertices' is even more misleading. The source is simply the material. The material is then mapped to the UV space of objects/polygons.

 

The different blend methods are explained pretty well on modwiki, I believe (here and here). The source and destination blend values just tell the renderer how much of the texture should be applied to what is already stored in the framebuffer. There are aliases defined for quick usage; one example:

blend add is an alias for blend gl_one, gl_one. From the name of the alias we instantly know what this blend does: the source is multiplied by 1 and added to the destination, which is also multiplied by 1. So it's simply destination = mappedSource + destination.
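 

As a quick sketch of how that looks in a stage (the image path is made up for the example):

	{
		blend	add				// alias for: blend gl_one, gl_one
		map	textures/stuff/glow_image	// hypothetical image
		// result = source * 1 + destination * 1
	}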

 

Here the rendering pipeline of OpenGL is explained [1]. It is, however, not important at all for understanding how the material system in D3 works. It's always nice to know such things, though, so it's a good read. ;-)

 

_______________________________

[1] I actually have a much better explanation of it available in the presentation from a lecture I attended, but it's all German...


OK, I've been playing around a bit, and I think it's starting to make more sense as long as I think of the R, G, and B channels as a composite scale from black to white, where black = 0 and white = 1.

 

blend filter, aka gl_dst_color, gl_zero == gl_zero, gl_src_color ??

Either way, the final result is src_color * dst_color, correct?

 

The way I visualize this is that white portions of the source image are "transparent" (1 * dst_color = dst_color), and as colors get darker they appear more opaque (0 * dst_color = black), so the image appears to 'overlay' on top of the frame. For inverting a grayscale source image filter, I guess gl_one_minus_dst_color, gl_zero != gl_zero, gl_one_minus_src_color?? You must use the latter to get the right effect, like in textures/darkmod/decals/dirt/dripping_slime05.
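 

For reference, here is roughly what I've been testing with (image path made up), with the math written out as comments:

	// blend filter (same as gl_dst_color, gl_zero): result = src * dst
	// white source pixels leave the frame alone, darker pixels darken it
	{
		blend	filter
		map	textures/stuff/grime		// hypothetical grayscale grime image
	}

	// inverted version, like dripping_slime05: result = (1 - src) * dst
	{
		blend	gl_zero, gl_one_minus_src_color
		map	textures/stuff/grime
	}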

 

So if I understand how blend filter/modulate works with RGB, a source pixel will only appear transparent if all three channels = 1 (white). So a source pixel of (0,0,1) would show only the blue channel of the frame, and (0,0,0) makes that pixel of the frame appear black, right?

 

 

 

Regarding blend add: if you add the source image to the destination, it just looks like a really washed-out combination of the frame and source, unless your frame is black (is the addition capped at 1? 0.8 + 0.4 = 1.0?). Seems useless... so let me ask this: does each stage in the material shader get added directly to the frame before the next stage is computed? Is that what each 'pass' that NB referred to is? That would seem to be the only way to make practical use of such a blend mode.

 

Still not ready to touch blend blend and alpha channels yet. Need more experimentation.


I don't know about blend add capping the maximum value, but it would make sense considering everything else is capped at 1.0 too. The rest of your assumptions seem correct to me.

 

blend blend == blend gl_src_alpha, gl_one_minus_src_alpha is the basic alpha blending function. The alpha channel tells the renderer how transparent a texture (or, in this case, a stage) is supposed to be. Every pixel has color and alpha values, so the final value with blend blend is computed by the function destination = alpha * src + (1 - alpha) * dest.
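 

As a sketch, a typical alpha-blended decal stage would look something like this (path made up):

	{
		blend	blend				// alias for: blend gl_src_alpha, gl_one_minus_src_alpha
		map	textures/stuff/decal		// hypothetical image with an alpha channel
		// result = alpha * src + (1 - alpha) * dst
	}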


Performance question. I've been painting the living crap out of my mansion with decals (screenshots in the WIP thread coming soon), and a thought occurred to me: does blend filter, or any blend mode using src or dst (e.g. gl_dst_color), technically count as post-processing and make the material more performance-intensive than your regular blend diffuse/add/gl_one/gl_zero? If so, does that also apply to normal and specular maps, or to diffuse maps that make use of an alpha channel? Are gl_zero and gl_one computed at compile time, and the src/dst blends computed at run time?

 

Basically I just want to know if I should be a bit sparing with decals, or if I should just continue to go hog-wild with it, because I could still fit a couple hundred more just on the mansion proper, let alone the guest house and servants' quarters, and that's not including the ivy overgrowth.



If your FPS is good, go with it.

Dark is the sway that mows like a harvest


From my understanding, a post-process is anything that has to wait for the whole scene to render because it requires color data from more than one surface?

 

Usually this is a vfp applied to the result in the frame buffer.

 

Traditional blends only require per-material passes, which do eat FPS, but not on the order of a post-process that has to wait for ALL blends before it is executed.

Edited by nbohr1more


They won't be marked as post-process unless you explicitly set the sort to that (and even then it will be painted early), or if there's a fragment/vertex shader called from them.

 

If there aren't any shader programs, you 'should' also be able to play with the sort order and try to manually correct it, but you shouldn't have trouble with decal-ish stuff.
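 

E.g. something like this in the global section of the material (name and value just placeholders for illustration):

	textures/darkmod/decals/example		// hypothetical material name
	{
		sort	decal			// or an explicit number; higher values are drawn later
		// blend stages go here
	}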

 

A trick for getting full color depth into a blend stage that uses alpha (i.e. you're not just using add/mod to darken surfaces) and otherwise gives washed-out transparency is to (make a backup and) take the diffuse image, halve the alpha channel and the RGB, and then use a blend stage like this:

 


 

	{
		blend	diffusemap	// brings in macro elements which you can get lit by sources
		map	add(textures/trees/derpy, textures/trees/derpy)	// I'll pretend half precision doubled is ok most of the time
		alphatest	0.9	// cut out the detail you want to blend with a bit more subtlety
	}
	{
		ignoreAlphaTest		// alphablend the details you wanted
		blend	GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA
		map	textures/trees/derpy
		rgb	0.07		// since this stage will not get lighting info, ambient value is usually correct, can also use alpha to control the fade better
	}

 

I'd only play around with it if you're really bored. I think you could also use it to intensify other effects in the same way if they get washed out, like translucent materials.


  • 5 months later...

Quick one, can a blendLight do a diffuse blend?

 


lights/My_Custom_Light1
{
	blendLight

	lightFalloffImage	makeIntensity( textures/lights/map_specific/My_Custom_Light1_Z )

	{
		forceHighQuality
		blend	diffuse
		map	textures/lights/map_specific/My_Custom_Light1_XY
		zeroClamp
	}
}


Another:

 

Is it possible to perform an "empty" blend blend stage (pure transparent) then use the "_scratch" from that stage for the diffuse stage and set that stage to IgnoreAlphaTest in a material?

 


textures/My_Custom_texture1
{
	sort decal

	{
		blend	blend
		map	{some texture}
		Alpha	0.9
	}

	{
		forceHighQuality
		blend	diffuse
		map	_scratch
		IgnoreAlphaTest
		zeroClamp
	}
}

 


Quick one, can a blendLight do a diffuse blend?

 

No. Diffuse maps are an input to the interaction shader; they are not an actual blend mode, even though the "blend" keyword is overloaded for this purpose.

 

You can, however, use any available GL blend mode (including the manually specified ones) with a blend light, not just the add/filter/modulate shortcuts. This allows you to use a blend light for unusual effects like giving a colour cast to an entire area, creating a "dark light", or increasing the contrast of contained textures (e.g. "blend gl_src_color, gl_src_color").
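 

For illustration, a rough sketch of such a light material (name and image path invented, following the structure of your example above):

	lights/example_contrast_light			// hypothetical light material
	{
		blendLight
		{
			forceHighQuality
			blend	gl_src_color, gl_src_color	// the contrast-boosting example above
			map	textures/lights/example_xy	// hypothetical projection image
			zeroClamp
		}
	}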

 

Is it possible to perform an "empty" blend blend stage (pure transparent) then use the "_scratch" from that stage for the diffuse stage and set that stage to IgnoreAlphaTest in a material?

 

I don't understand what you are trying to accomplish here, but as rich said it probably won't work.


Thanks for the answers, Rich and OrbWeaver.

 

In the first example, I was trying to create a blendLight that doesn't have the emissive quality that the normal additive blends do, and that would be light-reactive. It could be used as an alternative to grime decals, in the same way as Rebb's blend filter example works, but (again) light-reactive. It could also be used to achieve an effect similar to vertex-blended textures. Unfortunately, it appears that this won't work.

 

In the second example, I was trying to create a decal that captures the lit texture behind it and renders this as the diffuse stage. This could be used for a lot of the same applications as a render-to-texture and wouldn't require an expensive post-process cycle. I was thinking that mirrorRender could be tricked into something like this as well.

 

But, yes, the second one looks a bit like a paradox because:

 

1) The blend stages rely on Alphatest

2) The Alpha is fully transparent

3) Only the diffuse is light reactive

4) In an Alpha test material the non-alpha portions are what determine the opaque foreground

5) The diffuse is being made from the material in the scratch buffer

6) The diffuse is being asked to be opaque after getting the scratch buffer

 

:blink:

 

 

I will test when I can anyway...


In the first example, I was trying to create a blendLight that doesn't have the emissive quality that the normal additive blends do, and that would be light-reactive. It could be used as an alternative to grime decals, in the same way as Rebb's blend filter example works, but (again) light-reactive. It could also be used to achieve an effect similar to vertex-blended textures. Unfortunately, it appears that this won't work.

 

No, a blendLight is essentially just a decal applied via a 3D light volume rather than a planar surface. It doesn't allow you to sidestep any of the limitations of the blend operation itself; it just gives you an alternative way of applying it to level geometry.

 

In the second example, I was trying to create a decal that captures the lit texture behind it and renders this as the diffuse stage. This could be used for a lot of the same applications as a render-to-texture and wouldn't require an expensive post-process cycle.

 

This is conceptually impossible; any material that "captured" what was behind it so that it could be re-rendered would necessarily have to be implemented via a render-to-texture operation of some kind.


I know that _currentRender is a render-to-texture which is kept as a dynamic memory image.

 

From what I've read, "_scratch" collects everything from the previously rendered stage into a buffer.

 

I was trying to render that buffer as a diffuse stage.

 

Maybe "_scratchRender" ?

 

There must be a use for these things?

 

Dynamic Internal Images:

 

  • _black - A pure black image map.
  • _cubicLight - ???
  • _currentRender - The current rendered image on screen. Used primarily as the input for fragment programs.
  • _default - ???
  • _flat - A flat normal map.
  • _fog - The internal fog image.
  • _noFalloff - ??? Used as the light falloff image for lights with no falloff.
  • _pointLight1 - ???
  • _pointLight2 - ???
  • _pointLight3 - ???
  • _quadratic - ???
  • _scratch - An image buffer. Used primarily when rendering mirrors.
  • _spotlight - ???
  • _white - A pure white image map
  • _screenBlur
  • _scratchRender2
  • _scratchRender
  • _noise1Cur
  • _noise1Nxt
  • _noiseNormalMap

 

Maybe "TexGen" "Normal" would fill the scratch buffer instead of a dummy blend?

 

Edit:

 

Nope...

 

(Thanks Team Blur!)

 

 

texGen normal

Generates texture coordinates for a cube map texture by copying normal vectors.

 

 

Edited by nbohr1more


The primary use for those images is in the coding of custom fragment programs. Some of them, like _black, _white, _flat, etc., are useful in material shaders, but most of them aren't well suited for it.

 

You can see what each image looks like by writing a custom shader like this...

 

textures/internalimage
{
	{
		map	_white
	}
}

 

... where you replace _white with whatever image you would like to view.

Edited by rich_is_bored

Right...

 

Except some of those images hint at much more interesting capabilities than "_white".

 

Since we know that "_scratch" will capture a mirrorRender operation, what else can be dumped into "_scratch"?

 

And how? (other than mirrorRender)?


But you don't control what gets rendered to _scratch or any other internal image. The engine code dictates what each image is used for, and _scratch, as the name implies, is a catch-all that's used for multiple things. It would be a mistake to use it in a material shader, as the result isn't reliable. For example, at 60 seconds into a game it might be used as a buffer for the previous stage, and at 70 seconds it might be used as a buffer for a mirror. I can't imagine any practical application for that.

 

Ideally what you want is for the engine to render a new internal image for your purposes but we can't do that yet.


That is a bit boggling though...

 

Why go through the trouble of creating this accessible buffer structure, yet have the content be too unstable for practical use? From further reading, so far the camera and mirror can be relied on to populate their respective buffers reliably.

 

My premise is that you might be able to put "something" in your material shader that reliably fills "_scratch" every time that shader is processed. I am presuming that the material is re-processed for every call; otherwise, dynamic alpha-sorted images behind the material would not be blended properly?

Edited by nbohr1more


There are a couple of other internal images mentioned in the material handler; I'm just dumping these from a highlighter script of mine:

_ambient
_black
_cubicLight
_currentRender
_default
_flat
_fog
_noFalloff
_pointLight1
_pointLight2
_pointLight3
_quadratic
_scratch
_scratch2
_white

 

And a few from a note I have on my desk, overlaps and whatnot included (I don't think these are accessible by materials, though):

"_default"
"_white"
"_black"
"_borderClamp"
"_flat"
"_ambient"
"_specularTable"
"_specular2DTable"
"_ramp"
"_alphaRamp"
"_alphaNotch"
"_fog"
"_fogEnter"
"_normalCubeMap"
"_noFalloff"
"_quadratic"
"_cinematic"
"_scratch"
"_scratch2"
"_accum"
"_currentRender"

 

There are some other map types as well:

cameraCubeMap
cubeMap
mirrorRenderMap
remoteRenderMap
videomap
soundmap
screen2
glassWarp
xrayRenderMap

 

I have noooooo idea if any of it works or what exactly it might do. These are just scavenged out of the binaries to better deal with some of the more 'strange' materials. So yeah, if anyone has a day and wants to play around, I'd love to know which of them work or do something interesting!

 

Edit: stripped the code tags so you can now reaaaad!


Obviously we know what the color ones do.

 

These two I can only imagine are used in fragment programs to normalize specular, though any "good" specular shader replacement would not use a lookup table...

"_specularTable"

"_specular2DTable"

This is probably our old friend the "Normalization Cubemap" (again only good for a Fragment program).

 

"_normalCubeMap"

 

This one is VERY intriguing:

 

"_accum"

 

 

This one was used to create Prey style portals in Doom 3:

 

remoteRenderMap

 

 

Just some thoughts... I will have to try some of this when I get a chance...


The items in that list without underscores are material shader keywords. I recognize a few of them from this list...

 

http://www.modwiki.net/wiki/Stage_material_keywords_%28Doom_3%29

 

As for internal images, most of them are used in fragment programs. That's the intended purpose. As for why you can use them in material shaders, well let's have a look at how you pass images to fragment programs...

 

	{
		Program		heatHaze.vfp
		vertexParm	0	time * 0, time * 0	// texture scrolling
		vertexParm	1	.5			// magnitude of the distortion
		fragmentProgram	heatHaze.vfp
		fragmentMap	0	_currentRender
		fragmentMap	1	textures/sfx/vp1	// the normal map for distortion
	}

 

Yes, this is a stage from a material shader; note the reference to _currentRender.

