The Dark Mod Forums


While doing some render system refactoring I noticed that projected lights were not rendering correctly in lighting preview mode, and thought it was something I had broken until I went back to earlier revisions and discovered that it has actually been broken for some time (revisions from three years ago still reproduce the problem). Projected lights are something of a "bug trap": I suspect most mappers don't use them or the DR lighting preview very much, so they don't get a lot of testing.

The behaviour I see is that while the rendered light projection obeys the shape of the frustum (i.e. the light_up and light_right vectors), it seems to ignore the direction of the light_target vector and always points the light straight downwards. The length of the target vector has an effect (on the size and brightness of the light outline on the floor), but the direction is ignored.

[Attachment: projected.png]

Note that the problem only applies to the rendered light itself. The frustum outline appears to be correct (as it is handled with different code). I believe the problem is with how the light texture transformation is calculated in Light::updateProjection(), although I can't be sure.

I will make an effort to debug this, although projective texture transformations are near the limit of my mathematical abilities (I can understand the general concept and can probably figure out the process step by step by taking it slowly), so I might have to ask @greebo for assistance if the maths becomes too hard. Another approach is to try to revert the relevant code back to when (I think) it was working correctly after I initially got projected lights working, although that might have been 10 years ago or more, so a straight git bisect isn't practical.

 


@stgatilov

I mentioned this a loooong time ago but I thought it might be relevant here.

It would help if the projection volume could show a transparent representation of the effect of both the 1D and 2D projection images.

With this in place, authors could more easily control where the light falls within the projection volume.

 

Please visit TDM's IndieDB site and help promote the mod:

 

http://www.indiedb.com/mods/the-dark-mod

 

(Yeah, shameless promotion... but traffic is traffic folks...)


I can help you with the maths, too. Just in case.


FM's: Builder Roads, Old Habits, Old Habits Rebuild

WIP's: Several. Although after playing Thief 4 I really wanna make a city mission.

Mapping and Scripting: Apples and Peaches

Sculptris Models and Tutorials: Obsttortes Models

My wiki articles: Obstipedia

Texture Blending in DR: DR ASE Blend Exporter


I think projection matrix for this case is computed in R_ComputeSpotLightProjectionMatrix in tr_lightrun.cpp.

On the other hand, I have no idea if DarkRadiant needs the same matrix and uses the same conventions.


Thanks for all the support and offers of help; I didn't realise this would get such a positive response. I will post my investigations in this thread (partly for my own sanity, partly for future documentation in case this needs to be debugged again, and partly so anyone can jump in in case I'm going completely off the rails).

51 minutes ago, stgatilov said:

I think projection matrix for this case is computed in R_ComputeSpotLightProjectionMatrix in tr_lightrun.cpp.

On the other hand, I have no idea if DarkRadiant needs the same matrix and uses the same conventions.

I'm sure there will be some differences in the matrix DR needs, but this will still be helpful as an aid to understanding the process, so thanks for the pointer.


I recall working on the projected light stuff years ago. It took me quite some time (and I think 300 sheets of paper) to wrap my head around how the projection is actually working. Since knowledge is deteriorating exponentially, I'm sure I forgot 80% of it, but everything is always easier the second time. I must have left some comments for myself too in that code section - I recall the original GtkRadiant code was completely uncommented (it was pretty much a plain copy from some original id code).

So, I'll try to help if I can, just post here.


Don't know if this helps, but when I was using those type of lights I always rotated them directly instead of changing the light_target etc. spawnargs. That always worked for me.

 

EDIT: Just noticed that you were talking about an issue in DR, not TDM. Should've read more carefully. Nevertheless, the preview takes the rotation into account. So maybe it still proves useful on narrowing down the issue.

On 10/9/2020 at 9:39 PM, Obsttorte said:

Don't know if this helps, but when I was using those type of lights I always rotated them directly instead of changing the light_target etc. spawnargs. That always worked for me.

EDIT: Just noticed that you were talking about an issue in DR, not TDM. Should've read more carefully. Nevertheless, the preview takes the rotation into account. So maybe it still proves useful on narrowing down the issue.

That is actually very helpful, thanks.

I just did a test, and rotating the light using the rotation tool works fine, including the render preview. The rotation is not applied to the target vector, but stored in a separate rotation spawnarg which is applied to the texture matrix in a separate step, and this appears to be working fine. This would explain why mappers might not have noticed the problem: since you can change the shape of the light OK, and change its direction by rotating, manual dragging of the light_target vector may be unnecessary.

In summary:

  • The location of the light works correctly.
  • The rotation of the light works correctly.
  • The shape of the frustum (light_up and light_right) appears to work correctly, although DR allows you to drag the points into some pretty weird configurations which may or may not correspond to valid game state.
  • The length of the light_target vector works correctly.
  • The direction of the light_target vector does NOT work (it is always assumed to point downwards, before rotation).

OK, let's start from the beginning (this will be a useful intellectual exercise and perhaps create some useful documentation that might help future development).

A projected light works by mapping points in 3D space into texture coordinates, which are then used to sample a 2D light falloff texture and a separate 1D gradient texture resulting in a final color for each illuminated pixel. Every point within the light volume should be transformed into a triplet of texture coordinates, where the X and Y coordinates (conventionally called S and T in a shader) run from 0.0 to 1.0, and the Z coordinate runs from 0.5 at the origin to 0.0 at the target plane. The other half of the Z range (from 0.5 to 1.0) is actually behind the light origin but we clamp this to 0 to avoid the light projecting backwards as well as forwards.
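The mapping described above can be sketched in code. This is purely a hypothetical illustration (the types and function names are mine, not DarkRadiant's): a 4x4 matrix carries a point into homogeneous texture coordinates, the divide by W produces the S/T pair used to sample the 2D falloff texture, and Z indexes the 1D gradient texture directly.

```cpp
#include <array>

// Illustrative types only; DR uses its own matrix/vector classes.
using Vec4 = std::array<double, 4>;
using Mat4 = std::array<Vec4, 4>; // row-major

// Multiply a homogeneous point by the projection matrix.
Vec4 transform(const Mat4& m, const Vec4& p)
{
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * p[c];
    return out;
}

// S and T sample the 2D falloff texture after the projective divide;
// Z samples the 1D gradient texture directly.
struct TexCoords { double s, t, z; };

TexCoords projectPoint(const Mat4& P, double x, double y, double z)
{
    Vec4 h = transform(P, {x, y, z, 1.0});
    return { h[0] / h[3], h[1] / h[3], h[2] };
}
```

The key point is that the matrix itself is linear; all the "shrinking towards the origin" behaviour comes from the divide by W.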


[Attachment: Projection.png]

The projected light is defined by four vectors:

  • The origin in 3D space (the tip of the pyramid).
  • A light_target vector, labelled t in the diagram, which points from the origin to the center of the target plane, and is considered the light's local -Z axis.
  • A light_right vector, labelled r in the diagram, which goes from the center of the target plane to its rightmost edge, and is considered the light's +X axis.
  • A light_up vector, labelled u in the diagram, which goes from the center of the target plane to its uppermost edge, and is considered the light's +Y axis.

Based on these vectors, and the expected resulting texture coordinates, we can assume certain properties of our projection matrix P which we need to construct:

  • P · (0, 0, 0) = (0, 0, 0.5)
    The origin point must have a Z texture coordinate of 0.5, which is the brightest part of the Z falloff texture.
     
  • For all x and y: P · (x, y, 0) = (0, 0, 0.5)
    Because of the singularity at the light origin point, any point with Z=0 should have X and Y coordinates reduced to 0.
     
  • P · t = (0.5, 0.5, 0)
    The target vector points to the center of the projected backplane, which has texture X/Y coordinates of 0.5 and a Z coordinate of 0 (the darkest point in the Z falloff texture).
     
  • P · (t + r + u) = (1, 1, 0)
    The "top right" corner, reached by travelling along the target vector then along both the right and up vectors, should be at the (1.0, 1.0) edge of the 2D falloff texture.
     
  • P · (t - r - u) = (0, 0, 0)
    Likewise the "bottom left" corner, reached by travelling along the target vector then backwards along the right and up vectors, should be at the (0.0, 0.0) edge of the 2D falloff texture.

A reasonable approach to debugging, therefore, will be to print out the transformations of these vector combinations under our projection matrix P, and see how far off they are from the expected texture coordinates.
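That debugging step might look something like this (a sketch with made-up types; DR's actual matrix class differs, and the affine 3x4 layout here is an assumption):

```cpp
#include <array>
#include <cstdio>

// Illustrative debugging aid: apply the light's projection to the known
// corner vectors and print the result next to the expected texture coords.
using Vec3 = std::array<double, 3>;
using Mat3x4 = std::array<std::array<double, 4>, 3>; // 3 rows, affine

// Rotate/scale by the 3x3 part, then add the translation column.
Vec3 applyAffine(const Mat3x4& P, const Vec3& v)
{
    Vec3 out{};
    for (int r = 0; r < 3; ++r)
        out[r] = P[r][0]*v[0] + P[r][1]*v[1] + P[r][2]*v[2] + P[r][3];
    return out;
}

void checkVector(const Mat3x4& P, const Vec3& v, const Vec3& expected)
{
    Vec3 got = applyAffine(P, v);
    std::printf("P * (%g, %g, %g) = (%g, %g, %g), expected (%g, %g, %g)\n",
                v[0], v[1], v[2], got[0], got[1], got[2],
                expected[0], expected[1], expected[2]);
}
```

Feeding in the origin, t, t + r + u and t - r - u should make it obvious which of the properties above is being violated.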

On 10/12/2020 at 8:48 PM, OrbWeaver said:
  • For all x and y: P · (x, y, 0) = (0, 0, 0.5)
    Because of the singularity at the light origin point, any point with Z=0 should have X and Y coordinates reduced to 0.

Actually I think I'm wrong about this one.

It's not the texture coordinates which are 0 at the origin, it's the image size. The texture coordinates are therefore effectively infinite, achieved not by actually setting the X and Y coordinates to infinity, but by setting the projective (W) texture coordinate to 0, causing the projective texture lookup (X/W, Y/W) to blow up to infinity. I think this means that the W coordinate will vary from 0 at the light origin to 1 at the light_target plane.
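A tiny numeric illustration of this (values made up): if a point a fraction f of the way from the origin to the target plane has homogeneous texture coordinates that scale linearly with f, then the divide by W = f keeps the effective lookup position constant along each ray, while W = 0 at the origin makes the lookup blow up, shrinking the projected image to a point.

```cpp
// Homogeneous texture coordinates before the projective divide.
struct STW { double s, t, w; };

// Effective 2D falloff lookup after the divide: blows up as w -> 0,
// which is how the image shrinks to a point at the light origin.
double effectiveS(const STW& c) { return c.s / c.w; }
double effectiveT(const STW& c) { return c.t / c.w; }

// A point a fraction f (0 < f <= 1) along the ray from the origin towards
// the frustum corner t + r + u: S, T and W all scale linearly with f, so
// the divide keeps the effective lookup pinned at (1, 1) along the ray.
STW cornerRay(double f) { return { f, f, f }; }
```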

In any case the next step will be to add some suitable debugging code to print out the matrix transforms on known vectors like the light_target, and then examine the engine code to see how it constructs the matrix.


I think you are missing the translation vector in your calculations: P · (0, 0, 0) = (0, 0, 0) for any matrix P. So the structure of the transformation is P · x + (0, 0, 0.5) = y, where x denotes the world coordinates and y the texture coordinates.

 

I haven't looked at the DR code, but in the idTech4 code it actually works like that, although this is not visible without stepping through the calculation, as the developers didn't use C++ classes to replicate the mathematical notation. (I once read an interview with John Carmack where he said they shifted from C to C++ during the development of Doom 3 and had to familiarise themselves with the new concepts as they went, so they didn't make use of everything C++ offers.)
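As a sketch of how the two views reconcile: with points written as (x, y, z, 1), a 4x4 homogeneous matrix absorbs the translation in its fourth column, so no separate vector addition is needed. Layout here is illustrative, not taken from either codebase.

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<Vec4, 4>; // row-major

Vec4 mul(const Mat4& m, const Vec4& p)
{
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * p[c];
    return out;
}

// Identity rotation/scale plus a +0.5 offset on Z, stored in column 3:
// this is the "translation vector" hidden inside the matrix, so that
// P * (0, 0, 0, 1) yields (0, 0, 0.5, 1) with no extra addition step.
const Mat4 kZOffset{{
    Vec4{1, 0, 0, 0},
    Vec4{0, 1, 0, 0},
    Vec4{0, 0, 1, 0.5},
    Vec4{0, 0, 0, 1},
}};
```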

3 hours ago, Obsttorte said:

I think you are missing the translation vector in your calculations: P · (0, 0, 0) = (0, 0, 0) for any matrix P. So the structure of the transformation is P · x + (0, 0, 0.5) = y, where x denotes the world coordinates and y the texture coordinates.

I think that's implied though, isn't it? We have (with added W coordinates):

On 10/12/2020 at 8:48 PM, OrbWeaver said:
  • P · (0, 0, 0, 1) = (0, 0, 0.5, 0)
  • P · t = (0.5, 0.5, 0, 1) or P · t = (0.5, 0.5, 1, 1)

So the Z coordinate must go from 0.5 at the origin to either 0 or 1 at the target point (either should look the same because the Z falloff texture is symmetrical, but I guess DR should do the same as what the game does). It is definitely going to need an offset of 0.5, resulting in a formula like:

Z[tex] = 0.5 + 0.5 · (p · t̂) / ‖t‖

where t̂ is the normalised target vector and p is an arbitrary point within the volume whose Z texture coordinate we want to calculate.
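This formula can be checked numerically with a quick sketch (assuming the variant that reaches 1 at the target plane; the symmetric variant reaching 0 would use a minus sign instead):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b)
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Z[tex] = 0.5 + 0.5 * (p . tHat) / |t|
// Gives 0.5 at the light origin and 1.0 at the target plane.
double zTexCoord(const Vec3& p, const Vec3& target)
{
    double len = std::sqrt(dot(target, target));
    Vec3 tHat{ target[0]/len, target[1]/len, target[2]/len };
    return 0.5 + 0.5 * dot(p, tHat) / len;
}
```

For example, with light_target = (0, 0, -64), the origin maps to 0.5 and the point at the center of the target plane maps to 1.0, as required.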


Oops. It's not really convenient to use 4 dimensions to describe a transformation in 3 dimensions, imho, so I tend to expect the transformation matrix to be 3x3. My bad.

 


I think the approach I will take is to assume the D3 code is correct, and try to adapt it as best as possible to fit DR, if changes are needed. So as stgatilov helpfully pointed out, the relevant code is in R_ComputeSpotLightProjectionMatrix().

The first code block is easy to understand, although written in a slightly weird way. I wonder if this is an optimisation because calculating "inverse square root" is quicker than calculating the regular Pythagorean length using square root.

const float targetDistSqr = light->parms.target.LengthSqr();
const float invTargetDist = idMath::InvSqrt(targetDistSqr);
const float targetDist = invTargetDist * targetDistSqr;

So we now have the length of the target vector and its inverse, which we can use to calculate unit vectors; this is then done for all three of the light vectors:

const idVec3 normalizedTarget = light->parms.target * invTargetDist;
const idVec3 normalizedRight = light->parms.right * (0.5f * targetDist / light->parms.right.LengthSqr());
const idVec3 normalizedUp = light->parms.up * (-0.5f * targetDist / light->parms.up.LengthSqr());

The first one is a simple normalisation of the target vector, but the right and up lines I'm a little confused about. Why is the length of the target vector multiplied into the normalised right and up vectors? Won't this mean that the "normalised" right vector will no longer have a fixed unit length (or a length of 0.5 in texture space), but a length of half the target vector? It's not obvious to me why we would want the right and up unit vectors to scale with the target length.
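As a sanity check on the idiom in the first block: multiplying the squared length by its reciprocal square root does yield the ordinary length, since x · (1/√x) = √x, so a single InvSqrt call produces both the distance and its inverse. A sketch, using the standard library in place of idMath::InvSqrt:

```cpp
#include <cmath>

// Stand-in for idMath::InvSqrt (assumption: the engine's version is a fast
// approximation, but the algebra is identical).
double invSqrt(double x) { return 1.0 / std::sqrt(x); }

struct LengthPair { double dist, invDist; };

// Mirrors the idiom in R_ComputeSpotLightProjectionMatrix: one reciprocal
// square root gives both the length (lengthSqr * invSqrt == sqrt) and the
// inverse length used later for normalisation.
LengthPair lengthAndInverse(double lengthSqr)
{
    double inv = invSqrt(lengthSqr);
    return { inv * lengthSqr, inv };
}
```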


In the TDM code, the matrix calculated in R_ComputeSpotLightProjectionMatrix() is then used to calculate the boundaries of the light volume, which are represented in world-size coordinates (but relative to the light origin). So the volume is not clamped to [-0.5...0.5] or anything like that, but is the actual volume as defined in the spawnargs.

4 hours ago, Obsttorte said:

In the TDM code, the matrix calculated in R_ComputeSpotLightProjectionMatrix() is then used to calculate the boundaries of the light volume, which are represented in world-size coordinates (but relative to the light origin). So the volume is not clamped to [-0.5...0.5] or anything like that, but is the actual volume as defined in the spawnargs.

Sorry I don't quite get what you mean.

As I understand it, a matrix can never actually clamp anything, because that would be a non-linear transformation. The purpose of the matrix is to transform the coordinate system so that the volume, which (as you rightly say) is specified in world coordinates like [128, 128, 64], ends up with texture coordinates like [0, 0, 0.5] or [1, 1, 1]. Coordinates outside the light volume would still be transformed in this way, but they would end up with texture coordinates above 1 or less than 0, which would then be clamped to black by the OpenGL edge clamping mode which treats all pixels outside the texture boundaries as black.
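That edge behaviour can be modelled as a black border outside [0, 1] (as with GL_CLAMP_TO_BORDER and a black border colour). A minimal sketch, with the sampler reduced to a 1D function purely for illustration:

```cpp
// Model of the sampler's border handling: the projection matrix never
// clamps anything; coordinates that land outside [0, 1] simply sample
// black, which is what confines the light to its volume.
double sampleFalloff(double coord, double (*texture)(double))
{
    if (coord < 0.0 || coord > 1.0)
        return 0.0; // outside the light volume: black
    return texture(coord);
}
```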

15 hours ago, OrbWeaver said:

As I understand it, a matrix can never actually clamp anything, because that would be a non-linear transformation

That's correct. I meant your assumption that the entries of the transformation matrix are within said range.

The resulting matrix is afterwards used in R_SetLightFrustum(...). You can take a look at R_ComputePointLightProjectionMatrix(...) (much simpler code) and how it is used for the light frustum, for comparison. The plane equations are set up so that the plane's normal vector multiplied by the vector from the light origin to the point we are interested in gives a value between -0.5 and 0.5 if we are within the light volume, at least for point lights.
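A minimal sketch of that point-light test, assuming an axis-aligned box light (the conventions here are mine, not taken from the engine code):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Hypothetical point-light containment test: each axis is scaled by twice
// the light radius so that points inside the volume map into [-0.5, 0.5]
// on every axis, matching the range described above.
bool insidePointLight(const Vec3& pointRelLight, const Vec3& radius)
{
    for (int i = 0; i < 3; ++i) {
        double scaled = pointRelLight[i] / (2.0 * radius[i]);
        if (scaled < -0.5 || scaled > 0.5)
            return false;
    }
    return true;
}
```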

 

I'll have to write down the projection matrix for the projected lights to see how the approach works there, but it seems they are transforming the pyramid that is the light volume so that the base sides equal the height (the length of the target vector). What makes things even more complicated is that the up, right and target vectors don't necessarily have to be orthogonal to each other.


Nope, this is beyond me. Even after trying to incorporate the D3 code into DR, all I have achieved is swapping one set of meaningless numbers for another. I could stare at this code for the next 20 years and still have no idea what it's even trying to do, much less how to fix it.

Projected lights don't work correctly in DR, and they probably never have. I guess most people don't fiddle with light_target vectors very much anyway, but if they do, perhaps stgatilov's "live update" code will allow them to check the result in game without needing to bother with the DR renderer.

7 hours ago, HMart said:

Is this particular sample from the OpenGL 4.0 cookbook useful in any way? Sorry if it's not; I'm a noob at this stuff.

Thanks for the suggestion, but it doesn't really help: that is a very different way of implementing a spotlight, which is conical in shape and implemented entirely with maths in the shader, whereas our spotlights are square(-ish) pyramids defined by three vectors and implemented by using a matrix to map two falloff textures.

The problem is that while I understand the general idea and parts of the mathematical process, it all seems to fall apart when I get down to the nuts and bolts and look at the matrix and vectors themselves. I won't abandon this work entirely but maybe poke at it from time to time in between other tasks; perhaps I'll gain a greater understanding of how the maths works while working on other areas of the renderer.


I'm probably stating the obvious, but when I worked on that topic, the key for me was not to look at the source and target of the transformation, but to figure out the steps in between - like 1) move coordinate system to origin, 2) scale, 3) rotate coordinate system to match up the direction, 4) move it to target location, etc. and then often reverse all these steps to get the actual matrix. Once that is working, one could look into making things more efficient by combining transformations or other tricks, like making use of the affine-ness of the involved transformations.
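The step-by-step composition idea can be sketched like this (4x4 row-major homogeneous matrices, illustrative only): build the overall transformation from elementary translate/scale matrices and multiply them in order, rather than deriving the final matrix in one go. Composing a scale and a translation reproduces the 0.5-at-origin Z mapping discussed earlier in the thread.

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<Vec4, 4>; // row-major

// Compose two transformations: multiply(a, b) applies b first, then a.
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            for (int k = 0; k < 4; ++k)
                out[r][c] += a[r][k] * b[k][c];
    return out;
}

Mat4 translation(double x, double y, double z)
{
    return {{ Vec4{1,0,0,x}, Vec4{0,1,0,y}, Vec4{0,0,1,z}, Vec4{0,0,0,1} }};
}

Mat4 scale(double x, double y, double z)
{
    return {{ Vec4{x,0,0,0}, Vec4{0,y,0,0}, Vec4{0,0,z,0}, Vec4{0,0,0,1} }};
}

Vec4 apply(const Mat4& m, const Vec4& p)
{
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * p[c];
    return out;
}
```

For example, scale(1, 1, -0.5) followed by translation(0, 0, 0.5) sends z = 0 to the Z texture coordinate 0.5 and z = 1 (a unit-length target) to 0, matching the expected falloff mapping.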

One more thing that helps is to use the DR renderer and/or the console to get visualisations of what is happening. As you say, the raw vectors and matrices tend to be unhelpful when it comes to debugging. Sometimes other things, like the DR frontend renderer, are also applying transformations on top of what you come up with, which adds complexity.

