Moonbo Posted March 25

I read the release notes for 2.13 and was really impressed with the new AI lighting detection. I'm really curious how exactly it works: the notes mention stochastic sampling of light pixels, but what does that mean in practice (and why couldn't it be done before)? Are the light values for pixels on objects recorded stochastically by their shaders and then sent to the CPU via a compute shader? If so, does that mean values are only updated while they're being rendered on screen, i.e. when the player looks at them? So many questions!

But you should walk having internal dignity. Be a wonderful person who can dance pleasantly to the rhythm of the universe. -Sun Myung Moon
My work blog: gfleisher.blogspot.com
nbohr1more Posted March 25

Start here: All "potentially visible" (from the AI's point of view) entities run the full render pipeline for a random group of nearby pixels. Light and shadow casting is interleaved, with the results added to a weighted running average. Before we had the Doom 3 source, there was no good way to sample the light textures in parallel for better performance.

Please visit TDM's IndieDB site and help promote the mod: http://www.indiedb.com/mods/the-dark-mod (Yeah, shameless promotion... but traffic is traffic, folks...)
Moonbo Posted March 25

Wow, so it's possible to run the full render pipeline for a specific pixel even if it's offscreen? The thread you linked also mentions casting shadow rays. Do you know where in the source code this is all done? It might just be easier to look at it.
nbohr1more Posted March 25

Source code commits:
https://github.com/stgatilov/darkmod_src/commit/2d1ee8aef1102e0333d0e002e190d782660ed7fc
https://github.com/stgatilov/darkmod_src/commit/196163d1a1c43ad668cbea4324d8627611733404
https://github.com/stgatilov/darkmod_src/commit/e96cfcc696d3c847936026a708456fb763bd1423
https://github.com/stgatilov/darkmod_src/commit/d825de73708ad4c9ed66b826fd689b4756d7ac30
https://github.com/stgatilov/darkmod_src/commit/ad15878f73c626a6243e796beaff63a9408f5920
https://github.com/stgatilov/darkmod_src/commit/02ab4916bfba0cb267f16db5933950d9a6d1ea06
https://github.com/stgatilov/darkmod_src/commit/7893569ca5e14610380add3a7d953dcdd34b27e6
Moonbo Posted March 25

Thanks!
stgatilov Posted March 25

5 hours ago, nbohr1more said:
"All 'potentially visible' (from the AI's point of view) entities run the full render pipeline for a random group of nearby pixels. Light and shadow casting is interleaved, with the results added to a weighted running average. Before we had the Doom 3 source, there was no good way to sample the light textures in parallel for better performance."

I think this is not exactly how it happens, because the algorithm is CPU-only, and it definitely works for invisible areas (e.g. guards detect bodies even if the player is inside a locked room). But issue 6546 and the aforementioned commit messages are a good place to start.

Basically, the engine can now sample light intensity at any point at any moment, without any use of a graphics API. It computes all the shaderparms, conditions, and settings; it uses raycasting to take hard shadows into account; and it samples the light textures on the CPU to take projection/falloff into account. Luckily, we only need to consider lights; we never touch surfaces. Computing surface properties on the CPU would be infeasible.

The procedure is cheap for a single point, but very expensive for every screen pixel / mesh vertex. So, just as you said, the meshes are sampled randomly, with the samples distributed over a short time, and some kind of average is computed. This spreading over time is unfortunately necessary to make the results both fast and reliable. And this is the ugly part: if you request a light value and use it in the same frame, you'll just get zero. You need to continuously request the light value of an entity in order to get a meaningful value soon after you start.

In principle, we could use this system to implement a CPU-only lightgem (aka the "weak lightgem" of the past). But its values will never match perfectly with those of the current GPU lightgem, and I have a feeling the switch might be traumatic due to the behavior change.
Moonbo Posted March 25

So you're basically re-creating the lighting model on the CPU and then feeding in the necessary data to reproduce what the lighting for that point would look like if it had been handled by the GPU. That sounds like an impressive amount of work. How did you go about it? Did you read the shader code and reconstruct it using CPU instructions? Regardless, really neat job.
stgatilov Posted March 26

3 hours ago, Moonbo said:
"How did you go about it? Did you read the shader code and reconstruct it using CPU instructions?"

Most of the shader stuff is about surfaces. For lights, it's only glprogs/tdm_lightproject.glsl, I think. And yes, that chunk of code is now duplicated in the C++ code.