The Dark Mod Forums

stgatilov

Active Developer
Everything posted by stgatilov

  1. Could you make it work even when postprocessing is enabled?
  2. Put the patch into the root directory of darkmod_src. Then right-click it and choose TortoiseSVN -> Apply Patch. Aside from the window that you show, you should see another small window on top of it with the list of changed files. Click "Patch all items" there.
  3. Speaking of stencil shadows. Shadow volumes of the worldspawn are loaded from the .proc file ("prelight" models); that's why they don't update without dmap. This is an optimization that you can turn off by setting the cvar r_useOptimizedShadows 0. Better to do reloadModels after you change this cvar. With this, stencil shadows should recompute without dmap.
  4. Changing materials on brushes/patches and on models are two very different things. For models, it should already work by saving the new model and doing reloadModels. If you edit a material, then you do reloadDecls, and I guess you should see the updated material after that. Correct me if I am wrong here.
For brushes and patches it is harder, because 1) dmap reads them from the .map file, compiles them into surfaces, and writes them into the .proc file, and 2) idRenderWorld loads the whole .proc file and saves these surfaces into "local models", after which it immediately creates an idRenderEntity for every _areaN model (i.e. every area of the worldspawn). I'm not sure any reload mechanism is present here, so doing an honest reload is hard. But if you are willing to point at the surface, then indeed it is easy to find it and change its material. There is already a command r_showSurfaceInfo, which finds the surface under the cursor and displays its name. I guess I can add the surface index to this command. After that, adding a command to override the material of this surface should be easy. I would suggest a name like r_surfaceMaterialOverride or r_materialOverrideSurface, since you only change the material, but not the surface itself. By the way, do you want material changes to persist in the current game? If yes, then this must be a console command instead of a cvar. I'm afraid I don't understand your idea about a reloadSurface command: in my opinion one added command should be enough.
UPDATE: Ok, now I have found that reloadSurface is an already existing command.
UPDATE: Committed changes to idRenderWorld::Trace. Most importantly, it now returns the surface index, so replacing the material should be easy. Note that reloading texture coordinates is a big problem, since texture coordinates are stored in the geometry itself. Aside from some functional changes like "multiply all by (Cu,Cv)" or "add (Du,Dv)", nothing else can be done easily.
Generally speaking, I was thinking about this problem. We have reduced the reloading problem on the programming side by enabling "Edit and Continue" in Visual C++. It does not always work, but it is still a big help and time saver. I was thinking of incremental dmap compilation: it's not clear yet how to do it. Sounds like a challenging task. Everything that changes the worldspawn makes things complicated. Simple things like respawning changed entities are indeed possible (in fact, there is even some code for it, since the builtin Radiant did it). Saving locations of movables during respawn is possible. Changing AI patrol paths is much harder, but probably possible too. It is even possible to make such changes in DR propagate immediately to the game: the recently added automation protocol can be used to send commands from DR to the game.
  5. I'm not sure how taxing it is for performance. The intent was to add one render from FBO to FBO, but it might be that more work is done. It does not work when postprocessing is enabled. If it is not hard to run the tonemap shader after postprocessing, better to do it. I'm not sure about multisampling and nontrivial render scale, although I probably tested them. One uncertain thing is that with the shader, gamma/brightness do not affect the main menu. For me that's OK; I guess we can live with it unless we hear other opinions.
  6. Ok, it seems that with patchDef3 the engine treats the incoming points matrix as control points of a quadratic B-spline. Points [0-2] x [0-2] define a quadratic Bezier surface, points [2-4] x [0-2] define another quadratic Bezier surface, points [4-6] x [0-2] define a third Bezier surface; in the second row, points [0-2] x [2-4] also define a Bezier surface, etc. So if you need B-spline knots, they always look like "0 0 0 1 1 2 2 3 3 4 4 4" (both by U and V). If the number of points in a row/col is even, then the last row/col is ignored. When a patchDef3 is loaded, the code evaluates a uniform grid of points on this spline. If subdivision levels are set to P and Q, then every Bezier patch gets replaced with a (P+1) x (Q+1) grid of points. Of course, adjacent patch grids share one row/col of evaluated points. In fact, all points with UV-parameters u = i/P and v = j/Q are evaluated. So it turns out that if I create a quad as patchDef3 with 3 x 3 points and 2 x 2 subdivision levels, then this "subdivision" does not change it. The weird things that I see are due to dmap, I suppose.
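The evaluation scheme described above (a 3 x 3 block of control points treated as a quadratic Bezier patch, sampled at u = i/P, v = j/Q) can be sketched as follows. This is a standalone illustration, not engine code; the function names are made up here:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Quadratic Bezier basis: B0 = (1-t)^2, B1 = 2t(1-t), B2 = t^2.
static Vec3 evalQuadBezier(const Vec3 c[3][3], double u, double v) {
    double bu[3] = { (1 - u) * (1 - u), 2 * u * (1 - u), u * u };
    double bv[3] = { (1 - v) * (1 - v), 2 * v * (1 - v), v * v };
    Vec3 p = { 0, 0, 0 };
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            double w = bu[i] * bv[j];   // i runs along U, j along V
            p.x += w * c[i][j].x;
            p.y += w * c[i][j].y;
            p.z += w * c[i][j].z;
        }
    return p;
}

// Evaluate a uniform (P+1) x (Q+1) grid over one 3x3 Bezier patch,
// sampling at u = i/P, v = j/Q, as the post describes.
std::vector<Vec3> evalPatchGrid(const Vec3 c[3][3], int P, int Q) {
    std::vector<Vec3> grid;
    for (int i = 0; i <= P; i++)
        for (int j = 0; j <= Q; j++)
            grid.push_back(evalQuadBezier(c, double(i) / P, double(j) / Q));
    return grid;
}
```

Note that for a flat control net (control points laid out linearly, as in a trivial quad patch), the quadratic Bezier reproduces the plane exactly, which matches the observation that 2 x 2 subdivision does not change a flat 3 x 3 quad.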
  7. I wonder what's the state of patches, i.e. DR vs Doom 3. I have stumbled upon this while trying to make a trivial quad patch, i.e. just a planar quad with linear texcoords s = 0-1 and t = 0-1. For some reason, dmap always produces weird meshes instead of two triangles. These weird meshes overlap themselves, which leads to texcoords also overlapping, and this is a big no-no for me. I have converted my quad patches to patchDef3, hoping that I would have more control over it, but it does not help. Is there any way to ensure that a quadratic patch gets converted to a simple quad? Is there a way to just draw a trivial quad?
  8. I think yes. They are intended for display in ordinary image viewers, so they are in gamma space. Imagine that you take this texture and draw it with full white lighting. If you treat the texture as gamma-space and the FBO is in gamma-space, then your texture data will go into the FBO "as is", and will be displayed just like ordinary viewers display it. Looks like something intended.
  9. I don't see anything unexpected. Everything gets brighter, since you effectively apply gamma conversion on the final output, which is a bit stronger than what setting r_gamma 2 in TDM gives. Of course, this is too much, so everything becomes very bright. When such gamma correction is done, colors become weaker and can also change slightly. What did you expect? The best thing that you can achieve with SRGB is that intensities are added/blended in a more physically correct way. To achieve it, you have to mark all textures as SRGB, and all FBOs as SRGB. Then the output should be more or less the same in terms of brightness. You cannot control the gamma level with SRGB the way tone mapping or monitor tables allow it.
  10. Attached is a patch for moving gamma/brightness changes to a shader. I'm afraid I cannot apply it to all cases (the r_postprocess 2 case does not seem to work right now). I have hardly managed to get it working without postprocessing... One side effect is that the main menu is not affected by gamma/brightness changes. Not sure if that is good or bad. P.S. As for dithering, it can help only if you apply it when you draw the thing which shows color banding, e.g. when you draw sky and clouds. In the final pass, it is useless. gammashader.patch
  11. Well, I guess when I said "default FBO is gamma space", I meant that you must put gamma-space values there, since they are treated as such by the monitor. I did not mean that OpenGL automatically ensures that it gets gamma-space data, although I think there should be a way to achieve it. I think the main explanation is that OpenGL thinks the default FBO is RGB since you did not tell it otherwise, and OpenGL does not do any conversions without you telling it to do them. The FBO which you are blitting from contains gamma-space values, and you told OpenGL it is SRGB (by setting the image format and enabling GL_FRAMEBUFFER_SRGB), so it auto-converts its data from gamma space to linear space during blitting. But you did not tell OpenGL that the default FBO is SRGB, so it assumes it is linear and does not do any conversion. Looking at https://stackoverflow.com/a/46117749/556899, you have to pass some flags to WGL to ensure that the default FBO is SRGB-capable, then enable GL_FRAMEBUFFER_SRGB. Did you do both? Same as above: you did not tell OpenGL that the default FBO is gamma space, so it assumes it is linear (while it clearly is not) and does not do any conversion.
  12. The cvars will work as usual. One thing I also want to try is to optionally add clever dithering to make color banding unnoticeable. Not a good thing for TDM with its visual style, but perhaps some players would prefer it to obvious color banding.
I'm afraid there is some sort of misunderstanding here. The "linear space" is where we can sum colors with mathematical addition, and that works like how light intensities add in the real world. It is also called RGB, but this is even more confusing, because when people hear RGB, they think it is what was always around, which is exactly the opposite. The "gamma space" is also called SRGB; this is the representation which monitors expect from rendering output (omitting some minor details like different gamma values in different monitors). SRGB is what has been all around us since the beginning. Almost all image files are stored in SRGB space; they look correct when you open them in an ordinary image editor, since it just sends this data to the monitor. Textures of a modern gamma-correct game may also be represented in linear space: in this case, opening them in something stupid like Paint would show them as too dark. Although they are most likely in such formats that Paint won't open them anyway.
The old approach to rendering is: everything is in 8-bit SRGB (gamma space). No conversion is done when opening ordinary image files, and no conversion is necessary after rendering is done. However, when the renderer computes lighting, it adds intensities with simple addition straight in gamma space (SRGB). That is incorrect, so light blending and quadratic falloff look wrong. It would be correct if rendering were in linear space, but it is not. This is how Doom 3 works, and this is how TDM normally works.
The first solution to the problem is rendering everything in linear space (RGB). Then you store all textures in linear space (i.e. they look wrong when you open them in Paint), and you store all FBOs in linear space. At the very end of rendering, you do gamma correction as the final pass, thus converting color from linear space to gamma space in order to send it to the monitor. To avoid color banding from the final gamma correction, you use HDR, i.e. high-precision color formats.
The second solution is to use SRGB support in OpenGL. You mark incoming textures as SRGB, and mark your FBOs as SRGB. It does not magically change anything, because in TDM they are all SRGB regardless of whether you mark them or not. But now blending and texture filtering have to convert from gamma space to linear space before the operation, and from linear space back to gamma space after the operation. Thus, intensities add up properly. Moreover, when you fetch data from a texture marked as SRGB in your shader, OpenGL automatically converts it from gamma space to linear space before returning it. If you write an output color to an SRGB buffer (and have GL_FRAMEBUFFER_SRGB enabled for that FBO), then OpenGL automatically converts it from linear space into gamma space. So whatever you have in your shader is in linear space (and most likely in 32-bit float), but all data outside the shader is in gamma space (and most likely 8-bit, since if you use more precision, the first solution is better).
So you are wrong in saying that the default framebuffer is linear. It always contains SRGB (gamma space) data. And our textures always contain SRGB data. The only problem is that we incorrectly deal with this SRGB data all over the renderer. Using the SRGB extension in OpenGL makes it possible to fix that, forcing OpenGL to auto-convert to linear space and back around every operation which could add colors.
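The difference between adding intensities in gamma space (the old approach) and in linear space (what the SRGB machinery effectively gives you) can be illustrated with the standard sRGB transfer functions from IEC 61966-2-1. This is a standalone sketch, not TDM renderer code:

```cpp
#include <cmath>

// Standard sRGB transfer functions (IEC 61966-2-1), per channel in [0, 1].
double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92
                          : std::pow((c + 0.055) / 1.055, 2.4);
}
double linearToSrgb(double c) {
    return (c <= 0.0031308) ? c * 12.92
                            : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}

// Averaging two intensities straight in gamma space: the Doom 3 / old-TDM way.
double blendInGammaSpace(double a, double b) {
    return 0.5 * (a + b);
}

// Averaging in linear space: what SRGB textures + GL_FRAMEBUFFER_SRGB
// effectively do around every blend (convert in, operate, convert out).
double blendInLinearSpace(double a, double b) {
    return linearToSrgb(0.5 * (srgbToLinear(a) + srgbToLinear(b)));
}
```

For example, averaging black and white gives 0.5 in gamma space but roughly 0.74 in linear space; this gap is exactly why gamma-space light accumulation looks too dark and makes quadratic falloff look wrong.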
  13. Ok, how about this: we add a trivial shader which only applies r_gamma and r_brightness to the scene. If postprocessing is off, we just run it over the whole screen. If it is on, we run it after postprocessing is over. We run it as the very final step in any case! So we can finally remove the stupid monitor tables and their Linux equivalents. Color banding is still here, of course. Ambient tweaks and SRGB stay as experimental stuff. Speaking about color banding with SRGB: there should be no color banding if you display an SRGB buffer "as is". Such a buffer is already in gamma space, native for the monitor. The difference from RGB is that the shader automatically converts inputs from gamma space to linear space and outputs from linear space back to gamma space. But if you want to do any additional gamma adjustments, then you are screwed without high-precision buffers.
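Such a final gamma/brightness pass boils down to a simple per-channel formula. Below is one plausible version of it; the exact expression the TDM shader uses is an assumption here, not quoted from the source:

```cpp
#include <algorithm>
#include <cmath>

// One plausible per-channel formula for a final gamma/brightness pass
// (an assumption, not the actual TDM shader): scale by brightness,
// apply gamma, then clamp to the displayable range.
double applyGammaBrightness(double c, double gamma, double brightness) {
    double v = std::pow(c * brightness, 1.0 / gamma);
    return std::min(1.0, std::max(0.0, v));
}
```

With gamma = 1 and brightness = 1 this is the identity, which is why running it unconditionally as the very final step is cheap and safe.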
  14. Looking at the code, the used_by_inv_id spawnarg is not read anywhere. I guess that note on the wiki is a kind of feature request. UPDATE: I don't see inv_id either.
  15. I think it would be a better idea to let @taaaki know about this; maybe it is some misconfiguration on his part. Without his assistance, communication with support would be problematic anyway.
  16. Did you try to set gamma/brightness in game console? Usually you press Ctrl+Alt+Tilde to open game console. Type r_gamma 1.5 or r_brightness 2 there and hit Enter. Check if it affects the visual look.
  17. It depends on the .proc file. If you use the old one, there is no "prelight" in the .proc file; as a result, it works. If you dmap it, then the prelight version gets added to the .proc file. The engine sees that and chooses a different code path in one place, which results in incomplete lighting. I'll look into it.
  18. Bugs... they happen from time to time. I recall how I reviewed this code and missed it. Perhaps I should add a randomized test for this function (i.e. compare against the results of a generic implementation). Anyway, I have recalled where I saw a big outdoors map with a huge light, and I have found it in my installation. It was Noble Cave by AluminumHaste. Unfortunately, it has no portals outdoors, maybe because adding them broke shadows. Calling @AluminumHaste...
  19. Dragofer reported a problem with shadows, but it turned out to be an entirely different problem (5106). Basically, TDM 2.07 introduced a horrible bug to stencil shadows. Most likely, it only affects precomputed stencil shadows for projected/directional lights. Also, it does not affect mappers with very old CPUs. If you dmapped anything with TDM 2.07 with projected/directional lights, it might be worth re-dmapping it with TDM 2.08 when it comes out. P.S. Still, I would be happy to see some testmap with a (global) parallel light broken.
  20. Well, I did not achieve anything good on this problem. I only fixed popping on perfectly flat walls with a rough bumpmap (by hacking the shading equation). Looking here and here, I see the following solutions: 1) Offset the front cap of the shadow volume along the outer normal by some constant distance ("front cap" is the part which matches the backfaces), then enable lighting for backfaces (probably can be done even now with a cvar). It should make things better, but may introduce light leaks on nearby objects. 2) Render front faces as normal, but then additionally render backfaces where the stencil value is >= 2. In simple cases there is only one layer of shadow polygons in front, and such places will be lit by ordinary shading, which should be smooth and dark anyway. 3) Hack the shading equation so that it makes a triangle darker when the angle between the triangle normal and the light direction is small. This is the easiest to do, but my attempts to do so made the polygonization of apples even more obvious. I did not try points 1 and 2. Maybe something to consider in the future. Point 2 sounds interesting.
  21. I have bumped into an old issue about glitches with directional lights: 3818. At the same time, I have recently fixed a bug in interaction culling, which happens when a light source illuminates many portal areas at once. If we are lucky, that bug could be the only one. It would be great to check it on some test map, but creating a testmap from scratch would be a horrible waste of time. I wonder if someone has experimented with parallel lights and vast outdoor areas and seen shadows randomly disappearing. If yes, could you please share your test map?
  22. Yes, we compiled it with VC 2017. Of course, it needs the proper redist package to be installed. To be honest, the only thing which surprises me is that TDM 2.07 works fine but TDM 2.07-hotfix does not. This is almost the same code built with the same tools and settings. How can that be?
  23. Maybe it makes sense to define damage zones then, and significantly reduce damage on non-vulnerable bones. I agree, but I don't see any indication that the creators of steambots wanted them to be invulnerable to ordinary weapons. I think not. They are "steam bots", so they somehow work purely on steam.
  24. I second Goldwell's proposal: run the vcredist executables located in your TDM root directory. Aside from that, TDM 2.08 will no longer depend on the VC redistributable (just like it was until 2.05), so you should be able to run the SVN version on a fresh Windows. UPDATE: Finally, if nothing helps, you can use procmon to see where it looks for these DLLs. But this would take considerable time and effort. P.S. Upgrades "through several major versions" are rarely pain-free. Windows 10 has already had 5-10 revisions released.