The Dark Mod Forums

cabalistic
  1. You're right, the shadow geometry can be cached. Rendering it to the stencil buffer, however, cannot, because it's done from the camera's point of view (unlike shadow maps), and that typically changes between frames. But of course, the geometry is the more significant cost.
  2. At least for classical stencil shadows, yes. You can't really implement a forward+ renderer with stencil shadows. Technically, TDM uses a trick for soft stencil shadows that would allow it, but it is detrimental to performance. And even then, stencil shadows would have the disadvantage that they need to be re-rendered on every frame, whereas shadow maps can potentially be cached as long as the light itself or the shadow casters in its vicinity don't move. As for putting shadow volume generation on the GPU: that's possible, in principle. However, my experience with geometry shaders has been poor; they usually tank GPU performance quite significantly if you use them for anything other than a few simple objects. I'm not sure it's a good trade-off, and personally I think the kind of effort needed to implement it is better spent on improving shadow mapping. But that's just my personal opinion.
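     A minimal sketch of the caching argument (hypothetical names, not TDM's actual code): a shadow map can be reused until its light or a nearby caster changes, while a stencil volume has to be redrawn from the camera's view every frame.

        // Hypothetical caching scheme, for illustration only.
        struct ShadowMapCache {
            GLuint depthTexture = 0;
            bool dirty = true;   // set whenever the light or a nearby shadow caster moves
        };

        void UpdateShadowMap(ShadowMapCache &cache, const Light &light) {
            if (!cache.dirty)
                return;                                       // reuse last frame's depth texture
            RenderCastersToDepth(light, cache.depthTexture);  // hypothetical helper
            cache.dirty = false;
        }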
  3. The performance comparison is not that simple. Yes, the GPU side of stencil shadows is fairly cheap, but even there they have drawbacks. In particular, they require rendering each (shadow-casting) light separately, which is not ideal in modern rendering architectures and does not scale particularly well with many lights. And while the rendering itself is cheap, creating the necessary shadow geometry is not. That's traditionally done on the CPU, and it's one of several reasons why TDM is currently heavily CPU-bottlenecked. There are a couple of reasons why shadow maps don't come out ahead yet, but I'm willing to wager that they will eventually outperform stencil in TDM.
  4. This is a question for @duzenko, but I think the fullscreen resolution handling was changed so that the render window is always at full desktop resolution, while the internal render resolution is set according to your choice. So even if the app window runs at 4K, the actual rendering should not. Can you see a quality/performance difference in-game between different resolution settings?
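     If that's how it works, the usual approach looks roughly like this (a sketch, not TDM's actual code; internalWidth/internalHeight and windowWidth/windowHeight are placeholder variables): render into an offscreen target at the chosen internal resolution, then blit/scale it to the desktop-resolution window.

        // Sketch only: assumes a GL context and function loader are already set up.
        GLuint fbo = 0, colorTex = 0, depthRbo = 0;
        glGenFramebuffers(1, &fbo);
        glGenTextures(1, &colorTex);
        glGenRenderbuffers(1, &depthRbo);

        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, internalWidth, internalHeight,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, internalWidth, internalHeight);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

        // ... render the scene into 'fbo' at internalWidth x internalHeight ...

        // Scale up to the desktop-resolution window in one blit.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, internalWidth, internalHeight,
                          0, 0, windowWidth, windowHeight,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);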
  5. That's the dynamic index/vertex buffer running out of memory. This is fine, though: the buffers are resized automatically. To avoid wasting GPU memory, they start at fairly modest sizes, but the more demanding your map is, the more memory they need; whenever the current size is not sufficient, they grow. The messages are informative only.
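     The mechanism is roughly this (a simplified sketch with made-up names, not the actual TDM implementation):

        // Simplified sketch: a per-frame vertex buffer that grows on demand.
        // Assumes a GL context/loader is initialized.
        struct DynamicVertexBuffer {
            GLuint buffer = 0;
            size_t capacity = 1u << 20;   // start modest (1 MB)
            size_t used = 0;

            void Init() {
                glGenBuffers(1, &buffer);
                glBindBuffer(GL_ARRAY_BUFFER, buffer);
                glBufferData(GL_ARRAY_BUFFER, capacity, nullptr, GL_STREAM_DRAW);
            }

            // Copies 'size' bytes into the buffer, growing it if the frame needs more.
            size_t Append(const void *data, size_t size) {
                if (used + size > capacity) {
                    while (used + size > capacity)
                        capacity *= 2;
                    // print the informative "buffer resized" message here
                    glBindBuffer(GL_ARRAY_BUFFER, buffer);
                    glBufferData(GL_ARRAY_BUFFER, capacity, nullptr, GL_STREAM_DRAW);
                    used = 0;   // contents are rebuilt every frame anyway
                }
                glBindBuffer(GL_ARRAY_BUFFER, buffer);
                glBufferSubData(GL_ARRAY_BUFFER, (GLintptr)used, (GLsizeiptr)size, data);
                size_t offset = used;
                used += size;
                return offset;
            }
        };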
  6. Perhaps it's not texture-related, but rather the vertex/index buffers. That might suggest memory corruption or possibly missing/incorrect GPU fences.
  7. You people keep missing each other's points, I think. Context matters, Alberto: while I agree with your practices in principle, even "universal" concepts don't apply universally to every problem. I would recommend trying to find a recording of John Romero talking about id Software's history and how they approached game design. He specifically mentions why missing assets are not treated as hard errors, and - surprise - it was specifically to enable fast iteration and getting products shipped. Like I said, context matters. Missions and assets are not quite the same thing as writing code.
  8. This would basically require developing a "new" renderer in WebGL, and possibly some other significant platform porting. I'm not stopping anyone from doing it, but this is going to be a significant amount of work. If I'm writing a new renderer, I'd rather learn Vulkan in the process than yet another variant of GL.
  9. Well, I did get VR rendering working in my PoC, but it turned out that performance is too poor in many maps. That's why I abandoned the effort until, perhaps, enough performance bottlenecks are solved one day to make it more feasible. A basic stereoscopic rendering would not be as performance-critical and is not terribly difficult, but there are some gotchas. In particular, some optimizations in the engine do not play nicely with stereoscopic rendering, so it's still some effort to put in. Given that the renderer is undergoing some major changes at the moment, I don't think the time is right to add this, as TDM has enough to do for 2.08. Also, as long as that work is ongoing, the renderer is fairly unstable code-wise, which makes adding (and maintaining) stereo support more work.
  10. Stereoscopic support was only added with Doom3 BFG, but TDM is based on the older Doom3 engine. Therefore, there is no stereoscopic support whatsoever.
  11. Capture frame should just work. If it doesn't, I'm afraid I don't know how to help you with that. You could try the standalone version of Nsight; it worked better for me than the Visual Studio-integrated one.
  12. It's not about batching - it's about avoiding driver call overhead. Just to be clear, what it achieves is saving CPU time in the backend, which in turn gets render data to the GPU faster and keeps the GPU from needlessly idling. To profit from this, the GPU needs to be starving in the first place. If you have a weak GPU and run it at demanding settings, then yeah, it's possible you won't see much of an effect, because driver overhead wasn't the bottleneck. (It's also possible you're doing something wrong - you'd have to measure carefully with Nsight to try and catch what's going on.) But in my experiments, where I replaced depth and stencil shadow draws with multi-draws, the effects were pretty clear. They cut down CPU time for those passes significantly and as a result allowed the GPU to render more quickly. Of course, stencil (and particularly depth) are often not the most significant parts of the scene, so the overall effect on FPS is not gigantic (but it was still measurable).
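     Roughly what that replacement looks like (a sketch with hypothetical scene data, not the actual TDM code; assumes <vector> and a GL loader): build one indirect command per surface and submit the whole batch with a single driver call.

        // Struct layout is dictated by the GL spec for glMultiDrawElementsIndirect.
        struct DrawElementsIndirectCommand {
            GLuint count;
            GLuint instanceCount;
            GLuint firstIndex;
            GLint  baseVertex;
            GLuint baseInstance;
        };

        std::vector<DrawElementsIndirectCommand> cmds;
        for (const Surface &surf : visibleSurfaces) {          // hypothetical scene data
            cmds.push_back({ surf.indexCount, 1, surf.firstIndex, surf.baseVertex,
                             (GLuint)cmds.size() });           // baseInstance can index per-draw data
        }

        GLuint indirectBuf = 0;
        glGenBuffers(1, &indirectBuf);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
        glBufferData(GL_DRAW_INDIRECT_BUFFER, cmds.size() * sizeof(cmds[0]),
                     cmds.data(), GL_STREAM_DRAW);

        // One driver call instead of cmds.size() individual glDrawElements calls.
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr,
                                    (GLsizei)cmds.size(), 0);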
  13. Anything that applied an offset, I rendered separately (classically). Eventually, I think we need to migrate away from the GL polygon offset functions and do our own offsetting in vertex shaders, based on parameters. For one, that lets offset surfaces participate in multi-draws, and for another, the offsets would actually be predictable. The GL polygon offsets are not: their effects can vary among GPUs and drivers.
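     A sketch of the shader-side offset (illustrative names, not the actual TDM shaders), assuming the offset is passed as a per-material parameter:

        // GLSL embedded as a C++ string literal; the offset nudges the vertex
        // towards the camera in clip space instead of relying on glPolygonOffset.
        static const char *offsetVertexShader = R"(
            #version 330 core
            layout(location = 0) in vec3 position;
            uniform mat4 modelViewProjection;
            uniform float depthOffset;   // per-material parameter, playing the role of the offset 'units'

            void main() {
                gl_Position = modelViewProjection * vec4(position, 1.0);
                gl_Position.z -= depthOffset * gl_Position.w;   // shift NDC depth by a fixed amount
            }
        )";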
  14. Yes, you need a new set of shaders; no way around that. I'm not sure whether you can make the non-multipass path use that same set on a GL3 feature base. Anyway, I used a single depth multi-draw command for all non-transparent objects, and then rendered the transparent ones classically on top. Since they use actual textures for drawing, it's not as easy to get them into a multi-draw call. But even if you do, since their fragment shader is much more costly, it still makes sense to keep them separated from the solid ones and use a specialized fragment shader.
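     To illustrate why the split pays off (sketch shaders, not TDM's actual ones): the solid depth pass needs no fragment work at all, while alpha-tested/translucent surfaces have to sample a texture and discard.

        // Depth-only pass for solid geometry: the fragment shader does nothing.
        static const char *depthSolidFS = R"(
            #version 330 core
            void main() {}
        )";

        // Alpha-tested surfaces: per-fragment texture fetch plus discard,
        // which also forces per-surface texture binds between draws.
        static const char *depthAlphaTestFS = R"(
            #version 330 core
            uniform sampler2D diffuseMap;
            uniform float alphaCutoff;
            in vec2 texCoord;
            void main() {
                if (texture(diffuseMap, texCoord).a < alphaCutoff)
                    discard;
            }
        )";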
  15. There is no overhead - you have to transfer the model matrices anyway, whether by glUniform or by a buffer. In fact, a buffer upload is most likely faster, because you only do a single upload for the whole batch instead of repeatedly setting a glUniform. It takes a little more GPU memory, but it's not dramatic. Note that UBOs have an array size limit of, I think, around 500 entries or so. For larger collections, you want to use SSBOs, which is what I did in my experiments.
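     A sketch of the SSBO variant (illustrative names, not the actual experiment code); it assumes GL 4.6 / ARB_shader_draw_parameters so gl_DrawID can pick the matrix for each draw in the batch:

        // GLSL as a C++ string literal: all model matrices for the batch live
        // in one SSBO and are indexed per draw.
        static const char *batchVertexShader = R"(
            #version 460 core
            layout(location = 0) in vec3 position;

            layout(std430, binding = 0) buffer ModelMatrices {
                mat4 modelMatrix[];          // one entry per draw in the multi-draw batch
            };
            uniform mat4 viewProjection;

            void main() {
                gl_Position = viewProjection * modelMatrix[gl_DrawID] * vec4(position, 1.0);
            }
        )";

        // C++ side: one upload for the whole batch instead of per-draw glUniform calls.
        // glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, matrixSsbo);
        // glBufferData(GL_SHADER_STORAGE_BUFFER, matrices.size() * sizeof(matrices[0]),
        //              matrices.data(), GL_STREAM_DRAW);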