The Dark Mod Forums

Leaderboard


Popular Content

Showing content with the highest reputation on 04/14/20 in all areas

  1. 2 points
    I was referring to D3W advice in general, not the specific text you posted. I've seen several forum posts on D3W, and I think even on these forums, where people have repeated the "fact" that Doom 3 has a problem with more than 3 lights hitting a surface, and advised mappers to cut up brushes to make sure they never see a cyan surface in the r_showLightCount view, even though this was a waste of time that made no difference to frame rates back when I was doing it on a Radeon 9800 XT in 2006. I suspect that the "3 light limit" comes from the descriptive text in the explanation of r_showLightCount:
    1. Black = no light hitting the surface.
    2. Red = 1 light hitting the surface (this is excellent).
    3. Green = 2 lights hitting a surface (this is great).
    4. Blue = 3 lights hitting a surface (this is getting bad; you need to keep these areas small, but it is OK to have a number of smaller blue areas in a room).
    5. Cyan, pink, or white = 4+ lights hitting a surface (this is bad; you should only have this in very small, isolated areas).
    It would be easy to interpret these descriptions as implying that something bad happens between 2 lights ("this is great") and 3 lights ("this is getting bad"), even though there is no such limit, and these textual descriptions were (as far as I can tell) just off-the-cuff phrases that never actually reflected any empirical performance testing. There is no such maximum-lights limit, because light count is just one of many factors influencing performance, along with poly count, shadow count, shadow mesh complexity, number of shadow-casting lights, number and size of open/closed visportals, and many other things.
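To make the quoted color coding easy to scan, here is a minimal Python sketch of the mapping described above; the function name and color strings are illustrative only, not engine code:

```python
def light_count_color(count):
    """Map the number of lights hitting a surface to the r_showLightCount
    debug color described in the quoted text (illustrative sketch)."""
    if count <= 0:
        return "black"            # no light hitting the surface
    if count == 1:
        return "red"              # 1 light (excellent)
    if count == 2:
        return "green"            # 2 lights (great)
    if count == 3:
        return "blue"             # 3 lights (keep these areas small)
    return "cyan/pink/white"      # 4+ lights (keep very small and isolated)
```

The point of the surrounding post stands: these colors are a visualisation aid, not hard performance thresholds.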
Giving suggested limits for particular scene elements is counterproductive, because these limits tend to get enshrined in documents and forum posts and propagated through time as a sort of "cargo cult", where people believe that as long as they have fewer than a certain number of lights or polygons their scene will be fast to render, even though there may be many cases where having more lights/polygons would be fine if other elements of the scene are simpler, and vice versa. The only meaningful way to address performance is empirical testing, with optimisations guided by that data. For example, if you have a scene that is particularly slow to render, and you find that you have a large surface hit by 20 different lights which all cast shadows, that might be the thing to optimise; whereas in another scene you might find that you have 10 million polygons in view due to a reflective surface and too many open visportals, and those are the things you need to fix. But in no case does this mean that if you stick to 19 different lights and only 9 million polygons, you will never have a performance problem. No need to get defensive; I wasn't criticising you, just pointing out that some of that old advice from D3W and other forums isn't particularly accurate and shouldn't be relied upon by today's mappers, particularly when it asserts specific numeric limits that quickly become obsolete as hardware evolves, and might not have been correct to begin with. There is nothing wrong with looking at light counts in conjunction with other metrics (but you must also consider the area of the over-lit polygons, not just the count, and the effect of light scissoring, which considerably restricts the actual illuminated area based on the size of the light, even if that light hits a large polygon), provided you don't become attached to certain fixed "ideal light counts", which are just too crude a measure to be useful.
Overall my advice with light count and everything else (polygons, shadows etc.) is:
    1. Minimise everything as far as you possibly can without compromising the artistic intent.
    2. Use in-game diagnostics and statistics as a guide, without getting hung up on particular numeric thresholds and values.
    3. Test on a variety of hardware (especially mediocre hardware) if you can, to avoid producing a map which only runs well on your super high-end rig.
  2. 2 points
    Alright, looks like I found the solution: besides patches and func_emitters, there exists a third method for generating particles: func_smoke entities. Like scripts, they produce particles that exist in the world independently of any entity, so they'll stay where they were made regardless of where the func_smoke goes afterwards. Based on preliminary tests it seems to do what I need perfectly. Wiki: World Particle System
  3. 1 point
    Yes, completely agree, you are totally right there; I just didn't think that through. And if I came off as defensive, that was not my intention at all; that particular phrase was more of a disclaimer than anything, but reading it again I get why it can feel like I'm being defensive. It's the problem of text not being very good at conveying states of mind.
  4. 1 point
    It is highly unlikely that the presence of any particular -devel package on your system is going to cause DR to crash, unless that devel package is so horribly broken that it actually breaks software that is compiled while it is installed (which is very unlikely in the first place, and such a package would almost certainly be removed by the distribution).
  5. 1 point
    I just did a rewrite of the wiki article on Caulk Please take a look. In particular, I added a section on Caulk Sky (which was new to me), with a small placeholder paragraph about Atmospheric Fog. Does anyone know if there's still a difference these days (given the Portal Sky changes in 2.06) between how fog (and its boundaries) works with portal sky versus caulk sky? If so, please revise it yourself or enlighten me. Thanks.
  6. 1 point
  7. 1 point
    I don't like that you are talking about rendering performance as a binary thing: the engine either renders an object or it doesn't. It is much more complex. Aside from draw calls (which are very important), there are other things involved: for instance, pixel fill rate (think of the pixels you have to fill with data) and other reads/writes to memory, as well as the complexity of the per-pixel visual effects (which are done in pixel = fragment shaders). So it is all more complicated: the engine is big, and it is very hard to fully understand all the performance details.

    First I'll try to correct a misunderstanding here (well, at least I think it is one): an entity cannot be culled out only because it is occluded by some set of entities. So if there is a candle behind a crate (or several crates) in a room, the candle will be rendered regardless of the crates' presence. In my opinion, there is no efficient way to check that the candle is fully behind a crate or a barrel, which is why I think so. Quake 1 really does have a software renderer with zero overdraw, but the Doom 3 renderer does not. If the engine renders several surfaces which are located at the same place on the screen (and thus occlude each other), then even the occluded pixels take some time to render. However, Doom 3 performs an "early depth pass": it renders all the geometry without any lighting/coloring/texturing at the very beginning of the frame to produce a depth map. After that it renders everything again, but all the complex math for lighting/texturing (i.e. the fragment shader) is done only for the pixels which are actually visible and not occluded by anything. For the pixels which are occluded, only the depth calculation and comparison is done (which is much cheaper than full rendering). So you pay for visible pixels, and pay much less for occluded pixels. As you see, the only way to not pay for triangles and draw calls at all is to cull the surface completely in the frontend. This is what I'll try to explain now: let's go back to portal culling.
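The early-depth-pass idea above can be sketched in a few lines of Python. This is a conceptual model, not engine code: fragments are tuples, the "depth buffer" is a dict, and `shade` stands in for the expensive fragment shader. The key point is that the expensive work only runs for fragments that survive the depth test.

```python
def render_with_depth_prepass(fragments, width, height, shade):
    """fragments: list of (x, y, depth, material) tuples.
    Returns the shaded framebuffer and the number of expensive
    shading invocations that were actually performed."""
    INF = float("inf")
    depth_buffer = {(x, y): INF for x in range(width) for y in range(height)}

    # Pass 1: depth only -- cheap per fragment, no lighting/texturing.
    for x, y, depth, _ in fragments:
        if depth < depth_buffer[(x, y)]:
            depth_buffer[(x, y)] = depth

    # Pass 2: full shading only for fragments that pass the depth test;
    # occluded fragments cost just the (cheap) depth comparison.
    framebuffer = {}
    shaded = 0
    for x, y, depth, material in fragments:
        if depth == depth_buffer[(x, y)]:
            framebuffer[(x, y)] = shade(material)
            shaded += 1
    return framebuffer, shaded
```

With two fragments stacked on one pixel, only the nearer one pays for `shade`; the occluded one is rejected by the depth comparison, which mirrors "you pay for visible pixels, and pay much less for occluded pixels".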
Let's call an arbitrary convex polygon on screen a "winding". For instance, every portal that you see with r_showPortals is rectangular, and its winding (which is a 2D thing) is a 4-gon. Every area can be seen by the player through a sequence of portals (let's call it a "portal path"). For the area where the player is located, there are zero portals in this sequence. Given a portal path, you can intersect the windings of its portals, and you'll get a polygonal winding through which you see the area at the end of the path. For instance, in this picture the outside area is visible through two portals, which together yield a 5-gonal winding (marked in orange). Now the main rules are:
    1. If the windings of the portal path have an empty intersection, then the path is culled out.
    2. If one of the portals is sealed (e.g. a closed door), then the portal path is culled out.
    If all portal paths leading to an area are culled out, then it is not rendered and you don't pay for it. If there is a single remaining portal path into an area, then its winding (recall that it is the intersection of the path's portals) is used to cull the entities in the area: if an entity's bounding box is not visible through the winding, then the entity is not rendered (so you do not pay for it). If there are several portal paths leading into an area, an entity is drawn only if its bbox gets into at least one winding (I think so). Unfortunately, I cannot say how the winding of the portal path affects the world geometry, but I can imagine that surfaces (sorry, I don't know if "surface" is a known concept/term in DR) which are surely not visible through the winding are culled out too. This is not over yet: there is also an OpenGL feature called the "scissor test", which allows skipping the rendering of pixels outside a specified (axis-aligned) rectangle on the screen.
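Intersecting the windings of a portal path is just clipping one convex 2D polygon against another. Below is a hedged Python sketch using the standard Sutherland-Hodgman algorithm; this illustrates the concept and is not the engine's actual implementation. Polygons are counter-clockwise lists of (x, y) vertices, and an empty result means the portal path is culled.

```python
def clip_winding(subject, clipper):
    """Intersect convex polygon `subject` with convex counter-clockwise
    polygon `clipper` (Sutherland-Hodgman clipping).
    Returns the intersection polygon; [] means empty intersection."""
    def inside(p, a, b):
        # True if p lies left of (or on) the directed edge a->b.
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        dx, dy = q[0]-p[0], q[1]-p[1]
        ex, ey = b[0]-a[0], b[1]-a[1]
        t = ((a[0]-p[0])*ey - (a[1]-p[1])*ex) / (dx*ey - dy*ex)
        return (p[0] + t*dx, p[1] + t*dy)

    output = subject
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i+1) % len(clipper)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j+1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
        if not output:          # empty intersection: path culled out
            return []
    return output
```

Folding the windings of a two-portal path is then `clip_winding(portal1, portal2)`; a result of `[]` corresponds to rule 1 above.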
Doom 3 uses it heavily: when rendering an area, it sets the scissor rectangle to the minimal rectangle bounding the windings of all the portal paths leading into the area (usually one portal path leads to an area, but not always). As you see, even portals which the player can see through (drawn as green when you enable r_showPortals) can help a lot with culling. Note that the description above explains the final effect of portal culling, but does not exactly describe how the implementation works internally. Also, I think that the overhead introduced by portals is pretty low: you should not be afraid that adding 2x more portals would waste CPU time (I suppose so). Note that there are also shadows, which are culled by different code: for each shadow-casting light, a similar procedure is started from the light itself. Whether a portal is sealed does not matter here. The algorithm for culling by portal path seems to be completely different in this case, but it also takes into account the fact that even visible portals limit the range through which the light goes. Having said all that, I'd like to ask a question: has anyone seen a case where adding a portal (or several portals) seriously reduced performance?
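Deriving that scissor rectangle is simple: it is the axis-aligned bounding box of every vertex of every winding leading into the area. A small illustrative Python sketch (again, not engine code; windings are lists of (x, y) vertices in screen space):

```python
def scissor_rect(windings):
    """Minimal axis-aligned rectangle bounding all winding vertices.
    Returns (min_x, min_y, max_x, max_y); pixels outside this rectangle
    can be skipped via the OpenGL scissor test."""
    xs = [x for winding in windings for x, _ in winding]
    ys = [y for winding in windings for _, y in winding]
    return (min(xs), min(ys), max(xs), max(ys))
```

In OpenGL terms, such a rectangle would be applied with `glScissor` while `GL_SCISSOR_TEST` is enabled.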