The Dark Mod Forums


Everything posted by greebo

  1. I managed to port some of the shadow mapping code over to DR; it can now support up to 6 shadow-casting lights. Of course, it'll never be as pretty as TDM, and nowhere near as fast, but it's a start.
  2. The GUI text placement code is definitely not perfect; I reverse-engineered it back in 2010. If you have a concrete example, I can look into it and compare it to the TDM sources, which we have available in the meantime.
  3. It's the way they are internally stored to reduce draw calls, but they are indeed similar. I implemented the IWindingRenderer first, since that was the most painful spot, and I tailored it exactly for that purpose. The CompactWindingVertexBuffer template is specialised to the needs of fixed-size Windings, and the buffer is designed to support fast insertions, updates and (deferred) deletions. I guess it's not very useful for the other Geometry types, but I admit that I didn't even try to merge the two use cases. I tackled one field after the other; it's possible that the CompactWindingVertexBuffer can now be replaced with some of the pieces I implemented for the lit render mode - there is another ContinuousBuffer<> template that might be suitable for the IWindingRenderer, for example. It's quite possible that the optimisation I made for brush windings was premature and that parts of it can be handled by the less specialised structures without sacrificing much performance. The model object is no longer involved in any rendering; it just creates and registers the IRenderableSurface object. The SurfaceRenderer then copies the model vertices into the large GeometryStore - memory duplication again (the model node needs to keep the data around for model scaling). The size of the memory doesn't seem to be a problem; the data is static and is not updated very often (except when scaling, but the number of vertices and indices stays the same). The thing that makes surfaces special is their orientation: they have to be rendered one after the other, separated by glMultMatrix() calls. Speaking about writing the memory allocator: I was quite reluctant to write all that memory management code, but I saw no escape route. It must have been the billionth time this has been done on this planet. I'm definitely not claiming that I did a good job on any of those, but at least it doesn't appear in the profiler traces.
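The fast insert/update/deferred-delete scheme described for the CompactWindingVertexBuffer can be sketched roughly like this (a minimal illustration under assumptions, not DR's actual code: the FixedWindingBuffer name, the simplified Vertex type and the method names are all hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for DR's vertex type.
struct Vertex { float x, y, z; };

// Sketch of a buffer holding windings of one fixed size in a single
// contiguous vertex array, re-using freed slots instead of compacting.
class FixedWindingBuffer
{
public:
    using Slot = std::size_t;

    explicit FixedWindingBuffer(std::size_t windingSize) :
        _windingSize(windingSize)
    {}

    // Appends a winding, or re-fills a previously freed slot.
    Slot pushWinding(const std::vector<Vertex>& winding)
    {
        assert(winding.size() == _windingSize);

        if (!_freeSlots.empty())
        {
            Slot slot = _freeSlots.back();
            _freeSlots.pop_back();
            std::copy(winding.begin(), winding.end(),
                      _vertices.begin() + slot * _windingSize);
            return slot;
        }

        Slot slot = _numSlots++;
        _vertices.insert(_vertices.end(), winding.begin(), winding.end());
        return slot;
    }

    // Deferred deletion: remember the slot, don't shuffle any memory.
    void removeWinding(Slot slot)
    {
        _freeSlots.push_back(slot);
    }

    std::size_t getNumAllocatedSlots() const { return _numSlots; }
    std::size_t getNumFreeSlots() const { return _freeSlots.size(); }

private:
    std::size_t _windingSize;
    std::size_t _numSlots = 0;
    std::vector<Vertex> _vertices;
    std::vector<Slot> _freeSlots;
};
```

The point of the free-slot list is that removing and re-adding windings of the same size never moves the large vertex array around, so slot handles held by other windings stay valid.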
  4. Yes, this is interesting. It's achievable, with some cost, of course. Right now, the Shaders themselves implement the interfaces IWindingRenderer, IGeometryRenderer and ISurfaceRenderer. A different authority could implement these interfaces, but it would need to map the objects to the Shaders somehow (likely by using a few std::maps). The renderer would then ask that authority to deliver the information, which separates the concerns. The fullbright backend renderer needs that info when processing the sorted shader passes: currently the passes ask their owning shader to draw its surfaces, and this would have to be moved elsewhere. The lighting mode renderer uses the objects as delivered by the render entities, without involving the Shader in the housekeeping, so that renderer is already heading in this direction.
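The "separate authority" idea could look roughly like this (a hedged sketch only; ObjectRegistry and IRenderable are illustrative names, not DR's API, and the std::map-based lookup is exactly the cost mentioned above):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal renderable interface for the sketch.
struct IRenderable
{
    virtual ~IRenderable() = default;
};

// Hypothetical authority: instead of each Shader owning its renderables,
// a registry maps shader names to the objects attached to them.
class ObjectRegistry
{
public:
    void attach(const std::string& shaderName, std::shared_ptr<IRenderable> obj)
    {
        _objectsByShader[shaderName].push_back(std::move(obj));
    }

    // The backend asks the authority for the objects of a given shader
    // while processing the sorted shader passes.
    template<typename Functor>
    void foreachObject(const std::string& shaderName, Functor func) const
    {
        auto it = _objectsByShader.find(shaderName);
        if (it == _objectsByShader.end()) return;

        for (const auto& obj : it->second)
        {
            func(*obj);
        }
    }

private:
    std::map<std::string, std::vector<std::shared_ptr<IRenderable>>> _objectsByShader;
};
```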
  5. I suspect it's all about the draw calls. In fullbright mode DR now invokes far fewer GL calls than in lit render mode. There's not much difference when it comes to the oriented model surfaces - these are really almost the same (and use the same vertex storage too) - it's the brushes and patches. Lit mode groups by entity; to regain the advantage of submitting everything in one go, it would need to dissolve that grouping information, which would have to happen every frame. I think this is going to be too taxing. Maybe when lit mode is more optimised, we can try to merge the two modes. You've made a correct observation about the backend data storage though: the geometry and winding renderers are (at the moment) not sharing their vertex data between the two modes; memory is duplicated and copied around often. That's not good. The main reason for this duplication is the chronological order in which I adjusted the renderer. I chewed through this starting with fullbright mode - first brushes, then patches, then models, and finally the visual aids like lines and points. After that I moved on to do the research on lit mode, and all of that is reflected in the code. I admit that I took this approach on purpose: when starting, I didn't have a full grasp of what was going to be necessary, so I had to learn along the way (and aim for not getting burnt out half-way through). Now that the full picture is available, the thing can be further improved, and the storage is probably among the first things that need to be optimised.
  6. First of all, thanks for taking the time to respond - this has been getting wordier than I anticipated. Yes, the distinction is in the shaders now. There is still a possibility to distinguish the two, since the VolumeTest reference provides the fill() check, so some onPreRender() methods react to this and prepare different renderables. It's still necessary at this point, since some wireframe renderables call for a different appearance. This doesn't mean that it can't get any simpler, though. The objects still ask for a coloured line shader, like <0 0 1> for a blue one. In principle, now that the vertex colour is shipped along with the geometry data, the colour distinction in the shader itself is maybe not even necessary anymore. There could be a single line shader used to draw stuff in the orthoview.
  7. Lighting Mode Approach
All EntityNodes register themselves in the RenderSystem on scene insertion. The same goes for the Lights: they register themselves as RenderableLight objects, so the OpenGLRenderSystem knows about every IRenderEntity and every RendererLight. The frontend phase uses the same onPreRender() call. All scene nodes are aware of their parent IRenderEntity (this had already been done before), which enables them to attach IRenderableObjects to their IRenderEntity. This way every entity knows about the renderable objects it is a parent of. During back end rendering, the algorithm uses the IRenderEntity::foreachRenderableTouchingBounds() method to select those objects that intersect a light's volume.
IGeometryStore
Every IRenderableObject has its vertex data stored in the central IGeometryStore owned by the render system. The object doesn't know where exactly that is, but it receives a Slot Handle which allows it to update the geometry or remove it later. The IRenderableObject::getStorageLocation() method exposes the storage handle and enables the back end renderer to access the data by object. The geometry store handles two RAM buffers protected by glFenceSync, in preparation for moving all that data to a GPU buffer without running the risk of altering the data while it's still in use. The number 2 can be increased if needed (TDM uses 3 of them). Changes and updates to the geometry buffer are recorded during the front-end render pass and propagated to the secondary buffer when switching between frames, to keep the amount of vertex data copied around reasonably low. The back end currently processes all the IRenderableObjects one by one, issuing the same glDrawElementsBaseVertex call for every encountered object (so there's room for optimisation here, possibly bunching the calls together and then using glMultiDrawElementsBaseVertex).
Windings
Windings are special again, and not very optimised as of yet. BrushNodes don't do anything; it's the Shader (in its role as WindingRenderer) that groups the windings per entity and clusters them into one large IRenderableObject per entity. Such an object is likely to intersect far too many lights in the scene, so there's room for using a flexible space partitioning system here.
Geometry and Surfaces
The base implementations provide a convenient attachToEntity() method which takes care of the bureaucracy. The nodes just need to call it with the correct IRenderEntity* argument.
Backend
I tried to use the TDM renderer as a blueprint. There's a dedicated RenderSystem::renderLitScene() method which is called by the CamWnd when in lighting mode. The steps are (see render/backend/LightingModeRenderer.cpp):
  • For every known light, check each known entity and intersect the objects
  • Every intersecting object produces an interaction; objects are sorted by entity and material
  • All collected objects are considered for the depth fill pass: only the suitable materials provide that pass
  • Interaction Pass: draw the objects per light and entity, using the correct GLSL program
  • Blend Pass: draw all other passes like blend stages or skyboxes
The cubemap program needed to render skyboxes has been implemented using GLSL. It doesn't handle reflective stages yet, only regular cubemaps.
Results
Everything is still pretty rough and not optimised yet, but it's working. Particle rendering, skyboxes, blend stages and regular light interactions are properly showing up, so it's at least at the same feature level as before the changes, which is what I've been aiming for in this branch.
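The Slot Handle and two-buffer idea described under IGeometryStore can be illustrated with a simplified sketch (no glFenceSync or GPU buffers here; the GeometryStore class and its method names are assumptions for illustration, not DR's actual interface):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-in for DR's vertex type.
struct Vertex { float x, y, z; };

// Sketch: objects receive an opaque slot handle; changes are recorded
// during the front-end pass and only the touched slots are copied to
// the second buffer when switching frames.
class GeometryStore
{
public:
    using Slot = std::uint32_t;

    Slot allocate(const std::vector<Vertex>& data)
    {
        Slot slot = static_cast<Slot>(_slots.size());
        _slots.push_back(data);
        _pendingUpdates.push_back(slot);
        return slot;
    }

    void update(Slot slot, const std::vector<Vertex>& data)
    {
        _slots.at(slot) = data;
        _pendingUpdates.push_back(slot);
    }

    // On frame switch, only the recorded slots are copied over,
    // keeping the amount of vertex data moved around low.
    void switchFrame()
    {
        _secondBuffer.resize(_slots.size());
        for (Slot slot : _pendingUpdates)
        {
            _secondBuffer[slot] = _slots[slot];
        }
        _pendingUpdates.clear();
    }

    std::size_t getNumPendingUpdates() const { return _pendingUpdates.size(); }

    // The back end reads from the buffer that is safe to use this frame.
    const std::vector<Vertex>& getRenderData(Slot slot) const
    {
        return _secondBuffer.at(slot);
    }

private:
    std::vector<std::vector<Vertex>> _slots;        // front-end writes here
    std::vector<std::vector<Vertex>> _secondBuffer; // back-end reads here
    std::vector<Slot> _pendingUpdates;
};
```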
  8. Fullbright Approach
The front-end render pass is reduced to a single call: onPreRender(const VolumeTest&). When invoked, each node has the chance to check for any updates that have happened since the last frame, like material changes, changed texture coordinates or new target lines. Nodes no longer submit renderables to the collector. Instead, they grab a reference to the Shader from the RenderSystem (like before) and attach their geometry to it. The geometry stays attached to the shader until it is updated or removed by the Node during a future onPreRender call, or until the node is removed from the scene. Shaders provide a specialised API for the most common use cases: an API for brush windings (IWindingRenderer), an API for general purpose geometry (path boxes, target lines, vertices, quads) called IGeometryRenderer, and an API for triangulated, oriented surfaces (models) called ISurfaceRenderer. The Nodes don't know how the shader deals with their data, but they receive a numeric Slot Handle that allows them to update or remove their geometry later. The above IWhateverRenderer implementations are designed to internally combine as many objects as possible. There is no distinction between Orthoview rendering and Camera rendering anymore (renderWireframe and renderSolid are gone). It's all about the shaders: they know whether they are suitable for rendering in one of these view types, or both. The Shader implementations provide a drawSurfaces() method that is invoked by a shader pass during the back end rendering phase. This sets up the glEnableClientState() calls and submits the data through glDrawElements.
Windings
To achieve fewer draw calls, all windings of a given size (more than 90% of the faces have 4 vertices) are packed into a single CompactWindingVertexBuffer that stores all windings of that material in a single large, indexed vertex array.
Winding removal and re-addition is fast: the buffer keeps track of empty slots and is able to re-fill them quickly with a new winding of the same size. Index generation uses a templated WindingIndexer class that creates indices for GL_LINES, GL_POLYGON and GL_TRIANGLES. It is up to the Shader to decide which indexing method is used: orthoview shaders use GL_LINES, while the camera preview uses GL_TRIANGLES. Every winding is specified in world coordinates.
Geometry
This is the API used by patches, entity boxes, light volumes, vertices, etc. Objects can choose the GeometryType they are rendering: Lines, Points, Triangles and Quads. The Shader internally sorts the objects into separate buffers for each primitive type, to submit a single draw call for all the objects sharing the same type. All Geometry uses world coordinates.
Surfaces
This API is similar to the Geometry API, but here no data is actually submitted to the shader. Instead, IRenderableSurface objects are attached to the shader; they provide a getSurfaceTransform() method that is used to set up the model matrix before submitting the draw calls. Surface vertices are specified in local coordinates.
Highlighting
The shader API provides an entry point to render a single object when it is selected. This is going to be much slower than the usual draw calls, but the assumption is that only a small portion of all map objects is selected at the same time.
Vertex Storage
While the data is now stored in the shader, it's still in main RAM. No VBOs have been used yet; that would be a logical next optimisation step.
Results
With the above changes, the number of draw calls in a fairly sized map went from 80k down to a few hundred. While the first attempts of combining the brushes doubled the frame rate of my benchmark map (using the same position and view angles, drawing it 100 times), this later went down to a 30% speed improvement after migrating the model surfaces.
It turns out that rendering the models using display lists is really fast, but it violated the principle of moving the calls to the backend. It has to be taken into account that after the changes, the vertex data is still stored in main memory, not in a VBO.
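The index generation performed by the templated WindingIndexer can be sketched with plain functions (an illustration of the idea only, assuming convex windings; the function names are hypothetical and DR's actual class is templated):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Index = std::uint32_t;

// GL_LINES style: one line segment per edge of the winding outline.
// A winding of N vertices starting at the given base yields 2*N indices.
std::vector<Index> generateLineIndices(Index base, Index numVertices)
{
    std::vector<Index> indices;
    indices.reserve(numVertices * 2);

    for (Index i = 0; i < numVertices; ++i)
    {
        indices.push_back(base + i);
        indices.push_back(base + (i + 1) % numVertices);
    }
    return indices;
}

// GL_TRIANGLES style: a triangle fan around the first vertex,
// which is valid because brush windings are convex.
std::vector<Index> generateTriangleIndices(Index base, Index numVertices)
{
    std::vector<Index> indices;
    indices.reserve((numVertices - 2) * 3);

    for (Index i = 1; i + 1 < numVertices; ++i)
    {
        indices.push_back(base);
        indices.push_back(base + i);
        indices.push_back(base + i + 1);
    }
    return indices;
}
```

For a quad starting at base vertex 8, the GL_LINES variant yields the four outline edges, while the GL_TRIANGLES variant produces a two-triangle fan.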
  9. I'm opening this topic to summarise the technical changes that have been made to DR's renderer and get some feedback from my fellow coders. I'd love to get a peer review on the code changes, but going through that by looking at a pull request of that renderer branch would be a terrible experience, I assume, so instead I'd like to give an overview of what is done differently now.
General things to know about DR's renderer
DarkRadiant needs to support three different render views or modes: orthographic view, editor preview (fullbright) and lighting preview. Each of them has very different needs, but the lit preview is the most complex one, since it ideally should resemble what the TDM engine is producing. Apart from the obvious things like brush faces and model geometry, it needs to support drawing editor-specific things like path connection lines, light volumes, manipulators (like the rotation widget) or patch vertices. Nodes can be selected, which makes them appear highlighted: they display a red overlay and a white outline in the camera preview, whereas the orthoview outlines selected items using a thicker red dashed line. DarkRadiant cannot specialise its renderer on displaying triangles only. Path lines for instance use GL_LINE_STRIPs, single brush faces (windings) use GL_POLYGON for their outline (triangulating brush faces in the ortho view or the camera (when selected) introduces a lot of visual noise - we just want the outline), and patches want to have their control mesh rendered using GL_QUADS. Model surfaces (like .ASE and .LWO models) on the other hand use GL_TRIANGLES all the way. Almost every object in DarkRadiant is mutable and can change its appearance as authors manipulate the scene. CPU-intensive optimisations like generating visportal areas are not a likely option for DR; the scene can fundamentally change between operations.
The Renderer before the changes
DR's rendering used to work like this: all the visible scene nodes (brushes, patches, entities, models, etc.) were collected. They were visited and asked to forward any Renderable object they'd like to display to a provided RenderableCollector. The collector class (as part of the frontend render pass) sorted these renderables into their shaders (materials), so at the end of the front end pass, every shader held a list of objects it needed to display. The back end renderer sorted all the material stages by priority and asked each of them to render the objects that had been collected, by calling their OpenGLRenderable::render() method. After all objects rendered their stuff, the shader objects were emptied for the next frame. Culling of invisible objects happened by sorting objects into an Octree (which is a good choice for ortho view culling); some culling was done in the render methods themselves (both frontend and backend calls).
The problems at hand
Doing the same work over and over again: it's rare that all the objects in the scene change at once. Usually prefabs are moved around, faces are textured, brushes are clipped. When flying through a map using the camera view, or when shifting the ortho view around, the scene objects are unchanged for quite a number of frames. Separation of concerns: every renderable object in the scene implemented its own render() method that invoked the corresponding openGL calls. There was legacy-style glBegin/glEnd rendering (used for path nodes), glDrawElements, glCallList, including state changes like enabling arrays or setting up blend modes or colours. These are render calls that should rather be performed by the back end renderer, and should not be the responsibility of, let's say, a BrushNode. Draw Calls: since every object submitted its own geometry, there was no way to group the calls.
A moderately sized map features more than 50k brush faces, and about half as many patch surfaces. Rendering the whole map can easily add up to about 100k draw calls, with each draw call submitting 4 vertices (using GL_POLYGON). Inconsistent Vertex Data: since each object did the rendering on its own, it was free to choose what format to store its data in. Some stored just the vertex's 3D coordinates, some added colour information, some used full-featured vertices including normals and tangents. State Changes: since every object was handled individually, the openGL state could change back and forth between a few brush windings. The entity can influence the shader passes by altering e.g. the texture matrix, so each renderable of the same material triggered a re-evaluation of the material stage, leading to a massive amount of openGL state changes. Then again, a lot of brushes and patches belong to worldspawn, which never does anything like this, but optimisation was not possible since the backend knew nothing about that. Lighting mode rendering: lighting mode had a hard time figuring out which objects were actually hit by a single light entity. Also, the object-to-entity relationship was tough to handle for the back end. Seeing how idTech4 or the TDM engine handles things, DR has been doing it in reverse. Lighting mode rendering used to be part of the "solid render" mode, which caused quite a few if/else branches in the back end render methods. Lighting mode and fullbright mode are fundamentally different, yet they were using the same frontend and backend methods.
The Goals
openGL calls moved to the backend: no (frontend) scene object should be bothered with how the object is going to be rendered. Everything in terms of openGL is handled by the back end. Reduced number of draw calls: so many objects use the same render setup - they use the same material, are children of the same parent entity, and are even in almost the same 3D location.
Windings need to be grouped and submitted in a single draw call wherever possible. The same goes for other geometry. Vertex Data stored in a central memory chunk: provide an infrastructure to store all the objects in a single chunk of memory. This will enable us to transition to storing all the render data in one or two large VBOs. Support Object Changes: if everything is to be stored in a continuous memory block, how do we go about changing, adding and removing vertex data? Changes to geometry (and also material changes, like when texturing brushes) are a common use case and must happen fast. Support Oriented Model Surfaces: many map objects are influenced by their parent node's orientation, like a torch model surface that is rotated by the "rotation" spawnarg of its parent entity. A map can feature a lot of instances of the same model, and the renderer needs to support that use case. On the other hand, brush windings and patches are never oriented; they always use world coordinates. Unified vertex data format: everything that is submitted as renderable geometry to the back end must define its vertex data in the same format. The natural choice would be the ArbitraryMeshVertex type that has been around for a while. All in all, get closer to what the TDM engine is doing: by doing all of the above, we put ourselves in the position to port more engine render features over to DR, maybe even add a shadow implementation at some point.
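The unified vertex format goal can be illustrated with a struct modelled on the ArbitraryMeshVertex idea (the field names and layout here are assumptions for illustration, not the actual DR definition):

```cpp
#include <cstddef>

// Simplified vector types for the sketch.
struct Vector2 { float x, y; };
struct Vector3 { float x, y, z; };

// One vertex format for everything submitted to the back end:
// windings, patches, model surfaces and editor geometry alike.
struct MeshVertex
{
    Vector3 vertex;    // position (world space, or local space for model surfaces)
    Vector3 normal;
    Vector2 texcoord;
    Vector3 tangent;
    Vector3 bitangent;
    Vector3 colour;    // vertex colour shipped along with the geometry
};

// With a single tightly-packed format, the backend can set up vertex
// attribute pointers once, using fixed offsets into this struct.
static_assert(offsetof(MeshVertex, normal) == sizeof(Vector3),
              "normal follows the position directly");
```

Because every producer emits the same layout, the backend no longer has to special-case objects that carry only positions, only colours, or full tangent-space data.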
  10. That's a nice comparison in that talk, and some of the points made are very interesting. Some of them are perfectly applicable to TDM editing, while others are not (like the keyboard shortcuts example - I guess most editors provide customisable hotkeys). The func_instance is a nice concept that might be applicable to prefabs. If it weren't for the map file structure getting in the way, this might have been worthwhile investigating, but I assume this is not easy to accomplish while maintaining compatibility with the .map format.
  11. That case is handled: it doesn't create an encompassing group if that outer group already exists.
  12. That checkbox is unrelated to the format the prefabs are stored in. It just groups the imported piece together as a whole, whether there are sub-groups or not. No support in TDM, at least not yet. The portable format has been introduced for two reasons: first, to make it easier for other technologies to read the map format, since XML parsers are widely available; second, to fix the problem of clipboard data losing the group info when copying and pasting within and between DarkRadiant sessions - when copying data to the clipboard, the mapx format is used (you can easily see that when copying map parts and pasting them into a text editor like Notepad).
  13. As far as the file syntax is concerned, there's no difference between a Doom 3 .map and a Doom 3 .pfb. When DR saves a .pfb file, no corresponding .darkradiant file is created, i.e. all map metadata like layer and group info is lost. There's no difference in the XML structure of the mapx and pfbx file contents. No .darkradiant file will be created for either of them, since it's not necessary: layer and group information is saved in the mapx/pfbx file itself. A .pfb file uses the decl-style syntax of Doom 3 maps, while .pfbx is an XML file. The XML file takes up more space on disk than the decl-based one, if anybody cares about that. Yes, these two commands are different, as expected: Export Selected as Map creates a regular map file without adding any additional stuff like those sealing brushes or moving the player start around. It's just for saving the selected part of the map - it's really just what the name says. The region feature has the goal of making a part of the map (the regioned one) survive the dmap flood fill phase, hence the wall brushes.
  14. Yep, when building from source you're linking the DR binaries against the specific libraries that were present on your system at that time. They become dependencies - removing them will cause the DR binaries to stop working. Just re-build and you should be fine.
  15. Yes, I'm pretty confident that recompiling DR will fix this. Did you compile from source before or did you install it from a package?
  16. Yes, that's one way. But you don't even need to overwrite the current installation, you can also run the new version side by side with the old one, using a separate folder for 2.14. That way you can switch back to the old version if you don't like it. When switching back and forth, depending on how old your current version is, you might lose a keyboard shortcut or two. (If the old version is 2.12 or 2.13, I think no shortcuts will be lost.)
  17. Nothing to worry about, this commit has been merged to master after the release. The 2.14.0 release is based on 9006af6a0. Plenty of time to fix it in master like any other regression.
  18. Oh, missed that one. I added it to the list! And thanks for the kind words.
  19. DarkRadiant 2.14.0 is ready for download. This release focused on DarkRadiant's texturing abilities: the Texture Tool and some of the Surface Inspector algorithms have been completely rewritten. A new model importer UI has been added, with the ability to convert FBX models into a format compatible with the game (it can also convert LWO, ASE and OBJ models). The EntityInspector can now deal with more than one selected entity, showing the shared key values in the list. Highlights:
  • Copy/Paste Textures across angled faces
  • Texture Tool Rotate Tool (use the "R" hotkey to switch)
  • Surface Inspector Harmonise Scale / Linked Scaling
  • Surface Inspector Normalise
  • EntityInspector Multi-Selection Support
For more things that have changed or been fixed, see the list below. Windows and Mac downloads are available on GitHub: https://github.com/codereader/DarkRadiant/releases/tag/2.14.0 and of course linked from the website https://www.darkradiant.net Thanks go out to all who helped testing this release! Please report any bugs or feature requests here in these forums, following these guidelines:
  • Bugs (including steps for reproduction) can go directly on the tracker. When unsure about a bug/issue, feel free to ask.
  • If you run into a crash, please record a crashdump: Crashdump Instructions
  • Feature requests should be suggested (and possibly discussed) here in these forums before they may be added to the tracker.
Changes since 2.13.0
  • Feature: Texture Tool Improvements
  • Feature: Texture Tool: Add Manipulation Panel to shift/scale/rotate selection
  • Feature: Show shared keyvalues when multiple entities are selected
  • Feature: Texture Browser Filter: match multiple words (using 'AND' logic)
  • Feature: Skin Chooser shows materials of the model
  • Feature: Surface Inspector: Add buttons to harmonise Horizontal and Vertical scale values
  • Feature: Improved pasting textures to angled faces sharing an edge
  • Feature: XY view zoom is centered at cursor
  • Feature: Texture Tool: Constrain operations to axes by holding down Shift
  • Feature: Texture Tools: rotate function
  • Feature: Texture Tool: UI contrast
  • Feature: Model Conversion UI
  • Feature: Add FBX model importer
  • Feature: add IQM format support into lib/picomodel
  • Feature: Spawnarg type icon not shown for inherited properties
  • Improvement: New Game Connection GUI
  • Improvement: "Replace Selection with exported Model" preserves spawnargs
  • Improvement: automatically reload exported models
  • Improvement: Search function: don't start searching while still typing
  • Improvement: MediaBrowser toolbar: clear filter text when texture is selected through MMB or Texture Browser
  • Improvement: Merge "Create player start" and "Move player start" options
  • Improvement: Patch Texture Rotation should take aspect ratio into account
  • Improvement: Texture Tool: use aspect ratio of material
  • Improvement: Step-rotating textures through the Surface Inspector should be using the center as pivot
  • Improvement: Surface Inspector: Option to change horizontal and vertical scale values proportionally
  • Improvement: Apply textures to surfaces using "normalized" scaling
  • Improvement: Normalise button brings texture coordinates closer to 0,0
  • Improvement: Prevent Texture Tool "face jump" on rescaling textures
  • Improvement: Move modifier hints out of the status bar
  • Improvement: Flip Texture: Prevent huge face UV coordinate translations
  • Improvement: Double click on list elements should auto-close dialogs
  • Improvement: Texture Tool: Select items by clicking the UV space they cover
  • Improvement: Texture Tool: Grid lines are getting too dense when zooming out a lot
  • Improvement: Texture Tool: intercept keystrokes for grid resizing & snap to grid
  • Improvement: Model Exporter: warn if Output Format and extension in File Path don't match
  • Improvement: Change Quake3 map exporter to write "legacy" brush syntax
  • Fixed: Q3 Legacy BrushDef parser sometimes produces wrong texture rotation
  • Fixed: "Replace Selection with exported Model" assigns result to Default layer
  • Fixed: All scene graphs connect to the same undo system, causing interference
  • Fixed: Remove Floating Layout
  • Fixed: EntityInspector allows to set an entity's name to an empty value
  • Fixed: modelDefs folder starts expanded after changing selection
  • Fixed: Particle Editor: wireframe does not render
  • Fixed: Drag-select while in texture tool window gets stuck
  • Fixed: Some brushes change shape or disappear when rotated or duplicated
  • Fixed: Texture Tool: drag operation doesn't capture the mouse
  • Fixed: Ctrl-S does not work when focus is on inputs
  • Fixed: Autosave filename unhelpfully overwrites 'save copy as' filename
  • Fixed: Merge Maps: can't hide changed entities/primitives
  • Fixed: Merge Maps: can't center orthoview/camera on changed entities
  • Fixed: Merge Maps UI remains if DR is closed while a merge is in progress
  • Fixed: Merge Maps: "Details" text doesn't use full width of window
  • Fixed: Brushes colour schemes not saving
  • Fixed: Fit Texture fields do not allow values below 1.0
  • Fixed: PatchDefExporter: do not write trailing white space after shader name
  • Fixed: LWO2 Model Exporter doesn't write vertex colours
  • Fixed: Objective components not correctly renumbered after removing a component
  • Fixed: Applying a skin to a model entity no longer works under 2.14pre1
  • Fixed: Spawnarg types and tooltips not reliably inherited in entityDefs
  • Fixed: Crash when saving map or prefab without a file extension
  • Fixed: Texture Tool crashes when creating a new brush
  • Fixed: "Texture tool" grid cannot decrease under 1
  • Fixed: Texture Tool: dragged vertices snap to grid even though it's switched off
  • Fixed: Sound chooser not pre-selecting the inherited value of snd_* keys of an entity
  • Fixed: User Guide (Local) doesn't work
  • Fixed: Restore GL_LINEAR_MIPMAP_LINEAR texture filtering
  • Tweak: Surface Inspector vertical shift / vertical scale arrows
  • Tweak: Surface Inspector's minimum width is too large
The list of changes can be found on our bugtracker changelog. Have fun mapping!
  20. Perfectly sensible approach to describe it on the forums first, to clarify whether it's an actual bug or not. It can still be converted to an actual issue later.
  21. Is the situation really that gloomy? From what it sounds like, I think it's not a DR bug but something related to the shader, so it might indeed get more attention in the editor's guild. Generally, to be of assistance with issues like this I'd need to look deeper into it, like setting up a test map and checking out the materials. I didn't do that since I was occupied with other stuff, and I assume the same applies to all the other people who didn't respond here.
  22. A new pre2 build with a list of bug fixes is available in the first post.
  23. I couldn't track down where this is coming from. I'm on the Dockable layout all the time, and I tried what you suggested, but it always works for me. I made the cursor-centered scrolling an option in the most recent builds (like here), so you can switch it off. It's still baffling what might cause this; maybe we'll find out some time. I didn't change anything in that area at all. @OrbWeaver, have there been any changes to texture filtering?
  24. It's trying to auto-detect the mode: if the texture is brighter than 50% on average, the dark theme is set. I can make the auto-detect feature optional. I never tried to go to infinity, but I guess you're right. Please add a tracker entry.
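The described auto-detection boils down to an average-brightness check against a 50% threshold. A sketch of the idea (the Theme enum, the Pixel type and the unweighted RGB average are assumptions; the post above only mentions the threshold):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum class Theme { Light, Dark };

struct Pixel { std::uint8_t r, g, b; };

// Picks the theme based on the average brightness of the texture:
// textures brighter than 50% on average get the dark theme for contrast.
Theme detectTheme(const std::vector<Pixel>& pixels)
{
    if (pixels.empty()) return Theme::Light;

    double sum = 0;
    for (const auto& p : pixels)
    {
        sum += (p.r + p.g + p.b) / 3.0; // simple unweighted channel average
    }
    double average = sum / pixels.size();

    return average > 127.5 ? Theme::Dark : Theme::Light;
}
```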