The Dark Mod Forums

OrbWeaver

Active Developer
  • Posts: 8629
  • Days Won: 65

Everything posted by OrbWeaver

  1. Oh yes, sound radii definitely should not be opaque, since these are typically much larger than the relatively small light center points.
  2. I actually like them opaque. It makes them easier to see and less likely to be confused with other wireframe objects (such as the light boundaries).
  3. Sure, I'll create a bug for it. I did try turning the _geometryStore member into a unique_ptr and explicitly resetting it in shutdownModule(), but this did not solve the problem. However, it's possible I did not do it in the correct order with respect to other members which need to be cleaned up. I also noticed that there is a potential race condition during the shutdownModule calls themselves, because we don't actually take dependencies or initialisation order into account during shutdown — we just shut down modules in the order they appear in the _initialisedModules map (which I guess is alphabetical); a sketch of a reverse-order shutdown appears after this list. This was causing my HeadlessOpenGLModule to be shut down before the OpenGLShaderSystem, which I was convinced was the cause of the problem... but even fixing this did not help.
  4. The Linux segfault is definitely not merge-related, since I can reproduce it in your branch prior to the merge commit. It looks like a problem with OpenGL being called during shutdown (maybe after the context has been destroyed or invalidated?). Here is the stacktrace: The actual line which crashes is:

         -> 31 glDeleteBuffers(1, &_buffer);

     which then jumps to address 0x0000, which I think can only mean that the glDeleteBuffers function pointer itself has been set to null. My first thought was that headless OpenGL simply won't work on Linux, but this doesn't really make sense as an explanation, because the crash happens in an OpenGL call during shutdown, which must have been preceded by several other calls (which did not crash) during initialisation and while running the tests. Also, there is a HeadlessOpenGLContextModule which has been used by tests for 18 months without crashing, so it can't be the case that no OpenGL commands work in tests. I'm guessing this must be something related to the order of destruction of some of the new rendering-related objects (as commented within the OpenGLRenderSystem destructor), but I'm not sufficiently up to speed on how the new objects fit together to identify an obvious root cause.
  5. Initial source merged on Linux. Renderer changes look great. Shadows are working, and it's nice to see what look like more game-correct colours (I wonder what we were doing wrong in the previous shaders which made the colours and brightness different, although I suspect the answer lies in some mathematical detail that I would struggle to understand). The new shadow toggle button confused me, though, because it looks like another option in the existing set of "radio buttons" which control the render mode, but this is a separate toggle which is independent of the other buttons. I would suggest making it mutually exclusive for consistency with the others — we could save space by getting rid of the untextured all-white solid mode, which I'm pretty sure is completely useless for most mapping tasks (I suspect that the wireframe one is occasionally useful for some people, although only in specific situations). There are some post-merge segfaults in unit tests which I need to investigate to determine if they are Linux-specific or caused by some merge conflict.
  6. It won't get forgotten because the whole point of the bug tracker is to keep track of open bugs. However it is not fixed and will probably not be fixed in the upcoming release. I did some initial examination of the code but did not identify any quick solution, then suspended work on this to avoid creating merge conflicts with Greebo's extensive and ongoing changes to the renderer.
  7. Right, I'm assuming it would need to be a completely different build mode set via CMake, which could then set whatever different options were necessary with regard to static vs dynamic CRT or other dependencies.

     In theory I think it should be possible to expose pretty much anything this way — the interface might be a bit more cumbersome to use, but you can always provide a header file with convenient (but optional) C++ wrapper classes which implement more familiar RAII and object-based semantics. Even lists and maps can be exposed, e.g.

         struct TDMStringList; // opaque

         // Get list of maps
         TDMStringList* tdm_installation_get_map_list(TDMInstallation* inst);

         // Manipulate list
         int tdm_stringlist_get_item_count(TDMStringList* list);
         const char* tdm_stringlist_get_item(TDMStringList* list, int index);
         void tdm_stringlist_free(TDMStringList* list);

     Obviously I wouldn't want to write code in this style all day, but using it just to traverse a DLL boundary and possibly wrapped in some C++ helper classes would be manageable (a sketch of such a wrapper appears after this list).

     Apparently that's not trivial on Linux: https://stackoverflow.com/questions/6617679/using-dlopen-on-an-executable If you want to use the executable as a library, it seems you need to use the PIE (position-independent executable) compiler option(s), at which point you might as well just build a DLL anyway (unless the binary is always going to be built PIE).

     No disagreement from me there. It seems like a hacky system with little regard to best practices for translation, which is why I've never made any effort to replicate it in DR.

     Sure, I wouldn't expect the DLL to already be there; it would be something we have to integrate ourselves. Although at that point it might be better to skip the DLL altogether and just do source code or static library integration.
  8. I'd probably approach from a slightly different direction: rather than trying to extract and isolate small parts of the TDM code and call these in a DLL from both the game and DR (which introduces problems with dependencies on other parts of the code), I would try adding a DLL-style build mode for the whole game binary — perhaps chosen with a CMake option — so that you could choose to build either the game itself or a DLL containing most of the same code.

     You'd then need a suitable DLL interface on the game side, which I would suggest should be pure C and as simple as possible, so that it isn't necessary to expose all of the idLib stuff and deal with the complexities of C++ binary interfaces. So you might end up with an interface a bit like original GTK:

         struct TDMInstallation; // opaque type

         // Initialise new installation and return object owned by the DLL
         TDMInstallation* tdm_installation_new(const char* path);

         // Compile a given map, return a status code
         int tdm_installation_compile_map(TDMInstallation* installation, const char* mapName);

         // Properly dispose of the installation object
         void tdm_installation_free(TDMInstallation* installation);

     This way you effectively have full encapsulation of the DLL code, and an essentially object-oriented interface using C functions and opaque pointers instead of C++ classes with private members and public methods (a usage sketch appears after this list).

     If we were ever going to try this I'd suggest starting with something very simple and self-contained. Compiling maps is probably OK, or maybe exposing Tels' i18n system, which has never been ported into DR and results in DR not being able to show internationalised names for difficulty settings. Rendering of course would be a much more difficult task.
  9. An amazing leap forward for the DR renderer. Although all of this manual synchronisation work makes me think that it would be really nice to have some of the common code split into a DLL which could be used from DR as well as the game engine, allowing both editor and game to behave the same without needing a whole bunch of duplicated code. But of course that introduces difficulties of its own, especially when the two projects are using entirely different source control systems.
  10. You know you can immediately select lockpicks (and toggle between them) by pressing P, right? No need to find them by scrolling through the whole inventory. I'm not in a position to test right now and I don't recall whether there is a dedicated shortcut for health potions, but I wouldn't be surprised if there was one. Maybe check your key binding preferences to see what inventory shortcuts are available and what they are bound to.
  11. Even Thief 1/2 had crystals instead of full arrows if you found them during a mission. They were a sort of low-resolution pointed cylinder shape, in the colour of their element (with gas particles rising in the case of gas crystals). It was never explained how the player somehow turned these into arrows, but presumably it should be understood that he carries some empty shafts to attach the crystals to. Having to do this manually in game would be annoying and add no gameplay value, unless it was opening up some new possibilities like crafting unusual combination arrows with multiple crystal types.
  12. The visual design looks good but legibility of the foreground text is suffering. Since the location/level of detail in the background image is not predictable, it would be better to use something more visible for the text than transparent dim grey over slightly darker transparent grey.
  13. That would be something worth profiling, for sure. I actually have no idea what is better for performance: setting a single glColor and then rendering all vertices without colours, or passing each colour per-vertex even if they are all the same colour. Perhaps it varies based on the GPU hardware.

      That's perfectly reasonable of course. I probably would have approached things the same way. Minimising divergent code paths is good for future maintainability but it doesn't need to happen right away, and can be implemented piecemeal if necessary (e.g. the Brush class still has separate methods for lit vs unlit rendering, but they can delegate parts of their functionality to a common private method).

      Yes, that's what I would imagine to be the hurdle with const shaders — the mapping between Shader and objects has to happen somewhere, and if it isn't in the shader itself then some external map needs to be maintained, which might be a performance issue if relatively heavyweight structures like std::maps need to be modified thousands of times per frame.

      I would certainly give consideration to whether the windings and geometry could use the same implementation, because it does seem to me that their roles are more or less the same: a buffer of vertices in world space which can be tied together into various primitive types. This is something that VBOs will handle well — it should be possible to upload all the vertex data into a single buffer, then dispatch as many draw calls as needed, using whatever primitive types are desired and making reference to particular subsets of the vertices (a sketch of this pattern appears after this list). This could make a huge difference to performance because once the data is in the VBO, you don't need to send it again until something changes (and even then you can map just a subset of the buffer and update that, rather than refreshing the whole thing).

      Ah, I didn't spot the difference in coordinate spaces. That is one fundamental difference between models and other geometry which might merit keeping a separate implementation. So I guess we might end up with a TransformedMeshRenderer for models and a WorldSpacePrimitiveRenderer for everything else, or some distinction like that.

      Unfortunately this is one of the times when manual memory management really is necessary: if we want to (eventually) put things in a VBO, the buffer has to be managed C-style with byte pointers, offsets and the like. I certainly don't envy you having to deal with it, but the work should be valuable because it will transition very neatly into the sort of operations needed for managing VBO memory.
  14. Overall these changes sound excellent. You have correctly (as far as I can tell) identified the major issues with the DR renderer and proposed sensible solutions that should improve performance considerably and leave room for future optimisations. In particular, trying to place as much as possible in a big chunk of contiguous RAM is exactly the sort of thing that GPUs should handle well. Some general, high-level comments (since I probably haven't even fully understood the whole design yet, much less looked at the code).

      Wireframe versus 3D
      I always thought it was dumb that we had different methods to handle these: at most it should have been an enum/bool parameter. So it's good to see that you're getting rid of this distinction.

      Unlit versus lit renders
      As you correctly point out, these are different, particularly in terms of light intersections and entity-based render parameters (neither of which need to be handled in the unlit renderer), so it makes sense to separate them and not have a load of if/then statements in backend render methods which just slow things down. However, if I'm understanding correctly, in the new implementation almost every aspect will be separate, including the backend data storage. Surely a lot of this is going to be the same in both cases — if a brush needs to submit a bunch of quads defined by their vertices, this operation would be the same regardless of whatever light intersection or GLSL setup calculations were performed first? Even if lighting mode needs extra operations to handle lighting-specific tasks, couldn't the actual low-level vertex sorting and submission code be shared? If double RAM buffers and glFenceSync improve performance in lit mode, wouldn't unlit mode also benefit from the same strategy (a sketch of the fence pattern appears after this list)? I guess another way of looking at it is: could "unlit mode" actually be a form of lit mode where lighting intersections were skipped, submitted lights were ignored, and the shader was changed to return full RGB values for every fragment? Or does this introduce performance problems of its own?

      Non-const shaders
      I've never liked the fact that Shaders are global (non-threadsafe) modifiable state — it seems to me that a Shader should know how to render things but should not in itself track what is being rendered. Your changes did not introduce this problem and they don't make it any worse, so it's not a criticism of your design at all, but I wonder if there would be scope to move towards a setup whereby the Shaders themselves were const, and all of the state associating shaders with their rendered objects was held locally to the render operation (or maybe the window/view)? This might enable features like a scrollable grid of model previews in the Model Selector, which I've seen used very effectively in other editors. But perhaps that is a problem for the future rather than today.

      Winding/Geometry/Surface
      Nothing wrong with the backend having more knowledge about what is being rendered if it helps optimisation, but I'm a little unclear on the precise division of responsibilities between these various geometry types. A Winding is an arbitrary convex polygon which can be rendered with either GL_LINES or GL_POLYGON depending on whether this is a 2D or 3D view (I think), and most of these polygons are expected to be quads. But Geometry can also contain quads, and is used by patches which also need to switch between wireframe and solid rendering, so I guess I'm not clear on where the boundary lies between a Winding and Geometry. Surface, on the other hand, I think is used for models, but in this case the backend just delegates to the Model object for rendering, rather than collating the triangles itself? Is this because models can have a large variation in the number of vertices, and trying to allocate "slots" for them in a big buffer would be more trouble than it's worth? I've never had to write a memory allocator myself so I can certainly understand the problems that might arise with fragmentation etc, but I wonder if these same problems won't rear their heads even with relatively simple Windings.

      Render light by light
      Perfect. This is exactly what we need to be able to implement things like shadows, fog lights etc (if/when anybody wishes to work on this), so this is definitely a step in the right direction.

      Overall, these seem like major improvements and the initial performance figures you quote are considerable, so I look forward to checking things out when it's ready.
  15. I'm going to have to check that actually: I had been assuming that it worked for any directory with files in, but the docs say models or materials only, which means it might not actually work for def files after all (or at least hasn't been tested with them). If it's going to require code changes anyway, then perhaps implementing an editor_hidden spawnarg would actually be a better solution for entity defs.
  16. You can use assets.lst to hide entity defs too, but you have to do it at the file level, not the individual entity level. This isn't as restrictive as it sounds, because neither the game nor DR care what file each entity is defined in, so you can freely move all the entities you want to hide into a single file (e.g. "hidden.def") and then specify this file in the assets.lst.
  17. I'm pretty sure you'll need GTK3. GTK2 is obsolete.
  18. This has nothing to do with "censorship". Google is auto-correcting "stew" to "soup" because the words are similar in meaning and there are many more hits for soup than stew, so Google assumes this is what you really meant. Google ceased to be a pure text-string search engine many years ago, and now takes into account meanings, associations and other higher-level concepts. If you want to search for something literally, you can put it in quotes. In this case there are many hits for "cheeseburger stew". https://www.google.com/search?q="cheeseburger+stew"
  19. Oh, that is definitely weird behaviour then. I can't see any situation in which that would be desired. Either it should loop the first randomly-selected sound, or it should play all of the sounds one after the other in a random sequence.
  20. I think we're mixing up different issues here. The fact that s_looping 0 when explicitly set on a speaker does not override the sound shader looping property, as discussed in the "Stubborn looping speaker" thread, is in my opinion a bug which should be fixed. However in this thread you quoted a report from Spooks which says:

      I'm not clear what the expected behaviour is here. As far as I know, the looping property does not mean "play sounds from the playlist in sequence", it means "play a single sound file in a continuous loop". If you are playing a single sound in a continuous loop, you never stop playing it, which means there is no obvious point at which the engine can switch to playing a different sound. As far as I can see there are two possible ways to make looping and multiple sounds work together:

      1. Pick a random sound from the playlist and play it till the end. Then pick another random sound and do the same. Keep doing this forever (but do not "loop" any individual sound). This would require that each individual sound is designed to seamlessly transition into any other sound from the playlist.

      2. Pick a random sound from the playlist and play this single sound in a continuous loop. Since this loop will never end, there will be no automatic transition into another sound. However, if the speaker is deactivated and triggered again, a different random sound could be chosen to loop continuously.
  21. DEF files only govern the behaviour of entities that you insert with Right Click -> Create Entity. When you do Right Click -> Create Speaker you create a generic "speaker" entity with a configurable sound shader. There are no DEF files involved at this point, other than the DEF for the "speaker" entity itself which does not specify a particular sound shader. In order to use entityDef properties as defaults for sound shaders, you would need to create a specific entityDef for each and every sound shader (e.g. "speaker_hum01", "speaker_hum02", "speaker_ambientdogbark" or whatever). This seems like a lot of work for very little gain, and would clutter the entity tree considerably.
  22. I've never known looping to switch between different sounds in a shader. Are you absolutely sure this behaviour was in previous versions in the game? "Looping" means "Play this sound in a loop forever". "Forever" does not end, so how would the game know when to switch to another sound, and how would it do this seamlessly? Does the engine just pick a random number of seconds to play each sound for, then fade to another one? How does the player control this process, since as far as I know there are no keywords to specify how long each sample in a shader should be played for? Or is the expectation that the randomly-chosen sound should play forever until the speaker is deactivated (i.e. because the player moves out of range), and then it should randomly choose a different sound to loop when the speaker is triggered again?
  23. It seems like a bug that explicitly setting s_looping 0 does not override the looping keyword in the sound shader. The sound shader keywords should be defaults that can always be overridden on a per-speaker basis.
  24. Background music in missions can be adjusted with the slider in settings. I typically have it around 50% because ambient music does tend to be too loud by default.
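A note on the module shutdown ordering mentioned in post 3: below is a minimal sketch of the kind of reverse-order shutdown being described. The class and method names are illustrative only and simplified relative to DarkRadiant's actual module API.

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // Simplified stand-in for DarkRadiant's module interface
    class RegisterableModule
    {
    public:
        virtual ~RegisterableModule() = default;
        virtual void initialiseModule() = 0;
        virtual void shutdownModule() = 0;
    };

    class ModuleRegistry
    {
        std::map<std::string, std::shared_ptr<RegisterableModule>> _initialisedModules;

        // Remember the order in which modules were initialised, so shutdown
        // can run in the opposite order instead of alphabetical map order.
        std::vector<std::shared_ptr<RegisterableModule>> _initialisationOrder;

    public:
        void initialiseAndRegister(const std::string& name,
                                   std::shared_ptr<RegisterableModule> module)
        {
            module->initialiseModule();
            _initialisedModules.emplace(name, module);
            _initialisationOrder.push_back(module);
        }

        void shutdownModules()
        {
            // Modules initialised last (which may depend on earlier ones)
            // are shut down first.
            for (auto it = _initialisationOrder.rbegin(); it != _initialisationOrder.rend(); ++it)
                (*it)->shutdownModule();

            _initialisationOrder.clear();
            _initialisedModules.clear();
        }
    };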
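Post 7 mentions wrapping the C-style list functions in optional C++ helper classes with RAII semantics. A minimal sketch of such a wrapper, assuming the tdm_stringlist_* declarations from that post (the StringList class itself is hypothetical, and linking would require the actual DLL):

    #include <string>
    #include <vector>

    // Hypothetical C interface from the DLL (see post 7)
    extern "C" {
        struct TDMStringList;
        int tdm_stringlist_get_item_count(TDMStringList* list);
        const char* tdm_stringlist_get_item(TDMStringList* list, int index);
        void tdm_stringlist_free(TDMStringList* list);
    }

    // RAII wrapper: owns the opaque handle and frees it on destruction.
    class StringList
    {
        TDMStringList* _list;

    public:
        explicit StringList(TDMStringList* list) : _list(list) {}
        ~StringList() { if (_list) tdm_stringlist_free(_list); }

        // Non-copyable, since the underlying handle has single ownership
        StringList(const StringList&) = delete;
        StringList& operator=(const StringList&) = delete;

        // Copy the items into a familiar C++ container
        std::vector<std::string> items() const
        {
            std::vector<std::string> result;
            const int count = tdm_stringlist_get_item_count(_list);
            for (int i = 0; i < count; ++i)
                result.push_back(tdm_stringlist_get_item(_list, i));
            return result;
        }
    };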
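Post 8 sketches an opaque-pointer C interface for a whole-game DLL. For illustration, a caller on the editor side might use it roughly like this (all of the tdm_installation_* names are the hypothetical ones from that post, the path and map name are made up, and linking would require the actual DLL):

    #include <cstdio>

    // Hypothetical C interface exported by the game DLL (see post 8)
    extern "C" {
        struct TDMInstallation;
        TDMInstallation* tdm_installation_new(const char* path);
        int tdm_installation_compile_map(TDMInstallation* installation, const char* mapName);
        void tdm_installation_free(TDMInstallation* installation);
    }

    int main()
    {
        // Point the DLL at an installed game and ask it to compile a map.
        TDMInstallation* inst = tdm_installation_new("/path/to/darkmod");
        if (!inst)
            return 1;

        int status = tdm_installation_compile_map(inst, "my_mission");
        std::printf("map compile returned status %d\n", status);

        // The DLL owns the object, so hand it back for disposal.
        tdm_installation_free(inst);
        return status;
    }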
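Post 13 suggests putting all winding/geometry vertices in one large VBO, drawing subsets with whatever primitive type is needed, and re-uploading only the ranges that change. A minimal sketch of that pattern in plain OpenGL (using the legacy client-state pointers for brevity; a current GL context and GLEW initialisation are assumed, and the helper names are illustrative):

    #include <GL/glew.h>
    #include <cstddef>
    #include <vector>

    struct Vertex { float position[3]; float colour[4]; };

    // One large buffer holding all winding/patch vertices contiguously.
    GLuint vbo = 0;

    void uploadAll(const std::vector<Vertex>& vertices)
    {
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                     vertices.data(), GL_DYNAMIC_DRAW);
    }

    // When a single face changes, update only its slice of the buffer.
    void updateRange(std::size_t firstVertex, const std::vector<Vertex>& changed)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, firstVertex * sizeof(Vertex),
                        changed.size() * sizeof(Vertex), changed.data());
    }

    // Draw any subset with any primitive type without re-sending vertex data.
    void drawRange(GLenum primitiveType, GLint firstVertex, GLsizei count)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), nullptr);
        glColorPointer(4, GL_FLOAT, sizeof(Vertex),
                       reinterpret_cast<const void*>(offsetof(Vertex, colour)));
        glDrawArrays(primitiveType, firstVertex, count); // e.g. GL_LINES or GL_TRIANGLES
    }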
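Post 14 asks whether the double-RAM-buffer plus glFenceSync strategy used in lit mode could also benefit unlit mode. For reference, the basic fence pattern looks something like the sketch below (this is not the actual DarkRadiant implementation; it assumes each slot's VBO was already created and sized with glBufferData elsewhere):

    #include <GL/glew.h>

    // Two buffers used alternately, so the CPU can fill one while the GPU
    // is still reading from the other.
    struct FrameSlot
    {
        GLuint vbo = 0;
        GLsync fence = nullptr; // signalled once the GPU has finished with this VBO
    };

    FrameSlot slots[2];
    int current = 0;

    void submitFrame(const void* vertexData, GLsizeiptr numBytes)
    {
        FrameSlot& slot = slots[current];

        // Before overwriting this buffer, wait until the GPU has finished
        // the frame that last used it (real code should check the result
        // for GL_TIMEOUT_EXPIRED or GL_WAIT_FAILED).
        if (slot.fence)
        {
            glClientWaitSync(slot.fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                             GLuint64(1000000000)); // 1 second timeout, in nanoseconds
            glDeleteSync(slot.fence);
            slot.fence = nullptr;
        }

        glBindBuffer(GL_ARRAY_BUFFER, slot.vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0, numBytes, vertexData);

        // ... issue draw calls referencing slot.vbo here ...

        // Insert a fence after the draw calls; once it is signalled the GPU
        // has consumed this buffer and it is safe to refill.
        slot.fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

        current = (current + 1) % 2;
    }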