The Dark Mod Forums

OrbWeaver

Active Developer
Everything posted by OrbWeaver

  1. My motherboard died in December so I'm now running a Ryzen 3700X (with just the stock cooler) on a B450 motherboard and I love it. Intel seem like the Levi's designer boxer shorts of the CPU world — you're just paying for the name, but the product isn't any better (and according to many reviews is actually worse). Unless AMD seriously drop the ball I can't see myself going back to Intel processors any time soon.
  2. Dependencies on the render system in particular are one of the things which I would suggest decoupling is quite important, because I know from experience at work that unit tests which rely on rendering are the most likely to break, especially if they need to run as part of a build process in which full access to the graphical environment may not be available. With regard to this specific example, a couple of options spring to mind: Although the m_lightList pointer is initialised in the constructor, it isn't actually used. The constructor could instead initialise the pointer to nullptr, then all other usages of the pointer elsewhere in the class could be replaced by a call to an internal method getLightList() which contains the initialisation code currently implemented in the constructor. This is a case of "kicking the can down the road" in that it doesn't actually remove the dependency on the render system, but delays the call until the result is actually needed (which it might never be, in the unit test). Instead of calling GlobalRenderSystem() itself, the BrushNode constructor could accept a reference to an abstract RenderSystem interface object, which in the unit test is just a mock object implementing the interface but not doing anything, rather than the real implementation used in DarkRadiant. If RenderSystem is too large and complex to mock in this way, the interface could be split up: a new interface LightListProvider could provide just the virtual methods used here (e.g. attachLitObject()), and this new interface passed to the BrushNode constructor instead of the whole RenderSystem (of course the RenderSystem would also need to be modified to derive from LightListProvider as well). It's worth pointing out that there is a difference between a unit test, which is generally understood as an isolated test of just a single class (maybe even a single method), and an integration test which is a much broader test of a large number of subsystems at once. Both types of test are useful: a unit test reveals specific problems in methods or classes which are localised to those methods or classes (and more easily fixed), whereas an integration test will confirm that "everything is [not] working" but probably won't offer much information as to exactly where a problem lies. I think what you're aiming for here is much closer to an integration test than a unit test. There is nothing wrong with this strategy, which will certainly be useful for confirming that saving code hasn't been broken by some refactoring or change, but if you really want to be able to isolate particular problems with automated tests (i.e. "this method of BrushNode is returning a wrong value", rather than "saving is broken"), then more granular unit tests are ultimately going to be necessary, and unfortunately this does mean that a different set of dependencies may need to be mocked in each unit test. However, the test fixture system can help with this, by allowing sets of mocked dependencies can be re-used across multiple tests, so you don't need to implement them from scratch each time. Yes, that will almost certainly be necessary, and I did this for the PK4 testing. In my case this was fairly easy but it was for a very simple test setup: basically just checking that a model file would appear with a "hidden" property if there was an assets.lst file in the appropriate place. 
For testing full-scale saving and loading I suspect a lot more resources will be needed, including several entity definitions, materials and objects, but probably nothing very large in terms of megabytes. Even if you need actual texture images, these could be very low resolution (e.g. 16x16 TGAs), which would take up very little space in the repository. I don't suppose you will need any sound files, and everything else should compress pretty well in the PK4.
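To make the second option more concrete, here is a minimal sketch of the interface split. The names LightListProvider, attachLitObject() and BrushNode come from the discussion above, but the exact signatures and the stub types are assumptions for illustration, not the real DarkRadiant declarations:

    class LightList {};   // stand-in for the real DarkRadiant class (assumption)
    class LitObject {};   // stand-in for the real DarkRadiant class (assumption)

    // Narrow interface exposing only the render-system functionality BrushNode needs.
    class LightListProvider
    {
    public:
        virtual ~LightListProvider() = default;
        virtual LightList& attachLitObject(LitObject& object) = 0; // assumed signature
    };

    // BrushNode receives the dependency instead of calling GlobalRenderSystem() itself.
    class BrushNode : public LitObject
    {
        LightList* m_lightList;
    public:
        explicit BrushNode(LightListProvider& provider) :
            m_lightList(&provider.attachLitObject(*this))
        {}
    };

    // In the test suite a do-nothing mock stands in for the real render system,
    // so a BrushNode can be constructed without any rendering environment.
    class MockLightListProvider : public LightListProvider
    {
        LightList _dummy;
    public:
        LightList& attachLitObject(LitObject&) override { return _dummy; }
    };

The real RenderSystem would simply derive from LightListProvider as well, so production code keeps passing the actual render system while the tests pass the mock.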
  3. Right. That was something I noticed early on in my experiments with testing: trying to set up actual modules is basically impossible because of their complexity and interdependence. That is not a bad approach, and it might be the design which offers the maximum possible testing functionality. However, before committing to such a large task, it would be worth considering whether there are simpler approaches which allow you to test an isolated class without needing a complete module system. The problem is that a particular class might call GlobalFoo(), which returns a module which then needs to instantiate GlobalBar() and everything else. But there are sometimes refactorings you can make which improve this, for example:

Does the class really need the whole of GlobalFoo(), or is it just looking up some data (like "which image file extensions can I load?")? If it is just looking for some simple data, the class under test could be refactored to accept this data as a construction parameter, rather than calling GlobalFoo() and getting the data itself. You can then pass in the data explicitly in the unit test without needing the Foo module to exist (there is a sketch of this below).

Does the class only need to call GlobalFoo() for certain operations (like OpenGL rendering) which you are not actually testing? If so, the usage of GlobalFoo() could be wrapped in a callback or std::async so it doesn't actually get called until the relevant operation is performed.

Could a mock Foo module be created in the test suite, one which performs much simpler operations and/or doesn't rely on any other modules?

I'm sure I used all of these techniques when testing the PK4 file parsing, although I appreciate that this is a much simpler part of DarkRadiant than the whole map loading system, which undoubtedly relies on loads of other modules for loading various assets.

No objections from me. I did briefly look at Google Test as I recall, and didn't notice any specific problems with it; I think I went with Boost.Test mainly for convenience because we are already heavily using Boost. But if you find Google Test provides better functionality then I don't see a problem with integrating it.
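As a rough sketch of the first refactoring (the class name ImageFileChooser and its members are made up for illustration; only the idea of replacing a GlobalFoo() call with injected data comes from the discussion above):

    #include <string>
    #include <utility>
    #include <vector>

    // Before: the class would call something like a global image module internally
    // to discover the supported extensions, dragging the module system into the test.
    // After: the data is injected, so no modules need to exist when testing.
    class ImageFileChooser
    {
        std::vector<std::string> _extensions;
    public:
        explicit ImageFileChooser(std::vector<std::string> extensions) :
            _extensions(std::move(extensions))
        {}

        bool canLoad(const std::string& extension) const
        {
            for (const auto& ext : _extensions)
                if (ext == extension) return true;
            return false;
        }
    };

    // In a unit test the data is supplied directly, e.g.:
    //   ImageFileChooser chooser({"tga", "dds", "png"});
    //   BOOST_CHECK(chooser.canLoad("tga"));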
  4. On that subject, I added some basic unit tests (using Boost.Test) last year when I was working on the asset deprecation functionality, which you might find useful as a basis or guide for implementing new unit tests for saving and loading. The Boost macros and test fixture system are quite nice to use (see the skeleton below); the challenge of course is being able to split up the code so that it can be tested in isolation without needing to pull in dozens of other module dependencies which don't work in the test environment.
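For reference, the general shape of a Boost.Test fixture looks something like this (the module, fixture and test names are placeholders for illustration, not the actual tests in the DarkRadiant tree):

    #define BOOST_TEST_MODULE ExampleTests
    #include <boost/test/unit_test.hpp>

    // The fixture's constructor runs before each test case that uses it, and the
    // destructor runs afterwards, so set-up/tear-down are never forgotten.
    struct TempAssetsFixture
    {
        TempAssetsFixture()  { /* e.g. create temporary asset files */ }
        ~TempAssetsFixture() { /* e.g. remove temporary asset files */ }

        int modelCount = 1; // members are directly visible in the test body
    };

    BOOST_FIXTURE_TEST_CASE(exampleModelIsFound, TempAssetsFixture)
    {
        BOOST_CHECK_EQUAL(modelCount, 1);
    }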
  5. If a binary is requesting a version symbol from a library and that symbol cannot be found, this implies that the binary was built against a different version of the library than the one it is linking with at runtime. This might occur if you have more than one version installed, or if you are building against a separate tree of development libraries that don't match the system-wide libraries used to run applications. To be honest I don't think there should be a bugtracker entry for this, unless there is some evidence of a problem in the DarkRadiant build scripts. We don't embed any version symbols like "WXU_3.0.5" in the source code; we just ask the wx-config script to return the appropriate system-specific include and library paths. If your wx-config script is returning paths to a wxWidgets library which doesn't match the system-wide library, that is a problem with the build machine, not something we can fix at the source level. You can see which include and library paths will be used at build time by running the wx-config script manually, e.g.:

    $ wx-config --libs
    -L/usr/lib/x86_64-linux-gnu -pthread -lwx_gtk2u_xrc-3.0 -lwx_gtk2u_html-3.0 -lwx_gtk2u_qa-3.0 -lwx_gtk2u_adv-3.0 -lwx_gtk2u_core-3.0 -lwx_baseu_xml-3.0 -lwx_baseu_net-3.0 -lwx_baseu-3.0

    $ wx-config --cxxflags
    -I/usr/lib/x86_64-linux-gnu/wx/include/gtk2-unicode-3.0 -I/usr/include/wx-3.0 -D_FILE_OFFSET_BITS=64 -DWXUSINGDLL -D__WXGTK__ -pthread

It is quite possible that if you run these commands in your build environment they will point somewhere other than at /usr/lib/libwx_gtk3u_core-3.0.so.0, which would explain the version mismatch.
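Another quick check (the binary path here is just an assumption for a default install; adjust it to wherever DarkRadiant is installed on your system) is to ask the dynamic linker which wxWidgets libraries the built binary actually resolves at runtime:

    $ ldd /usr/local/bin/darkradiant | grep libwx

If the paths printed there differ from the ones reported by wx-config at build time, that mismatch would explain the missing version symbol.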
  6. There's nothing particularly wrong with such ideas for multiplayer Dark Mod-like games, but it doesn't make much sense to try to base the implementation on the Dark Mod code itself. The proposed game you describe would be so completely different from the Dark Mod that the existing codebase would be entirely unsuitable as a starting point — for example, TDM does not have any functional networking code, whereas much of the code which does exist is for things like AI patrols and the light gem which would have little or no usage in your game. If somebody wanted to implement this they would be better off starting with a more modern game engine which already has viable network-based multiplayer, and then implementing the more Thieflike mechanics on top of this (you could even import TDM models and textures if there was a suitable format conversion available). I seem to recall there was an attempt to do something like this in Unreal Tournament many years ago, although I don't know what happened to the project.
  7. If you want a sphere with 32 sides, you are probably better off not using a brush. Because of the way brushes are represented, very complex geometry can cause mathematical issues, and I wouldn't be surprised if such a complex brush gives some kind of problem (missing triangles after dmap etc).
  8. I must be misunderstanding something. Why are you trying to play the latest version of an actively-developed game on a retro gaming PC? Why would you expect that to give you good results? Isn't the purpose of retro-computing to play games from the same time period as the retro PC (e.g. building an ultimate Voodoo 3D gaming rig from 1998 to play Thief 2)?
  9. When I'm doing development I always build DR with a custom path, e.g.:

    $ ./configure --prefix=/tmp/dr
    $ make -j8 install

This will completely avoid any conflicts with an existing installation, and also provides the benefit that root privileges are not required at any point (because the /tmp directory is world-writeable). Obviously this is no use as a permanent solution (unless you use a persistent custom path like /opt/darkradiant-1.8), but it is useful for testing and development purposes. In any case, I'm glad you finally got it working on your system.
  10. Also worth mentioning: if you do think that light count might be an issue, there is (or at least was) an argument to dmap which causes the compiler to automatically split brushes at light boundaries, in effect paying for reduced light counts by increasing the number of polygons. You might want to try this option and see if there is any noticeable difference in FPS in your problematic scene.
  11. I was referring to D3W advice in general, not the specific text you posted. I've seen several forum posts on D3W, and I think even on these forums, where people have repeated the "fact" that Doom 3 has a problem with more than 3 lights hitting a surface, and advised mappers to cut up brushes to make sure they never see a cyan surface in the r_showLightCount view, even though this was a waste of time that made no difference to frame rates back when I was doing it on a Radeon 9800 XT in 2006. I suspect that the "3 light limit" comes from the descriptive text in the explanation of r_showLightCount:

1. Black = no light hitting the surface.
2. Red = 1 light hitting the surface (this is excellent).
3. Green = 2 lights hitting a surface (this is great).
4. Blue = 3 lights hitting a surface (this is getting bad; you need to keep these areas small, but it is OK to have a number of smaller blue areas in a room).
5. Cyan, pink or white = 4+ lights hitting a surface; this is bad and you should only have it in very small, isolated areas.

It would be easy to interpret these descriptions as implying that something bad happens between 2 lights ("this is great") and 3 lights ("this is getting bad"), even though there is no such limit, and these textual descriptions were (as far as I can tell) just off-the-cuff phrases that never reflected any empirical performance testing.

There is no such limit on maximum lights, because light count is just one of many factors influencing performance, along with poly count, shadow count, shadow mesh complexity, number of shadow-casting lights, number and size of open/closed visportals and many other things. Giving suggested limits for particular scene elements is counter-productive because these limits tend to get enshrined in documents and forum posts and propagated through time as a sort of "cargo cult", where people believe that as long as they have fewer than a certain number of lights or polygons their scene will be fast to render, even though there may be many cases where having more lights/polygons would be fine if other elements of the scene are simpler, and vice versa.

The only meaningful way to address performance is empirical testing, with optimisations guided by that data. For example, if you have a scene that is particularly slow to render, and you find that you have a large surface hit by 20 different lights which all cast shadows, this might be the thing to optimise, whereas in another scene you might find that you have 10 million polygons in view due to a reflection surface and too many open visportals, and these are the things you need to fix. But in no case does this mean that if you stick to 19 different lights and only 9 million polygons, you will never have a problem with performance.

No need to get defensive; I wasn't criticising you, just pointing out that some of that old advice from D3W and other forums isn't particularly accurate and shouldn't be relied upon by today's mappers, particularly when it asserts particular numeric limits that quickly become obsolete due to the evolution of hardware, and might not have been correct to begin with.

There is nothing wrong with looking at light counts in conjunction with other measurements, provided you also consider the area of the over-lit polygons (not just the count) and the effect of light scissoring, which considerably restricts the actual illuminated area based on the size of the light even if that light hits a large polygon, and provided you don't become attached to certain fixed "ideal light counts", which are just too crude a measure to be useful.

Overall my advice with light count and everything else (polygons, shadows etc) is: minimise everything as far as you possibly can without compromising the artistic intent; use in-game diagnostics and statistics as a guide, without being hung up on particular numeric thresholds and values; and test on a variety of hardware (especially mediocre hardware) if you can, to avoid producing a map which only runs well on your super high-end rig.
  12. It is highly unlikely that the presence of any particular -devel package on your system is going to cause DR to crash, unless that devel package is so horribly broken that it actually breaks software that is compiled while it is installed (which is very unlikely in the first place, and such a package would almost certainly be removed by the distribution).
  13. There is nothing problematic in that stack trace, which makes me think that it is showing a thread which hasn't actually crashed (but there may be another thread which has crashed). If you use the command info threads, gdb should list all the threads which are currently running. Does any thread in that list have associated text indicating it has crashed ("stopped", "aborted" etc)?
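For example, a typical sequence would be (the thread number here is just illustrative):

    (gdb) info threads
    (gdb) thread 2
    (gdb) bt

i.e. list the threads, switch to whichever one shows a crash or abort in its frame, and then take the backtrace of that thread.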
  14. Be very careful with the years-old advice from Doom3World/idDevNet. There is some absolute garbage floating around on there, including the myth that there is some kind of "limit" of 3 lights hitting a single surface (there isn't), that having a light count of 4 or more causes a massive performance drop (it doesn't), and that you should manually lower light counts by cutting up your brushes (you can if you really want to but it's a waste of time and will likely make no difference to your performance). Some of that advice might have been useful once but is no longer relevant on modern hardware. Some of it (as far as I can tell) was just made up out of thin air, like the "3 light limit". Some of it was based on a misunderstanding of how the engine works, such as the advice to cut up brushes which completely ignores the effect of light scissoring (which effectively does this for you at render time).
  15. You don't need to install debug packages to get a backtrace of DarkRadiant. Since the error is most likely in DR's own code, not system libraries, there is no need to have debug packages available for those system libraries. What you do need to do is issue the "bt" command after gdb stops, so you get a full backtrace of the stack, rather than just the reason for stopping.
  16. Nothing has changed regarding the translation behaviour. If you leave the difficulty names at their defaults, they will be translated by the mod as before. DarkRadiant does not write anything into the map in this case. If you use a custom name, that name will be written into the map and used exactly as written, as Greebo says. If you use a completely new name, like "Wakka wakka", I assume the mod is not going to be able to translate that because it won't have a string table entry, but you are free to enter a suitable string table ID if you want to use a modified name that is already recognised by the mod. These are the strings I could find in the mod definitions, which suggest (although I have not tested) that all of these string table entries could be used for appropriate translated difficulty names:

"#str_03000" "Easy"
"#str_03001" "Casual"
"#str_03002" "Novice"
"#str_03003" "Beginner"
"#str_03004" "Medium"
"#str_03005" "Normal"
"#str_03006" "Challenging"
"#str_03007" "Expert"
"#str_03008" "Master"
"#str_03009" "Veteran"
"#str_03010" "Hardcore"
"#str_03011" "Difficult"
"#str_03012" "Hard"
"#str_03013" "Apprentice"
"#str_03014" "Professional"
"#str_03015" "Braggard"
  17. No problem: all three issues are now added and are showing up in the release roadmap. I agree that creating issues just for refactoring would be excessive, and wouldn't really serve any purpose. I suppose the question we should ask is "Would a user be interested in seeing this change in the changelog?", and if the answer is "Yes", then an issue should be created so that the changelog can be appropriately generated. That's exactly my company's policy as well (we actually enforced it with an SVN pre-commit hook originally, but then we switched to Git, where that isn't possible, so we just have to rely on people abiding by the policy). That doesn't exclude the possibility of doing cleanups and refactoring too; we just consider such refactoring "in scope of" the current work, and tag it with the same tracker issue we are using to fix the final problem. Oh, that is nice. I'll definitely start doing that from now on. I assume (from looking at your commit log) that all that is needed to make this work is to prefix the commit summary with an issue number, e.g. "#12345: fixed something"?
  18. I'll create the bug tracker entries — I didn't before because these always felt like simple issues I was only casually working on, but I forgot about the release changelog aspect which of course makes it important to have entries even for relatively simple changes.
  19. I didn't create any bugtracker entries for these, but if you pull my latest repo you'll get the following:

Modal dialogs (e.g. difficulty and objectives editors) no longer have 'X' close buttons, so users are less likely to accidentally close the dialog without saving changes.

Difficulty dialog now shows English difficulty names, rather than meaningless string table entries.

Difficulty dialog now allows editing of the current difficulty name and saving the modifications onto the worldspawn.
  20. Seeing as I was poking around in the Difficulty editor anyway, I went ahead and implemented editing of difficulty names, which is apparently supported by the mod but hadn't yet made it into the DR interface. The layout is slightly changed to use a dropdown instead of tabs, which makes it possible to add a custom edit button. Clicking on the edit button shows a simple dialog in which you can edit the current name, and the changes are then reflected in the dialog and saved into the current map. I haven't tested it in game, but I assume it should work because I'm writing the changes to the spawnargs specified by the wiki (and already used by DR to display the difficulty names, even if you couldn't edit them).
  21. My understanding (and I might well be out of date on this) is that image_useNormalCompression should default to 0, because we do not supply precompressed normal maps and compression of normal maps tends to look bad, but image_useCompression should be left on (1), because compression of regular images is normal practice and these are supplied as pre-compressed DXT files anyway (in fact they may ONLY be supplied as precompressed DXT files, which means that some combinations of disabling image compression will result in all-black environments since the non-compressed original images cannot be found).
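If you want to pin these down explicitly rather than relying on the defaults, the corresponding lines in a config file or at the console would look something like this (standard idTech 4 cvar syntax, with the values suggested above):

    seta image_useCompression "1"
    seta image_useNormalCompression "0"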
  22. Many people would say the same thing about beef or pork. I've often heard it said that if the average person saw what happened in a slaughterhouse, they would give up eating meat in a heartbeat. Animals are reported to scream in terror and pain as they are half-stunned, beaten to death or even skinned alive by overworked, emotionally numbed operatives who have long lost any capacity to care about the humane treatment of the animals they are paid to slaughter by the truckload. But imagine the chaos that would ensue if every ethical vegan made a thread on the forum asking us to mass-report a YouTuber who posted a video about cooking steak.
  23. Dark Mod forums are not your personal army.
  24. This actually solved the slow loading problem for me, so thanks for the suggestion. I guess that raises the question of why image_useNormalCompression is defaulting to 1 in the first place, given the problems it causes. Perhaps it is intended as a "safe" default for people with low-memory GPUs.
  25. I do remember having at least one discussion about this kind of issue, and I was concerned about it back then. I think somebody wanted to use a library of royalty-free sounds which could be used provided you "integrated" them into some larger production (i.e. by mixing them into a song or a movie soundtrack), but specifically prohibited distributing the sounds by themselves. But we are distributing assets by themselves, because we literally have a subversion repository from which people can download individual assets and use them however they like. And even if we only distributed assets as part of a complete level, it is trivial for people to extract individual assets from the mission archive, and the CC-BY-SA license explicitly allows them to do so provided they maintain the same CC-BY-SA terms when redistributing.