The Dark Mod Forums

OrbWeaver

Active Developer
  • Posts

    8420
  • Joined

  • Last visited

  • Days Won

    54

Posts posted by OrbWeaver

  1. 19 hours ago, Zerg Rush said:

    It's clear that forking the Duke Nukem code to make an imitation of it also infringes the copyright, but is it an infringement to use it to make a different game?

    Yes. Copyright infringement comes from using the copyrighted code; it has nothing to do with what you use it for. Even if you copy that code into an Android alarm clock app, you have committed copyright infringement.

    19 hours ago, Zerg Rush said:

    The copyright is also for the script of the idea, less so for the engine or basic code.

    Nope. Copyright applies in full to the engine and the basic code. It does not apply to the generic "idea", although it can apply to the idea at a more specific level (so you can't make another game called "Duke Nukem", but you can make another FPS featuring a musclebound dudebro protagonist).

    19 hours ago, Zerg Rush said:

    How many games are out there using the same engine and very similar game content, even though they are different companies?

    If they are using the same engine then they have licensed that engine. They are not using stolen code. If they did, they would be sued into oblivion.

    19 hours ago, Zerg Rush said:

    ... Almost all of them have copied things from one another in their games, in shooters, RPGs, platformers and sidescrollers, sold as their own games but really just forks of others with different backgrounds and protagonists, more or less complex.

    Again, you are confusing "ideas" with code. Anyone can make a generic shooter, RPG or platform game without infringing copyright, as long as it's not called "Super Mario Brothers".

    19 hours ago, Zerg Rush said:

    The basic code and idea of Mario Bros and the Dino from Google are the same.

    The basic idea might be the same. That has nothing to do with the code. You don't need to steal code to implement a game which has similar ideas.

    Claiming that software which does similar things must be using stolen code is the sort of thing Darl McBride came up with in the SCO vs IBM lawsuit. It failed spectacularly because his assumption was wrong. There was never any stolen code.

    19 hours ago, Zerg Rush said:

    If you make a game, let's say a shooter, using your own engine, but with the same development and maps as DOOM, copied 1 to 1, you commit copyright infringement,

    Correct.

    19 hours ago, Zerg Rush said:

    but not if a big company does it

    Not correct. Copyright law applies to large companies just as it applies to individuals, and since they are prominent companies with deep pockets, they are much more likely to get sued. That's why they have a legal department to make sure they are using all IP correctly. They do not steal code.

    19 hours ago, Zerg Rush said:

    Copyright is a very complex subject and nearly nonexistent in Asian game companies.

    I suppose if you're talking about some fly-by-night Chinese rip-off company, then they might be stealing code (since enforcing IP against companies in China is notoriously difficult). But if we're talking about AAA game companies in the US or Europe, using stolen engine code is not normal behaviour, although there may be occasional instances where it happens.

    • Like 2
  2. I remember when the Blender limit was 19 characters. At least the developers of Blender have generously allowed us a whole 63 characters to play with, although why the hell anyone thinks it is acceptable to have hard-coded name length limits in 2022 is anyone's guess.

    The aforementioned "skin trick" was something I came up with as well: each model had a material name like "sk/my_model" which then used a skin to map the model surfaces onto the real textures. But this is only useful for models which have their own custom texture; it is not so convenient if you want the model to use a regular, arbitrary Dark Mod texture.

    I'll have to check the state of the import/export scripts. The custom property approach should certainly solve the problem and I'm sure it has been discussed before, but I don't recall if it actually made it into the code. Perhaps if it hasn't already been implemented, the time to do so is now, given that the 63-character limit is clearly causing problems for some people.

    • Like 3
    • Thanks 1
  3. As @Zerg Rush says, "good" and "free" are not a happy combination for VPNs.

    I used ProtonVPN Free for a while, but it took months to get off the waiting list and be given an account, the performance was terrible, and any kind of file sharing (e.g. BitTorrent) was completely blocked.

    I now have the basic package for $5/month, which I consider extremely good value even though I don't use it all that frequently. You have a big choice of servers with good performance, and some of those servers support BitTorrent (I use the ones in Iceland).

    Be aware that most VPNs don't support IPv6, so if you have an IPv6-enabled ISP, your IP address can "leak" even while the VPN is active. Some VPN software (including ProtonVPN) will therefore include an IPv6 "kill switch" to prevent this from happening, but you should always check that it's working by going to a "What's my IP address" site and ensuring that no IPv6 address is visible.

    • Like 2
  4. 12 hours ago, vozka said:

    Perhaps I was unclear because that is not the case. I am complaining that the way the delay is manifested in the game looks wrong, not about the delay itself. 

    The delay itself is probably pretty realistic, but in real life the guard would not react by completely ignoring something he's seen and going on with his life for a few seconds before suddenly turning around and attacking the character. He would probably stop and be confused before realizing what's happening and then attack.

    Not just a cosmetic issue in fact — it would be better gameplay for a guard to stop and enter a "Huh?" state for a couple of seconds, giving the player a cue that he had been spotted and giving time to run away, rather than have the guard act as if nothing was wrong and then suddenly enter a combat state.

    Perhaps this could be solved fairly straightforwardly by setting different delays for the various levels of state transitions. I.e. transition from 0 (unalerted) to 1 (huh?) could be instant, but the further transition from 1 to 2 or more (actual attack) could be significantly longer.
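
    Purely as an illustration of that idea (made-up names, not actual TDM code or spawnargs), the change boils down to a small per-transition delay table consulted when the AI escalates its alert level:

    // Hypothetical sketch: different reaction delays per alert transition.
    // None of these names correspond to real TDM classes or settings.
    #include <array>

    enum class AlertLevel { Unalerted = 0, Suspicious = 1, Searching = 2, Combat = 3 };

    // Delay in seconds before escalating *from* the given level to the next one.
    constexpr std::array<float, 4> escalationDelay = {
        0.0f,  // 0 -> 1: enter the "Huh?" state instantly, cueing the player
        2.0f,  // 1 -> 2: pause and look around before actively searching
        1.5f,  // 2 -> 3: commit to an attack only after the search confirms a threat
        0.0f   // already at the highest level
    };

    float delayBeforeEscalation(AlertLevel current)
    {
        return escalationDelay[static_cast<int>(current)];
    }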

    • Like 4
  5. There are things I like about UEFI. Being able to install Windows after Linux and not completely trash the bootloader is nice (the two OS's just add their own entries to the system partition and appear in the list alongside each other). GPT was also a big step forward over the horrible 1980s primary/extended partitions (although I think you might be able to use GPT without using UEFI).

    Also for some reason Ubuntu loads so much faster in UEFI mode. I have no idea why. When I was using BIOS it took something like 2.5 minutes before I saw a login screen.

  6. There is not currently any such feature.

    It probably wouldn't be a huge amount of work to implement, but to me it seems very much a stop-gap measure of limited utility. The objective of the lighting mode render is to look as similar to the game as possible by rendering objects in the same way; if you want a more visible but less accurate view, the unlit render mode is designed for this purpose. When the lighting mode does not look the same as the game, this can be considered a bug in the DR renderer which should eventually be fixed (and Greebo has been doing some amazing work in DR 3.0 closing the gap between game and editor rendering).

    I don't think that manually creating "editor-only skins", simply to fake the correct rendered appearance, is a good use of either mapper time or developer time. Mappers should create content the way they want it to look in the game, and DR should do the best job it can to render that content in an accurate way.

  7. On 4/7/2022 at 7:57 PM, greebo said:

    @OrbWeaver  unit tests should be working again in Linux, at least they're not immediately crashing anymore.

    Confirmed. All tests are now passing on Linux.

             ~BufferObject()
             {
    -            glDeleteBuffers(1, &_buffer);
    +            if (_buffer != 0)
    +            {
    +                glDeleteBuffers(1, &_buffer);
    +            }
    +
                 _buffer = 0;
             }
    

    D'oh. I actually saw that _buffer was 0 in the debugger, but thought it wasn't important. Although the docs say that glDeleteBuffers should silently ignore 0, so I don't know if it's actually related to the crash.

    • Like 1
  8. Sure, I'll create a bug for it.

    I did try turning the _geometryStore member into a unique_ptr and explicitly resetting it in shutdownModule(), but this did not solve the problem. However it's possible I did not do it in the correct order with respect to other members which need to be cleaned up.

    I also noticed that there is a potential race condition during the shutdownModule calls themselves, because we don't actually take dependencies or initialisation order into account during shutdown — we just shut down modules in the order they appear in the _initialisedModules map (which I guess is alphabetical). This was causing my HeadlessOpenGLModule to be shut down before the OpenGLShaderSystem, which I was convinced was the cause of the problem... but even fixing this did not help.
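
    For illustration, a shutdown pass that respects initialisation order might look something like this minimal sketch (made-up names and a simplified interface, not the actual ModuleRegistry code):

    // Hypothetical sketch: remember the order in which modules were initialised
    // and shut them down in reverse, so a module is never destroyed before the
    // modules that were initialised after it (and may still depend on it).
    #include <memory>
    #include <vector>

    struct Module
    {
        virtual ~Module() = default;
        virtual void initialiseModule() = 0;
        virtual void shutdownModule() = 0;
    };

    class RegistrySketch
    {
        std::vector<std::shared_ptr<Module>> _initialisationOrder;

    public:
        void initialise(const std::shared_ptr<Module>& module)
        {
            module->initialiseModule();
            _initialisationOrder.push_back(module);
        }

        void shutdownAll()
        {
            // Reverse of initialisation order approximates a dependency-safe order.
            for (auto it = _initialisationOrder.rbegin(); it != _initialisationOrder.rend(); ++it)
            {
                (*it)->shutdownModule();
            }
            _initialisationOrder.clear();
        }
    };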

    • Like 1
    • Thanks 1
  9. The Linux segfault is definitely not merge related, since I can reproduce it in your branch prior to the merge commit. It looks like a problem with OpenGL being called during shutdown (maybe after the context has been destroyed or invalidated?). Here is the stacktrace:

    Spoiler
        frame #0: 0x0000000000000000
      * frame #1: 0x00007ffff4d65c6a libradiantcore.so`render::BufferObjectProvider::BufferObject::~BufferObject(this=0x0000555555df91f0) at BufferObjectProvider.h:31:28
        frame #2: 0x00007ffff4d86910 libradiantcore.so`void __gnu_cxx::new_allocator<render::BufferObjectProvider::BufferObject>::destroy<render::BufferObjectProvider::BufferObject>(this=0x0000555555df91f0, __p=0x0000555555df91f0) at new_allocator.h:152:4
        frame #3: 0x00007ffff4d866c7 libradiantcore.so`void std::allocator_traits<std::allocator<render::BufferObjectProvider::BufferObject> >::destroy<render::BufferObjectProvider::BufferObject>(__a=0x0000555555df91f0, __p=0x0000555555df91f0) at alloc_traits.h:496:4
        frame #4: 0x00007ffff4d8621f libradiantcore.so`std::_Sp_counted_ptr_inplace<render::BufferObjectProvider::BufferObject, std::allocator<render::BufferObjectProvider::BufferObject>, (__gnu_cxx::_Lock_policy)2>::_M_dispose(this=0x0000555555df91e0) at shared_ptr_base.h:557:35
        frame #5: 0x00005555556ca814 drtest`std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release(this=0x0000555555df91e0) at shared_ptr_base.h:155:6
        frame #6: 0x00005555556c6197 drtest`std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count(this=0x0000555555df8108) at shared_ptr_base.h:730:4
        frame #7: 0x00007ffff4d2f518 libradiantcore.so`std::__shared_ptr<render::IBufferObject, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr(this=0x0000555555df8100) at shared_ptr_base.h:1169:7
        frame #8: 0x00007ffff4d2f538 libradiantcore.so`std::shared_ptr<render::IBufferObject>::~shared_ptr(this=0x0000555555df8100) at shared_ptr.h:103:11
        frame #9: 0x00007ffff4d7b654 libradiantcore.so`render::GeometryStore::FrameBuffer::~FrameBuffer(this=0x0000555555df7f90) at GeometryStore.h:28:12
        frame #10: 0x00007ffff4d7b6b5 libradiantcore.so`void std::_Destroy<render::GeometryStore::FrameBuffer>(__pointer=0x0000555555df7f90) at stl_construct.h:98:7
        frame #11: 0x00007ffff4d76fca libradiantcore.so`void std::_Destroy_aux<false>::__destroy<render::GeometryStore::FrameBuffer*>(__first=0x0000555555df7f90, __last=0x0000555555df8458) at stl_construct.h:108:19
        frame #12: 0x00007ffff4d71136 libradiantcore.so`void std::_Destroy<render::GeometryStore::FrameBuffer*>(__first=0x0000555555df7f90, __last=0x0000555555df8458) at stl_construct.h:137:11
        frame #13: 0x00007ffff4d6c581 libradiantcore.so`void std::_Destroy<render::GeometryStore::FrameBuffer*, render::GeometryStore::FrameBuffer>(__first=0x0000555555df7f90, __last=0x0000555555df8458, (null)=0x0000555555dd5680) at stl_construct.h:206:15
        frame #14: 0x00007ffff4d6892d libradiantcore.so`std::vector<render::GeometryStore::FrameBuffer, std::allocator<render::GeometryStore::FrameBuffer> >::~vector(this=0x0000555555dd5680) at stl_vector.h:677:15
        frame #15: 0x00007ffff4d67594 libradiantcore.so`render::GeometryStore::~GeometryStore(this=0x0000555555dd5678) at GeometryStore.h:11:7
        frame #16: 0x00007ffff4d615ea libradiantcore.so`render::OpenGLRenderSystem::~OpenGLRenderSystem(this=0x0000555555dd5500) at OpenGLRenderSystem.cpp:70:41
        frame #17: 0x00007ffff4d86744 libradiantcore.so`void __gnu_cxx::new_allocator<render::OpenGLRenderSystem>::destroy<render::OpenGLRenderSystem>(this=0x0000555555dd5500, __p=0x0000555555dd5500) at new_allocator.h:152:4
        frame #18: 0x00007ffff4d86517 libradiantcore.so`void std::allocator_traits<std::allocator<render::OpenGLRenderSystem> >::destroy<render::OpenGLRenderSystem>(__a=0x0000555555dd5500, __p=0x0000555555dd5500) at alloc_traits.h:496:4
        frame #19: 0x00007ffff4d857ff libradiantcore.so`std::_Sp_counted_ptr_inplace<render::OpenGLRenderSystem, std::allocator<render::OpenGLRenderSystem>, (__gnu_cxx::_Lock_policy)2>::_M_dispose(this=0x0000555555dd54f0) at shared_ptr_base.h:557:35
        frame #20: 0x00005555556ca814 drtest`std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release(this=0x0000555555dd54f0) at shared_ptr_base.h:155:6
        frame #21: 0x00005555556c6197 drtest`std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count(this=0x00005555560498a8) at shared_ptr_base.h:730:4
        frame #22: 0x00005555556c1ece drtest`std::__shared_ptr<RegisterableModule, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr(this=0x00005555560498a0) at shared_ptr_base.h:1169:7
        frame #23: 0x00005555556c1eee drtest`std::shared_ptr<RegisterableModule>::~shared_ptr(this=0x00005555560498a0) at shared_ptr.h:103:11
        frame #24: 0x00007ffff4ca4230 libradiantcore.so`std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >::~pair(this=0x0000555556049880) at stl_pair.h:208:12
        frame #25: 0x00007ffff4ca7486 libradiantcore.so`void __gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::destroy<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >(this=0x00007fffffffdbc0, __p=0x0000555556049880) at new_allocator.h:152:4
        frame #26: 0x00007ffff4ca71ef libradiantcore.so`void std::allocator_traits<std::allocator<std::_Rb_tree_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > > >::destroy<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >(__a=0x00007fffffffdbc0, __p=0x0000555556049880) at alloc_traits.h:496:4
        frame #27: 0x00007ffff4ca6b17 libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::_M_destroy_node(this=0x00007fffffffdbc0, __p=0x0000555556049860) at stl_tree.h:642:24
        frame #28: 0x00007ffff4ca59c7 libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::_M_drop_node(this=0x00007fffffffdbc0, __p=0x0000555556049860) at stl_tree.h:650:2
        frame #29: 0x00007ffff4ca4b7c libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::_M_erase(this=0x00007fffffffdbc0, __x=0x0000555556049860) at stl_tree.h:1920:4
        frame #30: 0x00007ffff4ca4b59 libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::_M_erase(this=0x00007fffffffdbc0, __x=0x0000555555df1510) at stl_tree.h:1918:4
        frame #31: 0x00007ffff4ca4b59 libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::_M_erase(this=0x00007fffffffdbc0, __x=0x0000555556085750) at stl_tree.h:1918:4
        frame #32: 0x00007ffff4ca4b59 libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::_M_erase(this=0x00007fffffffdbc0, __x=0x0000555555e27a50) at stl_tree.h:1918:4
        frame #33: 0x00007ffff4ca4cd4 libradiantcore.so`std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::clear(this=0x00007fffffffdbc0) at stl_tree.h:1271:2
        frame #34: 0x00007ffff4ca4518 libradiantcore.so`std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::shared_ptr<RegisterableModule>, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::shared_ptr<RegisterableModule> > > >::clear(this=0x00007fffffffdbc0) at stl_map.h:1133:9
        frame #35: 0x00007ffff4ca23ef libradiantcore.so`module::ModuleRegistry::unloadModules(this=0x0000555555df1360) at ModuleRegistry.cpp:47:15
        frame #36: 0x00007ffff4ca3792 libradiantcore.so`module::ModuleRegistry::shutdownModules(this=0x0000555555df1360) at ModuleRegistry.cpp:216:15
        frame #37: 0x00005555556c1b89 drtest`test::RadiantTest::TearDown(this=0x0000555555ddad50) at RadiantTest.h:129:49
        frame #38: 0x0000555555b32f21 drtest`void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) [inlined] void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(location=<unavailable>, method=<unavailable>, object=<unavailable>)(), char const*) at gtest.cc:2433:27
        frame #39: 0x0000555555b32f1a drtest`void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(object=<unavailable>, method=<unavailable>, location="TearDown()")(), char const*) at gtest.cc:2469
        frame #40: 0x0000555555b27f55 drtest`testing::TestInfo::Run() at gtest.cc:2684:14
        frame #41: 0x0000555555b27f40 drtest`testing::TestInfo::Run(this=0x0000555555d85d70) at gtest.cc:2657
        frame #42: 0x0000555555b2803d drtest`testing::TestSuite::Run() at gtest.cc:2816:31
        frame #43: 0x0000555555b27fa2 drtest`testing::TestSuite::Run(this=0x0000555555d85ed0) at gtest.cc:2795
        frame #44: 0x0000555555b2855c drtest`testing::internal::UnitTestImpl::RunAllTests(this=0x0000555555d85120) at gtest.cc:5338:47
        frame #45: 0x0000555555d71d40 drtest
        frame #46: 0x0000555555b16630 drtest at gtest.cc:5036:1
    

     

    The actual line which crashes is:

    -> 31  	            glDeleteBuffers(1, &_buffer);

    which then jumps to address 0x0000, which I think can only mean that the glDeleteBuffers function pointer itself has been set to null.

    My first thought was that headless OpenGL simply won't work on Linux, but this doesn't really make sense as an explanation because the crash happens in an OpenGL call during shutdown, which must have been preceded by several other calls (which did not crash) during initialisation and running the tests. Also there is a HeadlessOpenGLContextModule which has been used by tests for 18 months without crashing, so it can't be the case that no OpenGL commands work in tests.

    I'm guessing this must be something related to the order of destruction of some of the new rendering-related objects (as commented within the OpenGLRenderSystem destructor), but I'm not sufficiently up to speed on how the new objects fit together to identify an obvious root cause.

  10. Initial source merged on Linux. Renderer changes look great. Shadows are working, and it's nice to see what look like more game-correct colours (I wonder what we were doing wrong in the previous shaders which made the colours and brightness different, although I suspect the answer lies in some mathematical detail that I would struggle to understand).

    The new shadow toggle button confused me though, because it looks like another option in the existing set of "radio buttons" which control the render mode, but this is a separate toggle which is independent of the other buttons. I would suggest making it mutually exclusive for consistency with the others — we could save space by getting rid of the untextured all-white solid mode, which I'm pretty sure is completely useless for most mapping tasks (I suspect that the wireframe one is occasionally useful for some people, although only in specific situations).

    There are some post-merge segfaults in the unit tests which I need to investigate, to determine whether they are Linux-specific or caused by some merge conflict.

    • Like 1
  11. It won't get forgotten because the whole point of the bug tracker is to keep track of open bugs. However it is not fixed and will probably not be fixed in the upcoming release. I did some initial examination of the code but did not identify any quick solution, then suspended work on this to avoid creating merge conflicts with Greebo's extensive and ongoing changes to the renderer.

    • Thanks 2
  12. 20 hours ago, stgatilov said:

    One of the problems with DLL-based build is having to use dynamic CRT (recall the nightmare of two separate CRTs when Doom 3 was closed source).

    Maybe it is possible to have an option (CMake-only) to build something dynamic, but that option would probably get broken regularly, because everyone would forget about it.

    Right, I'm assuming it would need to be a completely different build mode set via CMake, which could then set whatever different options were necessary with regard to static vs dynamic CRT or other dependencies.

    20 hours ago, stgatilov said:

    Yes, it makes sense to keep the interface pure C. You won't be able to dynamically load exported C++ functions because of name mangling anyway. But are you sure you will be able to expose much stuff without using complex types?

    In theory I think it should be possible to expose pretty much anything this way — the interface might be a bit more cumbersome to use, but you can always provide a header file with convenient (but optional) C++ wrapper classes which implement more familiar RAII and object-based semantics.

    Even lists and maps can be exposed, e.g.

    struct TDMStringList; // opaque
    
    // Get list of maps
    TDMStringList* tdm_installation_get_map_list(TDMInstallation* inst);
    
    // Manipulate list
    int tdm_stringlist_get_item_count(TDMStringList* list);
    const char* tdm_stringlist_get_item(TDMStringList* list, int index);
    void tdm_stringlist_free(TDMStringList* list);

    Obviously I wouldn't want to write code in this style all day, but using it just to traverse a DLL boundary and possibly wrapped in some C++ helper classes would be manageable.
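
    For example, a minimal (hypothetical) RAII wrapper over the made-up tdm_stringlist_* functions above might look like this on the DR side of the boundary:

    // Sketch of an optional C++ convenience wrapper around the hypothetical C
    // interface above. RAII guarantees tdm_stringlist_free() is called, and the
    // caller gets ordinary std::string values instead of raw const char*.
    #include <string>
    #include <vector>

    class StringList
    {
        TDMStringList* _list; // owned; allocated and freed by the DLL

    public:
        explicit StringList(TDMStringList* list) : _list(list) {}
        ~StringList() { tdm_stringlist_free(_list); }

        // Non-copyable: ownership of the underlying C object is unique.
        StringList(const StringList&) = delete;
        StringList& operator=(const StringList&) = delete;

        std::vector<std::string> items() const
        {
            std::vector<std::string> result;
            const int count = tdm_stringlist_get_item_count(_list);
            for (int i = 0; i < count; ++i)
            {
                result.emplace_back(tdm_stringlist_get_item(_list, i));
            }
            return result;
        }
    };

    // Usage: StringList maps(tdm_installation_get_map_list(installation));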

    20 hours ago, stgatilov said:

    By the way, it is not necessary to create any kind of DLL to do that. In principle, DarkRadiant can import the EXE file as a module and use its exported functions! After all, the entry point is the only difference between a DLL and an EXE. Of course, doing so would mean mapping an 18 MB file onto DR's virtual memory, but I don't think it is a big problem (well, maybe code cache would suffer because relevant TDM functions would be less localized).

    Apparently that's not trivial on Linux: https://stackoverflow.com/questions/6617679/using-dlopen-on-an-executable

    If you want to use the executable as a library it seems you need to use the PIE (position-independent executable) compiler option(s), at which point you might as well just build a DLL anyway (unless the binary is always going to be built PIE).

    20 hours ago, stgatilov said:

    Ehm... To be honest... we discussed that we should probably reimplement most of this system 😭
    But I agree that it would be very hard to force myself to do that 😁

    It seems that writing English full text in translation files instead of relying on these #str_XXX keywords would be more comfortable for mappers in 99% of the cases, and additional "labels" added to the original text would resolve ambiguity in the remaining 1%.

    No disagreement from me there. It seems like a hacky system with little regard to best practices for translation, which is why I've never made any effort to replicate it in DR.

    20 hours ago, stgatilov said:

    And adding an optional build with DLLs while deploying a fully static build would mean that you'd have to distribute this DLL yourself, i.e. you cannot load this DLL from the user's TDM installation.

    Sure, I wouldn't expect the DLL to already be there, it would be something we have to integrate ourselves. Although at that point it might be better to skip the DLL altogether and just do source code or static library integration.

  13. I'd probably approach from a slightly different direction: rather than trying to extract and isolate small parts of the TDM code and call these in a DLL from both the game and DR (which introduces problems with dependencies on other parts of the code), I would try adding a DLL-style build mode for the whole game binary — perhaps chosen with a CMake option — so that you could choose to build either the game itself or a DLL containing most of the same code.

    You'd then need a suitable DLL interface on the game side, which I would suggest should be pure C and as simple as possible so that it isn't necessary to expose all of the idLib stuff and deal with the complexities of C++ binary interfaces. So you might end up with an interface a bit like the original GTK:

    struct TDMInstallation; // opaque type
    
    // Initialise new installation and return object owned by the DLL
    TDMInstallation* tdm_installation_new(const char* path);
    
    // Compile a given map, return a status code
    int tdm_installation_compile_map(TDMInstallation* installation, const char* mapName);
    
    // Properly dispose of the installation object
    void tdm_installation_free(TDMInstallation* installation);

    This way you effectively have full encapsulation of the DLL code, and an essentially object-oriented interface using C functions and opaque pointers instead of C++ classes with private members and public methods.

    If we were ever going to try this I'd suggest starting with something very simple and self contained. Compiling maps is probably OK, or maybe exposing Tels' i18n system which has never been ported into DR and results in DR not being able to show internationalised names for difficulty settings. Rendering of course would be a much more difficult task.

  14. An amazing leap forward for the DR renderer.

    Although all of this manual synchronisation work makes me think that it would be really nice to have some of the common code split into a DLL which could be used from DR as well as the game engine, allowing both editor and game to behave the same without needing a whole bunch of duplicated code. But of course that introduces difficulties of its own, especially when the two projects are using entirely different source control systems.

    • Thanks 1
  15. 4 hours ago, Anderson said:

    Because it's a real hassle having over 9999 items cycled through the inventory when I need something right now - such as a health potion or a lockpick and there's a guard coming. Especially with lockpicks, it's all about timing.

    You know you can immediately select lockpicks (and toggle between them) by pressing P, right? No need to find them by scrolling through the whole inventory.

    I'm not in a position to test right now and I don't recall whether there is a dedicated shortcut for health potions, but I wouldn't be surprised if there was one. Maybe check your key binding preferences to see what inventory shortcuts are available and what they are bound to.

    • Like 1
  16. Even Thief 1/2 had crystals instead of full arrows if you found them during a mission. They were a sort of low-resolution pointed cylinder shape, in the colour of their element (with gas particles rising in the case of gas crystals). It was never explained how the player somehow turned these into arrows, but presumably it should be understood that he carries some empty shafts to attach the crystals to. Having to do this manually in game would be annoying and add no gameplay value, unless it was opening up some new possibilities like crafting unusual combination arrows with multiple crystal types.

    • Like 3
  17. The visual design looks good but legibility of the foreground text is suffering.

    [attached screenshot]

    Since the location/level of detail in the background image is not predictable, it would be better to use something more visible for the text than transparent dim grey over slightly darker transparent grey.

    • Like 2
  18. 16 hours ago, greebo said:

    The objects are still calling for a coloured line shader, like <0 0 1> for a blue one. In principle, now that the vertex colour is shipped along with the geometry data, the colour distinction in the shader itself is maybe not even necessary anymore. There could be a single line shader, used to draw stuff in the orthoview.

    That would be something worth profiling, for sure. I actually have no idea what is better for performance: setting a single glColor and then rendering all vertices without colours, or passing each colour per-vertex even if they are all the same colour. Perhaps it varies based on the GPU hardware.
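
    For reference, the two alternatives being compared would look roughly like this in legacy-GL terms (an illustrative sketch, not DR's actual backend code):

    // Option A: one glColor for the whole batch, positions only.
    // Option B: an interleaved colour submitted for every vertex.
    #include <GL/gl.h>
    #include <vector>

    struct Vertex { float pos[3]; float colour[3]; };

    void drawWithSingleColour(const std::vector<Vertex>& verts)
    {
        glColor3f(0.0f, 0.0f, 1.0f); // one colour for the whole batch
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts.data()->pos);
        glDrawArrays(GL_LINES, 0, static_cast<GLsizei>(verts.size()));
        glDisableClientState(GL_VERTEX_ARRAY);
    }

    void drawWithPerVertexColour(const std::vector<Vertex>& verts)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts.data()->pos);
        glColorPointer(3, GL_FLOAT, sizeof(Vertex), verts.data()->colour);
        glDrawArrays(GL_LINES, 0, static_cast<GLsizei>(verts.size()));
        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }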

    16 hours ago, greebo said:

    The main reason for this duplication is the chronological order in which I adjusted the renderer. I was chewing through this starting with fullbright mode, first brushes, then patches, then models, finally the visual aids like lines and points. After that I moved on to do the research on lit mode, and all of that is reflected in the code. I admit that I took this approach on purpose: when starting, I didn't have a full grasp of what was going to be necessary, I had to learn along the way (and aim for not getting burnt out half-way through). Now the full picture is available, the thing can be further improved, and the storage is probably among the first things that need to be optimised.

    That's perfectly reasonable of course. I probably would have approached things the same way. Minimising divergent code paths is good for future maintainability but it doesn't need to happen right away, and can be implemented piecemeal if necessary (e.g. the Brush class still has separate methods for lit vs unlit rendering, but they can delegate parts of their functionality to a common private method).

    15 hours ago, greebo said:

    Yes, this is interesting. It's achievable, with some cost, of course. Right now, the Shaders themselves implement the interfaces IWindingRenderer, IGeometryRenderer and ISurfaceRenderer. A different authority could implement these interfaces, but it needs to map the objects to the Shaders somehow (likely by using a few std::maps). The renderer then calls that authority to deliver that information; this way we can separate that information.

    Yes, that's what I would imagine to be the hurdle with const shaders — the mapping between Shader and objects has to happen somewhere, and if it isn't in the shader itself then some external map needs to be maintained, which might be a performance issue if relatively heavyweight structures like std::maps need to be modified thousands of times per frame.

    15 hours ago, greebo said:

    It's the way they are internally stored to reduce draw calls, but they are indeed similar.

    I implemented the IWindingRenderer first, since that was the most painful spot, and I tailored it exactly for that purpose. The CompactWindingVertexBuffer template is specialised to the needs of fixed-size Windings, and the buffer is designed to support fast insertions, updates and (deferred) deletions. I guess it's not very useful for the other Geometry types, but I admit that I didn't even try to merge the two use cases. I tackled one field after the other; it's possible that the CompactWindingVertexBuffer can now be reworked to use some of the pieces I implemented for the lit render mode - there is another ContinuousBuffer<> template that might be suitable for the IWindingRenderer, for example.

    I would certainly give consideration to whether the windings and geometry could use the same implementation, because it does seem to me that their roles are more or less the same: a buffer of vertices in world space which can be tied together into various primitive types. This is something that VBOs will handle well — it should be possible to upload all the vertex data into a single buffer, then dispatch as many draw calls using whatever primitive types are desired, making reference to particular subsets of the vertices. This could make a huge difference to performance because once the data is in the VBO, you don't need to send it again until something changes (and even then you can map just a subset of the buffer and update that, rather than refreshing the whole thing).
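
    As a rough sketch of that idea (illustrative only, not the actual GeometryStore code), the essential operations are a single upload, draws over offset/count subsets, and partial updates with glBufferSubData:

    // Illustrative sketch of "one big VBO, many draw calls over subsets".
    // Vertex attribute/format setup is omitted for brevity.
    #include <GL/glew.h>
    #include <vector>

    struct Vertex { float pos[3]; float colour[3]; };

    struct Slot { GLint first; GLsizei count; }; // a subset of the shared buffer

    GLuint uploadAll(const std::vector<Vertex>& allVertices)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     allVertices.size() * sizeof(Vertex),
                     allVertices.data(), GL_DYNAMIC_DRAW);
        return vbo;
    }

    // Draw any subset with whatever primitive type it needs; no re-upload required.
    void drawSlot(const Slot& slot, GLenum primitiveType)
    {
        glDrawArrays(primitiveType, slot.first, slot.count);
    }

    // When one object changes, update just its byte range instead of the whole buffer.
    void updateSlot(GLuint vbo, const Slot& slot, const std::vector<Vertex>& newData)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER,
                        slot.first * sizeof(Vertex),
                        newData.size() * sizeof(Vertex),
                        newData.data());
    }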

    15 hours ago, greebo said:

    The model object is not involved in any rendering anymore, it just creates and registers the IRenderableSurface object. The SurfaceRenderer is then copying the model vertices in the large GeometryStore - memory duplication again (the model node needs to keep the data around for model scaling). The size of the memory doesn't seem to be a problem, the data is static and is not updated very often (except when scaling, but the number of vertices and indices stays the same). The thing that makes surfaces special is their orientation, they have to be rendered one after the other, separated by glMultMatrix() calls.

    Ah, I didn't spot the difference in coordinate spaces. That is one fundamental difference between models and other geometry which might merit keeping a separate implementation. So I guess we might end up with a TransformedMeshRenderer for models and a WorldSpacePrimitiveRenderer for everything else, or some distinction like that.

    15 hours ago, greebo said:

    Speaking about writing the memory allocator: I was quite reluctant to write all that memory management code, but I saw no escape routes for me. It must have been the billionth time this has been done on this planet. Definitely not claiming that I did a good job on any of those, but at least it doesn't appear in the profiler traces.

    Unfortunately this is one of the times when manual memory management really is necessary: if we want to (eventually) put things in a VBO, the buffer has to be managed C-style with byte pointers, offsets and the like. I certainly don't envy you having to deal with it, but the work should be valuable because it will transition very neatly into the sort of operations needed for managing VBO memory.

  19. Overall these changes sound excellent. You have correctly (as far as I can tell) identified the major issues with the DR renderer and proposed sensible solutions that should improve performance considerably and leave room for future optimisations. In particular, trying to place as much as possible in a big chunk of contiguous RAM is exactly the sort of thing that GPUs should handle well.

    Some general, high-level comments (since I probably haven't even fully understood the whole design yet, much less looked at the code).

    Wireframe versus 3D

    I always thought it was dumb that we had different methods to handle these: at most it should have been an enum/bool parameter. So it's good to see that you're getting rid of this distinction.

    Unlit versus lit renders

    As you correctly point out, these are different, particularly in terms of light intersections and entity-based render parameters (neither of which need to be handled in the unlit renderer), so it makes sense to separate them and not have a load of if/then statements in backend render methods which just slow things down.

    However, if I'm understanding correctly, in the new implementation almost every aspect will be separate, including the backend data storage. Surely a lot of this is going to be the same in both cases — if a brush needs to submit a bunch of quads defined by their vertices, this operation would be the same regardless of whatever light intersection or GLSL setup calculations were performed first? Even if lighting mode needs extra operations to handle lighting-specific tasks, couldn't the actual low-level vertex sorting and submission code be shared? If double RAM buffers and glFenceSync improves performance in lit mode, wouldn't unlit mode also benefit from the same strategy?

    I guess another way of looking at it is: could "unlit mode" actually be a form of lit mode where lighting intersections were skipped, submitted lights were ignored, and the shader was changed to return full RGB values for every fragment? Or does this introduce performance problems of its own?

    Non-const shaders

    I've never liked the fact that Shaders are global (non-threadsafe) modifiable state — it seems to me that a Shader should know how to render things but should not in itself track what is being rendered. Your changes did not introduce this problem and they don't make it any worse, so it's not a criticism of your design at all, but I wonder if there would be scope to move towards a setup whereby the Shaders themselves were const, and all of the state associating shaders with their rendered objects was held locally to the render operation (or maybe the window/view)?

    This might enable features like a scrollable grid of model previews in the Model Selector, which I've seen used very effectively in other editors. But perhaps that is a problem for the future rather than today.

    Winding/Geometry/Surface

    Nothing wrong with the backend having more knowledge about what is being rendered if it helps optimisation, but I'm a little unclear on the precise division of responsibilities between these various geometry types.

    A Winding is an arbitrary convex polygon which can be rendered with either GL_LINES or GL_POLYGON depending on whether this is a 2D or 3D view (I think), and most of these polygons are expected to be quads. But Geometry can also contain quads, and is used by patches which also need to switch between wireframe and solid rendering, so I guess I'm not clear on where the boundary lies between a Winding and Geometry. 

    Surface, on the other hand, I think is used for models, but in this case the backend just delegates to the Model object for rendering, rather than collating the triangles itself? Is this because models can have a large variation in the number of vertices, and trying to allocate "slots" for them in a big buffer would be more trouble than it's worth? I've never had to write a memory allocator myself so I can certainly understand the problems that might arise with fragmentation etc., but I wonder if these same problems won't rear their heads even with relatively simple Windings.

    Render light by light

    Perfect. This is exactly what we need to be able to implement things like shadows, fog lights etc (if/when anybody wishes to work on this), so this is definitely a step in the right direction.

    Overall, these seem like major improvements and the initial performance figures you quote are considerable, so I look forward to checking things out when it's ready.

    • Thanks 1