The Dark Mod Forums

NagaHuntress

Member
  1. If you're trying to use OSS for sound output on Ubuntu, you're probably running into the problem of PulseAudio monopolising /dev/dsp. You could disable the PulseAudio service, but then you'll run into sound not working in other applications, or sound not being shared. A better solution is to use an OSS emulation wrapper that outputs to a sound system compatible with PulseAudio. Install OSS emulation for ALSA with:

        sudo apt-get install alsa-oss

     Once installed, run the game with:

        aoss ./thedarkmod.x86
  2. Well, I finally got some time to try the map moved to an X,Y of 10k,10k with fixes disabled, and I'm afraid to report that the player can get stuck on that ridge. The normal fix stops the player from getting stuck, as expected, but the player still stumbles (possibly less often, but that's very subjective at the moment). Adding the epsilon fix stops the stumbling (again, as expected).
  3. In plain C, NULL is actually defined as ((void *)0), but in C++ it can be defined as 0, as C++ uses 0 as a null pointer. However, GCC defines NULL as "__null", as demonstrated by the following code:

        #include <stdio.h>
        #define STRINGIFY(x) #x
        #define STRINGIT(x) STRINGIFY(x)
        int main() {
            printf("A NULL is: " STRINGIT(NULL) "\n");
        }

        $ gcc -g nulltest.c && ./a.out
        A NULL is: ((void *)0)
        $ gcc -g nulltest.cpp && ./a.out
        A NULL is: __null

     __null is internal to GCC and is used to help generate warnings. Typically NULL is used when you want a null pointer (i.e. a pointer to nothing), so if you're implicitly casting it to an integer, you're either doing pointer black magic wrong or have a bug on your hands, so GCC's warning is a good thing. Using NULL with OpenAL's handles just abuses C++'s definition of NULL, and I regard it as bad coding practice.
  4. I saw this when doing my 64 bit patch. The problem is that there is documentation out there saying that NULL should be used for OpenAL's integer handles. In practice people should be using AL_NONE instead.
  5. The game is locked down to single precision in the sense that it's locked down to 32 bits. With enough time and effort, baked-in floating point assumptions can be overcome. With a general conversion of floating point types to double, there will be a need to convert from double to float when the game engine passes data to the renderer. Hmm... When I get time, I'll have to try the test map at an X,Y of 10k,10k and see if it glitches on me when the fixes I've done are disabled. I suspect the fix is good enough for your needs, but it nags in my mind that this is not the correct solution and it merely patches over the symptoms of a real problem. At the moment I can't say with confidence whether the fixes I've made address a rare edge case exacerbated by large coordinates. Does the first fix correct a bug or an edge case in the edge-edge collision detector? Or is it just fixing up false positives that end up pointing the wrong way? Have the changes simply turned a bug that manifests 1 in 100 times into one that manifests 1 in 100,000 times? If I could prove and rationalise its behaviour in the domain of interest, I would be more confident in the fix, but I can't yet, so I'm worrywarting over the issue. From looking at the compiler options for MSVC, it seems there isn't anything for disabling SSE or forcing the FPU when compiling to 64 bit. _controlfp() won't help much for a 64 bit build, as we can't get 80-bit floats due to the compiler not allowing x87 instructions. FLT_EVAL_METHOD (if it's supported) will only tell us the compile-time policy, which will probably be 0 (no promotion of intermediate types). It does look like making a 64 bit build under MSVC will require upgrading floating point types and maths, either in controlled locations or overall.
  6. I haven't looked closely at how the renderer is structured yet, but ultimately things will need to be converted to single precision floats by the time they hit OpenGL. Unique structures that are used for just the renderer can be kept as float, but if they use Vec3 and other similar structures then those uses will need to be replaced with a float-only equivalent.
  7. Ideally you'd increase precision for just the collision code, but the difficulty is keeping all the extra precision isolated to just that section without lots of re-engineering. You might get an improvement by changing idPluecker to use doubles, and its use seems to be isolated to just collision detection. Some floats in the cm/CollisionModel_* files could be modified to use doubles instead of float. However, it's possible that won't be enough and you'll need to start changing Vec3 classes, which can have knock-on effects on the rest of the code base, or create a "DoubleVec3" and add extra code to convert to and from Vec3 in collision handling.
  8. I installed "libx11-6:i386" to get the X11 libraries. If you already have a required library installed, check that the symlinks are set up correctly. I remember one library was missing its symlink for the .so file.
  9. Interesting. I wouldn't have expected the performance penalty to have been severe, as it's already working in double precision and just reducing its results down to floats when done. I suppose the extra data transfers from the larger data types are causing a performance hit. Moving the whole world around the player strikes me as a rather processor-intensive way to do things (unless they're referring to moving it in blocks, which is not quite the same as being at 0,0,0 all the time). I imagine what's more likely happening is that collision and other localised geometry operations are moved to around 0,0,0 to extract the collision, visibility, etc. data, which is then displaced back to its world coordinates. My approach would be to assume they're all significant until proven otherwise. However, that means potentially wading through and suppressing a lot of false positives before the interesting ones make themselves known. The other problem is that, as the idea stands, it only catches cancellation failures. I've found an interesting article that discusses how MSVC (and GCC) handle intermediate precision. https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/ When you subtract two floating point numbers of about the same value, you can run into the problem where they cancel significant digits. If both share 20 significant bits, then a subtraction will cancel those 20 bits, leaving you with the remaining bits of the mantissa. For regular floats that's 4 bits (24 − 20); for doubles it would be 33 bits (53 − 20). The use of double for intermediate calculations does stave off the problems with cancellation by giving you more digits you can safely lose, as witnessed in your example. Cancellation issues aside, the getting-stuck problem witnessed in motorsep's example is, I suspect, due to quantization, which is brought on by operating at such large coordinates, which tie up most of a float's significant bits.
The way to solve that is either to move the coordinates to near 0,0,0 and do collision detection there, so most of the float's bits are available for collision detection, before moving it back; or to bump up the important data types to double so nothing important is lost in the process.
  10. Well, as you can see above, the fixes used were a bit of logic and a bit of constant tuning. However, I think the proper fix is adjusting the coordinate space to be closer to zero when doing a collision trace, but such a fix is likely to mean extra calculation steps, and will require touching a lot of code to make sure it's done right. Plus, in the back of my mind, I have a nagging thought that there may be lingering artifacts due to distorted normals and plane equations being calculated at such extreme coordinates. I've been thinking on this and trying to figure out a good way to solve it. The simplest method would be to admit "defeat" and just convert almost all floats to doubles. It means extra memory being used for storage and a possible performance penalty due to the extra data shuffling, but the calculation speed should be largely unchanged, as it already uses long doubles to do calculations. An alternative would be to create typedefs like:

        typedef float tdmFloat;
        typedef double tdmDouble;

     which would replace regular usage of float and double in the code. These typedefs would be used in release code, but in development code they could be substituted with something like:

        class tdmFloat {
        protected:
            float value;
        public:
            tdmIntermediateFloat operator*(tdmFloat &a);
            /* rest of operator overloads go here */
        };

        class tdmIntermediateFloat {
        protected:
            long double value;
        public:
            tdmIntermediateFloat operator*(tdmFloat &a);
            tdmIntermediateFloat operator*(tdmIntermediateFloat &a);
            /* rest of operator overloads go here */
        };

        class tdmDouble {
        protected:
            double value;
        public:
            tdmIntermediateFloat operator*(tdmFloat &a);
            tdmIntermediateFloat operator*(tdmIntermediateFloat &a);
            tdmIntermediateFloat operator*(tdmDouble &a);
            tdmIntermediateFloat operator*(tdmIntermediateDouble &a);
            /* rest of operator overloads go here */
        };

        class tdmIntermediateDouble {
        protected:
            long double value;
        public:
            /* operator overloads go here */
        };

     With this, automated cancellation detection could be implemented like so:

        tdmIntermediateFloat tdmIntermediateFloat::operator-(tdmIntermediateFloat &a)
        {
            tdmIntermediateFloat r;
            int expv, expa, expr;
            /* Extract the exponent components of the input floats. */
            frexp(value, &expv);
            frexp(a.value, &expa);
            /* Compute the result. */
            r.value = value - a.value;
            /* Extract the exponent component of the result. */
            frexp(r.value, &expr);
            /* Check if too many bits were cancelled in the add/subtract. */
            assert(r.value != 0 && ((expv > expa) ? expv : expa) - 20 < expr);
            /* Return the result. */
            return r;
        }

     This would allow checking for cancellation anywhere tdmFloat and kin are used, without the need to add special logic or checks wherever they're involved.

     Advantages:
       • Can be used by just adding tdmFloat where needed.
       • No change in logic to use it, though function/macro calls may need to be added to cast to and from float and tdmFloat.

     Disadvantages:
       • The classes tdmFloat and kin need to be written and tested.
       • Lots of floats will need to be changed to tdmFloat.
       • It might be that some sections expect cancellation normally, and exception mechanisms would need to be added for them.
       • An obvious performance penalty when used, but it should be restricted to development builds.

     If it's possible to get away with storing extra data in the classes, it might be possible to add metadata to help detect cases where desired precision is lost or degraded. Beyond the above suggestions, I'm still thinking about the problem.
  11. I've gone and tested that map with everything moved to around 0 in the X and Y coordinates (Z remains unchanged), and have witnessed no problems, even after reverting both the CONTACT_EPSILON and the edge normal fixes described previously. This does indicate that the problem ultimately stems from distortions introduced by such large coordinates and the limited storage precision used. I'm more worried about them falling off that small platform.
  12. It's the one that's enabled in the patch ("// make sure the collision plane faces the direction of the trace"). I haven't witnessed any problems with AI in regular FMs with these changes, but I haven't tried testing anything against the ridge. I suspect that the AI would have been vulnerable to the original problem of getting stuck. I'm not sure if they would have stumbled, as I think player movement is handled by a different class from AI movement, which might not react to these glitches.
  13. Okay, I've looked at it a bit more, and it's certainly stemming from the player jumping back and forth between thinking it's on the ground and in the air. I've fixed this behaviour in the test map by changing "CONTACT_EPSILON" to "CONTACT_EPSILON * 4" in "game/physics/Physics_Base.cpp". This makes it test a bit further downwards for gravity-based contacts with the ground. I've play-tested Thief's Den with this change and have not observed any problems due to it. I'm not sure if it's the proper fix, as it might only patch over this test case but fail again at even larger coordinates, or under different geometry, and it's possible it has undiscovered side effects. Plus the magic "* 4" needs to be removed, and to do so it needs to be decided whether the constant CONTACT_EPSILON should be multiplied by 4, affecting all other related uses of it, or whether a separate constant should be used instead for that particular instance.
  14. I've done some more testing and found that I could get stuck in Thief's Den when using the polygon plane method. When I switched to the direction-of-movement method I didn't get stuck. (The point I got stuck on was on top of Creep's house, right before crossing over the peak of the rooftop and falling off of the map.) I think this bug manifests where there are only (or mostly) edge-on-edge collisions. The only time you see that normally is when you have a topside edge, like the ridge in the test map. I haven't debugged it deeper yet, but that does seem to be a likely cause. The other possibility that comes to mind is that the collision momentarily seems like a wall, so the player tries to come to a stop. Another thing to try is the same geometry, but moved closer to the origin, to see if movement glitches while moving on the ridge.
  15. It's Python. I think Import() and Return() are regular functions provided by SCons, and don't do what 'import' and 'return' do.