The Dark Mod Forums

Loss of precision when using float



I opened this issue on the bugtracker: http://bugs.angua.at/view.php?id=266

 

As already discussed some time ago in the forums, DarkRadiant uses float, whereas DoomEdit uses double. So far this hasn't been much of a problem, but it will certainly become one (one occurrence has already been observed in the Bonehoard map).

 

As a float carries only about 6 significant decimal digits, items placed on the smallest grid size of 0.125 lose information once their absolute position in 3D space is more than 999 units away from the origin, as in this brush plane definition:

 

(0 0 -1 3128.13)

 

It's obvious that the .125 fraction is being rounded to .13 because four of the six significant digits are eaten up by the 3128 part.

 

We should look into porting the objects to use double precision; these problems will become more and more frequent as maps grow larger.


I agree. How in the world anybody decided that using single-precision floats for complex 3D manipulations was appropriate is completely beyond me.

 

As a first step, simply changing Vector3 to BasicVector<double> instead of BasicVector<float> should work, but some other things might need changing such as GL calls which expect a float.


I just tried to change the Vector3 definition, and this definitely won't be easy. I guess the main reason why Vector3 was chosen to be of type float was the OpenGL calls themselves. There are hundreds of places where a Vector3 is implicitly cast to a float[3] array.

 

As the double is taking up twice the space of a float, I assume it's probably not possible to perform a reinterpret_cast onto this float-array to pass the Vector3 itself as an argument to the OpenGL functions.

 

Perhaps this won't be necessary in the first place, because I saw that the Plane3 class is already using doubles to store their parameters. So, as a first step I will try to adapt the mapdoom3 module to take advantage of this double precision - brush planes are using Plane3 to define their stuff.


As the double is taking up twice the space of a float, I assume it's probably not possible to perform a reinterpret_cast onto this float-array to pass the Vector3 itself as an argument to the OpenGL functions.

 

Absolutely not. reinterpret_cast is a no-no in practically every situation except when writing memory allocators.

 

Perhaps this won't be necessary in the first place, because I saw that the Plane3 class is already using doubles to store their parameters. So, as a first step I will try to adapt the mapdoom3 module to take advantage of this double precision - brush planes are using Plane3 to define their stuff.

 

The ideal solution is to make use of GL calls which accept a double array instead of a float array (e.g. glVertex3dv()); however, not all calls have double versions. Those calls would either require a conversion (which might be a performance issue, but not necessarily) or the use of two different vector classes with two different precisions.


I've committed a first compiling version to a new branch. I've already located and fixed a few issues (including a reinterpret_cast(&matrix) call in the OpenGL initialisation, which of course did not work correctly for double matrices), but a few critical issues remain:

 

- Lighting mode is broken

- Texturing of models seems to be smeared (probably forgot to replace some GL_FLOAT or TexCoord2f)

- All colours in the Orthoview are white/lightgrey.

 

Frankly, after two hours of sweeping changes through the entire codebase I was a bit surprised that it didn't crash immediately, so I'm confident that we can get this thing running with double precision.


I've fixed the lighting mode issue (there was another reinterpret_cast() lurking around).

 

However, I can't seem to find the problem with the RenderablePicoSurfaces; they are still rendered flat-shaded (untextured). MD5 models render fine, so it's definitely specific to the PicoModels.

 

I checked the texture coordinates that are submitted to the display lists; those are fine (I first feared they would be something like 0,0 or -1.#INF). The GL commands are already adapted to double, and I double-checked the data types.

 

I also put a test render command (with arbitrary texcoords) right before the display list is called to check whether the texture is properly bound:

	glBegin(GL_TRIANGLES);
	glTexCoord2f(-1.0f, 0.5f);
	glVertex3d(-100, -100, 0);
	glTexCoord2f(1.0f, 0.5f);
	glVertex3d(-100, 100, 0);
	glTexCoord2f(1.0f, 2.5f);
	glVertex3d(100, 100, 0);
	glTexCoord2f(2.0f, -0.5f);
	glVertex3d(100, -100, 0);
	glEnd();

	glCallList(_normalList);

The rendered faces appear textured, so this works OK. Is there something special to watch out for when using display lists in combination with doubles?

(Screenshot attached: picomodel_problem.jpg)


I can't honestly say what the problem is -- if the coordinates are being submitted and they have the same values as previously, I don't see why this wouldn't work.

 

Are you using the gl*dv() versions that take a double array? Perhaps you should test with the standard 3-argument double version, to make sure there is no problem with the conversion. Obviously, if you reinterpret a float* as a double* you will not get the desired results.


Yeah, I already adapted the GL commands:

	// Get the vertex for this index
	ArbitraryMeshVertex& v = _vertices[*i];

	// Submit attributes
	glNormal3dv(v.normal);
	glTexCoord2dv(v.texcoord);
	glVertex3dv(v.vertex);

I already tried swapping the glTexCoord and glNormal commands (which didn't change anything; I didn't expect it to either). I will try converting to a float vector before submitting the values to the display list and see what happens.


I wondered if that could be the case -- the parsing library (picomodel) probably uses floats, and knowing the high-quality portable designs used in Radiant, there are probably reinterpret_casts going on here as well.


Surprisingly not. :D (reinterpret_cast is C++, isn't it?)

 

In the first replacement sweep I just changed the data types from float to double, but not the parsers; I'll have to figure out the corresponding C functions, I reckon.

 

edit: fixed!


I wondered if that could be the case -- the parsing library (picomodel) probably uses floats, and knowing the high-quality portable designs used in Radiant, there are probably reinterpret_casts going on here as well.

I think it's absolutely OK for picomodel to use floats for vertices, since it loads static meshes that just get instantiated, not manipulated, in Radiant (or do you use it for other things too?).

Storing the position and orientation of these models as double values should be sufficient.

This is just a vague memory, but I think the Maya converter in the Doom 3 SDK generates its vertices from single-precision data, so you wouldn't gain any precision by using doubles here.

Brushes and patches going double is a good decision, especially considering brushes with non-planar surfaces. :)


I agree that floats would be enough for models, but since I changed the global data types used throughout the entire app, I had to change all parsers that write to variables of those types as well.

 

Anyway, it's done now. I'll do some smaller refactoring and code cleanup, and then merge the changes back into the trunk. I also have to make sure that the written brush data takes advantage of the double precision.


The refactor from the branch is now merged into the trunk. Expect a full recompile, as this affected quite a few header files.

 

All vectors and matrices now use double instead of float. The precision used when saving map files is a configurable registry setting (doom3.game: game/mapFormat/floatPrecision).

 

I also tested the loading/saving behaviour at the smallest grid size and loaded the Bonehoard; everything seems to be OK.

 

Could you test if this compiles and runs fine on Linux, OrbWeaver?


  • 3 years later...

Revisiting this: I'm no longer fully convinced that this was ever a problem.

 

I just performed a quick test using a simple brush room at the far top right of the orthoview, at around +50000,+50000,0. I switched to the smallest grid (0.125), nudged the brushes off by a few odd units, and saved the map in DR:

 

 

 

// primitive 0
{
brushDef3
{
( 0 0 1 -704.75 ) ( ( 0.0029296875 0 90.7623291015625 ) ( 0 0.0048828125 125.2872314453125 ) ) "textures/darkmod/stone/brick/tiling_1d/old_worn_greybrick" 0 0 0
( 0 1 0 -57425.125 ) ( ( 0.0029296875 0 158.2276611328125 ) ( 0 0.004111842252314091 2.897820711135864 ) ) "textures/darkmod/stone/brick/tiling_1d/old_worn_greybrick" 0 0 0
( 1 0 0 -54008.375 ) ( ( 0.0029296875 0 90.7623291015625 ) ( 0 0.004111842252314091 2.897820711135864 ) ) "textures/darkmod/stone/brick/tiling_1d/old_worn_greybrick" 0 0 0
( 0 -1 0 56401.125 ) ( ( 0.0029296875 0 100.7723388671875 ) ( 0 0.004111842252314091 2.897820711135864 ) ) "textures/darkmod/stone/brick/tiling_1d/old_worn_greybrick" 0 0 0
( -1 0 0 52984.375 ) ( ( 0.0029296875 0 168.2376708984375 ) ( 0 0.004111842252314091 2.897820711135864 ) ) "textures/darkmod/stone/brick/tiling_1d/old_worn_greybrick" 0 0 0
( 0 0 -1 640.75 ) ( ( 0.0029296875 0 90.7623291015625 ) ( 0 0.0048828125 120.7127685546875 ) ) "textures/darkmod/stone/brick/tiling_1d/old_worn_greybrick" 0 0 0
}
}

 

 

Then I hacked the mapdoom3 exporter to add a static_cast down to float in the writeDoubleSafe() method. The result was the same: no loss of precision observable in the map file (except at the far end of the texture matrix, where things started changing around the 7th or 8th decimal place).

 

Additional points to consider:

 

- I suspect the original problem (which was tackled back in 2007) was due to the map exporter code not passing the precision hint to the output streams.

- Doom 3 uses floats internally. Anything saved in double precision won't have much effect when being loaded in D3.

- Both maps (the one saved using doubles and the one using floats) compiled fine and didn't leak.

- The D3 map compiler is also using floats. If rounding errors were a problem this would show pretty quickly.

- The size info code in DarkRadiant's orthoview is irritatingly rounding things to tenths: 57425.125 becomes 57425.1 but this is only a display error.

- DarkRadiant's memory usage: doubles take 64 bits, single-precision floats take 32 bits.

- DarkRadiant's processing speed: this is an unconfirmed assumption, but I doubt that doubles perform as well as floats. Doubles also somewhat close the door for DarkRadiant when it comes to SSE/SSE2 instructions, should we ever go down that road for the culling algorithms: only two doubles fit into a 128-bit XMM register, compared to four single-precision floats.

 

Thoughts?


I am not going to argue for or against doubles, I simply do not know enough. However, a few things to consider.

 

- Doom 3 uses floats internally. Anything saved in double precision won't have much effect when being loaded in D3.

 

I'd say this is the strongest argument against using doubles; there would be no point, I guess.

 

- DarkRadiant's memory usage: doubles take 64 bits, single-precision floats take 32 bits.

 

Compared to the GUI overhead, the textures etc. I don't think this makes any difference whatsoever.

 

- DarkRadiant's processing speed: this is an unconfirmed assumption, but I doubt that doubles are performing the same as floats.

 

I haven't looked, but I think that the difference might not be much (both are handled in special hardware inside the CPU, anyway) and the difference might not make any practical difference, either way. How many million floats does it crunch, compared to how many million subroutine calls (which are much more expensive)?

 

Doubles also somewhat close the door for DarkRadiant when it comes to SSE/SSE2 instructions, should we ever go down that road for the culling algorithms: only two doubles fit into a 128-bit XMM register, compared to four single-precision floats.

 

Thoughts?

 

I think SSE/SSE2 is a bit overrated, as it might only be relevant if you have a lot of numbers in the same format with the same number crunching to do on each. And even then it doesn't give you more than a factor of 2-4 anyway.

 

Last but not least, the strongest point in favour would be creating huge maps that are supported by D3. Why limit ourselves to 54000 units across? Why not 100000? Sure, we'd need an open-source D3 and it would probably need double support, but if we limit the editor to floats now, we might hit a problem later.

 

However, I do not know if the above is really a problem. Maybe a 1000000 unit map already works just fine?

 

Edit: I just read the thread (shame on me :blush: ) and see you already have working doubles. I'd say leave it be, then; converting back to float won't (IMO) gain enough to justify doing all the work again. And it future-proofs the editor.

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax


I can now confirm that the precision problem we originally faced was caused by a missing precision() call on the output stream in mapdoom3. At the same time as I converted the floats to doubles, I also set the precision hint (probably because I noticed that the doubles weren't written with higher precision when passed to the std::ofstream). I feel a bit of a fool in retrospect, as I did the whole conversion without checking whether the precision() call alone would have helped with floats too.

 

@Tels: I know that DarkRadiant's front end render pass is doing a lot of math operations, mainly for plane culling.

 

About 60-70% of DarkRadiant's render time is spent on calls to the OpenGL API. While the number one optimisation approach is to reduce these calls wherever possible, why would I want to keep performing double-to-float conversions on a regular basis just because a single mapper might use a 1-million-unit map some years from now? Such a map would very likely hit other limits first, like memory or entity processing limits. Keeping the conversions would be ridiculous and most likely insane from an 80:20 point of view.

 

Here are some links, in case you have time.

 

http://www.gamasutra...tform_simd_.php

http://gamedev.stack...2d-vector-class

http://my.safaribook...gl/ch02lev1sec5

http://stackoverflow...aphics-hardware


I looked into converting stuff from double back to float, and indeed there is a noticeable difference in the framerate. When rendering a full view of the gathers map (cubic clipping off, a few things filtered out), the time needed to process a frame improves from ~63 ms to ~48 ms. This is due solely to the conversion to float; nothing else was changed. I still have to confirm that nothing got broken along the way and that everything is actually being rendered, but I'm optimistic.


I'd say go for it, I had wondered about this a few months back but thought that the reasoning and re-testing might be a bit annoying to do. The gain of being able to more easily work with the additional instruction sets is a pretty big bonus, even if it's not utilized immediately.


Well facts don't lie; if testing reveals that there is a performance impact, then I guess it should be changed. Having become more experienced in GL and reading that GL performance with doubles can be very poor (on some implementations it will just convert it to float anyway, and if it leaves it as double the GPU might process it much slower), I have to agree that this might not have been a change for the better, but I guess you live and learn.

 

I wonder how the performance impact changes with modern 64-bit processors, though; at least on the CPU side, dealing with 64-bit doubles might not be so bad. A lot of it will come down to the compiler.


@greebo: Well if you think it is worth it, then do it. It's your project, I was just offering "general insight" (which might have been garbage, anyway).

 

However, I am a bit surprised at the difference; it means that either:

* the compiler is not good

* your CPU (32-bit mode?) is not good (and/or only one CPU core is used anyway, while the other one/two/four are idling)

* the renderer is not good (as in "it does way too much work; the bulk should be on the GPU, not the CPU")

* OR: there really is so much number crunching being done with floats that converting it to doubles saps performance

 

Anyway, I would still be a bit cautious, because the improvement might be big right now (and on your system), but it might not be that big in the future.

 

In any event, performance is one thing, and if you are confident that floats are enough, well, as I said, do the conversion back. After all, if you changed it once, it is possible to change it back again :)

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

 

"Remember: If the game lets you do it, it's not cheating." -- Xarax

Link to comment
Share on other sites
