The C++ Standards Committee Hate Me

This is the only possible reason I can think of for the fact that the default "precision" value for an ostream is 6. What does this mean? It means that any floating-point number it prints out will be rounded to 6 significant digits.
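Here's a minimal sketch of what that means in practice (the values in the comments assume a conforming implementation with the default format flags):

    #include <iostream>

    int main() {
        std::cout << std::cout.precision() << "\n";  // 6: the default precision
        std::cout << 3.14159265358979 << "\n";       // 3.14159: rounded to 6 significant digits
    }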

What?! Why?! Let's put this into perspective. A double-precision floating-point number has a 53-bit mantissa. Let's be generous, and say that each decimal digit requires 4 bits to represent it. We'll even ignore the hidden bit, which leaves 52. That's still 13 digits, and the true figure is nearer 15. Even if you then go and assume that your calculations have destroyed half the precision, pretty much a worst case unless you're either doing something wrong or are in hardcore numerical-analysis land, that's still 7 significant figures. And that's a very generous worst-case analysis.
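You don't even have to do the arithmetic yourself; the library will tell you (the printed values assume IEEE 754 doubles):

    #include <iostream>
    #include <limits>

    int main() {
        std::cout << std::numeric_limits<double>::digits << "\n";    // 53 mantissa bits
        std::cout << std::numeric_limits<double>::digits10 << "\n";  // 15 decimal digits held exactly
    }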

Let's look at it another way. 32-bit ints go up to 10 significant figures. They can be stored exactly in an ordinary double. Print them directly and that's fine, but cast them through a double first and the default will round them to 6 significant digits when you print them.
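For example (again assuming the default stream state):

    #include <iostream>

    int main() {
        int n = 2147483647;              // largest 32-bit signed int: 10 digits
        std::cout << n << "\n";          // 2147483647, exact
        std::cout << double(n) << "\n";  // 2.14748e+09 at the default precision of 6
    }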

I have a program that reads in numbers and prints them out again. I assumed the defaults for precision were sensible. I lost. I have had to set the precision on my ostreams explicitly before writing to them, and I can see no good reason why I should have to. I'm sure there are situations where 6 significant figures is exactly what you want. On the other hand, I would very much like that to be something you have to ask for, rather than a default that is incredibly stupid for everyday use.
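The workaround, for what it's worth, looks something like this (digits10 + 2 comes to 17 for IEEE doubles, which is enough digits to round-trip any value; the commented output assumes the usual IEEE 754 setup):

    #include <iostream>
    #include <iomanip>
    #include <limits>

    int main() {
        double x = 0.1234567890123456;
        std::cout << x << "\n";  // 0.123457 at the default precision

        // Ask for enough digits that reading the output back gives the same double.
        std::cout << std::setprecision(std::numeric_limits<double>::digits10 + 2)
                  << x << "\n";
    }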

Pah.

Posted 2008-12-07.