Ken Muse

What Developers Should Know About Floating Point Numbers


Let’s do some basic math. What is 9.99 + 0.01? If you said 10, you might actually be wrong! What if I told you 9.99999977089 might be the answer? What if I told you that (0.1 + 0.2) != 0.3 will return true in most programming languages? In fact, JavaScript will tell me that 0.1 + 0.2 is 0.30000000000000004. If you’re confused, you’re not alone. The secret life of floating point values is a fascinating topic that is often overlooked by developers.
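
If you want to try this yourself, here’s a quick C# sketch. The exact strings printed assume a recent .NET runtime, which uses shortest-roundtrip formatting by default:

    using System;

    double sum = 0.1 + 0.2;
    Console.WriteLine(sum == 0.3);  // False
    Console.WriteLine(sum);         // 0.30000000000000004 on recent runtimes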

When I was a developer, I remember an intriguing issue my team was seeing. They relied on a third-party web service to perform some complex mathematical operations that were essential to the product. The service was frequently returning incorrect results, and the team was struggling to understand why. Working with the vendor, they also noticed an interesting issue: in some cases, the values being sent (or received) didn’t match the values logged by the sending system. Neither team could understand why the numbers were changing.

The root of the problem was that the web service relied on serializing floating point values. The developer of the service understood that floating point math was faster, but they didn’t realize that speed came with an important tradeoff. In this post, I’ll explain what was happening and how to avoid those problems. I’ll use C# for some of the explanations, but the concepts apply to any language that uses floating point values. There’s a LOT more to this topic, but I’ll try to keep it simple.

Precisely imprecise

In C#, the float type is documented as having a precision of 6-9 significant digits, and double has a precision of 15-17. In truth, that’s not quite accurate. It’s a bit of a simplification that starts with the fact that computers are optimized for integers. The IEEE 754 standard defines a common approach to working with floating point values and storing them. The approach relies on storing an integer and a power of two. If you multiply the integer by that power of two, you get the floating point value. Unfortunately, not all numbers can be represented this way. Fractional values are stored as negative powers of 2, where the first bit represents 1/2, the next 1/4, and so on. Some fractional values can’t really be represented that way, such as 1/10. It becomes an infinitely repeating value (similar to what happens with 1/3 in base 10). The IEEE specification provides a standardized way to handle those situations.

Most platforms rely on storing the values in registers that have more precision than the values being represented. When the value is displayed, it is rounded to the precision of its data type. The float data type uses 4 bytes (32 bits): 1 bit for the sign, 23 bits for the integer (mantissa), and 8 bits for the power (exponent). The double data type uses 8 bytes (64 bits): 1 bit for the sign, 52 bits for the mantissa, and 11 bits for the exponent. You’ll notice that both of these can fit within the 80-bit extended precision registers used by the x86 architecture. The extra precision minimizes the rounding errors that occur with floating point math.
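
If you want to poke at those bits yourself, here’s a minimal C# sketch. It assumes a modern .NET runtime, where BitConverter.SingleToInt32Bits is available:

    using System;

    // Split the 32 bits of a float into its sign, exponent, and mantissa fields.
    int bits = BitConverter.SingleToInt32Bits(9.99f);

    int sign     = (bits >> 31) & 0x1;     // 1 bit
    int exponent = (bits >> 23) & 0xFF;    // 8 bits, biased by 127
    int mantissa = bits & 0x7FFFFF;        // 23 bits, implicit leading 1

    Console.WriteLine($"sign={sign} exponent={exponent} mantissa={mantissa}");
    // For 9.99f this prints: sign=0 exponent=130 mantissa=2086666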

Under the covers, the actual value stored for 9.99 is approximately 9.98999977112 (or 5237637 x 2⁻¹⁹). For displaying values in .NET, floats (singles) are rounded to 7 digits and doubles are rounded to 15 digits. More on that in a moment. The value is then formatted in the shortest possible form. Continuing with the example, let’s start with the first 8 digits of 9.98999977112, or 9.9899997. To reduce this to 7 digits, we round the value. The new number is 9.990000. This is equivalent to 9.99, which is the shortest equivalent value.

Similarly, 0.01 is stored as 0.00999999977 (or 5368709 x 2⁻²⁹). Rounding that to 7 digits, this becomes 0.010000, which is displayed as 0.01. Most debuggers will show you the rounded number. To add to the confusion, higher precision can be used during calculations, depending on the processor and platform. As long as the runtime honors the declared precision of the final result, it can carry extra precision in the intermediate values.
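
In C#, a couple of format specifiers make the hidden digits visible. This is just a sketch; the exact strings assume a recent .NET runtime:

    using System;

    float f = 9.99f;

    Console.WriteLine(f);                 // 9.99 (shortest roundtrip form)
    Console.WriteLine(f.ToString("G9"));  // 9.98999977 (9 significant digits)
    Console.WriteLine((double)f);         // 9.989999771118164 (widened to double)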

As a result, adding these two values together gives approximately 9.99999977089. Of course, the debugger will show 10 because we’re rounding to 7 digits. So why might (9.99f + 0.01f) == 10f return false? It turns out that the value 10 can be precisely represented as 5 x 2¹. When the sum is carried with extra precision, you end up comparing an imprecise result from a calculation to a precise value.
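
This is also why the usual advice is to avoid exact equality and compare within a tolerance instead. Here’s a minimal sketch; NearlyEqual is a hypothetical helper, and the right tolerance depends on your data:

    using System;

    Console.WriteLine(0.1 + 0.2 == 0.3);             // False
    Console.WriteLine(NearlyEqual(0.1 + 0.2, 0.3));  // True

    // Hypothetical helper, not a framework method. Pick a tolerance that
    // makes sense for your domain.
    static bool NearlyEqual(double a, double b, double tolerance = 1e-9)
        => Math.Abs(a - b) <= tolerance;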

This gets more interesting when you exceed the number of significant digits. For example, try to add 999999.11f to 1.11f. The answer is a surprising 1000000.25! Because the values are imprecise and have more significant digits than can be represented, the sum is rounded to the nearest representable value. In essence, the value 999999.125 is added to 1.1100000, creating an answer of 1000000.23500000. When converted to a single precision float, the loss of precision alters the value to 1000000.25.
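
You can reproduce this in C# (the printed result assumes single precision arithmetic on a typical modern runtime):

    using System;

    float a = 999999.11f;      // actually stored as 999999.125
    float b = 1.11f;           // actually stored as roughly 1.11000001

    Console.WriteLine(a + b);  // 1000000.25, not 1000000.22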

You may notice that in this case, it’s more than 7 digits. How is that possible? The answer is that the rounding process is actually more complex. It involves some logic to ensure the values can round trip. That is, it ensures the string representation can be converted back to an equivalent floating point value. For example, 1000000.235 cannot be represented with a single precision value, but 1000000.25 can. There is also logic to identify the number of digits to show after the decimal point. If you want to understand how .NET does this in more depth, check out this PR which tries to create the shortest roundtrip string representation for floating point values.

These rules are different in other languages, but the principles are the same. This is part of where things get challenging. While the rules are standardized, some aspects are not. That includes how different programming languages present the values, including any rounding. That also means that serializing and deserializing the displayed values between platforms can result in numbers that are slightly different.

Double or nothing

Now let’s look at another fun math problem. What is 22.1234d + 77.876d? If you answered 99.9994, then you must be using decimal instead of double! In this case, the increased precision is actually working against us. In both C# and JavaScript, the sum is 99.99940000000001. I bring this up to point out that switching to doubles won’t necessarily eliminate the problem. The numbers remain imprecise, so mathematical operations can introduce slight errors. In most cases, these errors can be rounded or ignored. If you’re performing multiple operations (or operations with a mix of large and small numbers), they can compound the errors.
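
A quick way to see the difference (the exact output assumes a recent .NET runtime):

    using System;

    Console.WriteLine(22.1234 + 77.876);    // 99.99940000000001 (double)
    Console.WriteLine(22.1234m + 77.876m);  // 99.9994 (decimal keeps base-10 digits)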

In short, if you’re using floating point numbers, you need to understand whether your operations will result in significant errors and handle those appropriately. In our example, we could manually round the result to a specific number of significant digits. In some cases, it may mean understanding whether the issues will be impactful. It may not make much of a difference with a 3D rendering, but if you’re calculating financial transactions or building missile defense systems, it could be a big deal.
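
For example, rounding the result to the precision you actually care about is often enough. A minimal sketch:

    using System;

    double sum = 22.1234 + 77.876;        // 99.99940000000001
    double rounded = Math.Round(sum, 4);  // round to 4 decimal places

    Console.WriteLine(rounded);           // 99.9994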

Services, serialization, and precision oh my!

When these values are serialized for a web service, things can get more unusual. Depending on the serializer, either the rounded value or the stored full-precision value can be sent. This means that the value that is intended to be sent may be different from the value received. This problem can be compounded if one side is using precise data types – such as decimal in C# – and the other side is using floating point values. Because of the precision issues, the values may no longer match. In addition, I described specific behaviors for .NET. Each platform and language (and even each version of .NET) may have different behaviors. This can include smaller registers, different rounding behaviors, and different precision guarantees.
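
As an illustration, here’s a hedged sketch using System.Text.Json. The exact payloads depend on the serializer and runtime version, but it shows how the same conceptual value can serialize differently depending on its type:

    using System;
    using System.Text.Json;

    float price = 9.99f;

    Console.WriteLine(JsonSerializer.Serialize(price));          // typically 9.99
    Console.WriteLine(JsonSerializer.Serialize((double)price));  // typically 9.989999771118164
    Console.WriteLine(JsonSerializer.Serialize(9.99m));          // typically 9.99 (decimal)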

In the case of the team I was working with, the issues were aggravated because the values were multiplied by large numbers during certain steps. This meant that values which were normally truncated by the rounding process suddenly became significant digits. For example, consider the floating point value 9.99f being multiplied by 1000. Instead of 9990, the value could be 9989.99977112. As additional calculations were applied, the slight errors compounded. The result was that the final value was significantly different from the expected value.
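
A sketch of how that amplification can look, assuming the float is widened to double before the multiplication (a common side effect of mixing types):

    using System;

    float price = 9.99f;                   // stored as roughly 9.98999977
    double scaled = (double)price * 1000;  // the error is now in a significant digit

    Console.WriteLine(scaled);             // ~9989.99977 instead of 9990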

This situation is not limited to RESTful services and APIs. It can also happen when using third-party libraries. Depending on the data types used (and the author’s understanding of those types), you can encounter the same issues. Some libraries deeply consider these issues and provide some guarantees about precision. Others may not understand these challenges, leaving you to reconcile the results. In a worst-case scenario, several of these issues can combine to create a perfect storm.

The workaround

If you need precise values, you will need to use a precise data type, such as decimal, and recognize the performance tradeoffs. While these types are slower, they don’t require the same understanding of the precision issues. As a result, they are very predictable. In terms of speed, the slight difference may be worth the reduced risk. If you truly need the performance, you’ll need to either master floating point calculations or find ways to efficiently utilize integers. Doom’s fixed-point arithmetic is one example of that approach.
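
For example, with decimal the earlier surprises go away (at the cost of slower arithmetic):

    using System;

    Console.WriteLine(0.1m + 0.2m == 0.3m);  // True
    Console.WriteLine(9.99m + 0.01m);        // 10.00
    Console.WriteLine(22.1234m + 77.876m);   // 99.9994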

If you’re building RESTful services, then make sure that you are not defining the service (or its implementation) to use single or double precision values. With .NET, that means serializing decimal values and avoiding float and double for the interfaces. This ensures that only precise values will be serialized, even if you are using floating point calculations internally.
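
As a sketch, the contract types can expose decimal even if internal calculations use floating point; the type and member names here are hypothetical:

    using System;

    // Hypothetical DTO: the wire contract only ever sees decimal values.
    public record InvoiceLine(decimal UnitPrice, int Quantity)
    {
        // Internal math could use double for speed; the result is rounded and
        // exposed as decimal at the boundary.
        public decimal Total => Math.Round((decimal)((double)UnitPrice * Quantity), 2);
    }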

If you really want to master this topic, I strongly recommend reading What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg, 1991). For .NET folks, there’s also Jon Skeet’s Binary floating point and .NET, which includes some sample code.

Diving a bit deeper

For those that want to understand the math and details at a deeper level, read on! To keep this short, I’ll focus on the handling of a specific value (rather than a deep dive into all of IEEE 754 and its edge cases).

The value 9.99 can be represented in binary as:

9 = 1001
0.99 = 0.1111 1101 0111 0000 1010 0011

The fractional value is imprecise. Notice that we stopped at 24 bits; that is already more than the mantissa can hold (23 bits), and the bits that follow are not all zero. As a result, we’ve lost some accuracy since this value can’t be precisely represented. Remember that floating point values are not precise. This is why exact equality comparisons (==) are discouraged in most languages.

Next, the value is normalized so there is one non-zero digit to the left of the decimal:

1001.1111 1101 0111 0000 1010 0011 = 1.001 1111 1101 0111 0000 1010 0011 x 2³

We now have an expression with a power of 2. The exponent value, 3, is normalized by adding a bias value of 127 (2⁷-1). Using a bias allows positive and negative exponents to be preserved. As a side note, if we were creating a double precision value, the bias would be 1023 (2¹⁰-1).

3 + 127 = 130

Converting 130 to binary gives us the value 1000 0010.

Next, we also need to normalize the mantissa. First, since the leading value is always 1, it can be removed:

1.001 1111 1101 0111 0000 1010 0011 => 1̶.̶001 1111 1101 0111 0000 1010 0011

Now, remove the least significant bits from the right side until we reach the maximum length of the mantissa (23 bits). (Strictly speaking, IEEE 754 rounds to the nearest representable value rather than simply truncating; in this case the dropped bits begin with 0, so the result is the same.) This is where the values can change the most, since each 1 removed is a loss of precision; that portion of the value is no longer represented in the floating point number.

001 1111 1101 0111 0000 1010 0011 => 001 1111 1101 0111 0000 1010 0̶0̶1̶1̶

Now, construct the float:

  • Sign: 0 (positive)
  • Normalized exponent: 130 (1000 0010)
  • Normalized mantissa: 2086666 (001 1111 1101 0111 0000 1010)

The IEEE 754 single-precision binary representation of 9.99 is therefore: 0 1000 0010 001 1111 1101 0111 0000 1010.
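
You can verify this construction by assembling the bits and converting them back (a sketch using modern .NET APIs):

    using System;

    int sign     = 0;
    int exponent = 130;      // 3 + 127
    int mantissa = 2086666;  // 001 1111 1101 0111 0000 1010

    int bits = (sign << 31) | (exponent << 23) | mantissa;

    Console.WriteLine(bits.ToString("X8"));                   // 411FD70A
    Console.WriteLine(BitConverter.Int32BitsToSingle(bits));  // 9.99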

Converting this back to a float, we can reverse the steps. It’s worth recognizing that our floating point value is equivalent to 5237637 x 2⁻¹⁹, which is approximately 9.98999977112. Since this is a single precision value, we can then round it to 9.99, the original value. Since we can recover the original value, the number is able to round trip.

How did I know to use 5237637 x 2⁻¹⁹? Let’s walk through that math. The mantissa is converted to an integer value by reintroducing the leading 1, then removing any 0’s on the right. That value is then converted to its decimal (base 10) equivalent:

001 1111 1101 0111 0000 1010 => 1 001 1111 1101 0111 0000 1010̶ = 5,237,637

Because we’re treating the mantissa as an integer, we’re essentially multiplying it by 2⁻²³. To make the math work, add the original exponent, 3, to this exponent, -23. For each trailing zero we removed, add 1 to the exponent. In this case, there was a single trailing zero, so we add a single 1 (-23 + 3 + 1). The result is a new exponent, -19 (or more appropriately, 2⁻¹⁹).
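
You can check that arithmetic quickly (a small sketch; the printed digits assume shortest-roundtrip formatting on a recent runtime):

    using System;

    double stored = 5237637 * Math.Pow(2, -19);

    Console.WriteLine(stored);         // 9.989999771118164
    Console.WriteLine((float)stored);  // 9.99 (rounds back to the original)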

There is one more important piece to understand. The programming platform can alter these behaviors somewhat. The runtime can actually store those values in registers with higher precision as long as the final value is preserved. The default x86 extended precision format is an 80-bit register (storing a 63 bit fraction). This register is primarily intended to store intermediate calculated values, such as the sum of two numbers. That means that it’s possible for the runtime to treat a float as having a much larger mantissa (and higher precision). At the same time, only a portion of the value is guaranteed. That round trip value is what’s displayed when debugging, not the full numeric value in the register!

And that’s the basics. If you want to learn more, read the resources I mentioned earlier. You’ll learn some interesting facts and develop a much deeper understanding of how to properly work with floating point values.