Math within a computer will never be perfect. Python, as Matt points out, tries to do the best it can with what it has. By default (in Python 2), it assumes all numbers are integers and that computation using them should only be as accurate as an integer can be. Hence:

>>> 1/10
0

We all know that one divided by ten is 0.1, but Python seems to get it wrong here. Actually, it's returning `floor(1/10)` because it assumes that, since you're using integers, you only want values as accurate as an integer in return. So, we need to force one of the numbers to be a `float`, thereby informing Python that we're ready to receive a float as a response. We can do that by including a decimal point, or by wrapping the number in the `float()` function. Like this:

>>> 1/10.0
0.10000000000000001
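A quick sketch of the two kinds of division. Note that this is written in modern Python 3 syntax, where `/` always returns a float and `//` is the floor-division operator that reproduces the old integer behavior described above:

```python
# Floor division: discards the fractional part, like Python 2's 1/10.
print(1 // 10)        # 0

# Forcing a float operand, as described above:
print(1 / 10.0)       # a close approximation of 0.1
print(float(1) / 10)  # same idea, using the float() function
```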

WHAT? You may be wondering why the value isn’t 0.1. It should be, right? Think about it for a second. A computer stores everything in binary. How would you represent 0.1 in binary? Go ahead… think about it. Well… in binary we’d actually be computing 0001/1010, which results in the following binary number:

0.0001100110011001100110011001100110011001100110011001100110011001100110011001100110011001100110011001...

That’s right. You can’t accurately store the value of 1/10 in binary, just like you can’t accurately store the value of 1/3 in decimal. So, Python does the best it can by storing a value that is a close approximation to 1/10. And that’s the value it returns.
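You can actually see the approximation Python stores. The standard library's `fractions.Fraction`, when constructed from a float, recovers the exact binary fraction behind it:

```python
from fractions import Fraction

# Fraction(0.1) shows the exact value the float really holds.
exact = Fraction(0.1)
print(exact)                      # 3602879701896397/36028797018963968
print(exact == Fraction(1, 10))   # False: close to 1/10, but not equal
```

The denominator is a power of two (2**55), which is all binary floating point can ever offer; 1/10 simply isn't expressible that way.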

Okay. So, you understand that. Now, let’s compute the value of 1 - 0.9. It should be 0.1:

>>> 1 - 0.9
0.099999999999999978

I’m sure that with only a few seconds of thought, you can figure out WHY we get a different value here than we did for 0.1. You see, Python first computes the value of 0.9 and then subtracts that number from 1. Like this:

>>> x = 0.9
>>> x
0.90000000000000002
>>> 1 - x
0.099999999999999978
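You can inspect the stored value of 0.9 directly with the standard library's `decimal.Decimal`, which, when built from a float, prints every digit of the approximation (modern Pythons display the shorter `0.9` by default, so this is the easiest way to see it):

```python
from decimal import Decimal

# Decimal(float) exposes the exact value the float stores.
print(Decimal(0.9))      # slightly more than 0.9
print(Decimal(1 - 0.9))  # slightly less than 0.1
print(1 - 0.9 == 0.1)    # False
```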

You should also notice that 0.1 added to itself 10 times does not equal 1.0:

>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
0.99999999999999989
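This is why you should never compare floats with `==`. A small sketch of two stdlib alternatives: `math.isclose` compares with a tolerance, and `decimal.Decimal` (built from strings) does exact decimal arithmetic:

```python
import math
from decimal import Decimal

# The rounding error accumulates with each addition.
total = sum([0.1] * 10)
print(total == 1.0)              # False
print(math.isclose(total, 1.0))  # True: compare with a tolerance instead

# For exact decimal arithmetic (e.g. money), use Decimal built
# from strings, not from floats:
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))  # True
```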

And you thought you understood math.

It should be noted that these problems are not specific to Python; it is only being used as an example. The same issues exist in almost every floating-point implementation in almost every programming language in existence. Do you know of one where this is not the case?