Why does .NET get simple arithmetic wrong?
.NET doesn’t get arithmetic wrong, as such – it just does arithmetic differently from the way you might expect. For instance, when you write 0.1 in code to be stored in a double variable, the value actually stored isn’t 0.1. It’s the closest value to 0.1 that the double type can represent, but it isn’t exactly 0.1. That’s because float and double are binary floating point types, and 0.1 (decimal) can’t be represented exactly in base 2. For more information on this, see my article on .NET floating point types.
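As a quick illustration (a minimal sketch of my own, not code from the linked article), printing a double with the round-trip "G17" format shows the value that is actually stored, and adding 0.1 ten times shows how the error accumulates:

```csharp
using System;

class FloatingPointDemo
{
    static void Main()
    {
        double d = 0.1;

        // "G17" prints enough digits to round-trip a double,
        // revealing that the stored value isn't exactly 0.1.
        Console.WriteLine(d.ToString("G17"));   // 0.10000000000000001

        // Adding 0.1 ten times doesn't give exactly 1.0, because
        // each addition works on the inexact binary value.
        double sum = 0;
        for (int i = 0; i < 10; i++)
        {
            sum += 0.1;
        }
        Console.WriteLine(sum == 1.0);          // False
        Console.WriteLine(sum.ToString("G17")); // 0.99999999999999989
    }
}
```

This is also why comparing binary floating point results with == is rarely a good idea.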