Where can I get information about GPU floating point precision? Is it IEEE-compliant?
Current NVIDIA graphics hardware provides 32-bit (s23e8) floating point arithmetic that is very similar to the arithmetic specified by the IEEE 754 standard, but not identical. The storage format is the same (see above), but the arithmetic can produce slightly different results: on NVIDIA hardware, some rounding is done differently, and denormals are typically flushed to zero. Current ATI hardware performs all of its floating point arithmetic at 24-bit precision (s16e7), even though it stores values in the IEEE standard 32-bit format.

Both NVIDIA and ATI also provide a "half-precision" 16-bit (s10e5) floating point storage format, and some NVIDIA GPUs can perform half-precision arithmetic more quickly than single-precision arithmetic.

No GPU currently provides double-precision storage or double-precision arithmetic natively in hardware. There are, however, several ongoing efforts to emulate double precision, either through a single-double approach (pairing two single-precision values to roughly double the effective mantissa) or through CPU-GPU interplay.
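To make the single-double idea concrete, here is a minimal CPU-side sketch in C of the "double-single" trick used in libraries such as DSFUN90: each extended-precision value is stored as an unevaluated sum of two floats, and an error-free addition recovers the bits that a plain single-precision add would drop. The names (ds_set, ds_add, ds_to_double) are illustrative, not from any particular GPU library, and the sketch assumes float expressions really round to single precision (e.g. SSE math on x86), just as they would on the GPU.

```c
#include <stdio.h>

/* A "double-single" value: the true value is hi + lo, with |lo| much
 * smaller than |hi|.  Together the two mantissas give roughly twice
 * the precision of a single float. */
typedef struct { float hi, lo; } ds;

/* Split a double into a double-single pair: hi holds the leading bits,
 * lo the residual that did not fit in a single float. */
static ds ds_set(double x) {
    ds r;
    r.hi = (float)x;
    r.lo = (float)(x - (double)r.hi);
    return r;
}

static double ds_to_double(ds a) { return (double)a.hi + (double)a.lo; }

/* Error-free addition of two double-single values (the dsadd algorithm
 * from DSFUN90): t2 collects the rounding error of the leading add. */
static ds ds_add(ds a, ds b) {
    ds r;
    float t1 = a.hi + b.hi;
    float e  = t1 - a.hi;
    float t2 = ((b.hi - e) + (a.hi - (t1 - e))) + a.lo + b.lo;
    r.hi = t1 + t2;
    r.lo = t2 - (r.hi - t1);
    return r;
}

int main(void) {
    /* 1 + 1e-8 is lost entirely in single precision ... */
    float single_sum = 1.0f + 1.0e-8f;
    /* ... but survives in the double-single representation. */
    ds    ds_sum     = ds_add(ds_set(1.0), ds_set(1.0e-8));

    printf("single precision : %.17g\n", (double)single_sum);
    printf("double-single    : %.17g\n", ds_to_double(ds_sum));
    return 0;
}
```

Running this prints 1 for the plain single-precision sum, while the double-single pair recovers approximately 1.00000001, illustrating the extra mantissa bits the paired representation preserves.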