What is the precision of mathematical operations in CUDA?
All CUDA-capable NVIDIA GPUs support 32-bit integer and single-precision floating-point arithmetic. They follow the IEEE-754 standard for single-precision binary floating-point arithmetic, with some minor differences, notably that denormalized numbers are not supported. Later GPUs (for example, the Tesla C1060) also include double-precision floating point. See the programming guide for more details.