What's decimal floating point?
Decimal floating point is floating-point arithmetic performed using decimal digits, as opposed to binary floating point, which is performed using binary digits. The distinction between the two essentially boils down to two points:

• In a binary floating point format, the least significant digit represented by the format is a single bit, which is rounded (i.e. set to 0 or 1) depending on the rounding mode currently in effect and the binary digits (i.e. bits) that follow the bit being rounded. In a decimal floating point format, the least significant digit is a single decimal digit, which is rounded (i.e. set to a value between 0 and 9 inclusive) depending on the rounding mode currently in effect and the decimal digits that follow the digit being rounded.

• The choice of radix determines how the significand is affected by non-zero exponent values. In a binary system the significand is scaled by the power of two indicated by the exponent, whereas in a decimal system it is scaled by the power of ten indicated by the exponent.
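A quick sketch of the difference, using Python's stdlib `decimal` module as one example of a decimal floating point implementation (the built-in `float` is binary floating point):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

# Binary floating point: 0.1 and 0.2 have no exact binary
# representation, so the sum carries rounding error in the last bits.
print(0.1 + 0.2)                         # 0.30000000000000004

# Decimal floating point: 0.1 and 0.2 are exactly representable
# in decimal digits, so the sum is exact.
print(Decimal("0.1") + Decimal("0.2"))   # 0.3

# Rounding is applied to the least significant *decimal* digit,
# under the rounding mode of the current context.
getcontext().prec = 4                    # 4 significant decimal digits
getcontext().rounding = ROUND_HALF_EVEN
print(Decimal("1.2345") + Decimal("0"))  # 1.234 (round-half-even)

# The exponent scales the significand by a power of ten:
# 15 * 10^2 == 1500
print(Decimal("15E+2"))                  # 1.5E+3
```

This mirrors the two bullet points above: the decimal results round in units of a decimal digit rather than a bit, and the exponent is a power of ten rather than a power of two.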