
Why do we get different answers on one platform versus a Linux x86 platform?

The x86 architecture implements a floating-point stack using eight 80-bit registers. Each register uses bits 0-63 for the significand, bits 64-78 for the exponent, and bit 79 for the sign. This extended 80-bit real format is the default used by floating-point instructions: when values are loaded onto the floating-point stack, they are automatically converted into extended real format. The precision of the floating-point stack can be controlled, however, by setting the precision-control bits (bits 8 and 9) of the floating-point control word. In this way, the programmer can explicitly select standard IEEE double or single precision (the Intel documentation, however, states that this affects only add, subtract, multiply, divide, and square root). We have also noticed that, although extended precision is supposedly the default control-word setting, x86 Linux systems set it to double precision. Thus, we now also have a -pc option.
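For illustration, here is a minimal C sketch of reading those precision-control bits and switching them to double precision. It assumes glibc's <fpu_control.h> on x86 Linux (the _FPU_GETCW/_FPU_SETCW macros and the _FPU_EXTENDED/_FPU_DOUBLE masks) and code compiled to use x87 arithmetic; on x86-64, compilers default to SSE arithmetic, which this control word does not affect, and whether the difference is visible also depends on the compiler keeping the intermediate result in an x87 register. The probe expression is only an example, not part of any particular compiler's test suite:

    /* Sketch: toggle x87 precision control on x86 Linux (glibc).
       Bits 8-9 of the control word select a 24-, 53-, or 64-bit
       significand for add, subtract, multiply, divide, and sqrt. */
    #include <stdio.h>
    #include <fpu_control.h>

    static double probe(void)
    {
        /* 1e-17 is below double epsilon (~2.2e-16) but representable in
           the 64-bit extended significand, so (a + b) - a may differ
           between extended- and double-precision settings. */
        volatile double a = 1.0, b = 1e-17;
        return (a + b) - a;
    }

    int main(void)
    {
        fpu_control_t cw;

        _FPU_GETCW(cw);    /* read the current control word */
        printf("default control word: 0x%04x, probe = %g\n", cw, probe());

        /* Clear bits 8-9, then select 53-bit (IEEE double) precision. */
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
        _FPU_SETCW(cw);
        printf("double-precision mode: 0x%04x, probe = %g\n", cw, probe());
        return 0;
    }

With extended precision in effect, the probe can print 1e-17; after switching to 53-bit precision it prints 0, which matches the rounding you would get on a platform whose control word defaults to double precision. That difference in default settings is one way two platforms can give different answers for the same source code.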
