
Why not let the compiler calculate constants?


Because existing compilers only use the native built-in floating-point precision, often only 64-bit with a guaranteed accuracy of just 15 decimal digits. Using these pre-computed constants gives the best chance of portability. For example, it may be slightly better to use third_pi rather than pi/3, because each single constant is produced at a higher precision than the floating-point hardware and/or software that the compiler is certain to use for its own computation.

The constants are entirely fixed at compile time, so there is no code to call functions such as log, nor any need to include the header files that declare those functions, for example <cmath>. This may speed compilation, reduce file accesses and dependencies, and facilitate optimisation. When debugging code (often with fewer optimisation options) the constant is never calculated at run-time, so behaviour when debugging is independent of compiler type, version, and compile options like debug/release and optimisation.


Experts123