Machine Precision Calculation
Created: January 18, 2025
Tracing floating point precision
OK, let's see: the program uses floating-point arithmetic to calculate values like 4/3, which cannot be represented exactly in binary. This leads me to the concept of machine epsilon.
Unraveling machine epsilon
I'm piecing together the details of machine epsilon in IEEE 754 double precision, which is typically 2^-52. This value is standard, though one sometimes sees 2^-53 instead, depending on the rounding convention used to define it.
Evaluating precision
I’m digging into forward error analysis to assess machine precision. It seems the difference approximates 2^-52 ≈ 2.22×10^-16, which aligns with the floating-point representation.
Below is an outline of how the code works and why it returns the machine precision in IEEE 754 double precision (commonly eps = 2^-52 ≈ 2.22×10^-16).
% Machine precision
a = 4/3;
b = a - 1;
c = b + b + b;
eps0 = abs(c - 1);
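As a quick sanity check (a minimal sketch using MATLAB/Octave's built-in eps constant), the computed value can be compared against the machine epsilon directly:

% Compare the computed value against the built-in machine epsilon.
a = 4/3;
b = a - 1;
c = b + b + b;
eps0 = abs(c - 1);
fprintf('eps0 = %g, eps = %g, equal: %d\n', eps0, eps, eps0 == eps)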
a = 4/3: In exact arithmetic, this is 4/3. In binary floating-point, however, 4/3 cannot be represented exactly, so a is stored with a small rounding error (see the numeric check after this list).

b = a - 1: This should ideally be 1/3, but 1/3 again cannot be represented exactly in binary. Thus, b carries another small rounding error.

c = b + b + b: In exact arithmetic, adding three copies of 1/3 recovers 1. But because b is already a slightly rounded approximation of 1/3, summing it three times does not perfectly return 1; instead, it slightly overshoots or undershoots.

eps0 = abs(c - 1): This difference from 1 ends up on the order of machine epsilon (2^-52 ≈ 2.22×10^-16 in double precision).
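To make the hidden rounding visible, here is a minimal check (a sketch for MATLAB/Octave; num2hex prints the raw IEEE 754 bit pattern of a double):

% Inspect the rounded binary representation of 4/3.
a = 4/3;
b = a - 1;
c = b + b + b;
disp(num2hex(a))                  % bit pattern of the rounded 4/3
fprintf('c - 1 = %g\n', c - 1)    % nonzero: the rounding error survived
fprintf('eps   = %g\n', eps)      % built-in machine epsilon, for scale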
Definition of machine epsilon (in double precision)
In IEEE 754 double precision, a floating-point number has 53 bits of precision in its significand (1 implicit bit + 52 fraction bits). The “machine epsilon” is conventionally defined as the difference between 1 and the next representable number greater than 1. In double precision,

eps = 2^-52 ≈ 2.220446049250313×10^-16
(Sometimes one sees 2^-53 in discussions, depending on whether you define epsilon as the gap at 1 or the gap at 0.5. The most common definition in numerical analysis texts is eps = 2^-52.)
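A short sketch illustrating this definition (assumes IEEE 754 doubles with the default round-to-nearest-even mode):

% eps is the gap between 1 and the next representable double above 1.
disp((1 + 2^-52) - 1 == eps)         % true: the gap at 1 is exactly 2^-52
disp(1 + 2^-53 == 1)                 % true: adding 2^-53 to 1 is rounded away
disp((0.5 + 2^-53) - 0.5 == eps/2)   % true: the gap at 0.5 is 2^-53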
Arithmetic steps and rounding
By summing three copies of the (slightly inexact) representation of 1/3, the code amplifies the hidden rounding error just enough to pick it out of the final result c. We see c ≠ 1, and the magnitude of that discrepancy is about 2^-52.
Below is a highly simplified demonstration of why this yields something close to 2^-52. In binary, 4/3 = 1.010101... with the pattern repeating forever, so the stored value is a = 4/3 + δ with |δ| ≈ (1/3)·2^-52. The subtraction b = a - 1 is exact (the two operands are within a factor of two of each other), so b = 1/3 + δ. Summing three copies then gives c ≈ 1 + 3δ, and therefore

eps0 = |c - 1| ≈ 3|δ| ≈ 2^-52 ≈ 2.22×10^-16.

This argument can be made more precise using standard results from floating-point arithmetic error bounds (e.g. using the standard model with unit round-off u = 2^-53), but even this outline shows why the final difference is of the order of machine epsilon.
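This can be verified exactly on an IEEE 754 machine (a minimal sketch; the equalities below assume the default round-to-nearest mode):

% The demo undershoots 1 by exactly one unit in the last place.
a = 4/3; b = a - 1; c = b + b + b;
disp(c == 1 - 2^-52)      % true: c equals 1 - 2^-52 exactly
disp(abs(c - 1) == eps)   % true: the discrepancy is exactly eps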
If you run the above code in MATLAB, Octave, Python (NumPy), or any typical double-precision environment, you will get:
>> eps0
ans = 2.22044604925031e-16
which matches the textbook value of 2^-52 ≈ 2.22×10^-16.
Hence, the short program is a neat demonstration of how floating-point arithmetic “reveals” the fundamental precision limit (eps ≈ 2.22×10^-16) of the machine.