Machine Precision Calculation

Created on: January 18, 2025

Answered using OpenAI 01 by OpenAI01

Question

Machine precision (eps = 2^-52) can be computed by the following program (attributed to Cleve Moler):
# Machine precision
a = 4/3
b = a - 1
c = b + b + b
eps0 = abs(c - 1)
Run the program and prove its validity.

Thinking

Tracing floating point precision

OK, let's see: the program uses floating-point arithmetic to calculate values like a = 4/3, which might not represent 1/3 exactly. This leads me to the concept of machine epsilon.

Unraveling machine epsilon

I'm piecing together the details of machine epsilon in IEEE 754 double precision, which is typically 2^-52. This precision is pretty standard, though variations might reflect rounding differences.

Evaluating precision

I’m digging into forward error analysis to assess machine precision. It seems the difference c - 1 approximates 2^-52, which aligns with the floating-point representation.

Thought about machine precision calculation for 10 seconds

Answer

Below is an outline of how the code works and why it returns the machine precision in IEEE 754 double precision (commonly 2^-52 ≈ 2.22 × 10^-16).


1. The code

% Machine precision
a = 4/3;
b = a - 1;
c = b + b + b;
eps0 = abs(c - 1);
  • a = 4/3: In exact arithmetic, this is 1.333… (3 repeating). In binary floating point, however, 4/3 cannot be represented exactly, so a is stored with a small rounding error.
  • b = a - 1: This should ideally be 1/3, but 1/3 again cannot be represented exactly in binary. Thus, b carries another small rounding error.
  • c = b + b + b: In exact arithmetic, adding three copies of 1/3 recovers 1. But because b is already a slightly rounded approximation of 1/3, summing it three times does not perfectly return 1; instead, it slightly overshoots or undershoots.
  • eps0 = abs(c - 1): This difference from 1 ends up on the order of machine epsilon (≈ 2^-52 in double precision).
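To watch this happen bit by bit, here is the same program in Python (a minimal sketch, assuming Python's float is an IEEE 754 double, as it is on all mainstream platforms; float.hex() prints the exact stored value):

a = 4 / 3
b = a - 1
c = b + b + b
eps0 = abs(c - 1)

# Exact bit patterns of the intermediate results
print(a.hex())         # 0x1.5555555555555p+0 : 4/3 rounded to 53 significant bits
print(b.hex())         # 0x1.5555555555554p-2 : slightly less than 1/3
print(c.hex())         # 0x1.ffffffffffffep-1 : exactly 1 - 2**-52, just below 1
print(eps0)            # 2.220446049250313e-16
print(eps0 == 2**-52)  # True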

2. Why the result measures machine epsilon

  1. Definition of machine epsilon (in double precision)
    In IEEE 754 double precision, a floating-point number has 53 bits of precision in its significand (1 implicit bit + 52 fraction bits). The “machine epsilon” is conventionally defined as the difference between 1 and the next representable number greater than 1. In double precision,

    machine epsilon (ε_mach) = 2^-52 ≈ 2.220446049250313 × 10^-16.

    (Sometimes one sees 2^-53 in discussions, depending on whether you define epsilon as the gap at 1 or the gap at 0.5. The most common definition in numerical analysis texts is 2^-52.)

  2. Arithmetic steps and rounding

    • Step 1: a = 4/3. Although 4/3 is rational, it is not exactly representable in a binary format. Thus a = (4/3)(1 + δ_a), where δ_a is a small relative error of size ~2^-52.
    • Step 2: b = a - 1. We want 1/3, but we get b = (1/3)(1 + δ_b), with δ_b again on the order of 10^-16.
    • Step 3: c = b + b + b = 3b. Ideally, 3 · (1/3) = 1. Because b is only an approximation, c = 1 + δ_c, where δ_c is roughly on the order of δ_b.
    • Final difference: eps0 = |c - 1| = |δ_c|. This is typically on the order of the floating-point unit roundoff, i.e., the machine epsilon.
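To make these δ's concrete, here is a small Python sketch (assuming IEEE 754 doubles; fractions.Fraction(x) recovers the exact rational value the float x stores, so each δ can be computed with no rounding at all; the names delta_a, delta_b, delta_c simply mirror the symbols above):

from fractions import Fraction

a = 4 / 3
b = a - 1
c = b + b + b

# Exact relative errors of each step
delta_a = Fraction(a) / Fraction(4, 3) - 1
delta_b = Fraction(b) / Fraction(1, 3) - 1
delta_c = Fraction(c) - 1

print(delta_a == -Fraction(1, 2**54))  # True: |δ_a| is exactly 2**-54, within the 2**-52 bound
print(delta_b == -Fraction(1, 2**52))  # True: |δ_b| is exactly 2**-52
print(delta_c == -Fraction(1, 2**52))  # True: c is stored as exactly 1 - 2**-52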

By summing three times the (slightly inexact) representation of 1/3, the code amplifies the hidden rounding error just enough to pick it out from the final result c. We see c ≠ 1, and the magnitude of that discrepancy is about ε_mach.


3. A sketch of the forward-error “proof”

Below is a highly simplified demonstration of why this yields something close to 2^-52.

  1. Representing 4/3: Suppose floating-point arithmetic rounds to the nearest representable number. Let a = (4/3)(1 + δ_1), with |δ_1| ≲ 2^-52.
  2. Subtracting 1: b = a - 1 = (4/3 - 1)(1 + δ_2) = (1/3)(1 + δ_2), with |δ_2| ≲ 2^-52.
  3. Adding three copies: c = 3b = 1 · (1 + δ_2) = 1 + δ_2 (the two additions contribute no further rounding here, because each exact partial sum happens to be representable).
  4. Result: c - 1 = δ_2, so |δ_2| ~ 2^-52 ≈ 2.22 × 10^-16. Thus, eps0 ≈ 2.22 × 10^-16.

This argument can be made more precise using standard results from floating-point arithmetic error bounds (e.g. using the standard model |(x ⊗ y) - (x y)| ≤ u |x y| with unit roundoff u ≈ 2^-52), but even this outline shows why the final difference is of the order of machine epsilon.
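One can also verify that the only rounded operation in the whole program is the initial division 4/3; the subsequent subtraction and additions happen to be exact, so eps0 comes out as exactly 2^-52. A minimal Python check, again assuming IEEE 754 doubles:

from fractions import Fraction

a = 4 / 3
b = a - 1
c = b + b + b

print(Fraction(a) != Fraction(4, 3))   # True: 4/3 was rounded when stored
print(Fraction(b) == Fraction(a) - 1)  # True: the subtraction is exact (Sterbenz lemma)
print(Fraction(c) == 3 * Fraction(b))  # True: both additions are exact as well
print(abs(c - 1) == 2**-52)            # True: eps0 equals 2**-52 exactly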


4. Checking the result experimentally

If you run the above code in MATLAB, Octave, Python (NumPy), or any typical double-precision environment, you will get:

>> eps0
ans =
   2.22044604925031e-16

which matches the textbook value of 2^-52 ≈ 2.220446049250313 × 10^-16.
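The same number is also exposed directly as a built-in constant. For instance, a quick Python check (math.ulp requires Python 3.9+, and the NumPy line assumes NumPy is installed; eps in MATLAB/Octave reports the same value):

import math
import sys

import numpy as np

# Machine epsilon of 64-bit floats, queried three standard ways
print(sys.float_info.epsilon)            # 2.220446049250313e-16
print(math.ulp(1.0))                     # same value: the gap between 1.0 and the next double
print(np.finfo(np.float64).eps)          # 2.220446049250313e-16
print(sys.float_info.epsilon == 2**-52)  # True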


Conclusion

  • The code exploits the fact that 1/3 is not exactly representable in binary, and the small rounding errors become evident after subtracting 1 and then multiplying back by 3 (here, via the three additions).
  • The difference |c - 1| aligns with the granularity limit of double-precision floating point (i.e., the machine epsilon).

Hence, the short program is a neat demonstration of how floating-point arithmetic “reveals” the fundamental precision limit (ε_mach) of the machine.
