The loss of accuracy during a single computation with floating-point numbers usually isn’t enough to worry about. However, if you compute a value that is the result of a sequence of floating-point operations, the error can accumulate and greatly affect the computation itself. Here is an attempt to compute the value of pi using one of its many series representations:
BEGIN {
    x = 1.0 / sqrt(3.0)
    n = 6
    for (i = 1; i < 30; i++) {
        n = n * 2.0
        x = (sqrt(x * x + 1) - 1) / x
        printf("%.15f\n", n * x)
    }
}
When run, the early errors propagate through later computations, causing the loop to terminate prematurely after attempting to divide by zero:
$ gawk -f pi.awk
-| 3.215390309173475
-| 3.159659942097510
-| 3.146086215131467
-| 3.142714599645573
...
-| 3.224515243534819
-| 2.791117213058638
-| 0.000000000000000
error→ gawk: pi.awk:6: fatal: division by zero attempted
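The culprit is catastrophic cancellation: as x becomes small, sqrt(x * x + 1) is very close to 1, and the subtraction in sqrt(x * x + 1) - 1 wipes out most of the significant digits. As a sketch (not part of the original program), the recurrence can be rewritten in an algebraically equivalent form that avoids the subtraction:

BEGIN {
    x = 1.0 / sqrt(3.0)
    n = 6
    for (i = 1; i < 30; i++) {
        n = n * 2.0
        # same value as (sqrt(x * x + 1) - 1) / x, but with no cancellation
        x = x / (sqrt(x * x + 1) + 1)
        printf("%.15f\n", n * x)
    }
}

Multiplying the numerator and denominator of the original expression by sqrt(x * x + 1) + 1 removes the cancellation, and the printed values should now settle down near pi instead of blowing up.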
Here is an additional example where the inaccuracies in internal representations yield an unexpected result:
$ gawk 'BEGIN {
>   for (d = 1.1; d <= 1.5; d += 0.1)    # loop five times (?)
>       i++
>   print i
> }'
-| 4
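The loop body executes only four times because 0.1 has no exact binary representation; the repeated additions leave d slightly greater than 1.5 on what should be the fifth pass, so the test d <= 1.5 fails. If you simply want a fixed number of iterations, one common workaround (sketched here with hypothetical variable names) is to count with an exact integer and derive d from it:

$ gawk 'BEGIN {
>   for (i = 1; i <= 5; i++) {     # exact integer count
>       d = 1 + i / 10             # d takes roughly 1.1, 1.2, ..., 1.5
>       count++
>   }
>   print count
> }'
-| 5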