1.11 Numerical Errors Due to Finite-Precision Floating-Point Arithmetic

In general, since the precision of numbers used by a computer is limited, they are only approximations of the true values. Any calculation with such numbers therefore inevitably introduces uncertainties, which propagate according to the standard rules of error propagation (the same rules used to find the error of a result calculated from experimental data). In this respect, three levels of computation can be distinguished, which differ considerably in their contribution to the error of the final result: elementary operations (addition, multiplication, etc.; basic increase in uncertainty), function calls (many elementary operations \(\Rightarrow\) much larger increase in uncertainty), and processing of an algorithm (many function calls \(\Rightarrow\) even larger increase in uncertainty).
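As an illustration of how rounding errors accumulate over many elementary operations, the following minimal Python sketch (assuming IEEE 754 double precision, i.e. the ordinary float type) sums the inexactly representable value 0.1 one million times. The choice of value, the loop, and the comparison with compensated summation via math.fsum are illustrative assumptions, not taken from the text.

```python
# Sketch: accumulation of rounding errors over many elementary operations.
# Assumes IEEE 754 double precision (Python's built-in float).
from math import fsum

x = 0.1          # 0.1 has no exact binary representation
n = 1_000_000

# Naive accumulation: one rounding error per addition.
naive = 0.0
for _ in range(n):
    naive += x

single_op = n * x            # only one rounded operation
compensated = fsum([x] * n)  # compensated (Kahan-style) summation

print(naive)        # e.g. 100000.00000133288  (error grew with every addition)
print(single_op)    # e.g. 100000.00000000001  (error of a single operation)
print(compensated)  # e.g. 100000.00000000001  (best achievable in double precision)
```

The printed values may differ slightly between platforms, but the qualitative picture remains: the more elementary operations are chained, the larger the accumulated uncertainty.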

However, there are also additional sources of error that arise from the limited precision and the limited range of the numbers a computer can handle: When subtracting nearly equal numbers, the finite size of the mantissa leads to a loss of significant digits; this is called cancellation. Conversely, numbers that differ by many orders of magnitude cannot be combined correctly in an accumulated summation and/or subtraction, since the smaller contribution is partly or entirely lost. Finally, the finite range of representable numbers implies the possibility of numerical overflow or underflow; in the latter case the calculation usually continues with zero as the respective result.
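The following Python sketch illustrates these additional error sources with hand-picked example values (assuming IEEE 754 double precision); the specific numbers are assumptions chosen only to make each effect visible.

```python
# Sketch: cancellation, absorption of a small summand, overflow, and underflow.
# Assumes IEEE 754 double precision (Python's built-in float).
import sys

# Cancellation: subtracting nearly equal numbers destroys significant digits.
a = 1.0 + 1e-15
b = 1.0
print(a - b)                    # e.g. 1.1102230246251565e-15, ~10% relative error

# Absorption: a summand many orders of magnitude smaller is lost entirely.
print(1e16 + 1.0 - 1e16)        # 0.0 -- the 1.0 never entered the sum

# Overflow: results beyond the representable range become infinite.
print(sys.float_info.max * 10)  # inf

# Underflow: results too small are flushed to zero and the run continues.
print(1e-308 * 1e-308)          # 0.0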


