Taming Floating-Point Errors: Machine Epsilon and Practical Examples in Python
Machine epsilon is the gap between 1.0 and the next representable floating-point number; it bounds the relative rounding error of a single arithmetic operation.
Here's how you can find the machine epsilon in Python using NumPy:
import numpy as np
# Print the machine epsilon for floats
print(np.finfo(float).eps)
This code will output a value around 2.220446049250313e-16
on most systems. That number is the gap between 1.0 and the next representable double: any increment much smaller than roughly 2.22 times 10 to the power of -16 is rounded away when added to 1.0, leaving the value unchanged.
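To see what this number means in practice, here is a short sketch (assuming NumPy is installed) that adds eps and half of eps to 1.0:

```python
import numpy as np

eps = np.finfo(float).eps

# eps is the smallest increment that still changes 1.0:
print(1.0 + eps == 1.0)      # False: the sum rounds to the next double
print(1.0 + eps / 2 == 1.0)  # True: half of eps is rounded away
```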
import numpy as np

# Define a value and machine epsilon
a = 1.0
eps = np.finfo(float).eps

# Check for equality with a small tolerance
if abs(1.0 + eps - a) <= eps:
    print("Values are considered equal within machine epsilon")
else:
    print("Values differ by more than machine epsilon")
This code defines a value a and retrieves the machine epsilon with np.finfo. It then checks whether the difference between 1.0 + eps and a is within eps, treating the two values as equal when it is. This accounts for the machine epsilon when comparing for equality, instead of relying on exact comparison with ==.
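For everyday comparisons, the standard library's math.isclose is often the more idiomatic tool; unlike a fixed eps threshold, it uses a relative tolerance by default. A brief sketch:

```python
import math

# math.isclose defaults to a relative tolerance (rel_tol=1e-09),
# which scales with the magnitude of the inputs
print(math.isclose(1.0 + 1e-10, 1.0))  # True: within the default rel_tol
print(math.isclose(1.0 + 1e-8, 1.0))   # False: outside the default rel_tol
```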
Avoiding division by small numbers:
import numpy as np
def safe_division(x, y):
    # Check if the denominator is very close to zero (within machine epsilon)
    if abs(y) < np.finfo(float).eps:
        return 0  # Handle a near-zero denominator safely (e.g., return 0)
    else:
        return x / y
# Example usage
result = safe_division(1.0, 1e-15)
print(result)
This code defines a function safe_division
that checks whether the denominator y
is within machine epsilon of zero. If so, it returns a safe fallback value (e.g., 0) instead of dividing; otherwise it performs the normal division. Note that 1e-15 is still several times larger than machine epsilon, so the example call performs the division rather than returning the fallback.
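The same guard can be applied elementwise. The helper below, safe_division_vec, is a hypothetical vectorized variant sketched for NumPy arrays (the name and fill parameter are illustrative, not from the original):

```python
import numpy as np

def safe_division_vec(x, y, fill=0.0):
    # Hypothetical vectorized variant: substitute `fill` wherever the
    # denominator is within machine epsilon of zero
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    near_zero = np.abs(y) < np.finfo(float).eps
    # Divide by 1.0 in the masked slots to avoid warnings, then overwrite
    return np.where(near_zero, fill, x / np.where(near_zero, 1.0, y))

print(safe_division_vec([1.0, 2.0], [0.0, 4.0]))  # [0.  0.5]
```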
Absolute difference considering machine epsilon:
import numpy as np
def approx_equal(x, y):
    eps = np.finfo(float).eps
    return abs(x - y) < eps
# Example usage: 0.1 + 0.2 is not exactly 0.3, but the difference
# is smaller than machine epsilon
if approx_equal(0.1 + 0.2, 0.3):
    print("Values are approximately equal within machine epsilon")
This code defines a function approx_equal
that takes two numbers and checks whether their absolute difference is less than machine epsilon. This is useful for comparing floating-point numbers where exact equality is unattainable due to precision limitations. Keep in mind that an absolute eps tolerance only makes sense for values near 1.0; for larger magnitudes the tolerance should be scaled accordingly.
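To illustrate that scaling, here is a hypothetical relative-tolerance helper, approx_equal_rel (the name and the slack factor n_eps are illustrative choices, not from the original):

```python
import numpy as np

def approx_equal_rel(x, y, n_eps=4.0):
    # Hypothetical helper: scale the tolerance by the operands' magnitude,
    # allowing a few eps of slack (n_eps=4.0 is an illustrative choice)
    tol = n_eps * np.finfo(float).eps * max(abs(x), abs(y))
    return abs(x - y) <= tol

a = 1e10
b = np.nextafter(a, np.inf)  # the very next representable double after 1e10
print(abs(a - b) < np.finfo(float).eps)  # False: absolute eps test rejects even adjacent doubles
print(approx_equal_rel(a, b))            # True: relative test accepts them
```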
Iterative halving:
This method exploits the definition of machine epsilon itself. It starts with eps
equal to 1.0 and repeatedly halves it until adding it to 1.0 no longer produces a different value; the last value that still made a difference is the machine epsilon.
def calc_epsilon():
    eps = 1.0
    while 1.0 + eps != 1.0:
        eps /= 2.0
    # The loop exits one halving too far, so scale back up
    return eps * 2.0
# Example usage
machine_epsilon = calc_epsilon()
print(machine_epsilon)
Utilizing the Unit in the Last Place (ULP):
The Unit in the Last Place (ULP) of a number is the spacing between it and the next representable value. The ULP of 1.0 is exactly the machine epsilon for the floating-point format being used.
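Before hand-rolling the computation, note that recent Pythons expose this directly: math.ulp exists on Python 3.9+, and NumPy offers np.spacing. A quick check, assuming those versions are available:

```python
import math
import numpy as np

# math.ulp (Python 3.9+) returns the ULP of its argument; np.spacing is
# NumPy's equivalent. For 1.0 both match machine epsilon.
print(math.ulp(1.0))                           # 2.220446049250313e-16
print(math.ulp(1.0) == np.finfo(float).eps)    # True
print(np.spacing(1.0) == np.finfo(float).eps)  # True
```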
Here's an approach using the frexp
and ldexp functions from the standard math module:
import math

def calc_epsilon_ulp():
    # Decompose 1.0 into mantissa and exponent: frexp(1.0) == (0.5, 1)
    mantissa, exponent = math.frexp(1.0)
    # A double carries 53 bits of mantissa, so the ULP of 1.0 is 2**(exponent - 53)
    return math.ldexp(1.0, exponent - 53)
# Example usage
machine_epsilon = calc_epsilon_ulp()
print(machine_epsilon)
Important points to consider:
- The iterative approach is slower than simply reading np.finfo(float).eps.
- The ULP approach relies on math.frexp and math.ldexp, which are standard; Python 3.9+ also provides math.ulp directly.
- Both methods reflect the specific floating-point format your system uses (typically IEEE 754 double precision) and might not be universally portable.
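The last point can be made concrete by comparing formats: the same finfo query for single precision yields a much coarser epsilon than for double precision (a sketch assuming NumPy):

```python
import numpy as np

# Epsilon depends on the floating-point format, not on the language:
print(np.finfo(np.float32).eps)  # ~1.19e-07 (2**-23)
print(np.finfo(np.float64).eps)  # ~2.22e-16 (2**-52)
```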