Floating-point accuracy visualization example: the ULP error for each math function that ships with Julia

Under construction: recalculating and moving the post to Discourse.

Introduction

How about visualizing the discrepancy between the exact value of a univariate real function, such as the sine, and the approximate value returned by a call such as sin(0.3)? By putting an interval within the function’s domain on the x-axis of a plot, and the size of this discrepancy, the error, on the y-axis, it might be possible to tell on which parts of the function’s domain the implementation is less accurate than on others. That is, it might be possible to read the worst-case regions for the approximation off the plot.
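
To make the quantity concrete, here is a minimal sketch of how such an error could be computed, with a high-precision BigFloat evaluation standing in for the exact value. The helper name ulp_error and the 256-bit working precision are illustrative choices of mine, not taken from the app:

setprecision(BigFloat, 256)  # generous headroom for the reference values

# ULP error of the Float64 method of `f` at `x`, measured against a
# BigFloat reference (hypothetical helper, for illustration only)
function ulp_error(f, x::Float64)
    exact = f(big(x))               # reference value, computed via MPFR
    approx = big(f(x))              # the Float64 result under test, widened
    ulp = big(eps(Float64(exact)))  # size of one ULP near the exact value
    Float64(abs(approx - exact) / ulp)
end

ulp_error(sin, 0.3)  # below 0.5 whenever sin is correctly rounded at 0.3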

I choose the error measured in units in the last place/units of least precision (ULPs) as the type of error to visualize here. Other commonly used types of error are absolute error, which is not usually appropriate for floating-point numbers, and relative error, which has these drawbacks compared to measuring the error in ULPs:

  • it is usually less immediately useful and less intuitive

  • plotting software tends to have trouble with the tiny values

One advantage of the error in ULPs is that it has some convenient and easy interpretations in the context of floating-point numbers. For example:

  • If the error between the exact value and the approximate (floating-point) value is less than half an ULP, the approximate value is the closest to the exact value among the numbers representable in that floating-point format. The technical term is correctly rounded.

  • If the error is merely less than one ULP, the approximate value is one of the two floating-point numbers closest to the exact value. The technical term is faithfully rounded (although faithful rounding is not technically a rounding).
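
Expressed in code, using the ulp_error sketch from above (again, hypothetical helper names, not the app's API):

# thresholds on the ULP error corresponding to the two terms above
is_correctly_rounded(f, x)  = ulp_error(f, x) < 0.5  # the nearest representable value
is_faithfully_rounded(f, x) = ulp_error(f, x) < 1.0  # one of the two bracketing values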

If we tried to, say, evaluate the error in ULPs on an evenly spaced grid over the chosen interval and plot each evaluation as a data point, the plot would just look like noise. However, given that the worst cases are what is most interesting, it might be possible to achieve a comprehensible visualization by giving up on visualizing the best cases. In other words, we’re interested in the upward spikes of the error, and eliding the downward spikes should make the plot a lot less noisy.

To accomplish this, the approach I choose here is vaguely similar to downsampling/decimation from signal processing: take n values, where n is some large positive integer, and represent them on the plot by an aggregate value, in this case the maximum of the n points. In signal processing it is also common to apply a lowpass filter before the decimation itself, to reduce the noise/high-frequency components of the signal and thereby prevent artifacts in the output. Here, a sliding-window maximum seems like an appropriate stand-in for the lowpass filter to smooth out the data before decimation.
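
Here is a sketch of that smoothing-then-decimation pipeline; the window length and block size are illustrative, not the app's actual settings:

# naive sliding-window maximum (the "lowpass" step); O(n*k) is fine for
# a sketch, though a monotone-deque version would run in linear time
sliding_max(v::AbstractVector, k::Integer) =
    [maximum(@view v[i:i+k-1]) for i in 1:length(v)-k+1]

# decimation: represent each complete block of `n` samples by its maximum
decimate_max(v::AbstractVector, n::Integer) =
    [maximum(@view v[(i-1)*n+1:i*n]) for i in 1:length(v)÷n]

errs = rand(10_000)  # stand-in for the per-point ULP errors on the grid
ys = decimate_max(sliding_max(errs, 32), 100)  # the values to plot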

Julia app on GitHub

The app used for creating these visualizations is published on GitHub:

NB:

  • Neither the package nor the app it exposes is registered as of this writing.

  • I should probably move the Git repository from my personal namespace to the JuliaMath organization on GitHub.

The plots

These plots might be useful to both contributors and users of Julia for understanding where there’s room for improvement in the current implementations. For some functions, though, the worst-case error spikes are difficult or impossible to fix efficiently without reaching for an arbitrary-precision implementation like MPFR (BigFloat) or Arb.

[ULP error plots, one per function: acos, acosd, acosh, acot, acotd, acoth, acsc, acscd, acsch, asec, asecd, asech, asin, asind, asinh, atan, atand, atanh, cos, cosc, cosd, cosh, cospi, cot, cotd, coth, csc, cscd, csch, deg2rad, exp, exp10, exp2, expm1, log, log10, log1p, log2, rad2deg, sec, secd, sech, sin, sinc, sind, sinh, sinpi, tan, tand, tanh, tanpi]

Miscellaneous

Connected PRs

Version, platform info

julia> versioninfo()
Julia Version 1.14.0-DEV.1661
Commit 78e95d2e67f (2026-02-01 01:39 UTC)
Build Info:
  Official https://julialang.org release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 8 × AMD Ryzen 3 5300U with Radeon Graphics
  WORD_SIZE: 64
  LLVM: libLLVM-20.1.8 (ORCJIT, znver2)
  GC: Built with stock GC

Nice! But the plots are hard to see due to the low contrast, low resolution, and small text. If it’s not too much trouble, an SVG (or higher-resolution) version with better sizing and colors would be appreciated!



Interesting, thank you for sharing.
We recently had a conversation about ULPs over at IntervalArithmetic.
We used to have a rounding mode based on prevfloat/nextfloat (fast, not tight, but GPU-compatible), though we ultimately pulled the plug because determining the ULPs turned out to be tricky and architecture-dependent.
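
For concreteness, a sketch of that approach (my illustration, not IntervalArithmetic.jl's actual code): widen the computed result by one float in each direction. The resulting interval is guaranteed to contain the true value only if the implementation is faithfully rounded, i.e. within 1 ULP:

rough_enclosure(f, x) = (prevfloat(f(x)), nextfloat(f(x)))

rough_enclosure(sin, 0.3)  # two floats wide; encloses the true sin(0.3)
                           # only if sin is faithfully rounded at 0.3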

I am not an expert on this subject, though Julia itself provides implementations of some functions:

  1. Where can we find the list of implemented functions?

  2. For these functions, this means that the ULP error is architecture-independent, correct?

  3. Do any of these implementations provide proven bounds on the ULP error?
    I would not expect correct rounding in general, but even a pessimistic, certified upper bound on the ULP would already be very useful for our purposes.
    Maybe one day we can even get an implementation of CORE-Math functions.