How to test if a function is continuous

I am writing some tests for a package (not yet public) where I want to test that a function f(x::Real)::Real is continuous at some x.

Is there a numerical way of doing this?

A solution I thought of is finding the left- and right limits with Richardson extrapolation, as in

using Richardson

function diff_at_x(f, x, h; kwargs...)
    # left and right limits of f at x via Richardson extrapolation, with error estimates
    f_left, f_left_err = extrapolate(f, -h; x0 = x, kwargs...)
    f_right, f_right_err = extrapolate(f, h; x0 = x, kwargs...)
    # size of the jump relative to the error bounds; roughly, < 1 is OK as the error estimates are conservative
    abs(f_left - f_right) / (abs(f_left_err) + abs(f_right_err) + eps(x))
end

julia> diff_at_x(abs, 0.0, 0.1)
0.0

julia> diff_at_x(x -> x > 0 ? sin(x) : sin(x) - 1, 0.0, 0.1)
2.7536530891901533e12

Can you autodiff through it? A similar idea to what you’re doing now would be to compare ForwardDiff.derivative(f, x+eps(x)) and ForwardDiff.derivative(f, x-eps(x)) or something like that.
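
For example, a rough sketch of that comparison (the helper name kink_at is made up here, and as the later replies point out, this really probes the one-sided derivatives, i.e. differentiability rather than continuity):

using ForwardDiff

# compare the one-sided derivatives just to the left and right of x
function kink_at(f, x)
    dplus  = ForwardDiff.derivative(f, x + eps(x))
    dminus = ForwardDiff.derivative(f, x - eps(x))
    !isapprox(dplus, dminus)
end

kink_at(abs, 0.0)   # true: the slopes -1 and +1 disagree
kink_at(sin, 0.0)   # false: cos(x) ≈ 1 on both sides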

I’m not an expert, but how about using an FFT? Shouldn’t a discontinuous function give a sharp peak in an FFT, due to a sudden change in frequency?

You need a whole lot of frequencies to approximate a discontinuity, so it wouldn’t be a peak. Explaining it as a sudden change in frequency, whatever that means, seems questionable at best. Which frequency would suddenly change to which if you have f(x) = sign(x)?

If the function is sufficiently well behaved, you probably don’t need to extrapolate? i.e. just compare f(nextfloat(x)) and f(prevfloat(x))?

Most AD tools silently take one-sided derivatives at discontinuities and kinks, e.g.

julia> using ForwardDiff

julia> ForwardDiff.derivative(x -> x > 0 ? 1 : 0, 0)
0

julia> ForwardDiff.derivative(abs, 0)
1

That’s checking whether the (one-sided) derivative is continuous, not the function itself?

Fair points, that’s why I said I’m not an expert on FFT. :slight_smile: You’re right that a sudden change of slope is not a change of frequency. I guess it is more like an impulse, which contains all frequencies. Thanks for the clarification.

I guess I was thinking along the lines of those artifacts (peaks) that occur when you take the Fourier series of a square wave. But I guess the derivative is a better method, e.g. the derivative of the Heaviside step function can be understood as the Dirac delta function at the step.

EDIT: I was just trying to brainstorm but I guess it was not that helpful…

Also, in order to calculate an FFT (DFT) you’d need to sample the function’s outputs at a number of points in the domain around the particular value. A discontinuity anywhere in that sampled region, not just at the desired point, would probably be indistinguishable.

Note that you wouldn’t use a DFT or Fourier series directly, because that implies periodic boundaries (so a generic non-periodic function will effectively always have “discontinuities” at the endpoints from the perspective of the DFT). If you have a function on an arbitrary finite interval x \in [a,b] and you want to detect discontinuities and other singularities automatically (e.g. at unknown positions), you could expand in Chebyshev polynomials (e.g. via ApproxFun.jl or FastChebInterp.jl … these use FFTs internally because they are related to a DFT by a change of variables), and look at the convergence rate of the Chebyshev coefficients c_n — if it is converging as only \sim 1/n asymptotically, that indicates a discontinuity somewhere in the interior (a,b) of the domain. (More generally, slower-than-exponential asymptotic convergence indicates some non-analyticity.)
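
For example, a rough sketch of the coefficient-decay check with ApproxFun.jl (assuming its Fun(f, space, ncoefficients) constructor and the coefficients accessor; the magnitudes in the comments are only rough orders of magnitude):

using ApproxFun

# magnitudes of the first n Chebyshev coefficients of f on [-1, 1] (the default domain)
cheb_coefs(f; n = 200) = abs.(coefficients(Fun(f, Chebyshev(), n)))

maximum(cheb_coefs(exp)[101:end])    # ≲ 1e-15 (roundoff level): smooth, exponential decay
maximum(cheb_coefs(sign)[101:end])   # ~ 1e-2, i.e. only ~ 1/n: a jump inside (-1, 1)
maximum(cheb_coefs(abs)[101:end])    # ~ 1e-4, i.e. ~ 1/n^2: a kink inside (-1, 1)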

If you want to locate unknown discontinuities/singularities in an interval [a,b] (if any), there are various recursive search algorithms, e.g. the algorithm used by Chebfun with the "splitting on" option, which is described in the article "Piecewise smooth chebfuns" (2010).

Their findjump code (free/open-source under the 3-clause BSD license) is pretty short and wouldn’t be too hard to re-implement in Julia.
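
Even a naive bisection does the job when the jump dominates any smooth variation of f (this is just a sketch, emphatically not Chebfun's findjump algorithm):

# assume f has at most one jump bigger than tol in [a, b]; narrow the bracket
function locate_jump(f, a, b; tol)
    fa, fb = f(a), f(b)
    abs(fb - fa) > tol || return nothing    # no jump of the requested size
    while nextfloat(a) < b
        m = (a + b) / 2
        (a < m < b) || break                # guard against midpoint rounding
        fm = f(m)
        if abs(fm - fa) > abs(fb - fm)
            b, fb = m, fm                   # bigger change on the left: jump is there
        else
            a, fa = m, fm                   # otherwise keep the right half
        end
    end
    return (a, b)                           # the jump lies within [a, b]
end

locate_jump(sign, -1.0, 2.0; tol = 0.5)     # tiny bracket around 0.0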

Oh geez, you’re right. I read that too quickly and thought Tamas was asking for a test of differentiability (where the point was to compare the two one-sided derivatives), not continuity. Sorry guys.

I thought about this but could not figure out a reasonable cutoff. E.g. consider

function f(x; A)
    z = x - 1
    # kink at x = 1: the slope jumps from 2A (left) to A (right)
    z > 0 ? A * z : 2 * A * z
end

which is definitely continuous at x = 1, but the difference between the f(1 \pm \epsilon) values will depend on A, which can be arbitrarily large.

I should have emphasized that the locations of the possible discontinuities are known. No need to search for them.

Thanks for all the answers. I will probably go with Richardson extrapolation as it is fairly straightforward, and the cost does not matter since this only runs in CI tests.

The way to go is more symbolic: as in AD, a tape of the operations the function performs on a "dual variable"-like type would collect the points of potential discontinuity and the left/right/middle values there.

TL;DR: copy the AD logic and apply it to this problem, but prevfloat/nextfloat is a good approximation.

If there is a piecewise-linear function implementation in some package, it could also serve as a guide (I'm not familiar with one).

I may be missing something, but I don’t see how that can be done. Is there a paper I can read about this?

It isn't trivial, and I'm not aware of a paper. The idea would be to extend a Value into a (Value, Dir), where Dir is a "direction": +, -, or 0. Most calculations would pass straight through and operate on Value, except comparisons like <, <=, ==, > which would treat Dir as an infinitesimal offset. Thus (3, +) <= 3 would be false.
With this setup, testing the continuity of f at 3 amounts to checking f(3+) == f(3) == f(3-). Some other discontinuous functions would also need special definitions, such as round, sign, etc…

ADDED:
This “etc…” above does a lot of work. For example -(3,+) would be (-3,-).

I would wager that interval arithmetic (IEEE 1788) is the correct tool for this job. It defines the DAC decoration which confirms that a function is ‘defined and continuous’ on an interval (which can be restricted to just a single real number).

Also, without decorations one could easily imagine a numerical epsilon-delta test for continuity: as you shrink the input interval around a suspect point of your function, the result interval should also shrink at the appropriate rate.
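
A rough sketch with IntervalArithmetic.jl (this uses the older DecoratedInterval / decoration names; recent versions have reorganized the constructors, so adjust accordingly):

using IntervalArithmetic

X = DecoratedInterval(-0.1, 0.1)   # a small interval around the suspect point

decoration(sin(X))   # com: sin is defined, continuous and bounded on X
decoration(1 / X)    # trv: 1/x is not defined (let alone continuous) on all of X

# a decoration of dac or better certifies "defined and continuous" on X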

Is your setting that your function is stitched together from different definitions at certain nodes, like e.g. a spline? And you now want to do a quick plausibility check that everything matches up?

For a “mathematical / extensional” function defined on floating point numbers, i.e. a blackbox, there is not a lot more to do than comparing f(prevfloat(p)), f(p) and f(nextfloat(p)) to some cutoff (which may depend on f(p)). What cutoff is appropriate depends on your domain.

For a “mathematical / extensional” function defined on real numbers, I have a halting-problem oracle to sell you. Jokes aside, that doesn’t work :frowning:

For an AD-able / interval-math-able implementation of a function, there might be some clever definitions and tests. These tests may depend on the implementation of the function, not just its input-output behavior, as with all such approaches (you can implement f(x)=x as a lookup table on Float32, but that won’t AD).

Yes.

I am finding Richardson extrapolation easier, especially because the cutoff is easier to determine.

BTW, I think of Richardson extrapolation as the equivalent of AD for taking limits numerically, and it has a broad range of applications.

Consider the following exercise: numerically verify that

\lim_{x \to \infty} \left[\log(1 + \exp(x)) - x\right] = 0

Solution:

julia> using Richardson, LogExpFunctions

julia> to_inf(x) = 1/x
to_inf (generic function with 1 method)

julia> extrapolate((x -> log1pexp(x) - x) ∘ to_inf, 0.01)
(0.0, 0.0)

AD is exact in exact arithmetic (i.e. in the absence of roundoff error). Richardson extrapolation (with a finite number of iterations) is not. So, they seem qualitatively different to me.

Richardson is black-box: you apply it to a function, i.e. a relation.

AD applies to something like a circuit that computes a function, and outputs a circuit that computes / approximates the derivative. The same holds for interval math.