Testing and tolerances for implicitly defined functions and their AD rules

To fix ideas, suppose we have a function x = g(p) defined implicitly by f(g(p), p) = 0, where both x and p are scalars.
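
For concreteness, here is a toy instance with a known closed form that I will use in the sketches below (the specific f is my choice and not essential):

```julia
# Toy instance: f(x, p) = x^2 - p, so for p > 0 we have g(p) = sqrt(p) and
# g'(p) = 1 / (2 * sqrt(p)) exactly, which gives a ground truth to test against.
f(x, p) = x^2 - p
g_exact(p) = sqrt(p)
dg_exact(p) = 1 / (2 * sqrt(p))
```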

The numerical implementation involves finding an x such that |f(x, p)| \le \mathrm{tol}, using either Newton's method or Brent's method, depending on circumstances.
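
A minimal sketch of the Newton variant, hard-coded to the toy f above (the Brent fallback is omitted, and `solve_x`, `x0`, `maxiter` are just illustrative names):

```julia
# Newton iteration that stops as soon as |f(x, p)| <= tol, mirroring the
# convergence criterion described above.
function solve_x(p; x0 = 1.0, tol = 1e-8, maxiter = 100)
    x = x0
    for _ in 1:maxiter
        fx = f(x, p)
        abs(fx) <= tol && return x
        x -= fx / (2x)   # ∂f/∂x = 2x for the toy f
    end
    error("Newton did not converge")
end
```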

Once x is found, the implicit function theorem gives the derivative x'(p) = -(\partial f/\partial p)/(\partial f/\partial x), which can be readily calculated and incorporated into an AD rule that I want to unit test.
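
The rule I have in mind looks roughly like this ChainRulesCore sketch, with \partial f/\partial x and \partial f/\partial p hard-coded for the toy f:

```julia
using ChainRulesCore

# Reverse rule: the pullback multiplies the incoming cotangent by
# x'(p) = -(∂f/∂p)/(∂f/∂x), evaluated at the computed solution.
function ChainRulesCore.rrule(::typeof(solve_x), p)
    x = solve_x(p)
    dfdx = 2x      # ∂f/∂x for the toy f
    dfdp = -1.0    # ∂f/∂p for the toy f
    dxdp = -dfdp / dfdx
    solve_x_pullback(x̄) = (NoTangent(), x̄ * dxdp)
    return x, solve_x_pullback
end
```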

I am running into two issues:

  1. Most test frameworks use the excellent FiniteDifferences.jl, so one can pass in a finite difference method for checking the derivative. However, I am unsure how to parametrize it: there is a `factor = ...` keyword argument, but it refers to relative noise, whereas both the absolute and the relative error of these calculations depend on p and on the solver tolerance. Is there anything else I should be doing? (A sketch of what this looks like for me follows the list.)

  2. When it comes to unit testing, it is unclear how to choose tolerances. In practice I have found that some tests fail unless I use loose tolerances (think rtol = 0.05 or so), but then I worry that genuine errors in my code are passing silently. (The second sketch below illustrates this.)
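
To make issue 1 concrete, this is roughly how I wire the finite difference method into the test, using ChainRulesTestUtils and the toy problem above; the `factor` value here is a guess, which is exactly what I don't know how to choose:

```julia
using ChainRulesTestUtils, FiniteDifferences

# Compare the rrule above against finite differences. How should `factor`
# (and rtol/atol) be derived from p and the solver tolerance?
test_rrule(solve_x, 2.0;
           fdm = central_fdm(5, 1; factor = 1e6),
           rtol = 1e-3, atol = 1e-6)
```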
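And here is issue 2 in miniature, on the toy problem where the exact derivative is known. The hand-derived atol below comes from propagating the solver guarantee |f(x, p)| \le \mathrm{tol} through the derivative formula; the loose rtol is what I actually end up using:

```julia
using Test

# The solver only guarantees |f(x, p)| <= tol, so x may be off by about
# tol / |∂f/∂x| = tol / (2x); propagating that through x'(p) = 1/(2x)
# bounds the derivative error by roughly tol / (4x^3).
tol = 1e-6
p = 2.0
x = solve_x(p; tol = tol)
dx_ad = 1 / (2x)                        # the IFT value at the computed x
atol_derived = tol / (4 * abs(x)^3)     # hand-derived worst-case error
@test isapprox(dx_ad, dg_exact(p); atol = atol_derived)
@test isapprox(dx_ad, dg_exact(p); rtol = 0.05)   # loose, but what I end up with
```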