Gentler transformation from real to positive

I am solving for the equilibrium prices p > 0 of an economic model. I need to do this robustly for “wacky” parameter values, since I am fitting the model to data in an outer loop.

I have coded up a market-clearing residual function market_residuals(model, p).

I found that I can do this by, roughly,

using NLsolve

NLsolve.nlsolve(x0; autodiff = :forward, iterations = 200,
                method = :newton, ftol = residual_tol) do x
    p = exp.(x)                    # map unconstrained x to p > 0
    market_residuals(model, p)
end

The problem is that the Newton solver can very easily step to arguments that overflow or underflow once transformed:

julia> exp(750)
Inf

julia> exp(-750)
0.0

Scaling x does not help, as Newton’s method is scale invariant.
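To see why, in one dimension: if you rescale y = cx and solve g(y) = f(y/c) instead, then g'(y) = f'(y/c)/c, so the Newton step starting from y_k = c\,x_k is

y_{k+1} = y_k - \frac{g(y_k)}{g'(y_k)} = c\,x_k - c\,\frac{f(x_k)}{f'(x_k)} = c\,x_{k+1},

i.e. the rescaled iterates are exactly the old ones scaled by c, and exp.(y/c) overflows at exactly the same iteration.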

Is there a “gentler” transformation from \mathbb{R} to \mathbb{R}_+ that one could recommend?

Maybe I don’t understand something, but couldn’t your function operate on the logarithm of the price, so you could simply skip the call to exp?

That just puts the numerical problem somewhere else.

Maybe a trick is to use the identity (plus some offset) on positive numbers, and on negative numbers continue with something that plateaus, in such a way that the “junction” is still differentiable. For example the following:

f(x) = \begin{cases} x + \frac{\pi}{2} & \text{if } x > 0 \\ \arctan(x) + \frac{\pi}{2} & \text{otherwise.} \end{cases}

It should still be continuously differentiable: both branches equal \frac{\pi}{2} at 0, and the derivative of \arctan at 0 is 1, matching the identity’s slope.
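For instance, a quick sketch of this transform in Julia (the name real_to_pos is mine, not from the post):

# Shifted identity on the right, arctan plateau on the left; both branches
# equal π/2 at 0 with slope 1, so the junction is C¹. The negative tail
# decays like 1/|x|, far gentler than exp (in Float64 it only rounds to 0.0
# once x ≲ -1e16, versus -746 for exp).
real_to_pos(x) = x > 0 ? x + π / 2 : atan(x) + π / 2

real_to_pos(750.0)    # 750.0 + π/2, no overflow
real_to_pos(-750.0)   # ≈ 1/750 ≈ 0.00133, still comfortably positive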

p²?

I should have specified: I need a bijection (continuous, monotone; it does not need to be continuously differentiable, but that would be nice, too).

The sigmoid takes you from \mathbb{R} to the interval (0, 1). Perhaps useful?

How would you calculate the sigmoid without exp(x)?

You can calculate it using either exp(x) or exp(-x), so you can choose whichever of the two works. But if one is too big, I guess the other one would be too small…
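The standard workaround is to branch on the sign so that exp only ever sees a non-positive argument; a sketch (and note the sigmoid maps to (0, 1), so prices would still need rescaling):

# Overflow-free sigmoid: the exponential's argument is always ≤ 0.
# Underflow to exactly 0.0 for x ≲ -745 remains, as suspected above.
stable_sigmoid(x) = x >= 0 ? 1 / (1 + exp(-x)) : exp(x) / (1 + exp(x))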

How about a hyperbola:

f(x, slope, abscissa) = slope/2 * x + sqrt(slope^2/4 * x^2 + abscissa^2)

Increasing abscissa should move the x where f(x) ≈ 0 to smaller values.
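One caveat: evaluated exactly as written, the two terms nearly cancel for large negative x, and slope^2/4 * x^2 can overflow. A sketch of a more careful evaluation, rationalizing the negative branch and using hypot (the same function in exact arithmetic):

# For x < 0, t + r is rewritten as abscissa^2 / (r - t), a quotient of two
# well-conditioned positive quantities, since r^2 - t^2 == abscissa^2.
function hyperbola(x, slope, abscissa)
    t = slope / 2 * x
    r = hypot(t, abscissa)    # sqrt(t^2 + abscissa^2) without overflow
    return x >= 0 ? t + r : abscissa^2 / (r - t)
end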

Why? Finding a root of f(p²) means that every root is doubled (it appears at both ±p), but if Newton converges to a negative root you can just flip the sign.
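Concretely, reusing the setup from the original post (a sketch with the same assumed names):

using NLsolve

sol = nlsolve(x0; autodiff = :forward, iterations = 200,
              method = :newton, ftol = residual_tol) do x
    market_residuals(model, x .^ 2)   # candidate prices x.^2 are never negative
end
p = sol.zero .^ 2   # same positive price whether x or -x was found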

What about using log(1 + exp(x))? It is monotonic, smooth, and never blows up.

It still overflows because of the intermediate exp(x) calculation: log(1+exp(710)) === Inf. It’s also subject to underflow: log(1+exp(-37)) == 0.0. It would be better to use log1p(exp(x)), but even that eventually underflows: log1p(exp(-746)) == 0.0. You could define your own special-function implementation for f(x) = log1p(exp(x)), rewritten to avoid spurious overflow — for example, f(x) = x > 0 ? x + log1p(exp(-x)) : log1p(exp(x)) suffices and is equivalent to log(1+exp(x)) in exact arithmetic — but the underflow is unavoidable since f(-746) ≈ 1.03e-324 is not representable as a Float64.
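Spelled out as code, the rewrite suggested above (a direct transcription of the formula, not a tested library routine):

softplus(x) = x > 0 ? x + log1p(exp(-x)) : log1p(exp(x))

softplus(750.0)    # 750.0, where log(1 + exp(750)) would give Inf
softplus(-750.0)   # 0.0: the unavoidable Float64 underflow discussed above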

I still don’t understand why you need a bijection in a root-finding context, e.g. why x -> x^2 is not acceptable, since you can flip a negative solution to a positive one a posteriori as I commented above.

Translating by the minimum element of the set?

If the error is only happening inside the Newton solver (i.e. it is not at the actual market-clearing price), could you do a small check beforehand to bound the admissible set of x, and then use a bounded solver instead?
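For instance, NLsolve itself ships a box-constrained solver, mcpsolve, which solves a mixed complementarity problem and reduces to plain root-finding when the solution lies strictly inside the box. A sketch, where the bounds and the starting point p0 are illustrative assumptions, not from the original post:

using NLsolve

lower = fill(1e-6, length(p0))   # hypothetical lower bounds on admissible prices
upper = fill(1e6, length(p0))    # hypothetical upper bounds
residual!(F, p) = (F .= market_residuals(model, p))
sol = mcpsolve(residual!, lower, upper, p0; autodiff = :forward)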

I usually try to do these domain transformations with bijections in Bayesian inference, but I realized that here there is no compelling reason for that, so I will experiment with your suggestion. Thanks for bringing it up again.
