[ANN] InverseGammaFunction.jl (not registered)

This package provides a function to compute the inverse of the gamma function for positive (real) arguments. It supports both branches (on either side of the minimum of \Gamma near x \approx 1.4616).

It is a very quick implementation based on root finding. I don’t plan to register it, and at the moment I don’t plan further development.

For a very quick improvement, you might want to use the Bessels.jl gamma implementation.
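
For example, the swap might be as simple as changing the import (a sketch; I’m assuming the current code takes \Gamma from SpecialFunctions and that Bessels.jl’s function is named gamma):

using Bessels: gamma   # instead of: using SpecialFunctions: gamma

gamma(5.0)             # 24.0, same value as SpecialFunctions' gamma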

Done.

I hadn’t seen that, or had forgotten about it. This might also make the package compatible with AD.

Another improvement would be to use the fact that we have the analytical derivative \Gamma'(x) = \Gamma(x) \psi^{(0)}(x) (in terms of the digamma function \psi^{(0)}), which allows you to use Newton’s method. A Newton step for solving \log \Gamma(x) - y = 0 for x, given y (the logarithm of the desired Gamma function value), is

using SpecialFunctions
x -= (loggamma(x) - y) / digamma(x)

(Framing it as the inverse loggamma function gives you much bigger dynamic range for positive arguments.)
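
For concreteness, a minimal sketch of the full iteration on the increasing branch (the function name, tolerance, and iteration cap are my own choices, not from the package):

using SpecialFunctions

# Solve loggamma(x) == y for x by Newton's method, starting from x0.
# Intended for the increasing branch x > 1.4616..., where loggamma is monotonic.
function invloggamma_newton(y, x0; tol = 1e-12, maxiter = 100)
    x = float(x0)
    for _ in 1:maxiter
        step = (loggamma(x) - y) / digamma(x)
        x -= step
        abs(step) <= tol * abs(x) && return x
    end
    error("Newton iteration did not converge")
end

invloggamma_newton(loggamma(10.0), 5.0)  # ≈ 10.0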

A root-finding method is presumably slow. Why not try the method in this PDF?

https://tminka.github.io/papers/minka-gamma.pdf

That paper also describes root finding via a variation on Newton’s method. Though it’s not exactly inverting \Gamma(x)?

That’s probably a good idea. I saw that SpecialFunctions has the digamma function.

I have not done this for quite a while, but my recollection is that if you have a derivative, it usually pays to use it. I was trying to find out whether it’s worth it here just by reading the docs of Roots.jl. They say that the extra function evaluations sometimes make derivative-based methods not worth it. (I was hoping there might be some way to compute gamma at a point and reuse that value for both the function and the derivative, rather than computing gamma once for each.) The docs even seem to suggest that all of the derivative-based methods are “legacy” methods (!)

OTOH, if x -= (loggamma(x) - y) / digamma(x) is the entire Newton step, it certainly looks worth trying.
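
For what it’s worth, the same solve through Roots.jl looks like this if you pass the derivative explicitly (a sketch, using the find_zero((f, f'), x0, Roots.Newton()) interface; the starting point 5.0 is arbitrary):

using Roots, SpecialFunctions

y = loggamma(10.0)
f(x) = loggamma(x) - y
# digamma is exactly f', so passing it gives a true Newton iteration.
find_zero((f, digamma), 5.0, Roots.Newton())  # ≈ 10.0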

The idea is that the generalized Newton method is efficient for this (similar) kind of situation.

Yup. (Newton’s method is so simple that I would typically just implement it directly rather than calling a package like Roots.jl.)

The other trick is getting a decent initial guess, e.g. from some simple Stirling-like approximation. Even just x = y + 2 as the initial guess for the loggamma argument seems to converge within 5 iterations over most of the domain (for the x > 1.4616\cdots branch), except right at the local minimum at the beginning of the branch, where root-finding methods struggle because the slope goes to zero. Close to the minimum x = 1.4616\cdots you could employ a specialized method, e.g. one based on a Taylor expansion around that point, as sketched below.
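
For example, such a Taylor-based fallback could look like this (a sketch; xmin is the location of the minimum, and the names are my own):

using SpecialFunctions

const xmin = 1.4616321449683623   # digamma(xmin) ≈ 0: the minimum of loggamma
const ymin = loggamma(xmin)

# Near the minimum, loggamma(x) ≈ ymin + trigamma(xmin)/2 * (x - xmin)^2,
# so inverting the quadratic gives, on the increasing branch:
invloggamma_near_min(y) = xmin + sqrt(2 * (y - ymin) / trigamma(xmin))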

Newton’s method also has the advantage that it doesn’t require an explicit bracket for the root — you can take advantage of the known monotonicity of the \Gamma function (in each of the two positive branches) to guarantee convergence from any starting point (in the branch). This is especially important if you want to invert loggamma because the domain is not limited by overflow.
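
In practice you can still guard each step so that iterates never leave the branch. A sketch (the bisection-style fallback toward the violated endpoint is my own choice):

using SpecialFunctions

# One Newton step for loggamma(x) == y, confined to the branch (lo, hi):
# (1.4616..., Inf) for the increasing branch, (0, 1.4616...) for the other.
function safe_newton_step(x, y, lo, hi)
    xnew = x - (loggamma(x) - y) / digamma(x)
    xnew <= lo && return (x + lo) / 2   # overshot below the branch
    xnew >= hi && return (x + hi) / 2   # overshot above the branch
    return xnew
end

Since loggamma is strictly monotonic on each branch, the root is the only zero in (lo, hi), so clamping a stray iterate back into the interval never loses it.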

I added an invloggamma function. On the branch between zero and the minimum of \Gamma, the Newton-Raphson step sometimes fails because the argument of \Gamma(x) becomes negative. I now use a crude initial guess, which has taken care of all the failures I found.
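
For illustration, one crude guess of this flavor for that branch (not necessarily the one the package uses): as x \to 0^+ we have \Gamma(x) \approx 1/x, so loggamma(x) \approx -\log x and x_0 = \exp(-y) starts close to the root when y is large.

using SpecialFunctions

# Illustrative starting guess for the branch (0, 1.4616...):
# gamma(x) ~ 1/x as x -> 0+, hence loggamma(x) ~ -log(x) and x0 ~ exp(-y).
lower_branch_guess(y) = exp(-y)

lower_branch_guess(loggamma(0.1))  # ≈ 0.105, near the true root 0.1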

(I was on the fence about making a package. But I needed this once and thought, why not publish it in some form.)