Differences between MATLAB's and Julia's beta(x, y) for large x or y?

Hi everyone,

I was comparing a colleague’s MATLAB code to my version (we are coding in parallel; I am using Julia, and I am also trying to convince him to move to Julia. Making progress… :slight_smile: ), and I noticed that the beta function returns something different for large x or y. Because these values are fed into some other related functions, differences emerge in our results.

I elaborate on this in the following code:


using SpecialFunctions
beta1 = 3.6e13;
alpha1 = 0.563483398;
a13 = -1.27e29;

julia> beta1
3.6e13

julia> alpha1
0.563483398

julia> beta(beta1,1/alpha1+1)
3.9891491767885927e-38

# MATLAB's version 
>> beta(beta1,1/alpha1+1)

ans = 4.47377930618112e-38

The differences propagate through our code because we use the following expression, which yields different answers in the two languages.

julia> a13
-1.27e29

julia> a13*beta1*beta(2*beta1,1/alpha1+1)
-26651.76910015019

# MATLAB's version 
>> a13*beta1*beta(2*beta1,1/alpha1+1)

ans = -21558.4828041656

Would anyone have any insight about why these two values differ? I tried reading up on this and found some useful issues like #14349 and #14620 but I am wondering if someone has any additional comments or insights regarding what is likely happening here.

1 Like

It’s impossible to help without seeing your full code. Please read: make it easier to help you

1 Like

The full code is too long to post.

However, I think what I have posted highlights the difference and provides all the values needed to compute the expressions. If you copy-paste what I have posted, it should run on your machine. The beta function is in SpecialFunctions.jl, I believe.

2 Likes

Ah, I see the SpecialFunctions.jl import is now at the top of the file.
That should do it.

I don’t know enough about the floating point specifics of the beta function to tell you the correct answer, but you can always use BigFloats to check your answer in Julia:


julia> a13*beta1*beta(big(2*beta1),big(1)/alpha1+1)
-26651.76910015004456135864648326051363960481520541247651058910044861872889956144
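
If you want a like-for-like comparison with the double-precision answers, you can round that BigFloat back down (a small sketch, reusing the values above):

# Round the BigFloat check back to Float64 for direct comparison
Float64(a13*beta1*beta(big(2*beta1), big(1)/alpha1+1))
# ≈ -26651.76910015, close to Julia's Float64 answer -26651.76910015019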


1 Like

I think the question is more about why these values differ so much between MATLAB and Julia. I’m not an expert in floating point arithmetic, so maybe someone with more knowledge can help.

This is weird

I get NaN here. Julia 1.6.0, SpecialFunctions 1.3.0

I think Julia’s answer is closer to correct, but with numbers this large it’s easy to overflow or hit catastrophic cancellation. To take a step back, a naive definition of beta is

B(p, q) = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)}

You wouldn’t want to use this implementation in the real world, though, because those intermediate values are huge. Like 10^{472392288679098} huge. So I’m quite certain this isn’t the implementation in either language.

julia> gamma(3.6e13)
Inf

But, hey, we can opt into higher precision floating point here:

julia> gamma(big(3.6e13))
7.882861104028365451673058070377064242970022825665897715265241972922672893761775e+472392288679098

julia> p = big(3.6e13)
3.6e+13

julia> q = 1/big(0.563483398)+1
2.774675178628776489804929213602311938188089959295862682297487944939957397047059

julia> gamma(p)*gamma(q)/gamma(p+q)
3.989149176788555366691087811957298867890005562542321569109120632926257477147039e-38

julia> beta(p, q)
3.989149176788555366691087811957298867890005562542321569109120643773484257104723e-38

So I’m not a numericist, but I think this is looking pretty good.
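
To put a number on “pretty good”, here is a small sketch comparing both double-precision answers quoted in this thread against that BigFloat reference (p and q as defined above):

# Relative error of each double-precision answer vs. the BigFloat reference
ref = beta(p, q)                  # BigFloat reference from above
jl  = 3.9891491767885927e-38      # Julia's Float64 answer
ml  = 4.47377930618112e-38        # MATLAB's answer
Float64(abs(jl - ref)/ref)        # ≈ 1e-14: agrees to ~14 digits
Float64(abs(ml - ref)/ref)        # ≈ 0.12: off by about 12%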

11 Likes

4 posts were split to a new topic: gamma(::BigFloat) overflows on Windows

This is very helpful! And another reason that will hopefully convince him to switch to Julia soon :wink:

Weird. On my side:

julia> using SpecialFunctions

julia> gamma(big(3.6e13))
7.882861104028365451673058070377064242970022825665897715265241972922672893761775e+472392288679098

julia> @which gamma(big(3.6e13))
gamma(x::BigFloat) in SpecialFunctions at /Users/gerzaguet/.julia/packages/SpecialFunctions/mFAQ4/src/gamma.jl:578

(testSpecialFunctions) pkg> st
     Project testSpecialFunctions v0.1.0
      Status `~/testSpecialFunctions/Project.toml`
  [276daf66] SpecialFunctions v1.3.0

(testSpecialFunctions) pkg> 

Mathematica also likes Julia’s answer:

In[9]:= x=36*10^12; y=1+10^9/563483398;
Beta[N[x,30],N[y,30]]

Out[10]= 3.98914917678855*10^-38

The fact that the number of accurate digits (in MMA’s estimation) drops from 30 to about 15 suggests a condition number of about 10^15, which is very nearly the reciprocal of double precision roundoff. In fact, if I tell MMA to use 16 digits in the parameters, it reports the result as simply 4*10^(-38).

Thus both Julia and MATLAB are reporting the same, correct answer; you just can’t trust more than one significant digit given the limits of double precision. I suspect that Julia’s library may use extended precision to get more accurate digits than you have a right to expect. That is, it’s treating your parameters as being given with a lot more precision than they actually have, as BigFloats do.

EDIT: In fact, if beta1 and/or alpha1 has actually been rounded off to 9 significant digits as it would appear, then the result is utterly meaningless. The next, unknown digit could change the answer by more than 100%.

10 Likes

Mathematica seems to agree with MATLAB here:
[Mathematica screenshot]

1 Like

A post was merged into an existing topic: gamma(::BigFloat) overflows on Windows

How about, instead of voting on who agrees with whom, we use a proper tool to derive an answer with proper error bounds? :wink:

julia> using Arblib

julia> gamma(p::Arblib.ArbLike) = Arblib.gamma!(zero(p), p)
gamma (generic function with 1 method)

julia> beta(p,q) = gamma(p)*gamma(q)/gamma(p+q)
beta (generic function with 1 method)

julia> # default precision: 256
       beta1 = Arb("3.6e13")
36000000000000.000000000000000000000000000000000000000000000000000000000000000

julia> alpha1 = Arb("0.563483398")
[0.5634833980000000000000000000000000000000000000000000000000000000000000000000 +/- 1.43e-77]

julia> a13 = Arb("-1.27e29")
-127000000000000000000000000000.00000000000000000000000000000000000000000000000

julia> beta(beta1, 1/alpha1 + 1)
[3.989149176788546129470797546088123799273404209572333790534334e-38 +/- 3.11e-99]

julia> a13*beta1*beta(2*beta1,1/alpha1+1)
[-26651.76910014998257725068251860373729547788703275342456102132 +/- 5.61e-57]

So it seems that Julia’s BigFloat gets only ~6 digits right?

EDIT: if you don’t quote the floating-point numbers you get this:

julia> beta1 = Arb(3.6e13)
36000000000000.000000000000000000000000000000000000000000000000000000000000000

julia> alpha1 = Arb(0.563483398)
0.56348339800000002419722022750647738575935363769531250000000000000000000000000

julia> a13 = Arb(-1.27e29)
-127000000000000001962397401088.00000000000000000000000000000000000000000000000

julia> a13*beta1*beta(2*beta1,1/alpha1+1)
[-26651.76910015004611139587054694048338038368723697731618508524 +/- 5.67e-57]

which is probably what BigFloat arithmetic is doing :wink:
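
A quick check confirms this; big() of the unquoted literal inherits the Float64 rounding, matching the Arb output above:

julia> big(0.563483398)  # the literal is rounded to Float64 before big() sees it
0.5634833980000000241972202275064773857593536376953125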

8 Likes

I don’t have a Julia session handy, but I think the right way to solve this is with asymptotic approximations. The beta function is beta(p,q) = gamma(p)*gamma(q)/gamma(p+q). For large p and small q, the ratio gamma(p)/gamma(p+q) is given by an asymptotic expansion of Tricomi and Erdélyi (see equation 1). For really large p, you can use gamma(p)/gamma(p+q) ≈ p^(-q). The remaining factor gamma(q) poses no problem for q = 1/alpha1 + 1 in your example.
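
For what it’s worth, here is a sketch of that crude large-p approximation (beta_asym is just an illustrative name, not a library function). The first neglected term in the expansion is of order q(q-1)/(2p) ≈ 7e-14 here, so it should agree with the Arb/BigFloat values above to roughly 12 digits:

using SpecialFunctions

# Crude large-p asymptotic: gamma(p)/gamma(p+q) ≈ p^(-q),
# so beta(p, q) ≈ gamma(q) * p^(-q). Illustrative sketch only.
beta_asym(p, q) = gamma(q) * p^(-q)

beta_asym(3.6e13, 1/0.563483398 + 1)  # ≈ 3.9891e-38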

To build on others’ answers that express beta() as \Gamma(p)\Gamma(q)/\Gamma(p+q), it seems you can trivially reproduce MATLAB’s behavior in Julia using loggamma():

julia> p = beta1
3.6e13

julia> q = 1/alpha1 + 1
2.7746751786287764

julia> exp(loggamma(p) + loggamma(q) - loggamma(p+q))
4.4737793061811207e-38

julia> exp(loggamma(big(p)) + loggamma(big(q)) - loggamma(big(p)+q))
3.98914917678856144745513696868341367384631253052121882201338979974154777667653e-38

I’d bet on round-off error, as others have pointed out.

EDIT: loggamma() is preferred over the deprecated lgamma().

6 Likes

If this doesn’t convince him, nothing will.

2 Likes

Thanks, this is useful. I didn’t realize there was a loggamma() function.

I’m quite a fan of the exponential-log trick in general, so this is pleasing to see.

1 Like

I’d bet on round-off error

You don’t just have to bet on round-off error; you can see it happening by looking at the terms inside that exp call. First look at each term individually:

julia> loggamma(p), loggamma(q), loggamma(p+q)
(1.087723441957833e15, 0.4955539367041963, 1.0877234419579195e15)

Note that loggamma(p) and loggamma(p+q) are large — around 1e15 — but nearly equal, so you have to expect that their difference will be dominated by round-off error. The built-in eps function shows us that each number is only represented to a precision of 0.125:

julia> eps(loggamma(p)), eps(loggamma(p+q))
(0.125, 0.125)

So even if loggamma is effectively exact, using Float64 limits the accuracy of each result, and therefore of their difference, to roughly that size. In fact, we have

julia> loggamma(p) + loggamma(q) - loggamma(p+q)
-86.0

while the true value is more like -86.11466.

So the error in the argument to the exp function is large. But the fractional error in an exponential is approximately the absolute error in its argument:

\frac{\exp(x + \Delta x) - \exp(x)} {\exp(x)} = \exp(\Delta x) - 1 \approx \Delta x

Here, \Delta x \approx 0.11466, and hsgg’s results show that the fractional error in beta when evaluated in this way with floats is about 0.12149. (In fact the error is precisely \exp(\Delta x) - 1, so everything appears to be working correctly.) Thus, the error is consistent with the above failure of cancellation.
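
As a one-line sanity check of that prediction, using the \Delta x quoted above:

expm1(0.11466)  # exp(Δx) - 1 ≈ 0.121492, the fractional error observed above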

It may be worth noting that SpecialFunctions.beta avoids this level of error by testing for the condition where it happens and directly approximating the combination loggamma(p) - loggamma(p+q) when it will affect the result.
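
If you need the log-scale value yourself, SpecialFunctions also exports logbeta (and logabsbeta); assuming they share beta’s careful handling, they spare you from subtracting two huge loggamma values by hand. A small sketch:

using SpecialFunctions

p, q = 3.6e13, 1/0.563483398 + 1

exp(logbeta(p, q))  # stable route; should match beta(p, q) ≈ 3.9891e-38

# The hand-rolled route from earlier in the thread, for contrast:
exp(loggamma(p) + loggamma(q) - loggamma(p + q))  # ≈ 4.47e-38, off by ~12%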

18 Likes