Hello,
I am trying to calculate values of the sine function to high precision. I want to get at least 1 million valid digits.
Using the following code I test the accuracy of BigFloat:
# Set precision to 7 million bits, so I expect about 2 million valid digits
setprecision(7000000)
# Get value of pi with high precision
bigpi=BigFloat(pi)
# Calculate difference of sin(pi/2)-1. Should be zero in exact arithmetic
sin(bigpi/BigFloat(2))-1
# Calculate difference of sin(pi/6)-1/2. Should be zero in exact arithmetic
sin(bigpi/BigFloat(6))-1/BigFloat(2)
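As a sanity check of the digit count I expect (a rough sketch, using the standard conversion of log10(2) ≈ 0.30103 decimal digits per bit of precision):
# Rough conversion of the binary precision into decimal digits
7000000 * log10(2)   # ≈ 2.1e6, i.e. about 2.1 million decimal digits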
For the first difference I get exactly zero. That means the result of sin(pi/2) is valid to about 2 million digits.
For the second difference I get a value of about 4.76993…e-306923. That means sin(pi/6) is only accurate to about 300,000 digits, not 2 million.
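One rough way to read off the number of correct digits from that difference (reusing bigpi from the code above; on a machine where the difference is exactly zero this gives Inf):
# The magnitude of the error tells roughly how many leading decimal digits are correct
err = abs(sin(bigpi/BigFloat(6)) - 1/BigFloat(2))
-log10(err)   # ≈ 306922 here, i.e. only about 300,000 correct digits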
Did I do something wrong in the setup of this test, or are there limitations in the accuracy of the sin function?
I get 0 for the second calculation too.
Did you try restarting Julia? Perhaps you defined different things with different precision or something like that.
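One quick thing to check would be the precision that is actually in effect, and the precision your existing bigpi value carries, e.g.:
# Default precision (in bits) used for newly created BigFloats
precision(BigFloat)
# Precision of the already-constructed value
precision(bigpi)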
If you want a rigorous guarantee of correctness, you can use IntervalArithmetic.jl, e.g.
julia 0.6> using IntervalArithmetic
julia 0.6> @biginterval sin(pi/6) - 0.5
[-4.31809e-78, 8.63617e-78]
julia 0.6> setprecision(10000)
10000
julia 0.6> @biginterval sin(pi/6) - 0.5
[-5.01238e-3011, 5.01238e-3011]
julia 0.6> setprecision(100000)
100000
julia 0.6> @biginterval sin(pi/6) - 0.5
[-1.001e-30103, 2.002e-30103]
julia 0.6> setprecision(1000000)
1000000
julia 0.6> @biginterval sin(pi/6) - 0.5
[-5.05018e-3010, 1.01004e-30103]
Here, @biginterval is a macro that evaluates the following expression in a guaranteed way by converting each literal (e.g. 0.5) and variable to an Interval{BigFloat}.
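If you want a digit count rather than an interval, one possibility (assuming I remember correctly that diam returns the interval's width) is to convert that width into decimal digits:
# Width of the enclosing interval bounds the error of the computed value
w = diam(@biginterval sin(pi/6) - 0.5)
# Number of decimal digits guaranteed correct (roughly)
floor(Int, -log10(w))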
Thanks for your replies.
I think this is an issue with the operating system. I used Win10, and there I observe the same problem for all three variants: sin, sind, and sinpi.
Using Ubuntu (16.04) and the generic Julia package (0.6.0) for Linux, I get the exact result.
Is it a known issue that Julia returns different results on Windows and Linux?
I'm not that familiar with high-precision arithmetic, but for IEEE 754 I would expect the same results up to rounding error (~1e-16). From my naive point of view, I would assume that with high-precision arithmetic only the last few digits are wrong, but in the above example 1.6 million of about 2 million digits are wrong.
Just a guess until it can be debugged in full, but I know MPFR and GMP use the C long type in many APIs for integer sizes, and on Windows that is 32 bits, whereas on most other operating systems C long is 64 bits on x86_64. So this may be a limitation caused by the C types used in the library.
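If you want to see that difference directly, Clong is Julia's alias for the platform's C long, so on the affected machine something like this should show 32 rather than 64:
# Width (in bits) of the C long type that the MPFR/GMP C APIs use for sizes
sizeof(Clong) * 8   # 32 on Windows, 64 on 64-bit Linux/macOS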