Hello everybody!
I'm currently studying the speed of convergence of sequences of real numbers. I'm starting with classical algorithms such as Archimedes' polygonal approximation of π and Newton's computation of a square root. I have already illustrated them with some simple Python programs and I want to translate them into Julia.
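Explicitly, the recursion I iterate is Newton's method applied to $x^2 - 2 = 0$ (Heron's method):

$$
u_0 = 1, \qquad u_{n+1} = u_n + \frac{2 - u_n^2}{2\,u_n} = \frac{1}{2}\left(u_n + \frac{2}{u_n}\right) \longrightarrow \sqrt{2}.
$$

Here's an example in Python: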
```python
from decimal import Decimal, getcontext

def u(n):
    if n == 0:
        return Decimal(1)
    return u(n - 1) + (2 - u(n - 1) * u(n - 1)) / (2 * u(n - 1))  # recursive call

def main():
    """Print the first 8 terms with 100 decimal places."""
    getcontext().prec = 101
    for i in range(0, 8):
        print("i=", i, ":", u(i), end='\n')

if __name__ == '__main__':
    main()
```
The output is:
i= 0 : 1
i= 1 : 1.5
i= 2 : 1.4166666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666667
i= 3 : 1.4142156862745098039215686274509803921568627450980392156862745098039215686274509803921568627450980392
i= 4 : 1.4142135623746899106262955788901349101165596221157440445849050192000543718353892683589900431576443402
i= 5 : 1.4142135623730950488016896235025302436149819257761974284982894986231958242289236217849418367358303566
i= 6 : 1.4142135623730950488016887242096980785696718753772340015610131331132652556303399785317871612507104752
i= 7 : 1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276416016
√2=1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727
I use colors to show, at every step, how many exact decimal places we get. A theoretical discussion follows, leading to acceleration processes.
A starting point for a Julia equivalent would be:

```julia
using Decimals

# setprecision(1600)  # rough estimate, in bits, of the size needed for such data

u(n) = n == 0 ? decimal(1) : u(n - 1) + (2 - u(n - 1) * u(n - 1)) / (2 * u(n - 1))

for i in 0:7
    println("i=", i, ":", u(i))
end
```
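For comparison, my guess is that `setprecision` applies to `BigFloat` rather than to Decimals.jl values, so here is a minimal sketch along those lines (the 350-bit precision is just my rough estimate for about 100 decimal digits, and `v` is only renamed to avoid clashing with the Decimals-based `u`):

```julia
# Sketch with BigFloat instead of Decimals.jl.
# Assumption: setprecision sets the default BigFloat precision in bits
# (350 bits ≈ 105 decimal digits) and does not affect Decimals.jl values.
setprecision(350)

# Same Newton recursion for √2, starting from a BigFloat.
v(n) = n == 0 ? BigFloat(1) : v(n - 1) + (2 - v(n - 1) * v(n - 1)) / (2 * v(n - 1))

for i in 0:7
    println("i=", i, ":", v(i))
end
```

If I'm not mistaken, the number of printed digits in this variant does follow the value passed to `setprecision`, which makes me suspect my problem is specific to Decimals.jl.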
So my problem is: what is the Julia equivalent of Python's `getcontext().prec = 101`?
I found some discussions here and there about this, but they were mainly concerned with producing big integers, a technique I am already familiar with, even in Julia.
First, I couldn't see any effect of `setprecision` on the Decimals version above: whatever value I give it, I always get the same result, without any error:
i=0:1
i=1:1.5
i=2:1.41666666666666666667
i=3:1.41421568627450980392
i=4:1.41421356237468991063
i=5:1.4142135623730950488
i=6:1.4142135623730950488
i=7:1.4142135623730950488
This is not enough for this special case, nor for others to come later.
Is it just a matter of `println` applying some default rounding, or is it something more serious?
Note: specifying the initial term as `decimal(1)` instead of a plain 1 (Int64) gives 3 additional decimal places, no more…