I get strange output from this function:
function foo(n)
    t = range(-1, 1, length=n)
    x = (t .- 1) ./ (t .+ 1)
    return x[1]
end
For example
julia> for n = 100:110
           println("foo($n) = ", foo(n))
       end
foo(100) = -Inf
foo(101) = -Inf
foo(102) = -6.338253001141147e29
foo(103) = -Inf
foo(104) = -4.056481920730334e31
foo(105) = -Inf
foo(106) = 2.535301200456459e30
foo(107) = -Inf
foo(108) = -Inf
foo(109) = -Inf
foo(110) = -Inf
I am using the julia version packaged in Fedora 32:
julia> versioninfo()
Julia Version 1.4.2
Commit 44fa15b150* (2020-05-23 18:35 UTC)
Platform Info:
OS: Linux (x86_64-redhat-linux)
CPU: AMD Ryzen 7 1700 Eight-Core Processor
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-8.0.1 (ORCJIT, znver1)
If I define g(t) = (t-1)/(t+1) and do x = g.(t), then x[1] is always -Inf, as expected.
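For reference, here is a minimal sketch contrasting the two code paths (the variable names are mine): dotted arithmetic on the range goes through specialized range broadcasting, while a dotted user function walks the range element by element.

```julia
t = range(-1, 1, length=102)
g(x) = (x - 1) / (x + 1)

# Dotted arithmetic on the range itself: t .+ 1 is computed as another
# range, whose first element can differ from 0.0 by a tiny rounding error.
via_range = (t .- 1) ./ (t .+ 1)

# Dotted call of a scalar function: each element is computed from t[i]
# directly, and t[1] == -1.0 exactly, so t[1] + 1 == 0.0.
via_func = g.(t)

println(via_range[1])
println(via_func[1])   # -Inf
```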
Looks like this is an issue with StepRangeLen not choosing the reference value to be the first one.
julia> s = range(-1, 1, length=102);
julia> b = Base.broadcasted(+,s,1)
3.1554436208840472e-30:0.019801980198019802:2.0
julia> b.offset
52
julia> b.ref
Base.TwicePrecision{Float64}(1.00990099009901, -8.7938457396052e-18)
julia> s[52] + 1
1.00990099009901
So it chooses the reference value to be the 52nd one (I’m not sure how this works). Subsequently it works backwards to evaluate the first value, which then isn’t zero anymore because the step size isn’t exactly representable.
julia> s[1] + 1
0.0
julia> b[1]
3.1554436208840472e-30
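The same bookkeeping can be probed directly. This is a sketch relying on the internal fields ref and offset of StepRangeLen, which are implementation details and may change between Julia versions:

```julia
s = range(-1, 1, length=102)
b = s .+ 1        # broadcasting + with a scalar keeps the range type

# The range stores a high-precision reference value (b.ref) at index
# b.offset; b[i] is reconstructed as ref + (i - offset) * step.
println(b.offset)     # 52 in the session quoted above

# Walking backwards from the shifted reference loses the exact zero:
println(b[1])         # tiny nonzero value instead of 0.0
println(s[1] + 1)     # 0.0, since s[1] == -1.0 exactly
```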
When you evaluate g.(t), it converts the range into an array, in which case 1 is added to each value independently, and the first term evaluates to zero.
Thanks jishnub. So defining
t = collect( range(-1, 1, length=102) );
x = (t .- 1) ./ (t .+ 1);
produces x[1] equal to -Inf.
I note that you can also do this:
t = range(-1, 2/n, length=n)
But this seems to fail for n=110 (it works for 100 to 109, though).
Your collect approach seems to work, though.
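A related sketch, not taken from the thread above: LinRange stores its first and last values exactly rather than using a reference/offset scheme, so in my understanding the element-wise arithmetic keeps t[1] + 1 == 0.0 regardless of n.

```julia
n = 110
t = LinRange(-1, 1, n)         # stores first and last endpoints exactly
x = (t .- 1) ./ (t .+ 1)
println(x[1])                  # -Inf, since t[1] == -1.0 exactly
```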