I meant that you should put your code in functions instead of running things in global scope. And an array doesn’t have to be a global variable.
And why is it not possible to define an array as const? Let’s say I have an array D containing repeated values, of which I want to consider the set of unique entries. I then define
Ds=unique(D)
and then loop over every entry in Ds. Since I do not modify Ds, it should be constant, and I should be able to write
const Ds=unique(D)
but this is not possible somehow.
Read the error message Julia gives you. You can’t change a non-constant variable into a constant variable (otherwise it wouldn’t really be constant).
To add a bit more:
a = 3
const a = 4
is not allowed but
const b = 3
b = 4
will be allowed (but will show you a warning that you’re changing the value assigned to the constant).
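A short REPL session showing both cases (the exact messages may vary between Julia versions):

```julia
julia> a = 3
3

julia> const a = 4
ERROR: cannot declare a constant; it already has a value

julia> const b = 3
3

julia> b = 4
WARNING: redefining constant b
4
```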
Sorry, I meant these warnings, which appear on the second run of the script:
WARNING: redefining constant Ds
WARNING: redefining constant ind0
Ds and ind0 are arrays that do not actually change between runs. I also declared another constant in the same way,
const N=20
and there is no warning for N.
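A minimal sketch of the pattern I mean (the contents of D here are made up; the real D comes from earlier in the script):

```julia
const N = 20          # no warning on the second run of the script
D = [1, 2, 2, 3, 1]   # placeholder data
const Ds = unique(D)  # WARNING: redefining constant Ds (second run)
```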
It’s not impossible to give general advice, but it is challenging to follow a discussion about code we haven’t seen. Maybe you can create a runnable example?
Hello.
I have a newbie question.
Why do you sometimes use p and sometimes $p?
This is specific to the BenchmarkTools package. See his reply: “Inside a function, not a loop. And this way of using $ for variable interpolation is specific to the BenchmarkTools package.”
Regarding const and interpolation, I would think about it in terms of type inference.
julia> foo(a) = a + b
foo (generic function with 1 method)
julia> b = 3
3
julia> foo(2)
5
julia> @code_warntype foo(2)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Int64
Body::Any
1 ─ %1 = (a + Main.b)::Any
└── return %1
The function foo does not know what b is, therefore a + Main.b can return anything.
Examples:
julia> b = 3.0
3.0
julia> foo(2)
5.0
julia> using ForwardDiff
julia> b = ForwardDiff.Dual(3.0, ForwardDiff.Partials((1.0,0.0,0.0,0.0)))
Dual{Nothing}(3.0,1.0,0.0,0.0,0.0)
julia> foo(2)
Dual{Nothing}(5.0,1.0,0.0,0.0,0.0)
julia> struct WeirdBehavior end
julia> Base.:+(a::Int, ::WeirdBehavior) = println("Hello, world! 🌎")
julia> b = WeirdBehavior()
WeirdBehavior()
julia> foo(2)
Hello, world! 🌎
As the WeirdBehavior example shows, essentially anything could happen when we call foo(::Int). When we call foo, Julia has to figure out which + method is actually supposed to get called, and then dynamically dispatch to that correct method.
If that + method is fast – like adding two integers – this dynamic dispatch will be many times slower than the operation itself.
Things get worse when foo is complicated. Because Julia cannot know what the return type of a + Main.b is, if the function does anything with the answer (e.g., multiplying it by 4), we have the same problem all over again: another dynamic dispatch. The ::Any is infectious, and can spread, killing performance.
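To illustrate the spread, here is a made-up bar that does something with the unstable result:

```julia
# `b` is still a non-const global, so `a + b` is inferred as ::Any,
# and the `4 * ...` consuming that result must also dispatch dynamically.
bar(a) = 4 * (a + b)
# @code_warntype bar(2) flags both the `+` and the `*` as returning ::Any.
```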
Of course, there were no problems with a above. That it is a global variable doesn’t matter; we passed it to foo as an argument, letting Julia dispatch to a foo specialized for a::Int. If we replaced a with a Float64, Julia would again automatically know it is a Float64, and we won’t have a problem with a.
So how can we fix b? That is, how can we make sure Julia knows its type?
Obviously, by making b an argument:
julia> foo(a, b) = a + b
foo (generic function with 2 methods)
julia> @code_warntype foo(2, 3)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Int64
b::Int64
Body::Int64
1 ─ %1 = (a + b)::Int64
└── return %1
julia> @code_warntype foo(2, 3.0)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Int64
b::Float64
Body::Float64
1 ─ %1 = (a + b)::Float64
└── return %1
julia> @code_warntype foo(2.0, 3.0)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Float64
b::Float64
Body::Float64
1 ─ %1 = (a + b)::Float64
└── return %1
julia> @code_warntype foo(2.0, 3)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Float64
b::Int64
Body::Float64
1 ─ %1 = (a + b)::Float64
└── return %1
Now, the specific + method that needs to be called within foo is known. This means that Julia does not need to dispatch on + dynamically, and can instead compile each of these foos into very efficient machine code.
Similarly, we could instead make b a const (renamed cb here, since b is already defined as a non-constant global):
julia> const cb = 3
3
julia> foo(a) = a + cb
foo (generic function with 2 methods)
julia> @code_warntype foo(2)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Int64
Body::Int64
1 ─ %1 = (a + Main.cb)::Int64
└── return %1
julia> @code_warntype foo(2.0)
Variables
#self#::Core.Compiler.Const(foo, false)
a::Float64
Body::Float64
1 ─ %1 = (a + Main.cb)::Float64
└── return %1
Because cb is constant, it cannot suddenly change; we can’t replace it with a WeirdBehavior or anything other than another object of the same type:
julia> cb = WeirdBehavior()
ERROR: invalid redefinition of constant cb
Stacktrace:
[1] top-level scope at REPL[50]:1
Replacing it with another object of the same type is allowed, however, and will cause problems:
julia> foo(2)
5
julia> cb = 100
WARNING: redefining constant cb
100
julia> foo(2)
5
The function foo did not get updated, and is still using the old value!
Similarly, this is what happened when you got warnings about redefining your constant arrays:
julia> const B = [1, 2]
2-element Array{Int64,1}:
1
2
julia> sumB() = sum(B)
sumB (generic function with 1 method)
julia> sumB()
3
julia> B[1] = 100
100
julia> sumB()
102
julia> B = [-10^6, -10^7]
WARNING: redefining constant B
2-element Array{Int64,1}:
-1000000
-10000000
julia> sumB()
102
julia> B[1] = 10^9
1000000000
julia> sumB()
102
The function sumB is still using the old array B, and modifying the new one no longer changes the answers we’re getting.
Best not to change a global constant.
This is all also fairly awkward, which is why it’s normally better to pass things as function arguments, if you can.
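For the sumB example above, passing the array as an argument would look like this (a sketch):

```julia
sumB(B) = sum(B)      # take the array as an argument instead of a const global
B1 = [1, 2]
sumB(B1)              # 3
B2 = [-10^6, -10^7]
sumB(B2)              # -11000000; swapping arrays is now harmless
```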
What does this all have to do with interpolation?
Our original problem was that the + inside foo was a dynamic dispatch, because the type of Main.b was not known.
In reality, if our code is type stable (as it should be most of the time), we won’t have this problem, so we don’t want to benchmark the dynamic dispatches.
However, when we benchmark an expression like
@benchmark x + y
that is precisely what we’re doing: benchmarking the dynamic dispatch (assuming x and y aren’t both const). The BenchmarkTools library, however, has a hack, letting us use $ to avoid the dynamic dispatch and call + as though we were calling it inside a type-stable function:
@benchmark $x + $y
which, in reality, is what we’re interested in the vast majority of the time.
Note that if x and y are simple objects like numbers, the above will probably report times of much less than a nanosecond, because the optimizer can realize the answer is constant and thus avoid recomputing it, defeating the benchmark. That is another problem, which (for now) you can avoid via something like:
julia> using BenchmarkTools
julia> @benchmark $(Ref(2))[] + $(Ref(3))[]
BenchmarkTools.Trial:
memory estimate: 0 bytes
allocs estimate: 0
--------------
minimum time: 1.534 ns (0.00% GC)
median time: 1.538 ns (0.00% GC)
mean time: 1.609 ns (0.00% GC)
maximum time: 14.689 ns (0.00% GC)
--------------
samples: 10000
evals/sample: 1000
but the smarter the compiler gets, the harder it will be to trick it into doing enough useless work to accurately time how long a computation takes.