The dot there means the same thing as in Julia?
PS: it's better to use triple backticks to quote the code.
From what (little) I understand, element-by-element multiplication in Matlab is the same as broadcasting in Julia.
Try

```julia
cc .= aa.*bb
```

This should avoid some allocations.
There was yet another typo. This is what you should get:
```julia
julia> aa = zeros(Float32,100);

julia> bb = ones(Float32,100);

julia> cc = ones(Float32,100);

julia> function test!(aa,bb,cc)
           for i = 1:100
               for j = 1:100
                   for k = 1:100
                       cc .= aa.*bb;
                   end
               end
           end
       end
test! (generic function with 1 method)

julia> using BenchmarkTools

julia> @btime test!($aa,$bb,$cc)
  15.543 ms (0 allocations: 0 bytes)

julia>
```
Note the second dot in `cc .= aa .* bb`, meaning that you are updating `cc` and not creating a new array at every iteration (this can also be written as `@. cc = aa * bb`).
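To make the allocation difference concrete, here is a minimal sketch (not from the thread; the function names are made up) comparing the two forms with `@allocated`:

```julia
aa = rand(Float32, 100); bb = rand(Float32, 100); cc = zeros(Float32, 100)

mul_alloc!(aa, bb, cc)   = (cc = aa .* bb; nothing)    # `=` rebinds cc to a *new* array
mul_inplace!(aa, bb, cc) = (cc .= aa .* bb; nothing)   # `.=` writes into the existing cc

mul_alloc!(aa, bb, cc); mul_inplace!(aa, bb, cc)       # warm up (compile both)
@show @allocated mul_alloc!(aa, bb, cc)                # > 0: allocates a new vector each call
@show @allocated mul_inplace!(aa, bb, cc)              # typically 0: reuses the memory of cc
```

So inside a loop, the `.=` form keeps reusing the same buffer instead of allocating a fresh array each pass.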
Now it says it takes 600ms, but it takes more than that to print the answer on the screen(!?)
This is because `@btime` executes the function many times to obtain an accurate measure of the performance. Also, the function, here `test!`, is compiled on the first call, so you get an inaccurate measure of its performance by measuring only one call, if it is a fast function. For example, starting from a fresh session:
```julia
julia> aa = rand(Float32,100);

julia> bb = rand(Float32,100);

julia> cc = ones(Float32,100);

julia> function test!(aa,bb,cc)
           for i = 1:100
               for j = 1:100
                   for k = 1:100
                       cc .= aa .* bb;
                   end
               end
           end
       end
test! (generic function with 1 method)

julia> @time test!(aa,bb,cc) # first run
  0.100118 seconds (221.07 k allocations: 12.956 MiB, 9.55% gc time, 83.58% compilation time)

julia> @time test!(aa,bb,cc) # second and following runs
  0.025998 seconds
```
This is using the `@time` macro. The `@btime` macro does those multiple runs for you, and reports the minimum time obtained (which is a more stable measure of performance).
I would suggest that you try out a "real life" example from your everyday work. That will be much more helpful than some generic example (the one shown here does not seem very interesting/meaningful to me).
Note also that comparing performance for a single vectorized operation is pretty uninteresting and unlikely to show much in the way of performance gains. See also Comparing Numba and Julia for a complex matrix computation - #3 by stevengj
There is one considerable difference relative to what you are getting, which might be one source of problems. When I run the very last part (the benchmark) I get the following:
```julia
julia> @btime test!($aa,$bb,$cc)
  551.124 ms (2000000 allocations: 61.04 MiB)
```
Aside from the time that can be machine specific, it looks like you have 0 allocations, while I have 2M. Is that a problem?
```julia
function test(as,bb,vc)
```
There is a typo there: `as` should be `aa` (my fault on the original post).
Aside from the time that can be machine specific, it looks like you have 0 allocations, while I have 2M. Is that a problem?
That above: `aa` was being treated as a global variable inside the function because the name of the parameter was wrong. (I figured it out readily when I noticed those allocations, which shouldn't be there.)
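Here is a minimal reproduction of what happened (hypothetical function names, reduced iteration count): with the parameter misnamed `as`, the `aa` used inside the body resolves to the untyped global, and every iteration pays for dynamic dispatch and allocations:

```julia
aa = rand(Float32, 100); bb = rand(Float32, 100); cc = zeros(Float32, 100)

function test_typo!(as, bb, cc)   # parameter misspelled: `as` is never used...
    for k in 1:1000
        cc .= aa .* bb            # ...so this `aa` is the (untyped) global variable
    end
end

function test_fixed!(aa, bb, cc)  # correct parameter name
    for k in 1:1000
        cc .= aa .* bb            # `aa` is now the concretely-typed argument
    end
end

test_typo!(aa, bb, cc); test_fixed!(aa, bb, cc)   # warm up (compile both)
@show @allocated test_typo!(aa, bb, cc)           # allocations on every iteration
@show @allocated test_fixed!(aa, bb, cc)          # typically 0 bytes
```

The surprising allocation count in a loop like this is often the first visible symptom of an accidental global.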
(And the broadcast assignment of `cc`, the first dot I mentioned there: `cc .= aa .* bb`.)
That did make it faster. Now it takes 125 ms on my laptop, which is at least about the same as Matlab.
Ok, say that I want to try something more complicated. The type of stuff that I do is solve/calibrate numerical models whose solution is essentially the solution to a fixed-point problem.
How should I think about Julia in terms of the things it does more efficiently? Loops always? No vectorization? Is there a must-read book/article/blog?
thanks again guys, I really appreciate it
Is there a must-read book/article/blog?
How should I think about Julia in terms of the things it does more efficiently?
I think it is easier to say where you should not expect a significant gain: if the expensive part of the Matlab code is a call to a library that is optimized in a lower-level language, the Julia code (which, in many cases, will eventually call the same library) will be equally fast (or slow). Other times a pure Julia library for the same task, or your own implementation, adapted to your problem, can be faster.
Julia can be as efficient as a low-level language like C++ or Fortran, not more, not less.
(PS: you are probably still missing the dot, or the fix for the `as` typo, to get that timing. As good practice, it is always nice to post your latest code, so others can suggest improvements and fixes. If you have a slightly more realistic example, from there, there are other possible optimizations worth exploring, such as parallelizing the loops.)
I also have some notes, which I write as I learn: Home · JuliaNotes.jl
I think it is possible that Matlab realized that the result was not dependent on the index i, j, k, and collapsed the whole looping into a single multiplication of matrices.
Edit: This does not happen in Matlab. I checked.
There is a typo there: `as` should be `aa` (my fault on the original post).
No, not one typo. Two typos: `as` and `vc`.
It’s really important to get rid of all the typos.
Also, @asaretto, this is what Matlab was made for. There should not be a big difference between Matlab and Julia on this.
What’s more: this just runs the same simple operation over and over. You should leave those repetitions to your benchmarking tool; don’t do them manually. At worst, the compiler may even decide that it only needs to run a single iteration instead of 100^3, and return a bonkers result.
```julia
for i = 1:100
    for j = 1:100
        for k = 1:100
            cc = aa.*bb;
        end
    end
end
```
…this is what Matlab was made for.
Explicit loops like those are fast in Matlab? (In some specific context or in general?)
Because compilation wasn’t fully explained earlier in the thread, I’ll go over it in a little bit more detail. The first time you run a function, Julia takes a while because it’s translating it into a well-optimized binary that your computer can read. Matlab translates this function while running it, with very little optimization. This means you don’t have to wait for it to get translated, but any future function calls will be much slower than Julia function calls (assuming identically-written programs).
(Something something this is a lie to children, but it gets the rough point across.)
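A rough way to see this compilation cost from the REPL, without BenchmarkTools (hypothetical function name; exact times are machine-dependent):

```julia
# First call pays for JIT compilation; later calls run the cached native code.
sq_sum(x) = sum(abs2, x)        # hypothetical small function

x = rand(10^6)
t_first  = @elapsed sq_sum(x)   # includes compiling sq_sum for Vector{Float64}
t_second = @elapsed sq_sum(x)   # execution only
@show t_first t_second          # typically t_first is much larger
```

This is also why `@btime` (or at least a second `@time` call) is the fairer comparison against Matlab's compile-as-you-go model.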
Yes, Matlab has made quite big improvements in the JIT compiler, roughly 2015-ish. There was a big jump in performance for certain kinds of loops that used to be glacially slow; element-wise computations, for example.
Matlab is great. But you pay. Then you pay again. Next, you pay. Then more paying. Finally, you pay.
It is a one time payment, paid in annual installments. And after you paid for it, it is entirely free.
Not really. I rented Matlab + addons for 1800 EUR per year for a couple of years, now I cannot use it any more.
They have different types of licenses.
Sorry, 't was a joke (“borrowed” from Portlandia).