Hi, thank you for all the effort that has gone into getting Julia to this point.
I'm very new to Julia and trying to get my head around when I get faster run-times or smaller floating-point-operation (FLOP) counts by handing things over to the compiler, and when I'm better off hand-coding something.
Take this example:
Given `x` is an array of `Float64` or `BigFloat`, create `x, x^2, x^3, ..., x^n`.
Obviously there are common sub-expressions that I could exploit to reduce the number of calculations, e.g. `x2 = x * x; x4 = x2 * x2`, etc.
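As a concrete illustration of that idea (a sketch of my own, nothing Julia-specific): reaching `x.^8` naively costs seven element-wise multiplies, while reusing sub-expressions costs three.

```julia
x = [1.0, 2.0, 3.0, 4.0]

# Naive: seven element-wise multiplies.
x8_naive = x .* x .* x .* x .* x .* x .* x .* x

# Repeated squaring: three multiplies.
x2 = x .* x
x4 = x2 .* x2
x8 = x4 .* x4

x8 == x8_naive  # true here; in general the two orderings can round differently
```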
How do I determine whether the Julia compiler exploits those relations in something like the following?

```julia
x = [1, 2, 3, 4];
collect(ntuple(i -> x::Array{Int64,1} .^ i::Int64, 4))
```
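One way to probe this (standard tooling, not specific to this problem) is to put the expression in a function, benchmark it with BenchmarkTools.jl rather than a first-call `@time` (which includes compilation), and inspect what the compiler emits. The `powers` helper below is just an illustrative name of my own:

```julia
using BenchmarkTools     # assumes BenchmarkTools.jl is installed
using InteractiveUtils   # for @code_llvm outside the REPL

x = [1, 2, 3, 4]
powers(x, ::Val{N}) where {N} = ntuple(i -> x .^ i, Val(N))

@btime powers($x, Val(4));     # timing/allocations without compilation overhead
@code_llvm powers(x, Val(4))   # inspect the emitted IR for shared work
```

If the compiler were reusing intermediate powers, that would show up in the IR; I would not expect the separate `x .^ i` broadcasts to share work with each other.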
Alternatively:

```julia
julia> @time map(i -> x .^ i, [1, 2, 3, 4])
  0.133190 seconds (72.89 k allocations: 3.724 MiB, 21.80% gc time)
4-element Array{Array{Int64,1},1}:
 [1, 2, 3, 4]
 [1, 4, 9, 16]
 [1, 8, 27, 64]
 [1, 16, 81, 256]
```
In fact, is the above the most efficient Julia way to build such a matrix or array of arrays? Or am I better off hand-coding it (accepting that this might limit the size of `n`)?
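For comparison, one hand-coded alternative (a sketch of my own, not a canonical idiom) reuses each power to compute the next, so building all powers `1:n` costs `n - 1` element-wise multiplies in total:

```julia
# Build [x, x.^2, ..., x.^n] by multiplying the previous power by x,
# instead of computing each power from scratch.
function all_powers(x::AbstractVector{T}, n::Integer) where {T}
    out = Vector{Vector{T}}(undef, n)
    out[1] = collect(x)
    for i in 2:n
        out[i] = out[i - 1] .* x  # reuse the previous power
    end
    return out
end

all_powers([1, 2, 3, 4], 4)  # same result as the map/ntuple versions
```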
Appreciate any hints or tips.
Update:
David Sanders' Slack comment pointed to Cassette.jl as a package suited to writing a FLOP counter:

> **Mark Van de vyver:** Is there a convenience macro/function that counts the floating point operations in compiled code/function?
>
> **David Sanders:** You should be able to do that with Cassette.jl. Or you could probably set up a new type that wrapped floating point numbers and counted the operations. But Cassette effectively does that automatically for you.

Such a counter would go a long way to answering the question "Which approach ends up using the fewest floating-point operations?". However, it is beyond my current Julia-fu to work out how to build such a counter from Cassette.jl.
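As a starting point, here is a rough sketch of the wrapper-type idea David mentions (the names `CountedFloat` and `FLOPS` are my own invention; a real version would also need promotion rules and more operators):

```julia
const FLOPS = Ref(0)  # global operation counter, reset between measurements

# A Real subtype that wraps a Float64 and counts every arithmetic operation.
struct CountedFloat <: Real
    x::Float64
end

# Count the four basic operations; @eval avoids four near-identical methods.
for op in (:+, :-, :*, :/)
    @eval function Base.$op(a::CountedFloat, b::CountedFloat)
        FLOPS[] += 1
        CountedFloat($op(a.x, b.x))
    end
end

# Integer powers via repeated multiplication, so each multiply is counted.
function Base.:^(a::CountedFloat, n::Integer)
    n >= 1 || throw(DomainError(n, "sketch only handles positive powers"))
    r = a
    for _ in 2:n
        r = r * a
    end
    return r
end

# Usage: reset, run, read back.
FLOPS[] = 0
v = CountedFloat.([1.0, 2.0, 3.0, 4.0])
[v .^ i for i in 1:4]
FLOPS[]  # 24: four elements x (0 + 1 + 2 + 3) multiplies
```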
Some related/prior art in Python land is outlined in this GitHub issue.
HTH