Efficiently type alias Float128, Float64, and Float32

I am trying to make some of my scripts more convenient by using command-line options to switch between Float128, Float64, and Float32 in multiple locations. My attempt does something like the following:

using LinearAlgebra  # provides the identity operator I used below

FloatA = eval(Meta.parse(ARGS[1]));
FloatB = eval(Meta.parse(ARGS[2]));

function init_a(n::Integer)::Matrix{FloatA}
    a = zeros(FloatA, n, n)
    #Initialize a
    return a
end

function init_b(n::Integer)::Matrix{FloatB}
    b = Matrix{FloatB}(I, n, n)
    #initialize b
    return b 
end

When I do this in my script, however, I take a huge performance hit, and I’m guessing it is because I am making my floating-point types global variables. As a comparison, here are the timings when I don’t use the command-line method (in other words, I just use Float64 or Float32 for everything), followed by the command-line type-aliasing version:

$> julia myscript-64.jl 1000000 20
Method time: 4.471549937
$> julia myscript-cl.jl Float64 Float64 1000000 20
Method time: 23.7452088

Are there ways that I might be able to use a pre-processor of some kind, or maybe another method of doing this? I see in the metaprogramming docs that it might be possible to use a struct, but it seems rather tedious to have to bind every single function I might use for my type.

Thanks in advance for any tips!

Declare them as const, or pass them as arguments to your functions, which also makes the code much better because you don’t need two functions:

julia> init(n,T) = zeros(T,n,n)
init (generic function with 1 method)

julia> init(2,Float32)
2×2 Matrix{Float32}:
 0.0  0.0
 0.0  0.0

julia> init(2,Float64)
2×2 Matrix{Float64}:
 0.0  0.0
 0.0  0.0


It is more typical, though, that the types are derived from the type of the input data, defined by some initial value somewhere else. For example, if you were to do a matrix-vector multiplication, where your vector is the input and it is multiplied by a random matrix of the same element type, you could do:

julia> function f(x::AbstractVector)
           m = rand(eltype(x),length(x),length(x))
           return m*x
       end
f (generic function with 1 method)

julia> f(Float64[1.,2.,3.])
3-element Vector{Float64}:
 4.905931069551787
 3.091571412189738
 2.391551994266473

julia> f(Float32[1.,2.,3.])
3-element Vector{Float32}:
 3.374429
 3.7870452
 3.1641507
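A related sketch (not from the post above, names are illustrative): you can also capture the element type as a method type parameter, which keeps everything inside the function concretely typed without passing the type explicitly:

```julia
# Sketch: T is inferred from the input vector, so the matrix
# created inside has the same element type as x.
function scale_sum(x::AbstractVector{T}) where {T<:AbstractFloat}
    m = ones(T, length(x), length(x))  # matrix of the same element type as x
    return m * x                       # result is a Vector{T}
end

scale_sum(Float32[1, 2, 3])  # Vector{Float32}: [6.0f0, 6.0f0, 6.0f0]
```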



I think you are measuring compile time:

D:\Temp>julia test2.jl 10000
BenchmarkTools.Trial: 23 samples with 1 evaluation.
 Range (min … max):  129.072 ms … 331.770 ms  ┊ GC (min … max):  0.00% … 36.07%
 Time  (median):     223.699 ms               ┊ GC (median):    39.80%
 Time  (mean ± σ):   220.838 ms ±  62.594 ms  ┊ GC (mean ± σ):  28.47% ± 23.30%

   ▃          █                      █                        ▃
  ▇█▁▁▁▁▇▁▁▁▁▁█▁▇▁▁▁▁▇▇▇▁▁▁▁▁▁▇▁▁▁▁▇▇█▇▁▁▁▁▁▁▁▁▁▇▇▁▁▁▁▁▁▁▁▁▁▇▁█ ▁
  129 ms           Histogram: frequency by time          332 ms <

 Memory estimate: 762.94 MiB, allocs estimate: 2.

D:\Temp>julia test1.jl Float64 10000
BenchmarkTools.Trial: 23 samples with 1 evaluation.
 Range (min … max):  123.081 ms … 319.950 ms  ┊ GC (min … max):  0.00% … 43.44%
 Time  (median):     203.366 ms               ┊ GC (median):    32.62%
 Time  (mean ± σ):   217.781 ms ±  63.084 ms  ┊ GC (mean ± σ):  28.12% ± 23.13%

                                      █                       ▃
  ▇▁▁▇▇▇▁▁▁▁▇▁▁▇▁▇▁▁▁▇▇▇▇▁▇▁▁▁▁▁▁▇▁▁▁█▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▇▇▁▁▁▁▁█▇ ▁
  123 ms           Histogram: frequency by time          320 ms <

 Memory estimate: 762.94 MiB, allocs estimate: 4.

with nearly the same timings.

Julia 1.7.1, THIS MAY BE IMPORTANT HERE!

with test1.jl:

using BenchmarkTools

FloatA = eval(Meta.parse(ARGS[1]));
n=parse(Int,ARGS[2])

function init_a(n::Integer)::Matrix{FloatA}
    a = zeros(FloatA, n, n)
    #Initialize a
    return a
end

b = @benchmark init_a(n)
io = IOBuffer()
show(io, "text/plain", b)
s = String(take!(io))
println(s)

and test2.jl:

using BenchmarkTools

#FloatA = eval(Meta.parse(ARGS[1]));
n=parse(Int,ARGS[1])

function init_a(n::Integer)::Matrix{Float64}
    a = zeros(Float64, n, n)
    #Initialize a
    return a
end

b = @benchmark init_a(n)
io = IOBuffer()
show(io, "text/plain", b)
s = String(take!(io))
println(s)

Depending on your use case, it may still be a problem. If you need to call julia test.jl ... many times, you will suffer from compile time.

So the tips from @lmiq are good for you, or you could tell us more about the real problem you want to solve.


Thanks @oheil for taking a closer look. Indeed, the fact that the types are not constant probably doesn’t have a great effect here, since the type is only passed as an argument to the zeros function. Yet if the function continues doing something with those matrices, I think all of that will be type-unstable:

julia> function test(n)
           x = zeros(FloatA,n,n)
           y = zeros(FloatA,n)
           return x*y
       end
test (generic function with 1 method)

julia> FloatA = Float64
Float64

julia> @code_warntype test(3)
MethodInstance for test(::Int64)
  from test(n) in Main at REPL[11]:1
Arguments
  #self#::Core.Const(test)
  n::Int64
Locals
  y::Union{Vector, Matrix{Float64}}
  x::Union{Array{Float64, 3}, Matrix}
Body::Any
1 ─      (x = Main.zeros(Main.FloatA, n, n))
β”‚        (y = Main.zeros(Main.FloatA, n))
β”‚   %3 = (x * y)::Any
└──      return %3
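For comparison, a minimal sketch of the const variant (the alias name here is illustrative): with a constant binding the compiler sees a concrete type, and the same body infers cleanly instead of producing Any:

```julia
const ConstFloat = Float64  # constant binding: concrete type known at compile time

function test_const(n)
    x = zeros(ConstFloat, n, n)
    y = zeros(ConstFloat, n)
    return x * y  # inferred as Vector{Float64}; @code_warntype shows no Any
end

test_const(3)  # 3-element Vector{Float64} of zeros
```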



You should strive to avoid eval(Meta.parse(...)) when possible, since it allows execution of arbitrary code and is generally unsafe. A better pattern might be:

using Quadmath  # Float128 is provided by the Quadmath.jl package, not Base

const VALID_FLOAT_TYPES = Dict(string(v) => v for v in [Float32, Float64, Float128])

const FloatA = VALID_FLOAT_TYPES[ARGS[1]]
const FloatB = VALID_FLOAT_TYPES[ARGS[2]]
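As a self-contained extension of this pattern (the wrapper name is hypothetical), one could turn the bare KeyError from an unknown name into a clearer message. The sketch below restricts itself to Base types; Float128 would additionally need Quadmath.jl:

```julia
# Lookup table of accepted float types, keyed by their printed names.
const FLOAT_TYPES = Dict(string(T) => T for T in (Float16, Float32, Float64))

function float_type_from_arg(s::AbstractString)
    haskey(FLOAT_TYPES, s) ||
        error("unsupported float type \"$s\"; valid choices: $(join(keys(FLOAT_TYPES), ", "))")
    return FLOAT_TYPES[s]
end

float_type_from_arg("Float32")  # returns the Float32 type
```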

Thank you! Simply changing

FloatA = eval(Meta.parse(ARGS[1]));
FloatB = eval(Meta.parse(ARGS[2]));

to

const FloatA = eval(Meta.parse(ARGS[1]));
const FloatB = eval(Meta.parse(ARGS[2]));

changed the timings to be equivalent to not using command line arguments.

I don’t think I was factoring compile time into the evaluation (but I am new to the language so I might be doing something wrong). I’m timing by using

elapsed_time = @elapsed begin
    u = method_driver(nt, nx, ts, xs)
end;

We are trying to collect the runtime of the method only and not any of the boilerplate functions, and I chose this method of timing with the hopes that compile time would also be left out. I have been assuming that it isn’t factoring compile time because the elapsed times seem to be the same each time. Is this the right way to go about doing timings?

That is correct, except that you need to interpolate the variables and use the BenchmarkTools.jl package, i.e., use

using BenchmarkTools
elapsed_time = @belapsed begin
    u = method_driver($nt, $nx, $ts, $xs)
end;

I would also follow the suggestion of @stillyslalom, that pattern is much nicer.

Edit: sorry! I read @belapsed mistakenly (force of habit) and thought you were already using BenchmarkTools. So no, you should follow the advice below. (edited this post)

Because each method is compiled immediately before its first invocation, the first run within a given Julia instance will include compilation time:

julia> @time println(1//2)
1//2
  0.025033 seconds (15.65 k allocations: 887.980 KiB, 98.29% compilation time)

julia> @time println(1//2)
1//2
  0.000476 seconds (14 allocations: 448 bytes)

The simplest way to isolate runtime is to run a function once to warm it up (often with ‘small’ input arguments, say n = 10 instead of n = 10^6), then a second time with @time or @elapsed. You can get more statistically stable results from BenchmarkTools.jl, or a more detailed breakdown of runtime from TimerOutputs.jl.
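A concrete sketch of that warm-up pattern (the workload function here is made up for illustration, standing in for something like method_driver):

```julia
work(n) = sum(abs2, rand(n))  # toy workload

work(10)                  # first call: triggers compilation of work for Int
t = @elapsed work(10^6)   # second call: measures pure runtime, no compile overhead
```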

Awesome, I’ll look into some of those packages for timing. Thanks for the tips! I appreciate the help and advice.