"Best practice" for constant expressions (1/2 vs. 1.0/2.0)?

Hi all,

What do you recommend for constant expressions in statements, e.g. with regard to performance? Examples:

a = 1/2 * sin(2/3 * x), or better 1.0/2.0 * sin(2.0/3.0 * x), or even 1//2 * sin(2//3 * x)?

BTW: I don’t want to write 0.5 * sin(0.6666666 * x) or anything like that; I’d rather keep the code similar to the mathematical formula. So I would prefer the first variant.

From my Fortran days I was used to avoiding implicit conversions: if the result (variable a) was double precision (aka Float64), it was written as 1.d0/2.d0 * sin(…) rather than 1/2 * sin(…).

Any recommendations? Any performance differences (of course not for a single statement, but if you have thousands of such terms it may matter)? I’d like to settle on a final (personal) “style” for re-implementing some projects in Julia (coming from Fortran).

Cheers

Pretty easy to do some tests and find out:

julia> f1(x) = 1/2 * sin(2/3 * x)
f1 (generic function with 1 method)

julia> f2(x) = 1.0/2.0 * sin(2.0/3.0 * x)
f2 (generic function with 1 method)

julia> f3(x) = 1//2 * sin(2//3 * x)
f3 (generic function with 1 method)

julia> using BenchmarkTools

julia> @btime f1(0.5);
  1.989 ns (0 allocations: 0 bytes)

julia> @btime f2(0.5);
  2.288 ns (0 allocations: 0 bytes)

julia> @btime f3(0.5);
  57.565 ns (0 allocations: 0 bytes)

julia> x = rand(10000);

julia> @btime f1.($x);
  89.042 μs (2 allocations: 78.20 KiB)

julia> @btime f2.($x);
  88.785 μs (2 allocations: 78.20 KiB)

julia> @btime f3.($x);
  507.082 μs (2 allocations: 78.20 KiB)

So it looks like the only hard-and-fast rule is not to use rational numbers. That makes sense, because there are no hardware instructions for multiplying rationals.
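One way to keep the // notation without paying for rational arithmetic at runtime is to hoist the constants out of the function yourself. This is just a sketch; the names C1, C2, and f3b are made up for illustration:

```julia
# Convert the rational constants to Float64 once, at definition time,
# so the hot path only does floating-point multiplication.
const C1 = Float64(1//2)   # 0.5
const C2 = Float64(2//3)   # 0.6666666666666666

f3b(x) = C1 * sin(C2 * x)  # no Rational arithmetic at runtime
```

This gives the same machine code as the 1.0/2.0 variant while keeping the exact rational values visible at the definition site.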

The problem here is that not all of the functions used by Rational arithmetic are getting inlined.

I tried adding some @inline annotations to the source code of Rational, and then we get:

julia> f2(x) = 1.0/2.0 * sin(2.0/3.0 * x)
f2 (generic function with 1 method)

julia> f3(x) = 1//2 * sin(2//3 * x)
f3 (generic function with 1 method)

julia> @code_llvm f2(2.0)
; Function f2
  %1 = fmul double %0, 0x3FE5555555555555
  %2 = call double @julia_sin_215390639(double %1)
  %3 = fmul double %2, 5.000000e-01
  ret double %3
}

julia> @code_llvm f3(2.0)
; Function f3
  %1 = fmul double %0, 0x3FE5555555555555
  %2 = call double @julia_sin_215390639(double %1)
  %3 = fmul double %2, 5.000000e-01
  ret double %3
}

so it might be worth adding those @inlines to Julia so we can write literals with // for free.


I tend to write it like this:

f4(x) = sin(2x / 3) / 2

This makes it preserve (some) types a bit better than f1:

jl> x = 0.5f0
0.5f0

jl> sin(x)
0.47942555f0

jl> f1(x)
0.1635973483980761

jl> f4(x)
0.16359736f0

Wherever I can, I try to use integer literals.


Good point. Besides performance, an important (to me, even more important) aspect is precision and preserving types, or at least not doing something unexpected. Indeed we get different results:

julia> x = 0.5f0
0.5f0

julia> 2x/3
0.33333334f0

julia> 2*x/3
0.33333334f0

julia> 2/3*x
0.3333333333333333

Did you @inline any functions in Rational.jl other than those using div, gcd or divgcd and the ^ with an Int?

Also the constructor IIRC. I just looked at @code_typed and inlined whatever was being called until nothing was called :).

Unfortunately, this performs the expensive divisions at runtime. What you want is something like the following, only less ugly:

f5(x::T) where T = (TT = float(T); TT(1/2) * sin(TT(2/3) * x))

That is, you want to preserve single and double precision, you want to give the compiler the liberty to replace division by multiplication with compile-time computed reciprocals, you also want to play nicely with AD types, and you want your code to be readable :frowning:

No, that computes 2/3 in double precision and then converts to TT, which loses accuracy if TT has greater precision (e.g. BigFloat). You need TT(2)/TT(3) or similar.
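Following that correction, the constants can be computed in the target type directly. A sketch (f5b is a hypothetical name, not from the thread):

```julia
# Compute both constants in the floating-point version of the argument's
# type, so Float32 stays Float32 and BigFloat keeps its full precision.
f5b(x::T) where {T} = (TT = float(T); (TT(1) / TT(2)) * sin((TT(2) / TT(3)) * x))
```

For example, f5b(0.5f0) returns a Float32, and f5b(big"0.5") computes 2/3 in BigFloat precision rather than rounding through Float64.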


Some of the rational algorithms are fairly complicated (requiring a gcd etcetera), so it seems suboptimal to tie constant propagation to inlining. It seems like it would be better if we could mark // as pure so it can be evaluated at compile time.


Yeah, I thought the same.

We don’t have a way right now to express pure in the common sense of no side effects. The @pure we have is too strong, and it is rarely used correctly. Also, annotating everything with @pure seems like a hopeless endeavour to get right with good enough coverage; something like POC/WIP: leverage LLVM function attributes for better CSE by jrevels · Pull Request #29566 · JuliaLang/julia · GitHub seems more reasonable.
