Type conversions with 2π (2pi) and how to avoid them


Quite often I write expressions such as:

julia> x = randn(Float32, (2, 2));

julia> typeof(pi)
Irrational{:π}

julia> x ./ π # all good
2×2 Matrix{Float32}:
 -0.22765    -0.10363
  0.0520012   0.129431

julia> x ./ (2π) # Type change!
2×2 Matrix{Float64}:
 -0.113825   -0.0518152
  0.0260006   0.0647153

julia> x ./ (eltype(x)(2π))  # workaround
2×2 Matrix{Float32}:
 -0.113825   -0.0518152
  0.0260006   0.0647153

The last expression silently implies a type conversion, since the denominator has typeof(2pi) === Float64.
(I’m aware of Tau.jl and IrrationalConstants.jl, but I find it rather inconvenient to import a package just for this.)

We discussed this a little on Slack: if one defined 2π as an Irrational, one would also be tempted to define 4/3π and many more. The question is: where do we stop?

So what I usually do to avoid the type issues with π (or 2π) is to wrap it in a type parameter:

function f(x::Matrix{T}) where T
    return x ./ T(2π)
end
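For completeness, a self-contained check that this pattern preserves the element type (repeating the definition so the snippet runs on its own):

```julia
# Wrap 2π in the input's element type so nothing promotes to Float64.
function f(x::Matrix{T}) where T
    return x ./ T(2π)
end

x32 = randn(Float32, 2, 2)
eltype(f(x32))  # Float32
```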

I feel like we should mention this somewhere in the Performance Tips, since it is a common pitfall: people work with Float32 and suddenly everything gets converted to Float64.
What do you think?




One hazard with this approach is that it will error when eltype(x) is an Integer or Rational type, and it will lose precision for BigFloat due to the intermediate Float64. A more generic workaround is x ./ (convert(eltype(x), 2)*pi), or to use float(eltype(x)) instead of eltype(x) as the target type.
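To illustrate the difference between the two target-type choices, a small sketch:

```julia
xi = [1 2; 3 4]   # Int matrix
# eltype(xi)(2π) would throw an InexactError, but float(eltype(xi)) works:
xi ./ (convert(float(eltype(xi)), 2) * pi)   # Matrix{Float64}

xb = big.([1.0 2.0; 3.0 4.0])   # BigFloat matrix
# pi promotes to BigFloat here, so no Float64 round-off is introduced:
xb ./ (convert(eltype(xb), 2) * pi)          # Matrix{BigFloat}
```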

Unfortunately, to really understand or mitigate this issue, one needs familiarity with Julia’s promotion system. It’s not the easiest thing to fit in a footnote, but perhaps someone can find a way. In any case, I think this fits more closely with the promotion system docs than with the Performance Tips. Using Float64 isn’t a serious performance trap (worst case it promotes an operation that would otherwise SIMD, and you lose 2x performance), but it is important to somebody who cares a lot about the types involved in their computations.

That said, most people savvy enough to care to work in Float32 can probably figure out one of the many ways around this for their specific case.


The precision loss can be prevented by only wrapping pi, can’t it?

And for Integer I think it’s not a big deal, since algorithms involving division or multiplication by pi probably don’t operate on integers anyway. But good point about Rationals.

julia> BigFloat(2)*pi - 2pi

pi is not a Float64 but rather an Irrational type (although Irrational commonly promotes to Float64). The promotion system sees you multiplying 2.0::BigFloat by pi::Irrational and promotes pi to BigFloat, thus giving it the extended precision.

But I will certainly concede that 2*convert(float(eltype(x)), pi) may be more intuitive. Even here, 2*x would promote a narrower Integer to Int, which is avoided in our case only because our x is definitely not an integer type.
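A quick illustration of how this promotion plays out (outputs noted in comments):

```julia
typeof(pi)               # Irrational{:π}
typeof(2pi)              # Float64: Int * Irrational defaults to Float64
typeof(Float32(2) * pi)  # Float32: Irrational adapts to a float operand
typeof(BigFloat(2) * pi) # BigFloat, computed at full BigFloat precision
```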

Sorry if this is not quite the point, but Julia’s manual says we can redefine pi. However, starting a fresh session (and without a startup.jl), I can’t do it:

pi = Float32(pi)      # throws error in Julia 1.7.3

which, if it worked would allow getting:

julia> typeof(2pi)
Float32

That’s because you’re using the existing constant first, just like in the next example directly below:

However, if you try to redefine a built-in constant or function already in use, Julia will give you an error:

julia> pi
π = 3.1415926535897...

julia> pi = 3
ERROR: cannot assign a value to variable MathConstants.pi from module Main

Thanks. It makes sense, but it is frustrating…

This isn’t really designed for REPL behavior. The real point here is that you can redefine something normally exported from Base so long as you haven’t already used it within that scope.

julia> pi
π = 3.1415926535897...

julia> function foo()
           pi = 3 # local pi has no relation to the one in `Main` aka the REPL
           return pi
       end
foo (generic function with 1 method)

julia> foo()
3

julia> pi
π = 3.1415926535897...

It occurs to me that we could define τ and make 2π return τ. This is a very special case, though, and it would not work for any other coefficient; moreover, it’s probably still a bad idea since it would make x*π type-unstable in general.

You could just return an instance of a new AbstractIrrational type from *(::Integer, ::Irrational), representing an exact multiple, which could be type-stable. IrrationalExpressions.jl is a more complicated attempt at this.

See also the discussion in this issue.

(Doing this in a package requires type piracy if you want to support the built-in Irrational type, however, making this problematic to implement outside of Base. It’s also problematic from a backwards-compatibility perspective because e.g. f(2π) may suddenly throw method errors for functions f that don’t accept arbitrary Real arguments.)
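For the curious, a minimal non-pirating sketch of the idea. The type name IrrationalMultiple and the constant twoπ are made up here; a real implementation would need more methods, and hooking it into 2π itself would require the type piracy mentioned above:

```julia
# Sketch: an exact integer multiple of an irrational constant.
struct IrrationalMultiple{S} <: AbstractIrrational
    n::Int
end

# Compute n * constant at the target precision, so BigFloat users do not
# go through an intermediate Float64.
Base.BigFloat(x::IrrationalMultiple{S}) where {S} = x.n * BigFloat(Irrational{S}())
Base.Float64(x::IrrationalMultiple{S}) where {S} = Float64(BigFloat(x))
Base.Float32(x::IrrationalMultiple{S}) where {S} = Float32(BigFloat(x))

const twoπ = IrrationalMultiple{:π}(2)

eltype(randn(Float32, 2, 2) ./ twoπ)  # Float32: adapts like pi itself does
```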


I’m aware of the package, but this would be a motivation for the definition being in Base.

pi = Float32(MathConstants.pi)

I think such solutions are kind of dangerous, since they involve precision loss and don’t solve the issue if you work with Float16 on a GPU, for example.

I agree it’s inconvenient. I put using Tau in my startup.jl for this reason.

Putting tau in Base doesn’t seem like a bad idea though.

I see at least two problems.

Why do you insist on parentheses? (Not a problem if you find them clearer, but it would be if Julia had different precedence.) They have no effect when juxtaposition is used; you get the precedence you would want from math, the same as:

julia> x ./ 2π  # and even x / 2π without the dot
2×2 Matrix{Float64}:
 -0.19802    0.208875
 -0.0571671  0.229319

but different from:

julia> x ./ 2*π  # equivalent to (x ./ 2)*π, likely why you used them out of habit.
2×2 Matrix{Float32}:
 -1.95438   2.06152
 -0.564216  2.26329

but note that at least you got the right type (and would have, for whatever constant multiplies the pi). So you could do:

julia> x / 2 * 1/π  # just less clear, but good to know... and the type is correct:
2×2 Matrix{Float32}:
 -0.19802    0.208875
 -0.0571671  0.229319
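The precedence can also be checked directly on the parser; a quick sanity check, with the exact Float64 results in comments:

```julia
Meta.parse("1 / 2π")   # juxtaposition binds tighter: 1 / (2π)
Meta.parse("1 / 2*π")  # explicit * ties with /: (1 / 2) * π

1 / 2π   # 0.15915494309189535, i.e. 1/(2π)
1 / 2*π  # 1.5707963267948966, i.e. π/2
```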

Would there be a possibility for Julia to interpret x / 2π as the semantically (in math) equivalent (x / 2) * (1/π) = 0.5x/π from above, or rather, since that gives Float64, as 0.5f0x/π, i.e. multiplying x by 1/(2π) precomputed in Float32?

@code_native c/2 (for a Float32 scalar c) gives me vdivss on a brand-new 1.9.0-DEV.1078, but should preferably give a multiply instruction.

Which brings me to the second problem: the division(s) got me thinking, is this slow? And timing it is now non-trivial (here for 200×200):

julia> @time x_large / 2 * 1/π;
  0.000133 seconds (6 allocations: 468.891 KiB)

julia> @time x_large / 2 * 1/π;
  0.000330 seconds (6 allocations: 468.891 KiB)

julia> @time x_large / 2 * 1/π;
  0.000469 seconds (6 allocations: 468.891 KiB)

The next 10 timings were as slow, never as fast as the first one, so I suspected load from the web browser and killed it (well, actually suspended it):

julia> @time x_large / 2 * 1/π;
  0.010223 seconds (6 allocations: 468.891 KiB, 95.40% gc time)

I had just “killed” Firefox before that excessive timing above; I guess the GC run is a coincidence, not induced by killing Firefox.

Time went down for a while, then up again:

julia> @time x_large / 2 * 1/π;
  0.000128 seconds (6 allocations: 468.891 KiB)

julia> @time x_large / 2 * 1/π;
  0.000128 seconds (6 allocations: 468.891 KiB)

julia> @time x_large / 2 * 1/π;
  0.000126 seconds (6 allocations: 468.891 KiB)

julia> @time x_large / 2 * 1/π;
  0.000402 seconds (6 allocations: 468.891 KiB)

julia> @time x_large / 2 * 1/π;
  0.000425 seconds (6 allocations: 468.891 KiB)
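Side note: @time at global scope like this is dominated by allocation and GC noise, which explains the fluctuations. BenchmarkTools.jl (a registered package, assumed installed here) interpolates globals and reports stable statistics:

```julia
using BenchmarkTools

x_large = randn(Float32, 200, 200)
# Three temporary arrays: (x/2), then *1, then /π.
@btime $x_large / 2 * 1/π;
# One fused broadcast, a single output array, and no Float64 promotion:
@btime $x_large .* Float32(1/(2π));
```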

A better way:

julia> two_over_pi = Float32(1/2π)
julia> @time x_large .*= two_over_pi;
  0.000070 seconds (2 allocations: 64 bytes)
julia> GC.gc(true)  # seems to guarantee I see minimal time after.
julia> @code_native @inbounds x_large / 2 * 1/π;  # output is very long, and even longer without @inbounds, but similarly (though misleadingly) much shorter (only one page) with:

julia> f(x_large) = @inbounds x_large / 2 * 1/π

julia> @code_native f(x_large)

because I see three callq and no div or mul, so the code is likely still large, just elsewhere. Timing is the same, so in this case it is maybe NOT better to wrap things in a function (as is usually advised), at least for inspecting the code.

Does anyone know if broadcasting implies using threads? Would that always be in your hands? It could easily be fixed with a loop, but that didn’t work, with or without a dot:

julia> @time @Threads.threads x_large .*= two_over_pi;
ERROR: LoadError: ArgumentError: @threads requires a `for` loop expression
julia> f!(x_large) = @inbounds x_large .*= Float32(1/2π)
f! (generic function with 1 method)

julia> @time f!(x_large);
  0.000052 seconds

That gives the longest assembly, but all of it is there: no call, and only mul, no div, unless I missed something.
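To answer the threading question: plain broadcasting does not use threads, and Threads.@threads needs an explicit for loop. A minimal sketch of an in-place threaded scaling (the function name scale! is made up; start Julia with -t auto for it to actually use multiple threads):

```julia
function scale!(x::AbstractArray{T}, c) where T
    cT = convert(T, c)  # convert once, outside the loop
    Threads.@threads for i in eachindex(x)
        @inbounds x[i] *= cT
    end
    return x
end

x_large = randn(Float32, 200, 200)
eltype(scale!(x_large, 1/(2π)))  # still Float32
```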