Splatting a vector into hypot

The following code takes an extremely long time to compile on my computer:

x = randn(10000);
hypot(x...) # 220.104289 seconds (66.32 M allocations: 5.422 GiB, 0.88% gc time, 100.00% compilation time)
hypot(x...) # 0.001436 seconds (60.03 k allocations: 2.290 MiB)

I’d imagine that splatting such a large vector causes issues. I’d like to clarify: is this intended behaviour? Is the underlying algorithm for hypot only a good idea for small vectors regardless, or does the fact that we have to splat prevent the use of hypot on larger vectors even though it would be a good idea to do so otherwise?

In any case, perhaps some clarification should be added to the Julia docs: as a new Julia user looking for a way to find the L_2 norm of a vector, I simply saw hypot(x...) and ran with it, which led to code with a very long compilation time that crashed for very large vectors.


This is not only a problem of hypot but a more general one. (Don’t) try this one:

x = randn(10000)
y = (x...,)

Seems like type inference for NTuple{10000, Float64} is a tough call.

Edit: it is not only type inference, but compilation itself. It seems we are generating specializations for n in 1:10000. Issue: https://github.com/JuliaLang/julia/issues/35619
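A small illustration of why this is expensive: every tuple length is its own type, so splatting a length-N vector forces inference and compilation for that specific N (tiny N is used here so the example stays cheap):

```julia
# Each distinct tuple length is a distinct type, so the compiler
# specializes code separately for every length it encounters.
x3 = (randn(3)...,)   # splat a 3-element vector into a tuple
x4 = (randn(4)...,)   # splat a 4-element vector into a tuple

@assert typeof(x3) === NTuple{3, Float64}
@assert typeof(x4) === NTuple{4, Float64}
@assert typeof(x3) !== typeof(x4)  # different lengths => different types => separate specializations
```

With a length of 10000, the compiler ends up working on NTuple{10000, Float64}, which is where the time goes.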


From base/math.jl:

hypot(x::Number, y::Number, xs::Number...) = _hypot(float.(promote(x, y, xs...))...)

The problematic part here seems to be the float.(...) broadcast. But you can work around it with:

julia> Base.Math._hypot(x...)
100.39290077763592

when you know that x is a Vector{Float64} (or another array of floats). But it’s probably not recommended.

I think it’s worth opening an issue.


You might be looking for LinearAlgebra.norm, which arguably should be linked from the hypot help.
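For reference, norm takes the vector directly, so there is no splatting and no per-length compilation at all:

```julia
using LinearAlgebra

x = randn(10_000)

# norm(x) computes the Euclidean (L2) norm by default,
# operating on the array itself rather than on 10000 arguments.
n = norm(x)
@assert n ≈ sqrt(sum(abs2, x))
```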

Is the underlying algorithm for hypot only a good idea for small vectors

For more than two arguments, it isn’t doing anything very exotic: https://github.com/JuliaLang/julia/blob/1389c2fc4af952f5c8b9759cf6fe633995b523f9/base/math.jl#L708-L717
