No method matching `oneunit(::Type{Any})` when solving a matrix equation

I have a function that solves a matrix equation

u = A\b

Upon running the function, I receive the error

ERROR: MethodError: no method matching oneunit(::Type{Any})

What’s strange is that if I copy the values of A and b from the terminal via @show, i.e.

@show A = ...
@show b = ...

and then run u = A\b in a script, it executes without errors.

What am I doing wrong? What causes the error when the code runs inside the function?

Let me know if you would like the specific context via a standalone Julia script.

What’s the type of A and b?

typeof(A) = SymTridiagonal{Float64,Array{Float64,1}}
typeof(b) = Array{Any,1}

Well, there’s your problem: b can hold elements of quite literally any type, and there is no single definition of oneunit that works for all possible types. I don’t know how you generate b, but you’ll probably want to make sure it contains elements of a concrete type, in your case probably Float64.
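For illustration (not from the original posts), here is the failure in isolation: oneunit is well-defined for concrete numeric types, but there is no meaningful "unit" for Any, which is what the solver hits internally:

```julia
# oneunit(T) returns the multiplicative-identity "unit" of type T;
# it is only defined for types that actually have one.
oneunit(Float64)   # 1.0
oneunit(Int)       # 1

# For Any there is no such unit, so a MethodError is thrown:
err = try oneunit(Any); nothing; catch e; e; end
err isa MethodError   # true
```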

Thanks! That solved it.

b is constructed as the matrix product of a SymTridiagonal{Float64,Array{Float64,1}} and an Array{Real,1}, and the result was an array of Any. Should I think that’s strange?

Anyway, I just wrapped the definition of b with real, i.e.

b = real(...)

and it fixed it.
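For what it’s worth, the fix works because applying real element-wise lets broadcast compute a concrete result type, which narrows the container. A small sketch with made-up values (using the broadcast form real. rather than the real(...) used above):

```julia
b_any = Any[1.0, 2.0, 3.0]   # like the product A*v: eltype is Any
eltype(b_any)                # Any

b = real.(b_any)             # element-wise real part; broadcast narrows the eltype
eltype(b)                    # Float64
```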

Thanks for your help!

Maybe there’s a missing method? If you can share some more code to reproduce this deterministically, you might be able to file an issue to get this fixed (or you’re doing something else weird, who knows :p).

Okay, well here’s a simple reproduction:

julia> using LinearAlgebra

julia> A = SymTridiagonal([1., 2., 3.], [4., 5.])
3×3 SymTridiagonal{Float64,Array{Float64,1}}:
 1.0  4.0   ⋅ 
 4.0  2.0  5.0
  ⋅   5.0  3.0

julia> v = Vector{Real}(undef, 3)
3-element Array{Real,1}:
 #undef
 #undef
 #undef

julia> for n = 1:length(v)
           v[n] = rand(Float64)
       end

julia> typeof(A)
SymTridiagonal{Float64,Array{Float64,1}}

julia> typeof(v)
Array{Real,1}

julia> typeof(A*v)
Array{Any,1}

However, when initializing v as a vector of Float64s, the Any issue disappears.

julia> using LinearAlgebra

julia> A = SymTridiagonal([1., 2., 3.], [4., 5.])
3×3 SymTridiagonal{Float64,Array{Float64,1}}:
 1.0  4.0   ⋅
 4.0  2.0  5.0
  ⋅   5.0  3.0

julia> v = Vector{Float64}(undef, 3)
3-element Array{Float64,1}:
 1.902534155e-315
 8.06947933e-316
 0.0

julia> for n = 1:length(v)
           v[n] = rand(Float64)
       end

julia> typeof(A)
SymTridiagonal{Float64,Array{Float64,1}}

julia> typeof(v)
Array{Float64,1}

julia> typeof(A*v)
Array{Float64,1}

Real is an abstract type and all concrete subtypes of Real are these (expanded recursively using subtypes):

17-element Array{Any,1}:
 BigFloat
 Float16
 Float32
 Float64
 Irrational
 Bool
 UInt128
 UInt16
 UInt32
 UInt64
 UInt8
 BigInt
 Int128
 Int16
 Int32
 Int64
 Int8

Are you sure you want to have v be able to hold all of those?
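The recursive expansion mentioned above can be sketched like this (a rough version; the exact list depends on your Julia version and loaded packages):

```julia
using InteractiveUtils  # provides subtypes

# Collect the leaves of the type tree below T, i.e. the types that
# have no declared subtypes of their own.
leaves(T) = isempty(subtypes(T)) ? Any[T] : reduce(vcat, leaves.(subtypes(T)))

leaves(Real)   # BigFloat, Float16, Float32, Float64, ..., Int64, Int8, ...
```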

If all you want is a vector of Float64 of n elements, rand(n) is what you’re looking for:

julia> rand(4)
4-element Array{Float64,1}:
 0.18146125094318588
 0.5071153204876462
 0.43918306228459847
 0.5878162664116326

I see your point, and the answer is not necessarily yes, but considering things like optimization through type stability (and whatever else goes into efficient, healthy Julia code), I suppose the answer is no.

I had the (incorrect) mindset that if I initialize with Real then it would well handle anything I feed the vector elements.

Oh, in my original context there were specific values to assign to the vector in a loop (complicated enough that I couldn’t use comprehensions (or could I…)), and in my MWE above I used rand to grab random values, since the actual values didn’t matter.

You can always put the hard work of generating those values into a function and call that in the comprehension - in fact, it’ll probably work better than hard-coding Real since the compiler can infer exactly what types are actually needed :slight_smile: (as long as the generating function is somewhat type stable, of course)
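A sketch of that suggestion, with a hypothetical generate_value standing in for the complicated per-element work:

```julia
# Hypothetical helper: whatever complicated, per-index computation
# originally filled the vector (this formula is made up for illustration).
generate_value(n) = sqrt(n) + 1 / n

v = [generate_value(n) for n in 1:3]
eltype(v)   # Float64, inferred from the function's return type; no Real needed
```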


Nevertheless, I think the compiler should be able to infer Real here :thinking:


Ah, good point. That makes total sense.

That… is a genius idea. Why didn’t I think of that?

My original thoughts exactly. But let’s go with specific type specifications; it’s healthier for the code, I suppose. :slightly_smiling_face:

Thanks again @Sukera :smiley:


I’ve opened an issue.

Well… not if it unnecessarily restricts the type, like in your case :sweat_smile: If you change your mind or the return type of the function called in your comprehension changes, your code will recompile to fit those types and still be fast. If you keep it abstractly typed, you’ll incur performance overhead from having to actually check the runtime type of the elements.


Sorry you lost me for a bit there.

When you say

are you saying that the restriction is the matrix-vector product producing a vector of type Any?

No, that’s not what I mean :thinking: In fact, “restricts” was the wrong word; it’s more that writing Array{Real,1} unnecessarily widens the element type of the array. As a result of that, A*v (wrongly) infers to Any, making everything down the road slower. Since you’re (presumably) only storing a small subset of all possible concrete subtypes of Real at any one point, having the element type be Real instead of e.g. Union{Float64,Int64} (as it might be with the [ f(...) for _ in 1:n ] approach) limits optimization to operations that any Real supports. If it were Union{Float64,Int64}, the compiler would know that each element is 64 bits long and could make smarter decisions about how to make it fast. Otherwise it has to handle cases where an element is e.g. Int8, which is much shorter at only 8 bits.
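To make the contrast concrete, here is a minimal comparison of the two element types (a sketch; the point is what the container promises the compiler, and recent Julia versions can even store small isbits-Union eltypes inline, whereas a Vector{Real} holds pointers to boxed values):

```julia
wide = Real[1.0, 2, 3.0]                   # elements stay Float64 and Int64,
eltype(wide)                               # but the array only promises Real

small = Union{Float64,Int64}[1.0, 2, 3.0]  # exactly two possible concrete types
eltype(small)                              # Union{Float64, Int64}
```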


Ah, okay that makes sense.

I’ve subscribed to the issue you opened; I’m interested to see where this goes. But as far as I understand, what you’re suggesting totally makes sense: get the inference right so that we can continue writing code that is automatically type stable, instead of falling back to Any.

It could, but it’s a deliberate choice not to: https://github.com/JuliaLang/julia/pull/36208. In this case,

julia> methods(+, (Int, Real))
# 7 methods for generic function "+":
[1] +(x::T, y::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} in Base at int.jl:87
[2] +(c::Union{Int16, Int32, Int64, Int8}, x::BigInt) in Base.GMP at gmp.jl:534
[3] +(c::Union{Int16, Int32, Int64, Int8}, x::BigFloat) in Base.MPFR at mpfr.jl:385
[4] +(a::Integer, b::Integer) in Base at int.jl:919
[5] +(x::T, y::T) where T<:Number in Base at promotion.jl:394
[6] +(y::Integer, x::Rational) in Base at rational.jl:295
[7] +(x::Number, y::Number) in Base at promotion.jl:321

julia> Base.return_types(+, (Int, Real))
7-element Vector{Any}:
 Int64
 BigInt
 BigFloat
 Any
 Union{}
 Rational
 Any

Here there are 5 concrete and 2 abstract applicable methods, well over the limit of 3.


I guess I’ll stick to Float64ing or AbstractFloating instead of Realing then.

Thanks! :smiley:

I think that LinearAlgebra methods should work with any array type as long as the elements are valid numbers. Cf


Intuitively that’s what I initially thought would happen, with the philosophy that it would/should “just work”. But I also accept that there’s more going on within the internals, and will leave it to you guys to figure out a viable solution.

An alternative to

v = Vector{Real}(undef, 3)
for n = 1:length(v)
    v[n] = rand(Float64)
end

is

v = [rand(Float64) for n in 1:3]

(We’re assuming that rand(Float64) is a stand-in for some more complicated expression, possibly a function of n, etc.)

The latter syntax not only saves a bit of typing, it also gives a narrower element type for v. In fact, even if you do use the first syntax to generate v, you can re-generate it afterwards using

v = [v[i] for i in eachindex(v)]

and that will narrow down the element type.
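A quick check of that narrowing, using rand as the stand-in values again:

```julia
v = Vector{Real}(undef, 3)
for n in 1:length(v)
    v[n] = rand(Float64)
end
eltype(v)                         # Real, even though every element is a Float64

w = [v[i] for i in eachindex(v)]  # re-collecting inspects the runtime types
eltype(w)                         # Float64
```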
