My program precomputes some matrices, then operates on them. During the computation, it may turn out that some of the matrices are `Diagonal`, etc. However, I do not know the structure ahead of time, and once precomputed, the matrices are operated on iteratively. Currently I have a type that looks like:
```julia
struct MyModel
    mats::Vector{AbstractMatrix{Float64}}
end

model = MyModel([Diagonal([1.0; 2.0]), [1.0 1.0; 1.0 1.0]])
```
and I have a function that looks like (simplified for brevity):
```julia
solve(m::MyModel) = m.mats[1] * m.mats[2]
```
The output type can’t be inferred in this case, so the recommended recourse is to use a function barrier:
```julia
solve(m::MyModel) = solve(m.mats[1], m.mats[2])
solve(m1::AbstractMatrix, m2::AbstractMatrix) = m1 * m2
```
However, with this formulation of `MyModel`, there is no way to run `solve` on the top-level model while getting a type-stable output.
Is this good enough? I’m wondering if there is a better way to set up `MyModel` so that there are no abstract types in its definition, but I can still be fully expressive. What is the typical way to solve this problem? Would a `Union` of the types I expect help?
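For concreteness, the `Union` idea would look something like the sketch below (`MyModelU`, `MatOrDiag`, and `solveu` are hypothetical names, not from the original code). Julia can union-split over small `Union`s, so indexing into `mats` stays inferable as the union of the two element types:

```julia
using LinearAlgebra

# Hypothetical variant: a small Union element type instead of AbstractMatrix{Float64}.
const MatOrDiag = Union{Matrix{Float64}, Diagonal{Float64, Vector{Float64}}}

struct MyModelU
    mats::Vector{MatOrDiag}
end

# Same example data as above.
modelu = MyModelU([Diagonal([1.0; 2.0]), [1.0 1.0; 1.0 1.0]])

# The compiler can union-split m.mats[i] over the two concrete possibilities.
solveu(m::MyModelU) = m.mats[1] * m.mats[2]
```

The trade-off is that every matrix type you might encounter has to be listed up front in the `Union`.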
If the matrices are big enough that storing them sparsely is faster than storing them densely (i.e. than forcibly converting them all to `Matrix{Float64}` in the `mats` vector), then you probably don’t need to worry about the speed of dispatch. I would write whatever is clearer / easier to write, then use `@profile` (from the stdlib `Profile` package) to see whether it’s a bottleneck, before trying to optimize.
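A minimal profiling sketch along those lines (the `work` function is a made-up stand-in for the real model computation):

```julia
using Profile, LinearAlgebra

# Hypothetical workload: repeated structured-times-dense multiplications.
work() = sum(Diagonal(rand(100)) * rand(100, 100) for _ in 1:100)

work()            # run once first so compilation doesn't dominate the profile
@profile work()   # collect samples
Profile.print()   # if dynamic dispatch mattered, it would show up in this report
</imports>
```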
You could do something like this:

```julia
using LinearAlgebra
import Base: push!

struct MyModel{T<:Number}
    mats::Vector{Matrix{T}}
    diag_mats::Vector{Diagonal{T}}
end

MyModel() = MyModel(Vector{Matrix{Float64}}(), Vector{Diagonal{Float64}}())

push!(m::MyModel, mat::Matrix{T}) where {T<:Number} = push!(m.mats, mat)
push!(m::MyModel, mat::Diagonal{T}) where {T<:Number} = push!(m.diag_mats, mat)

model = MyModel()
push!(model, Diagonal([1.0; 2.0]))
push!(model, [1.0 1.0; 1.0 1.0])

solve(m::MyModel) = m.mats[1] * m.diag_mats[1]
```
But I don’t know if you could then write your `solve` function in a nice way. You could use metaprogramming to deal with a larger number of matrix types.
It is not clear to me:

- whether you have two matrices, or that was just for the MWE and you actually have more, possibly a large number,
- whether all of them are of the same type.
If they are the same type, you can use a `T <: AbstractVector{<:AbstractMatrix}` for `mats`, which could ideally be a concrete type. This can cope with a large number of matrices.
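A sketch of that approach (`MyModelP` and `solvep` are hypothetical names): parameterize the struct on the matrix type, so the vector's element type is concrete whenever all matrices share one type.

```julia
using LinearAlgebra

# Hypothetical: parameterize on the matrix type M instead of hard-coding
# AbstractMatrix{Float64}, so `mats` is concretely typed when possible.
struct MyModelP{M<:AbstractMatrix}
    mats::Vector{M}
end

# All Diagonal: mats is a Vector{Diagonal{Float64, Vector{Float64}}},
# which is concrete, so indexing and multiplication infer.
mp = MyModelP([Diagonal([1.0; 2.0]), Diagonal([3.0; 4.0])])

solvep(m::MyModelP) = m.mats[1] * m.mats[2]
```

If you construct it with mixed matrix types, `M` widens to an abstract type and you are back to dynamic dispatch, so this only buys anything in the homogeneous case.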
If they are of heterogeneous types, but there are not too many of them, you could use a `Tuple`.
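For example, something like this sketch (`MyModelT` and `solvet` are hypothetical names): a `Tuple` field records the concrete type of each element, so indexing with a constant index is inferable even with mixed matrix types.

```julia
using LinearAlgebra

# Hypothetical: the Tuple type parameter carries each matrix's concrete type,
# e.g. Tuple{Diagonal{Float64, Vector{Float64}}, Matrix{Float64}}.
struct MyModelT{T<:Tuple}
    mats::T
end

mt = MyModelT((Diagonal([1.0; 2.0]), [1.0 1.0; 1.0 1.0]))

# m.mats[1] and m.mats[2] have known concrete types, so this infers.
solvet(m::MyModelT) = m.mats[1] * m.mats[2]
```

The usual caveat is that tuples don't scale to many elements: each distinct combination of matrix types triggers a fresh specialization.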
If `solve` is inherently not type stable, you could perhaps break it up into functions so that some components are. But I second @jameson’s point: not everything can and needs to be type stable to be fast. Occasional high-level dynamic dispatch is very useful; e.g. look at the implementation of `\` for matrices. Did you actually benchmark your code and establish that dynamic dispatch is a bottleneck?
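A quick-and-dirty way to check that last question (all names below are made up for illustration; the BenchmarkTools package's `@btime` would give more reliable numbers than `@elapsed`):

```julia
using LinearAlgebra

# Same dense matrices, stored behind an abstract vs. a concrete element type.
abstract_mats = AbstractMatrix{Float64}[rand(200, 200), rand(200, 200)]
concrete_mats = Matrix{Float64}[rand(200, 200), rand(200, 200)]

mul(v) = v[1] * v[2]
mul(abstract_mats); mul(concrete_mats)   # warm up / compile both paths

t_dyn = @elapsed for _ in 1:100; mul(abstract_mats); end   # dynamic dispatch
t_con = @elapsed for _ in 1:100; mul(concrete_mats); end   # static dispatch
# For 200x200 matrices the O(n^3) multiply dwarfs the per-call dispatch cost,
# so the two timings should be close.
```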
Thanks for the replies, everyone; all the responses helped clarify the possible choices I have. I will continue using whatever writes nicest for now (the high-level dynamic dispatch), then profile the code and test both a dense representation and something like what @jonathanBieler wrote.