I’m playing around with linear programming using Tulip.jl and BigFloat, and I wanted to decrease the time necessary for finding each LP solution, so it seemed like JET.jl might be helpful to investigate possible performance issues. No issues are reported for my code, but there seem to be some in Tulip.
Two lines in particular that are flagged by JET.jl for runtime dispatch confuse me.
These two functions are at the end of Tulip’s src/KKT/LDLFactorizations/ldlfact.jl file:
function update!(kkt::LDLFactSolver{T,K2}, θ, regP, regD) where{T}
    m, n = kkt.m, kkt.n

    # Sanity checks
    length(θ) == n || throw(DimensionMismatch(
        "length(θ)=$(length(θ)) but KKT solver has n=$n."
    ))
    length(regP) == n || throw(DimensionMismatch(
        "length(regP)=$(length(regP)) but KKT solver has n=$n"
    ))
    length(regD) == m || throw(DimensionMismatch(
        "length(regD)=$(length(regD)) but KKT solver has m=$m"
    ))

    copyto!(kkt.θ, θ)
    copyto!(kkt.regP, regP)
    copyto!(kkt.regD, regD)

    # Update KKT matrix
    # K is stored as upper-triangular, and only its diagonal is changed
    @inbounds for j in 1:kkt.n
        k = kkt.K.colptr[1+j] - 1
        kkt.K.nzval[k] = -kkt.θ[j] - regP[j]
    end
    @inbounds for i in 1:kkt.m
        k = kkt.K.colptr[1+kkt.n+i] - 1
        kkt.K.nzval[k] = regD[i]
    end

    # Update factorization
    try
        LDLF.ldl_factorize!(Symmetric(kkt.K), kkt.F)
    catch err
        isa(err, LDLF.SQDException) && throw(PosDefException(-1))
        rethrow(err)
    end

    return nothing
end
function solve!(dx, dy, kkt::LDLFactSolver{T,K2}, ξp, ξd) where{T}
    m, n = kkt.m, kkt.n

    # Setup right-hand side
    @views copyto!(kkt.ξ[1:n], ξd)
    @views copyto!(kkt.ξ[(n+1):end], ξp)

    # Solve augmented system
    # CHOLMOD doesn't have in-place solve, so this line will allocate
    LDLF.ldiv!(kkt.F, kkt.ξ)

    # Recover dx, dy
    @views copyto!(dx, kkt.ξ[1:n])
    @views copyto!(dy, kkt.ξ[(n+1):end])

    # TODO: iterative refinement
    return nothing
end
JET detects runtime dispatch for the LDLF.ldiv! call and for the LDLF.ldl_factorize! call:
││┌ @ /home/nsajko/.julia/environments/v1.8/dev/Tulip/src/KKT/LDLFactorizations/ldlfact.jl:112 %228(%242, %245)
│││ runtime dispatch detected: %228::typeof(LDLFactorizations.ldl_factorize!)(%242::LinearAlgebra.Symmetric{BigFloat, SparseArrays.SparseMatrixCSC{BigFloat, Int64}}, %245::LDLFactorizations.LDLFactorization{BigFloat})
││└────────────────────────────────────────────────────────────────────────────────────────────
││┌ @ /home/nsajko/.julia/environments/v1.8/dev/Tulip/src/KKT/LDLFactorizations/ldlfact.jl:126 %143(%144, %145)
│││ runtime dispatch detected: %143::typeof(LinearAlgebra.ldiv!)(%144::LDLFactorizations.LDLFactorization{BigFloat}, %145::Vector{BigFloat})
││└────────────────────────────────────────────────────────────────────────────────────────────
Two things confuse me here:
- Why does JET say %143::typeof(LinearAlgebra.ldiv!) instead of just LinearAlgebra.ldiv!? Likewise with ldl_factorize!.
- Why is there runtime dispatch at all? The types that JET indicates for the call are neither abstract (I think) nor unions; a quick way to double-check this is sketched right after this list.
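For what it's worth, here is a minimal sketch of how I'd double-check that claim in the REPL. I'm feeding isconcretetype the types exactly as JET printed them, so the single type parameter on LDLFactorization is just what JET displayed, not something I verified against the package's own definition:

using LinearAlgebra, SparseArrays, LDLFactorizations

# Every Julia function has its own singleton type, so this one should be concrete.
isconcretetype(typeof(LinearAlgebra.ldiv!))
# The argument types as printed in the JET report above:
isconcretetype(LinearAlgebra.Symmetric{BigFloat, SparseArrays.SparseMatrixCSC{BigFloat, Int64}})
isconcretetype(LDLFactorizations.LDLFactorization{BigFloat})
isconcretetype(Vector{BigFloat})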
How to reproduce this (if someone is interested)
The function I was interested in optimizing is ipm_optimize! in IPM/HSD/HSD.jl, with this type signature: ipm_optimize!(hsd::HSD{T}, params::IPMOptions{T}) where{T}. To run JET successfully I needed the exact types of the arguments of ipm_optimize! when called by my code, so I put something like this at the beginning of the function:
println(typeof(hsd))
println(typeof(params))
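For context, the solve that ends up hitting these prints is driven by something like the following. This is a rough sketch from memory, so the Tulip entry points (Tulip.Model, Tulip.load_problem!, Tulip.optimize!) are my recollection of the low-level API and the MPS file is a placeholder for my actual problem:

using Tulip

model = Tulip.Model{BigFloat}()               # BigFloat arithmetic throughout
Tulip.load_problem!(model, "myproblem.mps")   # placeholder for my actual LP
Tulip.optimize!(model)                        # eventually reaches ipm_optimize!, which prints the types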
Then I used this information to define these two type alias constants:
const HSD = Tulip.HSD{BigFloat, Vector{BigFloat}, BitVector, SparseArrays.SparseMatrixCSC{BigFloat, Int64}, Tulip.KKT.TlpLDLFactorizations.LDLFactSolver{BigFloat, Tulip.KKT.K2}}
const PAR = Tulip.IPMOptions{BigFloat}
Then it’s possible to call JET.report_opt(Tulip.ipm_optimize!, Tuple{HSD, PAR}) (I think that’s the correct way to do it).
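For convenience, here is the whole reproduction as one self-contained script (assuming the locally modified Tulip is dev'ed into the active environment):

import JET, SparseArrays, Tulip

const HSD = Tulip.HSD{BigFloat, Vector{BigFloat}, BitVector,
    SparseArrays.SparseMatrixCSC{BigFloat, Int64},
    Tulip.KKT.TlpLDLFactorizations.LDLFactSolver{BigFloat, Tulip.KKT.K2}}
const PAR = Tulip.IPMOptions{BigFloat}

JET.report_opt(Tulip.ipm_optimize!, Tuple{HSD, PAR})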
I should also mention that I modified some parts of Tulip to remove @timeit macros and debugging branches of the code. Also, in a few places I replaced try x catch y end with just x; an example of that kind of change is sketched just below. This both conceivably helps with my performance issues and does away with a lot of JET noise.
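To illustrate that kind of change, using the try/catch from the update! method quoted above purely as an example (my real edits are spread over a few files):

# Before, as in Tulip's ldlfact.jl:
try
    LDLF.ldl_factorize!(Symmetric(kkt.K), kkt.F)
catch err
    isa(err, LDLF.SQDException) && throw(PosDefException(-1))
    rethrow(err)
end

# After, in my locally modified copy (the error translation is lost,
# which is acceptable while I'm only profiling):
LDLF.ldl_factorize!(Symmetric(kkt.K), kkt.F)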