Type issue getting PreallocationTools.DiffCache to work with Symbolics.jacobian_sparsity

Hello,

I would like to use Symbolics.jacobian_sparsity to detect the sparsity pattern of code that already supports ForwardDiff.jacobian via PreallocationTools.
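
For context, jacobian_sparsity works as expected on a plain function without any cache; a minimal sketch for illustration (simple! is a hypothetical stand-in, not my actual code):

using Symbolics

simple!(out, x) = (out[1] = 2 * x[1]^2; out[2] = 2 * x[2]^2; nothing)

Symbolics.jacobian_sparsity(simple!, zeros(2), [1.0, 2.0])
# 2×2 SparseMatrixCSC{Bool, Int64} with 2 stored entries (the diagonal)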

The code below is an MWE for a program where the full residual array dae.f is built incrementally by evaluating multiple functions, res1! and res2!.

using Symbolics
using PreallocationTools
using ForwardDiff


struct GlobalDAE{T}
    x::T  # array of variables
    f::T  # array of residuals
end


dae_cache = GlobalDAE(dualcache([0., 0.]), dualcache([0., 0.]));


# multiple residual functions that collectively build the residual array
function res1!(dae::GlobalDAE, u)
    f = get_tmp(dae.f, u)
    x = get_tmp(dae.x, u)
    @views f[1] = 2 * x[1] * x[1]
    nothing
end

function res2!(dae::GlobalDAE, u)
    f = get_tmp(dae.f, u)
    x = get_tmp(dae.x, u)
    @views f[2] = 2 * x[2] * x[2]
    nothing
end


# A dispatch function to set the guess from the solver, call the residual
# functions, and write to the residual array
function res_dae_cache!(dae, output, x0)
    get_tmp(dae.x, x0) .= x0   # <- errors here for x0 = Num[...]

    res1!(dae, x0)
    res2!(dae, x0)

    output .= get_tmp(dae.f, x0)
    nothing
end


wrap_res_dae_cache! = (output, x0) -> res_dae_cache!(dae_cache, output, x0)


x0 = [1., 2.];
output = similar(x0);
res_dae_cache!(dae_cache, output, x0)

ForwardDiff.jacobian(wrap_res_dae_cache!, output, x0)  # works

Symbolics.jacobian_sparsity(wrap_res_dae_cache!, output, x0)

Error stack trace:

ERROR: MethodError: no method matching Float64(::Num)
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  (::Type{T})(::T) where T<:Number at boot.jl:772
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
  ...
Stacktrace:
  [1] convert(#unused#::Type{Float64}, x::Num)
    @ Base ./number.jl:7
  [2] setindex!(A::Vector{Float64}, x::Num, i1::Int64)
    @ Base ./array.jl:966
  [3] _unsafe_copyto!(dest::Vector{Float64}, doffs::Int64, src::Vector{Num}, soffs::Int64, n::Int64)
    @ Base ./array.jl:253
  [4] unsafe_copyto!
    @ ./array.jl:307 [inlined]
  [5] _copyto_impl!
    @ ./array.jl:331 [inlined]
  [6] copyto!
    @ ./array.jl:317 [inlined]
  [7] copyto!
    @ ./array.jl:343 [inlined]
  [8] copyto!
    @ ./broadcast.jl:954 [inlined]
  [9] copyto!
    @ ./broadcast.jl:913 [inlined]
 [10] materialize!
    @ ./broadcast.jl:871 [inlined]
 [11] materialize!
    @ ./broadcast.jl:868 [inlined]
 [12] res_dae_cache!(dae::GlobalDAE{PreallocationTools.DiffCache{Vector{Float64}, Vector{Float64}}}, output::Vector{Num}, x0::Vector{Num})
    @ Main ./REPL[8]:4
 [13] (::var"#5#6")(output::Vector{Num}, x0::Vector{Num})
    @ Main ./REPL[9]:1
 [14] jacobian_sparsity(::var"#5#6", ::Vector{Float64}, ::Vector{Float64}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Symbolics ~/.julia/packages/Symbolics/wbKHF/src/diff.jl:582
 [15] jacobian_sparsity(::Function, ::Vector{Float64}, ::Vector{Float64})
    @ Symbolics ~/.julia/packages/Symbolics/wbKHF/src/diff.jl:577
 [16] top-level scope
    @ REPL[14]:1

Using the debugger, I identified that

    get_tmp(dae.x, x0) .= x0

does not work for x0::Vector{Num}, even though Num <: Real == true.

I also checked that after using the three packages, there are five methods of get_tmp:

methods(get_tmp)
# 5 methods for generic function "get_tmp":
[1] get_tmp(dc::PreallocationTools.DiffCache, u::T) where T<:Dual in PreallocationTools at /home/hacui/.julia/packages/PreallocationTools/bohGK/src/PreallocationTools.jl:44
[2] get_tmp(dc::PreallocationTools.DiffCache, u::Number) in PreallocationTools at /home/hacui/.julia/packages/PreallocationTools/bohGK/src/PreallocationTools.jl:60
[3] get_tmp(dc::PreallocationTools.DiffCache, u::LabelledArrays.LArray{T, N, D, Syms}) where {T<:Dual, N, D, Syms} in LabelledArrays at /home/hacui/.julia/packages/LabelledArrays/QY8rm/src/LabelledArrays.jl:67
[4] get_tmp(dc::PreallocationTools.DiffCache, u::AbstractArray{T}) where T<:Dual in PreallocationTools at /home/hacui/.julia/packages/PreallocationTools/bohGK/src/PreallocationTools.jl:52
[5] get_tmp(dc::PreallocationTools.DiffCache, u::AbstractArray) in PreallocationTools at /home/hacui/.julia/packages/PreallocationTools/bohGK/src/PreallocationTools.jl:61

but they don’t include a method that supports a vector of Num.
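
So, if I understand the dispatch correctly, a Vector{Num} falls through to the generic AbstractArray method [5], which returns the plain Float64 buffer dc.du, and the broadcast assignment into that buffer is what triggers the conversion error:

@which get_tmp(dae_cache.x, Num[1.0, 2.0])
# presumably method [5]: get_tmp(dc::DiffCache, u::AbstractArray)

get_tmp(dae_cache.x, Num[1.0, 2.0]) .= Num[1.0, 2.0]
# ERROR: MethodError: no method matching Float64(::Num)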

I suspect I am using PreallocationTools beyond its intended purpose and might also be missing another package. How can I get the MWE to work? Thank you in advance.

Edit: added the stack trace.

PreallocationTools.jl doesn’t support this right now, but I’ve thought about doing it before and I think it could be really helpful, if done lazily. Open an issue and we can discuss this further.

Hi Chris,

Thank you for your reply! Num is a subtype of Real that wraps symbolic expressions, so it passes Real-typed dispatch, but converting a Num back to a Float64 is not permitted. My understanding may be superficial, and I'm curious about the root cause of this issue.
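
For example, just to illustrate the point about Num:

using Symbolics
@variables a
a isa Real    # true, since Num <: Real
Float64(a)    # ERROR: MethodError: no method matching Float64(::Num)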

I found a workaround by duplicating the cached data with Num element types. The code below shows it, though it is nowhere close to an ideal solution.

import PreallocationTools: get_tmp

get_tmp(dc::PreallocationTools.DiffCache, u::AbstractArray{T} where T<:Num) = dc.du;

dae_cache_num = GlobalDAE(dualcache(Num[0., 0.]), dualcache(Num[0., 0.]));
x0_num = Num[1., 2.];
output_num = similar(x0_num);
wrap_res_dae_cache_num! = (output, x0) -> res_dae_cache!(dae_cache_num, output, x0)
Symbolics.jacobian_sparsity(wrap_res_dae_cache_num!, output_num, x0_num)

I will open an issue next.
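
Just to make the issue concrete, something along these lines is what I have in mind; this is an untested sketch, and get_tmp_lazy and its behavior are placeholders, not existing PreallocationTools API:

# Fall back to a lazily allocated buffer for element types the cache was not
# built for (e.g. Num). A real implementation would store this buffer inside
# the cache instead of allocating on every call.
function get_tmp_lazy(dc::PreallocationTools.DiffCache, u::AbstractArray{T}) where {T}
    if T <: ForwardDiff.Dual || T === eltype(dc.du)
        return get_tmp(dc, u)      # existing fast paths
    else
        return similar(dc.du, T)   # e.g. a Vector{Num} for symbolic tracing
    end
end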

Yes, exactly, that can be a solution. Or just a fallback Any mode.
