We have a framework where users define models as nested structs and declare at runtime which fields are optimization variables. The framework then needs to differentiate constraints/objectives with respect to those selected fields using ForwardDiff.
The challenge: kernels take the top-level struct, but ForwardDiff needs `Dual` to propagate through only the selected field, which may live somewhere deep in the struct graph. I'm not sure how best to approach this. Below are two approaches I have prototyped. Am I on the right track, or is there a better way? Thanks in advance for any thoughts or alternatives.
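To make the failure mode concrete, here is a minimal sketch (the `Bar`/`h` names are hypothetical, not part of my framework): with concretely typed fields there is no slot a `Dual` can occupy, so the naive rebuild throws during conversion.

```julia
using ForwardDiff

struct Bar
    a::Float64   # concrete field type: a Dual cannot be stored here
    b::Float64
end

h(bar::Bar) = bar.a^2 + bar.b

bar = Bar(2.0, 3.0)

# Rebuilding Bar with a Dual in slot `a` fails: Bar's implicit constructor
# tries to convert the Dual to Float64, which throws.
err = try
    ForwardDiff.derivative(x -> h(Bar(x, bar.b)), bar.a)
catch e
    e
end
```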
**Approach 1: Accessors.jl + independent type parameters**
Give each potentially differentiable field its own type parameter and use Accessors.jl's `@set` to surgically inject a `Dual`:
```julia
using ForwardDiff
using Accessors

struct Foo{TA<:Real, TB<:Real}
    a::TA
    b::TB
end

f(foo::Foo) = foo.a^2 * foo.b   # example objective

foo = Foo(2.0, 3.0)

# @set rebuilds foo with a Dual in slot `a`; TA widens to the Dual type
ForwardDiff.derivative(foo.a) do x
    foo_dual = @set foo.a = x
    f(foo_dual)
end
```
Pro: Call sites unchanged, struct evolves naturally.
Con: Every differentiable field needs its own type parameter — proliferates on complex models.
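Since the selected field can live deep in the struct graph, note that `@set` composes through nesting; a minimal sketch, with hypothetical `Inner`/`Outer` types:

```julia
using ForwardDiff
using Accessors

struct Inner{T<:Real}
    a::T
end

struct Outer{TI<:Inner, TB<:Real}
    inner::TI
    b::TB
end

g(o::Outer) = o.inner.a^2 + o.b   # example constraint

o = Outer(Inner(2.0), 5.0)

# @set rebuilds the Outer -> Inner path with a Dual at the leaf;
# everything off that path is reused as-is.
d = ForwardDiff.derivative(o.inner.a) do x
    o_dual = @set o.inner.a = x
    g(o_dual)
end
# d == 2 * o.inner.a == 4.0
```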
**Approach 2: Kernels take field values, not structs**
Kernels take plain values extracted at the call site, so data held in structs never needs to be converted to `Dual`:
```julia
using ForwardDiff

kernel(a, b) = a^2 * b   # example kernel on plain values

foo = Foo(2.0, 3.0)
kernel(foo.a, foo.b)

# differentiate w.r.t. the first argument only; foo.b is closed over
ForwardDiff.derivative(x -> kernel(x, foo.b), foo.a)
```
Pro: Structs need no type parameters.
Con: Adding a field to a struct requires updating all kernel signatures — loses OO ergonomics. Maintenance nightmare.
**Question**
Is there a cleaner, Julia-idiomatic solution? Specifically: how do you differentiate with respect to a runtime-selected subset of fields in a nested struct without proliferating type parameters, breaking struct-based dispatch, or rebuilding large structs for every AD permutation? Thanks!