Hello,

I am building a large `ODESystem` using ModelingToolkit, on which I'd like to use automatic differentiation.

Because it is large, there are many parameters, resulting from the many subsystems.

However, I want the automatic differentiation to optimize only a handful of these parameters.

Marking those parameters as tunable and then using `tunable_parameters` is an elegant way of obtaining only these parameters.

But how can I then use `remake(prob, ...)` to recreate the problem with these parameters, while preventing the others from being changed?

As far as I can tell, `ModelingToolkit.varmap_to_vars` does not help here.

Any suggestions would be much appreciated!


This is more of just writing the cost function in a nice way.

Thanks for your reply!

I’m afraid I don’t quite follow; could you elaborate perhaps?

My cost function does not depend on any of the parameters directly, only on the behavior of the system (the output of a small number of subsystems, to be precise).

The cost function will have a gradient that depends on all or most parameters; I just don't want all parameters to change.

It seems to me this would be a good use-case for tunable parameters, but I could be wrong.

Every time you change these few parameters, you want to recompute your ODE system and its outputs, and reevaluate the objective/constraint functions?

If so, it reminds me of the implicit function theorem (IFT): you can consider the outputs of the ODE system as implicit functions of your few parameters (through the ODE system). You can then optimize only wrt the few parameters (say x) and provide the optimization solver with the coupled (total) derivatives wrt x, which you compute using the IFT.
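In symbols (a standard IFT statement, not anything MTK-specific): if the system outputs u are defined implicitly by a residual F(u, x) = 0, then for an objective g(u, x) the total derivative the optimizer needs is

```
du/dx = -(∂F/∂u)^{-1} (∂F/∂x)
dg/dx = ∂g/∂x + (∂g/∂u) (du/dx)
```

so you only ever differentiate wrt the few parameters x.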

What I want to do is quite simple.

I want to do something along the lines of what the MTK docs do here:

```
p = @parameters x y z
prob = ODEProblem(sys, [], [x => 1.0, y => 2.0, z => 3.0])
# Map each parameter to its position in the problem's parameter vector
idxs = Int.(ModelingToolkit.varmap_to_vars([x => 1, y => 2, z => 3], p))
function loss(p)
    # remake returns a new problem; it does not modify `prob` in place
    newprob = remake(prob, p = p[idxs])
    sol = solve(newprob, Tsit5())
    sum(abs2, sol)
end
```

But I don't want to use the entire `p` vector in the loss function. Rather, I want to use a subset (whose variable names and indices I have).

I know I could try something like

```
p_subset_indices = ...
function loss(p_subset)
    new_p = copy(prob.p)
    new_p[p_subset_indices] .= p_subset
    newprob = remake(prob, p = new_p)
    sol = solve(newprob, Tsit5())
    sum(abs2, sol)
end
```

But this would not work with Zygote, as mutation is not supported.
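For illustration, even mutating a freshly copied array trips Zygote; this minimal snippet (independent of MTK, made-up function name) reproduces the limitation:

```julia
using Zygote

# Mutates a local copy element-wise; fine in plain Julia,
# but Zygote cannot differentiate through setindex! on Arrays.
f(x) = (y = copy(x); y[1] = 2 * x[1]; sum(y))

f([1.0, 2.0])                     # works: 4.0
# Zygote.gradient(f, [1.0, 2.0])  # errors: "Mutating arrays is not supported"
```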

I think this is a useful functionality to have.

Quite often I want to re-run simulations with just a few different parameters, and not having to recompile the `ODEProblem` again would be very helpful.

Instead, build the array rather than mutating it, using a comprehension with an ifelse.

```
new_p = [i in p_subset_indices ? p_subset[findfirst(isequal(i), p_subset_indices)] : p[i] for i in 1:length(p)]
```

Or use a `Zygote.Buffer`.
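A sketch of the `Zygote.Buffer` route (hypothetical helper name; assumes `p_subset_indices` holds the positions of the tunables within the full parameter vector):

```julia
using Zygote

# Rebuild the full parameter vector without mutating an Array:
# a Zygote.Buffer supports setindex! under AD, and copy() turns
# it back into an ordinary Array.
function rebuild_p(p, p_subset, p_subset_indices)
    buf = Zygote.Buffer(p)
    for i in 1:length(p)
        buf[i] = p[i]               # start from the existing values
    end
    for (k, i) in enumerate(p_subset_indices)
        buf[i] = p_subset[k]        # overwrite only the tunable entries
    end
    return copy(buf)
end
```

Inside the loss you would then call `remake(prob, p = rebuild_p(prob.p, p_subset, p_subset_indices))`.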

Thanks Chris, that’s helpful!

For others who are looking for a potential solution; here’s what I ended up doing, inspired by Chris’s comment.

After marking parameters as tunable, you can extract their indices easily:

```
function split_parameters(params)
    indexmap = [p => i for (i, p) in enumerate(params)]
    subset_indexmap = eltype(indexmap)[]
    for (p, i) in indexmap
        istunable(p) && push!(subset_indexmap, p => i)
    end
    return indexmap, subset_indexmap
end
```

Then one can rebuild the `p` vector as follows, replacing the relevant subset parameter values:

```
p_idxs_map, p_subset_idx_map = split_parameters(parameters(sys))
p_subset_indices = last.(p_subset_idx_map)
p = prob.p
p_subset = ...
# Map a full-vector index to its position within the subset
idx_to_subset_idx(i) = findfirst(isequal(i), p_subset_indices)
new_p = [i in p_subset_indices ? p_subset[idx_to_subset_idx(i)] : p[i] for i in 1:length(p)]
```

which should (hopefully) work fine with Zygote also.
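As a sanity check, here is a tiny self-contained version of the comprehension approach (made-up numbers, no MTK involved) that Zygote differentiates only wrt the subset:

```julia
using Zygote

p = [10.0, 20.0, 30.0, 40.0]      # full parameter vector
p_subset_indices = [2, 4]         # positions of the "tunable" parameters

# Rebuild the full vector from a candidate subset, no mutation involved
idx_to_subset_idx(i) = findfirst(isequal(i), p_subset_indices)
rebuild(p_subset) =
    [i in p_subset_indices ? p_subset[idx_to_subset_idx(i)] : p[i] for i in 1:length(p)]

loss(p_subset) = sum(abs2, rebuild(p_subset))
Zygote.gradient(loss, [2.0, 4.0])[1]   # gradient only wrt the two subset entries
```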
