Hi everyone,

I’m trying to implement a toy example of running `NeuralPDE.NNODE` to add to my repository of SIR models. Here’s what I have, which doesn’t work well.

```julia
using OrdinaryDiffEq
using NeuralPDE
using Flux
using OptimizationOptimisers
function sir_ode(u, p, t)
    (S, I, R) = u
    (β, γ) = p
    dS = -β*I*S
    dI = β*I*S - γ*I
    dR = γ*I
    [dS, dI, dR]
end
tspan = (0.0,40.0)
u0 = [0.99, 0.01, 0.0];
p = [0.5, 0.25];
prob_ode = ODEProblem(sir_ode, u0, tspan, p);
δt = 1.0  # save interval (δt was used below but never defined)
sol_ode = solve(prob_ode, Tsit5(), saveat=δt);
num_hid = 8
chain = Flux.Chain(Dense(1, num_hid, σ), Dense(num_hid, 3))
opt = OptimizationOptimisers.Adam(0.1)
alg = NeuralPDE.NNODE(chain, opt)
sol_nnode = solve(prob_ode, alg;
                  verbose = true,
                  abstol = 1f-6,
                  maxiters = 2000)
out = Array(sol_nnode)
```

This doesn’t give a good solution, as my state vector u = [S, I, R] is (a) constrained to be positive and (b) constrained to sum to 1. My understanding was that the `Flux.Chain` represents a mapping from `t` to `u` such that `u'(t)` matches the output of the `sir_ode` function, but when I add a `softmax` layer to the above, it doesn’t give output `out` where S(t)+I(t)+R(t)=1.0. Any suggestions for a neural net architecture that works?
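For concreteness, the `softmax` variant I tried just appends `softmax` as a final layer to the same architecture (the name `chain_sm` is mine, only for illustration):

```julia
using Flux

num_hid = 8
# Same architecture as above, with softmax appended so the three raw
# network outputs are positive and sum to 1 at the network level.
chain_sm = Flux.Chain(Dense(1, num_hid, σ), Dense(num_hid, 3), softmax)
```

Passing `chain_sm` to `NeuralPDE.NNODE` in place of `chain` trains, but the resulting `out` still doesn’t satisfy the sum-to-one constraint, as described above.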

In the NeuralPDE.jl docs there is a section on how to add constraints. Can this also be done with the simplified `NNODE` interface, and is it possible to implement the above constraints using this approach?