How can I evaluate a neural net and take its derivative with respect to its input?

I have solved a differential equation with a neural net; I leave the code below as an example. I want to be able to compute the first derivative of this neural net with respect to its input “x” and to evaluate this derivative at any “x”.

1- Notice that I compute der = discretization.derivative. Is that the derivative of the neural net with respect to “x”? With this expression, if I type [first(der(phi, u, [x], 0.00001, 1, res.minimizer)) for x in xs] I get something that might be the derivative, but I cannot find a way to extract it into an array, let alone plot it. How can I evaluate this derivative at any point, let's say at all points in the array defined below as “xs”? In the Update below I describe a more straightforward approach I tried for computing the derivative (but I still did not succeed).

2- Is there any other way that I could take the derivative with respect to x of the neural net?

I am new to Julia, so I am struggling a bit with how to manipulate the data types. Thanks for any suggestions!

Update: I found a way to see the symbolic expression for the neural net by doing the following:

predict(x) = first(phi(x,res.minimizer))
df(x)      = gradient(predict, x)[1]

After running these two lines of code, typing predict(x) or df(x) in the REPL spits out the full expression of the neural net with the weights and biases of the solution. However, I cannot evaluate the gradient; it returns an error. How can I evaluate the gradient of my function predict(x) with respect to x?
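
(For what it is worth, the x I type in the REPL above is still the symbolic variable declared with @parameters x, which is presumably why predict(x) prints the full symbolic expression. Calling the function with a plain number returns a numeric value; it is the gradient call that errors for me. The 0.5 below is just an arbitrary point inside the domain.)

predict(0.5)   # numeric evaluation of the trained net at x = 0.5
df(0.5)        # this gradient call is what errors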

The original code creating the neural net and solving the equation:

using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux
import ModelingToolkit: Interval, infimum, supremum

@parameters x
@variables u(..)

Dx   = Differential(x)
a    = 0.5
eq   = Dx(u(x)) ~ -log(x*a)

# Initial and boundary conditions
bcs  = [u(0.) ~ 0.01]
# Space and time domains
domains           = [x ∈ Interval(0.01,1.0)]
# Neural network
n                 = 15
chain             = FastChain(FastDense(1,n,tanh),FastDense(n,1))
discretization    = PhysicsInformedNN(chain, QuasiRandomTraining(100))
@named pde_system = PDESystem(eq,bcs,domains,[x],[u(x)])
prob              = discretize(pde_system,discretization)

const losses = []
cb = function (p,l)
    push!(losses, l)
    if length(losses)%100==0
        println("Current loss after $(length(losses)) iterations: $(losses[end])")
    end
    return false
end

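# Two-stage training: run ADAM first, then restart from its result and refine with BFGS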
res  = GalacticOptim.solve(prob, ADAM(0.01); cb = cb, maxiters=300)
prob = remake(prob,u0=res.minimizer)
res  = GalacticOptim.solve(prob,BFGS(); cb = cb, maxiters=1000)
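# Trained trial solution (the neural-net approximation of u) and NeuralPDE's internal derivative function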
phi  = discretization.phi
der  = discretization.derivative

using Plots

analytic_sol_func(x) = (1.0+log(1/a))*x-x*log(x)
dx                   = 0.05
xs                   = LinRange(0.01,1.0,50)
u_real               = [analytic_sol_func(x) for x in xs]
u_predict            = [first(phi(x,res.minimizer)) for x in xs]
x_plot               = collect(xs)
xconst               = analytic_sol_func(1)*ones(size(xs))
plot(x_plot ,u_real,title = "Solution",linewidth=3)
plot!(x_plot ,u_predict,line =:dashdot,linewidth=2)

The solution I found is to differentiate the approximation with the help of ForwardDiff.

So if the neural network approximation of the unknown function is called “funcres”, then we take its derivative with respect to x as shown below.

using ForwardDiff
funcres(x) = first(phi(x,res.minimizer))
dxu        = ForwardDiff.derivative.(funcres, Array(x_plot))
display(plot(x_plot,dxu,title = "Derivative",linewidth=3))
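
As a quick check (a sketch reusing the same x_plot grid; the exact derivative comes straight from the right-hand side of the equation above, du/dx = -log(a*x)), the ForwardDiff result can be compared with the analytic derivative:

du_exact = [-log(a*x) for x in x_plot]                      # exact du/dx from the ODE
plot(x_plot, du_exact, title = "Derivative", linewidth = 3)
plot!(x_plot, dxu, line = :dashdot, linewidth = 2)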

Thank you for this. dxu gives the first-order derivative, but can you tell me how to compute the second order? I tried applying hessian but it is not working.
Thank you.
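
One possibility for the second derivative (a sketch, not tested against this exact setup) is to nest ForwardDiff.derivative, since ForwardDiff.hessian expects a function of a vector argument rather than a scalar one:

using ForwardDiff
d1(x) = ForwardDiff.derivative(funcres, x)   # first derivative of the trained net
d2(x) = ForwardDiff.derivative(d1, x)        # second derivative by nesting
dxxu  = d2.(Array(x_plot))
display(plot(x_plot, dxxu, title = "Second derivative", linewidth = 3))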