I started to code using only the proportional part. Thus, I need an input layer with 2 inputs and 1 output, and an output layer with 1 input and 1 output. This is my code so far:

struct Input
    W
end
Input() = Input(param(randn(2)))
(m::Input)(x) = m.W[1]*x[1] + m.W[2]*x[2]

struct P
    W
end
P() = P(param(randn()))
function (m::P)(x)
    # Proportional action, saturated at ±W
    if (x > 1)
        return m.W[1]
    elseif (x < -1)
        return -m.W[1]
    else
        return x*m.W[1]
    end
end
m = Chain(Input(), P())

function loss(x, y)
    # Simulate the system.
    t = 0:0.1:100
    o = x
    Δ = 0.0
    for k in t
        Δ = Δ + (y - o)^2
        r = m([y; o])
        o = o + r*0.1
    end
    return Δ/length(t)
end
ps = Flux.params(m)
Tracker.gradient(()->loss(0,1), ps)

Which produces the following error:

julia> Tracker.gradient(()->loss(0,1), ps)
ERROR: MethodError: Cannot `convert` an object of type Array{Float64,1} to an object of type Float64

Can anyone please help me?

Btw, if the input layer only has one input, then it works fine.

The model only has one input (the input data), but you are passing both the input and the output. It should be something like this:

r = m(o)

I think "y" is a Float and "o" is a Vector:

(y-o)^2
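The mismatch is easy to reproduce in plain Julia (no Flux needed); the values below are made up purely for illustration:

```julia
y = 1.0          # a Float64, e.g. the setpoint
o = [0.0, 0.0]   # a Vector{Float64}

# y - o          # MethodError: no method matching -(::Float64, ::Vector{Float64})
err = y .- o     # broadcasting subtracts element-wise: [1.0, 1.0]
sq  = sum(abs2, err)  # collapse to a scalar squared error: 2.0
```

Either broadcast the subtraction or make sure `o` stays a scalar throughout the loop.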

In the loss function it is not clear which system is being controlled, and I think there are errors in the controller. I think it should be something like this:

for k in t
    control_output = Tracker.data(m(o)) # controller output
    out = model(control_output)         # model (plant) output
    Δ = Δ + (y - out)^2                 # squared error (desired output - model output)
    o = [y, out]                        # update the next input
end
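To make the suggestion above concrete, here is a self-contained sketch of a corrected loss function. It is Flux-free for illustration: the stand-in `m` is a hypothetical proportional law (in the thread, `m` is the Chain), and the plant is the same pure integrator used in the original code:

```julia
# Stand-in controller for illustration; in the thread, `m` is the Flux Chain.
m(x) = 0.5*(x[1] - x[2])   # hypothetical proportional law on (setpoint - output)

function loss(x, y)
    t = 0:0.1:100
    o = [y, x]             # controller input: [setpoint, plant output]
    out = x                # plant output
    Δ = 0.0
    for k in t
        r = m(o)           # controller output
        out = out + r*0.1  # plant is a pure integrator, as in the original code
        Δ += (y - out)^2   # squared tracking error
        o = [y, out]       # next controller input
    end
    return Δ/length(t)
end

loss(0.0, 1.0)   # small: the controller drives the output toward the setpoint
```

Keeping the controller input as the vector `[y, out]` at every step avoids the Float/Vector mix-up from the original loop.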

Thanks! With the help of some people on Slack, I managed to finally tune my PID Neural Network. I will write a tutorial on my website, something like a "Hello World" of machine learning in Julia for control engineers.

Thanks for posting this question. It doesn't look like you've had time to update your blog yet (or at least, I don't see anything at https://www.ronanarraes.com/blog/). Beyond what @phelipe suggested above, did you have to do much to get a working system? How well did it work?

Indeed, I have not had time to write the text yet, sorry. I decided to spend some time finishing the package PrettyTables.jl and trying to release an initial version of TextUserInterfaces.jl.

But it worked well after some modifications based on @phelipe's advice. I currently have two students working on it, and we managed to tune a PIDNN using Flux.jl. I will try to write the blog post about it soon.

Hi, I am wondering if you have written up your package yet? I am looking for something similar to tune a temperature and humidity PID controller in a greenhouse (GH). I have a full numerical model of my GH working in Julia. Thanks!

Yes, we could apply a PIDNN to control the satellite attitude. I should write a blog post about it shortly. However, the solution is very simple; a package to implement a PIDNN seems like overkill. If you need more details, please let me know.

Tuning a PID controller for a transfer function (that is, a single-input single-output system modelled as linear) using a neural network? Isn't that overkill?

What I did was to replace the PID with a neural controller. As @zdenek_hurak said, tuning a PID using a neural network seems like overkill. What you can do (although I do not know if this is possible) is use Flux.jl back propagation to create an optimization framework that finds the gains as if they were neural network weights.
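A minimal sketch of that idea, assuming the Tracker-based Flux used elsewhere in this thread (`param`, `Tracker.gradient`, `Tracker.update!`): the gains are a tracked vector, the plant (a pure integrator), the time step, and the learning rate are all made-up illustration values, not anyone's actual setup:

```julia
using Flux, Flux.Tracker

K = param([1.0, 0.1, 0.0])   # [Kp, Ki, Kd] treated as trainable weights

function pid_loss(setpoint)
    dt = 0.1
    out = 0.0          # plant output
    int_e = 0.0        # integral of the error
    e_prev = setpoint
    Δ = 0.0
    for k in 0:dt:20
        e = setpoint - out
        int_e = int_e + e*dt
        u = K[1]*e + K[2]*int_e + K[3]*(e - e_prev)/dt  # PID law
        out = out + u*dt   # plant: a pure integrator, for illustration
        e_prev = e
        Δ = Δ + e^2*dt     # integrated squared tracking error
    end
    return Δ
end

ps = Flux.params(K)
for i in 1:100
    gs = Tracker.gradient(() -> pid_loss(1.0), ps)
    Tracker.update!(K, -0.01*gs[K])   # plain gradient descent on the gains
end
```

Whether the gradient survives the whole simulation loop well enough to be useful is exactly the open question raised above; this only shows the mechanical setup.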