 # Weights update in Flux

Hello!

I am trying to use Flux to train a neural network, but in this case I don’t have target values for the output. Let me formulate a simpler version of my problem.

Suppose I have an input

```julia
input = [0.8 0.1 0.3]
```

and, using the parameters initialized by Flux, I obtain the following output:

```julia
output = [0.3 0.8 0.3]
```

As I said, I don’t have a real (target) output to compare against the network’s output in order to update the weights and produce a better prediction. However, I know that those output values will be used as inputs for a different problem (in my real case, an optimization problem). So, assume the output values feed into the following function:

```julia
fc = 3y_1 + 2y_2 + 5y_3
```

Plugging the values from `output` into the `fc` expression gives `3(0.3) + 2(0.8) + 5(0.3) = 4`. However, I know the optimum value for this function; say it is 2.5.
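As a quick sanity check of that arithmetic in plain Julia (no Flux needed; the variable names here are just for illustration):

```julia
output = [0.3, 0.8, 0.3]                        # the NN output from above
fc = 3*output[1] + 2*output[2] + 5*output[3]     # evaluate fc on the output
println(fc)        # ≈ 4.0
println(fc - 2.5)  # ≈ 1.5, the gap to the known optimum
```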

How can I implement a neural network in Flux such that this difference (in the example, `4 - 2.5 = 1.5`) is used to update the weights and obtain a new, better prediction?

I translated your description into Flux training code. See if you can follow it; hopefully it clears things up.

```julia
using Flux

input = [0.8, 0.1, 0.3]

# A single dense layer mapping the 3 inputs to 3 outputs
model = Chain(Dense(3, 3, relu))

output = model(input)

# The downstream function fc = 3y_1 + 2y_2 + 5y_3
fn(output) = transpose(output) * [3; 2; 5]

# Squared distance between fc(model(input)) and the known optimum 2.5
loss(input) = sum((fn(model(input)) .- 2.5) .^ 2)

p = Flux.params(model)
opt = Descent(0.1)  # any optimiser will do; it was left undefined above

old_loss = loss(input)

Flux.train!(loss, p, [(input,)], opt)

new_loss = loss(input)

println(old_loss)
println(new_loss)  # new loss should be smaller
```
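If a single `train!` step is not enough, the same idea can be run for many epochs with an explicit gradient loop. This is only a sketch; the optimiser (`ADAM`), its learning rate, and the epoch count are arbitrary choices of mine, not something from the post:

```julia
using Flux

input = [0.8, 0.1, 0.3]
model = Chain(Dense(3, 3, relu))

fn(y) = transpose(y) * [3; 2; 5]            # the downstream objective fc
loss(x) = sum((fn(model(x)) .- 2.5) .^ 2)   # squared gap to the known optimum

ps  = Flux.params(model)
opt = ADAM(0.01)                            # arbitrary optimiser / learning rate

initial = loss(input)
for epoch in 1:200                          # arbitrary epoch count
    gs = Flux.gradient(() -> loss(input), ps)
    Flux.update!(opt, ps, gs)
end
println(initial, " -> ", loss(input))       # the loss should shrink toward 0
```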

Thank you very much, @xiaodai! I was able to reproduce your code with a few modifications, and it is exactly what I was looking for!