# Flux: get intermediate results per epoch

Hello,

how do you access (or store) intermediate results, such as the weights per epoch, in Flux? I started using it today and was really surprised by how easy it is to implement things. One thing I could not find and have not solved so far: how do you access results per epoch?

My minimum working example below does a simple linear regression. How could I access the weights after epoch i?

I have managed to store losses, however, I am not able to achieve the same for weights.

```julia
using Flux

# Generate simple true data for MWE
nFeat = 10
X = rand(nFeat, 200)             # rows = features, columns = observations
W = transpose(rand(1:10, nFeat))
b = rand(1:10, 1)
y = W * X .+ b

# Build simple model for MWE
s_in = size(X, 1)
s_out = 1
m = Flux.Chain(
    # only one layer for MWE...
    Dense(s_in, s_out, identity)
)

loss(x, y) = Flux.mse(m(x), y)
opt = Flux.Descent()
my_losses = []
evalcb = () -> push!(my_losses, loss(X, y)) # works!

# FOR WEIGHTS
# my_weights = []
# evalcb = () -> push!(my_weights, [l.W for l in m.layers]) # stores only the final weights,
# because it pushes references to the weight arrays rather than copies

# Train model
data = [(X, y)]
epochs = 1000
for i in 1:epochs
    if i % 100 == 0
        @show i
    end
    Flux.train!(loss, Flux.params(m), data, opt, cb = Flux.throttle(evalcb, 10))
end
```

I could not find anything in the documentation. Since I am not an expert in Julia and/or ML, it may just be my lack of understanding, however…


I’m not an expert either.
Is this useful for you?
https://fluxml.ai/Flux.jl/stable/saving/#Checkpointing-1


My understanding is that this is meant for large models with long computation times, where I would actually want to store the results to a file. I guess it could work as a workaround, though.

But my question is actually much simpler: how do I access the model information at epoch i? For example, say I ran a model for 1000 epochs and want to see the weights at epoch 435.

How would I do this?

Thanks!

The parameters are stored inside the `Flux.params` object, and you can index into it.

Before training the model declare

```julia
p = Flux.params(m)
```

then pass `p` to the train function.
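To illustrate (a sketch, assuming a single `Dense` layer as in the MWE above): `Flux.params` collects the underlying parameter arrays, which `train!` mutates in place, so a per-epoch snapshot must be a copy:

```julia
using Flux

m = Chain(Dense(10, 1, identity))
ps = Flux.params(m)

ps[1]  # the 1×10 weight matrix W of the Dense layer
ps[2]  # its bias vector b

# The entries of ps are the *same arrays* the optimiser updates,
# so to keep a snapshot of the weights you must copy them:
W_snapshot = copy(ps[1])
```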


Thanks, this makes sense. I got it to work, although I feel there is probably an easier way to achieve this than my “hack”. Here is my amended code:

```julia
# Train model
data = [(X, y)]
epochs = 1000
ps = Flux.params(m)
all_Ws = zeros(epochs, nFeat) # container to store intermediate parameters (here: W)
for i in 1:epochs
    if i % 100 == 0
        @show i
    end
    Flux.train!(loss, ps, data, opt, cb = Flux.throttle(evalcb, 10))
    all_Ws[i, :] = vec(ps[1]) # ps[1] is W (a 1×nFeat matrix); the assignment copies its current values
end
```
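For what it's worth, a slightly more general variant (a sketch under the same setup, with `m`, `loss`, `data`, and `opt` defined as in the MWE above) avoids hard-coding the container shape by storing a deep copy of all parameters each epoch:

```julia
using Flux

ps = Flux.params(m)
epochs = 1000
history = Vector{Any}(undef, epochs) # one snapshot of all parameters per epoch

for i in 1:epochs
    Flux.train!(loss, ps, data, opt)
    history[i] = deepcopy(collect(ps)) # copy, because train! mutates the arrays in place
end

# Weights of the Dense layer after epoch 435:
# history[435][1]
```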