Flux.jl API changes

Hello, good evening.

I had an example solving the XOR problem in Flux.jl, and recently I noticed that it no longer works because of library updates. I tried to migrate the example, but the loss does not converge towards zero. What did I miss?

    using Flux, Statistics

    model = Chain(
        Dense(2 => 2),
        Dense(2 => 1)
    );

    # first layer weights
    display(Flux.params(model)[1]);
    # first layer bias
    display(Flux.params(model)[2]);
    # second layer weights
    display(Flux.params(model)[3]);

    # training data
    intrain = [0 0 1 1; 0 1 0 1];
    outtrain = [0 1 1 0];

    lossfunc(model, a, b) = mean(abs2.(model(a) .- b));

    opt = Descent(0.001);

    N = 5000;
    loss = zeros(N);

    traindata = [(intrain, outtrain)];

    for i in 1:N
        Flux.train!(lossfunc, model, traindata, opt);
    end

    println(model(intrain))
    lossfunc(model, intrain, outtrain)
Output:

    2×2 Matrix{Float32}:
     -0.088474  1.18821
     -0.241477  0.356282
    2-element Vector{Float32}:
     0.0
     0.0
    1×2 Matrix{Float32}:
     0.80949  1.37516
    Float32[0.49538296 0.49988925 0.4987973 0.5033036]
    0.25000843f0
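For anyone following along: without an activation function, the `Chain` above is linear, because composing two affine layers yields another affine map, and no affine map can fit XOR. A quick standalone check (base Julia only, no Flux; the weight values are made up for illustration):

```julia
# Two stacked affine layers collapse to a single affine map.
W1 = [0.5 -1.0; 2.0 0.3];  b1 = [0.1, -0.2]    # "Dense(2 => 2)" weights (made-up)
W2 = [1.5 -0.7];           b2 = [0.05]         # "Dense(2 => 1)" weights (made-up)

x = [1.0, 0.0]                                 # one sample input

two_layer = W2 * (W1 * x .+ b1) .+ b2          # what the Chain computes
one_layer = (W2 * W1) * x .+ (W2 * b1 .+ b2)   # the equivalent single affine map

@assert isapprox(two_layer, one_layer)         # same result for every x
```

So stacking linear `Dense` layers buys no extra expressive power over a single `Dense(2 => 1)`.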

The API is really very different now :neutral_face: Everything I had learned is lost :sweat_smile:

Solved. The activation function was missing; I thought the `Dense` constructor used sigmoid by default. Adding it fixed the issue:

    model = Chain(
        Dense(2 => 2, sigmoid),
        Dense(2 => 1, sigmoid)
    );
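Incidentally, the `0.25000843f0` loss in the original run is not random: 0.25 is exactly the best any affine model can do on XOR, since the optimal affine fit is the constant 0.5. A quick dependency-free check via ordinary least squares in base Julia:

```julia
# Best affine fit for XOR via ordinary least squares.
X = [0 0; 0 1; 1 0; 1 1]    # the four XOR inputs, one per row
y = [0, 1, 1, 0]            # XOR targets

A = [ones(4) X]             # prepend a bias column
coef = A \ y                # least-squares solution [bias, w1, w2]

pred = A * coef             # the best any affine map can predict
mse  = sum((pred .- y) .^ 2) / 4

println(coef)   # ≈ [0.5, 0.0, 0.0]: constant 0.5, ignoring the inputs
println(mse)    # ≈ 0.25, matching the plateau in the original run
```

The sigmoid between the layers breaks that linearity, which is why the loss can finally drop below 0.25.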