Julia and the Game of Go

Once in a while I start a new hobby project in Julia during the vacations, to learn Julia and have some fun, and then I always forget most of it afterwards because I use Python in my day job. :frowning: (I don’t like the forgetting part)
This Christmas I started a somewhat larger project: I wanted to translate the essential parts of the code from the book “Deep Learning and the Game of Go” from Python into Julia.
My current state: the module contains a random player, a Monte Carlo player, and an encoder that translates the game states into a matrix (‘OnePlaneEncoder’). In the next step I want to feed the training data from the encoder (saved as JLD2 files) into simple deep learning networks using Flux.

The current state is that the networks don’t train because of some implementation errors. I assume the arrays I feed in do not have the correct shape, but I do not know how to fix it. How do I fix the training? Should I do it in the training code, or earlier, before saving the data?
Besides, because I always stay at newbie level, the code might not be optimal and too Python-like.


I might misunderstand, but to me it looks like you obtained training data from somewhere else, and you’re trying to use it with a model you made?

It’s unlikely that’s going to work.

From a quick glance that looks likely.

First you need to find out how the data is supposed to be shaped. I would start by investigating what shapes the corresponding Python code uses and try to match them. If that is not good enough, investigate whether Flux has different expectations on data layout than the Python library used.
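For reference, Flux puts the batch dimension last (Julia is column-major), whereas Keras/NumPy code usually puts the batch first. A quick sketch:

using Flux

x_mlp = rand(Float32, 81, 32)      # (features, batch): 81 inputs, 32 samples
size(Dense(81 => 10)(x_mlp))       # (10, 32)

# Conv layers expect WHCN order: (width, height, channels, batch)
x_cnn = rand(Float32, 9, 9, 1, 32)
size(Conv((3, 3), 1 => 8)(x_cnn))  # (7, 7, 8, 32) with no padding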

Either way works but presumably you save once and train multiple times, so it would be more efficient to do those transformations before saving the data.
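For example (a sketch; the file and variable names are placeholders):

using JLD2

# transform once, save the ready-to-train arrays, reuse them for every run
jldsave("go_training_data.jld2"; X = Xmat, y = Ymat)
# ... later, at the start of each training run:
Xmat, Ymat = load("go_training_data.jld2", "X", "y")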

At any rate

model = Chain(Dense(boardsize => 1000, relu),
              Dense(1000 => 500, relu),
              Dense(500 => boardsize, σ))

should almost certainly start with boardsize^2.
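A minimal sketch of what I mean, assuming a 9×9 board (the output size is my assumption too, matching the 81-element one-hot move targets):

boardsize = 9
model = Chain(Dense(boardsize^2 => 1000, relu),
              Dense(1000 => 500, relu),
              Dense(500 => boardsize^2, σ))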

For what it’s worth, the speed of the board code can be improved substantially with better data structures, naturally at the cost of higher code complexity.

@StatisticalMouse
I have created the training data myself, by Monte Carlo simulation.

Since @GunnarFarneback already quoted my code with the MLP, I would prefer to stay with that example.

I converted the original Python implementation
into this.
I probably fail at the reshaping.

So after reshaping, X should contain a Vector of Vectors with 81 elements!? While the inner vector of y marks where the stone has been set:
moveonehot[encodepoint(encoder, move.point)] = 1
with:

function encodepoint(oplenc::OnePlaneEncoder, pt::Point)
    return oplenc.boardwith*(pt.row - 1) + pt.col
end

found here. (But I have to admit, I have not looked at the code for a long time. Additionally, I possibly cannot figure out what this one-hot has to do with winning or losing. But that is another story.)
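For illustration (my own made-up numbers): with a board width of 9, a move at row 2, col 3 gets index 9*(2 - 1) + 3 = 12, so the target vector is built like

moveonehot = zeros(UInt8, 81)
moveonehot[12] = 1   # one-hot marking the played point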

The training data is generated by

The training/testing data has the form:

  xs = Vector{Array{Int8, 3}}[]
  ys = Vector{Vector{UInt8}}[]

The inner array is of the form:

if gostring.color == nextplayer
    boadmatrix[1, r, c] = 1
else
    boadmatrix[1, r, c] = -1
end

@GunnarFarneback do you have an example of such better data structures?

For the MLP I tried to reshape the inner arrays.
X has the shape Vector{Vector{Array{Int8, 3}}}: # games (= 400) × # steps × (1, 9, 9). The number of steps per game is not constant, so I cannot transform it into a single array.
I try to flatten the innermost arrays so that I get a Vector{Int8} for each position. But during the process:

reduce(hcat, [reshape(x, length(x)) for X2 in X for x in X2])

I am making some kind of error, because I also need to unroll the intermediate vector, and the dimensions end up in the wrong order. And I am not sure whether I get the correct size, which should be (# total steps, 81). Or not?
For the CNN I try to reshape with:

X2 = reduce(vcat, [x for X2 in X for x in X2])

But here again the order of the array is wrong? It should be (# total steps, 1, 9, 9), right?
I have to cat along dims=3 or dims=4, but how do I do it with reduce?

regards

You can use permutedims, possibly in combination with reshape, to reorder your dimensions.
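For example, with the (1, 9, 9) arrays from this thread (a sketch):

x = rand(Int8, 1, 9, 9)            # (plane, row, col)
y = permutedims(x, (3, 2, 1))      # (col, row, plane), i.e. size (9, 9, 1)
z = reshape(y, 9, 9, 1, 1)         # append a singleton batch dimension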

The reshape(x, length(x)) in your comprehension can be written more shortly as vec(x).
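Putting those together for the MLP case, something like this should give the 81 × (total steps) matrix that Flux’s Dense layers expect (a sketch under the assumptions above; vec flattens column-major, so I permute rows and columns first to stay consistent with encodepoint’s 9*(row - 1) + col indexing):

flatten_board(x) = vec(permutedims(x, (1, 3, 2)))
Xmat = reduce(hcat, [flatten_board(x) for game in X for x in game])
# size(Xmat) == (81, total_steps); likewise for the one-hot targets
# (Y is my name for the target data, structured like ys above):
Ymat = reduce(hcat, [Float32.(y) for game in Y for y in game])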

I don’t think that’s what you really need but the way to go about it would be

reduce((x, y) -> cat(x, y; dims = 3), v)
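Applied to the CNN data, that would look something like this (a sketch, assuming the (1, 9, 9) inner arrays from above):

boards = [reshape(permutedims(x, (3, 2, 1)), 9, 9, 1, 1) for game in X for x in game]
Xcnn = reduce((a, b) -> cat(a, b; dims = 4), boards)
# size(Xcnn) == (9, 9, 1, total_steps), the WHCN layout Flux's Conv expects
# (on Julia >= 1.9, stack over the permuted (9, 9, 1) boards gives the same result)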