It is too easy to beat AlphaGo.jl

At least on https://fluxml.ai/experiments/go/, it suffices to create an “eye” and grow out from it.
Maybe the network is not trained enough, but maybe something is wrong.

@MikeInnes
@tejank10

I am just getting started with machine learning.

I have to say the documentation for Flux.jl (https://fluxml.ai/Flux.jl/stable/) is amazing. Thank you for the good work.

2 Likes

Yeah, agreed. It’s harder than people think to make it good.

I think it’s just poorly trained. The MNIST classifier is also currently sub-par. It should be able to do much better with the network structure that it’s using.
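For what it’s worth, even a small model should manage that. Here is a hedged sketch in Flux of a typical small convolutional classifier for 28×28 MNIST digits (not necessarily the structure the demo actually uses):

```julia
using Flux

# A small convolutional classifier for 28x28x1 MNIST images. This is just a
# typical structure, not necessarily the one behind the website demo.
model = Chain(
    Conv((3, 3), 1 => 16, relu, pad = 1),
    MaxPool((2, 2)),                      # 28x28 -> 14x14
    Conv((3, 3), 16 => 32, relu, pad = 1),
    MaxPool((2, 2)),                      # 14x14 -> 7x7
    Flux.flatten,
    Dense(7 * 7 * 32, 10),                # logits for the 10 digit classes
)

# Cross-entropy on the logits; `x` is a WxHxCxN batch, `y` one-hot labels.
loss(x, y) = Flux.logitcrossentropy(model(x), y)
```

Trained for a few epochs with ADAM, something in this range usually reaches well above 98% test accuracy.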

1 Like

Yes, I brought this up with the maintainers but they seemed to not think it was a big deal.

https://github.com/FluxML/fluxml.github.io/issues/28

Seeing how weak this is, I was curious and tried playing against it with random moves (a random choice from A-I and 1-9, re-rolling the “dice” if the move is illegal). Every time, the random bot was better during the first 50 moves! (BTW, the flux bot thought passing was its best move on moves 6 and 8!)

Then the random bot made some self-ataris, which changed the position.

But the flux bot is so weak that it screwed it up again…

In this position (clearly winning for the random bot (*)), the flux bot went into a never-ending cycle, with a “dancing” green line at the top of the board.

In my humble opinion (and some may see it otherwise), this is … (I don’t want to be too harsh, so I will put it very politely) disappointing.

If JuliaComputing (**) wants to advertise Julia and show serious work on problems (at least in the pillar packages of the ecosystem), then I propose (and I could well be wrong!) removing this example immediately (or improving it radically).

(*) Yes, black’s random moves still had some chance of committing suicide, but I couldn’t test that due to a bug (in Flux? in the bot? in the JavaScript?). BTW, the white stones around J5 were in atari for several moves…
(**) @MikeInnes is from JuliaComputing, right? (See https://www.youtube.com/watch?v=R81pmvTP_Ik; for the go bot, look around 5:30.)

I think it was trained by a summer intern with limited time. I am a go player and I have read the AlphaGo papers. I am interested in having a crack at it at some point.

1 Like

If the aim is to show how simple it is to work with the API, maybe it could be a good example.

Maybe it would be enough to rename it to “OmegaGo.jl” (see “omega male”) or JokeGo.jl and say clearly that it is not about go or AI but about the API.

And yes, it would be good to see some serious results from the Julia ecosystem, so I am happy if you want to try to improve (meaning rewrite?) it.

I am afraid it is not just about training. With current hardware (and human knowledge) it needs to cooperate with an MCTS component (or something similar). I mean, it is almost unbelievable that MCTS didn’t discover that filling one of the two eyes of your own big group is bad.
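For illustration, that particular check is cheap even without any search. A minimal sketch (the matrix board representation and function names here are mine, not the AlphaGo.jl API; it also ignores diagonals, so it cannot tell a false eye from a real one):

```julia
# A board here is just a 9x9 integer matrix: 0 = empty, 1 = black, 2 = white.
# A point counts as an "eye" for `color` when it is empty and every on-board
# orthogonal neighbour is a stone of that colour. Filling such a point is
# almost always bad, so a move filter can drop these candidates before search.
function is_own_eye(board::AbstractMatrix{<:Integer}, row::Integer, col::Integer, color::Integer)
    board[row, col] == 0 || return false
    for (dr, dc) in ((-1, 0), (1, 0), (0, -1), (0, 1))
        r, c = row + dr, col + dc
        checkbounds(Bool, board, r, c) || continue    # neighbour is off the board
        board[r, c] == color || return false          # neighbour is not ours
    end
    return true
end

# Drop eye-filling moves from a list of legal (row, col) candidates.
filter_moves(board, moves, color) = [m for m in moves if !is_own_eye(board, m..., color)]
```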

I don’t see any problem with having an unsuccessful experiment or a work in progress, but why publish it? :scream:

BTW, if you are going to play with the source code, you could run more automated tests against a random bot.
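Something like this generic harness would do; everything game-specific is injected, so none of the names below pretend to be the actual AlphaGo.jl interface:

```julia
# Generic win-rate harness: all game logic is passed in through `rules`, a
# NamedTuple of functions, so nothing here assumes the real AlphaGo.jl API.
# Expected fields:
#   new_game()          -> initial state
#   legal_moves(state)  -> vector of moves for the player to move
#   play(state, move)   -> next state
#   is_over(state)      -> Bool (e.g. two consecutive passes)
#   winner(state)       -> 1 if player 1 won, 2 if player 2 won
function win_rate(rules, bot_policy; n_games = 100)
    wins = 0
    for _ in 1:n_games
        state  = rules.new_game()
        player = 1                                      # 1 = bot, 2 = random
        while !rules.is_over(state)
            moves  = rules.legal_moves(state)
            move   = player == 1 ? bot_policy(state, moves) : rand(moves)
            state  = rules.play(state, move)
            player = 3 - player
        end
        wins += rules.winner(state) == 1
    end
    return wins / n_games
end
```

Calling `win_rate(rules, my_bot_policy; n_games = 200)` would then give the bot’s win rate against pure random play.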

It is something of a philosophical question whether there is any play more stupid than random moves (i.e. you need some intelligence to be able to choose moves worse than random). My guess is that the random bot would win at least 10% of games against the current version… :stuck_out_tongue:

1 Like

Agreed, this level of play cannot be explained by a lack of training. Even random play enhanced with a tiny bit of search should beat pure random play handily.

Philosophical question indeed. The answer depends critically on the definition of random play, the choice of go rules, and the definition of stupid play. The full range from “no” to “yes” via “meaningless question” is plausible.

You could propose another definition! :slight_smile: Here I was trying to answer the practical question of how much more stupid a go bot could be (in the eyes of a public audience) than the flux bot.

Maybe people who are not go players or AI researchers could think it is a good example. But from a slightly more experienced point of view it is a very, very, very stupid bot (very probably the most stupid ever published).

Please don’t get me wrong! I just think that the Julia community needs some self-reflection and some kind of internal process to support quality and discourage non-quality.
(At least if we don’t want to look like a bunch of … who seem happy with similarly impractical … “toys”.)

Maybe some “review” category on Discourse, where constructive criticism would be wanted and welcomed, could help here?

@Liso IIUC, this is an open-source, MIT-licensed project: GitHub - tejank10/AlphaGo.jl: AlphaGo Zero implementation using Flux.jl. Criticizing is fine and all, but actually figuring out what’s wrong and fixing it, or proposing a fix, is more constructive IMO. Please take this as a mild criticism of your criticism :slight_smile:

3 Likes

(BTW, I am quite sad to see how often this kind of excuse is (mis)used for results around here.)

I probably didn’t describe the problem clearly. It is perfectly fine to have a draft WIP project at that level of immaturity on GitHub (every project needs to start at a basic level).

The problem is publishing/advertising it on the Flux page, and publishing/advertising it at the conference.

From my point of view, using a project at this level of immaturity as an example of Flux usage damages Flux’s (and very probably JuliaComputing’s) reputation.

Well, the first one is easy to answer: probably everything is wrong :stuck_out_tongue:

How to fix some things:

Go and GUI:

  1. Solve the deadlock/livelock bug. (I played 3 games yesterday and 2 ended this way, so it should not be difficult to reproduce the problem.)
  2. Find end-of-game criteria. The bot keeps playing in hopeless positions, for example with fewer than 5 legal points left to play (and without any chance of making a living group). Without this, I am not sure how MCTS could work at all!
  3. If MCTS starts working properly, it has to report some number of lost and some number of won simulated games. Define a threshold (for example 90% of games lost) as a resign threshold; it is a pity to play against a stubborn machine. (Show this winning-expectation percentage on screen.)
  4. Give the possibility to save the game (in SGF format, for example; see the sketch after this list). This should be very easy.
  5. Create some versioning scheme and show the bot’s version (I propose something like 0.0.1 at the moment). It could help people forgive bugs and weaknesses and give them some hope for the future! :slight_smile:
  6. Add an undo option (this one probably in the future, for when someone wants to analyze a game).
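Point 4 really is only a few lines. A rough sketch of an SGF writer (the (color, col, row) move representation, with (0, 0) for a pass, is my assumption, not AlphaGo.jl’s):

```julia
# Minimal SGF writer. Moves are assumed to be a vector of (color, col, row)
# tuples with color in (:black, :white), 1-based coordinates, and (0, 0) for
# a pass. SGF encodes coordinates as lowercase letters, so column 1 / row 1
# becomes "aa", and an empty value encodes a pass.
sgf_coord(col, row) = string('a' + col - 1, 'a' + row - 1)

function save_sgf(path::AbstractString, moves; size::Integer = 9, komi = 5.5)
    open(path, "w") do io
        print(io, "(;GM[1]FF[4]SZ[$size]KM[$komi]")
        for (color, col, row) in moves
            tag = color == :black ? "B" : "W"
            val = col == 0 ? "" : sgf_coord(col, row)   # empty value = pass
            print(io, ";$tag[$val]")
        end
        print(io, ")")
    end
end

# Example: three opening moves and a white pass.
save_sgf("game.sgf", [(:black, 3, 3), (:white, 7, 7), (:black, 7, 3), (:white, 0, 0)])
```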

AI:

  1. This one is probably the hardest. Try to show that Flux can do some real work here! :wink:
  2. Pit the trained version against the untrained one and show the results.
  3. Try the bot against other bots, offline and online on go servers (for example KGS or OGS), and show the results.
  4. Beat the best bots in specialized competitions :wink:
  5. Give the best human players a 5-stone handicap and crush them.

Meta:

  1. You don’t need to hire the European go champion (Fan Hui) like DeepMind did; at least consult some go player about the product before you sell it (I mean, before you show it at a conference).
  2. Remove it from the Flux web page, or describe it properly as something very, very, very draft…
  3. Try to create, or help create, a team where people can work on the partial tasks (some of which I listed above).
  4. Try to motivate teachers and students to participate in the partial work (there are people who like working on things like this).

Some of these proposals are easy to fulfill (given an understanding of the problem and the will to solve it), some are harder, and some are really hard (maybe impossible).

There is still the possibility of resigning and starting to do something different. Sometimes that is the best option :wink:

EDIT:

  1. Mark the last placed stone differently.
  2. Add a resign button for the human player (although it is not needed now :stuck_out_tongue: it might be useful in the future).
  3. There are plenty of topics on how to negotiate the result (useful in human games too); for example, the status of living groups could be resolved by resuming play in the disputed position, etc. This is probably a more advanced topic, and I am not sure there is any will to analyze it here.

But maybe I should emphasize that I don’t see the biggest problem in the technical weaknesses of that particular project!!

I see it (and sorry, I don’t know how to say this more mildly) in the level of professionalism that chose this project as a public example of Flux’s usability.

1 Like

BTW, choosing to build a go bot is a good idea. It is what DeepMind did to show Google that the company was worth buying. It is a very good area in which to start testing and to show that the AI works as it should.

So it is very good to test Flux in this area too. :slight_smile:

1 Like

Yes, the AlphaGo model is not fully trained. Building a good AlphaGo model is a very non-trivial project even if you have a huge team of Google engineers at your disposal. Aside from the basic engineering of the model itself, ML papers don’t generally have a high standard for reproducibility, which means a lot of time needs to be spent just figuring out hyper-parameters. Then, even once you are seeing improvement during training, doing a full run means tens of thousands of dollars of compute time.

In my opinion our GSoC students made quite remarkable progress in the face of these challenges, and we wanted to showcase their hard work. We should probably add a note to the website just to set expectations, though. And I wholeheartedly second the idea that anyone interested should check out the repo, give it a go themselves and try to improve on it; we’d happily take improved weights for the website.

11 Likes

Would it be possible to translate the best trained models from http://zero.sjeng.org/ to Flux models?
If so, how can I do that?
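Something along these lines is what I have in mind, though I haven’t checked the exact Leela Zero file layout or the network sizes; the file name, line indices, and layer sizes below are all guesses:

```julia
using Flux

# Very rough sketch, not a working converter. Assumption (to verify against
# the actual Leela Zero format): the weight file is plain text with one
# whitespace-separated line of floats per parameter tensor, describing the
# usual AlphaGo Zero style stack of 3x3 convolutions with batch norm.
parse_lines(path) = [parse.(Float32, split(line)) for line in eachline(path) if !isempty(line)]

# Build a Flux Conv layer from one flat weight vector and one bias vector.
# Flux expects weights shaped (width, height, in_channels, out_channels);
# the exact memory order in the source file (and whether kernels must be
# flipped) would need to be checked and fixed with permutedims/reverse.
function make_conv(flat_w::Vector{Float32}, bias::Vector{Float32}, k, cin, cout)
    w = reshape(flat_w, k, k, cin, cout)
    return Conv(w, bias, relu; pad = k ÷ 2)
end

lines = parse_lines("leelaz-weights.txt")               # hypothetical file name
input_conv = make_conv(lines[2], lines[3], 3, 18, 256)  # line indices and sizes are guesses
model = Chain(input_conv #= , residual blocks, policy/value heads … =#)
```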