Seeing how weak this is, I was curious and tried playing against it with random moves (random choice from A-I and 1-9, re-rolling the "dice" if the move is illegal). Every time, the random bot was ahead during the first 50 moves! (BTW, the flux bot thought pass was the best move on its 6th and 8th moves!)
Then the random bot made some self-ataris, which changed the position.
But the flux bot is so weak that it threw the game away again…
In this position (clearly winning for the random bot (*)), the flux bot went into a never-ending cycle with a "dancing" green line at the top of the board:
In my humble opinion (and some may see it otherwise), this is… (I don't want to be too harsh, so I'll say it very politely) disappointing.
If JuliaComputing (**) wants to advertise Julia and show serious work on problems (at least in the pillar packages of the ecosystem), then I propose (and I could well be wrong!) removing this example immediately (or improving it radically).
(**) @MikeInnes is from JuliaComputing, right? (see https://www.youtube.com/watch?v=R81pmvTP_Ik ; BTW, for the go bot, look around 5:30)
If the goal is to show how simple it is to work with the API, maybe it could be a good example.
Maybe it is enough to rename it to "OmegaGo.jl" (see "omega male") or JokeGo.jl and state clearly that it is not about go or AI, but about the API.
And yes, it would be good to see some serious results from the Julia ecosystem, so I would be happy if you want to try to improve (meaning rewrite?) it.
I am afraid it is not just a matter of training. With current hardware (and human knowledge), it needs to cooperate with an MCTS component (or something similar). I mean, it is hard to believe that MCTS didn't discover that filling one of the two eyes of your own big group is bad.
I don't see any problem with having an unsuccessful experiment or a work in progress, but why publish it?
BTW, if you are going to play with the source code, you could run more automated tests against a random bot.
It is something of a philosophical question whether there is a more stupid way to play than random moves (i.e., you need some intelligence to be able to choose worse moves than random). My guess is that a random bot will win at least 10% of games against the current version…
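For reference, the re-rolling random bot described above fits in a few lines. This is just a sketch (in Python for brevity; the project itself is Julia), and the `is_legal` callback and the A-I/1-9 coordinate scheme are placeholders matching the description above, not anything from AlphaGo.jl:

```python
import random

def random_move(is_legal, cols="ABCDEFGHI", rows=range(1, 10)):
    """Pick a uniformly random coordinate, re-rolling the 'dice'
    until the chosen move is legal."""
    while True:
        move = (random.choice(cols), random.choice(list(rows)))
        if is_legal(move):
            return move
```

Hooking something like this into an automated test harness would make the "random bot wins X% of games" claim easy to measure instead of eyeballing.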
Agreed, this level of play cannot be explained by a lack of training. Even random play enhanced with a tiny bit of search should beat pure random play handily.
Philosophical question indeed. The answer depends critically on the definition of random play, the choice of go rules, and the definition of stupid play. The full range from “no” to “yes” via “meaningless question” is plausible.
You could propose another definition! Here I was trying to answer the practical question of how much more stupid a go bot could be (in the eyes of a public audience) than the flux bot.
Maybe people who are not go players or AI researchers could think it is a good example. But from a slightly more experienced point of view, it is a very, very, very weak bot (very probably the weakest ever published).
Please don't get me wrong! I just think that the Julia community needs some self-reflection and some kind of internal process to support quality and discourage non-quality.
(At least if we don't want to look like a bunch of … who seem happy with similarly impractical … "toys".)
Maybe a "review" category on Discourse, where constructive criticism is wanted and welcomed, could help here?
@Liso IIUC, this is an open-source, MIT-licensed project: https://github.com/tejank10/AlphaGo.jl. Criticizing is fine and all, but actually figuring out what's wrong and fixing it, or proposing a fix, is more constructive IMO. Please take this as mild criticism of your criticism.
(BTW, I am quite sad to see how often this kind of excuse is (mis)used for results here.)
I probably didn't describe the problem clearly. It is perfectly fine to have a draft WIP project at that level of immaturity on GitHub (every project needs to start at a basic level).
The problem is here (publishing/advertising it on the flux page) and here (publishing/advertising it at a conference).
From my point of view, using a project at this level of immaturity as an example of using flux damages flux's (and very probably JuliaComputing's) reputation.
Well, the first one is easy to answer: there is probably everything wrong.
How to fix some things:
Go and GUI:
solve the deadlock or livelock bug (I played 3 games yesterday and 2 ended this way, so it shouldn't be difficult to reproduce the problem).
find an end-of-game criterion. The bot keeps playing in hopeless positions, for example with fewer than 5 legal points left to play (and without any chance to make a living group). Without this, I am not sure how MCTS could work at all!
once MCTS works properly, it will report some number of won and lost playouts. Define a threshold (for example, 90% of playouts lost) as a resign threshold; it is a pity to play against a stubborn machine. (Show this winning-expectation percentage on screen.)
add the ability to save a game (in SGF format, for example); this should be very easy.
create a versioning scheme and show the bot's version (I propose something like 0.0.1 at this point); it could help people forgive bugs and weaknesses and give them some hope for the future!
add an undo feature (probably for the future, when one would like to analyze a game).
This one is probably the hardest. Try to show that flux can actually do some work here!
pit the trained version against the untrained one and show the results.
run the bot against other bots, offline and online, on go servers (for example KGS or OGS), and show the results.
beat the best bots in specialized competitions.
give the best human players a 5-stone handicap and crush them.
you don't need to hire the European go champion (Fan Hui) like DeepMind did; at least consult some go player about the product before you sell it (I mean, before you show it at a conference).
remove it from the flux web page, or describe it properly as something very, very much a draft…
try to create (or help create) a team where people can work on the partial tasks (some of which I listed above).
try to motivate teachers and students to participate in this partial work (there are people who like working on things like this).
Some of these proposals are easy to fulfill (given an understanding of the problem and the will to solve it), some are harder, and some are really hard (maybe impossible).
There is still the option to resign and start doing something different. Sometimes that is the best option.
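The resign-threshold idea from the list above is trivial to state in code. A minimal sketch (the function names and signatures are my own invention, not from AlphaGo.jl; Python here for brevity, though the project is Julia):

```python
def win_rate(won_playouts, total_playouts):
    """MCTS-estimated probability of winning, suitable for on-screen display."""
    return won_playouts / total_playouts if total_playouts else 0.5

def should_resign(won_playouts, total_playouts, threshold=0.10):
    """Resign once the estimated win rate drops below the threshold,
    i.e. roughly 90% of playouts are lost, as proposed above."""
    return win_rate(won_playouts, total_playouts) < threshold
```

The hard part is of course getting MCTS to produce meaningful playout statistics in the first place; once it does, both the on-screen percentage and the resign logic fall out of the same numbers.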
mark the last stone played differently.
add a resign button for the human player (it is not needed now, but it may be useful in the future).
there are plenty of topics around negotiating the result (useful in human games too); for example, the status of living groups could be resolved by resuming play in the disputed position, etc. This is probably a more advanced topic, and I am not sure there is any will to analyze it here.
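To back up the earlier claim that saving a game in SGF "should be very easy": here is a minimal sketch (Python; the (color, col, row) move representation is a hypothetical internal format, not what AlphaGo.jl actually uses):

```python
def to_sgf(moves, size=9, komi=6.5):
    """Serialize a move list to a minimal SGF string.
    `moves` is a list of (color, col, row); color is "B" or "W",
    col/row are 0-based board indices. SGF encodes coordinates as letters a..s."""
    letters = "abcdefghijklmnopqrs"
    header = f"(;GM[1]FF[4]SZ[{size}]KM[{komi}]"
    body = "".join(f";{c}[{letters[x]}{letters[y]}]" for c, x, y in moves)
    return header + body + ")"
```

A real implementation would also record passes (an empty `[]` move in SGF) and some game metadata, but even this much is enough to open a saved game in any SGF viewer.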
But maybe I have to emphasize that I don't see the biggest problem in the technical weaknesses of that particular project!!
I see it (and sorry, I don't know how to say this more mildly) in the level of professionalism that chose this project as a public example of flux's usability.
BTW, choosing to build a go bot is a good idea. It is what DeepMind did to show Google that the company was worth buying. It is a very good area in which to start testing and to show that AI works as it should.
Yes, the AlphaGo model is not fully trained. Building a good AlphaGo model is a very non-trivial project even if you have a huge team of Google engineers at your disposal. Aside from the basic engineering of the model itself, ML papers don’t generally have a high standard for reproducibility, which means a lot of time needs to be spent just figuring out hyper-parameters. Then, even once you are seeing improvement during training, doing a full run means tens of thousands of dollars of compute time.
In my opinion our GSoC students made quite remarkable progress in the face of these challenges, and we wanted to showcase their hard work. We should probably add a note to the website just to set expectations, though. And I wholeheartedly second the idea that anyone interested should check out the repo, give it a go themselves and try to improve on it; we’d happily take improved weights for the website.