The 2048 challenge: fastest ways to simulate games of 2048 in Julia

I am sure many of you have played the game 2048. I have been writing a simulation engine for 2048 in Julia, which I hope to eventually contribute back to a package like Reinforce.jl as an environment. If you enjoy programming, this could be something fun for an afternoon, especially with Christmas coming up in about two months' time.

Here is the challenge: simulate games of 2048 by random play as efficiently as possible. Here are the requirements:

  • It must return the initial state and a sequence of steps leading to the final state, so the simulation can be "replayed". It won't work to just simulate the game and throw away all the intermediate steps.
  • Input and output formats are entirely up to you.

Here is my entry. On a $14 JuliaBox subscription I get the following timing using @btime:

122.599 μs (705 allocations: 448.83 KiB)

I have made only one optimisation: store the entries as the exponents 0, 1, 2, 3, … instead of the tile values [0, 2, 4, 8, 16, …], so that each cell of the 4×4 (16-cell) game grid fits in an Int8.
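The exponent encoding can be sketched like this (illustrative names, not the actual Game2048.jl API):

```julia
# Store the exponent n instead of the tile value 2^n, so every cell
# fits in an Int8 and 0 can mean "empty".
board = zeros(Int8, 4, 4)
board[1, 1] = Int8(1)    # represents the tile 2^1 = 2
board[2, 1] = Int8(11)   # represents the tile 2^11 = 2048

# Only convert back to tile values when displaying the board:
tilevalue(n::Integer) = n == 0 ? 0 : 2^Int(n)

tilevalue(board[2, 1])   # 2048
```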

Feel free to make a PR if you like, so all the code can be stored in one place and tested on JuliaBox.com:
https://github.com/xiaodaigh/Game2048.jl

Edit:

BTW, the fastest 2048 implementation I can find on the internet is this C++ solution: https://stackoverflow.com/questions/22342854/what-is-the-optimal-algorithm-for-the-game-2048/22498940#22498940. It claims to process

over 10,000,000 game states per second on one core of my mid-2011 laptop

So I wonder if the same speed is possible in Julia; my implementation is orders of magnitude slower. Perhaps you can see this as a community effort to get the best possible solution in Julia and see how close we can come to the best known C++ implementation.


Why is this an interesting challenge? If I understand correctly, we are talking about a 4×4 grid of integers that happen to be powers of two, 2^n with 1 ≤ n ≤ 11, so using n each cell can be encoded as a UInt8. Then we are talking about manipulating a state of 16 bytes.

I find these challenges interesting, just like I find chess AI programming interesting. I probably won’t have time to contribute to your code, but can offer some feedback:

Your program seems to benchmark a complete game, defined as a series of fully random moves; is that correct? It's usually more relevant to talk about states evaluated per second in this context. Even so, slower algorithms can outperform faster ones when they have better (but more expensive) heuristics and board scoring. States can also become slower to evaluate as the game progresses, so it's not so easy to measure.

There are tons of techniques that can be used to speed up your code, most of them invented a long time ago for chess AIs. The first thing you want to do is to find efficient ways to represent the board and do moves and validation. This normally means using bitboards (packing information as bits in larger integers), and pre-calculating as much as you can. Also, never allocate new data during the search. Pre-allocate everything.
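The bitboard and pre-calculation ideas can be sketched in Julia as follows (a hedged sketch, not code from any of the linked projects): with 4-bit exponents per cell, a row is a UInt16 and the whole board packs into one UInt64, so every possible row move can be precomputed once.

```julia
# Merge one row of four 4-bit exponents to the left (0 = empty cell).
function moverow_left(row::UInt16)
    cells = [(row >> s) & 0xf for s in (0, 4, 8, 12)]
    tiles = filter(!iszero, cells)                 # slide: drop empty cells
    merged = UInt16[]
    i = 1
    while i <= length(tiles)
        if i < length(tiles) && tiles[i] == tiles[i + 1]
            # Merging equal tiles bumps the exponent by one;
            # saturate at 15 so the result stays within 4 bits.
            push!(merged, min(tiles[i] + 1, 0xf))
            i += 2
        else
            push!(merged, tiles[i])
            i += 1
        end
    end
    out = UInt16(0)
    for (j, v) in enumerate(merged)
        out |= v << (4 * (j - 1))
    end
    out
end

# Precompute all 2^16 possible row results once; a full board move is
# then just 4 table lookups on the four 16-bit rows of a UInt64.
const LEFT_TABLE = [moverow_left(UInt16(r)) for r in 0:65535]
```

The other three directions follow by reversing rows and transposing the board, so one table serves all moves.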

If you look at the stackoverflow link you posted, you'll see that a lot of effort was put into doing exactly this, so I would advise studying that code to get some ideas. Moreover, that project focuses on making the AI as strong as possible, which I think is a more interesting question than just optimizing moves and state evaluation.


Nice, @xiaodai. I would be happy to also add it eventually as an environment to DiscreteEnvironments.

Looks nice. Currently it's too slow to contribute, though.

@xiaodai A nice challenge!!

We could also host the challenge on nextjournal:

https://nextjournal.com/sdanisch/2048-challenge

People can remix the solution, execute it in the same environment, and publish a new version when they find something nicer 🙂 We could have a main article that aggregates the best results!


It would be good to have a test case with a fixed RNG seed (if that makes sense here; I haven't looked at the actual challenge carefully yet).
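One way to get such a test case (a sketch; the actual move encoding in Game2048.jl may differ) is to thread an explicitly seeded RNG through the simulation instead of relying on the global one:

```julia
using Random

# Re-seeding reproduces the exact same random sequence, so a whole
# game replayed with the same seed makes a deterministic test case.
rng = MersenneTwister(42)
moves1 = rand(rng, 1:4, 10)               # e.g. 10 random moves, 1=up … 4=right

moves2 = rand(MersenneTwister(42), 1:4, 10)
moves1 == moves2                           # true: same seed, same game
```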

Let me be more direct: it doesn't make any sense, IMO, to minimize the total runtime of the game. The point of the game is to keep playing for as long as possible. With @btime, the reported time will likely represent the worst game played (moves and luck so bad that it was an almost immediate game over), and so it will encourage making the AI as bad as possible.

Yes, we can fix the random seed to always time the same thing, but first of all that severely limits how you can modify and optimize the algorithm (there's randomness throughout the game, so it will force you to always make the exact same moves and issue the same rand calls in the same order), and second, what appeals to me about a board game AI is improving the AI, not making it play fast but dumb.

A better measure would be something like what they did in the stackoverflow post, e.g. the median score over 100 games with processing time limited to 10 ms per move.
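That metric is easy to sketch; here `play_game` is an assumed interface (any function taking a per-move time budget in seconds and returning a final score), not part of any of the linked code:

```julia
using Statistics

# Evaluate an agent by the median score over many games.
function evaluate(play_game; ngames = 100, move_budget = 0.010)
    scores = [play_game(move_budget) for _ in 1:ngames]
    median(scores)
end

# Stub standing in for a real agent, just to show the interface:
evaluate(budget -> rand(1000:20000))
```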
