Previously I have been using LRSLib (which calls lrsnash from the C library lrslib) to compute Nash equilibria of two-player simultaneous-move zero-sum games. However, I wanted a pure Julia implementation, both for performance and for ease of use, since LRSLib requires additional setup beyond Pkg. This is my implementation so far, which works on the (very limited) test cases I have run it on (adapted from https://people.csail.mit.edu/costis/6896sp10/lec2.pdf):

```julia
using JuMP
import GLPK

# LP formulation from https://people.csail.mit.edu/costis/6896sp10/lec2.pdf
function nash(R::Matrix{Float64})
    m, n = size(R)
    # Nash equilibrium for the row player: maximize the guaranteed payoff z.
    model = Model(GLPK.Optimizer)
    @variable(model, z)
    @variable(model, x[1:m] >= 0)
    @objective(model, Max, z)
    @constraint(model, R' * x .>= z)   # x'R >= z against every column
    @constraint(model, sum(x) == 1)    # x is a probability vector
    optimize!(model)
    z_return = value(z)
    x_return = value.(x)
    # Nash equilibrium for the column player: minimize the payoff ceiling z_1.
    model = Model(GLPK.Optimizer)
    @variable(model, z_1)
    @variable(model, y[1:n] >= 0)
    @objective(model, Min, z_1)
    @constraint(model, R * y .<= z_1)  # Ry <= z_1 against every row
    @constraint(model, sum(y) == 1)    # y is a probability vector
    optimize!(model)
    y_return = value.(y)
    return z_return, x_return, y_return
end
```

Note that while the Nash equilibrium value itself is the same for both LPs, I am solving both in order to get the mixed strategy for the column player (maybe this is not necessary?). For large matrices, this outperforms LRSLib beautifully. The results below are from BenchmarkTools (points are means, error bars are standard deviations; it's more obvious in the next plot, but the distributions should not be assumed normal, I just wanted to show some measure of variance).
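On the "maybe not necessary?" point: the column player's LP is exactly the dual of the row player's LP, so it may be possible to skip the second solve and read the column strategy off the duals of the first model's column constraints. A sketch of that idea (my assumption, only checked on toy games; the sign of `dual` depends on JuMP's convention for `>=` constraints in a maximization, so I normalize defensively):

```julia
using JuMP
import GLPK

# Sketch (untested assumption): recover the column player's strategy from
# the duals of the row player's LP, so only one LP solve is needed.
function nash_via_duals(R::Matrix{Float64})
    m, n = size(R)
    model = Model(GLPK.Optimizer)
    set_silent(model)
    @variable(model, z)
    @variable(model, x[1:m] >= 0)
    @objective(model, Max, z)
    # One named constraint per column, so we can query its dual afterwards.
    @constraint(model, cols[j = 1:n], sum(R[i, j] * x[i] for i in 1:m) >= z)
    @constraint(model, sum(x) == 1)
    optimize!(model)
    # By LP duality, the column-constraint duals are (up to sign) the column
    # player's mixed strategy; normalize to sidestep sign conventions.
    y = abs.(dual.(cols))
    y ./= sum(y)
    return value(z), value.(x), y
end
```

For matching pennies, `nash_via_duals([1.0 -1.0; -1.0 1.0])` should return a value of 0 with both players mixing 50/50.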

However, for smaller matrices, the results are less than inspiring:

Here are my benchmarks (note that LRSLib also requires integer payoffs):

```julia
@benchmark nash(R) setup=(R = rand($i, $i) .- 0.5)
@benchmark LRSLib.nashsolve(trunc.(Int64, R .* 1000), -trunc.(Int64, R .* 1000)) setup=(R = rand($i, $i) .- 0.5)
```

Most of the matrices I input are small. How can I make this more efficient, hopefully to at least be comparable to the LRSLib times? Are there domain-specific optimizations (in more fleshed-out implementations I have versions that solve 1×m, n×1, and 2×2 matrices without linear programming; maybe there are similar tricks for other sizes)? Are there faster ways of formulating the linear programs (I am very new to the area)? Are there ways to make the Julia code itself faster (e.g. type stability, allocations)? Any help on this would be much appreciated.
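One guess I would test first (an assumption on my part, not verified against these benchmarks): for small matrices, most of the time may go into JuMP model construction rather than the simplex iterations, so building the model in direct mode, which skips JuMP's caching and bridging layers, might close some of the gap. A sketch of the row-player LP in that style:

```julia
using JuMP
import GLPK

# Sketch: the same row-player LP, but built with direct_model to cut
# per-solve model-construction overhead (matters most on small instances).
function nash_direct(R::Matrix{Float64})
    m, n = size(R)
    model = direct_model(GLPK.Optimizer())
    set_silent(model)
    @variable(model, z)
    @variable(model, x[1:m] >= 0)
    @objective(model, Max, z)
    @constraint(model, R' * x .>= z)   # guaranteed payoff against each column
    @constraint(model, sum(x) == 1)    # x is a probability vector
    optimize!(model)
    return value(z), value.(x)
end
```

Whether this actually helps here would need benchmarking, but direct mode plus `set_silent` is a cheap thing to try before reaching for closed-form special cases.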