Computing discrete Wasserstein (EMD) distance in Julia?

Is there any library in Julia for computing discrete Wasserstein (EMD) given two discrete distributions?
Python seems to have a tool for this, POT (http://pot.readthedocs.io/en/stable/index.html), but if a pure Julia solution is available, that would be better than going through PyCall.


Thanks for the link, that looks cool! Unfortunately, I don't know of a Julia package for optimal transport.

From a (very) quick glance at the source, it looks like POT does most of the numerical heavy lifting in C/C++ libraries, but the Python code is also non-trivial.

If this is correct, then PyCall looks like a good choice: there is no big speed-up to be had if POT is already using fast C code, and calling the C libraries from Julia would be non-trivial because the Python code is not just a thin wrapper but contains real logic. Of course, if you want to experiment with new algorithms for optimal transport, then this state of affairs sucks (the two-language problem); it also costs you heavy dependencies.

Is this assessment correct? You probably know more about POT than me.

I never used POT, but they do offer a buffet of solvers, have spent the effort to offload a lot of computation into faster languages, and choices like Sinkhorn vs. a direct solver probably matter more than the choice of programming language anyway (algorithm class very often beats implementation/hardware). I'm not sure how fast they are compared to other existing libraries.
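
For reference, here is a minimal sketch of the standard Sinkhorn iteration mentioned above (written from the textbook algorithm, not from POT's implementation):

function sinkhorn(a, b, C; ε = 0.1, iters = 1000)
    K = exp.(-C ./ ε)            # Gibbs kernel for regularization strength ε
    u, v = ones(length(a)), ones(length(b))
    for _ in 1:iters             # alternating scaling updates
        u = a ./ (K * v)
        v = b ./ (K' * u)
    end
    P = u .* K .* v'             # approximate transport plan
    return sum(P .* C)           # approximate transport cost
end

Larger ε converges faster but gives a blurrier approximation of the exact EMD.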

Or are you in the situation where you want to solve many cheap OT problems instead of a few expensive ones? Then you are in a bad situation, I fear :frowning:


I am also new to POT. I had a look at their code and made similar observations to yours. The Python portions of their code are not trivial; I am not sure whether that will cause performance problems.

Yea, I am in the situation where I want to solve many cheap OT problems, many times over :frowning:


Damn. More specifically, are you by chance planning to run kNN-like constructions on a metric space, where each experiment corresponds to one point cloud of samples and the metric (between experiments) is EMD, such that you need, in the worst case, quadratically many EMD solutions? That's my guess, because of the machine-learning tag (learn distributions, not medium-dimensional points) combined with the "many cheap" problems.

I have met people running such computations; they were very, very unhappy with their compute needs (the context was some medical data that came in the form of a (sampled) distribution for each patient, so you have a partially labeled distribution-of-distributions).

I am pretty sure that there is a lot one could do in these settings, but this is algorithmic research rather than coding or googling for packages. It's a fun problem, though, that I've spent some off-time working on.

Edit: If you end up writing a paper-thin wrapper using PyCall around POT, it would be much appreciated if you could make a GitHub repo for it. That way you can also swap out POT for another solver if a faster one is found.
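
For what it's worth, such a wrapper could start out as small as this (a sketch assuming POT is installed in the Python environment that PyCall uses; ot.emd2 returns the optimal transport cost for two histograms and a ground-cost matrix):

using PyCall
const ot = pyimport("ot")

a = fill(1/4, 4)                                 # source histogram
b = [0.1, 0.2, 0.3, 0.4]                         # target histogram
M = Float64[abs(i - j) for i in 1:4, j in 1:4]   # ground-cost matrix
cost = ot.emd2(a, b, M)                          # exact EMD via POT's network simplex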

1 Like

Just found this thread and wanted to chime in with my experience (hoping I am on point here):
that's pretty much what I have: targeting EMD/Wasserstein distance for vector similarity (sequences of word vectors, that is). As it's quite expensive to compute in real time, I model it via a NN: two sequence inputs, a siamese encoder part (weight sharing), and a custom target of which EMD is a component, basically encoding such sequences as dense vector representations which are then queried live (knn over angular distances, SuperBit LSH). To sum it up, what I did was pre-compute/cache, then approximate it afterwards via the NN. I'm highly interested in Flux, to be able to look at this end-to-end.

Ok, can you explain again, also giving orders of magnitude for sizes/dimensions?

E.g. your distributions (sentences, for the earth mover's distance) contain 10-100 points each (e.g. the 10-1000-dimensional output of word2vec for each word in a sentence, using cosine distance), and you want to use an approximate k-nearest-neighbor classifier employing the EMD distance. knn classifiers do a lot of distance evaluations, and EMD is expensive. Therefore, you instead train a neural network NN_map to map sentence -> some_Rk, such that EMD(sentence1, sentence2) ~ dist_cosine(NN_map(sentence1), NN_map(sentence2)). Did I get this right?
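
If so, the training objective could be sketched in Flux roughly like this (the encoder architecture and all dimensions are made-up assumptions, just to fix ideas; each sentence is assumed already pooled into a single 300-d vector):

using Flux, LinearAlgebra

encoder = Chain(Dense(300 => 128, relu), Dense(128 => 64))  # shared ("siamese") weights

cosdist(u, v) = 1 - dot(u, v) / (norm(u) * norm(v))

# Fit the cosine distance between the two encodings to a pre-computed
# EMD target for each training pair (x1, x2, emd_target):
loss(x1, x2, emd_target) = (cosdist(encoder(x1), encoder(x2)) - emd_target)^2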


From this discussion, I found this package


Precisely. In my case these sentences are sets of skills, "java + clojure + …", and, to be complete in practice (although not very relevant for this thread), my custom target relies on several terms in its computation (e.g. one quantity is 1/(1+emd), another quantity is the divergence in LDA distributions, …). In total I have ~3M such encoded sentence vectors in 300d, with the vectors stored offline and knn afterwards.

So, to clarify: each sentence vector has ~300 entries. Each entry is itself a weight and a point in a fixed metric space M. Then you would naively need to solve (or approximate) a lot (billions) of 300 x 300 EMD instances in M. M itself is a moderately small finite metric space (the set of possible "skills", where you presumably learn the metric on M in a separate step). Luckily, M fulfills the triangle inequality, which may or may not help your EMD solver.

Because you jointly approximate EMD + other_target_components by Euclidean-after-NN_map (or, for cosine, Euclidean-after-normalization-after-NN_map), you are free to use kd-trees or similar for your knn search, and don't need to drop down to general metric-space search trees like e.g. cover trees (you already run an SVD in front of the search, right? A kd-tree is not isometry invariant and runs a lot better if you align the coordinate axes with the most relevant dimensions).
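
As an aside, that search step is only a few lines with NearestNeighbors.jl (a sketch on made-up data; the dimension is kept small here because kd-trees degrade in high dimensions, which is exactly where the SVD projection helps):

using NearestNeighbors, LinearAlgebra

X = randn(64, 10_000)                  # columns = embedded sentences (dummy data)
X = X ./ mapslices(norm, X, dims = 1)  # normalize: cosine order ≡ Euclidean order
tree = KDTree(X)
idxs, dists = knn(tree, X[:, 1], 10)   # 10 nearest neighbours of the first point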

Did I get this right?

Interesting, I hadn't encountered this kind of problem before. I have mostly thought about infinite and more geometric, a priori known metric spaces M: e.g. a large vector space where your sentences for EMD happen to be mostly supported on some lower-dimensional "data manifold" (not really a manifold), which you of course don't know a priori. I'm originally coming from the perspective of trying to understand and treat the data manifold as a geometric object. As such, I have unfortunately nothing interesting to say about your problem, except that we probably should use very different approaches :wink:

I just wrote this (with help from the LightGraphs crew) for fun the other day! It uses LightGraphs and LightGraphsFlows:

using LightGraphs, LightGraphsFlows
using Clp: ClpSolver  # LP solver backend for mincost_flow

function EarthMoverDistance(a::Vector, b::Vector)
    @assert(sum(a) ≈ sum(b), "Input vectors do not have equal total mass...")
    @assert(length(a) == length(b), "Input vectors have different lengths...")
    len = length(a)
    g = LightGraphs.complete_digraph(len)
    # Each edge must be able to carry all of the mass; the cost between
    # bins i and j is their distance |j - i| (zero on the diagonal).
    cost, capacity = zeros(len, len), fill(sum(a), len, len)
    demand = b .- a   # net mass each node must receive (negative = supply)
    for i in 1:len, j in 1:len
        if i != j
            cost[i, j] = abs(j - i)
        end
    end
    flow = mincost_flow(g, demand, capacity, cost, ClpSolver())
    return sum(flow .* cost)
end

EarthMoverDistance([1,2,3], [3,2,1])     # should be 4
EarthMoverDistance([3,2,1,4], [1,2,4,3]) # should be 5

Dunno if this really belongs in any packages, but I figured I'd put this out there for someone else :slight_smile:


I have some functionality for optimal transport here:
https://baggepinnen.github.io/SpectralDistances.jl/latest/distances/#SpectralDistances.IPOT-Tuple{Any,Any,Any}


If you are okay with adding LightGraphs and LightGraphsFlows to your package, that little snippet could fit in there if you want :). It could also be improved, I'm sure!

That particular package is focused on computing distances between signals in the frequency domain; if you can argue that it helps with that, it can sure be added. Otherwise it might find a use in

which has both SpectralDistances.jl and LightGraphs as dependencies.


More EMD fun,

function earthmoverdistance(a::Vector, b::Vector)
    # 1D EMD: the L1 distance between the two cumulative sums (CDFs)
    return sum(abs, cumsum(a) .- cumsum(b))
end
P = earthmoverdistance([1,2,3], [3,2,1])     # 4
P = earthmoverdistance([3,2,1,4], [1,2,4,3]) # 5


function earthmoverdistancefaster(A::Vector, B::Vector)
    # Same computation in a single, allocation-free pass
    tot_a, tot_b, emd = 0, 0, 0
    for (a, b) in zip(A, B)
        tot_a += a
        tot_b += b
        emd += abs(tot_a - tot_b)
    end
    return emd
end
P = earthmoverdistancefaster([1,2,3], [3,2,1])     # 4
P = earthmoverdistancefaster([3,2,1,4], [1,2,4,3]) # 5

using BenchmarkTools
@btime earthmoverdistance([3,2,1,4], [1,2,4,3])
@btime earthmoverdistancefaster([3,2,1,4], [1,2,4,3])

Of course this only works for histograms with equally spaced bins (in 1D the optimal plan moves mass monotonically, so the EMD reduces to the L1 distance between the cumulative distributions, times the bin width), but it is good to know how easily the problem reduces in this special case.
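
For what it's worth, the same idea extends to unequal bin positions x by weighting each cumulative difference with the gap to the next bin (a sketch, not tested against a reference implementation):

function earthmoverdistance1d(a::Vector, b::Vector, x::Vector)
    # W1 = ∫ |CDF_a - CDF_b| dx for atoms at positions x (equal total mass)
    cdfdiff = cumsum(a .- b)
    return sum(abs.(cdfdiff[1:end-1]) .* diff(x))
end

earthmoverdistance1d([1,2,3], [3,2,1], [1, 2, 3])  # 4, matching the equal-spacing version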


Looks like Wasserstein distances are now also supported by @zsteve's OptimalTransport.jl!

(Figured I'd add this reply because this Discourse topic is the first search result that appears for me when I search "Julia Wasserstein distance".)


Just to add for future people searching this: there's a whole JuliaOptimalTransport org:
https://github.com/JuliaOptimalTransport


Sorry if this question looks a bit ridiculous. Is it correct that the function emd2 from ExactOptimalTransport.jl (see this link) computes the Wasserstein distance for certain choices of the cost matrix C?
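
If I am reading the docs right, the usage would be something like this (a sketch assuming Tulip.jl as the LP solver; taking the p-th root at the end is my reading of the convention that emd2 returns the optimal cost, i.e. W_p^p when C is the p-th power of a distance):

using ExactOptimalTransport, Tulip

x = [0.0, 1.0, 2.0]                           # common support points
μ = [0.5, 0.2, 0.3]                           # source histogram
ν = [0.1, 0.4, 0.5]                           # target histogram
p = 2
C = [abs(xi - xj)^p for xi in x, xj in x]     # cost = distance^p
Wp = emd2(μ, ν, C, Tulip.Optimizer())^(1/p)   # p-Wasserstein distance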