Distributions.jl has discrete distributions, but these operations are best represented by linear algebra. LinearAlgebra should suffice entirely.
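For example (a minimal sketch with made-up numbers, not anyone's actual API): a conditional p(y|x) is just a column-stochastic matrix, and marginalizing out x is a matrix-vector product.

```julia
using LinearAlgebra

# p(x): prior over 3 states; p(y|x): 2x3 column-stochastic matrix (columns sum to 1)
px  = [0.2, 0.5, 0.3]
pyx = [0.1 0.7 0.4;
       0.9 0.3 0.6]

# marginalization p(y) = Σ_x p(y|x) p(x) is a matrix-vector product
py = pyx * px            # [0.49, 0.51]

# the joint p(x,y) = p(y|x) p(x) as a table: broadcast the prior over the columns
pxy = pyx .* px'         # 2x3, rows index y, columns index x
```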
So I was thinking the same some years ago and got frustrated with
- matching/reshuffling dimensions to fit the “variable” labels. E.g. p(x,y,z) == p(y,x,z), and p(x,y) may be multiplied with p(y,z) with only one variable in the overlapping scope (see the sketch after this list).
- wanting a sparse implementation; I’m also not sure how memory mapping and reshaping of arrays worked back in the day in Python.
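Roughly what that pain looks like with plain arrays (a rough sketch, shapes made up):

```julia
# p(x,y,z) stored with dims (x,y,z) vs. the same table stored as (y,x,z):
# the same distribution, but you must permute before comparing or combining
pxyz = rand(2, 3, 4)
pyxz = permutedims(pxyz, (2, 1, 3))           # dims reordered to (y,x,z)
pxyz == permutedims(pyxz, (2, 1, 3))          # true, but only if you track the ordering yourself

# multiplying p(x,y) with p(y,z): only y is shared, so each array has to be
# reshaped so that broadcasting lines the y dimension up
pxy = rand(2, 3)                              # dims (x,y)
pyz = rand(3, 4)                              # dims (y,z)
pxyz2 = reshape(pxy, 2, 3, 1) .* reshape(pyz, 1, 3, 4)   # dims (x,y,z)
```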
In C++ I have an object whose main data structure is similar to Dict{Vector{Int}, Any} to represent this:
[image (6).png: example table over the variables A, B, C, D]
Along with some other helper data such as the A,B,C,D labels. These are wrapped in an object DiscreteTable, which is a subclass of Factor. The operations were then defined on DiscreteTable, which allowed DiscreteTable to be used in other algorithms designed for Factor classes. But doing anything in that library is such a hassle and a big mess. I’m busy writing up my thesis, and I was wondering if it’s not better to just port the minimal stuff I did to Julia and then be able to write my experiments much faster.
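A direct Julia translation of that idea might look something like the sketch below (the struct and function names are mine, not from any library, and it assumes 1-based state indices):

```julia
# a sparse factor table: assignment vectors -> values,
# plus the variable labels that give each key position its meaning
struct DiscreteTable
    vars::Vector{Symbol}                  # e.g. [:A, :B, :C, :D]
    table::Dict{Vector{Int}, Float64}     # e.g. [1, 2, 1, 3] => 0.05
end

# factor product of two tables that (possibly) share variables
function product(f::DiscreteTable, g::DiscreteTable)
    vars = union(f.vars, g.vars)
    fidx = [findfirst(==(v), vars) for v in f.vars]   # f's positions in the joint key
    gidx = [findfirst(==(v), vars) for v in g.vars]   # g's positions in the joint key
    out = Dict{Vector{Int}, Float64}()
    for (ka, va) in f.table, (kb, vb) in g.table
        joint = zeros(Int, length(vars))              # 0 means "not set yet"
        joint[fidx] .= ka
        # keep only pairs that agree on the shared variables
        all(joint[gidx[i]] in (0, kb[i]) for i in eachindex(gidx)) || continue
        joint[gidx] .= kb
        out[joint] = va * vb
    end
    DiscreteTable(vars, out)
end
```

Marginalization would be similar: drop a label, group the keys on the remaining positions, and sum. Whether this ever beats a labeled (sparse) array is exactly the question, though.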
So I can obviously go with the same structure as C++ in Julia, but I have the same hunch as you @extradosages: LinearAlgebra should be able to handle this problem. I just don’t have any idea how to do the dimension -> variable mapping, and how to keep everything sparse.
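For the dense case, the dimension -> variable mapping doesn’t need much: carry a vector of labels next to the array and align factors by name before broadcasting. Something like this sketch (the names are made up; sparsity would still need a different backing store):

```julia
struct LabeledFactor
    vars::Vector{Symbol}        # one label per array dimension, e.g. [:x, :y]
    data::Array{Float64}
end

function product(f::LabeledFactor, g::LabeledFactor)
    vars = union(f.vars, g.vars)
    # permute and reshape each factor so its dimensions land at the right
    # positions of the joint, with singleton dimensions for missing variables
    expand(h) = begin
        perm  = [findfirst(==(v), h.vars) for v in vars if v in h.vars]
        shape = [v in h.vars ? size(h.data, findfirst(==(v), h.vars)) : 1 for v in vars]
        reshape(permutedims(h.data, perm), shape...)
    end
    LabeledFactor(vars, expand(f) .* expand(g))
end

# p(x,y) * p(y,z) -> p(x,y,z), matched on the shared variable y
pxy  = LabeledFactor([:x, :y], rand(2, 3))
pyz  = LabeledFactor([:y, :z], rand(3, 4))
pxyz = product(pxy, pyz)        # vars == [:x, :y, :z], size (2, 3, 4)
```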
Hmm, this might help me:
Implementing an equivalent of python's xarray (labeled nd-arrays) in julia
I might be wrong, but it seems like AxisArrays.jl (GitHub - JuliaArrays/AxisArrays.jl: Performant arrays where each dimension can have a named axis with values) annotates the “indexes”, i.e. for a 2x2 array each row can get a name and each column can get a name. But what I would need is to let each dimension have a name, like :row and :column, and not each item within that dimension.
This looks exactly like what I want: