Actually, I think this DSL is not the only language that tries to solve these problems at the programming-language level. For example, there is the Halide programming language, and recently I came across another language called Taichi, which focuses on sparse data structures (while Halide focuses on dense data structures). Taichi also supports differentiable programming.
So I think more and more people are trying to optimize specific (high-performance) problems by carefully designing a DSL and building a corresponding compiler to transform the program.
I wonder whether this will somehow affect Julia, since, as a general programming language, it's impossible for it to do better than such specifically-optimized DSLs. We can of course also build an embedded DSL in Julia (and a compiler), like CUDAnative, but I am not sure whether this would be much better than C++…
I just scanned through some tutorials and the workshop paper; it seems that Dex has nothing new or interesting (correct me if I'm wrong), though its syntax looks quite neat:
```
pairwiseL1 :: n=>d=>Real -> n=>n=>Real
pairwiseL1 x = for i j. sum (for k. abs (x.i.k - x.j.k))
```
So basically the author views an array as a function from an indexable space to some element space: `array :: IndexSpace -> ElementType`. This is not a new idea; at least I know Halide adopts this view, and I think Halide's approach is much better and smarter.
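To illustrate that view in Julia (a sketch of my own — `FunArray` is a made-up name, not anything from Dex or Halide), an array really can just be a function over an index space:

```julia
# Sketch: an "array" that is literally a function from indexes to elements.
# FunArray is a hypothetical toy type, not a real package.
struct FunArray{T,N,F} <: AbstractArray{T,N}
    f::F                      # the index-space -> element-space function
    dims::NTuple{N,Int}
end
FunArray{T}(f, dims::Int...) where {T} = FunArray{T,length(dims),typeof(f)}(f, dims)

Base.size(A::FunArray) = A.dims
Base.getindex(A::FunArray{T,N}, I::Vararg{Int,N}) where {T,N} = A.f(I...)

# A 3×3 identity "matrix" with no storage at all:
eye = FunArray{Int}((i, j) -> Int(i == j), 3, 3)
```

Because it implements the `AbstractArray` interface, generic code like `sum(eye)` just works on it, with elements computed on demand.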
I personally suspect that this project will turn out to be another Lens library: using pure functions in a partially extended static type system to implement everything.
I even considered stating there that Julia already uses typed indexes (CartesianIndex <=> LinearIndex), though that might not be entirely what they meant (that would rather be AbstractRange); it is perfectly possible. Also, while I like the style of Haskell for its intuitive suggestion of currying and chaining, we can do similar stuff:
For their example of pairwiseL1, `map(((a, b),) -> sum(abs, a .- b), Iterators.product(eachcol(A), eachcol(A)))` would be valid Julia code.
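To make that concrete (the matrix `A` here is my own toy input — a d×n matrix with one point per column):

```julia
# Two 2-D points stored as the columns of A (toy data for illustration).
A = [1.0 4.0;
     2.0 6.0]

# Pairwise L1 distances between all columns, exactly as in the one-liner above:
D = map(((a, b),) -> sum(abs, a .- b), Iterators.product(eachcol(A), eachcol(A)))
# |1 - 4| + |2 - 6| = 7, so D should be [0 7; 7 0]
```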
Though the important part here is the (imho) very concise term `sum(abs, a .- b)` for the L1 distance between two columns.
This one even preserves custom indexing! And it seems to be fast as well, since the functions are all iterators.
For their second example, L1 between whole images, we don't even care about the shape: `sum(abs, a .- b)`. Hmm, that looks somewhat familiar… I really love Julia!
But most groupings of indexes can be achieved by intermixing integer indexes and CartesianIndexes.
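For example (a small sketch of my own), a `CartesianIndex` can stand in for a group of integer indexes, and `LinearIndices`/`CartesianIndices` convert between the two index types:

```julia
A = reshape(1:24, 2, 3, 4)

# A CartesianIndex groups the first two dimensions into one "typed" index,
# which can be mixed freely with plain integer indexes:
I = CartesianIndex(2, 3)
A[I, 4] == A[2, 3, 4]            # true

# Converting between linear and Cartesian indexing:
L = LinearIndices(A)[2, 3, 4]    # a plain Int
CartesianIndices(A)[L]           # back to CartesianIndex(2, 3, 4)
```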
There's also a German article saying that they are still looking for some native interfacing to move calculation-heavy jobs to a faster language. Wouldn't that be a very nice use case for Julia?
I think that the main reason they use typed indexes is to do bounds and index inference, which needs compiler support. That's also why they are so concerned with shape information: they can use it to do some smart rewrites and get better performance.
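Julia has a small-scale analogue of this: `eachindex` hands back indexes that are guaranteed valid for the array, which is what lets `@inbounds` safely elide the bounds checks (the function name `mysum` is my own):

```julia
# The indexes produced by eachindex(A) are valid for A by construction,
# so the bounds checks inside the loop can be skipped.
function mysum(A)
    s = zero(eltype(A))
    @inbounds for i in eachindex(A)
        s += A[i]
    end
    return s
end

mysum([1, 2, 3])   # 6
```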
They created a whole programming language, which means that what they can do is much more powerful than just lifting type information into type parameters and waiting for the compiler to figure out how to do optimizations.
What they do is quite similar to StaticArrays.jl (though they use an approach called dependent types, there is no big difference in practice, because Julia is a dynamic language and can already mimic such behaviour). But I guess that with a carefully designed type system and a compiler they can achieve more. For example, they might do loop fusion and reordering based on the index pattern: if you index an array with a strictly increasing index, it's impossible to visit an element twice, so it might be safe to parallelize that code (one element per thread). Every compiler can do such optimizations to some extent, but with a carefully designed type system you gain more information and can do more specific ones. I think StaticArrays.jl generally can't do this; of course it can optimize its own functions, but inter-function optimizations still have to be carried out by the compiler.
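The core trick of StaticArrays.jl can be sketched in a few lines (this `StaticVec` is a made-up toy, not the real `SVector` implementation): once the length lives in a type parameter, every loop bound is a compile-time constant, so the compiler can unroll and specialize aggressively:

```julia
# Toy version of the StaticArrays.jl idea: the size is part of the type.
struct StaticVec{N,T}
    data::NTuple{N,T}
end
StaticVec(xs...) = StaticVec(xs)

Base.length(::StaticVec{N}) where {N} = N          # known from the type alone
Base.getindex(v::StaticVec, i::Int) = v.data[i]

# L1 distance with a compile-time-constant loop bound N:
l1(a::StaticVec{N}, b::StaticVec{N}) where {N} =
    sum(abs(a[i] - b[i]) for i in 1:N)

l1(StaticVec(1, 2), StaticVec(4, 6))   # 7
```

Note that dispatch also enforces matching lengths for free: `l1` only accepts two `StaticVec`s with the same `N`.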