I just finished up an LLVM contract porting to a new processor architecture, so I've got a bit of free time and was wondering where I might find a list of LLVM-related items/projects that need attention in Julia. I looked at the GitHub projects but didn't see anything in that category. I'd primarily be interested in backend rather than frontend projects, mainly because that's where my experience is, but I'd be open to either.
There are also more LLVM-specific issues that affect Julia. Those require less knowledge of Julia, so it might be easier for you to get started there. Unfortunately, they aren't always searchable on our issue tracker (we currently don't always keep an issue open for upstream issues).
Here’s a list of LLVM backend bugs that affect us.
Also, note that I just listed all the LLVM bugs affecting us that I know of and realized that almost all of them are backend issues… That's probably not surprising, since we mostly deal with LLVM IR and it's much easier to work with (we don't expose machine IR in Julia, and we don't control machine IR passes at all).
Should some of these be added to the GSoC list? I think we should start readying it for another summer. (Or maybe some of these are too difficult? I don’t know this stuff so I’ll defer to your judgement)
There is also LLVM.jl (created to support CUDAnative.jl), which wraps the C API in an idiomatic, "julian" way. It's pretty incomplete, only serving my needs for now. Not really LLVM development, but I figured I'd mention it anyway.
The way to approach this is to port Julia's dependencies to WebAssembly – or replace them somehow. Getting LLVM to compile itself to wasm is the very first step. This may already work; if not, it's a very natural next step for the wasm backend. Then one needs to figure out what to do about dependencies like BLAS, which have native assembly code in them. Once all dependencies are ported, replaced, or removed, compiling Julia itself should be straightforward.
If linear algebra is moved out to "default packages", then couldn't a wasm version just be built without the extra dependencies, using the generic linear algebra fallbacks? It would drop a little bit of speed but would be usable. That might be the easiest way to get this done.
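As a rough illustration of the fallback idea (sketched here in Python rather than Julia): a dependency-free, generic matrix multiply is correct on any target, including wasm, it's just slower than a tuned BLAS.

```python
# A naive triple-loop matrix multiply: the kind of generic fallback that
# works on any target, trading speed for portability (no native assembly).
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```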
Sure, those are possible solutions: writing BLAS routines in Julia is an option; moving linear algebra stuff out of Base Julia is also an option. That’s why I said “figure out what to do about” rather than “port OpenBLAS” – since porting it seems like the hardest path.
This helps to spot well-performing vs. weak assembly.
There are a lot of good parts there. Three weaknesses, IMHO: the lack of femtolisp tests, runtime tests, and femtolisp docs.
We don't absolutely need to move to git submodules yet. All of this work could be simulated with proper subdirectory discipline. The next step should be to reflect the change in the makefile.
We could later decide, more simply, where to plug bias in.
I'm still a newbie with the Julia codebase, and my viewpoint may be a rough approximation of reality for now.
A hot project for Julia/LLVM would be to manage to run TensorFlow AOT (ahead-of-time) compiled XLA code. This came out with version 1.0 of TensorFlow (a few weeks ago) and, according to the description, allows compiling computation graphs into standalone code using LLVM. This opens up the possibility of integrating TF more tightly with Julia.
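For reference, the XLA AOT path (tfcompile) is driven from a Bazel build rule; a minimal sketch along the lines of the TF 1.0 docs looks roughly like the following (the graph/config filenames and the C++ class name here are placeholders):

```
load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

# Compiles a frozen graph into a standalone C++ class via LLVM,
# with no TensorFlow runtime dependency at execution time.
tf_library(
    name = "matmul_aot",                 # hypothetical target name
    graph = "matmul_graph.pb",           # frozen GraphDef (placeholder)
    config = "matmul_config.pbtxt",      # feeds/fetches spec (placeholder)
    cpp_class = "example::MatMulComp",   # generated class name (placeholder)
)
```

The generated object code and header could then, in principle, be called from Julia like any other C++ library.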
Apparently TF is slowly turning into a Julia clone ;), but TF does have a h-u-g-e dev base, which would allow Julia to ride along with the deep-learning hype.
I would guess that, unfortunately, standard Julia code will generally compile into native code that is too large to be useful for Ethereum, from my limited understanding of Ethereum. That would mean that, as with CUDAnative, only a subset of Julia might be practical.