Bring Julia code to embedded hardware (ARM)

I had actually already created a Julia embedded GitHub group a while ago: https://github.com/juliaembedded. I started with simulating embedded arithmetic, and was going to build up a type system to model fixed-point and arbitrary-precision floating-point arithmetic with selectable rounding modes (including stochastic rounding). The goal is to let an algorithm written in Julia be simulated at the reduced precision found on embedded devices, so designers can see the effect on performance.
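To make the idea concrete, here is a minimal sketch of what such a simulation type could look like. The `Fixed` type, the `tofixed`/`tofloat` helpers, and the 16-bit storage choice are all hypothetical illustrations, not the actual juliaembedded code:

```julia
# Hypothetical sketch of a fixed-point simulation type, NOT the
# juliaembedded implementation. Values are stored as a signed 16-bit
# integer with F fractional bits (a Q-format number).
struct Fixed{F}
    raw::Int16
end

# Quantize a real number to the nearest representable step.
tofixed(x::Real, F::Integer) = Fixed{F}(round(Int16, x * 2^F))

# Recover the real value the raw integer represents.
tofloat(a::Fixed{F}) where {F} = a.raw / 2^F

# Addition: raw values add directly when the formats match.
Base.:+(a::Fixed{F}, b::Fixed{F}) where {F} = Fixed{F}(a.raw + b.raw)

# Multiplication: widen to avoid overflow, multiply, then shift back.
# The right shift is truncation rounding; other modes would plug in here.
Base.:*(a::Fixed{F}, b::Fixed{F}) where {F} =
    Fixed{F}(Int16((Int32(a.raw) * Int32(b.raw)) >> F))

a = tofixed(1.5, 8)   # Q7.8: 8 fractional bits
b = tofixed(0.25, 8)
tofloat(a + b)        # 1.75
tofloat(a * b)        # 0.375
```

Running an algorithm on `Fixed` values instead of `Float64` then shows directly how much accuracy the reduced precision costs.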

The next project was going to be an embedded compiler for Julia: take a Julia function, transform it into runtime-independent code, and feed that into the LLVM backend for the desired target (the LLVM-to-C backend could then be used to generate C code directly, if desired). My main goal was to create static libraries containing the compiled Julia functions, which could then be linked into existing C applications.
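The first half of that pipeline is already visible from the REPL: Julia lowers a function to LLVM IR today, and the IR for a simple numeric kernel is exactly the kind of input a cross-target backend would consume. A small sketch (the `saxpy` function is just an illustrative example):

```julia
using InteractiveUtils  # provides code_llvm outside the REPL

# A pure, allocation-free kernel: a good candidate for standalone
# compilation, since it never touches the Julia runtime.
saxpy(a::Float32, x::Float32, y::Float32) = a * x + y

# Dump the optimized LLVM IR the compiler already produces. An
# embedded compiler would hand IR like this to the backend for an
# ARM target triple instead of the host.
buf = IOBuffer()
code_llvm(buf, saxpy, Tuple{Float32,Float32,Float32})
ir = String(take!(buf))
println(ir)
```

For a kernel like this the IR is self-contained floating-point arithmetic with no runtime calls, which is what makes the static-library route plausible; the hard part is functions that aren't this clean.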

That does have its difficulties, though: you have to extract the memory allocations from the generated code and ensure that they are either routed through the provided embedded allocation functions or that all allocations become static (which is actually the preferred way of handling memory in critical embedded systems). I haven’t had a chance to dig around in the generated code yet, but I think this should be doable in a similar way to how the GPU group accomplished it.
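You can already check the allocation side of this from Julia itself: write the kernel over stack-friendly data (tuples, scalars, immutable structs) and confirm with `@allocated` that it never hits the heap allocator. A sketch, with `dot3` as a made-up example kernel:

```julia
# Sketch: verifying a kernel is allocation-free, which is what a
# static-allocation embedded target would require. NTuple arguments
# live on the stack; a Vector would need the runtime allocator.
function dot3(a::NTuple{3,Float64}, b::NTuple{3,Float64})
    s = 0.0
    @inbounds for i in 1:3
        s += a[i] * b[i]
    end
    return s
end

const va = (1.0, 2.0, 3.0)
const vb = (4.0, 5.0, 6.0)

dot3(va, vb)                      # warm up so compilation isn't counted
nallocs = @allocated dot3(va, vb) # 0 bytes: nothing escapes to the heap
```

A function that reports zero allocations here is one the embedded compiler wouldn't need to rewrite; anything nonzero marks the spots where allocations must be made static or redirected to the embedded allocator.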