Trial of a Commercial Julia AOT Compiler with Offline License Checking

The leadership team at Suzhou-Tongyuan has confirmed that we can provide an offline license checker and a standalone installation of the AOT compiler, independent of the full Syslab package, if there is potential for future commercial cooperation.

EDIT: for more details, see the Discourse post “SyslabCC: Suzhou-Tongyuan’s proprietary Julia AOT compiler is now available for free use (personal & educational license only)”.

If you are interested in the AOT compiler, please DM me.

We look forward to hearing from you regarding:

  1. The specific scenarios and needs for using Julia’s AOT compiler, so we can assess how well our technology meets your requirements.
  2. Basic information about your company, such as a contact email address and a brief description of your company.

Thank you, and we look forward to your response.

We’d also like to announce that our latest AOT compiler can now produce a standalone CMake project containing the generated pure C++ code, together with shared libraries copied from the local Julia installation. The libraries and executables built from the generated CMake project have recently been verified to slightly outperform the same code running in vanilla Julia, based on well-known benchmarks such as the Benchmarks Game’s Julia performance measurements.

Suzhou-Tongyuan has consistently contributed to the Julia ecosystem with various open-source projects, including, but not limited to:

  1. Suzhou-Tongyuan/JNumPy: Writing Python C extensions in Julia within 5 minutes.
  2. Suzhou-Tongyuan/ObjectOriented.jl: Conventional object-oriented programming in Julia without breaking Julia’s core design ideas.
  3. Suzhou-Tongyuan/UnzipLoops.jl: broadcast and unzip it!
  4. Suzhou-Tongyuan/GaloisFieldNumbers.jl: JuliaCN 2022 archived demo repo: How Julia beats MATLAB’s C codes by 1000x.

You guys did a great job!


How well does this compiler cooperate with Lux.jl? Is it practical to generate the inference code for Lux models? Do you have any experience with this use case?


@liuyxpp How type-stable is the code? If the code is fully type-stable, things should mostly work. However, strict type stability is hard, and we usually only see this property in lower-level packages such as DataStructures.jl, StaticArrays.jl, and so on. I’ll give Lux.jl a try on Monday. Do you have any specific downstream example for us to test?
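
A quick way to check this property up front is to use the standard library’s Test tools; a minimal sketch (the function names here are illustrative, not part of SyslabCC):

```julia
using Test

# A type-stable function: the return type follows from the argument types.
square_sum(xs::Vector{Float64}) = sum(x -> x^2, xs)

# A type-unstable function: the return type depends on a runtime value.
unstable(flag::Bool) = flag ? 1 : 1.0

# `Test.@inferred` throws if the actual return type is narrower than what
# the compiler can infer, making it a quick type-stability smoke test.
@inferred square_sum([1.0, 2.0, 3.0])                 # OK: inferred as Float64
@test_throws ErrorException @inferred unstable(true)  # inferred as Union{Int64, Float64}
```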


Will it work with LoopVectorization.jl?

Does it generate and link to BLAS / LAPACK?
If so, will it work with FastLapackInterface.jl?

The code is purely based on basic arrays of primitive types (Intxx, UIntxx, Floatxx).


llvmcall is not supported, according to the announcement.


The tests in Lux.jl/test/enzyme_tests.jl (at commit ca23485e6e50c8b4f4ae7aa6ac76fcf92b040195 of LuxDL/Lux.jl on GitHub) would be nice to try out. These already work with Enzyme, which requires a relatively high degree of type stability.


FYI Enzyme actually works fairly well with type-unstable code at present. There are still a few todos, but generally things work. I haven’t checked recently, but if memory serves, those tests did require type-unstable support.


We have recently uploaded the GitHub repo (github.com/Suzhou-Tongyuan/JuliaCon2024-JuliaAOT) holding the examples for the talk.

See the folder features/compile-time: the recent version of SyslabCC works with Flux in CPU inference mode. The example trains the model at compile time with Flux.jl and CUDA.jl, and exports the model object and an inference function to C++.
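
For illustration, here is a hedged sketch in plain Julia (no Flux dependency) of the kind of fully type-stable inference entry point such an export revolves around; the `DenseLayer` struct, layer sizes, and `infer` name are all illustrative, not SyslabCC or Flux API:

```julia
# Illustrative stand-in for a trained model: two dense layers over Float32
# arrays, i.e. the "basic arrays of primitive types" an AOT compiler likes.
struct DenseLayer
    W::Matrix{Float32}
    b::Vector{Float32}
end

# ReLU layer application; every type here is concrete and inferable.
(d::DenseLayer)(x::Vector{Float32}) = max.(d.W * x .+ d.b, 0.0f0)

const LAYER1 = DenseLayer(randn(Float32, 8, 4), zeros(Float32, 8))
const LAYER2 = DenseLayer(randn(Float32, 2, 8), zeros(Float32, 2))

# A type-stable inference entry point: Vector{Float32} in, Vector{Float32} out.
infer(x::Vector{Float32}) = LAYER2(LAYER1(x))
```

With all weights stored as Float32 and no abstract containers, the return type of `infer` is the concrete `Vector{Float32}`, which is the property an AOT compiler needs to export it to C++.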


I tried exporting a Lux model’s inference to C++/a standalone executable with SyslabCC, but didn’t have much luck.

It seems that LuxCore.apply is not type-stable, perhaps mainly due to Statistics.mean’s type instability in high-dimensional cases.

Functions like Base.sum(sequence; dims) or Statistics.mean(sequence; dims) are not type-stable when sequence has two or more dimensions. The standard library uses dynamic dispatch here as well, but it is entirely doable for us to provide type-stable implementations of sum, mean, and similar functions. We have already patched Base.sum.
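
To make the point concrete, here is a hedged sketch of a dims-wise sum whose return type is fixed by the input types; `sumdim` is an illustrative name, not the actual patch applied to Base.sum:

```julia
# A sum over one dimension of a matrix. Both branches construct a Matrix{T},
# so inference sees a single concrete return type and no dynamic dispatch
# is needed, unlike `sum(A; dims)` in the standard library.
function sumdim(A::AbstractMatrix{T}, dim::Int) where {T}
    if dim == 1
        out = zeros(T, 1, size(A, 2))   # column sums, shaped like sum(A; dims=1)
        for j in axes(A, 2), i in axes(A, 1)
            out[1, j] += A[i, j]
        end
    else
        out = zeros(T, size(A, 1), 1)   # row sums, shaped like sum(A; dims=2)
        for j in axes(A, 2), i in axes(A, 1)
            out[i, 1] += A[i, j]
        end
    end
    return out                          # always Matrix{T}
end
```

The trade-off is that the dimension becomes a plain run-time branch rather than part of the return shape’s type, which is exactly what keeps the return type concrete for an AOT compiler.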

That seems to hit the training=Val(true) dispatches for normalization. You might have missed calling testmode (see the LuxCore docs) before running inference?


Thanks, I’ll have another try tomorrow.