The leadership team at Suzhou-Tongyuan has confirmed that we can provide an offline license checker and a standalone installation of the AOT compiler, independent of the full Syslab package, if there is potential for future commercial cooperation.
If you are interested in the AOT compiler, please DM me.
We look forward to hearing from you regarding:
- The specific scenarios and needs for using Julia’s AOT compiler, so we can assess how well our technology meets your requirements.
- Basic information about your company, such as a contact email address and a brief description of your company.
Thank you, and we look forward to your response.
We’d also like to announce that our latest AOT compiler can now produce a standalone CMake project containing the generated pure C++ code, along with shared libraries copied from the local Julia installation. The libraries and executables built from the generated CMake project have recently been verified to slightly outperform the same code running in vanilla Julia, based on well-known benchmarks such as the Julia performance measurements from the Benchmarks Game (pages.debian.net).
Suzhou-Tongyuan has consistently contributed to the Julia ecosystem with various open-source projects, including, but not limited to:
How well does this compiler work with Lux.jl? Is it practical to generate inference code for Lux models? Do you have any experience with this use case?
@liuyxpp How type-stable is the code? If the code is fully type-stable, things should mostly work. However, strict type stability is hard to achieve, and we usually only see this property in lower-level packages such as DataStructures.jl, StaticArrays.jl, and so on. I’ll give Lux.jl a try on Monday. Do you have a specific downstream example for us to test?
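If you want to check this yourself in the meantime, here is a quick sketch of how to probe type stability; the function f below is just a placeholder for your own entry point:

```julia
using Test, InteractiveUtils

f(x) = 2 .* x .+ 1      # placeholder for the code you want to AOT-compile
x = rand(Float32, 4)

@inferred f(x)          # throws if the return type cannot be inferred
@code_warntype f(x)     # prints inference results; watch for Any/Union in red
```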
FYI, Enzyme actually works fairly well with type-unstable code at present. There are still a few todos, but generally things work. I haven’t checked recently, but if memory serves, those tests did require type-unstable support.
See the folder features/compile-time: the recent version of SyslabCC works with Flux in CPU inference mode. The example trains the model at compile time with Flux.jl & CUDA.jl, and exports the model object and an inference function to C++.
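The overall shape of such an example is roughly the following (a minimal CPU-only sketch, not the actual code in features/compile-time; the name infer is illustrative):

```julia
using Flux

# Training happens "at compile time": it runs while the script is being
# AOT-compiled, so only the trained weights and `infer` reach the output.
model = Chain(Dense(2 => 8, relu), Dense(8 => 1))
X = rand(Float32, 2, 64)
Y = sum(X; dims=1)

opt = Flux.setup(Adam(0.01), model)
for _ in 1:200
    Flux.train!((m, x, y) -> Flux.mse(m(x), y), model, [(X, Y)], opt)
end

# The inference entry point to be exported to C++.
infer(x::Vector{Float32}) = vec(model(reshape(x, 2, 1)))
```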
It seems that LuxCore.apply is not type-stable, perhaps mainly due to Statistics.mean’s type instability in high-dimensional cases.
Things like Base.sum(sequence; dims) or Statistics.mean(sequence; dims) are not type-stable when sequence has 2 or more dimensions. The stdlib uses dynamic dispatch here as well, but it is entirely doable for us to provide type-stable implementations of sum, mean, and similar functions. We have already patched Base.sum.
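For illustration, here is the general shape of a hand-written, type-stable reduction along a fixed dimension (a minimal sketch of the idea, not the actual SyslabCC patch):

```julia
# Column-wise mean of a matrix with a concrete, inferable return type.
# Unlike Statistics.mean(x; dims=1), it returns a Vector rather than a
# 1×n matrix, but every variable has a concrete type.
function mean_dims1(x::AbstractMatrix{T}) where {T}
    F = float(T)
    out = Vector{F}(undef, size(x, 2))
    for j in axes(x, 2)
        acc = zero(F)
        for i in axes(x, 1)
            acc += x[i, j]
        end
        out[j] = acc / size(x, 1)
    end
    return out
end
```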
That seems to hit the training=Val(true) dispatches for normalization. You might have missed adding a testmode call (see LuxCore in the Lux.jl docs) before running inference?
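Concretely, the fix looks something like this (a minimal sketch; the model is just an arbitrary network with a normalization layer):

```julia
using Lux, Random

rng = Random.default_rng()
model = Chain(Dense(4 => 8, relu), BatchNorm(8), Dense(8 => 2))
ps, st = Lux.setup(rng, model)
x = randn(rng, Float32, 4, 16)

st_test = Lux.testmode(st)   # switches BatchNorm etc. out of training mode
y, _ = Lux.apply(model, x, ps, st_test)
```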