Does anyone know of dialects built for Julia and MLIR? Not wrappers around existing MLIR dialects, but dialects compiled to MLIR that define Julia's types and ops in C++ files. The problem: MLIR types need their storage class fully defined before registration, but I have a circular dependency:
- JLCSTypes.h includes the typedef classes
- but the storage definition is in JLCSTypes.cpp.inc
- JLCSDialect.cpp includes JLCSTypes.h and then tries to register the type
- so the storage is incomplete at registration time.

Do I split the files and resolve the includes separately? My starting MLIR .cpp, .h, .td, and CMake files are in the examples of the RepliBuild.jl repo.
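For reference, the convention in upstream MLIR examples is to keep everything that needs the complete storage classes in the translation unit that includes the generated definitions: JLCSTypes.cpp includes JLCSTypes.cpp.inc under GET_TYPEDEF_CLASSES and defines a registerTypes() helper, and JLCSDialect::initialize() only calls that helper, so JLCSDialect.cpp never sees the storage at all. A minimal sketch of that split (registerTypes() is a hand-declared member here, and the namespace and file names are illustrative):

```cpp
// JLCSTypes.cpp -- the only file that includes the generated type definitions.
#include "JLCSTypes.h"
#include "JLCSDialect.h"
#include "mlir/IR/DialectImplementation.h"

#define GET_TYPEDEF_CLASSES
#include "JLCSTypes.cpp.inc"   // complete storage + typedef classes live here

// Declared on the dialect class and defined here, where addTypes<> can see
// the complete storage classes.
void mlir::jlcs::JLCSDialect::registerTypes() {
  addTypes<
#define GET_TYPEDEF_LIST
#include "JLCSTypes.cpp.inc"
      >();
}

// JLCSDialect.cpp then stays storage-free:
//   void mlir::jlcs::JLCSDialect::initialize() {
//     registerTypes();
//     addOperations< ... >();
//   }
```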
#!/usr/bin/env julia
# Test JLCS dialect using MLIR.jl bindings
using Pkg
Pkg.activate(joinpath(homedir(), ".julia", "dev", "MLIR"))
using MLIR
using MLIR.API
using Libdl  # provides dlopen and RTLD_GLOBAL
# Path to our compiled JLCS dialect
const libJLCS_path = joinpath(@__DIR__, "Mlir", "build", "libJLCS.so")
# Check library exists
if !isfile(libJLCS_path)
error("JLCS library not found at: $libJLCS_path")
end
println("β Found JLCS library: $libJLCS_path")
# Load JLCS dialect
@assert dlopen(libJLCS_path, RTLD_GLOBAL) != C_NULL "Failed to load JLCS library"
# Create MLIR context
ctx = API.mlirContextCreate()
@assert ctx.ptr != C_NULL "Failed to create MLIR context"
# Register JLCS dialect
ccall((:registerJLCSDialect, libJLCS_path), Cvoid, (API.MlirContext,), ctx)
# Create module
loc = API.mlirLocationUnknownGet(ctx)
mod = API.mlirModuleCreateEmpty(loc)
@assert mod.ptr != C_NULL "Failed to create module"
println("β Created MLIR module")
# Print module
println("\nEmpty module:")
op = API.mlirModuleGetOperation(mod)
API.mlirOperationDump(op)
# Cleanup
API.mlirContextDestroy(ctx)
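For context, the registerJLCSDialect symbol that the ccall above resolves can be a small extern "C" shim on the C++ side. A minimal sketch, assuming the dialect class is mlir::jlcs::JLCSDialect and the MLIR C-API headers are on the include path:

```cpp
// register_jlcs.cpp -- hypothetical shim exposing the dialect through the C API.
#include "JLCSDialect.h"
#include "mlir-c/IR.h"
#include "mlir/CAPI/IR.h"   // unwrap(MlirContext) -> mlir::MLIRContext*

extern "C" void registerJLCSDialect(MlirContext ctx) {
  // Loading the dialect into the context registers its types and ops.
  unwrap(ctx)->getOrLoadDialect<mlir::jlcs::JLCSDialect>();
}
```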
I figured out how to compile Julia dialects to MLIR, and I can resolve FFI generation for inheritance, virtual methods, and nested callbacks, and execute C++ without a wrapper. The pipeline goes Julia → DWARF → MLIR → LLVM → C++: full FFI generation and execution without writing any C++ by hand.
The MLIR dialect is simply a carrier of this ABI fidelity.
When Julia "calls" C++ through this op, jlcs.vcall, it isn't FFI.
It's a direct LLVM IR call to a function pointer with:
- correct ABI
- correct calling convention
- correct registers
- correct stack frame layout
Identical to what the C++ compiler itself would generate.
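To make that concrete, here is roughly what a lowered jlcs.vcall does, written as equivalent C++ (a hedged sketch of Itanium-ABI vtable dispatch; the signature and offsets are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// Equivalent of:
//   %r = jlcs.vcall @Base::foo(%obj) { vtable_offset = 0, slot = 0 }
//        : (!llvm.ptr) -> i32
using FooFn = int32_t (*)(void *);

int32_t vcall_foo(void *obj, std::size_t vtable_offset, std::size_t slot) {
  // 1. Read the vtable pointer stored at `vtable_offset` inside the object.
  auto vptr = *reinterpret_cast<void ***>(
      reinterpret_cast<char *>(obj) + vtable_offset);
  // 2. Load the function pointer from vtable[slot].
  auto fn = reinterpret_cast<FooFn>(vptr[slot]);
  // 3. Call it with the object pointer as the implicit `this`.
  return fn(obj);
}
```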
  C++ Code                              Julia Code
     ↓                                      ↓
  MLIR IR (JLCS Dialect)  ← Same IR →  MLIR IR (Julia types)
     ↓                                      ↓
            Unified LLVM IR (no boundary)
                        ↓
                Native Machine Code
                        ↓
                 Direct Execution
// ====================================================================
// Virtual Method Call Operation
// ====================================================================
//
// 4. Virtual Call Op (Call C++ virtual method via vtable)
def VirtualCallOp : JLCS_Op<"vcall"> {
let summary = "Call a C++ virtual method through vtable dispatch.";
let description = [{
Calls a C++ virtual method by:
1. Reading the vtable pointer from the object (at vtable_offset)
2. Loading the function pointer from vtable[slot]
3. Calling the function with the object pointer + arguments
Example:
```mlir
%result = jlcs.vcall @Base::foo(%obj)
{ vtable_offset = 0 : i64, slot = 0 : i64 }
: (!llvm.ptr) -> i32
```
}];
let arguments = (ins
SymbolRefAttr:$class_name, // Class name (e.g., @Base)
Variadic<AnyType>:$args, // Arguments (first is always object pointer)
I64Attr:$vtable_offset, // Offset of vptr in object
I64Attr:$slot // Vtable slot index
);
let results = (outs Optional<AnyType>:$result);
let skipDefaultBuilders = 1;
let builders = [
OpBuilder<(ins "SymbolRefAttr":$class_name, "ValueRange":$args,
"IntegerAttr":$vtable_offset, "IntegerAttr":$slot,
"Type":$resultType)>
];
let extraClassDeclaration = [{
VirtualCallOp(::mlir::Operation *op) : Op(op) {}
// Helper to get the object pointer (first argument)
Value getObject() { return getArgs()[0]; }
}];
}
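Because skipDefaultBuilders is set, the builder declared above needs a hand-written definition in the ops .cpp. A sketch of what that definition could look like, assuming an MLIR build that stores inherent attributes as regular attributes (not properties) and that the attribute names follow the TableGen argument names:

```cpp
void VirtualCallOp::build(::mlir::OpBuilder &builder,
                          ::mlir::OperationState &state,
                          ::mlir::SymbolRefAttr className,
                          ::mlir::ValueRange args,
                          ::mlir::IntegerAttr vtableOffset,
                          ::mlir::IntegerAttr slot,
                          ::mlir::Type resultType) {
  state.addAttribute("class_name", className);
  state.addAttribute("vtable_offset", vtableOffset);
  state.addAttribute("slot", slot);
  state.addOperands(args);          // args[0] is the object pointer
  if (resultType)                   // result is Optional<AnyType>
    state.addTypes(resultType);
}
```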
Introduction
This guide teaches Julia developers how to create custom MLIR dialects for advanced FFI scenarios. The JLCS dialect demonstrates:
- C-ABI struct manipulation (field access by byte offset; see the sketch after this list)
- Virtual method dispatch (vtable-based calls)
- Strided array operations (cross-language arrays)
- Complete LLVM lowering (executable code generation)
- JIT compilation (runtime code execution)
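To illustrate the first item above, "field access by byte offset" boils down to pointer arithmetic against a layout recovered from debug info; no C++ header is needed at run time. A hedged sketch (the struct and offsets are hypothetical):

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical layout recovered from DWARF:
//   struct Point3D { double x; double y; double z; };  // offsets 0, 8, 16
double load_f64_field(void *obj, std::size_t byte_offset) {
  double value;
  // memcpy from (base + offset) avoids aliasing issues and matches what the
  // lowered IR does: compute an address, then load the field.
  std::memcpy(&value, static_cast<char *>(obj) + byte_offset, sizeof(value));
  return value;
}
```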
Why MLIR for Julia FFI?
Traditional Julia FFI (ccall) works well for simple C functions, but struggles with:
- C++ virtual methods and inheritance
- Complex struct layouts with padding
- STL containers with implementation-defined layouts
- Cross-language optimization opportunities
MLIR provides:
- Custom IR tailored to your FFI needs
- Transformation passes for optimization
- Direct LLVM lowering for native performance
- Type-safe operations verified at IR level
A full introduction and my Julia + MLIR findings are at RepliBuild.jl/docs/mlir at main · obsidianjulua/RepliBuild.jl · GitHub.
The examples need specific toolchain versions, but the docs are set up to be readable out of curiosity.
State Sharing - Julia Dispatch
───────────────────────────────────────────
Original C++ Library (.so)
│ ├── Compiled C++ code
│ ├── Vtables in .rodata
│ └── C++ objects in heap/stack
───────────────────────────────────────────
   ▲     ▲     ▲
   │     │     └── Operates on same objects
   │     └──────── Reads same vtables
   └────────────── Calls same functions
───────────────────────────────────────────
MLIR-Generated Glue (JIT compiled)
│ ├── Vtable dispatch code
│ ├── Field access trampolines
│ └── Type conversion helpers
───────────────────────────────────────────
   ▲
   │ ccall
───────────────────────────────────────────
Julia Code
│ ├── Holds pointers to C++ objects
│ └── Triggers dispatch via glue
───────────────────────────────────────────
State is shared because:
- C++ objects live in the original library's memory
- Julia just holds Ptr{Cvoid} to them
- MLIR-generated code passes these pointers to the original C++ functions (see the sketch below)
- Multiple libraries can link to the same C++ library and share objects
- When Julia calls a virtual method, MLIR-generated code does the dispatch
- The actual C++ function executes on the C++ call stack
- C++ can call other C++ methods normally
- The entire C++ library behaves as if C++ had called it
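As a sketch of why this works: the generated glue never copies an object, it forwards the exact pointer Julia holds (as a Ptr{Cvoid}) into code that is already loaded from the original .so. Expressed as hand-written C++ (the class, symbol name, and signature are hypothetical):

```cpp
#include <dlfcn.h>

// Hypothetical glue for `double Circle::getRadius() const`, resolved from the
// already-loaded library. The Julia side only supplies the opaque object pointer.
extern "C" double jlcs_call_getRadius(void *lib_handle, void *cpp_object) {
  using Fn = double (*)(void *);
  auto fn = reinterpret_cast<Fn>(dlsym(lib_handle, "_ZNK6Circle9getRadiusEv"));
  return fn(cpp_object);  // runs inside the original library, on shared state
}
```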