MLIR is an exciting but admittedly confusing project (note that I don’t work directly on it), and the best way to keep up with it is to join the firstname.lastname@example.org mailing list. That said, let me try to give some context:
MLIR is a flexible compiler framework/infrastructure that can be used for many different things, but it’s particularly valuable as a shared infra layer for defining legalization paths through multiple levels of new and existing IRs. In TensorFlow it’s being used to improve, simplify, and share code between compiler/converter tools like the TF–TF Lite converter, the TF–XLA bridge, and other graph compiler backends. It’s also being investigated as an infra layer within parts of XLA (for codegen), Flang (where FIR would be defined as an MLIR dialect), and other compiler projects. There’s no single MLIR IR; instead there are a couple of work-in-progress “standard” dialects that implement shared functionality reusable by other dialects at various levels of abstraction (HLO-like, loop nest, etc.). LLVM IR itself is a first-class dialect inside the MLIR infrastructure; at the moment the LLVM dialect, the TensorFlow and TF Lite graph dialects, and XLA HLO are among the most complete dialect specifications.
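To make the “dialect” and “legalization” ideas concrete, here’s a rough sketch of what MLIR’s textual IR looks like. The exact op and type spellings vary as the project evolves, so treat this as illustrative rather than authoritative: a small function written with standard-dialect ops, and the kind of LLVM-dialect form it might be legalized into.

```mlir
// A function using "standard" dialect ops (sketch; syntax approximate):
func @scale(%x: f32, %y: f32) -> f32 {
  %0 = mulf %x, %y : f32       // floating-point multiply, standard dialect
  return %0 : f32
}

// After a legalization pass to the LLVM dialect, the same function
// might be rewritten op-by-op into LLVM-dialect equivalents:
llvm.func @scale(%x: !llvm.float, %y: !llvm.float) -> !llvm.float {
  %0 = llvm.fmul %x, %y : !llvm.float
  llvm.return %0 : !llvm.float
}
```

The point is that both forms live in the same infrastructure, so a pipeline can mix dialects in one module and lower them progressively rather than jumping between separate IR toolchains.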
Swift would then be one possible frontend for MLIR dialects and MLIR-based compilers; Julia would be another. I see clear benefits to each, stemming from static compilation and diagnostics on one side and JIT compilation and a fluid value/type domain boundary on the other. I’m giving an “MLIR for Julia developers” talk at JuliaCon where my main goal will be provoking the community to try out different things and figure out what MLIR can do for Julia and vice versa.