[ANN] Breeze.jl: GPU-based high-res atmospheric modeling based on Oceananigans.jl

Hello Julia community!

We’re building GPU-first, finite-volume, pure-Julia software for atmospheric modeling called “Breeze.jl”, built on Oceananigans.jl. Breeze looks and smells like Oceananigans, reusing its grids, fields, operators, solvers, and design principles.

We’re currently focused on high-resolution applications, from O(10 m) scale large eddy simulations to O(km) “mesoscale” simulations. Our priorities (now) are state-of-the-art cloud physics parameterizations, high performance, and differentiable workflows behind a radically easy-to-learn user interface (the Oceananigans Way), all with an eye towards weather science and forecasting.

This announcement is also a call to action. It takes a whole community to build the kind of atmosphere model we’re dreaming about.

Hello, world

Breeze’s syntax mirrors Oceananigans and is designed to maximize the productivity of scientists. Here’s a “hello world” for Breeze:

using Breeze
using Oceananigans.Units

grid = RectilinearGrid(size=(128, 128), x=(0, 1e4), z=(0, 5e3),
                       topology=(Periodic, Flat, Bounded))

Q₀ = 1000 # heat flux in W / m²
ρe_bcs = FieldBoundaryConditions(bottom=FluxBoundaryCondition(Q₀))
boundary_conditions = (; ρe=ρe_bcs)

model = AtmosphereModel(grid; advection=WENO(), boundary_conditions)

θ₀ = model.dynamics.reference_state.potential_temperature
θᵢ(x, z) = θ₀ + 1e-2 * rand() # small random perturbation to seed convection
set!(model, θ=θᵢ)

simulation = Simulation(model, Δt=10, stop_time=2hours)
conjure_time_step_wizard!(simulation, cfl=0.7) # adapt Δt to keep CFL ≤ 0.7

run!(simulation)

using CairoMakie
heatmap(PotentialTemperature(model), colormap=:thermal)

This script produces a plot of a turbulent potential temperature field in free convection.

Current scope

Breeze’s current scope is driven by the needs of its current developers. One driver is capabilities for idealized air-sea interaction and ocean-atmosphere simulations (with contributors from the University of Melbourne and the University of Tasmania). A second is an initiative to build a differentiable, high-resolution, limited-area modeling system for severe weather research, forecasting, and microphysics model development (with contributors and CI support from Aeolus Labs in San Francisco). All of us share the goal of attracting more users and developers from diverse areas of the Earth system sciences who are interested in contributing to cutting-edge, beautifully documented, ultra-efficient, and differentiable atmosphere modeling software for teaching, research, and operations.

Our focus on high-resolution, limited-area applications distinguishes the current scope of Breeze from global frameworks like SpeedyWeather and ClimaAtmos. Breeze development has proceeded rapidly over the past few months: we currently support both anelastic and compressible dynamics, potential-temperature and static-energy thermodynamic formulations, extensions to the Climate Modeling Alliance’s excellent CloudMicrophysics.jl and RRTMGP.jl packages, advection schemes and turbulence closures borrowed from Oceananigans, and more. Support for coupled ocean-atmosphere Breeze-Oceananigans simulations is coming soon. We are pursuing differentiable workflows using Reactant.jl and Enzyme.jl, as in Oceananigans. We’re also working on a JOSS paper that will describe these preliminary features.

Join us

We hope that Breeze’s potential and growing capabilities fertilize the growth of a vibrant, welcoming, and diverse community of atmospheric scientists, cloud physicists, compiler engineers, fluid dynamicists, Earth system scientists, students, researchers, users, and model developers. We’ve formed the NumericalEarth Slack to help crystallize this effort. Check out Breeze’s source code and documentation, and consider joining us in this adventure.

Sincerely,
Breeze devs

PS: some eye candy below, generated by Breeze and Makie.

Clouds, updrafts, and rain in the canonical RICO LES case (vanZanten et al. 2011)

Initial growth and roll up of a 2D cloudy Kelvin-Helmholtz instability

44 Likes

Very exciting news, congratulations!

Would something like wind power forecasting / energy yield estimation fall within the scope of Breeze.jl? With the rollout of offshore wind capacity in Europe, it becomes incredibly important to do wake-loss studies. A recent example is the study done for the Netherlands by the company Whiffle, a spin-off of TU Delft. Therefore collaboration with TU Delft seems logical to me (@ufechner7).

Perhaps you would also be interested in joining the Julia4PDEs event at VU Amsterdam; see the Julia4PDEs 2026 workshop at Vrije Universiteit Amsterdam (not my field of expertise).

2 Likes

I think wind energy forecasting (and more broadly microscale forecasting) will very much be in Breeze’s wheelhouse (and Aeolus Labs’ forecasting system which is based on it)! This is an area where capabilities we already have – high-order WENO advection + an array of LES closures from Oceananigans – can shine. We’re also building support for both anelastic and fully compressible dynamics; this is useful for some microscale cases over flat surfaces where an anelastic solver may be more efficient.
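For a flavor of what that configuration looks like, here is a minimal sketch combining high-order WENO advection with an LES closure inherited from Oceananigans. Note that passing `closure` to `AtmosphereModel` is an assumption on my part; check the Breeze documentation for the exact constructor signature.

```julia
using Breeze
using Oceananigans # for SmagorinskyLilly

grid = RectilinearGrid(size=(64, 64, 64), x=(0, 2e3), y=(0, 2e3), z=(0, 1e3),
                       topology=(Periodic, Periodic, Bounded))

# Ninth-order WENO advection and a Smagorinsky-Lilly LES closure, both
# borrowed from Oceananigans. The `closure` keyword is assumed here.
model = AtmosphereModel(grid; advection=WENO(order=9),
                        closure=SmagorinskyLilly())
```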

I cannot myself due to a conflict, but it’s very possible that some Europeans in the Breeze/Oceananigans community would be interested in presenting at this!

3 Likes

congrats!!!

1 Like

Very nice! I hope we can use this code for wind farm simulations. I will discuss in our research group at Delft University of Technology. A number of my colleagues are working on gravity waves, for example. I think we are the largest wind energy research group in Europe, so hopefully we can find someone to take this on.

One question: Can this code run on a cluster?

2 Likes

It should run on any machine Julia runs on, and it should in principle work with any accelerator supported by KernelAbstractions.jl (although the main tested platform is Nvidia GPUs; it hasn’t been tested much, if at all, on others).

If the question is whether it can run in distributed applications: I’m not aware of anyone testing Breeze.jl specifically at large scale yet, but Breeze.jl uses the whole Oceananigans.jl infrastructure, and Oceananigans.jl itself has been used quite successfully on up to 768 GPUs.
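For reference, a multi-GPU setup in Oceananigans looks like the sketch below. This is standard Oceananigans.jl API; whether Breeze's AtmosphereModel runs on top of such a grid today is untested, as noted above.

```julia
using Oceananigans
using Oceananigans.DistributedComputations

# Split the domain across 4 ranks in x, each rank driving one device.
arch = Distributed(GPU(); partition=Partition(x=4))

grid = RectilinearGrid(arch, size=(512, 512, 128),
                       x=(0, 1e5), y=(0, 1e5), z=(0, 1e4),
                       topology=(Periodic, Periodic, Bounded))
```

Scripts like this are launched with MPI (one rank per GPU), exactly as in existing Oceananigans distributed workflows.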

4 Likes

To add to this: Breeze is definitely “GPU-first”, so using it for applications basically requires access to GPU clusters (or other GPU resources). Some people use Breeze on Australia’s Gadi cluster, which has H200s. All the work I’m aware of is single-GPU so far, but tests of distributed configurations are in the works (and anecdotally they do work, although we’re not sure about scaling). Note that Breeze can fit up to 2048² × 200 cells on a single H200 (see our prototype GATE simulation, for example), which permits roughly 200 km × 200 km simulations at 100 m resolution and covers a wide range of science applications.
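For the curious, here is the back-of-envelope arithmetic behind those numbers; the field count (~20 Float64 fields for prognostic variables plus scratch space) is my assumption, not an official figure.

```julia
Nx = Ny = 2048
Nz = 200
ncells = Nx * Ny * Nz              # ≈ 8.4 × 10⁸ cells

Δx = 100                           # horizontal grid spacing in meters
extent_km = Nx * Δx / 1e3          # ≈ 205 km per horizontal side

# Rough memory footprint assuming ~20 Float64 (8-byte) fields,
# i.e. prognostic variables plus scratch arrays (a guess):
memory_GB = 20 * 8 * ncells / 1e9  # ≈ 134 GB, close to an H200's 141 GB
```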

As @giordano said, Oceananigans is currently used on a variety of clusters (see the discussions for example) and we expect Breeze usage will develop similarly.

4 Likes

This is very cool! Thank you for your efforts on this! I work on turbulence in the ABL using LES and observations (with a fluid mechanics focus), and have been thinking about developing just this kind of tool to address some of the difficulties I’ve encountered in configuring simulations in C/Fortran codes. It’s been a bit hard to get buy-in from colleagues as many of them are at most passingly familiar with Julia, and so are not easily convinced Julia is worth considering over C/Fortran/etc. Not only will Breeze.jl be useful in its own right for my work, but I also hope it will serve as a useful example of why considering Julia seriously for future work is worthwhile. I certainly hope to contribute to this community going forward :slight_smile:

6 Likes

@fergu some people will always see Julia as a barrier, especially if they are already familiar with a modeling system (and have traumatic memories of long learning curves)…

That’s why it’s crucial that new codes offer more than an improved user experience: we’re aiming for a dramatic reduction in computational cost and support for more complex science, too.

(Oceananigans sees something like 10-50x acceleration over Fortran codes; for Breeze, solid numbers are still TBD, but preliminary results seem to be in a similar ballpark.)

3 Likes

Wow! Is that 10-50x on a core-for-core (and/or GPU-for-GPU) basis? I did semi-recently develop a 1D solver for the Euler equations (with the same general idea of a user-facing interface to configure an arbitrary problem) and saw a speedup of around 10-20x against a comparable C code, so I’m willing to believe it - I just assumed I hit some optimization that the comparison code didn’t have, so it was a one-off benefit that wouldn’t hold up in heavier workloads. It’s excellent if heavier workloads are seeing similar speedups, though.

I do most of my work on NCAR’s Derecho, and I see there has been some discussion about getting this to work well there. I will have to get Breeze set up when I get to work on Monday and give it a shot!

@fergu that is on a “cost” basis, using a rough formula that 10 CPUs ≈ 1 GPU (when I said “Fortran” above, I explicitly meant “legacy CPU”). The results were reported in this paper:

https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2024MS004465

The numbers in that paper are somewhat out of date, because there have been decent performance improvements since then.
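To make the “cost basis” conversion concrete, here it is with hypothetical numbers; only the 10-CPUs-per-GPU cost ratio comes from the post above, the wall-clock speedup is an invented illustration.

```julia
cpus_per_gpu = 10         # rough cost equivalence from the post above

# Hypothetical wall-clock speedup of one GPU over one CPU core:
wallclock_speedup = 300

# At equal cost the GPU competes with ~10 CPU cores, so:
cost_speedup = wallclock_speedup / cpus_per_gpu   # 30x on a cost basis
```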

Many porting efforts from CPU to GPU claim cost improvements in the 2-10x range. I think we see more because we write from scratch specifically for GPU, and use abstractions to achieve a high degree of kernel fusion. This strategy relinquishes CPU performance: our code is only intended to be used on GPUs for heavy applications. (Optimizing for CPU is possible, but maybe not such a worthwhile effort these days.)
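As a toy illustration of kernel fusion with KernelAbstractions.jl (not Breeze's actual kernels): computing `c = a * b + a` in a single kernel avoids launching two elementwise kernels and a round trip through device memory for the intermediate `a * b`.

```julia
using KernelAbstractions

@kernel function fused!(c, a, b)
    i = @index(Global)
    @inbounds c[i] = a[i] * b[i] + a[i]  # one fused pass over the data
end

a, b = rand(1024), rand(1024)
c = similar(a)

# Launch on the CPU backend here; on a GPU, replace CPU() with the
# corresponding KernelAbstractions backend.
fused!(CPU(), 64)(c, a, b, ndrange=length(c))
KernelAbstractions.synchronize(CPU())
```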

Oceananigans is fairly widely used on Derecho. Let us know if you have any issues with Breeze!