I'm pleased to announce that NeuralOperator.jl is now available. A neural operator is a novel deep learning architecture introduced by Zongyi Li et al. It learns an operator, i.e. a mapping between infinite-dimensional function spaces. Instead of solving a PDE with the finite element method, the problem can be resolved by training a neural network to learn the operator that maps the function space of (u, t) to the function space of f(u, t). In other words, a neural operator learns a continuous mapping between two continuous function spaces.

The Fourier neural operator learns an operator via the Fourier transform. It maps a continuous function from the time domain to the frequency domain and back, truncating the Fourier modes to a user-specified number along the way. It can be applied to solve PDE problems with given initial conditions.
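To make the mode truncation concrete, here is a minimal, self-contained sketch of the spectral convolution at the heart of a Fourier layer. The function name `spectral_conv` and the weight layout are illustrative assumptions for a single 1-D channel, not the package's internals:

```julia
using FFTW

# A 1-D spectral convolution sketch: transform to the frequency domain,
# multiply the lowest `modes` Fourier modes by learnable weights `w`,
# zero out the rest, and transform back.
function spectral_conv(x::Vector{Float64}, w::Vector{ComplexF64}, modes::Int)
    x̂ = rfft(x)                               # to frequency domain
    ŷ = zeros(ComplexF64, length(x̂))
    ŷ[1:modes] = w[1:modes] .* x̂[1:modes]     # keep only the lowest `modes` modes
    irfft(ŷ, length(x))                        # back to the time domain
end

x = sin.(range(0, 2π, length=64))
y = spectral_conv(x, ones(ComplexF64, 33), 16)
```

With identity weights, low-frequency signals pass through nearly unchanged, while content above the cutoff is discarded.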

## Example

Here is a simple example of solving the Burgers' equation; it also runs on CUDA. This is how we construct the Fourier neural operator model:

```julia
if has_cuda()
    @info "CUDA is on"
    device = gpu
    CUDA.allowscalar(false)
else
    device = cpu
end

model = FourierNeuralOperator(
    ch=(2, 64, 64, 64, 64, 64, 128, 1),
    modes=(16, ),
    σ=gelu
) |> device
```
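Once constructed, the model can be applied to a batch directly. Assuming the data layout of this Burgers example — 2 input channels (the initial condition and the grid coordinate) over 1024 grid points, with the batch as the last dimension — a forward pass looks like:

```julia
# Hypothetical batch: 2 channels × 1024 grid points × 5 samples.
𝐱 = rand(Float32, 2, 1024, 5) |> device
ŷ = model(𝐱)   # maps to a 1-channel output at the same resolution
```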

Then we define the loss function and load the data through a simple interface:

```julia
loss(𝐱, 𝐲) = sum(abs2, 𝐲 .- model(𝐱)) / size(𝐱)[end]

loader_train, loader_test = get_dataloader()

function validate()
    validation_losses = [loss(device(𝐱), device(𝐲)) for (𝐱, 𝐲) in loader_test]
    @info "loss: $(sum(validation_losses)/length(loader_test))"
end
```

Finally, we train it directly with Flux.jl's facilities:

```julia
data = [(𝐱, 𝐲) for (𝐱, 𝐲) in loader_train] |> device
opt = Flux.Optimiser(WeightDecay(1f-4), Flux.ADAM(1f-3))
call_back = Flux.throttle(validate, 5, leading=false, trailing=true)
Flux.@epochs 500 @time(Flux.train!(loss, params(model), data, opt, cb=call_back))
```