Questions on NeuralPDE.jl

Hi, I’m quite new to Julia and I discovered this amazing package. I have a couple of questions regarding it:
1 - Should I use Lux or Flux? It seems that Lux is used in almost all examples, but when using the GPU one should use Flux. Is this correct? (If I use Lux with the same syntax, I get a warning and the variable “chain”, which should contain the NN, ends up of type Nothing.)
2 - Suppose I have a parabolic PDE in 5 dimensions that could also be solved with HighDimPDE.jl. Which of the two packages should I use, and why?
3 - Are there any restrictions on which packages I can use to optimise/train the NN?
Thanks a lot :slight_smile:

Hey,
These are the kinds of questions I plan to address in the higher-level docs at https://docs.sciml.ai/dev/, but I’ll put some short answers here.

Can you share more details on this? Someone also reported that here for DiffEqFlux on a Lux example:

But our CI machines and my laptop cannot recreate it; every computer I have run things on seems fine. So if you can help me home in on what it could be, that would be great (maybe it’s something like using Julia v1.6 instead of v1.7? An M1 Mac-only issue? I don’t know, these are shots in the dark right now). (@avikpal, do you know what it could be?)

Generally we prefer Lux.jl because it’s simpler to write code that is both correct and fast with it. But we also make sure to support Flux.jl everywhere for compatibility reasons.
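
To make the difference concrete, here’s a minimal sketch of the same small network in both styles (the layer sizes are arbitrary; the point is that Lux keeps parameters and state outside the layers via Lux.setup):

using Flux, Lux, Random

# Flux: parameters are stored inside the layer objects.
flux_chain = Flux.Chain(Flux.Dense(2, 16, Flux.σ), Flux.Dense(16, 1))

# Lux: layers are stateless; parameters and state are returned explicitly.
lux_chain = Lux.Chain(Lux.Dense(2, 16, Lux.σ), Lux.Dense(16, 1))
ps, st = Lux.setup(Random.default_rng(), lux_chain)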

ComponentArrays.jl currently has issues with GPUArrays broadcast overloads, which is why Lux isn’t used for the GPU examples right now, but the intention is to make all examples Lux-based once that is figured out.
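
Once that’s resolved, the Lux GPU flow should look roughly like this: move the explicit parameters rather than the chain itself (a sketch, assuming Flux’s gpu is used for the device transfer):

using Lux, ComponentArrays, Random
using Flux: gpu  # only for the device transfer

chain = Lux.Chain(Lux.Dense(2, 16, Lux.σ), Lux.Dense(16, 1))
ps, st = Lux.setup(Random.default_rng(), chain)

# Flatten the parameter NamedTuple into a single vector and move it to the GPU.
ps = ComponentArray(ps) |> gpu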

HighDimPDE.jl. Its methods are much more specialized to a specific class of PDEs, and because of that they are much more efficient on those PDEs.

Nope. Optimization.jl is pretty much just a wrapper over every other optimization package we can get our hands on, so you can have it call any package in the big list:

http://optimization.sciml.ai/stable/

If the question is whether you can target a different optimization front end: yes, it’s possible to take the OptimizationProblem and have it call other packages, because that’s what the wrapper solvers are doing :sweat_smile:. So generally it’s just easier to call the wrapper solver, since otherwise you’ll likely be writing similar code. If some wrapper is missing, it would be nice to PR it and add it to the interface. The interface is generic enough that you could hack it in a few minutes to use something random like SciPy to solve the OptimizationProblem if you really wished, even though it definitely doesn’t cover all possible optimization backends out of the box.
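
As a concrete sketch of how interchangeable the backends are (using the Optim.jl wrapper and a stand-in Rosenbrock objective, not anything NeuralPDE-specific):

using Optimization, OptimizationOptimJL

# Stand-in objective: the Rosenbrock function with parameters p.
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])

sol = solve(prob, BFGS())        # Optim.jl's BFGS through the wrapper
sol = solve(prob, NelderMead())  # swap the backend by swapping the solver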

Cool, thanks a lot for all the answers. Indeed I find the whole SciML ecosystem a bit confusing but super exciting!

Regarding the Nothing issue, I get it when I declare something like:

dim = 2
inner = 16

chain = Lux.Chain(Dense(dim, inner, Lux.σ),
                  Dense(inner, inner, Lux.σ),
                  Dense(inner, 1)) |> gpu

I have an RTX 2070 Mobile, so nothing exotic; Julia version is 1.7.2.

That doesn’t make sense: calling gpu on a chain is a Flux idea, not a Lux idea. (This is part of the larger point that Lux needs to support GPUs differently, and it doesn’t in the context of vector solves right now because of the ComponentArrays.jl issue.)
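
For comparison, here’s a sketch of the same network written with Flux throughout, where |> gpu is the intended pattern (a minimal illustration only):

using Flux

dim = 2
inner = 16

# In Flux the parameters live inside the layers, so `gpu` moves the whole chain.
chain = Flux.Chain(Flux.Dense(dim, inner, Flux.σ),
                   Flux.Dense(inner, inner, Flux.σ),
                   Flux.Dense(inner, 1)) |> gpu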

That’s indeed what I figured. I ended up making this mistake because Flux and Lux are very similar in spelling, and one is used in all the CPU examples while the other is used in the GPU example.

Sidenote: this will throw a deprecation warning after v0.4.7 and an error in v0.5.
