Planning to release a differentiable FDTD package for inverse design in photonics, acoustics, and RF. I’ll include examples on designing stacks, gratings, and couplers, as well as metamaterials and metasurfaces. The electromagnetic FDTD update is simple, drawing on earlier patterns in FDTD.jl and uFDTD.jl, which are based on Schneider’s uFDTD book. Autodiff uses DiffEqFlux.jl neural ODEs and fits within the SciML ecosystem. We have fully featured boundary conditions and support dispersive lossy materials (had to pull some hacks to appease Zygote.jl here). It’s meant as a differentiable, inverse-design-focused counterpart to a commercial solver like Ansys Lumerical.
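To give a flavor, the core update is just the leapfrogged Yee scheme. Here’s a minimal 1D sketch in normalized units (illustrative only, not the package’s actual API; `Ce`/`Ch` are the usual update coefficients folding in dt, dx, ε, μ):

```julia
# Minimal 1D Yee/FDTD update (illustrative, not the package API).
# Ez has length N, Hy has length N-1 (staggered grid).
function step!(Ez, Hy, Ce, Ch)
    for i in 1:length(Hy)              # H update from the curl of E
        Hy[i] += Ch * (Ez[i+1] - Ez[i])
    end
    for i in 2:length(Ez)-1            # E update from the curl of H
        Ez[i] += Ce * (Hy[i] - Hy[i-1])
    end
    return Ez, Hy
end
```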
Feature requests welcome! Academic collaborators, grant cooperation, and corporate sponsorship are welcome too.
I would be interested in testing something like this. I’ve been meaning to set up a simulation of a pulsed laser in a cavity (possibly with an absorptive material). If this package could do different electromagnetic sources (like a Gaussian pulse), that would be very interesting.
Yeah, side injection of a Gaussian pulse will be a standard source in our code. The update loop is fully exposed, so you can also inject whatever additive source you want.
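For concreteness, a soft (additive) Gaussian-modulated source injected into the exposed loop could look like this, reusing the toy `step!` from my announcement sketch (parameter names and values are placeholders):

```julia
# Gaussian-modulated sinusoid, added ("soft source") at one grid point per step.
gaussian_pulse(t; t0, tau, f0) = exp(-((t - t0) / tau)^2) * sin(2π * f0 * t)

Ez, Hy = zeros(200), zeros(199)
dt, src_i = 1.0, 100                     # placeholder timestep and source location
for n in 1:1000
    step!(Ez, Hy, 0.5, 0.5)              # regular field update
    Ez[src_i] += gaussian_pulse(n * dt; t0=300.0, tau=60.0, f0=0.01)
end
```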
This is something I would be very interested in. I’ve written some toy FDTD codes in Julia and have considered putting in the work to make a good implementation for the General registry, but haven’t had the time lately to work more on it. Making it AD-compatible is something I’d considered but wasn’t even sure would be possible (or at least practical), so hats off for making that work.
Do you have a repo for it on GitHub? I’d love to check it out and see if there’s anything I could contribute.
Thanks! I’ll release it around the end of this month. The AD piggybacks off DiffEqFlux.jl, which uses constant memory w.r.t. the number of timesteps. I’m still testing on toy inverse design problems from Google’s Ceviche challenges on invrs.io, ported to Julia.
Hi Alec, adjoints are done in the time domain. The problem is treated as a neural ODE; DiffEqFlux.jl integrates it backwards to recover intermediates for reverse-mode AD.
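Roughly like this (a sketch of the SciMLSensitivity-style setup with a stand-in right-hand side, not the package’s actual code):

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

# Stand-in for the semi-discretized Maxwell curl equations; p holds design parameters.
maxwell!(du, u, p, t) = (du .= -p .* u)

u0 = rand(100)
prob = ODEProblem(maxwell!, u0, (0.0, 1.0))

# BacksolveAdjoint re-integrates the ODE backwards to recover intermediates,
# so memory is constant in the number of timesteps.
loss(p) = sum(abs2, solve(prob, Tsit5(); p=p,
                          sensealg=BacksolveAdjoint(autojacvec=ZygoteVJP())).u[end])

grad = Zygote.gradient(loss, rand(100))[1]
```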
Everything can run on the GPU, but I haven’t tested it there. It takes only a couple of lines of code to move data to/from the GPU with Flux.jl / CUDA.jl. I’d probably have to tweak some code, but nothing major.
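Something like this, using the standard CUDA.jl / Flux.jl array movement (nothing package-specific):

```julia
using CUDA, Flux

Ez_cpu = zeros(Float32, 512, 512)
Ez = gpu(Ez_cpu)       # Flux helper; essentially cu(Ez_cpu) from CUDA.jl
# ... run the (broadcasted) update loop on the device ...
Ez_out = cpu(Ez)       # copy back to host
```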
Would you be interested in trying my AD package FastDifferentiation.jl? I’m always looking for new applications.
Not sure if it will work in your case, but for problems in its domain it can be substantially faster than other AD packages. I’ll help you work through any problems you encounter.
Thanks - interesting work! I’ll read up on your approach. It seems a major advantage is a more natural way of handling higher-order and mixed derivatives, which would benefit the PINN folks or loss functions involving derivatives?
Yes, I find it much simpler to specify derivatives this way. If you need a Jacobian rather than just a gradient, FastDifferentiation can also be much faster, hundreds of times faster in some cases.
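For anyone curious, the workflow looks roughly like this (going off FastDifferentiation.jl’s `@variables`/`jacobian`/`make_function` API; double-check signatures against the current docs):

```julia
using FastDifferentiation

@variables x y
f = [x^2 * y, x * sin(y)]        # symbolic vector function
J = jacobian(f, [x, y])          # symbolic 2×2 Jacobian
J_exe = make_function(J, [x, y]) # compile to a fast callable
J_exe([2.0, 3.0])                # evaluate at (x, y) = (2, 3)
```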
I’m sharing a working draft on GitHub: paulxshen/differentiable-fdtd-beta-prerelease
Still messy and undocumented. If you’re curious, you can peek at the examples to get a sense of the API. The goal is inverse design, so you have full access to the physics and the update loop. I ended up just AD’ing the bare loop (accumulator), because I had stability issues playing with the adjoint ODE sensealgs in SciMLSensitivity/DiffEqFlux, which trade compute for less memory. So I’m happy I’ve derisked my hypothesis: what’s left is optimization & documentation to make the repo usable for other folks in the coming month.
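To illustrate what I mean by AD’ing the bare loop: write the stepper out-of-place so Zygote can trace it, and accumulate the objective as you go. A rough sketch, not the repo’s actual code (monitor location and coefficients are made up):

```julia
using Zygote

# Out-of-place 1D update so Zygote's reverse mode works (no array mutation).
function step(Ez, Hy, Ce, Ch)
    Hy2 = Hy .+ Ch .* (Ez[2:end] .- Ez[1:end-1])
    Ez2 = Ez .+ Ce .* vcat(0.0, Hy2[2:end] .- Hy2[1:end-1], 0.0)  # zero-pad: boundaries stay fixed
    Ez2, Hy2
end

function loss(p; nsteps=100)
    Ez, Hy = p, zeros(length(p) - 1)  # design parameters double as the initial field here
    acc = 0.0
    for n in 1:nsteps
        Ez, Hy = step(Ez, Hy, 0.5, 0.5)
        acc += abs2(Ez[end ÷ 2])      # accumulate objective at a monitor point
    end
    acc
end

grad = Zygote.gradient(loss, rand(64))[1]  # memory grows with nsteps, unlike the ODE adjoints
```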
Have you looked at putting all the code into a common module so that it can be imported into Julia as a package, i.e. Pkg.add(url="https://github.com/paulxshen/differentiable-fdtd-beta-prerelease")? I don’t see a Project.toml or a main module file under /src, and from the runtest.jl example it looks like each source file is being included directly.
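For reference, the conventional layout is a Project.toml at the repo root plus a single top-level module under src/ that includes the other files. Sketched below with a hypothetical module name:

```julia
# Repo layout (conventional Julia package structure):
#   Project.toml                 # name, uuid, version, [deps]; Pkg.generate creates one
#   src/DifferentiableFDTD.jl    # top-level module (hypothetical name)
#   src/update.jl, src/sources.jl, ...
#   test/runtests.jl

# src/DifferentiableFDTD.jl
module DifferentiableFDTD

include("update.jl")
include("sources.jl")

export step!, gaussian_pulse   # whatever the public API ends up being

end # module
```

With that in place, `Pkg.add(url=...)` on the repo URL should just work.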
The provided example is pretty cool. I’m looking forward to trying it out this weekend. If you’re willing to entertain PRs, maybe I could help chip away at some documentation and such.
I packed everything into a module and uploaded the inverse design example to Google Colab. The settings use a coarse resolution and a lax tolerance to save time, but it gives you a sense of the workflow. The code is probably still too messy to run yourself, but you can peek around. I’ll upload some forward simulation examples too (much easier/shorter than inverse design).
Can you describe that more? Most of the algorithms don’t trade memory for compute. The BacksolveAdjoint will generally be unstable on any PDE you’d use FDTD for (I describe this in my talks: https://youtu.be/OyFP565kDUI?si=UsHRRs2B5p5anPrn&t=3738), so you’d want InterpolatingAdjoint or GaussAdjoint rather than BacksolveAdjoint.
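Concretely, that’s just the `sensealg` keyword (sketch with a stand-in RHS):

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

rhs!(du, u, p, t) = (du .= -p .* u)            # stand-in RHS
prob = ODEProblem(rhs!, rand(100), (0.0, 1.0))

loss(p, sensealg) = sum(abs2, solve(prob, Tsit5(); p=p, sensealg=sensealg).u[end])

# Interpolate (or checkpoint) the forward solution instead of re-solving backwards:
g1 = Zygote.gradient(p -> loss(p, InterpolatingAdjoint(autojacvec=ZygoteVJP())), rand(100))[1]
# Or accumulate the parameter gradient by quadrature during the backward pass:
g2 = Zygote.gradient(p -> loss(p, GaussAdjoint(autojacvec=ZygoteVJP())), rand(100))[1]
```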
Thanks Chris, I tried InterpolatingAdjoint & QuadratureAdjoint, but it might’ve been my problem using them. AD’ing the bare loop was the first thing that worked, so I’m just using that for prototyping. It also compiled much faster than the ODE adjoints, which I’ll probably come back to once I transition from experimentation to optimization. Will get back to you.