How do Julia packages, e.g. Flux and Knet, correspond to the Python ecosystem?

I’m not so much asking whether they are better, just looking for a mapping, to get an overview of the mainstream and of where there might be big holes in the Julia ecosystem.

E.g. PyTorch Lightning corresponds roughly to Keras, but what does it correspond to in Julia?

Those two are higher-level interfaces to PyTorch and TensorFlow, which are the most mainstream low-level packages, right? There’s also MXNet, which already has official Julia support, but has MXNet fallen out of favor (and not only for Julia)?

Just as Torch was originally written in Lua and later migrated to Python as PyTorch, could the same migration happen again, this time to Julia? Or has it already happened in the form of Flux (which would also be a replacement for TensorFlow)? And where does Knet fit in?
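For context on why Flux is sometimes cast as the Julia-native answer to PyTorch/TensorFlow: models are plain Julia code, with layers as ordinary callable structs. A minimal sketch (assuming Flux is installed; the layer sizes are arbitrary):

```julia
using Flux

# A tiny multilayer perceptron. Chain composes layers; Dense(in, out, activation)
# is an ordinary Julia struct, so the whole model is just Julia code.
model = Chain(
    Dense(784, 32, relu),
    Dense(32, 10),
    softmax)

x = rand(Float32, 784)   # dummy input, e.g. a flattened 28×28 image
y = model(x)             # forward pass; softmax output is length 10 and sums to ~1
```

The same property (models as plain host-language code with automatic differentiation on top, here via Zygote) is what PyTorch offers over graph-based TensorFlow 1.x, which is why the Flux-as-PyTorch-analogue mapping is common.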

I’m mostly thinking of neural networks, but feel free to add more machine-learning mappings. E.g. MLJ.jl would correspond to scikit-learn? And it can also use scikit-learn as a backend.
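The MLJ.jl / scikit-learn mapping holds up in practice: MLJ gives a uniform fit/predict interface over many backend model packages (including a scikit-learn wrapper). A minimal sketch, assuming MLJ and DecisionTree.jl are installed:

```julia
using MLJ

# Load a model type from a registered backend package (here DecisionTree.jl;
# MLJScikitLearnInterface would instead wrap scikit-learn itself).
Tree = @load DecisionTreeClassifier pkg=DecisionTree

X, y = @load_iris                 # built-in demo dataset

mach = machine(Tree(), X, y)      # bind model + data, akin to an sklearn estimator
fit!(mach)                        # train
yhat = predict_mode(mach, X)      # point predictions (predict returns distributions)
```

One design difference from scikit-learn: MLJ’s `predict` returns probabilistic predictions (distributions) by default, with `predict_mode` for point estimates.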

For specific models, e.g. GPT-3, would you look for them in a model zoo, e.g. Flux’s model-zoo (if available there at all), or in other, more likely places? And what about models improving on GPT-3, like the one in Google Brain’s Switch Transformer Language Model Packs 1.6-Trillion Parameters | Synced?

For multi-GPU training, what’s the go-to library now: Horovod, DeepSpeed, or DeepSpeed’s fork, GitHub - EleutherAI/DeeperSpeed: DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.? And what’s the equivalent in Julia, if any, or could you use those directly?

GitHub - FluxML/FastAI.jl: Port of FastAI V2 API to Julia corresponds well to Python’s fastai.

There is no single location for all of this info, but Flux – Ecosystem is the best place to start. We also have quite a few issues, pointers, and discussions around ecosystem parity, and around what should map where, at GitHub - FluxML/ML-Coordination-Tracker: The FluxML Community Team repo. That is where the FastAI.jl development planning started, for example. If you either follow the meeting minutes or the Zulip link, you’ll also see that we’ve had plenty of discussions around how to implement multi-GPU and distributed training. More hands are always welcome!