Resources for Distributed Training w/ Flux

Hello -

Is there a current (c. 2022) guide to parallel or distributed training in Flux, especially on GPUs? I found this archived repo, but if there's anything more current, or if anyone has done this recently, I'd love to take a look. My application is image classification.

Thanks!


There are no guides yet, since the supporting tech is still very much in development, but if you're willing to be an early adopter, you could look into some of the options discussed in data parallel distributed training · Issue #910 · FluxML/Flux.jl · GitHub.
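To make the idea in that issue concrete, here is a minimal sketch of the core of data-parallel training: each worker computes a gradient on its own shard of the data, the gradients are averaged, and everyone takes the same update step. This is not an official Flux API for distributed training — the "workers" below run serially in one process for illustration, and in a real setup each shard would live on its own process or GPU with gradients communicated via something like Distributed.jl or MPI.jl. The model, loss, and data here are toy placeholders.

```julia
using Flux

# Toy classifier and loss (placeholders, not from the thread).
model = Chain(Dense(4 => 8, relu), Dense(8 => 2))
loss(m, x, y) = Flux.logitcrossentropy(m(x), y)

# Flatten the parameters so per-worker gradients can be averaged
# as plain vectors before applying a single shared update.
θ, re = Flux.destructure(model)

# Two data shards, standing in for data held by two workers.
shards = [(rand(Float32, 4, 16), Flux.onehotbatch(rand(1:2, 16), 1:2))
          for _ in 1:2]

# Each "worker" computes a gradient on its shard; then average.
grads = [Flux.gradient(p -> loss(re(p), x, y), θ)[1] for (x, y) in shards]
ḡ = sum(grads) ./ length(grads)

# One plain SGD step with the averaged gradient, applied identically
# everywhere so all workers stay in sync.
η = 0.01f0
θ .-= η .* ḡ
model = re(θ)
```

In a multi-process version, the `gradient` calls happen in parallel and the averaging step is the all-reduce that the linked issue discusses; the serial loop above only shows the math being synchronized.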