AutoEncoderToolkit.jl - A package for training (Variational) Autoencoders

We are excited to introduce AutoEncoderToolkit.jl, a new package that simplifies training and using Flux.jl-based autoencoders and variational autoencoders (VAEs) from a strongly probabilistic perspective.

Key Features:

  • Probabilistic Focus: Variational encoders and decoders are defined by the log-probability of the distribution they encode, making the model's probabilistic assumptions explicit (see the decoder sketch after this list).
  • Multiple VAE Flavors: Includes (so far) β-VAE, MMD-VAE, InfoMax-VAE, HVAE, and RHVAE.
  • Modular Design: Easily implement new encoder/decoder architectures and VAE variants thanks to Julia’s multiple dispatch; the sketch below also illustrates this extension pattern.
  • Differential-Geometry Perspective: Early-stage tools for exploring the learned latent space through the lens of differential geometry.
  • Simple Installation: Install via Julia’s package manager (see the snippet after this list).
  • GPU Support: Train models on CUDA-compatible GPUs using the standard Flux device-movement workflow (a short example follows this list).
  • Extensive Documentation: Detailed documentation and highly annotated code enable quick onboarding for contributors with Julia experience.
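
Installation goes through Julia’s package manager, as the list notes:

```julia
using Pkg
Pkg.add("AutoEncoderToolkit")
```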
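
To give a flavor of the probabilistic, dispatch-based design, here is a minimal sketch of how a decoder can be characterized by the log-probability it assigns to data. The names `AbstractVariationalDecoder`, `BernoulliDecoder`, and `decoder_logprob` are illustrative placeholders, not necessarily the package’s actual API:

```julia
using Flux

# Hypothetical supertype; the package's actual type hierarchy may differ.
abstract type AbstractVariationalDecoder end

# A Bernoulli decoder: its behavior is fully specified by the
# log-probability log p(x | z) it assigns to data x given latent code z.
struct BernoulliDecoder{N} <: AbstractVariationalDecoder
    net::N  # neural network mapping latent code z to logits
end

Flux.@functor BernoulliDecoder

# Multiple dispatch: each decoder type supplies its own method, so generic
# VAE training code can call `decoder_logprob` without knowing the concrete
# type. Adding a new variant only requires a new struct and a new method.
function decoder_logprob(dec::BernoulliDecoder, x, z)
    logits = dec.net(z)
    # Summed Bernoulli log-likelihood, i.e. log p(x | z)
    return -Flux.logitbinarycrossentropy(logits, x; agg = sum)
end

# Example: a decoder from a 2-D latent space to 784-pixel binary images
dec = BernoulliDecoder(Chain(Dense(2 => 16, relu), Dense(16 => 784)))
z = randn(Float32, 2, 8)              # batch of 8 latent codes
x = Float32.(rand(784, 8) .> 0.5)     # fake binary data batch
logp = decoder_logprob(dec, x, z)
```

A Gaussian decoder would then be just another struct with its own `decoder_logprob` method; generic training code picks it up via dispatch.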
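
GPU training follows the standard Flux device-movement pattern, which the package inherits by building on Flux. A minimal sketch, with a `Dense` layer standing in for any model:

```julia
using Flux, CUDA

model = Dense(2 => 3)       # stand-in for any Flux-based model
x = rand(Float32, 2, 16)    # a batch of 16 two-dimensional inputs

model_gpu = model |> gpu    # move parameters to the GPU (no-op without one)
x_gpu = x |> gpu            # move the data batch alongside them
y = model_gpu(x_gpu)        # forward pass runs on the device
```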

Contributions Welcome: We are looking for contributors to expand the list of available models. Check our GitHub repository for more details.

For comprehensive documentation and examples, visit our documentation page.
