[ANN] Lyceum: An efficient and scalable ecosystem for robot learning

Welcome to Lyceum, a framework for developing reinforcement learning, trajectory optimization, and other algorithms for continuous control problems in Julia. The primary goal of Lyceum is to increase research throughput and creativity by leveraging the flexible, performant nature of Julia and its cutting-edge ecosystem.

The current selection of algorithms and environments strikes a balance between capability and complexity that reflects the difficulty of real-world robotics tasks. Additional algorithms will likely be added as our research progresses, and we will gladly accept PRs for others. Rather than acting as a repository of reference implementations for every new algorithm, however, we are more interested in what Julia and packages like RigidBodySim, Zygote, DifferentialEquations, Turing, and now Lyceum can do for the field.

The Lyceum ecosystem is organized into several core packages:

  • LyceumBase, a lightweight package consisting of common interface definitions and utilities used throughout Lyceum, such as the AbstractEnvironment type that provides a (PO)MDP-like environment abstraction for robotic control.
  • LyceumAI, a collection of trajectory optimization, reinforcement learning, and other algorithms for robotic control.
  • LyceumMuJoCo, a variety of environments implementing the AbstractEnvironment interface built on the MuJoCo physics simulator.
  • LyceumMuJoCoViz, a feature-rich interactive visualizer for LyceumMuJoCo.
  • Shapes, a high-performance library for viewing flat (e.g. vector) data as structured data.
  • UniversalLogger, a small package that implements Julia’s logging interface and provides a general key-value store for logging experimental data.
  • MuJoCo, a low-level wrapper for the MuJoCo physics library.
  • Lyceum, a meta-package combining all of the above.
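To give a flavor of what the `AbstractEnvironment` abstraction in LyceumBase looks like in use, here is a rough rollout-loop sketch. The function names (`reset!`, `getobs`, `setaction!`, `step!`, `getreward`, `isdone`) are illustrative of a typical (PO)MDP-style interface and are assumptions for this sketch, not necessarily the exact LyceumBase API; check the documentation for the real function names and signatures.

```julia
# Illustrative sketch of a (PO)MDP-style rollout against an
# AbstractEnvironment-like interface. All environment functions below
# are assumed names for illustration; see the LyceumBase docs.
function rollout!(env, policy; horizon = 1000)
    reset!(env)                      # return environment to its initial state
    total_reward = 0.0
    for t in 1:horizon
        o = getobs(env)              # observation (possibly partial: the "PO" in POMDP)
        a = policy(o)                # policy maps observation -> action
        setaction!(env, a)           # write the action into the environment
        step!(env)                   # advance the simulation one timestep
        total_reward += getreward(env)
        isdone(env) && break         # stop early on terminal states
    end
    return total_reward
end
```

The in-place, mutating style (`setaction!`, `step!`) is the idiomatic Julia pattern for avoiding allocations in tight simulation loops.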

As Lyceum is still under heavy development, we have yet to register it in the General registry. Until then, you'll need to add the LyceumRegistry.
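For example, from the Pkg REPL (the registry URL is assumed from the Lyceum GitHub org; double-check it against the docs):

```julia
# In the Julia REPL, press `]` to enter Pkg mode, then:
pkg> registry add https://github.com/Lyceum/LyceumRegistry
pkg> add Lyceum
```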

We’ve already shown how much faster we are than the popular OpenAI Gym and DeepMind Control Suite Python packages; we hope the community can build on these tools to produce more creative (and performant!) methods.

Check out [www.lyceum.ml] for more information and links to the documentation and paper.

Colin & Kendall


Looks cool!

Let me know what you need from DiffEq


Is this currently the maintained RL framework?

AlphaZero.jl is a bit more specialised and advanced.

Reinforce.jl and ReinforcementLearning.jl both seem not maintained.

ReinforcementLearning.jl is maintained. There are actually quite a few things happening currently in the JuliaReinforcementLearning org, e.g. CommonRLInterface.jl.


Hi @xiaodai! Lyceum is still maintained but I have been on leave for the last couple months. It’s actively used in our lab and there are several changes/improvements in the process of being integrated (just not visible in the public repo). Once that’s done I’ll tag a new release and register it in General :).

The goals of Lyceum.jl and ReinforcementLearning.jl are a bit different. Lyceum provides a broader interface to support more than just RL (e.g. real-time control) and is also more focused on robotics (MuJoCo, a physics simulator and de-facto standard, is the only environment back-end right now, but it’s easy enough to add a new one).

Hope that helps! Once the new release is out I’ll post in this thread.
