What is the status of ReinforcementLearning.jl

Hi All,

I would like to ask what the status of ReinforcementLearning.jl is. Specifically, is there an active community around it, and is it therefore worth learning it, using it, and fixing its bugs?

To an outside observer it seems like a big project, but there does not appear to be much activity around it. Is there any alternative?

Thanks a lot for any answers,
Tomas

  1. Read the online documentation! Most likely the answer is already provided in an example or in the API documents. Search using the search bar in the upper left.

  2. Chat with us in the Julia Slack in the #reinforcement-learning channel.

  3. Post a question in the Julia discourse forum in the category “Machine Learning” and use “reinforcement-learning” as a tag.

I have moved it to Machine Learning and joined the Slack channel. So it seems to me it is active and in the middle of a transition.


Hi,

This repo is currently more or less stale due to the original creator having left the project mid-refactor. The core of the package was refactored, but the algorithms were not.

I was actively contributing to RL.jl with @jeremiahpslewis a bit more than a year ago. We intended to complete the refactor and continue improving the package. RL.jl is a fairly ambitious project in my opinion; it has a lot of potential to be the library for easily designing new algorithms. However, the amount of work required to bring it back to a feature-complete state is daunting for two people. On my end, I am currently finishing writing my PhD thesis and haven't had the time (or the need) for RL.jl.

So yes, the project is big but doesn’t have enough contributors (with time). If you want to contribute, we’ll happily help. If you prefer an alternative, POMDPs.jl can do some RL and is more stable and active. It depends on your needs.

Agree :100: with @HenriDeh

Thanks for the answer, and congratulations on finishing your PhD. That is a great achievement.

I understand the size of the project. What worries me a bit is that even the new API does not suit our intended application. As I wrote on Slack, we need to be able to work with environments that have an infinite action space, even though in every state the action space is finite. This naturally occurs when the state is defined as a graph, where certain actions are available on every vertex (here is a link to a paper by our student who does that: [2009.12462] Symbolic Relational Deep Reinforcement Learning based on Graph Neural Networks and Autoregressive Policy Decomposition). It seems to me that with the current API this is not practical, since legal_action_space should provide a list of all actions and legal_action_space_mask returns a mask indicating which actions are legal. A small sketch of what I mean is below.
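To make the mismatch concrete, here is a minimal sketch in plain Julia (not the actual RL.jl interface; the `GraphWalkEnv` type and `legal_actions` function are hypothetical names for illustration only). The global action set is unbounded, but the legal actions in any given state are just the finitely many edges leaving the current vertex, so they can only be enumerated per state, not once globally:

```julia
# Hypothetical graph-walk environment: state = current vertex,
# actions = neighbouring vertices reachable from it.
struct GraphWalkEnv
    adjacency::Dict{Int,Vector{Int}}   # vertex => neighbouring vertices
    current::Base.RefValue{Int}        # current vertex (mutable state)
end

# Per-state legal actions: finite and cheap to enumerate from the state.
legal_actions(env::GraphWalkEnv) = env.adjacency[env.current[]]

# A global legal_action_space / legal_action_space_mask pair would instead
# require enumerating every possible (vertex, neighbour) action up front,
# which is impractical or impossible for large or unbounded graphs.

env = GraphWalkEnv(Dict(1 => [2, 3], 2 => [1], 3 => [1, 2]), Ref(1))
@show legal_actions(env)   # => [2, 3]
env.current[] = 3
@show legal_actions(env)   # => [1, 2]
```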

Thanks for the discussion.