How do I check my custom Env in Reinforce.jl?

Hello all,

I have written a custom environment in Reinforce.jl and now I want to check whether it is correct (i.e. whether the actions and the step function work as intended).
For this I wanted to call the functions one by one in the Julia REPL and execute them one after the other.
But then the output always looks like in the uploaded example (here with the default CartPole environment, which is included in the package).

Does someone know how to see what the actual values would be, or how I can check my environment properly?

Thanks in advance!


You will have to create an instance of the CartPole environment. I’m not quite sure what the constructor for CartPole is (you can likely figure that out through ?CartPole in the REPL to get the documentation).

It will be something like:

env = CartPole()
step!(env, action, ...)

You should also be able to get what methods are available for instances of a type with methodswith(CartPole).
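To actually see the values at every step, a rough sketch of walking through one episode by hand. The exact Reinforce.jl signatures of reset!, actions, step!, and finished may differ between versions, so treat this as an assumption to verify with ?step! and ?reset! in the REPL:

```julia
using Reinforce   # depending on the version, CartPole may need to be
                  # brought into scope from a submodule

# Hand-rolled episode loop that prints the concrete values at each step.
function sanity_check(env)
    s = reset!(env)                # assumed to return the initial state; some
                                   # versions return the env itself instead
    while !finished(env, s)
        A = actions(env, s)        # set of legal actions in state s
        a = rand(A)                # pick one at random just to exercise the env
        r, s′ = step!(env, s, a)   # advance one step: reward and next state
        @show a r s′               # inspect the actual values
        s = s′
    end
end

sanity_check(CartPole())
```

Running this in the REPL should print the chosen action, the reward, and the next state for every step until the episode ends, which is usually enough to spot a broken step function.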


Thanks a lot for your help!!! I never ran the env = CartPole() line.

Have a great day!

@mkschleg would you be interested in giving me online tutoring for 1-2 hours?
My goal is to write a simple DQN algorithm which controls the heat pump in a Smart Home with PV energy generation and a connection to the grid. I already wrote the env, which would be the Smart Home. I want to get the data for the current electricity demand etc. via a CSV (my problem at the moment is that I can't get Reinforce to work with DataFrames in its dependencies).
The agent will have three possible actions: Heatpump_off, Heatpump_hotwater, Heatpump_floorheating.
If you have the time, I would be very happy about your help writing the matching algorithm (and double-checking my env with me)!
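For those three actions, a minimal sketch of how the env could advertise a discrete action set the same way the bundled environments do. SmartHomeEnv, its state field, and the integer encoding are all hypothetical names for illustration, and DiscreteSet is assumed to be exported by Reinforce.jl:

```julia
using Reinforce

# Hypothetical stub of the smart-home environment type from the post
mutable struct SmartHomeEnv <: AbstractEnvironment
    state::Vector{Float64}   # e.g. current demand, PV generation, tank temperature
end

# Illustrative integer encoding of the three heat-pump modes
const HEATPUMP_OFF          = 1
const HEATPUMP_HOTWATER     = 2
const HEATPUMP_FLOORHEATING = 3

# Declare the discrete action set, mirroring how the bundled envs do it
Reinforce.actions(env::SmartHomeEnv, s) = DiscreteSet(1:3)
```

With a fixed integer encoding like this, the DQN's output layer simply has one unit per action.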

Thanks a lot for your help again!

I unfortunately don’t have time to take on any more responsibilities/projects/students beyond my current load. I’m in my final(ish) year of my PhD, so I have to focus on my research and thesis writing.

You may find it easier to get something working outside of Reinforce, and tbh that package hasn’t been maintained in quite a while and should be archived. I would move to ReinforcementLearning.jl.

I also have a repo MinimalRLCore.jl that I use to do research. It is much less capable than ReinforcementLearning.jl, but it is also slightly less opinionated and lets me scaffold more quickly. It is more in line w/ Reinforce with some changes, but doesn’t provide any batteries like ReinforcementLearning.jl does.