Hi all. If this is the wrong section, please let me know and I’ll move my question.
I am curious whether there is any work on multi-objective MDPs in Julia. I’m not very familiar with the MDP landscape in Julia; I perused the JuliaPOMDP organization on GitHub and did not see anything implementing algorithms for multi-objective MDPs. I’d like to know if I am missing something. Thanks.
when playing around with the bi-objective SDDP solver
Oh dear, I never put much work into tidying it up. The idea is okay, but I think solving a few scalarized problems is probably better than trying to solve the multi-objective problem exactly.
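For concreteness, here is a minimal sketch of what I mean by scalarization, on a hypothetical toy problem (made-up two-state, two-action MDP, plain value iteration; the names `solve_scalarized`, `r1`, `r2` are mine, and this is not SDDP.jl code):

```julia
using LinearAlgebra

# Hypothetical 2-state, 2-action MDP with two reward objectives (made-up numbers).
P  = [[0.9 0.1; 0.2 0.8], [0.5 0.5; 0.6 0.4]]  # P[a][s, s′]: transitions for action a
r1 = [1.0 0.0; 0.5 0.2]                         # objective 1 rewards, r1[s, a]
r2 = [0.0 1.0; 0.5 0.9]                         # objective 2 rewards, r2[s, a]
γ  = 0.95

# Weighted-sum scalarization: solve the single-objective MDP with reward
# λ*r1 + (1-λ)*r2 by value iteration.
function solve_scalarized(λ; iters = 1_000)
    r = λ .* r1 .+ (1 - λ) .* r2
    nS, nA = size(r)
    V = zeros(nS)
    for _ in 1:iters
        V = [maximum(r[s, a] + γ * dot(P[a][s, :], V) for a in 1:nA) for s in 1:nS]
    end
    return V
end

# Each choice of λ gives one point on the trade-off curve between the two objectives.
solve_scalarized(0.3)
```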
I read the review I linked above in detail. They have an example in the paper (and a code repo here: Mathieu Reymond / morl-guide · GitLab (vub.ac.be)) that approximates a Pareto front over two objectives for a stochastic model of dam control. Thought you’d be interested, @odow
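In case it helps to see the weight-sweep idea end to end, here is a self-contained toy sketch (the same made-up two-state MDP as in the snippet above, not the dam-control model from the morl-guide repo): sweep the scalarization weight, extract each greedy policy, evaluate both objectives under it, and keep the non-dominated points as an approximate Pareto front. All names (`scalarized_policy`, `evaluate`, …) are hypothetical.

```julia
using LinearAlgebra

# Toy 2-state, 2-action, 2-objective MDP (made-up numbers).
P  = [[0.9 0.1; 0.2 0.8], [0.5 0.5; 0.6 0.4]]  # P[a][s, s′]
r1 = [1.0 0.0; 0.5 0.2]                         # objective 1 rewards, r1[s, a]
r2 = [0.0 1.0; 0.5 0.9]                         # objective 2 rewards, r2[s, a]
γ  = 0.95
nS, nA = size(r1)

# Greedy policy of the λ-scalarized MDP, found by value iteration.
function scalarized_policy(λ; iters = 1_000)
    r = λ .* r1 .+ (1 - λ) .* r2
    V = zeros(nS)
    for _ in 1:iters
        V = [maximum(r[s, a] + γ * dot(P[a][s, :], V) for a in 1:nA) for s in 1:nS]
    end
    return [argmax([r[s, a] + γ * dot(P[a][s, :], V) for a in 1:nA]) for s in 1:nS]
end

# Expected discounted return of one objective under a fixed policy (policy evaluation).
function evaluate(pol, rk)
    Ppol = vcat([P[pol[s]][s, :]' for s in 1:nS]...)  # nS × nS transition matrix under pol
    rpol = [rk[s, pol[s]] for s in 1:nS]
    return (I - γ * Ppol) \ rpol                      # solves V = rpol + γ * Ppol * V
end

# Sweep the weight, evaluate both objectives (here: from state 1), and keep
# only the non-dominated value vectors as the approximate Pareto front.
policies = unique(scalarized_policy(λ) for λ in 0:0.05:1)
points   = unique([(evaluate(pol, r1)[1], evaluate(pol, r2)[1]) for pol in policies])
front    = [p for p in points if !any(q ≠ p && q[1] ≥ p[1] && q[2] ≥ p[2] for q in points)]
```

Note that weighted-sum sweeps like this only recover points on the convex part of the front, which is one reason the exact methods discussed in the paper are interesting.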
I’ve just discovered this thread. Perhaps the only theoretical studies of multi-objective Markov decision processes to appear in recent times have been a pair of articles I published not long ago (Sorry, the forum doesn’t let me post links):
Mifrani, A. (2025). A counterexample and a corrective to the vector extension of the Bellman equations of a Markov decision process. Annals of Operations Research, 345(1), 351-369.
Mifrani, A., Saint-Pierre, P., & Savy, N. (2025). Solution methods for a class of finite-horizon vector-valued Markov decision processes. INFOR: Information Systems and Operational Research, 1-24.
The numerical experiments reported there were run in R and C++. I’d be delighted to make the programs available if asked to; the transition to Julia should not be too difficult, I think.
Oh dear. Sorry @amfrn, I tried to edit your post to add links, and now I’ve tripped the bot, which thinks you’ve done something bad. (New users aren’t allowed to post links, to reduce spam.)
Here’s the comment:
I’ve just discovered this thread. Perhaps the only theoretical studies of multi-objective Markov decision processes to appear in recent times have been a pair of articles I published not long ago (Sorry, the forum doesn’t let me post links):
Mifrani, A. (2025). A counterexample and a corrective to the vector extension of the Bellman equations of a Markov decision process. Annals of Operations Research, 345(1), 351-369. [link]
Mifrani, A., Saint-Pierre, P., & Savy, N. (2025). Solution methods for a class of finite-horizon vector-valued Markov decision processes. INFOR: Information Systems and Operational Research, 1-24. [link]
The numerical experiments reported there were run in R and C++. I’d be delighted to make the programs available if asked to; the transition to Julia should not be too difficult, I think.
I think it was already fixed by someone. The post was greyed out and they had big red warning text next to their name when I checked. Thanks for taking a look!
Hi @amfrn, thank you for following up on this thread! I see your papers were published after I asked this question; it’s great to get the latest news here. I am going through the second article you linked.
I’d like to see the code for the numerical experiments, thank you. Luckily, C++ and R were my daily languages before Julia, so it will be even easier for me to replicate them in Julia.