Multi-objective Markov decision processes

Hi all. If this is the wrong section, please let me know and I’ll move my question.

I am curious if there is any work on multi-objective MDPs in Julia. I’m not very familiar with the MDP landscape in Julia; I perused the JuliaPOMDP (github.com) organization and did not see anything implementing algorithms for multi-objective MDPs. I’d like to know if I am missing something. Thanks.


Not that I’ve seen. Multi-objective MDPs and other multi-objective sequential decision problems seem under-studied (speaking as someone who wrote Bi-objective multistage stochastic linear programming | Mathematical Programming).

Most people scalarize the objective a priori and then solve the single-objective problem.
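To illustrate what a priori scalarization looks like in practice, here is a minimal Julia sketch. The names `cost_term` and `risk_term` are hypothetical stand-ins for two reward objectives, not from any particular package; the point is just that fixing weights up front collapses the vector-valued reward into a scalar one, which any standard single-objective MDP solver can then consume.

```julia
using LinearAlgebra  # for dot

# Hypothetical two-objective reward components for a state-action pair
# (illustrative only, not from JuliaPOMDP or any other package).
cost_term(s, a) = -abs(s - a)
risk_term(s, a) = -a^2
reward(s, a) = [cost_term(s, a), risk_term(s, a)]

# A priori scalarization: fix the weights *before* solving, then treat
# the weighted sum as an ordinary scalar reward.
weights = [0.7, 0.3]
scalarized_reward(s, a) = dot(weights, reward(s, a))
```

The choice of weights encodes the trade-off between objectives up front, which is exactly what solving the multi-objective problem directly would avoid.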


Thanks @odow. I did skim your paper when playing around with the bi-objective SDDP solver.

I found a recent review which may at least help me get a better sense of where the field is at: A practical guide to multi-objective reinforcement learning and planning | Autonomous Agents and Multi-Agent Systems


> when playing around with the bi-objective SDDP solver

Oh dear :grimacing: I never put much work into tidying it up. The idea is okay, but I think solving a few scalarized problems is probably better than trying to solve the multi-objective problem exactly.


I read the review I linked above in detail. They have an example in the paper (and a code repo here: Mathieu Reymond / morl-guide · GitLab (vub.ac.be)) that approximates a Pareto front over 2 objectives for a stochastic model of dam control. Thought you’d be interested @odow
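For context, one common way to approximate a 2-objective Pareto front is to sweep the scalarization weight and re-solve the weighted problem at each point. A hedged Julia sketch follows, where `solve_scalarized` is a hypothetical stand-in for whatever single-objective solver you have (it should take a weight vector and return the resulting pair of objective values):

```julia
# Approximate a 2-objective Pareto front by sweeping the scalarization
# weight w over [0, 1] and solving each weighted single-objective problem.
# `solve_scalarized` is a hypothetical user-supplied function:
#   solve_scalarized([w, 1 - w]) -> (objective1, objective2)
function pareto_sweep(solve_scalarized; n = 11)
    front = Tuple{Float64,Float64}[]
    for w in range(0.0, 1.0; length = n)
        obj1, obj2 = solve_scalarized([w, 1 - w])
        push!(front, (obj1, obj2))
    end
    return unique(front)
end
```

One caveat worth noting: a weighted-sum sweep can only recover points on the convex hull of the Pareto front, so solutions in non-convex regions of the front are missed regardless of how finely you sweep.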
