Hi @sush4731, let me take a stab at these questions:

What is the difference between the Linear and Markov policy graph?

A linear policy graph is a graph in which the nodes form a linear sequence: one node per stage, each with a single arc to the next stage. A Markovian policy graph is a graph in which the nodes form a Markov chain: each stage may contain multiple nodes (Markov states), with probabilistic transitions between them. You might want to read https://github.com/odow/SDDP.jl/blob/master/papers/policy_graph/preprint.pdf or Markovian policy graphs · SDDP.jl.
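For concreteness, here is a sketch of how the two graph types are constructed with SDDP.jl's `SDDP.LinearGraph` and `SDDP.MarkovianGraph` constructors (the transition probabilities are made-up numbers for illustration):

```julia
using SDDP

# A linear graph with three stages: root -> 1 -> 2 -> 3, one node per stage.
linear = SDDP.LinearGraph(3)

# A Markovian graph with three stages. The first matrix is the transition
# from the root to the single stage-1 Markov state; stage 2 has two Markov
# states; stage 3 keeps two states with a 2x2 transition matrix.
markovian = SDDP.MarkovianGraph([
    ones(1, 1),          # root -> stage 1
    [0.5 0.5],           # stage 1 -> stage 2
    [0.8 0.2; 0.2 0.8],  # stage 2 -> stage 3
])
```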

Based on the theory of stochastic optimization, we can use SDDP only if the problem is stage-wise independent

Stagewise-independent is a misnomer. It really should be “nodewise-independent.” By construction, all policy graphs that you can formulate using SDDP.jl are nodewise independent and meet the assumptions of our SDDP algorithm.

So, what does it mean to choose the type of policy graph

You should choose the graph that best models your problem. If you have an independent random variable in each stage, you can use a linear policy graph. If your stochastic process can be modeled by a Markov chain, use a Markovian policy graph. It’s hard to offer advice in general. (You could open an issue at SDDP.jl with the beginnings of a reproducible example if you want more specific help.) You might want to read Example: the milk producer · SDDP.jl.
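If neither built-in constructor fits your problem, you can also build a policy graph by hand with `SDDP.Graph`, `SDDP.add_node`, and `SDDP.add_edge`. A minimal sketch (the node names and probabilities here are invented for illustration):

```julia
using SDDP

# Start from a root node, then add nodes and probabilistic arcs.
graph = SDDP.Graph(:root)
SDDP.add_node(graph, :wet)
SDDP.add_node(graph, :dry)
SDDP.add_edge(graph, :root => :wet, 0.5)
SDDP.add_edge(graph, :root => :dry, 0.5)
```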

Does it mean our process is not Markovian if we choose the linear policy graph?

No. The linear graph is a special case of a Markov chain in which there is one Markov state in each stage.
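One way to see this: a Markovian graph in which every transition matrix is 1x1 has a single Markov state per stage, so it is structurally the same chain as a linear graph (sketch):

```julia
using SDDP

# Three stages with one Markov state each: the chain root -> 1 -> 2 -> 3.
degenerate = SDDP.MarkovianGraph([ones(1, 1), ones(1, 1), ones(1, 1)])

# Structurally the same chain as:
linear = SDDP.LinearGraph(3)
```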

Does it mean there is a linear relation between stages if we choose the linear policy graph?

‘Linear’ and ‘Markovian’ come from the structure of the graph, not whether the subproblems are linear.

So, this process is not stagewise independent anymore? Then, without meeting the required condition, SDDP cannot be applied.

Pretend you never heard the term stagewise-independent. I don’t find it useful, and it only confuses people. SDDP.jl works on a policy graph, which is just a graph of subproblems with a random variable at each node.