MethodError: no method matching zero(::Type{SDDP.State{VariableRef}})

Hi! I have a question about an error in my code. I was trying to mimic the objective states example from the SDDP.jl documentation and include some integer variables, but I got the error: LoadError: MethodError: no method matching zero(::Type{SDDP.State{VariableRef}}). I am confused about what this error refers to. Thanks in advance! My code is attached below:

using SDDP, Gurobi

model = SDDP.LinearPolicyGraph(;
    stages = 5,
    sense = :Max,
    lower_bound = 0.0,
    upper_bound = 500.0,
    optimizer = Gurobi.Optimizer,
) do subproblem, t
    @variable(subproblem, 0 <= invest_decision <= 1, SDDP.State, initial_value = 1, integer = true)
    @variable(subproblem, 0 <= total_invest <= 100, SDDP.State, initial_value = 10)
    @variable(subproblem, downturn)
    @variable(subproblem, 0 <= exit_decision <= 1, integer = true)
    @constraints(
        subproblem,
        begin
            invest_decision.out <= invest_decision.in 
            exit_decision + invest_decision.out == 1
            total_invest.out    == total_invest.in + 10*invest_decision.out
        end
    )

    SDDP.add_objective_state(
        subproblem;
        initial_value = 100.0,
        lipschitz = 10_000.0
    ) do firm_value, ω
        return ω.fv_noise + firm_value
    end

    Ω = [
        (fv_noise = f, downturn = w) for f in [10, 20, 30, 40] for
        w in [0.0,0.0,0.0,-50.0, -100.0]
    ]

    SDDP.parameterize(subproblem, Ω) do ω
        # Query the current firm_value.
        firm_value = SDDP.objective_state(subproblem)
        @stageobjective(subproblem, firm_value * exit_decision - total_invest - downturn)
        return JuMP.fix(downturn, ω.downturn)
    end
end

SDDP.train(model)

simulations = SDDP.simulate(model, 1)

Hi @Millie, welcome to the forum.

The error message is very cryptic, but the issue is in this line:

@stageobjective(subproblem, firm_value * exit_decision - total_invest - downturn)

You have used total_invest without specifying whether you meant .in or .out.
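
In isolation, the fix is just to pick one of the two explicitly; for example (whether you want .in or .out here depends on your model, so treat this as a sketch):

@stageobjective(subproblem, firm_value * exit_decision - total_invest.out - downturn)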

Here’s how I would write your model. I have started to use x_ as a prefix for state variables to make it easier to spot when we should use .in and .out.

using SDDP, Gurobi
model = SDDP.LinearPolicyGraph(;
    stages = 5,
    sense = :Max,
    upper_bound = 500.0,
    optimizer = Gurobi.Optimizer,
) do subproblem, t
    @variables(subproblem, begin
        x_invest_decision, Bin, SDDP.State, (initial_value = 1)
        0 <= x_total_invest <= 100, SDDP.State, (initial_value = 10)
        u_exit_decision, Bin
    end)
    @constraints(subproblem, begin
        x_invest_decision.out + u_exit_decision == x_invest_decision.in
        x_total_invest.out == x_total_invest.in + 10*x_invest_decision.out
    end)
    SDDP.add_objective_state(
        subproblem;
        initial_value = 100.0,
        lipschitz = 1.0
    ) do p_firm_value, ω
        return p_firm_value + ω.fv_noise
    end
    Ω = [
        (fv_noise = f, downturn = w)
        for f in [10, 20, 30, 40]
        for w in [0.0,0.0,0.0,-50.0, -100.0]
    ]
    SDDP.parameterize(subproblem, Ω) do ω
        p_firm_value = SDDP.objective_state(subproblem)
        @stageobjective(
            subproblem, 
            p_firm_value * u_exit_decision - x_total_invest.out - ω.downturn,
        )
        return
    end
end
SDDP.train(model)
simulations = SDDP.simulate(model, 1)
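
As an aside, if you want to inspect the decisions after training, you can pass a list of variable names to SDDP.simulate and it will record their values in each stage. A minimal sketch using the names from the model above:

simulations = SDDP.simulate(
    model,
    1,
    [:x_invest_decision, :x_total_invest, :u_exit_decision],
)
# State variables are recorded as SDDP.State objects, so query .in and .out:
simulations[1][1][:x_invest_decision].out
# Control variables are recorded as their primal values:
simulations[1][1][:u_exit_decision]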

I don’t think your logic around the exit_decision was quite correct, though, so I made a small change. With exit_decision + invest_decision.out == 1, the model is forced to set exit_decision = 1 in every stage after investment stops, so it collects the exit payoff repeatedly; x_invest_decision.out + u_exit_decision == x_invest_decision.in lets the exit happen at most once, and only while you are still invested. I assume you still have some changes coming around the total_invest state.

Thank you so much! The error is now fixed, and, as you pointed out, I need to look into the state and control variables I defined. I really appreciate the suggestions and help!
