I have some simple examples of exactly that here:
# In this example we solve a robust linear programming problem using Optim. The problem is taken from [Wikipedia](https://en.wikipedia.org/wiki/Robust_optimization#Example_1)
# $$\text{maximize}_{x,y} \; 3x+2y \quad \text{s.t.} \; x, y \ge 0, \quad cx+dy \le 10 \;\; \forall \; (c,d) \in P$$
# where $c$ and $d$ are uncertain. We encode the constraint into the cost and solve the problem using four different algorithms.
using MonteCarloMeasurements, Optim, ForwardDiff, Zygote
const c = 1 ∓ 0.1 # The uncertain constraint coefficients
const d = 1 ∓ 0.1
# In the cost function below, we ensure that $cx+dy \le 10 \;\; \forall \; (c,d) \in P$ by looking at the worst case
Base.findmax(p::AbstractParticles; dims=:) = findmax(p.particles; dims=dims) # Make findmax operate on the underlying samples
function cost(pars)
    x, y = pars
    # Negated objective (Optim minimizes), plus penalties for violating x, y ≥ 0
    # and the worst-case constraint over all particles of c and d
    -(3x+2y) + 10000sum(pars .< 0) + 10000*(pmaximum(c*x+d*y) > 10)
end
pars = [1., 1] # Initial guess
cost(pars) # Try the cost function
# ... (the rest of the script is truncated in the original post)
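The preview cuts the script off before the optimization itself. Since the post mentions solving the problem with four different algorithms (hence the ForwardDiff and Zygote imports), here is a minimal, hedged sketch of what one gradient-free run could look like; the call below is my illustration, not the truncated code:

```julia
# Illustrative continuation (not the original script): minimize the penalized
# cost with a gradient-free method, which tolerates the discontinuous penalties
res = optimize(cost, pars, NelderMead())
x_opt, y_opt = Optim.minimizer(res)
3x_opt + 2y_opt # the achieved objective value
```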
# In this script, we will design a PID controller by optimization. The system model has uncertain parameters, and we pay a price not only for poor performance of the closed-loop system, but also for a high variance in that performance. In addition, we place a constraint on the 90th percentile of the maximum of the sensitivity function. This way, we will get a doubly robust controller as a result :) To avoid excessive amplification of measurement noise, we penalize noise amplification above a certain frequency.
# We start by defining the system and some initial controller parameters
using MonteCarloMeasurements, Optim, ControlSystems, Plots
using MonteCarloMeasurements: ∓
unsafe_comparisons(true) # Allow comparisons of uncertain values
p = 1 + 0.1*Particles(100)    # Uncertain gain
ζ = 0.3 + 0.05*Particles(100) # Uncertain damping ratio
ω = 1 + 0.05*Particles(100)   # Uncertain natural frequency
const P = tf([p*ω], [1, 2ζ*ω, ω^2]) |> ss # Uncertain second-order plant model
const w = exp10.(LinRange(-2,3,100)) # Frequency grid
params = log.([1, 0.1, 0.1]) # Initial guess: log of (kp, ki, kd)
const Msc = 1.2 # Constraint on the maximum sensitivity Ms
# We now define the cost function, which includes the constraint on the maximum sensitivity function
function systems(params::AbstractVector{T}) where T
    kp, ki, kd = exp.(params) # Gains are optimized in log-space to keep them positive
    C = convert(StateSpace{Continuous, T}, pid(kp=kp, ki=ki, kd=kd)*tf(1, [0.05, 1])^2, balance=false)
    G = feedback(P*C) # Closed-loop system
    S = 1/(1 + P*C)   # Sensitivity function
# ... (the rest of the script is truncated in the original post)
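The preview is truncated before the cost function and the optimizer call. As a hedged sketch of how the stated criteria (performance plus its variance, and a constraint on the 90th percentile of the sensitivity peak) might be encoded, here is one possibility. It assumes `systems` is completed to return `(G, S)`; the low-frequency performance proxy and the penalty weight 100 are illustrative inventions, not the original code:

```julia
using Statistics

function cost(params)
    G, S = systems(params)             # assumes the truncated function returns (G, S)
    mag = vec(bode(S, w)[1])           # |S(iω)| over the grid w; a vector of Particles
    M = reduce(hcat, m.particles for m in mag) # nparticles × nfreq matrix of samples
    Ms = vec(maximum(M, dims=2))       # per-sample peak of the sensitivity function
    perf = vec(sum(M[:, w .< 1], dims=2)) # low-frequency |S| as a crude performance proxy
    mean(perf) + std(perf) +           # pay for poor performance and for its variance
        100 * max(quantile(Ms, 0.9) - Msc, 0) # soft 90th-percentile constraint on Ms
end

res = optimize(cost, params, NelderMead()) # gradient-free; the quantile is not smooth
```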
5 Likes
gobs
January 15, 2021, 9:14am
Cooooool! Makes me wish I had a reason to do this kind of stuff in my PhD but I’m already distracted enough as it is.
2 Likes
Since it seems like a safe place for crazy ideas…

- fund with Kickstarter
- incentivize the heavy-lift stuff with Gitcoin
- reward other contributions with high praise

What do people think a useful MVP would look like?
How much effort would that take?
How much $ is that worth?
3 Likes
The problem here is that an MVP isn't actually that useful. It's pretty easy to make a simplex solver; it's hard to make one with good performance, which is when it becomes useful.
3 Likes
Personally, I feel that the reliability of commercial solvers is even more valuable than their performance advantage. For example, many applications can work just fine with the performance of Clp and Cbc, but it's problematic to deploy an actual product that crashes, even if only infrequently.
6 Likes