I have a univariate convex function that I would like to optimize. However, it is very expensive to evaluate (each value is a Monte Carlo estimate with large variance), and I would like to interpolate it with an algorithm that is guaranteed to preserve convexity.
Does anybody know of a package that interpolates functions by, e.g., rational functions with an algorithm that provides such a shape-preserving guarantee? I have looked at RationalApproximations.jl (which neither interpolates nor has convexity guarantees) and at BaryRational.jl, which does interpolate but uses the AAA algorithm, which, judging from the original paper, does not guarantee a shape-preserving property either.
I have an algorithm that I can implement myself if necessary (from the paper "Quadratic/Linear Rational Spline Interpolation" by Ideon and Oja), but if suitable software has already been built I would like to save myself the time.
What other properties would you like the interpolant to have? From what you mentioned, something continuous with linear or quadratic pieces? Since you’re optimising, would you be happy to directly optimise a piecewise linear function based on your function evaluation points?
For what you want, I don’t think AAA is the right tool. Why not try polynomial or rational least squares / interpolation, but add an inequality constraint for the convexity?
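A minimal sketch of this idea, assuming Convex.jl with the SCS solver (both are just illustrative choices, and `convex_polyfit` is a made-up helper). The convexity constraint p″(c) ≥ 0 is imposed on a finite grid of check points, so strictly speaking it holds at those points rather than everywhere:

```julia
# Sketch: polynomial least squares with a convexity constraint, imposed as
# p''(c) >= 0 on a grid of check points. Convex.jl + SCS are assumptions;
# any QP/SOCP-capable modelling layer would work.
using Convex, SCS

function convex_polyfit(c, y, degree; npts = 200)
    checkpts = range(minimum(c), maximum(c); length = npts)
    V  = [ci^k for ci in c, k in 0:degree]            # fit (Vandermonde) matrix
    V2 = [k < 2 ? 0.0 : k * (k - 1) * ci^(k - 2)      # second-derivative matrix
          for ci in checkpts, k in 0:degree]
    a = Variable(degree + 1)
    problem = minimize(sumsquares(V * a - y), [V2 * a >= 0.0])
    solve!(problem, SCS.Optimizer)
    coeffs = evaluate(a)
    return x -> sum(coeffs[k + 1] * x^k for k in 0:degree)
end
```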
I wonder if, rather than trying to estimate the cost function (which I assume is your convex function) at a given parameter value with Monte Carlo averaging until the function value is precise enough, you could instead do the Monte Carlo approach for a DIFFERENT value of the parameter in each ensemble member. That way you would have a cloud of points and could try to fit a smooth function (either an analytical function which is convex, or a smoothing interpolation) across the cloud of points.
Simply maintaining the convexity will do. I know that a piecewise linear spline would achieve that, but in my problem I would like to make the interpolation error as small as possible, so that the Monte Carlo error is the primary “noticeable” source of error. That’s why I’m looking for something a bit higher order.
If I’m able to do some analysis to bound what the worst-case variance of this estimator could look like, I would draw enough samples X_i \sim P that the same samples can be reused for many values of c (the cloud of points), and those estimates are the ones I would use as interpolation nodes.
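For concreteness, a rough sketch of what I mean; the distribution `P` and integrand `h` below are just placeholders for my actual problem:

```julia
# Placeholder sketch of reusing one fixed sample set across many c values
# ("common random numbers"); P and h stand in for the real problem.
using Distributions, Statistics

P = Normal()               # placeholder sampling distribution
X = rand(P, 10_000)        # draw X_i ~ P once
h(x, c) = exp(c * x)       # placeholder integrand, g(c) = E[h(X, c)]
ghat(c) = mean(h.(X, c))   # same X reused for every c => estimates vary smoothly in c
cs = 0.1:0.1:1.0
gs = ghat.(cs)             # the cloud of points / interpolation nodes
```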
My question is about the step after that: once I have all of this and am ready to interpolate the vector of values obtained from this convex function, is there an existing Julia package that does some higher-order (than a linear spline) convexity-preserving interpolation?
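One sanity check I plan to run first is that the resulting nodes actually form convex data (nondecreasing secant slopes), since any convexity-preserving interpolant needs that of its input; a small sketch:

```julia
# Sketch: check that the nodes (c, g) form convex data, i.e. the secant
# slopes are nondecreasing; MC noise can break this.
isconvexdata(c, g) = all(
    (g[i+1] - g[i]) / (c[i+1] - c[i]) >= (g[i] - g[i-1]) / (c[i] - c[i-1])
    for i in 2:length(c)-1)
```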
I haven’t specifically thought about the option of adding a constraint for the convexity. I wonder if that’s something I would be able to do in, e.g., NonlinearSolve.jl? Thanks so much for the idea and your reply!
I’m not sure why the convexity-preserving constraint is essential to you. If you mean by it that the interpolant between the points g(c_i) and g(c_{i+1}) of your vector of values is not larger than the maximum of the two and not smaller than the minimum, I do not see the point: in that situation your optimization will just find the node with the largest or lowest value, and since you decided on the positions of the nodes, you would not even need an interpolation. Probably I misunderstood something?
Depending on how expensive the MC computation is, I would just use a “fine” grid of c values and then linear interpolation. You can compute it for grids of various densities to figure out how “fine” it needs to be.
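A minimal sketch, using Interpolations.jl as one possible choice (`estimate_g` stands in for your expensive MC estimator); note that linear interpolation of convex data is itself convex:

```julia
# Sketch: "fine" grid of c values + linear interpolation.
using Interpolations

cgrid = 0.0:0.05:2.0
gvals = [estimate_g(c) for c in cgrid]        # estimate_g: hypothetical MC estimator
ginterp = linear_interpolation(cgrid, gvals)  # piecewise linear; convex if gvals are
ginterp(0.62)                                 # evaluate anywhere in the grid's range
```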
Since you have stochastically estimated samples, it doesn’t seem like interpolation is a good idea. Fitting/smoothing with a convex constraint seems more natural, and could be more resistant to noise (and allow fewer samples per point.)
But what are you doing with g(c) after you have your estimates? Maybe there are more and better shortcuts you can take?
This is exactly what I tried to say in my first post. Using few samples per point (and thus a large error) but more points (which capture the gradients more easily), combined with spatial filtering, should be better than exact interpolation. For example, like this (the blue line is the exact function, the dots are sampled estimates, and the dotted line is the smoothed curve across the points):
MWE:

```julia
using DIVAnd
using PyPlot

# True convex function and noisy samples standing in for MC estimates
realf(x) = x^2
NSamples = 200
noiselevel = 1.0

cgrid = collect(-1.0:0.01:1.0)                     # evaluation grid
csample = -1.0 .+ 2.0 .* rand(Float64, NSamples)   # random sample locations
fsample = realf.(csample) .+ noiselevel * randn(Float64, NSamples)

figure()
plot(cgrid, realf.(cgrid), "-", csample, fsample, ".")

# See ?DIVAndfun for the keyword arguments (smoothing length and noise level)
apprfun = DIVAndfun((csample,), fsample; xi = (cgrid,), len = 1.0, epsilon2 = 2.0)

figure()
plot(cgrid, realf.(cgrid), "-", csample, fsample, ".", cgrid, apprfun.(cgrid), ":")
```
It’s prohibitively expensive using standard MC methods due to an explosion of variances, which is why we are trying to do something clever here, but thanks for the reply!
I’ve given this some thought and it does make sense. I talked about this with my thesis advisor before, and he mentioned the interpolation idea, and I got some tunnel vision with it. All I’m doing with the approximation is minimizing it over c > 0, since the underlying application is uncertainty quantification bounds and we are trying to obtain the sharpest upper bound. Thanks for your input!
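For the record, the final step is then just a cheap one-dimensional minimization of the fitted function, e.g. with Optim.jl (my choice here; `apprfun` is the smoothed function from the MWE above):

```julia
# Sketch: minimize the fitted approximation over c > 0 with Optim.jl
# (one option among many); apprfun is the DIVAnd fit from the MWE above.
using Optim

res  = optimize(apprfun, 1e-6, 1.0)   # Brent's method on a positive bracket
copt = Optim.minimizer(res)
gmin = Optim.minimum(res)
```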
I misunderstood your first post then! Thanks for providing this MWE. I haven’t seen DIVAnd.jl before, but it looks very useful! I’m going to try integrating this with my code to see how it performs. Marking this as the solution! Thanks again!