Multivariate polynomial regression of discrete data in L-infinity norm

More concisely, if the $m \times (n+1)^d$ matrix $V$ is the Vandermonde matrix of your polynomial basis (of degree $n$ along each of $d$ dimensions) evaluated at the points $x_k$, then you want to solve for the coefficients $c$ via:

$$
\min_{c \in \mathbb{R}^{(n+1)^d}} \Vert Vc - y \Vert_\infty
$$

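Concretely, each row of $V$ evaluates all $(n+1)^d$ tensor-product basis functions at one point: indexing the columns by a multi-index $j = (j_1, \dots, j_d)$ with $0 \le j_i \le n$,

$$
V_{k,j} = \prod_{i=1}^{d} \phi_{j_i}\big((x_k)_i\big),
$$

where $\phi_0, \dots, \phi_n$ are the 1d basis polynomials (Chebyshev polynomials in the basis used below).
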
This minimization is a convex optimization problem (in fact, equivalent to a linear program; see below) that you should be able to give almost directly to JuMP.jl or similar, something like:

```julia
using JuMP, MathOptInterface, HiGHS  # HiGHS is just one choice; any LP-capable solver should work

function solve_for_coefs(V, y)
    model = Model(HiGHS.Optimizer)  # creating the model was missing in the original sketch
    N = size(V, 2)
    @variable(model, coefs[1:N])    # the polynomial coefficients c
    @variable(model, t)             # epigraph variable bounding the max residual
    @constraint(model, [t; V*coefs - y] in MathOptInterface.NormInfinityCone(1 + length(y)))
    @objective(model, Min, t)
    optimize!(model)
    return value.(coefs)
end
```
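
The `NormInfinityCone` constraint is just the epigraph form of the max norm: the problem above is equivalent to the linear program

$$
\min_{c,\;t} \; t \quad \text{subject to} \quad -t \le (Vc - y)_k \le t, \qquad k = 1, \dots, m,
$$

which is why a plain LP solver suffices (JuMP's bridges should perform this rewriting automatically for solvers that don't support the cone natively).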

In the Chebyshev-polynomial basis, $V$ can be computed by the function `FastChebInterp.chebvandermonde(x, lb, ub, order)`, where `x` is a length-$m$ array of $d$-dimensional `SVector`s `x[k]`, `lb` and `ub` are `SVector`s of the lower and upper bounds you want to use for your Chebyshev polynomials (a box containing the `x[k]`), and `order = (n, n, …)` is a tuple of the polynomial degrees along each dimension. From this, you can construct a `ChebPoly` object `p` from the coefficients, so that you can then evaluate `p(x)` (and its derivatives) at arbitrary points `x`. Something like:

```julia
using FastChebInterp

V = FastChebInterp.chebvandermonde(x, lb, ub, order)  # Chebyshev Vandermonde matrix
coefs = solve_for_coefs(V, y)                         # minimax fit from above
p = FastChebInterp.ChebPoly(reshape(coefs, order .+ 1), lb, ub)
```
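
For example, you could then evaluate the fit and its gradient at a query point via the exported `chebgradient` function (the point here is a made-up placeholder):

```julia
using StaticArrays

xtest = SVector(0.5, 0.5)            # hypothetical query point inside the [lb, ub] box
val = p(xtest)                       # evaluate the fitted polynomial
val2, grad = chebgradient(p, xtest)  # value and gradient at xtest
```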

(I haven’t tested this code, but it should give the general idea.)
