# Basis functions to approximate the inverse of a quadratic function

I have many observations of a physical parameter measured over time.
Each observation is basically a function p(t), t \in [t_1, t_2].
For each observation I also have the value \int_{t_1}^{t_2} q(t) dt = Q.

I have many observations: \left\{ (p_1(t), Q_1), ( p_2(t), Q_2), \ldots (p_n(t), Q_n) \right\}.
I know that the connection p = f(q) is modeled very well by a quadratic polynomial.

I am after a parametric model for q = g(p).
The idea is to optimize the parameters of the model by linear least squares.
So if the model is q = g_{\boldsymbol{\theta}}(p), then I can integrate over the values by replacing each p value by g_{\boldsymbol{\theta}}(p) and minimize the difference with the value Q.

Nothing special here.
What I am after is a good parametric model to approximate the inverse of a quadratic function.
I can say that the range of values I am looking to work at is limited to q \in [0, 120], p \in [0, 900] (So only in positive quadrant) and I know the quadratic model is “smiling”.

Do you know of a reasonable set of basis functions to approximate the inverse of a smiling quadratic function over the positive quadrant?
The usual suspects would be sqrt or log, but I am not aware of any basis built with them.

I don’t understand. If p = f(q) is modeled very well by a quadratic polynomial, why don’t you simply form that polynomial (by a LS fit or whatever) and then take its root(s) as needed to find q = f^{-1}(p)? Why are you trying to do a separate fit for q = g(p) at all?

Because I don’t have paired measurements of p_i, q_i.
I only have the measurements of p over time and the integral of q over that time.
I know, physically, that the connection p ~ q is quadratic.
Yet I don’t have access to the actual parameters; if I did, I would solve it as you said.

I’m still not sure I understand your notation. What is p_1(t), p_2(t), \ldots? Are those observations of different p-like functions at the same time t? Or the same function p(t) at different times?

It sounds to me like you actually mean that you have n points at times t_k:

(p_k, Q_k) = \left( p(t_k), \int_{t_0}^{t_k} q(t) dt \right)

for k = 1, \ldots, n, where p(t) \approx c_0 + c_1 q(t) + c_2 q(t)^2 for unknown quadratic coefficients c_0,c_1,c_2. I’m assuming that you also know the times t_k of the observations?

So, your “real” problem here is to determine the quadratic coefficients c_0,c_1,c_2 relating p to q, given your data \{ (p_k, Q_k, t_k) \} . Have you considered solving this by nonlinear least squares in c_0,c_1,c_2?

For example, you could start by fitting some function \tilde{p}(t) to your data \{ t_k, p(t_k) \}, e.g. using a polynomial fit or an interpolating spline. Then, given that interpolant/fit, for any c_0,c_1,c_2 you can define the function \tilde{Q}(t) = \int^t \tilde{q}(t') dt', where \tilde{q}(t) is a root of the quadratic c_0 + c_1 \tilde{q} + c_2 \tilde{q}^2 = \tilde{p}(t), and \tilde{Q}(t) is integrated numerically. You can then choose c_0,c_1,c_2 by nonlinear least squares, minimizing the difference between \tilde{Q}(t_k) and your observations (t_k, Q_k).
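To make this concrete, here is a minimal numerical sketch of that pipeline in Python with SciPy. The data are synthetic and all names (`c_true`, `q_from_p`, etc.) are illustrative, not from the thread; the c_2 > 0 lower bound encodes the "smiling" prior.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import least_squares

# --- synthetic "truth": p = c0 + c1*q + c2*q^2, a smiling quadratic ---
c_true = np.array([5.0, 1.0, 0.05])
t = np.linspace(0.0, 10.0, 200)
q_true = 10.0 + 5.0 * np.sin(t)                            # hidden q(t)
p_obs = c_true[0] + c_true[1] * q_true + c_true[2] * q_true**2
Q_obs = cumulative_trapezoid(q_true, t, initial=0.0)       # observed integrals

p_tilde = CubicSpline(t, p_obs)          # continuous interpolant for p(t)

def q_from_p(c, p):
    """Positive root of c0 + c1*q + c2*q^2 = p (smiling quadratic)."""
    c0, c1, c2 = c
    disc = c1**2 - 4.0 * c2 * (c0 - p)
    return (-c1 + np.sqrt(np.maximum(disc, 0.0))) / (2.0 * c2)

def residuals(c):
    q_hat = q_from_p(c, p_tilde(t))
    Q_hat = cumulative_trapezoid(q_hat, t, initial=0.0)
    return Q_hat - Q_obs

# keep c2 strictly positive ("smiling"), start from a rough guess
fit = least_squares(residuals, x0=[1.0, 1.0, 0.01],
                    bounds=([-np.inf, -np.inf, 1e-6], np.inf))
print(fit.x)   # should land close to c_true
```

Note that only the ratio of information in the Q_k data constrains the coefficients, so with real (noisy) data it is worth checking how well-identified all three parameters are.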


Those are observations at different times.
But on each of them the model is static.

My idea was going the other way around.
Build a parametric model q = g_{\boldsymbol{\theta}} (p); then, from p(t), I can estimate q(t) and find the optimal \boldsymbol{\theta} by least squares, integrating it and comparing to Q.

Let’s say I model q = \theta_1 + \theta_2 p + \theta_3 p^2 + \theta_4 p^3 then I can find the parameters directly by integrating as I know:

Q = \int q(t) dt \approx \int ( \theta_1 + \theta_2 p(t) + \theta_3 p^2(t) + \theta_4 p^3(t)) dt

So this direction is much easier (even trivial given a model that is linear in its parameters) since I have p(t) directly.
I am just wondering what family of functions can approximate functions that look like the attached plot (an increasing, concave, sqrt-like curve).

Is there some kind of basis built by parameterizing sqrt? Maybe log?
I am after a model which takes advantage of the knowledge that the approximated function is the inverse of a quadratic function over the positive quadrant. Even better if it also takes advantage of knowing the quadratic function is smiling, so the inverse has the shape described above.
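For concreteness, the linear direction described above can be sketched as follows (Python, entirely synthetic data; the trajectory shapes, coefficient values, and the rescaling of p to [0, 1] for conditioning are all my assumptions):

```python
import numpy as np
from scipy.integrate import trapezoid

n_obs = 50
t = np.linspace(0.0, 1.0, 100)
theta_true = np.array([0.5, 20.0, -3.0, 1.5])   # made-up coefficients

rng = np.random.default_rng(1)
A = np.zeros((n_obs, 4))          # design matrix: rows of integrated basis
Q = np.zeros(n_obs)
for k in range(n_obs):
    p = 900.0 * t ** rng.uniform(0.3, 3.0)   # stand-in for the measured p_k(t)
    x = p / 900.0                            # rescale to [0, 1] for conditioning
    basis = np.vstack([np.ones_like(x), x, x**2, x**3])
    A[k] = trapezoid(basis, t, axis=1)       # ∫ x^j(t) dt for j = 0..3
    Q[k] = trapezoid(theta_true @ basis, t)  # the "measured" integral Q_k

theta_hat, *_ = np.linalg.lstsq(A, Q, rcond=None)
print(theta_hat)   # recovers theta_true up to rounding
```

Because the model is linear in \theta, each Q_k contributes one linear equation whose coefficients are the integrals of the basis functions along that observation, so the whole fit is a single `lstsq` call.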

Could you elaborate on this?
I thought extracting the root is not differentiable, hence it cannot be done.

If you try to solve the quadratic equation you get something in the form:

q = \frac{-b + \sqrt{{b}^{2} - 4 a \left( c - p \right) }}{2 a}

Such a form won’t be nice to fit directly, as it will be hard to keep the value inside the square root non-negative.
I’d try a model in the form:

q \approx {\theta}_{1} + {\theta}_{2} \sqrt{p + {\theta}_{3}}, \; {\theta}_{3} \geq 0

It is not linear but should not be too complex.
I added the constraint to keep the term inside the square root non-negative (you mentioned p is non-negative).
In case more degrees of freedom are needed, you may add a linear term in p.
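A quick sketch of fitting this model with the \theta_3 \geq 0 constraint, using SciPy's bounded least squares. Synthetic (p, q) pairs are used here just to show the mechanics; in the real problem the residual would be built from the integrated Q values instead.

```python
import numpy as np
from scipy.optimize import least_squares

p = np.linspace(0.0, 900.0, 200)
q_obs = 2.0 + 4.0 * np.sqrt(p + 10.0)        # synthetic "truth"

def residuals(theta):
    t1, t2, t3 = theta
    return t1 + t2 * np.sqrt(p + t3) - q_obs

# bound theta_3 >= 0 so the square-root argument stays non-negative for p >= 0
fit = least_squares(residuals, x0=[0.0, 1.0, 1.0],
                    bounds=([-np.inf, -np.inf, 0.0], np.inf))
print(fit.x)   # should recover roughly [2, 4, 10]
```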


If you know on physical grounds that p = f(q) is quadratic, I would be loath to give that up in the name of convenience. Nonlinear least-squares is not that difficult.

However, if you know that the quadratic relationship is of the form p = \alpha q^2 as you seem to be suggesting from your plot, then q has a linear relationship to \sqrt{p}, so you can just do a linear fit that way. You don’t need or want any other basis functions if you have this information, since there is only one degree of freedom \alpha.


I am not sure I understand how to build it as a nonlinear least squares problem.
I think the root won’t be differentiable. What am I missing?

Why not? Roots are differentiable except at a multiple root (i.e. as long as your quadratic-formula discriminant is nonzero).
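To spell that out (an illustrative derivation, using the quadratic p = c_0 + c_1 q + c_2 q^2 from earlier in the thread): implicitly differentiating with respect to c_0 at fixed p gives

\frac{\partial q}{\partial c_0} = \frac{-1}{c_1 + 2 c_2 q} = \mp \frac{1}{\sqrt{{c_1}^{2} - 4 c_2 \left( c_0 - p \right)}}

since c_1 + 2 c_2 q = \pm \sqrt{{c_1}^{2} - 4 c_2 (c_0 - p)} at the two roots. The derivatives with respect to c_1, c_2, and p share the same denominator, so all of them are finite exactly when the discriminant is nonzero.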


OK.

You mentioned fitting \tilde{p} (t) using a fit / spline. Why?
To reduce noise?

It seems to me that you need a continuous model for p(t) in order to integrate q(t) = f(p(t)), so as to compare it to your data Q_k. Since you only have discrete observations (t_k, p_k), you therefore need to interpolate or fit somehow.


I’m not following your problem entirely but it’s pretty easy to answer this part. Chebyshev polynomials will approximate a smooth function on a bounded domain to exponential accuracy. Take a look at ApproxFun.

The problem is that it sounds like their function may not be smooth (analytic) — if they have something like p = \alpha q^2 \Longleftrightarrow q = \sqrt{p/\alpha}, then there is a singularity at p=0. In such a case, polynomial interpolation of q(p) will converge very slowly. (If they know that the quadratic has this precise form, that can be exploited easily as I mentioned above since q is linear in \sqrt{p}, but apparently this is not the case?)

If there is strong physical reason to believe that p(q) is quadratic, then my instinct is that it’s better not to discard this prior knowledge in favor of a generic q(p) fit.

(Even for smooth functions, exponential convergence of Chebyshev polynomials is easiest to obtain if you have the freedom to sample your function at Chebyshev points, which doesn’t seem to be the case here.)
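The convergence contrast is easy to see numerically. A small illustration (Python, using NumPy's Chebyshev module; the [0, 1] domain and test functions are my choice): a square-root branch point at the endpoint slows convergence to an algebraic rate, while a smooth function converges to machine precision almost immediately.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 2001)     # Chebyshev's natural domain
p = (x + 1.0) / 2.0                  # mapped to p in [0, 1]

def max_err(y, degree):
    """Max fit error of a degree-`degree` Chebyshev least-squares fit."""
    coefs = C.chebfit(x, y, degree)
    return np.max(np.abs(C.chebval(x, coefs) - y))

for deg in (5, 10, 20, 40):
    # sqrt has a branch point at p = 0: error decays only algebraically;
    # exp is entire: error reaches machine precision very quickly
    print(deg, max_err(np.sqrt(p), deg), max_err(np.exp(p), deg))
```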


If you do nonlinear least squares on explicitly constructed chebyshev functions, then you can get the definite integral basically “for free” right?

It would help to have a clearer statement of the problem. What it sounds like to me is that there is a set of values t_k and some observations of both p(t_k) and \int_0^{t_k}p(t)dt, and the goal is to find a function p that fits the data?

If the issue is that the function has a sqrt-type singularity at t=0 but is not exactly \sqrt{kt}, then perhaps \sqrt{kt}\,P(t), where P(t) is a Chebyshev polynomial, is the way to go.

Note: because we’re doing least squares, you wouldn’t be constructing the Chebyshev polynomial from the data; you’d just select an order, construct it explicitly from trial coefficients, and then optimize the coefficients to reduce the error.
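One nice simplification of this idea (a sketch with synthetic data; note that a scale factor k inside the square root can be absorbed into the coefficients, so the fit is actually linear in them):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(0.0, 1.0, 400)
x = 2.0 * t - 1.0                              # map [0, 1] to Chebyshev's [-1, 1]
y_obs = np.sqrt(2.0 * t) * (1.0 + 0.3 * t)     # synthetic sqrt-like data

degree = 3
A = np.sqrt(t)[:, None] * C.chebvander(x, degree)   # columns: sqrt(t) * T_j(x)
coefs, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
print(np.max(np.abs(A @ coefs - y_obs)))       # tiny: the truth lies in the span
```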

No, as I understand it the observations are p(t_k) and \int_0^{t_k}q(p(t))dt for an unknown function q(p) and the goal is to find this function q. Furthermore, the OP claims to know on physical grounds that q(p) is the root of a quadratic, i.e. that q^{-1}(q) is an unknown but quadratic function.

You don’t need Chebyshev for p(t) since you are directly given p(t_k) and can use a spline or some other fit. And Chebyshev is a bad idea for q(p) because the root of a quadratic has a singularity, and besides you would be throwing away your a priori physical knowledge that q^{-1}(q) is quadratic (and hence only has 3 free parameters).

(Also, note that the exponential convergence of Chebyshev to smooth functions is much less relevant in situations involving experimental data, which are generally too noisy to construct high-order polynomial fits.)


TBH I’m not sure; it seems like the OP should clarify. Another interpretation of the OP is that they merely know how the graph looks visually, i.e. strictly increasing and convex/concave.

@stevengj described it very well.
What’s available is \int q(t) dt, and what is known is the model (not the actual parameters) of p = f(q), which is quadratic and “smiling”.

The region of interest is the positive quadrant.

@stevengj, regarding the model I should fit: will it work for high-frequency data?
The data is very “jumpy”.
So far the integration was done by multiplying each sample value by the sampling interval.


I’ve never done this, but I think that before fitting you could smooth the data or apply a low-pass filter to eliminate the high frequencies.
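One common way to do this (a sketch, not a prescription; the sampling rate, cutoff, and noise level are all made up): zero-phase low-pass filtering with a Butterworth filter before fitting.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
clean = 400.0 + 300.0 * np.sin(2 * np.pi * 2.0 * t)   # slow underlying trend
noisy = clean + 50.0 * np.random.default_rng(0).standard_normal(t.size)

b, a = butter(4, 20.0, btype="low", fs=fs)   # 4th-order, 20 Hz cutoff
smoothed = filtfilt(b, a, noisy)             # filtfilt avoids phase lag

print(np.std(noisy - clean), np.std(smoothed - clean))   # noise is reduced
```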

I use Chebyshev polynomials for polynomial fits to data all the time. Rather than thinking of them in their usual role as interpolants etc., think of them as a more stable, orthogonal basis for regression compared to 1, x, x^2, x^3. Because of their orthogonality, the function you get by truncation is not far from the “full” function (one with a higher number of coefficients). That’s not true for the 1, x, x^2, \ldots basis, where you’d normally need to change all the coefficients if you truncate, say, a 10th-order polynomial to 5th order.
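The stability point is easy to check numerically (a small sketch; degree and sample count are arbitrary, and for data on [0, 900] you would rescale to [-1, 1] first): compare the condition numbers of the monomial and Chebyshev design matrices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 200)      # rescale your data to [-1, 1] first
deg = 15
V_mono = np.vander(x, deg + 1, increasing=True)   # columns 1, x, x^2, ..., x^15
V_cheb = C.chebvander(x, deg)                     # columns T_0, T_1, ..., T_15

# the Chebyshev design matrix is orders of magnitude better conditioned
print(np.linalg.cond(V_mono), np.linalg.cond(V_cheb))
```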