I am interested in determining the bounds of a function `f(x)` over a given interval `int`. Specifically, I want to determine whether `f` is uniformly positive, uniformly negative, or has at least one root in `int`. Since `f` is continuous and finite, one of these three cases must occur.
My approach is as follows:
- Compute a first-order Taylor model of `f`, with a bounded remainder, over `int`.
- Evaluate the polynomial part of the Taylor model at each endpoint of `int` and add the Taylor model's remainder bound. Take the union of the two resulting intervals.

Since the linear part is monotonic, the resulting interval bounds the range of `f` over `int`:
```julia
using TaylorModels

int = -1.0..1.0
x0  = mid(int)
f(x) = 3x^2 + 2x + 5

tm  = TaylorModel1(1, x0, int)   # first-order Taylor model centered at x0
ftm = f(tm)

# evaluate the linear part at each endpoint and add the remainder bound
left  = ftm.pol(int.lo - x0) + ftm.rem
right = ftm.pol(int.hi - x0) + ftm.rem

bounds = union(left, right)
lower_bound = inf(bounds)
upper_bound = sup(bounds)
```
Now there are a few cases to consider:

- If `lower_bound > 0`, then `f` is uniformly positive in `int`.
- If `upper_bound < 0`, then `f` is uniformly negative in `int`.
- Otherwise, `f` possibly has one (or more) roots in `int`.
To confirm that `f` has a zero crossing, I split the given interval into sub-intervals and repeat the above procedure on each. If I can find at least one sub-interval where `f` is uniformly positive and at least one where `f` is uniformly negative, then a zero crossing is guaranteed by the intermediate value theorem.
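For concreteness, here is a minimal sketch of this subdivide-and-classify scheme in plain Python. It is non-rigorous (ordinary floating point stands in for validated interval arithmetic), and the helper names `linear_bounds`, `classify`, and `certify_root` are my own; `M2` is an assumed bound on `|f''|` over the interval, which plays the role of the Taylor model's remainder bound.

```python
def linear_bounds(f, df, M2, lo, hi):
    """Bound f on [lo, hi] via a first-order Taylor expansion at the midpoint.

    The linear part is monotonic, so its range is attained at the endpoints;
    the remainder is bounded by 0.5 * M2 * h^2, where h is the half-width.
    """
    x0 = 0.5 * (lo + hi)
    h = 0.5 * (hi - lo)
    c0, c1 = f(x0), df(x0)
    ends = (c0 + c1 * (lo - x0), c0 + c1 * (hi - x0))
    rem = 0.5 * M2 * h * h
    return min(ends) - rem, max(ends) + rem

def classify(f, df, M2, lo, hi):
    """Return +1 (uniformly positive), -1 (uniformly negative), or 0 (inconclusive)."""
    lb, ub = linear_bounds(f, df, M2, lo, hi)
    if lb > 0:
        return +1
    if ub < 0:
        return -1
    return 0

def certify_root(f, df, M2, lo, hi, pieces=64):
    """Split [lo, hi] into equal sub-intervals; a root is guaranteed (by the
    intermediate value theorem) once one uniformly positive and one uniformly
    negative sub-interval have both been found."""
    signs = set()
    step = (hi - lo) / pieces
    for i in range(pieces):
        signs.add(classify(f, df, M2, lo + i * step, lo + (i + 1) * step))
        if +1 in signs and -1 in signs:
            return True
    return False
```

On the running example `f(x) = 3x^2 + 2x + 5` over `[-1, 1]` (with `M2 = 6`), `linear_bounds` gives `(0.0, 10.0)`, which is inconclusive at the top level and is exactly the situation where the subdivision step kicks in.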
I was wondering if there is a more straightforward way of achieving this.
- Is there a smarter way of doing this for special classes of functions, e.g. if `f(x)` is a polynomial in `x`?
- Is there a way to use higher-order Taylor models? I am using first order because the linear part is monotonic, so its range over `int` is attained at the endpoints.
I need the method to be general enough to work with multiple variables as well, i.e. for functions `f(x1, ..., xn)`.