Can numerical methods be used to prove a root does NOT exist?

This reminds me of Kahan's spy function, which shows that no deterministic algorithm that samples its integrand at finitely many points can correctly integrate every function numerically. We can apply the same idea to your root-existence routine. Consider a function spy, which simply records every point at which it is evaluated. Pass it to your routine. Now use the record produced by spy to construct a continuous function malicious which (say) equals $1$ at every recorded point, yet still has a root. A $C^{\infty}[a, b]$ example would be an inverted bump function, scaled to fit between a pair of adjacent grid nodes and to dip below $0$ at its peak. I'm sure you could also construct a power series that serves as a suitable malicious.
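Here is a minimal sketch of that adversary argument in Python, assuming the routine is a deterministic one that samples $f$ on a fixed grid; the names `certify_no_root`, `Spy` and `make_malicious` are invented for illustration, not anyone's actual routine:

```python
import numpy as np

def certify_no_root(f, a, b, n=100):
    """Stand-in for a deterministic 'root existence' routine:
    it only ever sees f on a fixed grid of n points."""
    xs = np.linspace(a, b, n)
    return np.array([f(x) for x in xs])

class Spy:
    """Records every point at which it is evaluated; always returns 1."""
    def __init__(self):
        self.queries = []
    def __call__(self, x):
        self.queries.append(x)
        return 1.0

def make_malicious(queries, a, b):
    """C^infinity function equal to 1 at every recorded query point,
    but dipping below 0 inside the widest gap between queries."""
    pts = np.concatenate(([a], np.unique(queries), [b]))
    gaps = np.diff(pts)
    i = int(np.argmax(gaps))
    c = pts[i] + 0.5 * gaps[i]        # centre of the widest gap
    w = 0.5 * gaps[i]                 # half-width of that gap
    def bump(x):
        t = (x - c) / w
        # smooth bump supported strictly inside the gap, peak value 1 at c
        return np.exp(1.0 - 1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0
    return lambda x: 1.0 - 3.0 * bump(x)   # equals 1 on the grid, -2 at c

spy = Spy()
certify_no_root(spy, 0.0, 1.0)                   # the routine sees only 1s
malicious = make_malicious(spy.queries, 0.0, 1.0)
values = certify_no_root(malicious, 0.0, 1.0)
print(values.min())   # 1.0 -- no sign change seen, yet malicious has two roots
```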

Now let's say you use a random set of nodes instead. Take any sequence that approximates the Dirac delta function (a delta sequence), truncate it so that it meets whatever regularity criterion you want while the spike stays very narrow, and place the peak at a random point (shifting by a constant if necessary so that the function changes sign only on that narrow spike). Now there is only a very small probability that your root-existence routine will sample both a positive and a negative value of the function, which of course means it can't prove anything.
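A small Monte Carlo sketch of the same point, using a narrow Gaussian dip as a smooth stand-in for a shifted, truncated delta-sequence spike; the function `spiked` and all the parameters here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def spiked(x, c, w=1e-4):
    """Equal to 1 away from c, dipping below 0 only on a region of
    width ~2*w*sqrt(ln 2) around c (a very narrow, smooth spike)."""
    return 1.0 - 2.0 * np.exp(-((x - c) / w) ** 2)

trials, n_nodes, missed = 10_000, 100, 0
for _ in range(trials):
    c = rng.uniform(0.0, 1.0)                # random location of the spike
    nodes = rng.uniform(0.0, 1.0, n_nodes)   # random evaluation nodes
    if np.all(spiked(nodes, c) > 0.0):       # every node sees a positive value
        missed += 1
print(f"spike missed in {missed / trials:.1%} of trials")
```

With 100 random nodes and a spike of width $10^{-4}$, roughly 98% of trials see only positive values, so the routine has nothing to work with.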


In exact arithmetic, one way to do this is to obtain a modulus of continuity $\omega_f$ for $f$. It is (non-uniquely) defined by the property that $|x-y|<\omega_f(x,\epsilon)$ implies $|f(x)-f(y)|<\epsilon$. (There is of course a maximal such $\omega_f$, but we rarely have it in hand.)
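For instance, if $f$ is $L$-Lipschitz on $[a,b]$ (e.g. $f\in C^1$ with $\sup_{[a,b]}|f'|\le L$), then $\omega_f(x,\epsilon)=\epsilon/L$ is such a modulus for every $x$.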

Suppose you have such an $\omega_f$ and you evaluate $f$ at points $a = x_0 < x_1 < \cdots < x_n = b$. If $|x_{i+1}-x_i|<\omega_f(x_i,|f(x_i)|)$ for all $i$, then $f$ has no zero in $[a,b]$: every point of $[x_i,x_{i+1}]$ lies within $\omega_f(x_i,|f(x_i)|)$ of $x_i$, so $|f(x)-f(x_i)|<|f(x_i)|$ and $f(x)$ cannot be $0$ there.
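As a concrete sketch of this check, assuming the simple Lipschitz modulus $\omega_f(x,\epsilon)=\epsilon/L$ from above (the function name and interface below are invented for illustration):

```python
import numpy as np

def no_root_certificate(f, xs, L):
    """Check the criterion above on a sorted grid xs whose first and last
    entries are the interval endpoints, using omega_f(x, eps) = eps / L.
    Returns True only when the certificate holds, i.e. every point of the
    interval lies within omega_f(x_i, |f(x_i)|) of some node x_i, so f
    cannot vanish anywhere on [xs[0], xs[-1]]."""
    xs = np.asarray(xs, dtype=float)
    fx = np.array([f(x) for x in xs])
    if np.any(fx == 0.0):                 # a node itself is a zero
        return False
    return bool(np.all(np.diff(xs) < np.abs(fx[:-1]) / L))

# f(x) = cos(x) + 2 has no zero; |f'| <= 1, so L = 1 is a valid constant.
xs = np.linspace(0.0, 10.0, 200)
print(no_root_certificate(lambda x: np.cos(x) + 2.0, xs, L=1.0))   # True
# f(x) = cos(x) does have zeros, and the certificate correctly fails.
print(no_root_certificate(np.cos, xs, L=1.0))                      # False
```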

In inexact arithmetic you can do something similar; you just need to require your approximate values of $f$ to be bounded a bit further away from zero, depending on your precision level.
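For instance, with the Lipschitz modulus above and computed values $\tilde f(x_i)$ known to satisfy $|\tilde f(x_i)-f(x_i)|\le\delta$, it is enough to check

$$|x_{i+1}-x_i| < \frac{|\tilde f(x_i)|-\delta}{L} \quad\text{for all } i,$$

since then $|f(x_i)|\ge|\tilde f(x_i)|-\delta$ and the argument above goes through unchanged. (The error bound $\delta$ is an assumed input here; the check cannot supply it for you.)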