Is there something fundamental that makes integrals nontrivial to solve?

That's something I once wondered myself. And I still wonder about it. But we might get some insight by thinking about something similar.

Think about the squaring function $n \mapsto n^2$ on the positive integers. Whenever we square a positive integer, we always get something of the same kind back (namely another positive integer). Not only that, but the set of perfect squares is a proper subset of the positive integers, so in some sense the squaring function takes us from a larger space into a smaller one (from $\{1,2,3,\dots\}$ to $\{1,4,9,\dots\}$).

So when we invert the squaring function (take square roots) on the positive integers, we sometimes get the same kind of object back ($4 \mapsto 2$), but not when we take the square root of something outside the smaller space of perfect squares: $\sqrt{2}$ can't be expressed as an integer, just as $\int \sqrt{\sin x \cos x} \, dx$ can't be expressed in terms of elementary functions.
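To make the analogy concrete, here is a rough SymPy sketch (my own illustration, not part of the argument above); the exact output of the last line depends on the SymPy version, but it will not be an elementary expression:

```python
# Illustration of the analogy: staying in vs. leaving the "smaller space".
from sympy import sqrt, sin, cos, integrate, symbols

x = symbols('x')

print(sqrt(4))  # 2 -- a perfect square has an integer square root
print(sqrt(2))  # sqrt(2) -- not a perfect square, so we leave the integers

# Asking for this antiderivative leaves the elementary functions: SymPy
# returns it unevaluated or in terms of special (non-elementary) functions.
print(integrate(sqrt(sin(x)*cos(x)), x))
```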

Likewise, differentiation takes the space of elementary functions into a smaller space: every derivative of an elementary function lands in the set of elementary functions whose antiderivatives are also elementary.

You often see this behavior when inverses are taken. Multiplication takes integers to integers, but division sometimes doesn't, giving rise to the rational numbers. Addition takes positive integers to positive integers, but subtraction doesn't always, giving rise to the negative numbers. Differentiation takes elementary functions to elementary functions, but antiderivatives don't have to be elementary.
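A small SymPy sketch of that last asymmetry (my own illustration, using $e^{-x^2}$ as the standard example): differentiating an elementary function always gives an elementary function, but integrating one may force you out of the elementary world, here into the error function $\operatorname{erf}$.

```python
# Differentiation stays elementary; antidifferentiation may not.
from sympy import exp, diff, integrate, symbols

x = symbols('x')
f = exp(-x**2)          # an elementary function

print(diff(f, x))       # -2*x*exp(-x**2)    -- still elementary
print(integrate(f, x))  # sqrt(pi)*erf(x)/2  -- erf is not an elementary function
```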


For an oversimplified but satisfying perspective, think about knots. It's usually pretty easy to take a piece of rope and twist and tie it into a strong, bulky knot. Getting it undone... that's another story. One can generalize this scenario to many parts of mathematics. Inverse processes are generally more complex and difficult than their counterparts. For instance, it's pretty easy to toy around with elementary functions and write down a horrible-looking but invertible function; finding an explicit formula for its inverse is another story (see the example below). Antidifferentiation is the inverse process (if you will) of differentiation. You can toy around all day and come up with some pretty horrible functions which have antiderivatives, but finding them can be a pain. I hope this is somewhat satisfying, even if it's not very rigorous.
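One standard example of an easily written, invertible function with no elementary inverse (my addition, not part of the original answer): take $f(x) = x + e^x$. Since $f'(x) = 1 + e^x > 0$, the function is strictly increasing and therefore invertible, but solving $y = x + e^x$ for $x$ gives $x = y - W(e^y)$, where $W$ is the Lambert W function, which is not elementary; no formula built from the usual elementary operations will do.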


This question interested me enough to make me do a bit of searching and link following. TL;DR: there is, and it's the Turing Halting problem!

First I found this SO answer: How can you prove that a function has no closed form integral?, which is pretty much a more formal expansion of what WB-man explains very simply above. And then I found the answer, currently fourth down, pointing to the Risch algorithm: https://en.wikipedia.org/wiki/Risch_algorithm

So: there is an algorithm that may obtain an antiderivative in finite time. But it's not a true algorithm, in that it may fail to terminate (halt), and so it cannot, in general, tell us anything about the possible non-existence of a closed-form antiderivative.
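As a footnote, SymPy ships a partial implementation of the Risch algorithm (the purely transcendental case), and within that fragment it really can prove non-existence. A minimal sketch, assuming a reasonably recent SymPy where `risch_integrate` and `NonElementaryIntegral` live in `sympy.integrals.risch`:

```python
# Partial Risch algorithm in SymPy: within the transcendental case it can
# *prove* that no elementary antiderivative exists.
from sympy import exp, symbols
from sympy.integrals.risch import risch_integrate, NonElementaryIntegral

x = symbols('x')
result = risch_integrate(exp(x**2), x)

print(result)                                     # Integral(exp(x**2), x)
print(isinstance(result, NonElementaryIntegral))  # True -- a proof of non-elementarity
```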