Can all mathematical operations be encoded with a Turing Complete language?

No.

Or, as it would usually be expressed more positively: it is possible to mathematically define things that are not computable.

For example, we can define the function $$ f(M) = \begin{cases} 1 & \text{if $M$ is a lambda term that has a normal form} \\ 0 & \text{otherwise} \end{cases} $$ which is definitely not computable -- it's essentially the halting problem.
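For two concrete data points (my examples, but standard ones): the identity $\lambda x.\,x$ is already in normal form, so $f(\lambda x.\,x)=1$, while $\Omega = (\lambda x.\,x\,x)\,(\lambda x.\,x\,x)$ beta-reduces only to itself and therefore has no normal form, so $f(\Omega)=0$. Verifying the first value takes finitely many reduction steps; no finite amount of reduction certifies the second.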

Defining this function symbolically in all its gory detail is not particularly enlightening. It can be done, however, and the crucial point that makes it non-computable is that we need to express an unbounded quantification over all natural numbers, in order to say "there is (or isn't) a reduction sequence of some finite length that leads to a normal form".
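To see concretely where the asymmetry lies, here is a minimal sketch (my own encoding and helper names, not part of the argument above) of a normal-order reducer for untyped lambda terms in de Bruijn notation. Its `has_normal_form` is only a semi-decision procedure: it answers "yes" whenever $f(M)=1$, but diverges whenever $f(M)=0$ -- supplying the "otherwise $0$" branch is exactly what no program can do.

```python
# Illustrative sketch: untyped lambda terms in de Bruijn notation,
# with normal-order (leftmost-outermost) beta reduction.
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    k: int            # de Bruijn index of a bound variable

@dataclass
class Lam:
    body: 'Term'      # lambda abstraction

@dataclass
class App:
    fun: 'Term'
    arg: 'Term'

Term = Union[Var, Lam, App]

def shift(t: Term, d: int, cutoff: int = 0) -> Term:
    """Shift the free variable indices of t by d."""
    if isinstance(t, Var):
        return Var(t.k + d) if t.k >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fun, d, cutoff), shift(t.arg, d, cutoff))

def subst(t: Term, j: int, s: Term) -> Term:
    """Substitute s for variable j in t (capture-free with de Bruijn indices)."""
    if isinstance(t, Var):
        return s if t.k == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fun, j, s), subst(t.arg, j, s))

def step(t: Term):
    """One leftmost-outermost beta step, or None if t is already in normal form."""
    if isinstance(t, App) and isinstance(t.fun, Lam):
        return shift(subst(t.fun.body, 0, shift(t.arg, 1)), -1)
    if isinstance(t, App):
        f = step(t.fun)
        if f is not None:
            return App(f, t.arg)
        a = step(t.arg)
        return App(t.fun, a) if a is not None else None
    if isinstance(t, Lam):
        b = step(t.body)
        return Lam(b) if b is not None else None
    return None       # a bare variable is in normal form

def has_normal_form(t: Term) -> bool:
    """Keep reducing; return True once no redex is left.  There is no
    corresponding way to return False: that would require the unbounded
    quantification over all reduction lengths discussed above."""
    while (nxt := step(t)) is not None:
        t = nxt
    return True

identity = Lam(Var(0))
omega = App(Lam(App(Var(0), Var(0))), Lam(App(Var(0), Var(0))))
print(has_normal_form(App(identity, identity)))   # prints True: f(M) = 1
# has_normal_form(omega) never returns: Omega has no normal form, f(Omega) = 0.
```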

In terms of formalism, we're almost there with the minimization operator. But it is usually specified that $\mu x.E$ fails to terminate if there is no suitable $x$. In order to express mathematical definitions in general we need a total minimization operator $\bar\mu$, which cannot be expressed computably: $$ \bar\mu x.E(x) = \begin{cases} \text{the smallest $x\in\mathbb N$ such that $E(x)=0$} & \text{if such an $x$ exists} \\ 0 & \text{otherwise} \end{cases} $$ If we had this operator, we would be able to express every function that is definable in first-order logic. But it cannot be included in an actual programming language, because there would be no general way for an implementation to decide to return $0$ other than checking infinitely many cases.
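For contrast, the ordinary partial operator $\mu$ is trivially programmable; here is a sketch in Python (the function names are mine):

```python
# Partial minimization: smallest x with e(x) == 0, diverging if none exists.
from typing import Callable

def mu(e: Callable[[int], int]) -> int:
    """Return the least natural number x with e(x) == 0; loop forever if there is none."""
    x = 0
    while e(x) != 0:
        x += 1
    return x

print(mu(lambda x: 10 - x))    # prints 10
# mu(lambda x: x + 1) never returns, since x + 1 == 0 has no solution in N.
# A total mu-bar would have to return 0 here instead -- but no finite amount
# of testing can tell this case apart from one whose witness merely comes late.
```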


How do we know all this covers all computable functions?

This is one instance of the Church-Turing thesis. As long as we lack a precise, formalism-independent definition of "computable function", it can be said to be a matter of faith -- but it is a matter of practical experience that all attempts to define the computable functions have turned out to lead to equivalent concepts, at least those of them that did not turn out to be insufficiently expressive.

In particular, nobody has proposed a computing mechanism that appears to be practically realizable and cannot be expressed in (any given one of) those formalisms. So usually we just take our favorite formalism (they are all proved to be equivalent with impeccable rigor) to be the definition of "computable".


But I've come away looking for proof. How do we know all this covers all computable functions?

The Church-Turing hypothesis is a belief. So the commonly used term "computable function" mostly means: whatever can be computed in one of these models of computation.

As far as I understand it, at the heart of the classical computability models there is a mismatch in cardinality:

We have countably many programs (be they finite strings or natural numbers) and uncountably many machine functions (the assignments of outputs to possible inputs of a machine).

If you settle on a particular computation model, you assign some of the machine functions to programs. Due to the mismatch in cardinality, there are machine functions left over that are not assigned any program; these are the uncomputable functions.
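To spell the argument out (standard counting, my notation): programs are finite strings over a finite alphabet $\Sigma$, so there are only countably many of them, whereas already the total $0$/$1$-valued functions on $\mathbb N$ are uncountable: $$ |\Sigma^*| = \aleph_0 \qquad\text{but}\qquad \left|\{0,1\}^{\mathbb N}\right| = 2^{\aleph_0} > \aleph_0. $$ Concretely, given any countable list $f_0, f_1, f_2, \dots$ of such functions, the diagonal function $d(n) = 1 - f_n(n)$ differs from every $f_n$ at the argument $n$, so no countable list -- and in particular no assignment of programs to functions -- can cover them all.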

The classical models of computation deliver the same computable functions, because they share certain properties.

Is there an obvious branch of Mathematics for which this is not covered? Is there a shortcoming in Lambda Calculus, in that it has yet to cover some mathematical operations?

For example, there is a branch of analysis, computable analysis, which has its own models of computation. I vaguely remember that, with some representations of the reals, even multiplying a real number by 3 is not computable in that setting.
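A standard illustration of the kind of phenomenon presumably meant here (the details are mine): if real numbers are represented by their infinite decimal expansions, then $x \mapsto 3x$ is not computable. On an input that keeps reading $0.333\ldots$, the machine can never commit to the output's digit before the decimal point: a later input digit below $3$ forces $3x < 1$ (output starting with $0.$), a later digit above $3$ forces $3x > 1$ (output starting with $1.$), so no finite prefix of the input determines even the first digit of the output. With a better representation, such as fast-converging rational Cauchy sequences, multiplication is computable -- which is why computable analysis is so sensitive to the choice of representation.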

There is also a connection between constructivist mathematics and computation theory.

Maybe this question will attract an expert who can elaborate on this.