Should an undergrad accept that some things don't make sense, or study the foundation of mathematics to resolve this?

Mathematics is really about relations between things. Therefore, while constructions are good and useful, you should never take them very seriously.

Construction agnosticism

Consider the real and complex numbers. Given $\mathbb R$, you can construct a ring which is isomorphic to $\mathbb C$ by taking pairs of real numbers and defining addition and multiplication in the usual way. Note here that I say that you can define a ring isomorphic to $\mathbb C$. We can ask the following question:

  • Is the ring we have defined actually the ring of complex numbers $\mathbb C$ itself?

But you shouldn't ask yourself that question. (Just because you can ask a question doesn't mean you should.) It won't do you any harm; it's just not a useful question to ask.

What really matters? That there is a ring which has all the properties that we want from the complex numbers — such a ring exists. Then we can say: let $\mathbb C$ be such a ring, and then study the maps $f: \mathbb C \to \mathbb C$.
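
For concreteness, here is one standard way to spell out the "usual way" of defining the operations on pairs mentioned above (a sketch; any ring satisfying these identities serves equally well):

$$(a,b) + (c,d) = (a+c,\; b+d), \qquad (a,b)\cdot(c,d) = (ac - bd,\; ad + bc).$$

Under these definitions $(0,1)^2 = (-1,0)$, so the pair $(0,1)$ plays the role of $i$; which set-theoretic object actually plays that role is exactly the kind of detail we don't care about.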

When we say "the" complex number field, the word "the" is a red herring; we just want to talk about some ring which works that way. Similarly, we don't really care about "the" real numbers. If $\mathbb C$ is "the" complex number field, then it contains a principal ideal domain $Z$ coinciding with the abelian group generated by the multiplicative identity; a quotient field $Q$ induced by $Z$; and a subfield $F$ which is the analytic completion of $Q$. We can then call these "the" real numbers. Just as we don't care if "the" complex numbers consist of ordered pairs of objects from some ring $S$, we don't care if "the" real numbers are the ring $S$ or some set of ordered pairs $(s,0)$ for $s \in S$ and $0$ the additive identity of $S$. It just doesn't matter — we just care about the proper subfield $F \subseteq \mathbb C$ which has all of the same properties as the real numbers, so we may as well adopt the convention that this subfield is the field of real numbers.

This applies to the integers as well. The von Neumann construction of the natural numbers makes $3 = \{ \varnothing, \{\varnothing\}, \{\varnothing,\{\varnothing\}\}\} $. Does this mean that $3$ "really is" a set which e.g. has the empty set as a member? Not really, because these ideas are totally irrelevant to anything we care about concerning the number $3$. We could consider any other "construction" of the natural numbers, under which $3$ might not be a set at all (for instance, in a set theory where the natural numbers are atoms); then it is not only irrelevant to consider the maps $f:3\to3$, but such maps would not even be defined. All we care about is that $3$ is part of a collection of objects $\mathbb N$ which forms a monoid with some specific properties. The "true identity" of $3$ is beside the point.
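
For reference, the von Neumann convention builds each natural number as the set of its predecessors, starting from the empty set (a sketch of the standard recursion):

$$0 = \varnothing, \qquad n + 1 = n \cup \{n\},$$

so that $1 = \{\varnothing\}$, $2 = \{\varnothing,\{\varnothing\}\}$, and $3 = \{0,1,2\}$ is exactly the set displayed above. Nothing about $\mathbb N$ as a monoid depends on this particular choice.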

Mathematical "interfaces" (in place of "foundations")

If this ambiguity bothers you, you can think of it axiomatically as follows: treat each of the sets we care about — such as $\mathbb N$, $\mathbb Z$, $\mathbb Q$, $\mathbb R$, and $\mathbb C$ — as underspecified objects, where we specify all of the properties about them that we could care about, and only those properties.

  • von Neumann's construction of ordinals describes a certain countable well-ordered monoid: we don't define $\mathbb N$ to be that monoid, but merely say that it is isomorphic to it, leaving further details to be filled in later.

  • From equivalence classes of ordered pairs of elements of $\mathbb N$, you can define an ordered ring $Z$, which contains a monoid $M \cong \mathbb N$ and which is the closure of $M$ under differences (the usual construction is sketched after this list). Now, $\mathbb N$ can never be a subset of this set of equivalence classes; but it can be a subset of some other set. We never pretended to characterize precisely what object $\mathbb N$ is, so who is to say that $\mathbb N$ is not itself contained in a ring which is isomorphic to $Z$? Nobody, that's who; without loss of generality we may define $\mathbb Z$ to be a ring isomorphic to $Z$, and declare as a refinement of the earlier specification that in fact $\mathbb N$ is contained in $\mathbb Z$.

  • We may similarly declare that $\mathbb Z \subseteq \mathbb Q$, where $\mathbb Q$ is isomorphic to the ring of equivalence classes of ordered pairs over $\mathbb Z$ in the usual way. We also declare that $\mathbb Q \subseteq \mathbb R$, where $\mathbb R$ is isomorphic to the field of Dedekind cuts over $\mathbb Q$, or (equivalently) to the field of equivalence classes of Cauchy sequences, or any of the typical constructions of the real numbers. The set $\mathbb R$ isn't defined to be any particular one of these constructions, because (a) any of these constructions is as good as the others, and (b) we don't really care about any of the details lying underneath any of the constructions, so long as the properties we care about hold for each.
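
To make the second bullet concrete, one standard sketch of that ring of equivalence classes goes as follows: declare $(a,b) \sim (c,d)$ exactly when $a + d = b + c$ (so the class of $(a,b)$ plays the role of the difference $a - b$), and define

$$[(a,b)] + [(c,d)] = [(a+c,\; b+d)], \qquad [(a,b)]\cdot[(c,d)] = [(ac + bd,\; ad + bc)].$$

The map $n \mapsto [(n,0)]$ embeds a copy of $\mathbb N$ into this ring; the "interface" view simply declines to insist that this particular copy is $\mathbb N$ itself.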

You should think of these refinements as axioms which we add during the process of doing mathematics. A definition is in the first place only an axiom: one which defines a constant, such as defining $\varnothing$ by asserting $\forall x:\neg(x\in\varnothing)$. These mathematical underspecifications — mathematical interfaces — are also axioms: having proven that a certain sort of monoid satisfying the Peano axioms exists, we assert that $\mathbb N$ is such a monoid, saying nothing more until it suits us to; and similarly we declare that $\mathbb C$ is a field of a kind which we've proven exists, and which happens to contain the field $\mathbb R$ which we mentioned previously without quite defining it completely.
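
For instance, the partial specification of $\mathbb N$ alluded to here can be written as a handful of axioms (one standard presentation of the Peano axioms, sketched rather than fully formalized):

$$0 \in \mathbb N, \qquad s : \mathbb N \to \mathbb N \text{ is injective}, \qquad 0 \notin s(\mathbb N),$$

together with induction: any subset of $\mathbb N$ containing $0$ and closed under $s$ is all of $\mathbb N$. Nothing in these axioms says what the elements of $\mathbb N$ "are", which is exactly the point.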

Fundamentally, this approach to mathematics is not really all that different from what we usually do: it merely replaces complete descriptions of objects (what Bertrand Russell would call simply "a description") with partial descriptions. But pragmatically, in the real world as in mathematics, partial descriptions tend to be all that we care about (and in the real world, they are all that we ever have access to). Embracing this allows you to focus on what really matters.

If you are a mathematical "realist", for whom the real numbers have an identity separate from our descriptions of them and some fixed location in the mathematical firmament, this sort of wishy-washiness as to the "exact identity" of these objects may bother you. After all, if you imagine the possible identities of the objects $\mathbb N$, $\mathbb Z$, $\mathbb R$, etc. as you subsume them into more and more complicated objects, it would seem that the set of objects with which a set such as $\mathbb N$ could be identified recedes to infinity as our mathematical framework grows more elaborate. To this I can only say, "so much the worse for realism". If you want the freedom to construct objects and only concern yourself with the relationships that matter, in the end it is better to abandon this preoccupation with the precise identity of a mathematical object, and engage in mathematics as the creative, descriptive, and above all incomplete and ongoing endeavor that it is.


It really depends on what kind of things you are talking about.

You certainly shouldn't accept theorems as true just because someone tells you they hold - especially not at an undergraduate level. Any good math curriculum should start with the very basics (say, natural numbers), and work its way up from there, proving every theorem that is encountered as the curriculum progresses.

What you seem to be struggling with, however, doesn't seem to be a particular theorem, but rather notation. There, my advice is to just not take notation too seriously.

Take $\mathbb{R}$ for example. People will often say "... the set of real numbers $\mathbb{R}$...". That makes it sound as if there were exactly one set of real numbers, and that set, and nothing else, went by the name of $\mathbb{R}$. The problem with this is - it isn't true. There are different ways to construct the real numbers (as Dedekind cuts, as sequences of rationals, as infinite decimals, ...), and strictly speaking all of these constructions yield a different set. After all, the set of all possible sequences of rationals certainly isn't the same as the set of all infinite decimals, which in turn isn't the same as the set of all Dedekind cuts of the rationals. But you can define an ordering and the basic arithmetic operations on all of these sets, and then find bijections between these sets which are compatible with these operations and the ordering. Thus, for all intents and purposes, it doesn't matter which of these sets of real numbers you pick, and hence $\mathbb{R}$ really stands for any of them, rather than for one particular one.
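
To spell out "compatible" here: if $R_1$ and $R_2$ are two of these constructions, what one finds is a bijection $\varphi : R_1 \to R_2$ satisfying (a standard statement, not tied to any particular construction)

$$\varphi(x + y) = \varphi(x) + \varphi(y), \qquad \varphi(x y) = \varphi(x)\,\varphi(y), \qquad x \le y \iff \varphi(x) \le \varphi(y).$$

Any statement made using only the field operations and the order is then true in $R_1$ exactly when it is true in $R_2$, which is why the choice of construction doesn't matter.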

What you have discovered is that there's another way to arrive at $\mathbb{R}$ - you can start with $\mathbb{C}$, and view $\mathbb{R}$ as a subset. Now, it's true that, strictly speaking, $\mathbb{R}$ cannot be that subset if you also want $\mathbb{C}$ to be the set $\mathbb{R}^2$. But if you view $\mathbb{R}$ as just something that stands for any representation of the real numbers, that problem goes away. $\mathbb{C} = \mathbb{R}^2$ then just tells you that one way to view $\mathbb{C}$ is as the set of pairs of real numbers, and $\mathbb{R} \subset \mathbb{C}$ tells you that one way to view $\mathbb{R}$ is as the set of complex numbers with imaginary part zero.

So again - don't take notation too seriously. It's just a tool used to communicate ideas, just as any language is. And like any other language, it's sometimes ambiguous, and sometimes a bit confusing. The trick is to try to see beneath that.


Since you seem to be dealing with complex analysis currently, here's a list of things (off the top of my head, surely incomplete, and in no particular order) that I think are worthwhile to think about and try to understand:

  • What does $f$ being holomorphic mean geometrically? (Compare to the situation in $\mathbb{R}^2$.)
  • Why $z^n$ has an antiderivative for every integer $n$ except $n=-1$. Watch how that single fact shapes the whole subject (cf. Laurent series).
  • The exponential function and logarithms, especially the fact that $z \mapsto e^z$ is not injective over $\mathbb{C}$ (see the identity sketched after this list).
  • What does exponentiation mean over $\mathbb{C}$? $z^n$ for $n \in \mathbb{Z}$ is clear enough, but what about $z^q$ for $q \in \mathbb{Q}$?
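
On the third bullet, the non-injectivity comes down to the periodicity of the exponential (a standard identity worth keeping in mind):

$$e^{z + 2\pi i} = e^z \quad \text{for all } z \in \mathbb{C},$$

so $e^z = e^w$ whenever $z - w \in 2\pi i\,\mathbb{Z}$, and consequently the logarithm is only defined up to a multiple of $2\pi i$: $\log z = \ln|z| + i\,(\arg z + 2\pi k)$ for $k \in \mathbb{Z}$. That ambiguity is also what makes $z^q = e^{q \log z}$ genuinely multi-valued for non-integer $q$, which bears on the last bullet.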

Aaaaauaauaghaguaghaguaghaguahguaghaughaguahguahguahgh!

Sorry.

I don't mean this as an insult, so please don't take this the wrong way, but your entire approach is precisely backwards. To be fair, just from the way you express yourself, it sounds like you do actually have a pretty decent understanding of what you're studying, and I'm thrilled to hear that you think it's "fun" to think about the underlying concepts. (Yes! It is fun! That's why we do it! I mean, yeah, it's also useful for science and stuff, but pure math is art, not just a means to an end.) So while you may have exaggerated your inclination to just "gloss over" the core of your studies rather than plumbing their depths, I'm still going to put forth my argument for why I disagree with the approach you described.

As a student, even though you're paying for a degree, your goal should be to understand mathematics--and therefore your attempt to understand the underlying theory should never be a last-minute "if I have time" activity. This is true regardless of whether you're an undergraduate or not. It seems both dangerous and unfulfilling to assume that the "tricks" are the key to "how mathematics is done," while true understanding is only in the purview of "advanced" mathematicians (whether that means graduate students, post-docs, professors, or whatever).

Instead of remembering "tricks" and procedures to solve particular problems, try to figure out why each trick works, then try to re-derive the trick yourself. If you really understand the core concepts, then in theory you should be able to take a test without having memorized anything; you can simply re-derive every problem-solving method you need. (In fact, when I was an undergraduate in mathematics I used to tell people that I'd chosen that major because I hated memorization, and memorization was generally useless for the way I took tests.) In practice, you'll probably find that the "tricks" are actually easier to remember once you understand them in this way, and if you "mostly" remember a trick but don't quite remember some details (for instance, the order of a couple terms, or whether an operation is an addition or a subtraction), you can quickly figure out the missing parts and then proceed with confidence.

Regarding your particular question about the real and complex numbers considered as formal sets, I hope you've found your discussion in the comments with Malice Vidrine enlightening; I don't know that I have much to add. (Though personally I do consider $\mathbb{R}$ to be a subset of $\mathbb{C}$, which should add a bit of weight to Vidrine's statement that the structure of the thing matters more than the formal definition.)