Why would some elementary number theory notes exclude 0|0?

Some books may avoid allowing $a=0$ because it is in some ways natural to translate the statement "$a\mid b$" to the statement "$\frac{b}{a}$ is an integer," which of course makes no sense when $a=0.$ Some books may avoid the $0\mid 0$ case because we do not have uniqueness of factorization of $0$ up to units, as we do with all other integers. Some books may avoid the $a=0$ case because $0\mid b$ if and only if $b=0,$ so it isn't very interesting. I contend that it would be better to allow $a=0$ (for full generality, if not full "interestingness") and simply point out that such translations as described above may be ill-formed, and that factorizations of $0$ are necessarily far from unique.

As a side note, the relation $a\mid b$ can be shown to be a (non-strict) partial order relation on the non-negative integers if (and only if) we allow $0\mid 0.$ For more on that, see this excellent answer.
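To make the claim concrete, here is a small brute-force check in Python (an illustration over a finite range, not a proof); the helper `divides` and the bound `N = 30` are my own choices:

```python
# Brute-force check that "a divides b", with 0 | 0 allowed, behaves as a
# non-strict partial order on a finite range of non-negative integers.

def divides(a, b):
    """True iff there exists an integer c with b == a * c (so 0 | 0 holds)."""
    if a == 0:
        return b == 0
    return b % a == 0

N = 30
nums = range(N)

# Reflexivity: a | a for every a, including a == 0.
assert all(divides(a, a) for a in nums)

# Antisymmetry (on non-negative integers): a | b and b | a imply a == b.
assert all(a == b for a in nums for b in nums
           if divides(a, b) and divides(b, a))

# Transitivity: a | b and b | c imply a | c.
assert all(divides(a, c) for a in nums for b in nums for c in nums
           if divides(a, b) and divides(b, c))

print("reflexive, antisymmetric and transitive on range({})".format(N))
```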


There is no mathematical reason to deny the truth of $0\mid 0$ (and I don't think that even those who would not write it would explicitly write $0\nmid0$ either), but there might be a linguistic one: the symbol is pronounced "divides", and I think most people would avoid saying "$0$ divides $0$" because we all know one cannot divide $0$ by $0$. However, if the symbol were pronounced "has as multiple" (which in all other circumstances means the same thing), then I think there would be little opposition to writing $0\mid 0$.

There is one occasion where forbidding $0\mid 0$ would have a slight advantage (though I don't personally think it is a sufficient argument): if in number theory one writes $b/a$ (not $\frac ba$) for the exact quotient of $b$ by $a$ (as one often does), then use of this notation of course needs to be justified by showing that $a\mid b$; if one admits $0\mid 0$, however, one must also show (separately) that $a\neq0$. If the meaning of $a\mid b$ is strengthened to also imply $a\neq0$, that separate check becomes superfluous. And there is the practical point that the rules for divisibility seldom allow a left argument to become zero if there was not already one before: the "additive law" $a\mid b\land a\mid c\implies a\mid b+c$ (which may produce $0$ in the second argument) has no symmetric counterpart for the first argument.
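As a rough sketch of that notational point, here is one way it could look in code; the names `divides_weak`, `divides_strict` and `exact_quotient` are mine, chosen purely for illustration:

```python
# If a | b is taken to imply a != 0, the exact quotient b / a needs only one
# justification; with the inclusive convention, a separate check a != 0 is needed.

def divides_weak(a, b):
    """a | b in the inclusive sense: b == a * c for some integer c (0 | 0 holds)."""
    return b == 0 if a == 0 else b % a == 0

def divides_strict(a, b):
    """a | b in the strengthened sense: additionally a != 0."""
    return a != 0 and b % a == 0

def exact_quotient(b, a):
    """The integer b / a, legitimate only when a | b in the strict sense."""
    assert divides_strict(a, b), "b / a is not defined"
    return b // a

print(exact_quotient(12, 3))   # 4
print(divides_weak(0, 0))      # True, yet exact_quotient(0, 0) would be rejected
```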

I just looked up what is done in Concrete Mathematics (whose authors are particularly meticulous about notation and precise definitions), and it is a very interesting case: not only do they use a different symbol for the divisibility relation, namely $m\backslash n$ (adding immediately that '$m\mid n$' is much more common), they also give it a particular definiens: $m>0$ and $n=mk$ for some integer $k$ (equation (4.1)). So not only do they deny that $0$ divides anything, they also deny this (at least with their notation) to all negative integers. To palliate this strictness, they then define the relation "$n$ is a multiple of $m$" (without any special symbol) as the same thing but without the condition that $m>0$; thus non-positive numbers have multiples, even though they do not divide anything.

Their convention is clearly an opportunistic one (it would be impossible to follow suit when extending the notation to, say, arbitrary integral domains), but I think I can see the main advantage: it allows omitting the word "positive" that would otherwise almost always be needed when talking about divisors of numbers. They can for instance define prime numbers as the positive numbers that have exactly $2$ divisors; this would require a second use of "positive" (or replacing $2$ by $4$) if negative divisors were admitted. The fact that the authors seem to live happily with their convention throughout the book indicates that it is not hard to ensure that the first argument in a divisibility relation is not only always nonzero, but in fact always positive; the explanation is probably more or less the same as the one I invoked above (no additive structure for divisors).
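A small sketch of the two relations side by side may make the contrast clearer; the function names below are mine (the book, of course, contains no code), and `is_prime` follows their "exactly $2$ divisors" definition:

```python
# "m \ n" in the Concrete Mathematics sense requires m > 0, while
# "n is a multiple of m" drops that positivity condition.

def cm_divides(m, n):
    """m \\ n as in Concrete Mathematics (4.1): m > 0 and n == m*k for some integer k."""
    return m > 0 and n % m == 0

def is_multiple_of(n, m):
    """n is a multiple of m: n == m*k for some integer k (no positivity condition)."""
    return n == 0 if m == 0 else n % m == 0

def is_prime(p):
    """With the m > 0 convention: a positive number with exactly 2 divisors."""
    return p > 0 and sum(1 for d in range(1, p + 1) if cm_divides(d, p)) == 2

print([p for p in range(20) if is_prime(p)])     # [2, 3, 5, 7, 11, 13, 17, 19]
print(cm_divides(0, 0), is_multiple_of(0, 0))    # False True
print(cm_divides(-2, 6), is_multiple_of(6, -2))  # False True
```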

As a side note, this convention does require leaving $\gcd(0,0)$ undefined (in contrast with what most answers to this question say, which indicate that putting $\gcd(0,0)=0$ is perfectly in line with the interpretation of $\gcd$ in terms of the divisibility poset): not only do $0$ and $0$ have no greatest common divisor in the usual order (which can be remedied by interpreting "greatest" with respect to divisibility), but worse, the value $0$ is not even among the common divisors in the first place. Again it is remarkable that one is almost never led to a practical use of $\gcd(a,b)$ where there is a serious risk of inadvertently having $(a,b)=(0,0)$, even though having one of $a,b$ accidentally be zero is hard to avoid.
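To illustrate, here is a rough comparison; `common_divisors` is a hypothetical helper of mine that only searches a small window, while `math.gcd` is Python's standard library function (which does return $0$ for $\gcd(0,0)$):

```python
# Under the divisibility-poset view, gcd(0, 0) = 0 is the natural value;
# under the Concrete Mathematics convention, 0 divides nothing, so 0 is not
# even a common divisor of (0, 0) and no greatest common divisor exists.

import math

def common_divisors(a, b, divides):
    """Common divisors of a and b under a given divisibility predicate,
    listed within a small finite window (enough for the examples below)."""
    bound = max(abs(a), abs(b), 1) + 1
    return [d for d in range(bound) if divides(d, a) and divides(d, b)]

inclusive = lambda d, n: (n == 0 if d == 0 else n % d == 0)  # 0 | 0 allowed
cm_style  = lambda d, n: d > 0 and n % d == 0                # requires d > 0

print(math.gcd(0, 0))                     # 0
print(common_divisors(0, 0, inclusive))   # [0, 1]: 0 is a common divisor, and the
                                          # greatest one with respect to divisibility
print(common_divisors(0, 0, cm_style))    # [1]: 0 is missing, and the full set of
                                          # common divisors (all positive integers)
                                          # has no greatest element
print(common_divisors(12, 18, cm_style))  # [1, 2, 3, 6]
```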


I see no reason whatsoever for excluding $0\mid 0$. The “divisibility” relation on the natural numbers $$ a\mid b \quad\textit{for}\quad \text{there exists $c$ such that $b=ac$} $$ is an order relation. I find the similarity with $$ a\le b \quad\textit{for}\quad \text{there exists $c$ such that $b=a+c$} $$ very appealing. The two operations define in the same fashion two quite different order relations; the latter is total, the former isn't.

More than that, we get a lattice, with $\gcd$ as meet and $\operatorname{lcm}$ as join, whose minimum is $1$ and whose maximum is $0$.
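A quick sanity check of that claim, under the standard reading of the divisibility lattice with $\gcd$ as meet and $\operatorname{lcm}$ as join (the helper `lcm` below is written out so that $\operatorname{lcm}(a,0)=0$):

```python
# Check that 1 behaves as the minimum and 0 as the maximum of the
# divisibility lattice on non-negative integers: 1 | a | 0 for every a,
# gcd(a, 0) = a, lcm(a, 0) = 0, gcd(a, 1) = 1 and lcm(a, 1) = a.

from math import gcd

def lcm(a, b):
    """Least common multiple as the join of the divisibility lattice."""
    return 0 if a == 0 or b == 0 else a * b // gcd(a, b)

def divides(a, b):
    return b == 0 if a == 0 else b % a == 0

for a in range(50):
    assert divides(1, a) and divides(a, 0)    # 1 is the bottom, 0 is the top
    assert gcd(a, 0) == a and lcm(a, 0) == 0  # meeting/joining with the top element
    assert gcd(a, 1) == 1 and lcm(a, 1) == a  # meeting/joining with the bottom element

print("1 | a | 0 holds for all a in range(50)")
```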

Since $a\mid 0$ is already used (the Euclidean algorithm terminates precisely when we arrive at a pair of the form $(r_n,0)$, which shows that the greatest common divisor is the last non-zero remainder), I don't see why $0\mid 0$ should be excluded from consideration. The case of $\gcd(0,0)$ is subsumed in the analysis of the cases $\gcd(a,a)=a$ and $\gcd(a,0)=a$, and the algorithm proper is applied only for computing $\gcd(a,b)$ with $a>b>0$.
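A bare-bones version of the algorithm makes this visible; the following Python sketch is illustrative, not anyone's canonical implementation:

```python
# A textbook Euclidean algorithm already relies on a | 0: it stops at a pair
# (r_n, 0) and reports r_n. Written this way, gcd(a, a), gcd(a, 0) and even
# gcd(0, 0) all come out without any special case -- gcd(0, 0) = 0 simply
# because the pair already has the form (r, 0) at the start.

def euclid_gcd(a, b):
    """Greatest common divisor of non-negative a, b by repeated remainders."""
    while b != 0:
        a, b = b, a % b
    return a  # the last non-zero remainder (or 0 when both inputs were 0)

print(euclid_gcd(252, 198))  # 18
print(euclid_gcd(7, 0))      # 7: the case gcd(a, 0) = a
print(euclid_gcd(0, 0))      # 0: no special treatment needed
```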

Yes, students frequently confuse $a\mid b$ with $\frac{b}{a}$, but it is the teacher's job to avoid this confusion: using $0\mid 0$ as well can help.