Why worry about commutativity but not associativity in The Fundamental Theorem of Arithmetic?

Writing $a(bc)$ is no different from writing $(ab)c$, or any other variant of this form.

What you describe can occur abstractly, just not in $\mathbb{Z}$. Magmas and quasigroups do not insist upon associativity, and in such settings a unique factorization need not exist. But associativity of multiplication (and also of addition) holds for all rings - it is an axiom. So your conclusion that $abc$ factors in two genuinely different ways, as $a(bc)$ and as $(ab)c$, does not follow. Think of multiplication as a binary function from $\mathbb{Z}\times\mathbb{Z}$ to $\mathbb{Z}$, not as the formation of a word. Only notationally is $a(bc)$ an element of $\mathbb{Z}$, in that it evaluates to a single $z\in \mathbb{Z}$. Intrinsically, as a symbol, $a(bc)$ is a function application waiting for evaluation. This is why we write $(ab)c=a(bc)$; both expressions evaluate to the same element.
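To make this concrete, here is a tiny Python sketch (the name `mul` is just illustrative) that treats multiplication as the binary function and checks that the two expressions $(ab)c$ and $a(bc)$ evaluate to the same element:

```python
# Multiplication as a binary function Z x Z -> Z; "(ab)c" and "a(bc)" are two
# different expressions that evaluate to the same integer.
def mul(x, y):
    return x * y

a, b, c = 2, 3, 5
assert mul(mul(a, b), c) == mul(a, mul(b, c)) == 30
```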

I don't know if your complaint is with the notion or the notation, but if you follow what I'm saying, you should try to construct a ring-like structure whose multiplication is not associative, to help you grasp why this works in $\mathbb{Z}$. Try looking at the octonions - Polish notation (or any notation) won't repair factorization in that situation.
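If you want to experiment, here is a rough Python sketch of the Cayley-Dickson doubling construction (using one common sign convention; the helper names are ad hoc, not from any library) that builds octonions as nested pairs of reals and exhibits a triple of basis elements for which $(xy)z \neq x(yz)$:

```python
# Rough sketch of the Cayley-Dickson doubling construction.  Numbers are
# nested pairs built up from Python ints; `conj`, `neg`, `add`, `mul` and
# `octonion` are ad hoc names.

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    # Conjugation: identity on reals, (a, b)* = (a*, -b) on pairs.
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    # One common Cayley-Dickson convention: (a,b)(c,d) = (ac - d*b, da + bc*).
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def octonion(c):
    # Pack 8 real coefficients into nested pairs: octonion = pair of
    # quaternions = pair of pairs of complex numbers = pairs of reals.
    return (((c[0], c[1]), (c[2], c[3])), ((c[4], c[5]), (c[6], c[7])))

e1 = octonion([0, 1, 0, 0, 0, 0, 0, 0])
e2 = octonion([0, 0, 1, 0, 0, 0, 0, 0])
e4 = octonion([0, 0, 0, 0, 1, 0, 0, 0])

print(mul(mul(e1, e2), e4))   # +e7 with this convention ...
print(mul(e1, mul(e2, e4)))   # ... but -e7 here: (xy)z != x(yz)
```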


"Uniquely" means that there is exactly one way to write an integer as a $k$-ary product of primes (up to permutation of the factors).

Since, thanks to associativity, all placements of parentheses give the same product, it does not matter which nesting of the binary operation one uses to define the $k$-ary product.

One symmetric way to think about it is to define the $k$-ary product as an equivalence class of all these expressions.

If you insist on Polish notation, then we get, say, $30 = {*_3}\ 2\ 3\ 5$, where $*_3$ denotes the ternary multiplication operator.
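For what it's worth, here is a tiny Python sketch (the name `product_kary` is just illustrative) of such a $k$-ary product, defined by folding the binary product; associativity is exactly what makes the choice of fold immaterial:

```python
# A k-ary product defined by folding the ordinary binary product.
# By associativity, any other bracketing of the factors gives the same value.
from functools import reduce

def product_kary(*factors):
    return reduce(lambda x, y: x * y, factors, 1)

# "30 = *_3 2 3 5" in Polish notation:
assert product_kary(2, 3, 5) == (2 * 3) * 5 == 2 * (3 * 5) == 30
```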


Note: This answer was merged from another question, so it may not be completely consistent with the question above.

By the associativity and commutativity of multiplication we can normalize prime products by right-associating brackets and ordering primes least-first. Now unique factorization amounts to

$$\rm 2^{u_0} (3^{u_1} (5^{u_2} \cdots p_k^{u_k}))\ =\ 2^{v_0} (3^{v_1} (5^{v_2} \cdots p_k^{v_k}))\ \ \Rightarrow\ \ u_i = v_i,\ \ i = 0,\ldots,k $$

Equivalently, the multiplicative monoid of positive integers is freely generated by the primes, i.e. it is isomorphic to the free commutative monoid of "exponent vectors" in $\mathbb N^{\mathbb N}$: if $\rm\:v = (v_0,v_1,\ldots)\in \mathbb N^{\mathbb N}$ is a sequence of naturals with finite support, then the monoid map $\rm\:v\mapsto 2^{v_0} (3^{v_1} (5^{v_2} \cdots ))\:$ yields an isomorphism $\rm\ (\mathbb N^{\mathbb N},\: +)\:\cong (\mathbb P\:,\: \cdot)\:.\:$ That this map is onto means the primes are generators, i.e. existence of prime factorizations; that it is $1$ to $1$ is the inference displayed above - that there are no nontrivial multiplicative relations between the generators, i.e. uniqueness of prime factorizations.
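To make the isomorphism tangible, here is a rough Python sketch of the two directions of the map (the helpers `primes`, `to_integer` and `to_exponents` are my own illustrative names, not library functions), together with the fact that multiplying integers corresponds to adding exponent vectors:

```python
from itertools import zip_longest

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for small examples)."""
    found, n = [], 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def to_integer(v):
    """Exponent vector (v0, v1, v2, ...) -> 2^v0 * 3^v1 * 5^v2 * ..."""
    n = 1
    for p, e in zip(primes(), v):
        n *= p ** e
    return n

def to_exponents(n):
    """Inverse map: the exponent vector of n >= 1, truncated at its support."""
    v = []
    for p in primes():
        if n == 1:
            return tuple(v)
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        v.append(e)

# Existence and uniqueness of prime factorization say these maps are mutually
# inverse; multiplying integers adds the corresponding exponent vectors.
assert to_integer((3, 2, 1)) == 360 and to_exponents(360) == (3, 2, 1)
assert to_exponents(12 * 10) == tuple(a + b for a, b in
        zip_longest(to_exponents(12), to_exponents(10), fillvalue=0))
```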

Your question has to do not with the above semantics of unique factorization but rather with the syntactic issue of how one chooses to represent terms of (free) monoids. There are of course various possibilities. Monoid terms may be represented as strings, multisets, or bags, depending on what is convenient for man or machine. But these low-level representational details have little to do with the high-level concepts. As I stressed in your prior questions, if one spends too much time dwelling on such low-level representational matters then one risks missing the forest for the trees.
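For instance, one concrete bag/multiset representation (here via Python's `collections.Counter`, just one of many possible choices) in which neither order nor bracketing ever appears:

```python
# A multiset/bag of prime factors; order and bracketing simply never appear.
from collections import Counter

f12 = Counter({2: 2, 3: 1})        # 12 = 2*2*3
f30 = Counter({2: 1, 3: 1, 5: 1})  # 30 = 2*3*5

# Multiplying the integers corresponds to adding the bags
# (multiset union with multiplicities):
assert f12 + f30 == Counter({2: 3, 3: 2, 5: 1})   # 360 = 2^3 * 3^2 * 5
```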

Associative normalization is indeed built into the representation - whether it be the notation employed by humans or the bags/multisets employed by machines. But that's not a flaw; rather, it's a feature. There is no need to speak of different associations of products when working in monoids because associativity holds universally in monoids by hypothesis (axiom). This universal normalization is done once and for all so that one can focus on the essence of the matter. Associativity of multiplication is no more of a concern in monoid expressions than is associativity of addition in polynomial expressions. The same holds true even in real-world contexts. Remote control manuals don't say anything about the associativity of a string of button presses because there is no need to. It is assumed by the context that $\rm(A\ B)\ C = A\ (B\ C)$. Hence it makes no difference whether one places $\rm(A\ B)$ onto a macro button $\rm D$ and then executes $\rm D\ C$, or places $\rm (B\ C)$ onto a macro button $\rm E$ and then executes $\rm A\ E$. Such hypotheses are built in by default in many contexts - whether rigorously or informally.

Compare the above to the reply you'd receive by phoning customer support for your DVR and asking them the analogous question about the associativity of button presses. Being a philosopher, you may find that exercise more interesting than this one. Their informal explanations may reveal more about such epistemological matters than any replies here.