Why would the category of sets be intuitionistic?

You wrote:

Suppose our intuition for the phrase "subset of $X$" comes from the idea of having an effective total function $X \rightarrow \{0,1\}$ that returns an answer in a finite amount of time. In this case, the subsets of $X$ ought to form a Boolean algebra.

Unfortunately, this is not a workable intuition at all. If you insist that all subsets be classified by "effective total functions $X \to \{0,1\}$" then the subsets of $\mathbb{N}$ are not even closed under countable unions and countable intersections. Or to put it another way, you are not allowed to say "for all $n \in \mathbb{N}$" and "there exists $n \in \mathbb{N}$" in an unrestricted way. How then are we supposed to do any actual mathematics? For instance, number theorists would not even be allowed to state

"For every $n \in \mathbb{N}$ there is $k > n$ such that both $k$ and $k+2$ are prime."

without proving that there is a decision procedure which, on input $n$, decides whether there is a pair of twin primes above it. So, intuitionistic mathematicians never proposed such a view.

The idea that subsets be classified by termination of a partial effective procedure is equally unworkable; one just needs to go one level up. It allows statements of the form $\exists n \in \mathbb{N} . \phi(n)$ for decidable $\phi$, but not $\forall n \in \mathbb{N} . \phi(n)$.
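To make the asymmetry concrete, here is a minimal Python sketch (the name is mine) of a semidecision procedure for $\exists n . \phi(n)$: it halts exactly when a witness exists, and there is no analogous unbounded loop that could ever certify $\forall n . \phi(n)$.

```python
from itertools import count

def semidecide_exists(phi):
    """Semidecision procedure for ∃n.φ(n), with φ a decidable predicate:
    halts and returns the least witness if one exists, diverges otherwise."""
    for n in count():
        if phi(n):
            return n

# Halts, because a witness exists (7² = 49, 8² = 64).
print(semidecide_exists(lambda n: n * n > 50))  # → 8

# semidecide_exists(lambda n: False) would loop forever: termination
# itself is the only "yes" answer, and there is no finite "no" answer.
```

Dually, a single counterexample refutes a universal statement in finite time, but no finite amount of searching confirms it.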

General subsets are not classified by any kind of decision procedures, or semidecision procedures, or even procedures with access to oracles. You can always beat those with diagonalization to create more subsets. In computational terms, subsets should be thought of as being classified by completely arbitrary computational evidence. Then, among all the subsets some are classified by effective decision procedures (the decidable subsets), and some are classified by entirely uninformative computational evidence (those are the complemented subsets which are equal to their double complements).

Now, you ask why anybody would think of sets as intuitionistic objects. First of all, since you already subscribe to the idea that intuitionistic toposes make sense, you might be interested to know that every such topos begets an intuitionistic set theory, and vice versa; see *Relating first-order set theories and elementary toposes* by Awodey et al. In this sense, the distinction between intuitionistic set theories and intuitionistic toposes (and intuitionistic type theories) is not all that important: it is known that we may often (but not always) pass between them as desired.

There are also mathematical reasons for considering intuitionistic set theories rather than toposes. For example, in theoretical computer science people tried to figure out how to do synthetic domain theory in toposes, and there were always obstacles. In the end Alex Simpson switched to intuitionistic set theories to solve some of the problems. Speaking a bit vaguely, he needed the replacement axiom together with a non-trivial set $P$ with magical properties, such as containing its own function space $P^P$ as a retract. In classical set theory this is impossible for cardinality reasons, but in intuitionistic set theory one can have such sets (and replacement). In topos theory there is no replacement.

Apart from the good mathematical reasons for using intuitionistic set theories, one can also ask for philosophical or foundational reasons. I can only relate a personal view here. After having worked for some time with intuitionistic type theory and the internal languages of realizability toposes, I have definitely developed the idea that "bare sets", "bare types" and "bare objects" are not really just "bare collections of elements". They behave much more like topological spaces: the "intrinsic" topological structure of mathematical objects cannot be separated from the objects themselves. This point has been made by many people, but perhaps the most illuminating is the work of Martin Escardó, who puts the topological way of thinking to work doing surprising and interesting things in functional programming and type theory. Under this view the classical sets are just the "boring" intuitionistic sets: they are like the indiscrete spaces, and who would want to study those?

Once we subscribe to the idea that even the "bare sets" or "bare objects" may carry intrinsic structure, i.e., they are not just "Cantor's dust", then the double complement not being equal to the original is the norm rather than the exception. To give a couple of examples:

  1. Suppose we work in presheaves on a category with two objects and two parallel arrows, which is just the topos of directed graphs. The double complement of a subgraph is not the original graph, but rather the full graph induced on the vertices of the original subgraph.

  2. Suppose we work in a realizability topos, where an element of a subset carries computational evidence expressing why it is in the subset. Then an element $g$ of the subset $S = \{f : [0,1] \to \mathbb{R} \mid \exists x \in [0,1] . f(x) = 0\}$ of $[0,1] \to \mathbb{R}$ carries the computational evidence $\langle p, q \rangle$ where $p$ is a program for computing $g$ and $q$ is a program for calculating a real $a \in [0,1]$ such that $g(a) = 0$. An element of the double complement carries only $p$. Since $q$ cannot be algorithmically recovered just from $p$, the double complement is therefore not equal to the original.
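The first example, the pseudocomplement of a subgraph, can be computed directly. Here is a small Python sketch (restricted to simple directed graphs, with names of my own choosing): the pseudocomplement of $H$ in $G$ is the largest subgraph of $G$ disjoint from $H$, and applying it twice yields the full subgraph induced on the vertices of $H$.

```python
def pseudocomplement(G, H):
    """Pseudocomplement of a subgraph H = (Vh, Eh) inside G = (Vg, Eg):
    the largest subgraph of G sharing no vertex or edge with H, i.e. the
    vertices outside H together with all edges of G running between them."""
    (Vg, Eg), (Vh, _) = G, H
    V = Vg - Vh
    E = {(u, v) for (u, v) in Eg if u in V and v in V}
    return (V, E)

# Ambient graph on {1, 2, 3}; the subgraph H contains vertices {1, 2}
# but is missing the edge (2, 1) that runs between them.
G = ({1, 2, 3}, {(1, 2), (2, 1), (2, 3)})
H = ({1, 2}, {(1, 2)})

notH = pseudocomplement(G, H)         # ({3}, set())
notnotH = pseudocomplement(G, notH)   # ({1, 2}, {(1, 2), (2, 1)})
```

The double complement recovers the vertices of $H$ but fills in the edge $(2,1)$: it is the full induced subgraph, not $H$.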

We see here that the double complement acts as a sort of regularization operation. In fact, that is precisely what it is in the topological interpretation of intuitionistic connectives: the complement of an open set $U$ is its exterior, and the double exterior of $U$ is its regularization. Observing that complements are already "regular" (the complement of a subgraph is a full subgraph, the complement of a subset carries trivial computational evidence, the exterior of an open set is already regular), we come to expect that triple complements are the same as complements.
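A concrete instance on the real line: take an open interval with a point removed,

$$U = (0,1) \cup (1,2) \subseteq \mathbb{R}.$$

Its exterior (the interpretation of $\neg U$) is the interior of its complement,

$$\neg U = (-\infty, 0) \cup (2, \infty),$$

and the double exterior regularizes $U$ by filling in the missing point, while the triple exterior gives nothing new:

$$\neg\neg U = (0,2) \neq U, \qquad \neg\neg\neg U = \neg U.$$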


There are many answers that could be given, but I think the standard "algorithmically oriented" answer is the BHK/Curry-Howard interpretation, according to which a "subset $A$ of $X$" is specified by saying what a "proof of $x\in A$" means, for any $x\in X$, in such a way that whether or not something is such a proof is decidable (in finite time, algorithmically -- usually by a type-checking algorithm). This is totally separate from the question of whether or not we have an algorithm that can produce such a proof (either decidably or semidecidably).

In particular, the "complement" $A^c$ is defined by saying that a proof of "$x\in A^c$" is (by definition of $A^c$) a proof that "$x\in A$" leads to a contradiction. If we have a proof of $x\in A$, then by combining it with any proof of $x\in A^c$ we could get a contradiction, which is itself (by definition) a proof that $x\in (A^c)^c$; thus $A\subseteq (A^c)^c$. Since the "complement" is inclusion-reversing, we also have $((A^c)^c)^c \subseteq A^c$, hence $((A^c)^c)^c = A^c$.
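Under the propositions-as-types reading, this argument can be checked mechanically. A minimal sketch in Lean 4, treating the membership proposition "$x \in A$" abstractly as a proposition `A`:

```lean
-- A ⊆ (A^c)^c : a proof of A, combined with any refutation of A,
-- yields a contradiction, which is exactly a proof of ¬¬A.
example (A : Prop) : A → ¬¬A :=
  fun a notA => notA a

-- The triple complement collapses to the complement: ((A^c)^c)^c = A^c.
example (A : Prop) : ¬¬¬A ↔ ¬A :=
  ⟨fun nnn a => nnn (fun notA => notA a), fun notA nn => nn notA⟩
```

Note that the converse inclusion $(A^c)^c \subseteq A$ is not provable here: that is precisely the excluded middle.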

It's quite reasonable to object that this is not really a "complement". Sometimes it is called a "pseudocomplement". But whatever we call it, it's there.


There's a third intuition: our intuition for subsets comes from the notion of predicate — a subset $S \subseteq X$ corresponds to the predicate "$x \in S$" (for $x$ of type $X$).

The idea of subsets as characteristic functions $X \to \{ 0, 1 \}$ comes from ideas about semantics:

  • Predicates on $X$ correspond to truth-valued functions on $X$
  • $\{0, 1\}$ is the set of truth values

Of course, this only holds water if you already believe that truth is two-valued, which you won't if you doubt that the mathematical universe is classical.

The topological character of truth turns out to follow naturally from an idea about what implication means:

$P,Q \vdash R$ if and only if $P \vdash Q \to R$

Since we equate having two premises $P,Q$ with having a single premise $P \wedge Q$, it follows that the class of predicates on any object — and thus subsets of that object — is a Heyting algebra.
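The adjunction itself is just currying. Stated internally for propositions, it can be checked in Lean 4 (a sketch, with $P$, $Q$, $R$ as abstract propositions):

```lean
-- P, Q ⊢ R  iff  P ⊢ Q → R : the deduction rule is currying, which is
-- exactly the ∧ ⊣ → adjunction defining a Heyting algebra.
example (P Q R : Prop) : (P ∧ Q → R) ↔ (P → (Q → R)) :=
  ⟨fun h p q => h ⟨p, q⟩, fun h pq => h pq.1 pq.2⟩
```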

Since it's the very nature of Set that's under discussion, the Heyting algebra is automatically a complete Heyting algebra, and thus a locale.


All that said, my position is that foundations (and thus Set) are classical, and even if I were persuaded to someday go all-in with intuitionistic logic, it would be in the form of something like working exclusively in the internal language of a nonboolean topos.