Very Large Cardinal Axioms and the Continuum Hypothesis

It is folklore (I could never find a source for the results) that $I_0$, $I_1$, and $I_2$ cardinals are all indestructible by small forcing, meaning that we can force ${\rm CH}$ over a universe with such a large cardinal without destroying it. I have the argument for $I_0$ written up in lecture notes here. It is basically Noah's argument with a few additional details.
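For readers who want the axioms spelled out, here is a rough recollection of their standard formulations (see Kanamori's The Higher Infinite for the official statements; in each case $\lambda$ is the supremum of the critical sequence $\kappa, j(\kappa), j(j(\kappa)),\ldots$):
$$\begin{aligned}
I_2:&\ \text{there is a nontrivial elementary } j\colon V\to M \text{ with } \operatorname{crit}(j)=\kappa<\lambda \text{ and } V_\lambda\subseteq M;\\
I_1:&\ \text{there is a nontrivial elementary } j\colon V_{\lambda+1}\to V_{\lambda+1};\\
I_0:&\ \text{there is a nontrivial elementary } j\colon L(V_{\lambda+1})\to L(V_{\lambda+1}) \text{ with } \operatorname{crit}(j)<\lambda.
\end{aligned}$$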

Hamkins showed here that ${\rm GCH}$ can be forced over a universe with an $I_1$-cardinal without destroying it, and very recently Dimonte and Friedman here showed that ${\rm GCH}$ can be forced without destroying an $I_0$, $I_1$, or $I_2$ cardinal. Indeed, they showed much more generally that for any weakly increasing class function $F$ on the regular cardinals satisfying $\text{cf}(F(\alpha))>\alpha$ and such that $F\upharpoonright\lambda$ is definable over $V_\lambda$ (where $\lambda$ comes from the definition of the $I_0$, $I_1$, or $I_2$ cardinal), the continuum pattern $2^\alpha=F(\alpha)$ on the regular cardinals can be forced while preserving the large cardinal. Thus, almost any naturally defined continuum pattern on the regular cardinals, such as, say, $2^\kappa=\kappa^{++}$ for every regular $\kappa$, is consistent with these large cardinals.
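To fix ideas, here is my paraphrase of the hypotheses on $F$ (a restatement, not a quotation of the Dimonte-Friedman theorem):
$$\begin{aligned}
&\alpha\le\beta \implies F(\alpha)\le F(\beta) &&\text{($F$ weakly increasing on the regular cardinals),}\\
&\operatorname{cf}(F(\alpha))>\alpha &&\text{(Easton's cofinality requirement),}\\
&F\upharpoonright\lambda \text{ is definable over } V_\lambda &&\text{(local definability at the relevant $\lambda$).}
\end{aligned}$$
The first two are exactly Easton's classical conditions on a possible continuum function; the extra ingredient here is the local definability of $F$ below $\lambda$.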


Here's a brief sketch of why, if $ZFC+I_0$ is consistent, then so is $ZFC+CH+I_0$. (This is just Levy-Solovay.)

Suppose $\lambda$ is $I_0$; that is, there is a nontrivial elementary embedding $j$ of $L(V_{\lambda+1})=M$ into itself with critical point $\kappa<\lambda$. Note that the critical point $\kappa$ of $j$ must be much larger than $2^{\aleph_0}$ (it is, for instance, inaccessible, indeed measurable), and hence that the usual poset $\mathbb{P}$ forcing $CH$ is in $V_\kappa$. This means that $$j(\mathbb{P})=\mathbb{P}\quad\text{and}\quad j\upharpoonright \mathbb{P}=\mathrm{id}_\mathbb{P}.$$ So, taking $G$ a $\mathbb{P}$-generic filter, we have $j''G=G$, and we can lift the embedding $j:M\rightarrow M$ to an embedding $j^+: M[G]\rightarrow M[G]$ extending $j$. It's now straightforward to check that this $j^+$ witnesses that $\lambda$ is $I_0$ in $V[G]$; in particular, $M[G]=L(V_{\lambda+1})^{V[G]}$, because $G$ is of small rank.
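In a bit more detail, the lift is given by the usual name-evaluation rule (this is just the standard Levy-Solovay computation written out):
$$j^+(\sigma_G)\;=\;j(\sigma)_G\qquad\text{for every }\mathbb{P}\text{-name }\sigma\in M.$$
This is well-defined and elementary: if some $p\in G$ forces $\varphi(\sigma)$ over $M$, then by elementarity $j(p)=p$ forces $\varphi(j(\sigma))$, and since $p\in G$ we get $M[G]\models\varphi(j(\sigma)_G)$.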

I've written this hastily; please let me know if I screwed something up. EDIT: See Victoria Gitman's comment below.


As to GCH, as I stated in my comment, I think this is likely to require inner model theory for the relevant cardinal, which lies well above the current limits of inner model theory. But I may be wrong about this.


There are some candidate axioms beginning to surface on the internet that appear to have large cardinal characteristics and could potentially settle questions like CH. If they are consistent, they live somewhere in the region above $I_1$, but how they relate to $I_0$ is still not known. (At least not to me.)

I say they "appear" to have large cardinal characteristics because what counts as a large cardinal is not really formalized. Maybe it should be, if a question about Gödel's Program is to really get a solid answer. Then again, maybe things are fine with our current understanding of the notion of large cardinal.

I mention them because they are a little uncharacteristic of large cardinals in that they don't seem to line up with the Levy-Solovay phenomenon in quite the right way. In particular, these axioms are affected (read: "killed") by small forcings, such as those that can alter the truth value of CH in the model.

This might not seem that interesting, given that certain kinds of large cardinals are already known, by work of Hamkins, to be sensitive to (read: "killed by") small forcing, and so they also don't seem to fall into the Levy-Solovay scheme. These large cardinals, say supercompact cardinals, are "prepared" by forcings designed to guarantee that they remain supercompact in any further forcing extension by forcing from a suitably large class (e.g., ${<}\kappa$-directed closed forcing), and thus they are called "indestructible". But Hamkins showed that these same large cardinals are also vulnerable to small forcing as a result of the preparation. (There are obviously more details here, and my apologies to Prof. Hamkins if I'm misreading some of your results in this area.)

However, the large cardinals I allude to seem to be forcing-fragile for altogether different reasons. Some of these reasons include the relatively recently discovered fact that ground models are definable in their set-forcing extensions, and structural (and maybe also semantic) requirements on the models involved in the relevant embeddings.