Connectedness of the linear algebraic group SO_n

All connected components of an algebraic group have the same dimension. Thus if an algebraic group contains a connected subvariety whose complement has strictly smaller dimension, the group is connected. (A connected subvariety lies in a single connected component; any other component would then lie in the complement, and so would have smaller dimension.)

The open subvariety of $SO_n$ where $I_n+A$ is invertible is isomorphic to an open subvariety of $\mathbb A^{\frac{n(n-1)}{2}}$, and thus is irreducible (in particular connected) of dimension $\frac{n(n-1)}{2}$.
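Concretely, the identification is the Cayley transform discussed further below (in characteristic not $2$): the mutually inverse maps
$$A=(I_n-B)(I_n+B)^{-1},\qquad B=(I_n-A)(I_n+A)^{-1}$$
match the skew-symmetric matrices $B$ with $I_n+B$ invertible (an open subset of the affine space $\mathbb A^{\frac{n(n-1)}{2}}$ of skew-symmetric matrices) with the $A \in SO_n$ for which $I_n+A$ is invertible. Indeed $A^{\mathsf T}A=(I_n-B)^{-1}(I_n+B)(I_n-B)(I_n+B)^{-1}=I_n$ since $I_n+B$ and $I_n-B$ commute, and $\det A=\det(I_n-B)/\det(I_n+B)=1$ because $(I_n-B)^{\mathsf T}=I_n+B$.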

The complement is where $A$ has $-1$ as an eigenvalue. Since every eigenvalue of an orthogonal matrix occurs together with its inverse, and each pair $\lambda,\lambda^{-1}$ with $\lambda\ne\lambda^{-1}$ contributes $1$ to the determinant, the determinant is the product of the eigenvalues that are their own inverses, namely $1$ and $-1$. Thus, if the determinant is $1$, the number of $-1$ eigenvalues is even, so it is at least $2$. We can split such a matrix into $-1$ acting on a $2$-dimensional subspace of the $-1$-eigenspace and a matrix in $SO_{n-2}$ acting on its orthogonal complement. These matrices are therefore parametrized by an $SO_{n-2}$-bundle over $G_{2}^{n-2}$, whose dimension is $2(n-2)+\frac{(n-2)(n-3)}{2}=\frac{n(n-1)}{2}-1$. If the $-1$-eigenspace is more than two-dimensional there is more than one way to express a matrix in this way, but that can only decrease the dimension.
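To spell out the dimension count (reading $G_{2}^{n-2}$ as the parameter space of $2$-planes in $n$-space, of dimension $2(n-2)$):
$$2(n-2)+\frac{(n-2)(n-3)}{2}=\frac{(n-2)\bigl(4+(n-3)\bigr)}{2}=\frac{(n-2)(n+1)}{2}=\frac{n^2-n-2}{2}=\frac{n(n-1)}{2}-1.$$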


[EDIT: I've tightened my wording and added a couple of references which I went back to out of curiosity.]

Will's answer has the elements needed for a concrete reply to the question, but the question itself has caused some confusion about the setting and terminology which are worth clarifying. First of all, the underlying field should be of characteristic different from 2, since it gets more subtle to talk about quadratic forms and orthogonal groups in characteristic 2. (This is done however in work of Chevalley and Borel in the algebraic groups framework, where the groups are included in the classification.)

Originally the study of orthogonal groups as Lie groups was carried out by Weyl, Chevalley, and many others. Here the (polynomial) condition on $n \times n$ matrices is just that the transpose of a matrix must equal its inverse. The orthogonal matrices then form a compact real Lie group $\mathrm{O}(n)$ or a noncompact complex Lie group $\mathrm{O}(n, \mathbb{C})$ of dimension $n(n-1)/2$. In the euclidean topology, the latter group is homeomorphic to the former group times a vector space. So connectedness questions can be settled in the compact case.

Since eigenvalues of an orthogonal matrix occur along with their inverses, $\det=\pm 1$, and the matrices of determinant $1$ form a closed normal subgroup $\mathrm{SO}(n)$ or $\mathrm{SO}(n, \mathbb{C})$, giving in Lie theory the rank $\ell$ series: $B_\ell$ with $n=2\ell+1$ odd, $D_\ell$ with $n=2\ell$ even. It's worth following the case $n=5$ in Will's calculations. To show that the compact group is connected in the topological group setting, Chevalley in Theory of Lie Groups uses induction on $n$ and the characterization of the successive quotients as spheres.

Now in the Chevalley-Borel setting of linear algebraic groups (over an algebraically closed field $K$), much of the previous study carries over with modifications. For linear algebraic groups given the Zariski topology, irreducibility of the underlying variety fortunately coincides with connectedness in that coarse topology; the term "connected" is then preferred. The irreducible (= connected) components of an algebraic group $G$ are disjoint and equidimensional as well as finite in number (unlike some Lie groups): these are just the cosets of the identity component $G^\circ$. We denote the points of the group over $K$ as $\mathrm{SO}_n(K)$, but the scheme language probably adds nothing useful to the study of connectedness here.

The most standard elementary way to show that a linear algebraic group is connected is to show that it is generated by suitable irreducible subsets such as closed connected subgroups. For the classical matrix groups, this is usually done by showing that the group is generated by transvections, hence by connected 1-dimensional unipotent groups. With some care this approach even handles special orthogonal groups in characteristic 2.

On the other hand, the question here raises the possibility of appealing (in characteristic not 2) to a Cayley transform. Here one is able to map isomorphically a nonempty open subset of an affine space (dense in the Zariski topology) onto a nonempty open subset of the matrix group in a concrete way. Then it has to be seen, as Will shows, that none of the hypothetical extra irreducible/connected components of $\mathrm{SO}_n(K)$ can lie in the excluded hypersurface given by the vanishing of a determinant. Dimension counting seems necessary here.

The only source I can quote for this slightly esoteric approach is a terse exercise 2.2.2(2) in Springer's book Linear Algebraic Groups, where much is left to the reader's ingenuity. (Are there earlier sources?) Springer himself was attracted to this approach, I think, because he used the Cayley transform for classical groups to realize an isomorphism between unipotent and nilpotent varieties in the group and its Lie algebra.

Earlier arguments appear in at least two places. [Note in each case that for the standard structure theory (over an arbitrary field) involving an isotropic split torus in diagonal form, orthogonal groups are written as matrices using an orthogonal direct sum of hyperbolic planes; over $K$ this translates to the conventional format above.]

1) Chevalley's 1956-58 seminar Classification des groupes algébriques semi-simples (typeset text, Springer 2005, edited by Cartier). In Exposé 22, Chevalley gives an argument for connectedness of some of the linear groups roughly analogous to the inductive argument for compact Lie groups.

2) The second edition of Borel's original notes Linear Algebraic Groups (Springer GTM 126, 1991). In the added Section 23 he discusses examples involving groups of rational points of various classical groups, observing in particular that over $K$ the relevant groups are Zariski-connected. (Characteristic 2 requires as usual extra discussion, as does type $D_\ell$.) Here the argument relies on the standard structure theory, showing in effect that a hypothetical coset representative for $G/G^\circ$ must in fact represent an element of the Weyl group and thus lie in $G^\circ$.


This is a more elementary solution, which would make sense in the context of Springer's book. We consider $SO(V,\langle,\rangle)$, or for short $SO(V)$, where $V$ is $n$-dimensional and $\langle,\rangle$ a nondegenerate symmetric bilinear form on $V$. (Recall that any two such are equivalent.)

We'll show by induction on $n$ that $SO(V)$ is connected. Springer's exercise shows that any $g \in SO(V)$ that doesn't have eigenvalue $-1$ is in the identity component of $SO(V)$.

When $n = 1$, $SO(V) = 1$ and when $n = 2$ we see directly that $SO(V)$ is connected. (In fact, $SO_2 = \{\left(\begin{matrix} a & b\\ -b & a\end{matrix}\right) : a^2 + b^2 = 1\}$ and $SO_2 \cong \mathbb G_m$ as algebraic groups via $\left(\begin{matrix} a & b\\ -b & a\end{matrix}\right) \mapsto a+bi$ with $i^2 = -1$.)
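For completeness, one can verify directly that this map is an isomorphism of algebraic groups: matrix multiplication corresponds to multiplication in $K$,
$$\left(\begin{matrix} a & b\\ -b & a\end{matrix}\right)\left(\begin{matrix} c & d\\ -d & c\end{matrix}\right) = \left(\begin{matrix} ac-bd & ad+bc\\ -(ad+bc) & ac-bd\end{matrix}\right), \qquad (a+bi)(c+di)=(ac-bd)+(ad+bc)i,$$
and the condition $a^2+b^2=(a+bi)(a-bi)=1$ says exactly that $a+bi$ is a unit with inverse $a-bi$; given $t \in \mathbb G_m$ one recovers $a=\frac{t+t^{-1}}{2}$ and $b=\frac{t-t^{-1}}{2i}$.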

In general, fix $g \in SO(V)$; we'll show that $g$ is contained in the identity component. Note that $V = \oplus V_\lambda$ is a direct sum of generalised eigenspaces and that $V_\lambda \perp V_\mu$ if $\lambda \ne \mu^{-1}$. Thus we get an orthogonal decomposition $V = V_{-1} \perp V_1 \perp {\perp}_{\lambda \ne \lambda^{-1}} (V_\lambda \oplus V_{\lambda^{-1}})$, with one summand for each pair $\{\lambda, \lambda^{-1}\}$. In particular, $V_{-1}$ is a nondegenerate subspace and we have the orthogonal decomposition of $g$-invariant subspaces $V = V_{-1} \perp V_{-1}^\perp$. Also, $\dim V_\lambda = \dim V_{\lambda^{-1}}$ if $\lambda \ne \lambda^{-1}$, so by considering the determinant we see that $\dim V_{-1}$ is even (as remarked by Will), say equal to $2d$.
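Here is a sketch of the orthogonality claim: for honest eigenvectors $v \in V_\lambda$ and $w \in V_\mu$ we have $\langle v, w\rangle = \langle gv, gw\rangle = \lambda\mu\langle v, w\rangle$, so $\langle v, w\rangle = 0$ whenever $\lambda\mu \ne 1$; for generalised eigenvectors one argues by induction on the degrees of nilpotency, writing $gv = \lambda v + (g-\lambda)v$ and $gw = \mu w + (g-\mu)w$ and expanding $\langle v, w\rangle = \langle gv, gw\rangle$.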

It follows that $g$ is contained in the subgroup $SO(V_{-1}) \times SO(V_{-1}^\perp)$ of $SO(V)$. By induction we are thus reduced to the two extreme cases: $V = V_{-1}$ or $V = V_{-1}^\perp$. The second case is treated by the Springer exercise, so we may assume that all eigenvalues of $g$ are $-1$ and $n = 2d$.

Now $-g$ is unipotent, hence by the Springer exercise we know it is in the identity component of $SO(V)$. Multiplying by $-1 \in SO(V)$ we deduce that $g$ is in the same component as $-1$. Finally, after choosing an appropriate basis $-1$ lies in the subgroup $SO_2 \times \dots \times SO_2$ of $SO_{2d}$, so we are done by the $n = 2$ case.
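Explicitly, choosing a basis that splits $V$ into an orthogonal sum of $d$ nondegenerate planes, each carrying the standard form, we have $-1 = \mathrm{diag}(-I_2, \dots, -I_2)$ with each block $-I_2 \in SO_2$; since each factor $SO_2 \cong \mathbb G_m$ is connected, the subgroup $SO_2 \times \dots \times SO_2$ is connected and contains both $-1$ and the identity.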