How many "super imaginary" numbers are there?

Your $h$-based number system is called the split-complex numbers, and what you called $h$ is usually called $j$. A related system introduces an $\epsilon$ satisfying $\epsilon^2=0$, and this gives the dual numbers. A linear change of basis shows that these two systems and the complex numbers are, up to isomorphism, the only ways to extend $\mathbb{R}$ to a $2$-dimensional commutative associative number system satisfying certain natural properties. However:

  • The Cayley-Dickson construction allows you to go from the real numbers to the complex numbers and thereafter double the dimension as often as you like by adjoining new square roots of $-1$, taking you to quaternions, octonions, sedenions etc. (a small sketch of this doubling follows the list);
  • Variants exist in which some new numbers square to $0$ or $1$ instead, e.g. you can have split quaternions and other confusingly named number systems;
  • If you really like, you can take any degree-$d$ polynomial $p\in\mathbb{R}[X]$ with $d\ge 2$ and create a number system whose elements are the polynomials of degree $<d$ in a formal non-real root of $p$, i.e. the quotient $\mathbb{R}[X]/(p)$; e.g. $\mathbb{C}$ arises from $p=X^2+1$.
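To make the Cayley-Dickson doubling from the first bullet concrete, here is a minimal Python sketch. It assumes one common sign convention, $(a,b)(c,d) = (ac - \bar{d}b,\ da + b\bar{c})$ with conjugate $\overline{(a,b)} = (\bar{a}, -b)$; conventions vary between texts, and the function names are purely illustrative.

```python
# Minimal sketch of Cayley-Dickson doubling. An element of the doubled
# algebra is a pair (a, b) over the previous algebra; plain numbers are reals.
# Assumed convention: (a, b)(c, d) = (a*c - conj(d)*b, d*a + b*conj(c)).

def conj(x):
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x  # conjugation on the reals is the identity

def neg(x):
    if isinstance(x, tuple):
        return (neg(x[0]), neg(x[1]))
    return -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

i = (0, 1)                  # complex i, as a pair of reals
print(mul(i, i))            # (-1, 0): i^2 = -1

j = ((0, 0), (1, 0))        # a quaternion unit, as a pair of complex numbers
print(mul(j, j))            # ((-1, 0), (0, 0)): j^2 = -1
```

Each doubling reuses the same three operations, which is the whole point of the construction: pairs of reals behave as complex numbers, pairs of those as quaternions, pairs of those as octonions, and so on.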

There are stockpiles of algebraic constructions out there: direct sums, direct products, quotients, substructures, free algebras, polynomial rings, localisations, algebraic closures, completions, to name a few. This gives us many different ways to create new algebraic structures, i.e. to produce new and beautiful objects you can calculate with. Some of those can be called "numbers" if you wish, but that is just a question of how to label them: the real work is to investigate the constructed structures, see what they are useful for, and apply them to solving various problems.

Your construction is a commutative subring of $M_2(\mathbb{R})$. Another construction gives you the same structure: the quotient $\mathbb{R}[x]/(x^2-1)$, i.e. residues of polynomials in one variable under division by $x^2-1$. What you've got there is a ring with zero divisors: for example, $0=h^2-1=(h-1)(h+1)$ but $h-1\ne 0$ and $h+1\ne 0$. Zero divisors make it harder to solve equations with those numbers, so this structure, while still interesting, is harder to work with (and produces fewer results) than e.g. the complex numbers.
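As a quick sanity check, here is a NumPy sketch using one standard matrix representation of $h$ (the specific matrix is an assumption on my part, since your matrices aren't reproduced here):

```python
# Split-complex h as a real 2x2 matrix: h^2 = 1, yet (h-1)(h+1) = 0
# even though neither factor is the zero matrix -- the zero divisors above.
import numpy as np

I = np.eye(2)
h = np.array([[0., 1.],
              [1., 0.]])      # one standard matrix representation of h

print(h @ h)                  # the identity: h^2 = 1
print((h - I) @ (h + I))      # the zero matrix, though h - 1 and h + 1 aren't
```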

My big point was: with the machinery mathematics has these days, it is not that hard to invent new numbers, but it is as hard as ever to invent new *useful* numbers. A whole other challenge is to invent new constructions, ones that produce new structures in ways never seen before.


Geometric Algebra (GA) allows for analogues of complex numbers in any number of dimensions. It subsumes scalars, vectors, bivectors (which play the role of normals in 3D), quaternions, and more. The whole basis of it is that you perform algebra in a coordinate-free way, and yet constructs such as imaginary numbers and quaternions just show up as special cases. Being coordinate-free also makes it easy to work with high-dimensional cases. GA is a mathematical language designed to align with geometric intuition. The key to all of it is joining the dot product and the wedge (outer) product, which, unlike the cross product, generalizes to all dimensions. For two vectors $u$ and $v$, the geometric product is

$$ u v = (u \cdot v) + (u \wedge v) $$
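Since the dot product is symmetric and the wedge product anti-symmetric, each part can be recovered from the geometric product itself:

$$ u \cdot v = \tfrac{1}{2}(uv + vu), \qquad u \wedge v = \tfrac{1}{2}(uv - vu) $$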

The geometric product has a commutative part and an anti-commutative part, so it does not commute in general, though scalars commute with everything. If $e_1$ is perpendicular to $e_2$, and $e_3$ is perpendicular to them both, then they form a basis for doing 3D geometry, akin to the $x$, $y$, and $z$ axes. Distinct basis vectors anti-commute:

$$ e_1 e_2 = -e_2 e_1 $$

and each basis vector squares to $1$: $$ e_1 e_1 = 1 $$

These rules cause determinants, for instance, to fall right out of the definition. Multiply two 2D vectors with scalar coefficients:

$$ (a_1 e_1 + a_2 e_2) (b_1 e_1 + b_2 e_2) $$

Distribute as usual, but don't commute anything yet: $$ a_1 e_1 b_1 e_1 + a_1 e_1 b_2 e_2 + a_2 e_2 b_1 e_1 + a_2 e_2 b_2 e_2 $$

Collect the scalars together: $$ a_1 b_1 e_1 e_1 + a_1 b_2 e_1 e_2 + a_2 b_1 e_2 e_1 + a_2 b_2 e_2 e_2 $$

Anti-commute and simplify using the rules above: $$ a_1 b_1 + a_1 b_2 e_1 e_2 - a_2 b_1 e_1 e_2 + a_2 b_2 $$ $$ (a_1 b_1 + a_2 b_2) + (a_1 b_2 - a_2 b_1) e_1 e_2 $$ Note that we multiplied a pair of 1D objects (vectors) and got back a 0D object (a scalar, the dot product) plus a 2D object (a bivector, whose coefficient $a_1 b_2 - a_2 b_1$ is exactly the $2\times 2$ determinant). A bivector represents an oriented plane of rotation. In 3D, the bivector is the dual of the cross-product vector, but with GA you get the functionality of a cross product in all dimensions, not just 3D.
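The 2D algebra is small enough to code up directly as a sketch: represent a multivector by its four coefficients on the basis $\{1, e_1, e_2, e_1 e_2\}$ and bake the rules $e_i e_i = 1$ and $e_1 e_2 = -e_2 e_1$ into the product (the representation and names here are illustrative, not from any particular GA library).

```python
# Sketch of the geometric product in 2D GA: a multivector is a 4-tuple
# (s, a1, a2, b) meaning s + a1*e1 + a2*e2 + b*e1e2, with e1e1 = e2e2 = 1
# and e1e2 = -e2e1.

def gp(x, y):
    s, a1, a2, b = x
    t, c1, c2, d = y
    return (s*t + a1*c1 + a2*c2 - b*d,    # scalar part (uses e1e2e1e2 = -1)
            s*c1 + a1*t - a2*d + b*c2,    # e1 part
            s*c2 + a2*t + a1*d - b*c1,    # e2 part
            s*d  + b*t  + a1*c2 - a2*c1)  # e1e2 (bivector) part

u = (0, 3, 4, 0)             # the vector 3*e1 + 4*e2
v = (0, 5, 6, 0)             # the vector 5*e1 + 6*e2
print(gp(u, v))              # (39, 0, 0, -2): dot product 39, determinant -2

e12 = (0, 0, 0, 1)
print(gp(e12, e12))          # (-1, 0, 0, 0): the bivector squares to -1
```

The first print reproduces the hand computation above, and the last one anticipates the next remark.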

In 2D, $e_1 e_2$ squares to $-1$ and so functions as $I$, the imaginary unit of the plane.
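To see why, anti-commute the middle factors and use $e_i e_i = 1$:

$$ (e_1 e_2)(e_1 e_2) = -\,e_1 e_1 e_2 e_2 = -1 $$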