Prove that $n+1$ vectors in $\mathbb{R}^n$ cannot be linearly independent

Let $(e_1,\ldots,e_n)$ denote the standard basis of $\mathbb R^n$ and suppose that $(f_1,\ldots,f_{n+1})$ is a set of linearly independent vectors. We can write $$f_1=\sum_{k=1}^n a_k e_k$$ and since $f_1\ne 0$ there is some $a_k\ne 0$; WLOG suppose $a_1\neq0$, so $$e_1=\frac{1}{a_1}\left(f_1-\sum_{k=2}^n a_k e_k\right),$$ hence we see that $(f_1,e_2,\ldots,e_n)$ spans $\mathbb R^n$.
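For a concrete illustration (the specific vectors here are my own choice, not part of the proof): in $\mathbb R^2$ take $f_1=(1,1)$, $f_2=(1,-1)$, $f_3=(2,0)$. Then $f_1=e_1+e_2$, so $a_1=1\neq0$ and $$e_1=f_1-e_2,$$ hence $(f_1,e_2)$ spans $\mathbb R^2$.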

Now repeat the same method $n$ times (induction), and we find that $(f_1,\ldots,f_{n})$ spans $\mathbb R^n$, so the vector $f_{n+1}$ is a linear combination of the other vectors $f_i$, which is a contradiction.

If the induction step was not obvious, consider the following:

Since we have established that $(f_1,e_2,\ldots,e_n)$ spans $\mathbb R^n$, write

$$f_2=b_1f_1+\sum_{k=2}^n b_k e_k$$

Since $f_2\neq0$, there is at least one $b_l\neq0$. Also note that not all of $b_2,b_3,\ldots,b_n$ can be zero, because otherwise we would have $f_2=b_1f_1$, contradicting the assumed linear independence of $(f_1,\ldots,f_{n+1})$. So we can take some $b_l\neq0$ with $l\geq 2$; WLOG suppose $b_3\neq0$. Now,

$$e_3=\frac{1}{b_3}\left(f_2-b_1f_1-b_2e_2-\sum_{k=4}^n b_k e_k\right)$$

From this we see that $(f_1,e_2,f_2,e_4,\ldots,e_n)$ spans $\mathbb R^n$, where we have replaced $e_3$ with $f_2$. The assumed linear independence of the $f_i$'s means that we can repeat this process to replace all the $e_i$'s.
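Continuing the illustrative $\mathbb R^2$ example from above: $f_2=(1,-1)=e_1-e_2=f_1-2e_2$, so $b_1=1$ and $b_2=-2\neq0$, giving $$e_2=\frac{1}{2}\left(f_1-f_2\right),$$ so $(f_1,f_2)$ spans $\mathbb R^2$, and indeed $f_3=(2,0)=f_1+f_2$ is a linear combination of the others, the promised contradiction.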


You can prove the following directly.

Let $V$ have dimension $n$, and let $\{w_1,\ldots,w_r\}$ be a set of linearly independent vectors in $V$. Then $r\leq n$.

Proof. Let $\{v_1,\ldots,v_n\}$ be a basis for $V$. For each $i=1,2,\ldots,r$ write

$$w_i=\sum_{j=1}^n a_{ij}v_j$$

Consider the homogeneous system of $n$ equations in $r$ unknowns $$\sum_{i=1}^r a_{ij}x_i=0,\qquad 1\leq j\leq n,$$

and suppose $(\omega_1,\ldots,\omega_r)$ is a solution. This means that $$\begin{align}\sum_{i=1}^r\omega_i w_i&=\sum_{i=1}^r\omega_i\sum_{j=1}^na_{ij}v_j\\&=\sum_{j=1}^n\left(\sum_{i=1}^r a_{ij}\omega_i\right)v_j=0,\end{align}$$ where the last equality holds because each inner sum $\sum_{i=1}^r a_{ij}\omega_i$ vanishes by assumption;

thus by the linear independence of the $w_i$ we must have $\omega_i=0$ for $i=1,\ldots, r$. It follows that the homogeneous system admits only the trivial solution, so we must have $r\leq n$. Thus any set of $k>n$ vectors must be linearly dependent. This rests on the fact that any homogeneous system with more unknowns than equations must admit a nontrivial solution, which is something (I guess) most of us students are aware of.
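To make that last fact concrete (a toy system of my own, not taken from the proof above): with $n=2$ equations in $r=3$ unknowns, say $$x_1+x_2+x_3=0,\qquad x_1-x_2=0,$$ the assignment $(x_1,x_2,x_3)=(1,1,-2)$ is a nontrivial solution; with more unknowns than equations there is always a free variable that can be set to a nonzero value.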


Take your idea: you put the $n+1$ vectors as rows of a matrix, so you get an $(n+1) \times n$ matrix.

Now, do Gaussian elimination. Since the matrix has $n+1$ rows but at most $n$ pivots, you get at least one zero row. Because row operations are invertible, that zero row is a nontrivial linear combination of the rows of your original matrix.

Thus, linear dependence.

It may help to first say some words about the rowspace of this matrix being the same as the span of your $n+1$ vectors, or somesuch.
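If you want to watch this happen computationally, here is a minimal sketch using SymPy; the $3\times 2$ example matrix and all names in it are illustrative choices of mine, not part of the answer.

```python
import sympy as sp

# Three vectors of R^2 as the rows of a 3 x 2 matrix.
M = sp.Matrix([[1, 1],
               [1, -1],
               [2, 0]])

# Row reduce: with only 2 columns there are at most 2 pivots,
# so at least one of the 3 rows must become zero.
rref, pivots = M.rref()
print(rref)    # Matrix([[1, 0], [0, 1], [0, 0]]) -- a zero row appears
print(pivots)  # (0, 1)

# A nontrivial combination c with c_0*row_0 + c_1*row_1 + c_2*row_2 = 0
# is a vector in the null space of M^T (the left null space of M).
c = M.T.nullspace()[0]
print(c.T)     # Matrix([[-1, -1, 1]]): row_2 = row_0 + row_1
print(c.T * M) # Matrix([[0, 0]]) -- the rows are linearly dependent
```

The left null space computation is just the "track the row operations" step made explicit: any vector $c$ with $M^Tc=0$ exhibits the nontrivial combination $\sum_i c_i\,(\text{row}_i)=0$.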