Why doesn't the definition of dependence require that one can express each vector in terms of the others?

Your intuition for linear (in)dependence is very close to the mark. Based on it, the definition you're looking for is:

$\{v_1, \ldots, v_k\}$ is linearly dependent if there exist an index $i$ and scalars $c_j$ (for $j \ne i$) such that $v_i = \sum_{j \ne i} c_j v_j.$

You can prove that this is equivalent to the standard definition.
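Here is a sketch of that equivalence. If $c_1 v_1 + \cdots + c_k v_k = 0$ with some $c_i \ne 0$, then dividing by $c_i$ and rearranging gives $$v_i = \sum_{j \ne i} \left(-\frac{c_j}{c_i}\right) v_j.$$ Conversely, if $v_i = \sum_{j \ne i} c_j v_j$, then $$(-1)\,v_i + \sum_{j \ne i} c_j v_j = 0$$ is a null combination whose coefficients are not all zero, since the coefficient of $v_i$ is $-1$.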

Notice how this differs from your proposed definition:

(1) It says there exists *some* $v_i$ that can be so expressed, not that *every* $v_i$ can.

(2) There is no restriction that the $c_j$ not all be zero.

(1) is important because all it takes is a single redundancy to get linear dependence. Not every vector has to be expressible in terms of the others. To see why, suppose a set $\{v_1, \ldots, v_k\}$ is already dependent, and then I add a $v_{k+1}$ which cannot be expressed as a linear combination of $v_1, \ldots, v_k$. Adding a vector to a dependent set shouldn't turn it into an independent set.

As for (2), the standard definition needs to require that the $c$'s are not all $0$, because you don't want the trivial combination $\sum 0 \cdot v_i = 0$ to imply dependence. But with the definition above, you've already singled out one vector to have coefficient $1$ (which is not $0$), so you don't need any condition on the $c$'s anymore.


Let me address your last question (and hopefully it will help with clarifying some of your misconceptions):

> Are they essentially the same definition except for this weird edge case?

No, they differ in more than that edge case. Consider, e.g., the following set of three vectors in $\mathbb{R}^2$:

$$\mathbf{v}_1=\begin{bmatrix}1\\0\end{bmatrix}, \quad \mathbf{v}_2=\begin{bmatrix}2\\0\end{bmatrix}, \quad \mathbf{v}_3=\begin{bmatrix}0\\1\end{bmatrix}.$$

It's easy to see that this set is linearly dependent according to the standard definition, because $$2\mathbf{v}_1+(-1)\mathbf{v}_2+0\mathbf{v}_3=\mathbf{0}.$$ However, it doesn't satisfy your definition. Although the vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ can be expressed as (nontrivial) linear combinations of the others, viz. $\mathbf{v}_1=0.5\mathbf{v}_2+0\mathbf{v}_3$ and $\mathbf{v}_2=2\mathbf{v}_1+0\mathbf{v}_3$, we can't do the same with the last vector, because the equation $$\mathbf{v}_3=c_1\mathbf{v}_1+c_2\mathbf{v}_2$$ has no solutions: any such combination has second coordinate $0$, while the second coordinate of $\mathbf{v}_3$ is $1$.
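If you like, you can verify this numerically. A small NumPy sketch (the variable names are my own) uses the fact that the rank of a matrix equals the dimension of the span of its columns:

```python
import numpy as np

# The three vectors from the example above.
v1 = np.array([1.0, 0.0])
v2 = np.array([2.0, 0.0])
v3 = np.array([0.0, 1.0])

# Stack the vectors as columns. Rank < number of columns
# means the set is linearly dependent.
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2, but there are 3 vectors: dependent

# Yet v3 is not in the span of v1 and v2: the columns v1, v2 alone
# have rank 1, and appending v3 raises the rank to 2, so
# v3 = c1*v1 + c2*v2 has no solution.
B = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(B))  # 1
```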

Let me try to describe informally what I think is going on here. The standard definition of linear dependence says that there is some dependency *somewhere*, but not necessarily *everywhere*, as you seem to believe.

As @AOrtiz already said, one way to think of dependency is that it means redundancy in the given system of vectors. Look at it this way. Given a set of vectors, we may want to construct its span, i.e. the set of all linear combinations of those vectors. If the original set is linearly dependent, then it's redundant in the sense that you can remove some (but not arbitrary!) vectors and still have the same span. The standard definition of linear dependence helps us detect if that's the case.
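That "remove some, but not arbitrary, vectors" point can be made concrete with another NumPy sketch (again, the names are mine): since the rank of a matrix is the dimension of its column span, removing a vector is safe exactly when the rank doesn't drop.

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([2.0, 0.0])
v3 = np.array([0.0, 1.0])

# Rank of the full set = dimension of its span.
full_rank = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))

# Removing the redundant v2 leaves the span (hence the rank) unchanged...
rank_without_v2 = np.linalg.matrix_rank(np.column_stack([v1, v3]))
print(rank_without_v2 == full_rank)  # True

# ...but removing v3 shrinks the span: v3 was not redundant.
rank_without_v3 = np.linalg.matrix_rank(np.column_stack([v1, v2]))
print(rank_without_v3 == full_rank)  # False
```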


I find that many of my students think the same way. Instead of thinking about null linear combinations, they usually prefer to think of vectors as linear combinations of other vectors. And honestly, I probably do too. The definition of linear independence that is most intuitively geometric to me is that no vector in the list can be expressed as a linear combination of the others. This is equivalent to the other definitions of linear independence.

The negation of this is that *some* vector (not every vector) in the list can be written as a linear combination of the others. That is linear dependence. It has nothing to do with requiring non-zero linear combinations (otherwise, as you pointed out, adding $0$ to the list would preserve linear independence). The zero vector is always a linear combination of the other vectors, adds nothing to the span, and therefore nothing to the dimension.

There are other cases, aside from lists containing $0$, where not every vector in a linearly dependent list can be expressed as a linear combination of the others. For example,

$$((1, 0), (2, 0), (0, 1))$$

Some vectors (namely $(1, 0)$ and $(2, 0)$) can be expressed as linear combinations of the others, but not all of them. There is still dependency in the list.

Hope that helps.