Calculate variance from a stream of sample values

I'm a little late to the party, but it appears that the running-sums method in the other answer is numerically unstable, and that there is a method that allows streaming computation of the variance without sacrificing numerical stability.

Cook describes a method from Knuth (usually attributed to Welford), the punchline of which is to initialize $m_1 = x_1$ and $v_1 = 0$, where $m_k$ is the mean of the first $k$ values and $v_k$ is the running sum of squared deviations from that mean. From there,

$$ \begin{align*} m_k & = m_{k-1} + \frac{x_k - m_{k-1}}{k} \\ v_k & = v_{k-1} + (x_k - m_{k-1})(x_k - m_k) \end{align*} $$
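
To see why the $v_k$ update is right, assume inductively that $v_{k-1} = \sum_{i=1}^{k-1}(x_i - m_{k-1})^2$ and expand the new sum of squared deviations around $m_k$; the cross term vanishes because deviations from a mean sum to zero:

$$ \begin{align*} \sum_{i=1}^{k}(x_i - m_k)^2 & = \sum_{i=1}^{k-1}\bigl[(x_i - m_{k-1}) + (m_{k-1} - m_k)\bigr]^2 + (x_k - m_k)^2 \\ & = v_{k-1} + \frac{k-1}{k^2}(x_k - m_{k-1})^2 + \frac{(k-1)^2}{k^2}(x_k - m_{k-1})^2 \\ & = v_{k-1} + \frac{k-1}{k}(x_k - m_{k-1})^2 \\ & = v_{k-1} + (x_k - m_{k-1})(x_k - m_k) \end{align*} $$

where the last two steps use $m_k - m_{k-1} = \frac{x_k - m_{k-1}}{k}$ and hence $x_k - m_k = \frac{k-1}{k}(x_k - m_{k-1})$. So $v_k$ is exactly the sum of squared deviations from the current mean.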

The mean at this point is simply extracted as $m_k$, and the sample variance is $\sigma^2 = \frac{v_k}{k-1}$. It's easy to verify that the recurrence works for the mean, and the expansion above shows why it works for the variance.
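
Here is a minimal sketch of the recurrence in Python; the `RunningVariance` name and its methods are my own illustration, not from Cook or Knuth:

```python
class RunningVariance:
    """Streaming mean/variance via the Knuth/Welford recurrence."""

    def __init__(self):
        self.k = 0    # number of samples seen so far
        self.m = 0.0  # running mean m_k
        self.v = 0.0  # running sum of squared deviations v_k

    def push(self, x):
        self.k += 1
        if self.k == 1:
            # Initialize: m_1 = x_1, v_1 = 0.
            self.m = x
            self.v = 0.0
        else:
            old_m = self.m
            self.m = old_m + (x - old_m) / self.k
            self.v = self.v + (x - old_m) * (x - self.m)

    def mean(self):
        return self.m if self.k > 0 else 0.0

    def variance(self):
        # Sample variance; needs at least two samples.
        return self.v / (self.k - 1) if self.k > 1 else 0.0
```

Pushing `2.0, 4.0, 6.0` yields a mean of $4.0$ and a sample variance of $4.0$, matching the direct computation.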


You can keep two running counters: one for $\sum_i x_i$ and another for $\sum_i x_i^2$. Since the variance can be written as $$ \sigma^2 = \frac{1}{N} \left[ \sum_i x_i^2 - \frac{\left(\sum_i x_i\right)^2}{N} \right] $$

you can compute the variance of the data seen thus far with just these two counters. Note that $N$ here is not the total length of all your samples, only the number of samples you have observed so far.
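
For concreteness, here is a minimal sketch in Python (the `TwoCounterVariance` name is my own). Note that this is the approach the other answer flags as numerically unstable: the subtraction can suffer catastrophic cancellation in floating point when the variance is small relative to the mean.

```python
class TwoCounterVariance:
    """Streaming variance from running sums of x and x^2."""

    def __init__(self):
        self.n = 0        # samples observed so far (the N above)
        self.sum_x = 0.0  # running sum of x_i
        self.sum_x2 = 0.0 # running sum of x_i^2

    def push(self, x):
        self.n += 1
        self.sum_x += x
        self.sum_x2 += x * x

    def variance(self):
        # Population variance of the samples seen so far.
        if self.n == 0:
            return 0.0
        return (self.sum_x2 - self.sum_x ** 2 / self.n) / self.n
```

On `2.0, 4.0, 6.0` this gives $(56 - 144/3)/3 = 8/3$, the population variance; multiply by $N/(N-1)$ if you want the sample variance, matching the $v_k/(k-1)$ convention above.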