What is the difference between average and expected value?

The concept of expected value (or expectation value) may be understood from the following example. Let $X$ represent the outcome of a roll of an unbiased six-sided die. The possible values of $X$ are 1, 2, 3, 4, 5, and 6, each occurring with probability 1/6. The expected value of $X$ is then given by

$X_\text{expected} = 1\cdot(1/6)+2\cdot(1/6)+3\cdot(1/6)+4\cdot(1/6)+5\cdot(1/6)+6\cdot(1/6) = 21/6 = 3.5$

Suppose that in a sequence of ten rolls of the die the outcomes are 5, 2, 6, 2, 2, 1, 2, 3, 6, 1. Then the average (arithmetic mean) of the results is given by

$X_\text{average} = (5+2+6+2+2+1+2+3+6+1)/10 = 30/10 = 3.0$

We say that the average value is 3.0, which differs from the expected value of 3.5 by 0.5. If we roll the die $N$ times, where $N$ is very large, then the average will converge to the expected value, i.e., $X_\text{average} \to X_\text{expected}$ as $N \to \infty$. This is because, when $N$ is very large, each possible value of $X$ (i.e., 1 to 6) occurs with a relative frequency close to its probability of 1/6, which pushes the average toward the expected value; this is the law of large numbers.
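A quick way to see this convergence is to simulate it. Below is a minimal Python sketch (the seed and the sample sizes are illustrative choices, not part of the example above) that rolls a fair die $N$ times for increasing $N$ and compares the average to the expected value of 3.5:

```python
import random

EXPECTED = 3.5  # E[X] for a fair six-sided die: (1+2+3+4+5+6)/6

random.seed(0)  # fixed seed so the run is reproducible (illustrative choice)

for n in (10, 100, 10_000, 1_000_000):
    rolls = [random.randint(1, 6) for _ in range(n)]  # n fair die rolls
    average = sum(rolls) / n
    print(f"N = {n:>9}: average = {average:.4f}, "
          f"gap from expected = {abs(average - EXPECTED):.4f}")
```

As $N$ grows, the gap between the average and 3.5 typically shrinks toward zero, which is the law of large numbers at work (the shrinkage need not be monotone from one $N$ to the next).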


The expected value, or mean $\mu_X =E_X[X]$, is a parameter associated with the distribution of a random variable $X$.

The average $\overline X_n = \frac{1}{n}\sum_{i=1}^n X_i$ is a computation performed on a sample of size $n$ from that distribution. It can also be regarded as an unbiased estimator of the mean: if each $X_i\sim X$, then $E_X[\overline X_n] = \mu_X$.
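This unbiasedness follows in one line from linearity of expectation (note that only identical distribution of the $X_i$ is needed here, not independence):

$$E_X[\overline X_n] = E_X\!\left[\frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{1}{n}\sum_{i=1}^n E_X[X_i] = \frac{1}{n}\cdot n\,\mu_X = \mu_X.$$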


From my experience so far in statistics, I have more often heard "average" when discussing samples and in nonparametric statistics. I first saw the definition of the expected value in a frequentist parametric context, where we understood the expected value as the average of the outcomes when repeating the procedure many times (the average is an unbiased estimator of the mean), which is basically the average you are discussing.

Hence, often, when the average is discussed, we mean the sample average (funny word play there). We compute the sample average on a given sample, that is, a set of outcomes of a distribution. This average has various properties as an estimator of the "actual average" of the underlying distribution; for instance, you may consider how the sample average behaves in the limit as the sample size goes to infinity. The expected value, by contrast, is associated with the distribution itself and its parameters, and that distribution can go on to generate many samples, each with a different sample average.

Suppose $X_1,X_2,\dots,X_n$ is a sample of i.i.d. random variables. Observe that, in general, $$\frac{\sum_{k=1}^n X_k}{n}\neq E(X_i):$$ the left-hand side is itself a random variable that changes from sample to sample, while the right-hand side is a fixed number.
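To make this concrete, here is a small Python sketch (the seed, sample size, and die distribution are illustrative choices) that draws several i.i.d. samples from the same distribution; each sample has its own average, and none need equal $E(X_i) = 3.5$:

```python
import random

random.seed(1)   # reproducible illustration

n = 10           # sample size (illustrative)
expected = 3.5   # E[X_i] for a fair six-sided die

for trial in range(5):
    sample = [random.randint(1, 6) for _ in range(n)]  # one i.i.d. sample
    print(f"sample {trial + 1}: average = {sum(sample) / n:.2f} "
          f"(expected value is always {expected})")
```

Each pass through the loop typically prints a different average; equality with 3.5 can occur, but only by chance.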

The terms are used interchangeably, but one must be careful with what exactly is being discussed.