If a coin toss is observed to come up as heads many times, does that affect the probability of the next toss?

If you don't know whether it is a fair coin to start with, then it isn't a dumb question at all. You ask whether the coin should now be biased towards tails to "account for" all of the heads. If the coin is assumed to be fair, then the answer from tilper addresses this well, and the overall answer is "no". Without that assumption of fairness, the overall answer is still "no", and in fact we should now believe the coin is biased towards heads.

One way to think about this is to treat the probability $p$ of the coin landing heads as itself a random variable. If we know absolutely nothing about the coin to start with, we can take the distribution of $p$ to be uniform on $[0,1]$. Then, after flipping the coin some number of times and collecting data, we update that distribution accordingly.

There is a distribution which does exactly this updating, called the Beta distribution: a continuous distribution with probability density function $$f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$$ where $\alpha-1$ is the number of heads we've recorded and $\beta-1$ the number of tails. The number $B(\alpha,\beta)$ is just a constant that normalizes $f$ (it equals $\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$). The uniform prior above is the special case $\alpha=\beta=1$, and each observed head increments $\alpha$ while each tail increments $\beta$.
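Here is a minimal sketch of that update rule in Python (the particular flip record and the use of SciPy are illustrative assumptions, not part of the original answer):

```python
# Beta-Binomial updating: start from the uniform prior Beta(1, 1),
# then add 1 to alpha for each head and 1 to beta for each tail.
from scipy.stats import beta

a, b = 1, 1                 # Beta(1, 1) is the uniform prior on [0, 1]
flips = "HHTHHHTHHH"        # hypothetical record: 8 heads, 2 tails

for outcome in flips:
    if outcome == "H":
        a += 1
    else:
        b += 1

posterior = beta(a, b)
print(f"posterior mean of p:   {posterior.mean():.3f}")   # 9/12 = 0.750
print(f"95% credible interval: {posterior.interval(0.95)}")
```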

The following graphic (from the Wikipedia article) shows how $f$ changes with different choices of $\alpha,\beta$:

[Figure: PDF of the Beta distribution for different choices of $\alpha,\beta$]

As $\alpha \to \infty$ (i.e. you keep getting more heads) while $\beta$ stays constant (below I chose $\beta=5$), the distribution becomes extremely skewed in favor of $p$ being close to $1$.

[Figure: Beta PDF for $\beta=5$ and increasing $\alpha$, concentrating near $p=1$]
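You can check this limiting behavior numerically; a small sketch (again assuming SciPy, with arbitrary values of $\alpha$):

```python
# With beta fixed at 5, increasing alpha pushes the Beta distribution's
# mass toward p = 1.
from scipy.stats import beta

for a in (5, 50, 500, 5000):
    d = beta(a, 5)
    print(f"alpha={a:5d}: mean={d.mean():.4f}, P(p > 0.9)={d.sf(0.9):.4f}")
```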


You have wandered into the realm of Bayesian versus frequentist statistics. The Bayesian philosophy is, to me, the more illuminating one for understanding this question.

Your "basic understanding of probability" can be interpreted as a prior expectation for what the distribution of heads and tails should be. But that prior expectation actually is itself a probabilistic expectation. Let me give some examples of what your prior expectation (called the Bayesian prior) might be:

1) Your prior might be that the frequency of heads is exactly $0.5$ no matter what. In that case, no number of consecutive heads or tails would shake that a priori certainty, and your posterior estimate of the probability of heads is still exactly $0.5$.

2) Your prior might be that the probability of heads is normally distributed around $0.5$ with some small standard deviation -- you are pretty sure the coin is close to fair. Consecutive heads then drag the posterior upward: a very tight prior (standard deviation $0.001$, say) barely moves after 100 consecutive heads, while a looser one (say $0.1$) is already peaked near $1$ by then, and even the tight prior eventually concentrates near $1$ if the heads keep coming. Your Bayesian prior has allowed experimental evidence to modify your expectation (see the numerical sketch after this list).
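A minimal grid-approximation sketch of example 2, assuming a normal-shaped prior with standard deviation $0.1$ truncated to $(0,1)$ (the grid size and prior width are illustrative choices):

```python
# Posterior over p after observing 100 consecutive heads, computed on a grid.
import numpy as np

p = np.linspace(0.0005, 0.9995, 2000)            # grid over possible values of p
log_prior = -((p - 0.5) ** 2) / (2 * 0.1 ** 2)   # normal prior, up to a constant
log_like = 100 * np.log(p)                       # likelihood of 100 heads, 0 tails
log_post = log_prior + log_like
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum() * (p[1] - p[0])     # normalize to a density on (0, 1)

print(f"posterior mean: {np.sum(p * posterior) * (p[1] - p[0]):.3f}")
print(f"posterior mode: {p[np.argmax(posterior)]:.3f}")   # very close to 1
```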

The alternative approach is the frequentist one. It says: "I hypothesized (null hypothesis) that the coin was fair. If that were true, the likelihood of this result of no tails in a hundred flips would be vanishingly small. So I can reject the original hypothesis and conclude that this coin is not fair."
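For concreteness, under that null hypothesis the probability of a hundred heads in a hundred flips is $$\left(\tfrac{1}{2}\right)^{100} = \frac{1}{2^{100}} \approx 7.9 \times 10^{-31},$$ small enough to reject the null at any conventional significance level.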

The weakness of the Bayesian approach is that the result depends on the prior expectation, which is somewhat arbitrary. But when a lot of trials are involved, a remarkably wide spectrum of possible priors leads to very similar a posteriori expectations.
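A quick sketch of that washing-out effect (the flip counts and the two Beta-shaped priors are hypothetical choices of mine):

```python
# Two quite different priors, updated on the same 1000 flips (700 heads),
# end up with nearly identical posterior means.
from scipy.stats import beta

heads, tails = 700, 300
for a0, b0 in ((1, 1), (20, 2)):   # uniform prior vs. strongly heads-biased prior
    post = beta(a0 + heads, b0 + tails)
    print(f"prior Beta({a0},{b0}) -> posterior mean {post.mean():.4f}")
```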

The weakness of the frequentist approach is that in the end you don't have any expectation, just the idea that your null hypothesis is unlikely.


Is the probability of flipping heads/tails still .5 each?

If you already had the assumption that the coin is fair, then yes. Although it's statistically very unlikely to flip heads a "very large number of times" in a row, the probability is not changed by past outcomes, because each coin toss is independent of all the other coin tosses.
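One way to convince yourself of this independence is by simulation; a small sketch (the streak length and sample size are arbitrary choices):

```python
# Among simulated fair-coin sequences, the flip that follows a run of
# five heads is still heads about half the time.
import random

random.seed(0)
follow_ups = []
for _ in range(100_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):                 # first five flips were all heads
        follow_ups.append(flips[5])    # record the sixth flip

print(f"P(heads | 5 heads in a row) ~ {sum(follow_ups) / len(follow_ups):.3f}")
```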

Or has it changed in favor of tails because the probability should tend to .5 heads and .5 tails as you approach an infinite number of trials?

Already answered above, but here's more detail. The law of large numbers (LLN) says that if we flip a fair coin a huge number (thousands, millions, billions, etc.) of times, the proportion of heads will be approximately one half. But the LLN tells you nothing about the probability of the next (or any individual) trial.
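You can watch the LLN at work in a short simulation (the checkpoints below are arbitrary):

```python
# The running proportion of heads drifts toward 0.5 as the number of
# flips grows, even though every individual flip remains 50/50.
import random

random.seed(0)
heads = 0
checkpoints = {10, 100, 10_000, 1_000_000}
for n in range(1, 1_000_001):
    heads += random.random() < 0.5
    if n in checkpoints:
        print(f"after {n:>9,} flips: proportion of heads = {heads / n:.4f}")
```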