Why do different operations on significant digits give different values?

The first thing to realize is that sig fig rules are just a rule of thumb. They're a quick and dirty way of doing error propagation; if you want the errors propagated correctly, do real error propagation.

In your example, however, the rules can be interpreted in a way that might make sense, depending on the context. In a computation like $2.50+2.50+2.50+2.50+2.50$, the context could be that each 2.50 is an independent measurement, each with its own positive or negative error. Suppose each one has error bars of about $\pm 0.01$. Adding 5 of these errors is likely to give a total error well below $\pm 0.05$, since there will be some cancellation between positive and negative errors. (If they're independent and identically distributed, the combined error bars are $0.01\times\sqrt{5}$, not $0.01\times 5$.)
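A quick simulation makes the $\sqrt{5}$ scaling concrete. This is just a sketch: it treats the $\pm 0.01$ error bar as one standard deviation of a Gaussian, which is an assumption for illustration, not something fixed by the problem.

```python
import random
import math

random.seed(0)

sigma = 0.01      # treat the +/-0.01 error bar as one standard deviation
n = 5             # five independent measurements
trials = 100_000

# Draw n independent errors per trial and sum them.
sums = [sum(random.gauss(0, sigma) for _ in range(n)) for _ in range(trials)]

# Empirical spread (sample standard deviation) of the summed error.
mean = sum(sums) / trials
spread = math.sqrt(sum((s - mean) ** 2 for s in sums) / (trials - 1))
print(spread)                  # close to 0.01 * sqrt(5), about 0.022
print(sigma * math.sqrt(n))    # 0.0223606...
```

The empirical spread lands near $0.022$, well under the $0.05$ you'd get if the five errors always pushed in the same direction.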

But in the calculation of $2.50\times5$, suppose the context is that the 5 is an exact value, but the $2.50$ has error bars of $\pm 0.01$. You only get one chance to measure it. If it's off by $0.01$, then the result is off by $0.05$, which is enough to make the digit in the hundredths place pretty meaningless.
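The same kind of sketch shows the difference: multiplying one error draw by an exact 5 gives no chance of cancellation, so the error scales linearly. (Again, $\pm 0.01$ is taken as one Gaussian standard deviation purely for illustration.)

```python
import random
import math

random.seed(0)

sigma = 0.01      # treat the +/-0.01 error bar as one standard deviation
trials = 100_000

# One error draw per trial, scaled by the exact count 5: no cancellation.
scaled = [5 * random.gauss(0, sigma) for _ in range(trials)]

mean = sum(scaled) / trials
spread = math.sqrt(sum((x - mean) ** 2 for x in scaled) / (trials - 1))
print(spread)     # close to 5 * 0.01 = 0.05
```

Compare that $0.05$ with the roughly $0.022$ you get from summing five independent measurements: same nominal arithmetic, very different error bars.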

First, if I'm measuring something that has an anticipated error of $\pm 0.01$ and I get $2.50$ five times in a row, I'm going to worry that my measuring equipment is stuck on $2.50$, or that the process I'm measuring has $\pm 0.01$ variation on a timescale much slower than my measurements, or that my experiment is otherwise broken, etc.

But let's say I'm measuring something with variation of $\pm 0.01$. Further, let's say that the variation is Gaussian with $\pm 0.01$ as the $3\sigma$ point (I don't know what people use in practice -- $3\sigma$ seems about right), that each measurement's error is truly independent, and that I measure five times.

If I sum those measurements, then the result is going to be a Gaussian random variable with $\sigma_{\mathrm{sum}} = \sqrt{5}\,\sigma$. That means my $3\sigma$ point is $\pm 0.023$ -- not $\pm 0.01$. The more numbers I sum together, the worse it gets. And that's in the best-case scenario where the only contribution to that original $\pm 0.01$ is from truly random error. If I'm doing a measurement that has perfect repeatability and the entire $\pm 0.01$ worth of error is systematic, then after summation the error range would be $\pm 0.05$.
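The two endpoints of that argument are easy to check with a few lines of arithmetic (treating the $\pm 0.01$ error bar as the $3\sigma$ point, as above):

```python
import math

sigma = 0.01 / 3          # +/-0.01 taken as the 3-sigma point
n = 5

# Independent, identically distributed errors add in quadrature:
sigma_sum = math.sqrt(n) * sigma
print(3 * sigma_sum)      # about 0.0224 -- the +/-0.023 figure

# Fully systematic error: the same offset repeats, so it adds linearly:
print(n * 0.01)           # 0.05
```

The random case grows like $\sqrt{n}$ and the systematic case like $n$; real experiments sit somewhere in between.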

Which is a really long way of saying that significant figures are a rule of thumb. If you're doing serious science that requires making sense of statistical data, you need to know statistics. If you're taking a class that requires you to do all the bookkeeping with significant digits, then you should use whatever rules your prof does, and plan on learning statistics if you're going to make a career out of whatever the class is about.