Has CPU speed already broken Moore's law?

The first thing to remember is that Moore's Law isn't a law, it's just an observation. And it doesn't have to do with speed, not directly anyway.

Originally it was just an observation that component density pretty much doubles roughly every two years; that's it, nothing to do with speed.
As a side effect, it effectively made things both faster (more components on the same chip, with shorter distances between them) and cheaper (fewer chips needed, more chips per silicon wafer).

There are limits, though. As chip design follows Moore's law and the components get smaller, new effects appear. Smaller components have more surface area relative to their volume, so more current leaks out of them, which means you have to pump more electricity into the chip. Eventually enough power leaks away as heat that you waste more current than you can actually use.

I'm not sure, but this is probably the current speed limit: the components are so small that they're harder to keep electrically stable. New materials help with this somewhat, but until some wildly new material appears (diamond, graphene), we're going to stay up against raw MHz speed limits.

That said, CPU MHz isn't computer speed, just like horsepower isn't speed for a car. There are a lot of ways to make things faster without a faster top MHz number.

LATE EDIT

Moore's law has always referred to a process: that you can double the density of components on a chip on some regular, repeating timeframe. Now it seems the sub-20 nm process may be stalling; new memory is being shipped on the same process as the old memory. Yes, this is a single data point, but it may be a harbinger of things to come.

ANOTHER LATE EDIT: An Ars Technica article all but declaring it dead. It was fun having you around for 50 years.


Moore's law describes a long-term trend in the history of computing hardware. The number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. It's not about clock speed.

Also, a CPU's clock speed is not a reliable indicator of its processing power.


The faster the clock speed, the larger the voltage swings need to be to produce a coherent signal. The larger the voltage has to spike, the more power is required. The more power that is required, the more heat the chip gives off, which degrades the chip faster and slows it down.

At a certain point it is simply not worth increasing the clock speed any more, because the extra heat costs more than adding another core would. This is why the number of cores has been increasing instead.

By adding more cores, the heat goes up linearly: each extra core draws roughly the same amount of power at a given clock speed. By making cores faster, heat goes up at least quadratically with clock speed, because the voltage has to rise along with the frequency. When raising the clock becomes more expensive than adding a core, it's time to add another core.
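
As a rough back-of-the-envelope sketch (not from the answer above): using the textbook dynamic-power rule of thumb P ~ C * V^2 * f, and assuming purely for illustration that voltage has to scale with frequency, doubling the core count roughly doubles the power while doubling the clock costs far more.

    package main

    import "fmt"

    // Rough dynamic-power rule of thumb: P ~ C * V^2 * f per core.
    // Illustrative assumption (not from the answer above): pushing the
    // clock higher also forces the supply voltage up.
    func corePower(capacitance, voltage, freqGHz float64) float64 {
        return capacitance * voltage * voltage * freqGHz
    }

    func main() {
        const c = 1.0 // arbitrary capacitance units

        one := corePower(c, 1.0, 3.0)     // one core at 3 GHz
        two := 2 * corePower(c, 1.0, 3.0) // two cores at 3 GHz: power roughly doubles
        fast := corePower(c, 2.0, 6.0)    // one core at 6 GHz with doubled voltage: ~8x the power

        fmt.Printf("1 core  @ 3 GHz: %.0f units\n", one)
        fmt.Printf("2 cores @ 3 GHz: %.0f units (linear in cores)\n", two)
        fmt.Printf("1 core  @ 6 GHz: %.0f units (far worse than adding a core)\n", fast)
    }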

This is independent of Moore's Law, but since the question is about the number of clock cycles, not the number of transistors, this explanation seems more apt. It should be noted that Moore's law does impose limitations of its own, though.

EDIT: More transistors mean more work is done per clock cycle. This happens to be a very important metric that sometimes gets overlooked (it is possible for a 2 GHz CPU to outperform a 3 GHz CPU), and it is a major area of innovation today. So even though clock speeds have been steady, processors have been getting faster in the sense that they can do more work per unit time.
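
As a hypothetical illustration (the instructions-per-cycle figures below are invented, not measurements of real CPUs):

    package main

    import "fmt"

    func main() {
        // Effective throughput ~ clock speed * instructions per cycle (IPC).
        // The IPC figures below are made up purely for illustration.
        cpuA := 3.0 * 1.0 // 3 GHz CPU retiring 1 instruction per cycle
        cpuB := 2.0 * 2.0 // 2 GHz CPU retiring 2 instructions per cycle

        fmt.Printf("CPU A (3 GHz): %.0f billion instructions/s\n", cpuA)
        fmt.Printf("CPU B (2 GHz): %.0f billion instructions/s\n", cpuB) // the "slower" chip wins
    }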

EDIT 2: Here is an interesting link that has more information on related topics. You may find this helpful.

EDIT 3: Unrelated to the number of total clock cycles (number of cores * clock cycles per core) is the issue of parallelism. If a program cannot parallelize its instructions, the fact that you have more cores means nothing; it can only use one at a time. This used to be a much larger problem than it is today. Most languages today support parallelism far better than they used to, and some languages (some of them functional) have made it a core part of the language (see Erlang, Ada and Go as examples).
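
A minimal sketch of that limit, using Amdahl's law (the parallel fractions and core counts below are arbitrary examples, not taken from the answer):

    package main

    import "fmt"

    // Amdahl's law: if a fraction p of a program can run in parallel,
    // the best possible speedup on n cores is 1 / ((1-p) + p/n).
    // The fractions and core counts below are arbitrary examples.
    func speedup(p float64, n int) float64 {
        return 1.0 / ((1.0 - p) + p/float64(n))
    }

    func main() {
        for _, p := range []float64{0.0, 0.5, 0.95} {
            fmt.Printf("%3.0f%% parallel: ", p*100)
            for _, n := range []int{2, 4, 16} {
                fmt.Printf("%2d cores -> %5.2fx   ", n, speedup(p, n))
            }
            fmt.Println()
        }
    }

A program that is 0% parallel gets no benefit from extra cores at all, which is exactly the point above.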
