Why are relatively simpler devices such as microcontrollers so much slower than CPUs?

Clock frequency isn't the whole story; there are several other factors that contribute to the speed difference.

  • Memory: Actual performance is often limited by memory latency. Intel CPUs have large caches to make up for this. Microcontrollers usually don't. Flash memory is much slower than DRAM.

  • Power consumption: This is often a big deal in embedded applications. Actual 200 MHz Intel CPUs consumed more than 10 watts (often much more), and needed a big heat-sink and a fan. That takes space and money, and it's not even counting the external logic and memory that went with it. A 20 MHz AVR takes about 0.2 watts, which includes everything you need. This is also related to the process -- faster transistors tend to be leakier.

  • Operating conditions: As Dmitry points out in the comments, many microcontrollers can operate over a wide voltage and temperature range. That ATmega I mentioned above works from -40C to 85C, and can be stored at anything from -65C to 150C. (Other MCUs work up to 125C or even 155C.) The VCC voltage can be anything from 2.7V to 5.5V (5V +/- 10% for peak performance). That Core i7 datasheet is hard to read since they trim the allowed VCC during manufacturing, but the voltage and temperature tolerances are certainly narrower -- ~3% voltage tolerance and 105C max junction temperature. (5C minimum, but when you're pulling >100 amps, minimum temperatures aren't really a problem.)

  • Gate count: Simpler isn't always faster. If it were, Intel wouldn't need any CPU architects! It's not just pipelining; you also need things like a high-performance FPU. That jacks up the price. A lot of low-end MCUs have integer-only CPUs for that reason.

  • Die area budget: Microcontrollers have to fit a lot of functionality into one die, which often includes all of the memory used for the application. (SRAM and reliable NOR flash are quite large.) PC CPUs talk to off-chip memory and peripherals.

  • Process: Those 5V AVRs are made on an ancient low-cost process. Remember, they were designed from the ground up to be cheap. Intel sells consumer products at high margins using the best technology money can buy. Intel's process also only has to produce pure logic CMOS; MCU processes have to integrate on-chip flash memory as well, which is harder.
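The power figures in the list above make for a quick back-of-the-envelope comparison of energy per clock cycle. A minimal sketch, using the ballpark wattages and clock rates quoted above (illustrative figures, not datasheet values):

```python
# Rough energy-per-cycle comparison, using the ballpark figures
# from the bullet list above (not datasheet values).

def energy_per_cycle_nj(power_w, clock_hz):
    """Average energy per clock cycle, in nanojoules."""
    return power_w / clock_hz * 1e9

intel_200mhz = energy_per_cycle_nj(10.0, 200e6)  # ~10 W at 200 MHz
avr_20mhz = energy_per_cycle_nj(0.2, 20e6)       # ~0.2 W at 20 MHz

print(f"Intel 200 MHz: {intel_200mhz:.0f} nJ/cycle")  # 50 nJ/cycle
print(f"AVR 20 MHz:    {avr_20mhz:.0f} nJ/cycle")     # 10 nJ/cycle
```

So even at these crude numbers, the AVR does each (admittedly much simpler) cycle on a fifth of the energy, and that's before counting the Intel chip's external logic and memory.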

Many of the above factors are related.

You can buy 200 MHz microcontrollers today (here's an example). Of course, they cost ten times as much as those 20 MHz ATmegas...

The short version is that speed depends on much more than simplicity, and cheap products are optimized for cheapness, not speed.


A major underlying technical reason for the slow speeds is that cheap/small MCUs only use on-chip flash memory for program storage (i.e. they don't execute from RAM).

Small MCUs generally don't cache program memory, so they have to read each instruction from flash before executing it, every cycle. This gives deterministic performance and a fixed cycle count per operation, it's cheaper and simpler, and it avoids the PC-style problems where mixing code and data opens up a whole class of threats like buffer-overflow exploits.
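That determinism is why MCU programmers can compute timing exactly, down to the cycle. A minimal sketch of sizing a busy-wait delay loop, assuming a fixed cost of 3 cycles per iteration (a typical figure for a simple AVR-style decrement-and-branch loop, assumed here for illustration):

```python
def delay_loop_iterations(delay_s, f_clk_hz, cycles_per_iter=3):
    """Iterations a busy-wait loop needs to burn a given delay,
    valid only because each iteration costs a fixed, known
    number of cycles (no cache, no pipeline surprises)."""
    total_cycles = delay_s * f_clk_hz
    return round(total_cycles / cycles_per_iter)

# 1 ms delay on a 16 MHz core: 16,000 cycles -> 5333 iterations
print(delay_loop_iterations(1e-3, 16e6))  # 5333
```

On a cached, pipelined PC-class CPU this calculation would be meaningless, since the cycle cost of each iteration varies with cache and branch-predictor state.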

The latency of reading from flash memory (on the order of 50-100ns) is much slower than reading from SRAM or DRAM (on the order of 10ns or below), and that latency must be incurred every cycle, limiting the clock speed of the part.
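Since a fetch must complete within one cycle, the maximum clock is just the reciprocal of the access time. A quick sketch using the latency ranges above:

```python
def max_clock_mhz(access_time_ns):
    """Highest clock (MHz) at which an instruction can still be
    fetched from memory within a single cycle."""
    return 1e3 / access_time_ns  # 1/ns, expressed in MHz

print(max_clock_mhz(100))  # 100 ns flash -> 10.0 MHz
print(max_clock_mhz(50))   # 50 ns flash  -> 20.0 MHz
print(max_clock_mhz(10))   # 10 ns SRAM   -> 100.0 MHz
```

Note how the 50 ns figure lands right at the 20 MHz of those ATmega parts: the flash, not the CPU core, is the speed limit.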


Why do people ride a bicycle or a small motorbike when Formula 1 cars exist? Surely it must be better to drive at, say, 300 km/h and get everywhere in no time?

To put it simply, there's no need to be faster than they are. I mean, sure, there's some need, and faster microcontrollers do enable some things, but what are you going to do with all that speed in, say, a vending machine that's in actual use for maybe 1 hour a day? What are you going to do with it in, say, a remote control for a TV?

On the other hand, they have other important capabilities, like low power consumption and being MUCH simpler to program. Basically, they're not PC processors; they're built for different jobs.