Is the cache size or number of cores more important when weighing CPU performance?

CPU Cache

A smaller CPU cache results in a higher probability of cache misses, which significantly degrades performance. That said, an additional core allows the computer to run at least two threads truly simultaneously, so it's a trade-off when you have to pick one over the other.

[Graph: CPU cache size vs. miss rate]

From the above graph, we can see that once the cache size exceeds 1 MB, the probability of a cache miss is already very low, and increasing the cache further yields diminishing returns.

CPU Cores

Extra CPU cores, on the other hand, can deliver drastic speed increases when applications are written to take advantage of multiple cores.
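As a minimal sketch of how an application can take advantage of multiple cores, here is a Python example (the function names and chunk sizes are illustrative, not from the original answer) that splits a large sum across worker processes, each of which can run on a separate core:

```python
from concurrent.futures import ProcessPoolExecutor


def partial_sum(bounds):
    # Sum one chunk of the range; each chunk can run on its own core.
    lo, hi = bounds
    return sum(range(lo, hi))


def parallel_sum(n, workers=2):
    # Split [0, n) into one contiguous chunk per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    # Same result as sum(range(1_000_000)), but the work is spread
    # across processes, so a second core can actually be used.
    print(parallel_sum(1_000_000))
```

Note that the speed-up only materializes if the work really is divisible like this; a single-threaded application gains nothing from the second core.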

For most real-world applications, the extra execution core will provide a greater performance gain than the extra cache. Both cache size and core count are vitally important when weighing a computer's performance, but when you are starting with a relatively low core count, extra cores will usually provide significant performance gains.

...


An extra core, easily, for most applications. Under optimal conditions you can do twice the calculations. Cache helps, but it does not double your speed.


The right choice (dual core vs. extra cache capacity) depends on the target applications that will run on the laptop.

Dual-core processors will theoretically halve the execution time compared to a single-core processor. In practice, however, the 2x speedup is rarely achieved due to the challenges of writing parallel applications. Amdahl's law (link) shows that even if an application has 90% of its execution perfectly parallelized (a challenging task for large applications), the speedup is 1.82x instead of 2x. The speedup from the second core only decreases further for applications that are not implemented in a scalable manner.
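The 1.82x figure follows directly from Amdahl's formula; a quick sketch to verify it (the function name is mine, not from the original answer):

```python
def amdahl_speedup(p, n):
    # Amdahl's law: speedup on n cores when a fraction p of the
    # execution time is perfectly parallelizable.
    return 1.0 / ((1.0 - p) + p / n)


# 90% parallel on 2 cores: 1 / (0.1 + 0.9/2) = 1 / 0.55
print(round(amdahl_speedup(0.9, 2), 2))  # 1.82
```

Plugging in n → ∞ also shows the hard ceiling: with p = 0.9 the speedup can never exceed 10x, no matter how many cores are added.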

A larger L2 cache capacity will reduce the cache miss rate, as shown in the answer by @Breakthrough. However, that answer draws an incorrect conclusion from the plot: that cache capacity beyond 1 MB provides only marginal improvements. The point of diminishing returns depends on the application (particularly, the application's working set size (link)). Many applications have working sets larger than 1 MB, so larger caches will help improve performance by avoiding long-latency DRAM accesses (processors operate up to three orders of magnitude faster than main memory).

Finally, while my answer might sound like it favors a larger cache over a second core, I would like to point out that most modern CS curricula focus on introducing parallel programming to students. Therefore, multi-core processors make more sense even though they aren't clearly the better choice performance-wise.