When Intel/AMD choose their nanometer processes, why were the specific numbers (5, 7, 10, 14, 22, 32, 45, etc.) chosen?

There are a number of different reasons for this.

The numbers aren't chosen

Modern CPU manufacturing processes, at least for top-of-the-line mainstream CPUs such as Intel Xeon and Core, AMD Epyc and Ryzen, etc., are at the very edge of what is currently physically possible and economically viable.

Since the laws of physics and the laws of economics are the same for all players, it is to be expected that they all end up using the same technology. The only way this could be different is if one company managed a totally game-changing technological breakthrough without any other company noticing. Given the highly competitive nature of the industry, the amount of research and development invested by all companies, and the comparatively small community where everybody knows what the others are up to, this is highly unlikely.

So, in other words: Intel and AMD don't choose the process node size, they just use the best thing that is currently available, and that happens to be similar for both companies.

The numbers aren't real

The numbers are marketing terms chosen by an industry think tank. They don't accurately capture every detail of the various processes. There may very well be differences in the processes that have more impact than the node size.

For example, Intel is currently using the improved second generation of its 10nm process. Yet, both the first generation and the improved second generation of this process are lumped together under the same name "10nm" in the roadmap in your question.

Which brings us to our next two points. The first is a throwback to point #1; the second is a throwback to this very second point:

The numbers aren't chosen by Intel and AMD

As mentioned, the numbers are marketing terms chosen by an industry think tank. They aren't actually chosen by Intel and AMD.

The numbers are predictions

There is another way in which the numbers aren't real: not only are they marketing terms that don't fully capture all the details, they are also predictions.

Now, as you probably know, predictions are hard. Especially predictions of the future. Case in point: the roadmap you show in your question has a 5nm process node for 2020, but in fact the current top-of-the-line offerings are 10nm from Intel and 7nm from AMD, Apple, and Nvidia. IBM's current top-of-the-line is the POWER9, launched in 2017 on a 14nm process. The POWER10 will probably be available in 2021, manufactured on either a 10nm or a 7nm process.

As you can see, the prediction is actually doubly wrong: it predicts that Intel and AMD will be in lockstep, and it predicts that the process node size will be 5nm, yet Intel and AMD are not in lockstep and neither of the two has hit 5nm yet.

The numbers are kind of a self-fulfilling prophecy

No company wants to be caught failing to hit the predicted process improvements. So, they work very hard to "hit the mark", but no harder, since these improvements are very expensive. (Moore's Second Law predicts that as chips get exponentially cheaper (for the same performance) or exponentially more performant (for the same price), chip fabrication gets exponentially more expensive.)

This is similar to what happened with Moore's Laws: originally, Gordon Moore wrote them down as historical observations and projected their trend lines 10 years into the future without actually having solid statistical grounds to do so. 10 years later, he revised them (he had originally projected a doubling every year, which he then revised to a doubling every two years). Since then, however, Moore's Laws have morphed from historical observations to rough predictions to market expectations, where a manufacturer that doesn't hit the projected improvements of Moore's Laws will have to justify that failure to the market, the shareholders, and the stakeholders.
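For concreteness, here is a minimal sketch of the arithmetic behind the two versions of the projection. Both are plain compound growth; the baseline count of 64 components is a hypothetical figure for illustration only, not a number from this answer:

    # Both versions of Moore's Law are compound growth with a fixed
    # doubling period: count(t) = count_0 * 2 ** (t / doubling_years)

    def projected_count(count_0: float, years: float, doubling_years: float) -> float:
        """Project a component count forward under a fixed doubling period."""
        return count_0 * 2 ** (years / doubling_years)

    base = 64  # hypothetical 1965 baseline, for illustration only
    for years in (5, 10, 20):
        original = projected_count(base, years, doubling_years=1)  # 1965 projection
        revised = projected_count(base, years, doubling_years=2)   # 1975 revision
        print(f"after {years:2d} years: x{original / base:>9,.0f} (doubling yearly) "
              f"vs x{revised / base:>5,.0f} (doubling every two years)")

After just one decade the two rules already differ by a factor of 32, which is why the 1975 revision mattered.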

Also note that despite the ramifications of failing to hit Moore's Law, actual development dropped below the curve predicted by Moore's Law around 2012, and seems to be flattening out.

The ITRS had a similar effect.

Note, however, that the industry think tank which published the ITRS (the International Technology Roadmap for Semiconductors) has not used it since 2017. It has been replaced by a new set of predictions called the IRDS (the International Roadmap for Devices and Systems), which is based more on "pull" created by new applications than on "push" created by process improvements.


To make microchips with lots of transistors in great quantities, you will need one of these:

https://www.asml.com/en/products/euv-lithography-systems

ASML is the market leader in the industry (they are from an area in the Netherlands that is known to be big in pigs and... chip machines). If you buy their latest and greatest machine today, the chips that come out will have a 5 nm path width. Some years ago the paths were a bit wider; like every manufacturer, they periodically have better offerings. So it is not so much Intel's choice as it is a matter of what the latest ASML machines can do.

[Edit]

As Akiva's comment rightfully states, this merely moves the question from Intel to ASML.

Gullible answer

With every generation they do the best they can given the state of their R&D.

More cynical answer

Taking a modest yet just-significant-enough step every couple of years is convenient for the entire industry. Chip machine makers can sell a series of machines (which go for 40 million to over 100 million dollars apiece) for a couple of years; then, when every potential client has one, they release a new version and play the same trick again. Chip makers are fine with this: they can do the same thing to their clients, offering bigger and better chips every couple of years. You are fine with this: you can buy a flashy new device every couple of years when you get bored with the old one.

I honestly do not know the real answer, it is probably somewhere in between the two.


Gordon Moore started at Shockley Labs in the Bay Area, along with several other diverse and creative spirits. When those folks tired of the head games of Shockley, they arranged for financing from Sherman Fairchild (of Fairchild Camera and Instrument) and founded Fairchild Semiconductor.

Here is the key point: at Fairchild, Dr. Moore and the other seven founders had to INVENT all their equipment. Chemically (which was Moore's specialty), mechanically (precision alignment), in the sputtering of aluminum metal, and OPTICALLY.

The initial optics were simply the lenses from a twin-lens reflex camera. Given that typical 35 mm camera lenses can resolve 50 to 100 lines per millimeter, and that there are 1,000 microns per millimeter, the best resolution was somewhere between 20 microns and 10 microns.
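As a quick sanity check on that conversion (the feature size is just the reciprocal of the line count, scaled to microns), here is the arithmetic spelled out:

    # Resolution implied by a lens: 1,000 microns per millimeter, divided
    # by how many lines per millimeter the lens can resolve.
    MICRONS_PER_MM = 1_000

    for lines_per_mm in (50, 100):
        feature_microns = MICRONS_PER_MM / lines_per_mm
        print(f"{lines_per_mm} lines/mm  ->  {feature_microns:.0f} micron resolution")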

That sufficed for about a decade. But the other parts of the fab: the etching, the sputtering (before implanters came along), the precision and repeatable positioning, the light-sensitive photoresist, etc., ALL HAD TO BE INVENTED.

And Gordon Moore was in the ideal situation, contributing every day and seeing the results firsthand: "Gee, this is a lot of fun, most of the time, as we move mankind along this incredible ability to manufacture."

He could see the physical limits were far down the road, so he predicted a 2:1 change every 2 years.

That rapid binary change has eased up. It's very hard. Simple camera lenses no longer suffice. And lots of software is needed as well, to pre-distort the patterns fed to the production systems so that the fringing effects of photons fold into useful final results.

It's very hard. And slow... to fool Mother Nature.