Can it be proven that running a GPU at high temps is bad for the card?

Let us look at the failure mechanisms and see how they are affected by heat. It is important to remember that just because a failure mechanism is accelerated by temperature, the GPU as a whole will not necessarily fail sooner! If a sub-component that lasts 100 years at room temperature only lasts 20 years when it is hot, but another sub-component only lasts 1 year to begin with (and is unaffected by heat), the lifespan of your product will hardly change with temperature.
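
As a toy illustration of that weakest-link point (the numbers below are made up):

```python
# Toy illustration: product life is set by the shortest-lived component.
def product_lifetime(component_lifetimes_years):
    """The product fails when its weakest component fails."""
    return min(component_lifetimes_years)

cool = {"heat_sensitive_part": 100, "heat_insensitive_part": 1}
hot = {"heat_sensitive_part": 20, "heat_insensitive_part": 1}

print(product_lifetime(cool.values()))  # 1 year
print(product_lifetime(hot.values()))   # still 1 year: heat changed nothing that matters
```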

I will ignore the cycling issue discussed by Simeon, as this is not my area of expertise.

On the board level, I can think of one main component that will 'break' with heat: electrolytic capacitors. These capacitors dry out, and it is well understood that they dry out faster when heat is applied. (Tantalum capacitors also tend to have a shorter lifespan, but I don't know how this changes with heat.)
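
Capacitor vendors usually quantify this drying-out with the Arrhenius-derived "10 °C rule" of thumb: expected life roughly doubles for every 10 °C the capacitor runs below its rated temperature. A rough sketch (the rated figures below are made up):

```python
# Rough sketch of the 10 degC rule of thumb for electrolytic capacitors:
#   life ~= rated_life * 2 ** ((rated_temp - actual_temp) / 10)
# The rated figures used below are made up for illustration.
def electrolytic_life_hours(rated_life_h, rated_temp_c, actual_temp_c):
    return rated_life_h * 2 ** ((rated_temp_c - actual_temp_c) / 10)

print(electrolytic_life_hours(2000, 105, 85))   # ~8000 h when run 20 degC below rating
print(electrolytic_life_hours(2000, 105, 105))  # 2000 h at the rated temperature
```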

But what about the silicon?

Here, as I understand it, there are a few things that can cause failure. One of the main ones is electromigration. In a circuit, the electrons flowing through the metal interconnects will actually physically displace atoms. This can get bad enough to open gaps in the conductors, which can then lead to failure.

This image gives a good illustration (from Tatiana Kozlova, Henny W. Zandbergen; In situ TEM observation of electromigration in Ni nanobridges):

[Image: electromigration in a Ni nanobridge observed by in situ TEM]

This process accelerates exponentially with temperature, so the chip will indeed not last as long at a higher temperature if electromigration is the main cause of failure.
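
Electromigration lifetime is normally modelled with Black's equation, MTTF = A · J^-n · exp(Ea / kT), which is where that exponential temperature dependence comes from. A minimal sketch, with typical textbook values assumed for n and Ea (they are process specific):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(current_density, temp_kelvin, n=2.0, ea_ev=0.8):
    """Black's equation for electromigration MTTF, in relative units.

    n and ea_ev are assumed textbook values, not figures for any real GPU.
    """
    return current_density ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_kelvin))

# Relative lifetime at a 60 degC vs a 90 degC junction, same current density:
ratio = black_mttf(1.0, 60 + 273.15) / black_mttf(1.0, 90 + 273.15)
print(ratio)  # roughly 10x longer at the cooler junction with these assumptions
```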

Another mechanism is gate-oxide breakdown, where the transistors inside the circuit suffer gate punch-through. This is also temperature dependent; however, voltage has a much bigger impact here.

There is also VT (threshold-voltage) shift, either due to drift of dopants or due to hot-carrier injection. Dopant drift increases with temperature (but it is unlikely to be an issue, especially with digital circuits, as this is a very slow process). I am not sure about the temperature dependence of hot-carrier injection, but I think voltage is again the more important factor here.
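
For the oxide-breakdown side, a commonly used description is the exponential "E-model" for time-dependent dielectric breakdown, TTF ∝ exp(-γ·E_ox) · exp(Ea / kT), which is one way to see why the voltage (i.e. oxide field) term dominates. A sketch with made-up parameters:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def tddb_ttf(field_mv_per_cm, temp_kelvin, gamma=4.0, ea_ev=0.6):
    """E-model for time-dependent dielectric breakdown, in relative units.

    gamma (field acceleration) and ea_ev are assumed illustrative values only.
    """
    return math.exp(-gamma * field_mv_per_cm) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_kelvin))

# A 10% higher oxide field at the same temperature:
ratio = tddb_ttf(5.0, 358.15) / tddb_ttf(5.5, 358.15)
print(ratio)  # ~7x shorter oxide lifetime from the field increase alone (assumed gamma)
```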

But then there is an important question: how much does this decrease the lifespan? Knowing this, should you make sure that your graphics card stays cool all the time? My guess is no, unless an error was made at the design stage. Circuits are designed with these worst-case situations in mind, and are made to survive being pushed to their limits for the manufacturer's rated lifetime. In the case of people overclocking circuits: the increase in voltage they often use to keep the circuit stable (as it can speed the circuit up a bit) will do far more harm than the temperature itself. In addition, that increase in voltage leads to an increase in current, which will significantly speed up electromigration.
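
To put rough numbers on that overclocking point: in Black's equation the current density enters as J^-n, and in a CMOS chip the switching current scales roughly with voltage and clock. A sketch combining an assumed ~10% current-density increase with a 15 °C hotter junction (same assumed parameters as the Black's equation sketch above):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(current_density, temp_kelvin, n=2.0, ea_ev=0.8):
    # Black's equation in relative units; n and ea_ev are assumed values.
    return current_density ** (-n) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_kelvin))

# Stock vs overclocked: assume the voltage/clock bump raises switching current
# density by ~10% and the junction runs 15 degC hotter. All numbers are assumptions.
stock = black_mttf(current_density=1.0, temp_kelvin=70 + 273.15)
overclocked = black_mttf(current_density=1.1, temp_kelvin=85 + 273.15)
print(stock / overclocked)  # ~3-4x shorter electromigration lifetime under these assumptions
```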


Yes, it has been proven that heat degrades electrical components. Metals expand when they heat up, and solder (used for electrical circuit connections) is a metal alloy, so it expands when heated. Constant heating and cooling causes the joints to repeatedly expand and contract, which can lead to cracking and eventually failure of the joint.
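
Solder-joint fatigue from that repeated expansion and contraction is commonly estimated with a Coffin-Manson style relation, where the number of cycles to failure falls off as a power of the temperature swing per cycle. A sketch with assumed constants:

```python
# Coffin-Manson style estimate for thermal-cycling fatigue, in relative units.
# Cycles to failure ~ c * delta_T ** (-m); c and m are assumed here, and real
# values depend on the solder alloy and joint geometry.
def cycles_to_failure(delta_t_celsius, c=1.0e6, m=2.0):
    return c * delta_t_celsius ** (-m)

print(cycles_to_failure(30))  # milder 30 degC swings each heat-up/cool-down
print(cycles_to_failure(60))  # 60 degC swings: ~4x fewer cycles with m = 2
```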

[Graph: failure rate vs. temperature]

The graph above shows how Arrhenius' law gives a correlation between an increase in heat and semiconductor failure rate. This paper details the effects of heat on electronic components; it deals more with things at the electron level, which is a bit outside my scope of knowledge.
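
The usual way to apply Arrhenius' law here is as an acceleration factor between two junction temperatures, AF = exp[(Ea/k)·(1/T_use - 1/T_stress)]. A sketch with an assumed activation energy:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between two junction temperatures.

    ea_ev is an assumed activation energy; real values depend on the failure mode.
    """
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

print(arrhenius_acceleration(55, 85))  # ~8: failures arrive about 8x faster at 85 degC than at 55 degC
```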


The relationship between the increase in the junction temperature of a semiconductor and the reduction of its MTBF (Mean Time Between Failure) is well understood.

This technical note from Micron discusses it.

In practice, the failure rate will increase exponentially once the junction temperature approaches and exceeds ~125 °C, so if you are operating well below that temperature, small increases may not be that critical.
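
As a rough picture of how the MTBF falls with junction temperature, one can scale a reference MTBF by the Arrhenius acceleration factor (all figures below are assumptions for illustration, not vendor data):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def mtbf_at(junction_c, ref_mtbf_hours, ref_junction_c, ea_ev=0.7):
    # Scale a reference MTBF by the Arrhenius acceleration factor.
    # The reference MTBF and activation energy are assumed, not vendor data.
    t = junction_c + 273.15
    t_ref = ref_junction_c + 273.15
    acceleration = math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_ref - 1 / t))
    return ref_mtbf_hours / acceleration

for tj in (60, 80, 100, 125):
    print(tj, round(mtbf_at(tj, ref_mtbf_hours=1_000_000, ref_junction_c=60)))
# MTBF drops steeply as the junction temperature climbs towards 125 degC
```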