Minimum Switching Frequencies in Boost Converters

Two reasons...

  1. Higher frequencies mean you can use smaller, cheaper, and lighter components.

  2. Below a certain frequency (about 50 kHz) audible noise is generated. At the higher end of that range it will drive your pets nuts; at the lower end it will drive you and your users nuts.

The trick is to strike a balance: make the frequency high enough to limit component cost, yet low enough that you can still find suitable switches that are not too lossy.

There is also another trade-off: lower frequencies mean more ripple you need to deal with, but then again higher frequencies mean more EMI noise.

Getting the right balance is a bit of an art.


Why are switching frequencies for boost converters above the 100kHz range?

A powerful boost converter could operate in the low/medium kHz range and might do so because the power transistors used are inherently slow devices. The trick is to operate at a frequency where static losses approximately equal dynamic losses.
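As a rough sketch of that balance: conduction loss is independent of frequency while switching loss grows linearly with it, so the two cross at a single frequency. All component values below are illustrative assumptions, not taken from any particular datasheet:

```python
# Sketch: find the switching frequency where MOSFET static (conduction)
# losses roughly equal dynamic (switching) losses.
# All values are assumed for illustration only.

I_rms = 5.0        # A, RMS switch current (assumed)
R_ds_on = 0.02     # ohm, MOSFET on-resistance (assumed)
V_sw = 48.0        # V, voltage across the switch at turn-off (assumed)
I_sw = 5.0         # A, current at the switching instant (assumed)
t_rise = 20e-9     # s, turn-on transition time (assumed)
t_fall = 30e-9     # s, turn-off transition time (assumed)

# Static loss does not depend on frequency:
P_cond = I_rms**2 * R_ds_on                   # W

# Dynamic loss per cycle (simple triangular overlap model):
E_sw = 0.5 * V_sw * I_sw * (t_rise + t_fall)  # J per cycle

# Switching loss is E_sw * f, so the two losses are equal at:
f_balance = P_cond / E_sw
print(f"P_cond = {P_cond:.2f} W, losses balance near {f_balance/1e3:.0f} kHz")
```

With these (assumed) numbers the balance lands in the tens of kHz; a faster MOSFET (shorter transition times) pushes it higher.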

If I understand correctly, as the frequency increases from 100kHz upwards, the ripple current that is created from the inductor decreases, the current change over time decreases in the inductor, and components can be smaller because they don't have to deal with larger (relative) currents.

Ripple current sets the scene for how much energy is stored by the inductor and given to the capacitor each cycle. At higher frequencies this transfer is done more times per second, so for the same power delivered to a load the ripple current could be smaller. But a smaller ripple doesn't quite deliver the same power (energy is proportional to current squared), so the inductance has to be reduced, and this increases the ripple current again. If you try to factor in the possibility of running discontinuous or continuous conduction mode, it's not as clear cut as you might think.
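The interplay above can be put in numbers with the standard CCM ripple expression \$\Delta I = V_{in} D / (L F_{sw})\$; the values below are assumptions for illustration:

```python
# Illustrative sketch of inductor ripple current in a CCM boost:
# delta_I = V_in * D / (L * f_sw). All values are assumed.

V_in = 12.0            # V input (assumed)
V_out = 24.0           # V output (assumed)
D = 1 - V_in / V_out   # ideal CCM boost duty cycle = 0.5

def ripple(L, f_sw):
    """Peak-to-peak inductor ripple current during the on-time (A)."""
    return V_in * D / (L * f_sw)

L_base = 100e-6
print(ripple(L_base, 100e3))      # baseline ripple at 100 kHz
print(ripple(L_base, 200e3))      # doubling f_sw halves the ripple...
print(ripple(L_base / 2, 200e3))  # ...so L can be halved for the same ripple
```

This is the "components can be smaller" effect: doubling the frequency lets you halve the inductance while keeping the same ripple.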

Components can be smaller, yes.

However, they're countered by decreased efficiency from switching losses in the MOSFET, as well as losses from the core of the inductor.

Yes and no. Switching losses do increase, but some core losses, such as saturation, reduce. However, eddy-current losses (usually smaller than saturation losses) will tend to increase, and that is why you see significant development in making cores suitable for switching above 1 MHz.

So, given that you can increase efficiency by decreasing frequency, why don't switching frequencies occur in lower ranges; the 100Hz-10kHz range, for example?

At low frequencies inductor saturation is a big factor: lower the frequency and saturation losses can suddenly sky-rocket. Maintaining the balance between dynamic and static losses in your MOSFETs usually gives the best frequency to aim for (as mentioned earlier).
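A quick sketch of why low frequency pushes a core toward saturation: the flux swing per cycle is \$\Delta B = V_L t_{on} / (N A_e)\$, and the on-time grows as the frequency drops. All values below are illustrative assumptions:

```python
# Sketch: core flux swing vs. switching frequency for a fixed winding.
# delta_B = V_L * t_on / (N * A_e). Values are assumed for illustration.

V_L = 12.0     # V across the winding during the on-time (assumed)
D = 0.5        # duty cycle (assumed)
N = 20         # turns (assumed)
A_e = 50e-6    # m^2, effective core area (assumed)
B_sat = 0.35   # T, typical ferrite saturation limit (assumed)

for f_sw in (100e3, 10e3, 1e3):
    t_on = D / f_sw
    dB = V_L * t_on / (N * A_e)
    status = "OK" if dB < B_sat else "saturates!"
    print(f"{f_sw/1e3:>5.0f} kHz: delta_B = {dB:.2f} T ({status})")
```

Dropping from 100 kHz to 10 kHz multiplies the flux swing tenfold with the same core and turns count, which is exactly the sky-rocketing effect described above; at low frequency you need a much bigger core or many more turns.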

Is it that the current changes the inductor has to deal with are too high, and the inductor's winding resistance starts to dominate as the main source of power loss?

Lower frequency means fewer energy transfers per second, so each cycle has to move more energy and you have to run at higher peak currents (for the same power out); but don't get obsessed about this. Running CCM (continuous conduction mode) means the ripple current can be very small while transferring the same energy.
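The CCM point can be made concrete: the power is carried by the average inductor current, so the ripple can stay a small fraction of it even at low frequency. The numbers below are assumptions for illustration:

```python
# Sketch: in CCM the average inductor current carries the power, and the
# ripple rides on top of it. All values are assumed for illustration.

V_in, V_out, P_out = 12.0, 24.0, 60.0
D = 1 - V_in / V_out          # ideal CCM boost duty cycle
I_avg = P_out / V_in          # average inductor current in a boost (A)

L = 1e-3                      # deliberately large inductor (assumed)
for f_sw in (10e3, 100e3):
    dI = V_in * D / (L * f_sw)
    print(f"{f_sw/1e3:.0f} kHz: ripple {dI:.2f} A "
          f"({100 * dI / I_avg:.0f}% of the {I_avg:.0f} A average)")
```

Even at 10 kHz the ripple here is only a modest fraction of the average current; the cost is the large inductance needed to keep it that way.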


There are a lot of different factors that dictate the choice of the switching frequency for any converter. One of them is magnetics and capacitor size, which tend to shrink as frequency goes up. If you go lower in frequency, not only do these components get larger, but you will also suffer from acoustic noise when you enter the audio range. The second important factor is efficiency. If you permanently switch at 100 kHz in light-load conditions, switching losses will affect efficiency big time. As a result, a lot of today's dc-dc converters implement a so-called frequency foldback mode, which reduces the switching frequency as the load current gets lighter. It improves efficiency a lot. Controllers usually stop folding above 20 kHz for acoustic-noise reasons and enter skip-cycle mode if the load current drops further. If this skip sequence occurs at low peak current, you do not hear anything.

One important factor is the crossover frequency \$f_c\$, which is usually selected well below half of the switching frequency. For instance, should you want an aggressive 50-kHz crossover, you can see that a 100-kHz \$F_{sw}\$ won't respect the Nyquist criterion; you would need to push \$F_{sw}\$ to 250 kHz, for instance. However, you need to keep in mind the nasty right half-plane zero (RHPZ) plaguing all indirect-energy-transfer converters such as the boost or buck-boost structures. A RHPZ is the mathematical representation of the delay inherent to boost operation: first store energy in the inductance \$L\$, then release it to the load. If the current demand grows, you can't answer instantaneously as with a buck converter, because you first need to store more energy in the inductor. If you fail to do this because there is not enough volt-seconds or the inductor is too large, then \$V_{out}\$ first drops and you have momentarily reversed the control law until the inductor current builds up to the right value. You fight this RHPZ (in voltage mode or current mode, same position) by adopting a crossover frequency no higher than about 30% of the lowest RHPZ position. For a boost converter operated in continuous conduction mode (CCM), the RHPZ is located at \$\omega_z=\frac{{R_L}(1-D)^2}{L}\$, so you see that adopting a smaller \$L\$ by pushing the switching frequency will also push the RHPZ higher (so more bandwidth), and it is another parameter to account for when selecting \$F_{sw}\$.
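A small numeric sketch of that last point, using the \$\omega_z = R_L(1-D)^2/L\$ expression with a crossover target at 30% of the RHPZ; the load, duty cycle, and inductances are assumptions for illustration:

```python
import math

# Sketch: CCM boost RHPZ position and a conservative crossover target,
# f_c ~ 0.3 * f_RHPZ. Component values are assumed for illustration.

R_L = 10.0   # ohm, load resistance (assumed)
D = 0.5      # duty cycle (assumed)

def f_rhpz(L):
    """RHPZ frequency in Hz for a CCM boost: omega_z = R_L*(1-D)^2 / L."""
    omega_z = R_L * (1 - D)**2 / L
    return omega_z / (2 * math.pi)

for L in (100e-6, 50e-6):
    fz = f_rhpz(L)
    fc = 0.3 * fz   # keep crossover at ~30% of the RHPZ position
    print(f"L = {L*1e6:.0f} uH: RHPZ at {fz/1e3:.1f} kHz, "
          f"crossover target ~{fc/1e3:.1f} kHz")
```

Halving \$L\$ (which a higher \$F_{sw}\$ permits) doubles the RHPZ frequency and therefore doubles the achievable crossover.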

So we have seen component size, acoustic noise, crossover frequency and, of course, EMI. EMI is a big criterion in selecting the switching frequency, depending on what the boost converter is going to supply (an RF-sensitive head, measurement circuits, etc.) or what standard you need to pass. For instance, despite the possibility of switching at much higher frequency, the vast majority of ac-dc adapters for notebooks operate at 65 kHz. Why? Because the second harmonic \$H_2\$ is below 150 kHz, the start frequency of the CISPR 22 standard. So if you account for the natural harmonic attenuation, you may have less work to do reducing the emission level when switching at 65 kHz (because you deal with an already-lower \$H_3\$) than when switching at 200 kHz with the fundamental at full power. Hope this was not too much verbiage! : )
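The 65-kHz argument can be checked with simple arithmetic against the 150-kHz start of the measurement band:

```python
# Sketch: which harmonics of the switching frequency fall inside the
# CISPR 22 conducted-emission band, which starts at 150 kHz.

def harmonics_in_band(f_sw, band_start=150e3, n_max=5):
    """Orders n (up to n_max) whose harmonic n*f_sw is at or above band_start."""
    return [n for n in range(1, n_max + 1) if n * f_sw >= band_start]

# At 65 kHz, H2 = 130 kHz sits below the band; H3 = 195 kHz is the
# first harmonic the receiver measures:
print(harmonics_in_band(65e3))

# At 200 kHz, the full-power fundamental itself is inside the band:
print(harmonics_in_band(200e3))
```

This is why 65 kHz is such a popular choice: the first measured component is \$H_3\$, already attenuated relative to the fundamental.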