Host CPU does not scale frequency when KVM guest needs it

I have found the solution thanks to the tip given by Nils and a nice article.

Tuning the ondemand CPU DVFS governor

The ondemand governor has a set of parameters that control when dynamic frequency scaling (or DVFS, for dynamic voltage and frequency scaling) kicks in. Those parameters are located under the sysfs tree: /sys/devices/system/cpu/cpufreq/ondemand/

One of these parameters is up_threshold. As the name suggests, it is a threshold (expressed in % of CPU load; I have not found out whether this is per core or across all cores) above which the ondemand governor kicks in and starts changing the frequency dynamically.
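Before changing anything, you can inspect the available tunables and the current threshold like this (the path assumes the classic system-wide ondemand layout; newer kernels may expose the same files per policy under /sys/devices/system/cpu/cpufreq/policy*/):

ls /sys/devices/system/cpu/cpufreq/ondemand/
cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold    # default is typically 95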

To change it to 50% (for example) using sudo is simple:
sudo bash -c "echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold"

If you are root, an even simpler command is possible:
echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

Note: those changes will be lost after the next host reboot. You should add them to a configuration file that is read during boot, such as /etc/rc.local on Ubuntu.
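A minimal sketch of such a boot-time hook, assuming your distribution still executes /etc/rc.local at the end of the boot sequence:

#!/bin/sh
# /etc/rc.local -- runs at the end of multi-user boot on many distributions
echo 50 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
exit 0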

I have found out that my guest VM, although consuming a lot of CPU on the host (80-140%), was spreading the load over both cores, so no single core went above 95%, and thus the CPU, to my exasperation, stayed at 800 MHz. With the above tweak, the CPU now changes its frequency per core much faster, which suits my needs better. 50% seems a better threshold for my guest usage; your mileage may vary.
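To watch the effect, you can monitor the per-core frequency while the guest is loaded (any of the usual tools work; here is a plain /proc approach):

watch -n1 "grep 'cpu MHz' /proc/cpuinfo"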

Optionally, verify if you are using HPET

It is possible that some applications which implement timers incorrectly might be affected by DVFS. This can be a problem on the host and/or in the guest environment, though the host can use some convoluted algorithms to try to minimise it. However, modern CPUs have newer TSCs (Time Stamp Counters) which are independent of the current CPU/core frequency: constant (constant_tsc), invariant (invariant_tsc) or non-stop (nonstop_tsc); see this Chromium article about TSC resynchronisation for more information on each. So if your CPU is equipped with one of these TSCs, you don't need to force HPET. To verify whether your host CPU supports them, use a command like the following (change the grep parameter to the corresponding CPU feature; here we test for the constant TSC):

$ grep constant_tsc /proc/cpuinfo
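If you want to check all of the flags mentioned above in one pass, a variation like this does it (the exact flag names exposed by your kernel may vary; an empty result means none was found):

grep -m1 -o -E 'constant_tsc|nonstop_tsc|invariant_tsc' /proc/cpuinfo | sort -u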

If you do not have one of these modern TSCs, you should either:

  1. Activate HPET, as described hereafter;
  2. Not use CPU DVFS if you have any applications in the VM that rely on precise timing; this is the option recommended by Red Hat.

A safe solution is to enable the HPET timers (see below for more details). They are slower to query than the TSC ones (the TSC is in the CPU, whereas HPET is in the motherboard) and perhaps not as precise (HPET >10 MHz; TSC often runs at the max CPU clock), but they are much more reliable, especially in a DVFS configuration where each core could run at a different frequency. Linux is clever enough to use the best available timer: it will rely first on the TSC, but if that is found too unreliable, it will fall back to HPET. This works well on host (bare metal) systems, but because the hypervisor does not export all the necessary information, it is more of a challenge for the guest VM to detect a badly behaving TSC. The trick is then to force the guest to use HPET, although you will need the hypervisor to make this clock source available to the guests!

Below you can find how to configure and/or enable HPET on Linux and FreeBSD.

Linux HPET configuration

HPET, or high-precision event timer, is a hardware timer that you can find in most commodity PCs since 2005. This timer can be used efficiently by modern OSes (the Linux kernel has supported it since 2.6; FreeBSD introduced it in 6.3, with stable support since the 9.x branch) to provide timing that stays consistent regardless of CPU power management. It also makes tick-less scheduler implementations easier to build.

Basically, HPET acts like a safety net: even if the host has DVFS active, host and guest timing events will be less affected.

There is a good article from IBM regarding enabling HPET; it explains how to verify which hardware timer your kernel is using and which ones are available. I provide a brief summary here:

Checking the available hardware timer(s):
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

Checking the current active timer:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
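If hpet shows up in the available list but is not the current source, you can also switch at runtime (as root; this change is lost on reboot):

echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource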

A simpler way to force the usage of HPET, if it is available, is to modify your boot loader configuration to enable it (possible since kernel 2.6.16). This configuration is distribution dependent, so please refer to your own distribution's documentation to set it properly. You should add hpet=enable or clocksource=hpet to the kernel boot line (which one depends on the kernel version or distribution; I did not find any coherent information).
This makes sure that the guest uses the HPET timer.
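For example, assuming GRUB 2 (as on Debian or Ubuntu), the parameter goes into the GRUB_CMDLINE_LINUX_DEFAULT line of /etc/default/grub, after which the configuration must be regenerated:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash clocksource=hpet"
sudo update-grub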

Note: on my kernel 3.5, Linux seems to pick up the hpet timer automatically.

FreeBSD guest HPET configuration

On FreeBSD one can check which timers are available by running:
sysctl kern.timecounter.choice

The currently chosen timer can be verified with:
sysctl kern.timecounter.hardware

FreeBSD 9.1 seems to automatically prefer HPET over the other timer providers.

Todo: how to force HPET on FreeBSD.

Hypervisor HPET export

KVM seems to export HPET automatically when the host has support for it. However, Linux guests will prefer the other automatically exported clock source, kvm-clock (a paravirtualised version of the host TSC). Some people report trouble with this preferred clock; your mileage may vary. If you want to force HPET in the guest, refer to the section above.
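If you manage the guest with libvirt, HPET exposure is controlled by a timer element in the guest definition; a sketch (the guest name below is a placeholder):

virsh edit myguest
# then, inside the <clock> element, make sure you have:
#   <timer name='hpet' present='yes'/>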

VirtualBox does not export the HPET clock to the guest by default, and there is no option to do so in the GUI. You need to use the command line and make sure the VM is powered off. The command is:

./VBoxManage modifyvm "VM NAME" --hpet on
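You can verify the setting afterwards with something like the following (the exact label in the output may differ between VirtualBox versions):

./VBoxManage showvminfo "VM NAME" | grep -i hpet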

If the guest keeps selecting a source other than HPET after the above change, please refer to the section above on how to force the kernel to use the HPET clock source.


It is not the guest that triggers the upscaling - the host must do this. So you have to lower the corresponding trigger level on the host.


On the host, a KVM vCPU looks like a process. The scaling mechanism doesn't watch processes, only the overall CPU consumption.

And it is generally best practice to disable CPU scaling/throttling/etc. when running VMs.