Understanding Red Hat's recommended tuned profiles

Here's the rundown of the tuned-adm configurations...

I think it helps to see them in tabular form. The main thing to note is that the default RHEL6 settings leave a lot of performance on the table! The other thing is that the enterprise-storage and virtual-guest profiles are identical except for reduced swappiness on the virtual-guest side (which makes sense, right?).

[Table: settings applied by each tuned profile]

As for a recommendation on the storage I/O elevator, there are a few layers of abstraction in the storage stack. Using the noop scheduler would make sense if you were using RDMs or presenting storage directly to your virtual machines. But since they're going to live on NFS or VMFS, I still like the additional tuning options afforded by the deadline scheduler.
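
If you want to check or flip the elevator yourself before committing to a profile, here's a minimal sketch (sda is a stand-in for whatever device you're actually testing):

    # Check the current elevator for a block device; the active scheduler
    # appears in square brackets. (/dev/sda is an example; use your device.)
    cat /sys/block/sda/queue/scheduler

    # Switch to deadline at runtime. This is not persistent across reboots:
    echo deadline > /sys/block/sda/queue/scheduler

    # To persist on RHEL6, append elevator=deadline to the kernel line
    # in /boot/grub/grub.conf (or let a tuned profile manage it for you).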

Tuned profiles can be changed on-the-fly on running systems, so if you have any concerns, test with your application and specific environment and benchmark.
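
Switching profiles is a one-liner, so trying a few against your workload is cheap:

    # See which profiles your tuned package ships:
    tuned-adm list

    # Apply one on the fly -- no reboot needed:
    tuned-adm profile enterprise-storage

    # And verify what's currently active:
    tuned-adm active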


Have a watch of Shak and Larry's performance tuning videos from Summit; they talk about the tuned profiles in depth.

  • Part 1 - http://www.youtube.com/watch?v=fATEiBJ3pKw
  • Part 2 - http://www.youtube.com/watch?v=km-vLELmWLs

One of the biggest intended takeaways is that the profiles are only a recommended starting point, not immutable numbers which are magically perfect for every environment.

Start with one profile and have a play around with the settings. Generate a good production-like test workload and measure metrics which are important to your business.

Change one thing at a time and record every result at every iteration. When you're done, review the results and pick the settings that gave the best results. That's your ideal tuned profile.
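
As a sketch of what that iteration loop might look like (the fio job and the /mnt/test path below are placeholders, not recommendations; substitute a workload that actually resembles your application and a directory on the storage under test):

    # Hypothetical sweep: try a few dirty_ratio values against a synthetic
    # fio write workload and keep the results for later comparison.
    for ratio in 10 20 40; do
        sysctl -w vm.dirty_ratio=$ratio
        fio --name=sweep --rw=randwrite --bs=4k --size=1g \
            --directory=/mnt/test --output=fio-dirty-ratio-$ratio.log
    done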


The short answer is that any tuning is guesswork and only has value when backed up with empirical data: Try it. Measure it. If you don't like it, tweak it.

A longer answer:

"Increasing the dirty_ratio would probably mean higher write performance ... IO will be blocked for a considerably longer time"

No. Increasing the dirty ratio means that your system is less likely to get into a state where it needs to start blocking on writes. The downside is that more memory is tied up holding dirty pages, and there's a greater risk of data loss if the machine goes down before they're flushed.
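
To see the knob in question (the values are percentages of total memory; the 40 below is illustrative, not a recommendation):

    # Inspect the current writeback thresholds:
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # Raise the hard limit at runtime:
    sysctl -w vm.dirty_ratio=40

    # Make it survive a reboot by adding it to /etc/sysctl.conf:
    echo 'vm.dirty_ratio = 40' >> /etc/sysctl.conf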

"meaning that running tasks will get more time to finish their work"

Processes will usually yield before their time slice expires. The problem with a VM is that your machine may be competing for CPU and L1/L2 cache with other VMs, and high levels of task switching (due to pre-empting) have a big impact on throughput. The kinds of applications usually deployed into VMs are ones which are CPU bound (web servers, application servers).

Yes, the increase in throughput (which applies to all types of application) will come at the cost of an increase in latency - but the latter is of the order of microseconds when most transactions are taking milliseconds. If you need real-time capability or very low latency, then you shouldn't be using a VM.
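
For concreteness, a quick sketch of the granularity tunables being discussed (assuming the RHEL6 sysctl names; the 10 ms value is illustrative, so measure before keeping it):

    # The scheduler knobs behind that quote, in nanoseconds:
    sysctl kernel.sched_min_granularity_ns kernel.sched_wakeup_granularity_ns

    # Raising the minimum granularity means fewer pre-emptions per second,
    # buying throughput at the cost of scheduling latency:
    sysctl -w kernel.sched_min_granularity_ns=10000000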