Performance Tuning
Virtual machines generally show performance similar to physical hardware, and the overheads are decreasing as virtualisation technology improves. In some cases, however, performance can be affected, for example by high disk I/O rates or additional network latency. A few hints and tips to investigate are described below. For questions or issues concerning the performance of a VM, please contact the cloud team via the service desk at https://cern.service-now.com/service-portal/.
Define a workload
In order to test different options, it is necessary to define a representative workload, such as the time to perform a compilation or to transfer a file. With memory caching, the results of these tests can vary with the number of times they are run, since the disk contents may already be in memory and therefore be served more quickly.
Tuned-adm
Tuned-adm is a Red Hat tool for setting kernel parameters, buffer sizes and disk I/O schedulers according to the workload. Red Hat identifies a number of common profiles and determines appropriate settings for each, so there is no need to go into the details of every kernel parameter. A VM created from the standard images is set up with the virtual-guest profile by default; this may need to be set explicitly if user-defined images are used. virtual-guest is a good starting point for benchmarking the workload, but for applications such as web servers, alternative profiles may be better suited. Running tuned-adm list shows the profiles available to investigate.
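For illustration, the profiles can be listed and switched as follows (the set of profiles available depends on the tuned version installed):

```
# Show the available profiles and the currently active one
tuned-adm list
tuned-adm active

# Set the default profile used for VMs from the standard images
tuned-adm profile virtual-guest

# Try an alternative profile, e.g. for latency-sensitive services
tuned-adm profile latency-performance
```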
RAM disks
Some I/O patterns are very intensive on a small volume of data which does not need to be stored permanently. Such data is often kept in /tmp or /var/tmp, but applications generally allow other locations to be configured via the TMPDIR variable or a configuration parameter. Depending on the size of the temporary data, a RAM disk can be used to mount a section of RAM as a file system. This gives excellent performance at the expense of eating into the RAM available to the application; selecting a larger flavor of VM may allow a RAM disk to be allocated for this purpose. To mount a RAM file system, use the mount command as shown below.
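A minimal sketch, assuming a 2GB RAM disk mounted on /mnt/ramdisk (both the size and the mount point are examples to adapt):

```
# Create the mount point and mount a 2GB tmpfs RAM disk on it
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

# Point the application at it, e.g. via the TMPDIR variable
export TMPDIR=/mnt/ramdisk
```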
Care should be taken not to swap memory. This is an expensive operation and would lead to a reduction in overall performance.
AFS client configuration
AFS, by default, stores blocks fetched from the remote server in a cache on a local disk partition. This creates additional I/O load, which can be avoided by using a (smaller) memory cache instead. Memory caching can be enabled in the /etc/sysconfig/afsd file, as shown below.
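A minimal sketch, assuming the AFS client reads its startup options from AFSD_ARGS in this file (the variable name and the cache size are assumptions to check against the installed client and the afsd man page):

```
# /etc/sysconfig/afsd
# Use an in-memory cache instead of the default disk cache.
# 131072 blocks of 1KB gives a 128MB cache; size it to fit the VM's RAM.
AFSD_ARGS="-memcache -blocks 131072"
```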
Network offloading
Linux kernels, by default, offload some processing to the network card. In the case of a VM this does not improve performance, and the effects are most visible on high performance networks.
Offloading can be disabled with ethtool -K, and the current settings can be checked with ethtool -k. The change can be made persistent by setting the ETHTOOL_OPTS value in the network scripts, such as /etc/sysconfig/network-scripts/ifcfg-eth0.
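A minimal sketch, assuming the interface is eth0 and that the segmentation and receive offload features are the ones to disable (check the ethtool -k output for the features actually enabled on the VM):

```
# Check the current offload settings
ethtool -k eth0

# Disable the common offload features
ethtool -K eth0 tso off gso off gro off

# Make the change persistent across reboots (RHEL/CentOS network scripts),
# in /etc/sysconfig/network-scripts/ifcfg-eth0:
#   ETHTOOL_OPTS="-K eth0 tso off gso off gro off"
```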
IO caching with ZFS
The combination of an ephemeral drive on SSDs and a Cinder volume can be used to leverage the ZFS caching features for reads (l2arc) and writes (ZIL). The setup is outlined below, with example commands after each list of steps.
Prepare the system
- Resize the LVM partition to 20GB
- Disable Puppet
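A hedged sketch of the preparation, assuming the ephemeral drive holds an LVM volume that can be shrunk (the volume group and logical volume names are examples; shrinking a mounted ext4 filesystem requires it to be unmounted or done from a rescue environment):

```
# Stop Puppet from reverting the manual changes made below
puppet agent --disable 'ZFS cache setup'

# Shrink the LVM volume to 20GB to leave free space on the SSD-backed
# ephemeral drive for the ZIL and l2arc partitions
lvreduce --resizefs -L 20G /dev/vg1/data
```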
Install ZFS
- Add zfs-kmod repo
- Install ZFS packages and load the module
- Create the data pool from the attached Cinder volume
- Set sensible defaults
- Create partition for ZIL (4GB)
- Create partition for l2arc (20GB)
- Attach the 3rd partition as ZIL
- Attach the 4th partition as L2ARC
- Check the layout of the ZFS pool with zpool status
- Create the file system
- Potential additional settings
- Set the mountpoint
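A consolidated sketch of these steps. The repository URL, device names (/dev/vdb for the attached Cinder volume, /dev/vda3 and /dev/vda4 for the new partitions on the ephemeral drive), pool and dataset names, and the property values are all assumptions to adapt to the actual VM:

```
# Add the zfs-kmod repository and install ZFS (CentOS/RHEL style; check the
# OpenZFS documentation for the release RPM matching the OS version)
yum install -y https://zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm
yum install -y yum-utils
yum-config-manager --disable zfs
yum-config-manager --enable zfs-kmod
yum install -y zfs
modprobe zfs

# Create the data pool from the attached Cinder volume
zpool create data /dev/vdb

# Set sensible defaults
zfs set compression=lz4 data
zfs set atime=off data
zfs set xattr=sa data

# Create the ZIL (4GB) and l2arc (20GB) partitions in the space freed on the
# ephemeral drive (start/end offsets depend on the existing partition layout)
parted -s /dev/vda mkpart primary 21GB 25GB
parted -s /dev/vda mkpart primary 25GB 45GB

# Attach the 3rd partition as ZIL and the 4th as L2ARC
zpool add data log /dev/vda3
zpool add data cache /dev/vda4

# Check the layout of the ZFS pool
zpool status data

# Create the file system
zfs create data/fs

# Potential additional settings
zfs set recordsize=128k data/fs
zfs set logbias=latency data/fs

# Set the mountpoint
zfs set mountpoint=/data data/fs
```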
Save the setting
Once a good combination of settings has been found, the configuration of the machine (or the associated configuration management system) should be updated permanently so that the settings persist across reboots. A tool such as Puppet can ensure this is done consistently over a number of virtual machines.