Virtual machines generally perform similarly to physical hardware, and the overheads are decreasing as virtualisation technology improves. However, in some cases performance can suffer, for example under high disk I/O rates or from additional network latency. A few hints and tips for investigation are described below. For questions or issues concerning the performance of a VM, please contact the cloud team via the service desk at https://cern.service-now.com/service-portal/.
Define a workload
In order to test different options, it is necessary to define a representative workload, such as the time to perform a compilation or to transfer a file. Because of memory caching, the results can vary between runs of the same test: the disk contents may already be in memory and therefore be served more quickly.

Tuned-Adm
Tuned-adm is a Red Hat tool for setting kernel parameters, buffer sizes and disk I/O schedulers according to the workload. Red Hat have identified some common profiles and determined appropriate settings for each, so there is no need to go into the details of every kernel parameter. By default, a VM created from the standard images is set up with the virtual-guest profile; the profile may need to be set explicitly if user-defined images are used. virtual-guest is a good initial starting point for benchmarking the workload, but for applications such as web servers, alternative profiles may be better suited. tuned-adm list shows the set of profiles to investigate.
tuned-adm profile default
RAM disks

Some I/O patterns are very intense on a small volume of data that does not need to be permanently stored. These areas are often /tmp or /var/tmp, but applications generally allow another location to be configured via the TMPDIR variable or a configuration parameter. Depending on the size of the temporary data, a RAM disk can be used to mount a section of RAM as a file system. This gives excellent performance at the expense of eating into the RAM available to the application. Selecting a larger flavor of VM may allow a RAM disk to be allocated for this purpose. To mount a RAM file system, use the mount command.
mkdir /tmp/ram
mount -t tmpfs -o size=4G tmpfs /tmp/ram/
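Applications that honour the TMPDIR convention can then be pointed at the RAM disk, for example (a sketch assuming the mount above; the mount itself requires root):

```shell
# Direct applications that respect TMPDIR at the RAM disk
export TMPDIR=/tmp/ram

# Confirm the tmpfs mount and its size
df -h /tmp/ram
```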
Care should be taken not to swap memory. This is an expensive operation and would lead to a reduction in overall performance.
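Swap activity can be monitored with the standard procps tools, for example:

```shell
# The "si"/"so" columns show swap-in/swap-out in KB/s;
# they should stay at 0 while the workload runs
vmstat 1 3

# Summary of memory and swap usage in MB
free -m
```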
AFS client configuration
AFS, by default, caches blocks from the remote server on a local disk partition. This creates additional I/O load, which can be avoided by using a (smaller) memory cache instead.
In the /etc/sysconfig/afsd file, memory caching can be enabled using
# OpenAFS 1.6.x
AFSD_ARGS="-afsdb -daemons 10 -fakestat -memcache -nosettime -volumes 1024"
Network offloading

Linux kernels, by default, offload some network processing to the network card. In the case of a VM, this does not improve performance; the effects are most visible on high performance networks.
To disable this, run
ethtool -K eth0 gro off tso off gso off
The current settings can be checked with

ethtool -k eth0
This can be made persistent by setting the ETHTOOL_OPTS value in the network scripts, such as /etc/sysconfig/network-scripts/ifcfg-eth0.
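A minimal sketch of such a setting (assuming the interface is eth0; the flags should match those disabled above):

```shell
# Fragment for /etc/sysconfig/network-scripts/ifcfg-eth0
# Re-applies the offload settings each time the interface comes up
ETHTOOL_OPTS="-K eth0 gro off tso off gso off"
```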
IO caching with ZFS
The combination of an ephemeral drive on SSDs and a volume can be used to leverage ZFS caching features for read (l2arc) and write (ZIL). The setup would be like this:
Prepare the system
- Resize the LVM partition to 20GB
- Disable Puppet
- Add zfs-kmod repo
[zfs-kmod]
name=ZFS on Linux for EL $releasever (KMOD)
baseurl=http://linuxsoft.cern.ch/mirror/archive.zfsonlinux.org/epel/6/kmod/$basearch/
enabled=1
gpgcheck=0
priority=9
- Install ZFS packages and load the module
yum install zfs
modprobe zfs
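A quick check that the module loaded and the userland tools respond (assuming the packages installed cleanly):

```shell
# The zfs kernel module should appear in the loaded module list
lsmod | grep zfs

# Before any pool is created, zpool should report no pools
zpool status
```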
- Create the data pool from the attached Cinder volume
zpool create -o ashift=12 -f zfspool01 /dev/disk/by-id/virtio-106b905d-b1aa-4bf8-9
- Set sensible defaults
zfs set mountpoint=none zfspool01
zfs set compression=lz4 zfspool01
- Create partition for ZIL (4GB)
- Create partition for l2arc (20GB)
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  420MB   419MB   primary  ext4         boot
 2      420MB   21.9GB  21.5GB  primary               lvm
 3      21.9GB  26.2GB  4295MB  primary
 4      26.2GB  47.7GB  21.5GB  primary
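The two partitions can be created with parted, for example (a sketch assuming the boot disk is /dev/vda and the layout shown above; adjust the offsets to your disk):

```shell
# 4GB partition for the ZIL
parted /dev/vda mkpart primary 21.9GB 26.2GB
# 20GB partition for the L2ARC
parted /dev/vda mkpart primary 26.2GB 47.7GB
# Ask the kernel to re-read the partition table
partprobe /dev/vda
```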
- Attach the 3rd partition as ZIL
zpool add zfspool01 log /dev/vda3
- Attach the 4th partition as L2ARC
zpool add zfspool01 cache /dev/vda4
- Check the layout of the ZFS pool with

zpool status zfspool01
- Create the file system
zfs create -o refquota=500G zfspool01/data1
- Potential additional settings
zfs set recordsize=4k zfspool01/data1
zfs set sync=always zfspool01/data1  # this may cost performance!
- Set the mountpoint
zfs set mountpoint=/data zfspool01/data1
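To verify the file system is mounted where expected (pool and dataset names as above):

```shell
# Show the dataset, its mountpoint and quota
zfs list -o name,mountpoint,used,refquota zfspool01/data1

# The mountpoint should appear as a mounted file system
df -h /data
```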
Save the settings
Once a good combination of settings is found, the configuration of the machine (or the associated configuration management system) should be set permanently so that it persists across reboots. A tool such as Puppet can ensure this is done consistently over a number of virtual machines.