Multiple Interfaces
This feature is currently available on request only.
If you think you need this, please open a service desk ticket to the Cloud team.
Requirements
Due to networking constraints at CERN, instances in this kind of setup need to be hosted on the same network service, with all IPs on a subset of related subnets.
The cloud team will provide a usable subnet for each use case, to be used below in place of MYSUBNET.
Overview
The setup below targets a single-active / multiple-standby deployment, meaning a single instance serves requests at any given time. The goal is to have a floating network interface that can be moved between VMs to provide failover.
The steps can be summarized as:
- Launch all instances as normal VMs, with one network interface
- Create one additional network interface, independent from the VMs
- Attach this network interface to one of the VMs, which will become the active one
The network interface can be moved around different instances, even if the VM where it is currently attached is no longer available.
Setup
Below is an example of such a deployment, step by step.
- Launch the required number of instances

  You probably want to split the machines across different physical hosts; check anti-affinity to achieve this.

```
openstack server create --image MYIMAGE --flavor MYFLAVOR --key-name MYKEYPAIR VM1
openstack server create --image MYIMAGE --flavor MYFLAVOR --key-name MYKEYPAIR VM2
```
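As a sketch of the anti-affinity hint mentioned above (the server group name and the GROUP_UUID placeholder are illustrative, not from the original instructions):

```shell
# Create a server group with the anti-affinity policy (example name)
openstack server group create --policy anti-affinity mygroup

# Launch the instances into that group so the scheduler places them
# on different physical hosts (replace GROUP_UUID with the ID
# returned by the command above)
openstack server create --image MYIMAGE --flavor MYFLAVOR --key-name MYKEYPAIR \
  --hint group=GROUP_UUID VM1
openstack server create --image MYIMAGE --flavor MYFLAVOR --key-name MYKEYPAIR \
  --hint group=GROUP_UUID VM2
```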
- Create the independent port (network interface)
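A sketch of the port creation command, assuming the MYSUBNET name provided by the Cloud team (the MYNETWORK placeholder is an assumption — check with the Cloud team for the network name to use):

```shell
# Create a standalone port with an IP on the provided subnet
# (MYNETWORK / MYSUBNET / MYPORT are placeholders)
openstack port create --network MYNETWORK --fixed-ip subnet=MYSUBNET MYPORT
```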
You can name this port as you wish (MYPORT above) and, if required, pass an additional --mac-address parameter with a fixed MAC address. If possible/available, this name is used as the DNS name in LanDB.
- Attach the port to the instance that should be active
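Attaching the port can be done with the standard OpenStack CLI, using the names from the steps above:

```shell
# Attach the independent port to VM1, making it the active instance
openstack server add port VM1 MYPORT
```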
- Confirm that the interface was added to your instance in LanDB.
- Log in to the VM and check the new ethernet interface is available

  You should see both eth0 and eth1, and can go ahead with the interface configuration as desired.

```
$ ssh root@VM1
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:31:ce:ba brd ff:ff:ff:ff:ff:ff
    inet 137.138.xx.yy/24 brd 137.138.xx.yy scope global noprefixroute dynamic eth0
       valid_lft 603899sec preferred_lft 603899sec
    inet6 fe80::f816:3eff:fe31:ceba/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:ac:a6:ef brd ff:ff:ff:ff:ff:ff
```
Configuring the interface
You need to change the following sysctl setting, and make sure it's persisted across reboots (if you use Puppet, there's a module for that):
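The exact value is not reproduced here; a common requirement for multi-homed hosts like this one is loosening reverse-path filtering, which would look like the sketch below (the rp_filter choice is an assumption — confirm the exact setting with the Cloud team):

```shell
# Assumption: switch reverse-path filtering to "loose" mode so replies
# can leave via a different interface than the one they arrived on
sysctl -w net.ipv4.conf.all.rp_filter=2

# Persist the setting across reboots (file name is illustrative)
cat > /etc/sysctl.d/90-rp-filter.conf <<'EOF'
net.ipv4.conf.all.rp_filter = 2
EOF
```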
You can then trigger a DHCP request on the second interface and check the result:

```
# dhclient eth1
# ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:6a:49:96 brd ff:ff:ff:ff:ff:ff
    inet 137.138.31.21/24 brd 137.138.31.255 scope global noprefixroute dynamic eth0
       valid_lft 603150sec preferred_lft 603150sec
    inet6 fe80::f816:3eff:fe6a:4996/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:ac:a6:ef brd ff:ff:ff:ff:ff:ff
    inet 137.138.31.44/24 brd 137.138.31.255 scope global dynamic eth1
       valid_lft 604796sec preferred_lft 604796sec
```
One option is to add the configuration for eth1 on all VM instances, even though it is only attached and active on one at a time. This simplifies the switchover: after moving the port, you only need to bring eth1 up.
Here's an example:
```
# vim /etc/sysconfig/network-scripts/ifcfg-eth1
# Generated by dracut initrd
NAME="eth1"
ONBOOT="yes"
NETBOOT="yes"
IPV6INIT="no"
BOOTPROTO="dhcp"
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6_AUTOCONF="no"
IPV6_DEFROUTE="no"
IPV6_FAILURE_FATAL="no"
DHCPV6C=no
PERSISTENT_DHCLIENT=1
NOZEROCONF=1
```
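With this file in place on every VM, activating a standby after moving the port reduces to bringing the interface up (a sketch, assuming the network-scripts tooling shown above):

```shell
# On the VM that just received the port:
ifup eth1            # bring eth1 up using the ifcfg-eth1 file above
ip addr show eth1    # verify an address was obtained via DHCP
```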
Moving the interface
A manual example of moving the network interface between VMs:
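Using the port and server names from the steps above, moving the port from VM1 to VM2 would look like this:

```shell
# Detach the floating port from the currently active VM...
openstack server remove port VM1 MYPORT

# ...and attach it to the standby, which becomes the new active instance
openstack server add port VM2 MYPORT

# Then bring the interface up on VM2 (e.g. with ifup eth1)
```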
Common Questions
How to automate failover between active and standby?
The required commands are explained above, but a system to trigger them automatically is out of scope for these instructions. Corosync and Pacemaker are popular options, and there's a Puppet module to help with such a setup.