Configure failover on a Compute Instance
In cloud computing, failover is the concept of rerouting traffic to a backup system should the original system become inaccessible. Compute Instances support failover through the IP Sharing feature. This allows two Compute Instances to share a single IP address, one serving as the primary and one serving as the secondary. If the primary Compute Instance becomes unavailable, the shared IP address is seamlessly routed to the secondary Compute Instance (failover). Once the primary Compute Instance is back online, the IP address route is restored (failback).
Why should I implement failover?
When hosting web-based services, the total uptime and availability of those services should be an important consideration. There’s always a possibility that your Compute Instance may become inaccessible, perhaps due to a spike in traffic, your own internal configuration issues, a natural disaster, or planned (or unplanned) maintenance. When this happens, any websites or services hosted on that Compute Instance would also stop working. Failover provides a mechanism for protecting your services against a single point of failure.
The term high availability describes web application architectures that eliminate single points of failure, offering redundancy, monitoring, and failover to minimize downtime for your users. Adding a load balancing solution to your application’s infrastructure is commonly a key component of high availability. Managed solutions, like NodeBalancers, combine load balancing with built-in IP address failover. However, self-hosted solutions like NGINX or HAProxy do not include built-in IP failover. Should the system running the load balancing software experience downtime, the entire application goes down. To prevent this, you need an additional server running your load balancing software and a mechanism to fail over the IP address. On our cloud platform, this is accomplished through the IP Sharing feature and some additional software configuration.
For many production applications, you may want to consider a load balancing tool that goes beyond basic failover. NodeBalancers combine load balancing with built-in failover. If you are using self-hosted load balancing software, such as NGINX or HAProxy, on your own Compute Instances, you must use the IP Sharing feature to provide failover for their IP addresses.
IP Sharing availability
Failover is configured by first enabling IP Sharing and then configuring software on both the primary and secondary Compute Instances. IP Sharing availability varies by region. Review the table below to learn which core compute regions support IP Sharing and how it can be implemented. To learn about IP Sharing in distributed compute regions, see IP Sharing and failover in Distributed Compute Regions.
Data center | IP Sharing support | Failover method | Software | ID |
---|---|---|---|---|
Amsterdam (Netherlands) | Supported | BGP-based | lelastic / FRR | 22 |
Atlanta, GA (USA) | Supported | BGP-based | lelastic / FRR | 4 |
Chennai (India) | Supported | BGP-based | lelastic / FRR | 25 |
Chicago, IL (USA) | Supported | BGP-based | lelastic / FRR | 18 |
Dallas, TX (USA) | Supported | BGP-based | lelastic / FRR | 2 |
Frankfurt (Germany) | Supported | BGP-based | lelastic / FRR | 10 |
Fremont, CA (USA) | Undergoing network upgrades | - | - | 3 |
Jakarta (Indonesia) | Supported | BGP-based | lelastic / FRR | 29 |
London (United Kingdom) | Supported | BGP-based | lelastic / FRR | 7 |
London 2 (United Kingdom) | Supported | BGP-based | lelastic / FRR | 44 |
Los Angeles, CA (USA) | Supported | BGP-based | lelastic / FRR | 30 |
Madrid (Spain) | Supported | BGP-based | lelastic / FRR | 24 |
Melbourne (Australia) | Supported | BGP-based | lelastic / FRR | 45 |
Miami, FL (USA) | Supported | BGP-based | lelastic / FRR | 28 |
Milan (Italy) | Supported | BGP-based | lelastic / FRR | 27 |
Mumbai (India) | Supported | BGP-based | lelastic / FRR | 14 |
Mumbai 2 (India) | Supported | BGP-based | lelastic / FRR | 46 |
Newark, NJ (USA) | Supported | BGP-based | lelastic / FRR | 6 |
Osaka (Japan) | Supported | BGP-based | lelastic / FRR | 26 |
Paris (France) | Supported | BGP-based | lelastic / FRR | 19 |
São Paulo (Brazil) | Supported | BGP-based | lelastic / FRR | 21 |
Seattle, WA (USA) | Supported | BGP-based | lelastic / FRR | 20 |
Singapore | Supported | BGP-based | lelastic / FRR | 9 |
Singapore 2 | Supported | BGP-based | lelastic / FRR | 48 |
Stockholm (Sweden) | Supported | BGP-based | lelastic / FRR | 23 |
Sydney (Australia) | Supported | BGP-based | lelastic / FRR | 16 |
Tokyo (Japan) | Supported | BGP-based | lelastic / FRR | 11 |
Toronto (Canada) | Supported | BGP-based | lelastic / FRR | 15 |
Washington, DC (USA) | Supported | BGP-based | lelastic / FRR | 17 |
- If a data center is marked as undergoing network upgrades, customers may encounter issues enabling IP Sharing and configuring failover. For Compute Instances that already have IP Sharing enabled, this feature should still function as intended. Once the network upgrades are completed, IP Sharing will be supported through the BGP method. Review documentation on our planned network infrastructure upgrades to learn more about these changes.
- IP failover for VLAN IP addresses is supported within every data center where VLANs are available. It does not depend on the IP Sharing feature; instead, it relies on ARP-based failover software, such as keepalived.
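As a rough illustration of that ARP-based approach, a minimal keepalived configuration for failing over a VLAN IP address might look like the sketch below. The interface name, router ID, priority, and VLAN address are placeholders for illustration only and are not part of the IP Sharing workflow described in the rest of this guide; adjust them to match your own VLAN setup, and use state BACKUP with a lower priority on the secondary instance.

```
# /etc/keepalived/keepalived.conf on the primary instance (illustrative sketch)
vrrp_instance vlan_failover {
    state MASTER
    interface eth1              # placeholder: the interface attached to your VLAN
    virtual_router_id 10        # must match on both instances
    priority 150                # higher value wins the election
    advert_int 1
    virtual_ipaddress {
        10.0.0.10/24            # placeholder: the VLAN IP address to fail over
    }
}
```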
IP address failover methods
- ARP-based (legacy method): Supports IPv4. This method is no longer supported.
- BGP-based: Supports IPv4 (public and private) and IPv6 routed ranges (/64 and /56). This is currently being rolled out across our fleet in conjunction with our planned network infrastructure upgrades. Since it is implemented using BGP routing, customers can configure it on their Compute Instances using lelastic (our tool) or software like FRR, BIRD, or GoBGP.
While keepalived is not used directly for failover, you can still make use of `vrrp_scripts` for health checks. You might do so if you wish to retain some of your existing keepalived functionality when migrating to a BGP-based failover method.
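For example, a health check that was previously handled by keepalived could be kept as a `vrrp_script` block similar to the sketch below. This is only an illustration under assumptions: the script path, names, interface, and the idea of reacting by stopping or starting the lelastic service are placeholders and suggestions, not part of this guide, and the details may need adjusting for your keepalived version.

```
# Illustrative only: keep an existing keepalived health check while BGP
# (lelastic or FRR) handles the actual IP failover.
vrrp_script check_web {
    script "/usr/bin/pgrep nginx"    # exits 0 while the web server is running
    interval 5                       # run the check every 5 seconds
    fall 2                           # two consecutive failures -> FAULT state
    rise 2                           # two consecutive successes -> recover
}

vrrp_instance health_watch {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    track_script {
        check_web
    }
    # Example reaction (an assumption, not required by this guide): stop
    # lelastic when the check fails so BGP failover moves the shared IP,
    # and start it again once the check recovers.
    notify_fault  "/usr/bin/systemctl stop lelastic"
    notify_master "/usr/bin/systemctl start lelastic"
}
```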
Configure failover
The instructions within this guide enable you to configure failover using IP Sharing and the lelastic tool we provide, which is based on GoBGP and automates much of the configuration. While lelastic supports many basic failover implementations, you may want to consider using FRR or another BGP client if your implementation is more advanced. See Configure IP failover over BGP using FRR.
- If you've included your Compute Instances in a placement group, the group needs to use Anti-affinity as its Affinity Type, which spreads the instances out within a data center. The opposite Affinity Type, Affinity, physically places Compute Instances close together, sometimes on the same host, which defeats the purpose of failover.
To configure failover, complete each section that follows. Currently, only a two-instance failover solution is supported: one Compute Instance serves as the primary and the other as the secondary. Configuring more than two Compute Instances or setting both Compute Instances to the same role is not supported.
1. Create and share the Shared IP address
- Log in to Cloud Manager.
- Determine which two Compute Instances are to be used within your failover setup. They both must be located in the same data center. If you need to, create those Compute Instances now and allow them to fully boot up.

  To support the BGP method of IP Sharing and failover, your Compute Instance must be assigned an IPv6 address. This is not an issue for most Compute Instances, as an IPv6 address is assigned during deployment. If your Compute Instance was created before IPv6 addresses were automatically assigned, and you would like to enable IP Sharing within a data center that uses BGP-based failover, contact Support.
- Disable Network Helper on both Compute Instances. For instructions, see the Network Helper guide.
- Of the IP addresses assigned to your Compute Instances, determine which IP address you wish to use as the shared IP. You may want to add an additional IPv4 address or IPv6 range (/64 or /56) to one of the Compute Instances, as this avoids temporary connectivity loss to applications that may be using your existing IP addresses. See Managing IP addresses for instructions. Each additional IPv4 address costs $2 per month.
- On the Compute Instance that is not assigned the IP address you selected in the previous step, add that IPv4 address or IPv6 range as a Shared IP using the IP Sharing feature. See Managing IP addresses for instructions on configuring IP Sharing.

  When IP Sharing is enabled for an IP address, all connectivity to that IP address is immediately lost until it is configured in lelastic, FRR, or another BGP routing tool. This is not an issue when adding a new IP address, but should be considered if you are enabling IP Sharing on an existing IP address that is actively being used.
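If you manage infrastructure with scripts rather than Cloud Manager, this step can also be performed against the Linode API. The sketch below is only an outline built on assumptions: the endpoint path, payload fields, and required token scopes should be confirmed against the current API reference, and the token, IP address, and Linode ID shown are placeholders.

```
# Hypothetical sketch: enable IP Sharing via the Linode API. Verify the
# endpoint and payload against the current API documentation before use.
TOKEN="your-api-token"        # placeholder personal access token
SHARED_IP="192.0.2.1"         # placeholder: the IP address to share
LINODE_ID=12345               # placeholder: the Compute Instance receiving the shared IP

curl -X POST \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d "{\"ips\": [\"$SHARED_IP\"], \"linode_id\": $LINODE_ID}" \
     https://api.linode.com/v4/networking/ips/share
```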
2. Add the shared IP to the networking configuration
Adjust the network configuration file on each Compute Instance by adding the shared IP address and restarting the networking service. Distribution-specific instructions are provided below, followed by a short verification sketch.
- Add the shared IP address to the system's networking configuration file. Within the instructions for your distribution below, open the designated file with a text editor (such as nano or vim) and add the provided lines to the end of that file. When doing so, make the following replacements:

  - [shared-ip]: The IPv4 address you shared or an address from the IPv6 range that you shared. You can choose any address from the IPv6 range. For example, within the range 2001:db8:e001:1b8c::/64, the address `2001:db8:e001:1b8c::1` can be used.
  - [prefix]: For an IPv4 address, use `32`. For an IPv6 address, use either `56` or `64` depending on the size of the range you are sharing.

  Review the configuration file and verify that the shared IP address does not already appear. If it does, delete the associated lines before continuing.
- Ubuntu 18.04 LTS and newer: Using netplan. The entire configuration file is shown below, though you only need to copy the `lo:` directive.

  ```
  network:
    version: 2
    renderer: networkd
    ethernets:
      eth0:
        dhcp4: yes
      lo:
        match:
          name: lo
        addresses:
          - [shared-ip]/[prefix]
  ```

  To apply the changes, reboot the Compute Instance or run:

  ```
  sudo netplan apply
  ```
- Debian and Ubuntu 16.04 (and older): Using ifupdown. Replace [protocol] with `inet` for IPv4 or `inet6` for IPv6.

  ```
  ...
  # Add Shared IP Address
  iface lo [protocol] static
      address [shared-ip]/[prefix]
  ```

  To apply the changes, reboot the Compute Instance or run:

  ```
  sudo ifdown lo && sudo ip addr flush lo && sudo ifup lo
  ```

  If you receive the following output, you can safely ignore it: RTNETLINK answers: Cannot assign requested address.
- CentOS/RHEL: Using NetworkManager. Since NetworkManager does not support managing the loopback interface, you need to first add a dummy interface named shared (or any other name that you wish). Instead of editing the file directly, the nmcli tool is used.

  ```
  nmcli con add type dummy ifname shared
  ```

  Next, add your Shared IP address (or addresses) and bring up the new interface. Run the commands below, replacing [protocol] with `ipv4` for IPv4 or `ipv6` for IPv6 (in addition to replacing [shared-ip] and [prefix]):

  ```
  nmcli con mod dummy-shared [protocol].method manual [protocol].addresses [shared-ip]/[prefix]
  nmcli con up dummy-shared
  ```

  Since the loopback interface is not used, you must also add the `-allifs` option to the lelastic command (discussed in a separate section below).
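Whichever distribution you are using, you can confirm that the shared IP address is now present on the expected interface before moving on. This is a quick sanity check with standard iproute2 tooling; the interface names match those used in the steps above.

```
# Ubuntu/Debian: the shared IP should appear on the loopback interface.
ip addr show lo

# CentOS/RHEL: the shared IP should appear on the dummy interface added above.
ip addr show shared
```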
3. Install and configure lelastic
Next, configure the failover software on each Compute Instance. For this, the lelastic utility is used. For more control or for advanced use cases, follow the instructions within the Configuring IP failover over BGP using FRR guide instead of using lelastic.
- Install lelastic by downloading the latest release from the GitHub repository, extracting the contents of the archived file, and moving the lelastic executable to a folder within your PATH. This same process can be used to update lelastic, making sure to restart the lelastic service (detailed in a later step) to complete the upgrade. Before installing or updating lelastic, review the releases page and update the version variable with the most recent version number.

  ```
  version=v0.0.6
  curl -LO https://github.com/linode/lelastic/releases/download/$version/lelastic.gz
  gunzip lelastic.gz
  chmod 755 lelastic
  sudo mv lelastic /usr/local/bin/
  ```
  CentOS/RHEL: If running a distribution with SELinux enabled (such as most CentOS/RHEL distributions), you must also set the SELinux type of the file to `bin_t`.

  ```
  sudo chcon -t bin_t /usr/local/bin/lelastic
  ```
- Next, prepare the command to configure BGP routing through lelastic. Replace [id] with the ID corresponding to your data center in the table above and [role] with either `primary` or `secondary`. You do not need to run this command now, as it is configured as a service in the following steps (a filled-in example follows the list of options below).

  ```
  lelastic -dcid [id] -[role] &
  ```
  Additional options:

  - `-send56`: Advertises an IPv6 address as a /56 subnet (defaults to /64). This is needed when using an IP address from an IPv6 /56 routed range.
  - `-allifs`: Looks for the shared IP address on all interfaces, not just the loopback interface.

  CentOS/RHEL: Since the Shared IP address is configured on a dummy interface (not the loopback interface) for NetworkManager distributions (like CentOS/RHEL), you must add the `-allifs` option to the lelastic command.

  See Test failover to learn more about the expected behavior for each role.
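As an example, for a pair of Compute Instances in Newark, NJ (data center ID 6 in the table above), the prepared commands might look like the following. The only changes from the command above are the sample ID, the role, and the optional -allifs flag for NetworkManager-based distributions.

```
# Primary Compute Instance in Newark (data center ID 6):
lelastic -dcid 6 -primary &

# Secondary Compute Instance on CentOS/RHEL (shared IP on a dummy interface):
lelastic -dcid 6 -secondary -allifs &
```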
- Create and edit the service file using either nano or vim.

  ```
  sudo nano /etc/systemd/system/lelastic.service
  ```
- Paste in the following contents and then save and close the file. Replace $command with the lelastic command you prepared in a previous step.

  ```
  [Unit]
  Description=Lelastic
  After=network-online.target
  Wants=network-online.target

  [Service]
  Type=simple
  ExecStart=/usr/local/bin/$command
  ExecReload=/bin/kill -s HUP $MAINPID

  [Install]
  WantedBy=multi-user.target
  ```
- Apply the correct permissions to the service file.

  ```
  sudo chmod 644 /etc/systemd/system/lelastic.service
  ```
- Start and enable the lelastic service.

  ```
  sudo systemctl start lelastic
  sudo systemctl enable lelastic
  ```

  You can check the status of the service to make sure it's running (and to view any errors):

  ```
  sudo systemctl status lelastic
  ```

  If you need to, you can stop and disable the service to stop failover functionality on that particular Compute Instance.

  ```
  sudo systemctl stop lelastic
  sudo systemctl disable lelastic
  ```
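If the status output is not enough, you can also follow the service's log output while testing failover. This uses standard systemd journal tooling and assumes the service name defined above.

```
# Stream lelastic's log output; press Ctrl+C to stop.
sudo journalctl -u lelastic -f
```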
Test failover
Once configured, the shared IP address is routed to the primary Compute Instance. If that Compute Instance becomes inaccessible, the shared IP address is automatically routed to the secondary Compute Instance (failover). Once the primary Compute Instance is back online, the shared IP address is restored to that Compute Instance (failback).
You can test the failover functionality of the shared IP using the steps below.
- Using a machine other than the two Compute Instances within the failover configuration (such as your local machine), ping the shared IP address.

  ```
  ping [shared-ip]
  ```
  Review the output to verify that the ping is successful. The output should be similar to the following:

  ```
  64 bytes from 192.0.2.1: icmp_seq=3310 ttl=64 time=0.373 ms
  ```
  If you are sharing an IPv6 address, the machine from which you are running the `ping` command must have IPv6 connectivity. Not all ISPs have this functionality.

- Power off the primary Compute Instance or stop the lelastic service on that Compute Instance. Once the service has stopped or the Compute Instance has fully powered down, the shared IP address should be routed to the secondary Compute Instance.

  ```
  sudo systemctl stop lelastic
  ```
- Verify that the shared IP is still accessible by again running the ping command. If the ping is successful, failover is working as intended.
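For a clearer picture of the failover window, you can also leave the ping from the first step running in a separate terminal while you power off the primary or stop its lelastic service; a short gap in the sequence numbers marks the moment the shared IP moves to the secondary Compute Instance.

```
# Leave this running while powering off the primary or stopping lelastic on it.
# A jump in icmp_seq between replies indicates when failover occurred.
ping [shared-ip]
```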