Upgrade Avi Load Balancer LSC Deployment from OEL 7.9 to OEL 8.10

Article ID: 387371


Products

VMware Avi Load Balancer

Issue/Introduction

The purpose of this KB is to outline the steps for upgrading an Avi Load Balancer Linux Server Cloud (LSC) deployment, including the Controllers and Service Engines (SEs), from host OS Oracle Enterprise Linux 7.9 (OEL 7.9) to Oracle Enterprise Linux 8.10 (OEL 8.10).

 

Prerequisites:

  1. An Avi LSC deployment (Controllers and Service Engines) on bare-metal hosts running OEL 7.9 as the host OS, with an Avi-supported kernel for the Service Engines.
  2. The current Avi version must support OEL 8.10 as the host OS. OEL 8.10 support is available in Avi versions 22.1.6-2p8 and 30.2.3. For more details on supported versions, refer to the documentation below:

    https://techdocs.broadcom.com/us/en/vmware-security-load-balancing/avi-load-balancer/avi-load-balancer/30-2/vmware-avi-load-balancer-installation-guide/preparing-for-installation/system-requirements.html

Environment

Linux Server Cloud

 

Resolution

 

Upgrade Steps

 

1. Initiate a maintenance window for the upgrades (Avi, host OS, and kernel).
2. Take a backup of the existing Avi configuration from the Avi Controller shell and store it in a persistent location.
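
As an illustrative sketch, the configuration can be exported from the Avi Controller shell and then copied off the box. The file path and remote destination below are examples only, and the exact export options may vary by Avi version:

```shell
# From the Avi Controller shell (example path; verify the syntax for your Avi version):
export configuration file /tmp/avi-config-backup.json full_system_configuration

# From the host shell, copy the backup to persistent storage outside the cluster, e.g.:
# scp /tmp/avi-config-backup.json backup-user@backup-host:/backups/
```

Store the backup on a host outside the Controller cluster so that it survives the OS upgrade.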

3. Perform the underlying host OS upgrade on the Controllers first, followed by the Service Engines:
      a. Refer to the section "Steps to perform the OS upgrade for the Controller Cluster" below.
      b. Refer to the section "Steps to perform the OS upgrade for Service Engine" below.
4. Download and install the Avi-supported kernel on the bare-metal host as part of the host OS upgrade in step 3.
5. Make sure the Avi-supported kernel is set as the default kernel to be loaded as part of the host OS upgrade in step 3.

 

Kernel update example on an OEL 8.10 host:

 $ yum install kernel-4.18.0-553.16.1.el8_10
 $ grubby --default-kernel    # check the current default kernel
 $ ls -l /boot/vmlinuz-*      # list the installed kernels
 $ grubby --set-default /boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64    # set the newly installed kernel as the default
 $ grubby --default-kernel    # verify the default kernel has changed to 4.18.0-553.16.1.el8_10
 $ reboot
 $ uname -r                   # after the reboot, verify the running kernel is 4.18.0-553.16.1.el8_10

 

6. Check the health status of all the SEs, Controllers, virtual services, servers, pools, etc., and verify that traffic is flowing correctly.
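
One way to spot-check health is from the Avi Controller shell. The show commands below are part of the standard Avi CLI, though the exact output fields vary by version:

```shell
show cluster              # Controller cluster state (expect HA Active)
show serviceengine        # Service Engine health summary
show virtualservice       # virtual service operational state
show pool                 # pool and server health
```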

 

Steps to perform the OS upgrade for the Controller Cluster:

  1. Go to the leader node and remove the two follower nodes.
  2. On the two removed follower nodes, execute the below commands on the host:

systemctl disable avicontroller
systemctl stop avicontroller

 

3. Upgrade the OS from OEL 7.9 to OEL 8.10 on the removed follower nodes.

Make sure that after the host OS upgrade, podman version 4.9.4-rhel is installed on the host:

sudo dnf install -y podman

[root@localhost ~]# podman version
Client:       Podman Engine
Version:      4.9.4-rhel
API Version:  4.9.4-rhel
Go Version:   go1.22.9 (Red Hat 1.22.9-1.module+el8.10.0+90476+bb48cc15)
Built:        Wed Mar 26 06:37:45 2025
OS/Arch:      linux/amd64
[root@localhost ~]#

 

4. After upgrading the host OS, execute the below command to clean up the stale iptables rules:

sudo iptables -L && sudo iptables -F && sudo iptables -X && sudo iptables -P INPUT ACCEPT && sudo iptables -P FORWARD ACCEPT && sudo iptables -P OUTPUT ACCEPT

 

After executing the above command, ensure that the stale iptables rules and chains have been removed:

 

[root@localhost ~]# sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

 

5. Start the Avi Controller via the host shell:

systemctl enable avicontroller
systemctl start avicontroller

 

Note: 

All ports used by the Avi container instance must be allowed by the firewall rules on the host (check the required ports here: https://techdocs.broadcom.com/us/en/vmware-security-load-balancing/avi-load-balancer/avi-load-balancer/22-1/vmware-avi-load-balancer-installation-guide/preparing-for-installation/networking-considerations/ports-and-protocols.html).
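
On an OEL 8.10 host running firewalld, ports can be opened as sketched below. The ports shown are examples only; take the authoritative list from the ports-and-protocols documentation linked above:

```shell
# Example ports only: 443 (Controller UI/API) and 8443 (SE-to-Controller secure channel)
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # verify the ports are now allowed
```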


6. Verify that the Avi LSC Controller is running the correct version on the detached follower nodes.

7. Add the detached follower nodes back to the cluster and wait for the cluster to be operational with state "HA Active".
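
One way to watch for the cluster to return to the "HA Active" state is to poll the cluster runtime REST endpoint from any workstation; the Controller IP and password below are placeholders:

```shell
# Placeholders: replace <controller-ip> and <password> with real values.
curl -sk -u admin:<password> https://<controller-ip>/api/cluster/runtime \
  | python3 -m json.tool | grep -i state
```

Alternatively, the cluster state is visible in the Controller UI.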

8. To perform the OS upgrade on the remaining node, break the cluster again by removing two nodes (including the node on which the OS upgrade will be done).

Note: If the node still awaiting the OS upgrade is the leader node, bring that node down, bring it back up again, and wait for the cluster to become HA Active. This simply switches the leader node to a follower node, which is required because the leader node cannot be removed from the cluster.

To bring down the Controller node, execute the below commands on the host, wait for the cluster to come up (HA Compromised), and then bring the node back up again (HA Active):

 

systemctl disable avicontroller
systemctl stop avicontroller

Wait for the cluster to come up (HA Compromised)

systemctl enable avicontroller
systemctl start avicontroller

Wait for the cluster to come up (HA Active)

Once the cluster is up and HA Active, the follower nodes can be removed.

9. Upgrade the OS on the remaining node and make sure the Avi Controller is running the correct version (refer to steps 3 to 6).

10. Add the removed nodes back to the cluster and wait for the cluster to come up with the state "HA active".



Steps to perform the OS upgrade for Service Engine :

1. Pick one of the Service Engines, disable it, and wait for all virtual services (VSs) to migrate off that SE and for the disable operation to complete successfully. (Note: Make sure there is enough capacity on the other SE hosts to place the VSs migrated from the disabled SE.)
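
The disable can be done from the Controller UI. As a rough sketch assuming the standard Avi CLI object model, in which the Service Engine object carries an enable_state field, it may also be done from the Controller shell; verify the exact object and enum names for your Avi version before use:

```shell
# Sketch only: <se-name> is a placeholder, and the enable_state value
# should be verified against the Avi CLI reference for your version.
configure serviceengine <se-name>
enable_state se_state_disabled
save
```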

2. Go to the Linux Server Cloud UI (Infrastructure -> Clouds) and remove that SE host from the cloud configuration.

Clean up the SE host by executing the below commands:

rm -f /etc/systemd/system/avi*.service*
rm -f /usr/sbin/avi* 
rm -rf /opt/avi 
rm -rf /home/opt/avi

3. Upgrade the SE host OS from OEL 7.9 to OEL 8.10 and apply the Avi LSC supported kernel. Alternatively, a fresh installation of OEL 8.10 can be performed on the SE host before preparing it for addition to the LSC cloud.

For host preparation refer to https://techdocs.broadcom.com/us/en/vmware-security-load-balancing/avi-load-balancer/avi-load-balancer/30-2/vmware-avi-load-balancer-installation-guide/preparing-for-installation/system-requirements.html


Before adding the OEL 8.10 host to the LSC cloud configuration, make sure NetworkManager is enabled and active on the host:

 

[root@localhost ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2025-04-18 08:18:18 EDT; 1min 8s ago
     Docs: man:NetworkManager(8)
 Main PID: 123781 (NetworkManager)
    Tasks: 3 (limit: 100645)
   Memory: 7.3M
   CGroup: /system.slice/NetworkManager.service
           └─123781 /usr/sbin/NetworkManager --no-daemon

 

4. Add the host back to the Linux Server Cloud configuration, verify VS placement, and ensure traffic flow resumes.

5. Repeat the above steps for each of the remaining Service Engines in the deployment, one at a time.

Steps to perform the OEL 7.9 to OEL 8.10 OS upgrade:

The steps for the OEL upgrade are provided by the OS vendor. One method, using the Leapp utility, is documented at https://docs.oracle.com/en/learn/ol-linux-leapp/index.html#install-the-leapp-utility
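
As a sketch, a Leapp-based upgrade on an OEL 7.9 host generally follows the sequence below. Verify the exact package, repository, and flag names against the Oracle documentation linked above, and review the preupgrade report before continuing:

```shell
# Install the Leapp utility (repo/package names per Oracle's guide; verify before use)
sudo yum install -y leapp --enablerepo=ol7_leapp,ol7_latest

# Dry run: writes a report of upgrade blockers to /var/log/leapp/leapp-report.txt
sudo leapp preupgrade --oraclelinux

# Perform the upgrade, then reboot into the new OS
sudo leapp upgrade --oraclelinux
sudo reboot
```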