AVI Load Balancer controller upgrade on OpenStack cloud fails at step "Unable to complete task migrate_config"

Article ID: 405416


Updated On:

Products

VMware Avi Load Balancer

Issue/Introduction

When upgrading the AVI Load Balancer controller, the upgrade takes a long time and eventually fails with "Unable to complete task migrate_config".

The migrate_config task starts on the controller once all other upgrade-related steps have completed.

 

Environment

AVI Load Balancer Version: 22.1.x & 30.x.x

Cloud: OpenStack

 

Cause

  • In OpenStack cloud environments, the AVI Load Balancer controller tries to reach the OpenStack metadata server (169.254.169.254) to obtain VM-related information.
  • If the metadata server (169.254.169.254) is not reachable from the controller, the migrate_config task fails, which causes the upgrade to fail.

Steps to Verify:

Login to Controller SSH (Bash) using admin credentials:

  • From the controller bash shell, check reachability to IP address 169.254.169.254:
admin@##-##-##-##:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         ##.##.##.##     0.0.0.0         UG    0      0        0 eth0
##.##.##.##     0.0.0.0         255.255.255.0   U     0      0        0 eth0
##.##.##.##     0.0.0.0         255.255.0.0     U     0      0        0 docker0
169.254.169.254 ##.##.##.##     255.255.255.255 UGH   0      0        0 eth0

admin@##-##-##-##:~$ ping 169.254.169.254
PING 169.254.169.254 (169.254.169.254) 56(84) bytes of data.
^C
--- 169.254.169.254 ping statistics ---
9 packets transmitted, 0 received, 100% packet loss, time 8181ms
  • If there are reachability issues, the migrate_config task will never complete.
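Beyond ping, the metadata service can also be probed over HTTP, since ICMP may be filtered even when the HTTP endpoint works. A minimal sketch, assuming curl is available on the controller and the standard OpenStack metadata path:

```shell
# Sketch: verify both the host route and HTTP reachability of the
# OpenStack metadata service from the controller bash shell.

# 1. Confirm a host route to 169.254.169.254 exists.
route -n | grep '^169\.254\.169\.254 ' \
  || echo "WARNING: no host route to the metadata server"

# 2. Probe the metadata API; --connect-timeout keeps the check from
#    hanging when the server is unreachable.
curl -s --connect-timeout 5 http://169.254.169.254/openstack/latest/meta_data.json >/dev/null \
  && echo "metadata server reachable over HTTP" \
  || echo "metadata server NOT reachable - resolve before upgrading"
```

If the second command prints the NOT reachable message, the upgrade should not be attempted until the OpenStack networking issue is fixed.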

Look for these logs:

Navigate to folder "/var/lib/avi/log" and check the log file "upgrade-coordinator.log".

Check whether you are observing "Exception: Unable to complete task migrate_config":

upgrade_events {
        task_name: "MigrateConfig"
        sub_events {
          ip {
            addr: "node1.controller.local"
            type: DNS
          }
          start_time: "xxxx-xx-xx xx:xx:xx"
          end_time: "xxxx-xx-xx xx:xx:xx"
          status: false
          message: "UC::[Thu Jul 10 10:57:59 2025]Error while running task:MigrateConfig\nUnable to complete task migrate_config on [\'node1.controller.local\'].\nTraceback (most recent call last):\n  File \"/opt/avi/python/lib/avi/upgrade/upgrade_tasks.py\", line 260, in start\n    self.run()\n  File \"/opt/avi/python/lib/avi/upgrade/upgrade_tasks.py\", line 475, in run\n    raise Exception(\"Unable to complete task %s on %s.\" % (self.task, list(self.err_nodes.keys())))\nException: Unable to complete task migrate_config on [\'node1.controller.local\'].\n.::UC\n"
          duration: 3600
        }
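To locate the failure quickly, the coordinator log can be searched for the task name. A sketch, using the log path given above:

```shell
# Sketch: find migrate_config failures in the upgrade coordinator log.
# -n prints the line number so the surrounding upgrade_events block
# can be inspected in context.
grep -n "Unable to complete task migrate_config" \
  /var/lib/avi/log/upgrade-coordinator.log
```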

Navigate to folder "/var/lib/avi/log" and check the log under "portal-webapp".

If the controller is unable to reach the metadata server, the following logs appear (the controller is unable to fetch VM-related information):

[2025-07-10 11:30:11,340] INFO [ovfenv_parser.openstack_config_drive_parse:161] label config not found in /dev/disk/by-label
[2025-07-10 11:30:16,370] INFO [ovfenv_parser.openstack_instance_meta_parse:176] Unable to obtain meta_data from infra
[2025-07-10 11:30:16,370] INFO [ovfenv_parser.acropolis_config_drive_parse:203] label config not found in /dev/disk/by-label
[2025-07-10 11:30:16,370] INFO [ovfenv_parser.ovfenv_parse:630] Hypervisor type: KVM
[2025-07-10 11:30:16,402] INFO [hypervisor_utils.get_hypervisor_type:45] Hypervisor type: KVM
[2025-07-10 11:30:16,419] INFO [ovfenv_parser.ovfenv_parse:484] Hypervisor Manufacturer Product type: OpenStack Compute
[2025-07-10 11:30:16,433] INFO [ovfenv_parser.run_cmd:342] cmd: ['mkdir', '-p', '/mnt/cdrom'], out: , err:
[2025-07-10 11:30:16,447] INFO [ovfenv_parser.run_cmd:342] cmd: ['mount', '/dev/cdrom', '/mnt/cdrom'], out: , err:mount: /mnt/cdrom: special device /dev/cdrom does not exist.

[2025-07-10 11:30:16,463] INFO [ovfenv_parser.run_cmd:342] cmd: ['ls', '/mnt/cdrom/se_config'], out: , err:ls: cannot access '/mnt/cdrom/se_config': No such file or directory
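The metadata-fetch failure can likewise be grepped out of the portal-webapp log. A sketch; the exact file name under /var/lib/avi/log may differ by version, so a wildcard is used:

```shell
# Sketch: surface metadata-fetch failures logged by ovfenv_parser.
# The portal-webapp* pattern is an assumption about the file name.
grep -n "Unable to obtain meta_data" /var/lib/avi/log/portal-webapp*
```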




Resolution

  • Ensure the metadata server (169.254.169.254) is reachable from the AVI Load Balancer controller VM before starting the upgrade.
  • If 169.254.169.254 is not reachable, involve your OpenStack team to resolve the issue before attempting another upgrade.