Deployed NSX Load balancers are in a Missing status

Article ID: 317091


Updated On:

Products

VMware Aria Suite

Issue/Introduction

Symptoms:
  • NSX load balancers show a status of Missing under the Deployments tab
  • NSX load balancers are also missing from Infrastructure > Resources > Networks > Load Balancers
  • When opening the Deployments tab and hovering the cursor over the load balancer object, the following message is displayed:
    This resource no longer exists in the deployment.
  • An error message is displayed when clicking on IP Ranges
Note: Re-adding the Cloud Account does not work around the issue.


Environment

VMware vRealize Automation 8.5.x

Resolution

This issue is resolved in VMware vRealize Automation 8.6.

Workaround:

Prerequisites

Note: Patch downloads have been moved to Broadcom Support. The patch for this issue is named vRA-8.5.0-HotFix-2781728.

  1. Simultaneously take new snapshots, without memory, of all appliance nodes before continuing.
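The snapshots can be taken in the vSphere Client, or scripted so they complete as close together as possible. The following is a minimal sketch using govc, not part of the official procedure: the vCenter address, credentials, and appliance VM names are placeholders for your environment, and -m=false excludes memory state from each snapshot.

  # Hypothetical example: memory-less snapshots of each vRA appliance VM with govc.
  # The vCenter address, credentials, and VM names below are placeholders.
  export GOVC_URL="vcenter.example.com"
  export GOVC_USERNAME="administrator@vsphere.local"
  export GOVC_PASSWORD="changeme"

  for vm in vra-node1 vra-node2 vra-node3; do
    govc snapshot.create -vm "${vm}" -m=false "pre-HotFix-2781728"
  done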

Steps to apply patch

  1. Replace the provisioning-service container on all vRA nodes
    1. Upload the patch prov-service-hf2781728.tar to all vRA nodes
      scp prov-service-hf2781728.tar root@vra8.vmware.com:~/
    2. SSH into each vRA node and do the following
      docker image load -i prov-service-hf2781728.tar
      docker tag provisioning-service:patch-id provisioning-service_private:latest
  2. Restart provisioning-service pods with the new image
    1. Get provisioning-service pod ids
      kubectl -n prelude get pods | grep provisioning-service
      
    2. Restart each pod
      kubectl -n prelude delete pod provisioning-service-app-pod-id
  3. Persist the changes so they can survive restarts and redeployments with deploy.sh 
    1. On each node, run
      /opt/scripts/backup_docker_images.sh
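
For clusters with multiple nodes, the steps above can be scripted from a host with SSH access to the appliances. The sketch below is a minimal example, not part of the official procedure: the node names are placeholders, and patch-id must be replaced with the tag that docker images reports for the newly loaded image.

  #!/usr/bin/env bash
  # Minimal sketch: apply the provisioning-service hotfix across all vRA nodes.
  # NODES and the "patch-id" tag are placeholders; adjust for your environment.
  set -euo pipefail

  NODES=("vra-node1.example.com" "vra-node2.example.com" "vra-node3.example.com")
  PATCH_TAR="prov-service-hf2781728.tar"

  # Step 1: upload and load the patched image on every node, then retag it.
  for node in "${NODES[@]}"; do
    scp "${PATCH_TAR}" "root@${node}:~/"
    ssh "root@${node}" "docker image load -i ~/${PATCH_TAR} && docker tag provisioning-service:patch-id provisioning-service_private:latest"
  done

  # Step 2: delete the provisioning-service pods so Kubernetes recreates them
  # with the new image (run once; the cluster reschedules the pods).
  ssh "root@${NODES[0]}" "kubectl -n prelude get pods -o name | grep provisioning-service | xargs -r kubectl -n prelude delete"

  # Step 3: persist the new image so it survives restarts and deploy.sh runs.
  for node in "${NODES[@]}"; do
    ssh "root@${node}" "/opt/scripts/backup_docker_images.sh"
  done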



Additional Information

Steps to rollback the patch

  1. SSH into each vRA node and do the following
    1. View the provisioning-service docker images
      docker images | grep provisioning-service
      Sample output:
      provisioning-service   patch-id   0862920c1caa   4 days ago   587MB
      provisioning-service_private   latest   0862920c1caa   4 days ago   587MB
      provisioning-service   4b0a304   ab7c981f989a   9 months ago   586MB
  2. Tag the original image, provisioning-service:4b0a304 in this case, as the latest
    docker tag provisioning-service:4b0a304 provisioning-service_private:latest
  3. Restart provisioning-service pods to use the original image  
    1. Get provisioning-service pod ids
      kubectl -n prelude get pods | grep provisioning-service
    2. Restart each pod
      kubectl -n prelude delete pod provisioning-service-app-pod-id
  4. Persist the changes so they can survive restarts and re-deployments with deploy.sh
    1. On each node, run
      /opt/scripts/backup_docker_images.sh
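
As with applying the patch, the rollback can be scripted. The sketch below is a minimal example under the same assumptions: the node names are placeholders, and 4b0a304 is the original tag from the sample output above, so confirm the actual tag on your nodes before running it.

  #!/usr/bin/env bash
  # Minimal sketch: roll the provisioning-service image back on all vRA nodes.
  # NODES and ORIGINAL_TAG are placeholders; check "docker images" for the real tag.
  set -euo pipefail

  NODES=("vra-node1.example.com" "vra-node2.example.com" "vra-node3.example.com")
  ORIGINAL_TAG="4b0a304"

  # Steps 1-2: retag the original image as provisioning-service_private:latest.
  for node in "${NODES[@]}"; do
    ssh "root@${node}" "docker tag provisioning-service:${ORIGINAL_TAG} provisioning-service_private:latest"
  done

  # Step 3: recreate the provisioning-service pods so they use the original image.
  ssh "root@${NODES[0]}" "kubectl -n prelude get pods -o name | grep provisioning-service | xargs -r kubectl -n prelude delete"

  # Step 4: persist the rollback so it survives restarts and deploy.sh runs.
  for node in "${NODES[@]}"; do
    ssh "root@${node}" "/opt/scripts/backup_docker_images.sh"
  done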