Aria Automation resource totals may remain with the original Cloud Zone after migration in vSphere and reconciliation

Article ID: 396325


Updated On:

Products

VCF Operations/Automation (formerly VMware Aria Suite)

Issue/Introduction

  • After VMs are moved from one cluster to another in vSphere, resources are not recalculated properly in Aria Automation.
  • A new cloud zone containing the new cluster was added, and the VMs were moved in the background by leveraging live migration.
  • Storage is not calculated at all and remains at 0 for the destination cloud zone.
  • The storage from the old cluster is also attached to the new cluster, so no storage migration took place.

Environment

  • VMware Aria Automation 8.18.x

Cause

The group placement link property is not correctly reassigned in some scenarios. This can be confirmed using the following internal REST API call:

curl -X GET --location 'https://<fqdn>/provisioning/uerp/resources/compute/<compute_uuid>' \
--header 'Authorization: Bearer <token>' 

The <compute_uuid> is the resourceId property of the machine, which can be found in the deployment UI.

 

The value of __groupResourcePlacementLink in this response is of this format:

/provisioning/resources/group-placements/<UUID1>-<UUID2> 

<UUID2> here is the UUID of the cloud zone, and it should match the destination (current) cloud zone.
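
As a quick check, the following is a minimal sketch (not part of this article) that fetches the compute resource and compares the trailing cloud zone UUID of __groupResourcePlacementLink against the expected destination cloud zone UUID. The FQDN, token, and UUID values are placeholders, and jq is assumed to be available on the system running the check.

# Minimal sketch: verify whether a machine's placement link points at the expected cloud zone.
# All values below are placeholders; jq is assumed to be installed.
FQDN="<fqdn>"
TOKEN="<token>"
COMPUTE_UUID="<compute_uuid>"
EXPECTED_ZONE_UUID="<destination_cloud_zone_uuid>"

# Fetch the compute resource and extract __groupResourcePlacementLink
LINK=$(curl -s -X GET "https://${FQDN}/provisioning/uerp/resources/compute/${COMPUTE_UUID}" \
  --header "Authorization: Bearer ${TOKEN}" | jq -r '.__groupResourcePlacementLink')

# The link has the format /provisioning/resources/group-placements/<UUID1>-<UUID2>;
# a UUID is 36 characters long, so the last 36 characters are <UUID2>.
ACTUAL_ZONE_UUID="${LINK: -36}"

if [ "${ACTUAL_ZONE_UUID}" = "${EXPECTED_ZONE_UUID}" ]; then
  echo "Placement link references the expected cloud zone."
else
  echo "Placement link still references another cloud zone: ${ACTUAL_ZONE_UUID}"
fi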

Resolution

This issue is resolved in Aria Automation 8.18.1-Patch3.

 

If there are still discrepancies in the storage totals:

  1. Open the impacted cloud zone in edit mode.
  2. Change the storage limit to any temporary value and save it.
  3. Reopen the same cloud zone in edit mode and revert the storage limit to its original value.

This will trigger a reconciliation process, and the storage limits should be recalculated within 24 hours.

 

Workaround

To fix any remaining VMs and disks that are still showing under the wrong cloud zone, please contact Support and reference this article.