HCX Bulk Migration not progressing beyond 0%

Article ID: 319757


Updated On:

Products

VMware HCX

Issue/Introduction

HCX Bulk Migrations do not progress beyond 0%. The HCX UI shows no errors, and the migration workflows remain in a state where they can still be canceled.

This article helps identify some of the issues that can cause HCX vSphere Replication (VR) Bulk Migrations to stall in the initial stages and show 0% progress indefinitely.

Environment

HCX

Cause

Required communication flows between HCX and the local infrastructure (vCenter/ESXi/NSX) are unavailable, so the migration workflow cannot proceed. HCX continuously retries the connection without reporting a failure. Possible causes include:

  • ESXi firewall blocking ports required by HCX Bulk Migrations.
  • NSX Distributed Firewall (DFW) blocking ports required by HCX Bulk Migrations.
  • Local management traffic routed to an external default gateway and dropped due to MTU restrictions.
  • vSphere Replication not enabled on the expected VMkernel adapter.
  • vSphere Replication NFC communication going only through the Management Network of the IX appliance, even when a separate Network Profile is configured for VR.
  • The VMkernel adapter enabled for vSphere Replication becoming unresponsive.
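The VMkernel and MTU conditions above can be checked from an ESXi shell. A sketch, assuming vmk0 is the relevant VMkernel adapter and a standard 1500-byte MTU (adjust both to match the environment):

```shell
# On each ESXi host in the Service Cluster (vmk0 assumed to be the
# Management VMkernel adapter; adjust the interface name as needed).

# List the services tagged on the adapter; look for
# VSphereReplication / VSphereReplicationNFC in the output.
esxcli network ip interface tag get -i vmk0

# Test the path toward the default gateway with the don't-fragment
# bit set (-d); 1472 bytes of payload corresponds to a 1500-byte MTU
# after IP/ICMP headers. Drops here suggest an MTU restriction.
vmkping -d -s 1472 <default-gateway-IP>
```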

Resolution

  • HCX Bulk Migration requires TCP 31031 and 44046 connectivity from the ESXi Management VMkernel IP (or the VMkernel adapter configured for vSphere Replication) on every ESXi host in the Service Cluster, at both Source and Target, to the IX appliance.
    • From an ESXi host:
      nc -zv <IP Address of the IX vSphere replication> 31031
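Both required ports can be checked in one pass with a small loop, for example:

```shell
# From an ESXi host, verify both replication ports against the IX
# appliance (replace the placeholder with the IX VR interface IP).
for port in 31031 44046; do
  nc -zv <IP Address of the IX vSphere replication> "$port"
done
```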
  • The IX Management interface must have a path to any of the VMkernel adapters configured for vSphere Replication NFC (TCP 902). If the VMkernel adapter IP cannot be reached through the Default Gateway configured for the Management Network Profile, then a static route must be configured on the Compute Profile.
  • If no VMkernel adapter is enabled for VR NFC, the ESXi host defaults to the Management interface; depending on the local networking environment, it may be necessary to enable NFC on the ESXi host Management interface.
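If NFC must be enabled on the Management VMkernel adapter, it can be tagged from the ESXi shell. A sketch, assuming vmk0 is the Management adapter:

```shell
# Tag the Management VMkernel adapter (vmk0 assumed) for VR NFC traffic.
esxcli network ip interface tag add -i vmk0 -t VSphereReplicationNFC

# Confirm the tag was applied.
esxcli network ip interface tag get -i vmk0
```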
  • Verify firewall settings on all ESXi hosts in the Service Cluster and the DFW configuration in NSX to allow all required communication ports.
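The ESXi firewall state can be reviewed from the host shell, for example:

```shell
# List rulesets and confirm the replication-related ones are enabled.
esxcli network firewall ruleset list | grep -i repl

# Show the overall firewall status and default action.
esxcli network firewall get
```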
  • Navigate to HCX Manager UI > Interconnect > Service Mesh > Run Diagnostics and review the results for any errors. The diagnostics test connectivity from the IX appliance to the required components (e.g., vCenter and ESXi hosts) and identify issues related to network communication. If there are errors related to closed ports, review the network and firewall configuration. For more information on the required ports, refer to the VMware Ports and Protocols and Network Diagrams for VMware HCX.
For HCX Health Check, access: HCX - Health Check and Best Practices

Additional Information

Impact/Risks:
Unavailable communication flows will disrupt VR Bulk Migration workflows.