NSX Edge Placement Mismatch Alarm


Article ID: 401014


Updated On:

Products

VMware NSX

Issue/Introduction

  • A UI mismatch error for Edge placement is seen after Edge VMs have been vMotioned from one vCenter to another, or after upgrading NSX-T to 3.2.4.1.
  • The Compute Manager hosting those Edge VMs has been deleted from NSX. The Edge node MP intent (i.e. internal Corfu tables such as EdgeTransportNode and DeploymentUnitInstance) still references the old vCenter that was removed from NSX.
  • The "vc_id" field in the payload of the API GET https://{{manager-ip}}/api/v1/transport-nodes/{{edge-transport-node-uuid}} points to a Compute Manager ID that has been deleted (a sketch for confirming this follows the list).
  • All efforts to resolve the issue have been unsuccessful. The alarm references an API call for resolving the issue, but executing that API with the correct values returns a bad syntax error.
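The stale reference can be confirmed with a minimal sketch such as the one below, assuming Python 3 with the "requests" library. The manager address, Edge transport node UUID, and credentials are placeholders, and the recursive lookup of "vc_id" avoids assuming its exact position in the transport node payload.

# Minimal sketch (assumption: Python 3 with "requests"); values below are placeholders.
import requests

MANAGER = "manager-ip"                     # NSX Manager IP/FQDN
EDGE_TN_ID = "edge-transport-node-uuid"    # Edge transport node UUID
AUTH = ("admin", "password")               # NSX admin credentials

def find_key(obj, key):
    # Recursively search parsed JSON for the first occurrence of "key".
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        for value in obj.values():
            found = find_key(value, key)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for item in obj:
            found = find_key(item, key)
            if found is not None:
                return found
    return None

session = requests.Session()
session.auth = AUTH
session.verify = False                     # lab use only; keep certificate checks in production

# Read the Edge transport node and extract the vc_id it references.
tn = session.get(f"https://{MANAGER}/api/v1/transport-nodes/{EDGE_TN_ID}").json()
vc_id = find_key(tn, "vc_id")

# List the Compute Managers currently registered in NSX.
cms = session.get(f"https://{MANAGER}/api/v1/fabric/compute-managers").json()
registered = {cm["id"] for cm in cms.get("results", [])}

if vc_id and vc_id not in registered:
    print(f"Stale reference: vc_id {vc_id} is not a registered Compute Manager")
else:
    print(f"vc_id {vc_id} matches a registered Compute Manager")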

Environment

VMware NSX 3.2.4.1

Cause

The Edge node refresh API operation fails because of a stale "vc_id" reference in the NSX Edge MP intent.

Resolution

Perform the following steps to resolve this issue:

  • Check the output of the Edge transport node state API:
    • GET https://<manager-ip>/api/v1/transport-nodes/<edge-uuid>/state
  • If "node_deployment_state" in the Edge transport node state API output shows a mismatch, similar to the example below, the mismatch is still present (see the sketch after the example output for automating this check):
{
  "node_deployment_state": {
    "state": "EDGE_VM_VSPHERE_SETTINGS_MISMATCH_RESOLVE",
    "details": [
      {
        "sub_system_id": "EDGE_TRANSPORT_NODE_MISMATCH_ALARMS",
        "state": "EDGE_VM_VSPHERE_SETTINGS_MISMATCH_RESOLVE",
        "failure_message": " configuration on vSphere : {\"CPU Reservation in shares\":\"NORMAL_PRIORITY\",\"Storage Id\":\"datastore-14\"}, intent vSphere configuration :{\"CPU Reservation in shares\":\"LOW_PRIORITY\",\"Storage Id\":\"datastore-50\"}",
        "failure_code": 16087
      }
    ],
    "failure_message": "",
    "failure_code": 0
  }
}
 
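The state check above can be scripted. The following is a minimal sketch under the same Python/requests assumptions as before; the manager address, Edge UUID, and credentials are placeholders.

# Minimal sketch (assumption: Python 3 with "requests"); values below are placeholders.
import requests

MANAGER = "manager-ip"
EDGE_TN_ID = "edge-uuid"
AUTH = ("admin", "password")

state = requests.get(
    f"https://{MANAGER}/api/v1/transport-nodes/{EDGE_TN_ID}/state",
    auth=AUTH, verify=False,
).json()

deployment = state.get("node_deployment_state", {})
if "MISMATCH" in deployment.get("state", ""):
    print("Mismatch still present:", deployment["state"])
    for detail in deployment.get("details", []):
        # Each detail carries the vSphere vs. intent configuration difference.
        print(" -", detail.get("failure_message", ""))
else:
    print("No placement mismatch reported; state is", deployment.get("state"))
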
  • To resolve the mismatch, call the refresh API (the refresh API does not require a request body):
    • POST https://<manager-ip>/api/v1/transport-nodes/<edge-uuid>?action=refresh_node_configuration&resource_type=EdgeNode
  • Check the output of the Edge transport node state API again: GET https://<manager-ip>/api/v1/transport-nodes/<edge-uuid>/state. If "node_deployment_state" for the Edge transport node shows NODE_READY, the mismatch is resolved from the Edge-MP side (a combined refresh-and-poll sketch follows the example output):
{
  "node_deployment_state": {
    "state": "NODE_READY",
    "details": []
  }
}
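
The refresh call and the follow-up state check can be combined in one sketch. This assumes the same Python/requests setup as before; the 10-second poll interval and 30-attempt limit are arbitrary choices, not product requirements.

# Minimal sketch (assumption: Python 3 with "requests"); values below are placeholders.
import time
import requests

MANAGER = "manager-ip"
EDGE_TN_ID = "edge-uuid"
AUTH = ("admin", "password")

base = f"https://{MANAGER}/api/v1/transport-nodes/{EDGE_TN_ID}"

# Fire the refresh API; it takes no request body.
resp = requests.post(
    f"{base}?action=refresh_node_configuration&resource_type=EdgeNode",
    auth=AUTH, verify=False,
)
resp.raise_for_status()

# Poll the state API until the Edge transport node reports NODE_READY, or give up.
for _ in range(30):
    state = requests.get(f"{base}/state", auth=AUTH, verify=False).json()
    current = state.get("node_deployment_state", {}).get("state", "")
    print("node_deployment_state:", current)
    if current == "NODE_READY":
        print("Mismatch resolved from the Edge-MP side.")
        break
    time.sleep(10)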

 

If the issue persists after running the above steps, please open a case with the Broadcom Support team for further assistance.
Creating and managing Broadcom support cases