
Unable to add a host to the cluster, as the "NEXT" option to proceed is greyed out on the SDDC Manager UI


Article ID: 414794


Updated On:

Products

VMware SDDC Manager

Issue/Introduction

  • Unable to add a host to the cluster because the "NEXT" option to proceed is greyed out in the SDDC Manager UI. However, the same option is available when adding a host to another Workload Domain cluster within the same SDDC Manager.
  • Adding a host to a vSphere cluster using the VMware Cloud Foundation API works without any issues.
  • The following entries appear in domainmanager.log:

The vDS name fetched from the vDS configuration is new-vdsname:

YYYY-MM-DDT##:##:##.###+0000 DEBUG [vcf_dm,################################,####] [c.v.e.s.c.s.ClusterNetworkConfigFetcher,dm-exec-12]  Returning port group configs: [com.vmware.vcf.rest.api.model.v1.clusters.PortgroupConfig@########, com.vmware.vcf.rest.api.model.v1.clusters.PortgroupConfig@########, com.vmware.vcf.rest.api.model.v1.clusters.PortgroupConfig@########]
YYYY-MM-DDT##:##:##.###+0000 DEBUG [vcf_dm,################################,####] [c.v.e.s.c.s.ClusterNetworkConfigFetcher,dm-exec-12]  Returning vds configuration: {"name":"new-vdsname","portGroups":[{"name":"new-vdsname-pg-mgmt","transportType":"MANAGEMENT","activeUplinks":["uplink2","uplink1"],"standByUplinks":[],"policy":"loadbalance_loadbased","vlanId":####},{"name":"new-vdsname-pg-vmotion","transportType":"VMOTION","activeUplinks":["uplink2","uplink1"],"standByUplinks":[],"policy":"loadbalance_loadbased","vlanId":####},{"name":"new-vdsname-pg-nfs","transportType":"NFS","activeUplinks":["uplink2","uplink1"],"standByUplinks":[],"policy":"loadbalance_loadbased","vlanId":####}],"nsxtSwitchConfig":{"transportZones":[{"name":"nsx01","transportType":"OVERLAY"}]},"inventoryMismatchInfo":[],"uplinks":["uplink2","uplink1"]}

 

The following is the NSX configuration that was fetched; here the vDS name is old-vdsname:

YYYY-MM-DDT##:##:##.###+0000 DEBUG [vcf_dm,################################,####] [c.v.e.s.c.s.ClusterNetworkConfigFetcher,dm-exec-7]  Preparing nsxt host switch configuration for host switch old-vdsname
YYYY-MM-DDT##:##:##.###+0000 DEBUG [vcf_dm,################################,####] [c.v.e.s.c.s.ClusterNetworkConfigFetcher,dm-exec-7]  nsxtHostSwitchConfiguration: {"vdsName":"old-vdsname","uplinkProfileName":"xxx-x","vdsUplinkToNsxUplink":[{"vdsUplinkName":"uplink2","nsxUplinkName":"uplink-2"},{"vdsUplinkName":"uplink1","nsxUplinkName":"uplink-1"}]}
YYYY-MM-DDT##:##:##.###+0000 DEBUG [vcf_dm,################################,####] [c.v.e.s.c.s.ClusterNetworkConfigFetcher,dm-exec-7]  globalNetworkProfileConfiguration: {"name":"xxx","isDefault":true,"nsxtHostSwitchConfigs":[{"vdsName":"old-vdsname","uplinkProfileName":"###################################","vdsUplinkToNsxUplink":[{"vdsUplinkName":"uplink2","nsxUplinkName":"uplink-2"},{"vdsUplinkName":"uplink1","nsxUplinkName":"uplink-1"}]}]}
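
To locate these entries on the SDDC Manager appliance, the domain manager log can be searched directly. This is a minimal sketch; the log path below is the typical SDDC Manager location and may differ between VCF releases:

# Search domainmanager.log for the fetched vDS and NSX host switch configurations
# (log path is an assumption based on the usual SDDC Manager layout)
grep -E "Returning vds configuration|nsxtHostSwitchConfiguration" /var/log/vmware/vcf/domainmanager/domainmanager.log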

 

Environment

SDDC Manager

Cause

The cause is a mismatch between the vDS name in the VCF inventory and the vDS name on the NSX side.

In the VCF inventory, the vDS name for the cluster is "new-vdsname".
On the NSX side, the vDS name is "old-vdsname".

Note: This issue can occur because of a case-sensitivity mismatch or a complete name mismatch (for example, the vDS was renamed, but the change is not reflected in the Transport Node Profile (TNP)).
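
To illustrate the comparison SDDC Manager is effectively making, the sketch below compares the two names taken from the example log entries above; even a difference only in letter case is still a mismatch. The variable values are placeholders from this example:

# Placeholder values taken from the example log entries above
VCF_VDS_NAME="new-vdsname"   # vDS name in the VCF inventory
NSX_VDS_NAME="old-vdsname"   # vDS name in the NSX Transport Node Profile

if [ "$VCF_VDS_NAME" = "$NSX_VDS_NAME" ]; then
    echo "vDS names match"
elif [ "$(printf '%s' "$VCF_VDS_NAME" | tr '[:upper:]' '[:lower:]')" = "$(printf '%s' "$NSX_VDS_NAME" | tr '[:upper:]' '[:lower:]')" ]; then
    echo "vDS names differ only in case - still a mismatch"
else
    echo "vDS names do not match"
fi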

Resolution

Confirmation:

  1. SSH to the NSX Manager and run the command below:
    • curl -k -u admin:'your-password' -X GET https://127.0.0.1/policy/api/v1/infra/host-transport-node-profiles/ > /tmp/output.json
  2. Validate the file; it is expected to contain the old, incorrect vDS name.
    • cat /tmp/output.json

Expected file content:

{
  "host_switch_spec" : {
    "host_switches" : [ {
      "host_switch_name" : "old-vdsname",
      "host_switch_id" : "## ## ## ## ## ## ## ##-## ## ## ## ## ## ## ##",
      "host_switch_type" : "VDS",
      "host_switch_mode" : "STANDARD",
      "ecmp_mode" : "L3",
      "host_switch_profile_ids" : [ {
        "key" : "UplinkHostSwitchProfile",
        "value" : "/infra/host-switch-profiles/########-####-####-####-############"
      } ],
      "uplinks" : [ {
        "vds_uplink_name" : "uplink2",
        "uplink_name" : "uplink-2"
      }, {
        "vds_uplink_name" : "uplink1",
        "uplink_name" : "uplink-1"
      } ],
      "is_migrate_pnics" : false,
      "ip_assignment_spec" : {
        "resource_type" : "AssignedByDhcp"
      },
      "cpu_config" : [ ],
      "transport_zone_endpoints" : [ {
        "transport_zone_id" : "/infra/sites/default/enforcement-points/default/transport-zones/########-####-####-####-############",
        "transport_zone_profile_ids" : [ ]
      } ],
      "not_ready" : false,
      "portgroup_transport_zone_id" : "/infra/sites/default/enforcement-points/default/transport-zones/########-####-####-####-############"
    } ],
    "resource_type" : "StandardHostSwitchSpec"
  },
  "ignore_overridden_hosts" : false,
  "resource_type" : "PolicyHostTransportNodeProfile",
  "id" : "<vds-id>",
  "display_name" : "new-vdsname",
  "description" : "name.abc.com",
  "path" : "/infra/host-transport-node-profiles/<vds-id>",
  "relative_path" : "<vds-id>",
  "parent_path" : "/infra",
  "remote_path" : "",
  "unique_id" : "<vds-id>",
  "realization_id" : "<vds-id>",
  "owner_id" : "########-####-####-####-############",
  "marked_for_delete" : false,
  "overridden" : false,
  "_system_owned" : false,
  "_protection" : "NOT_PROTECTED",
  "_create_time" : #############,
  "_create_user" : "admin",
  "_last_modified_time" : #############,
  "_last_modified_user" : "admin",
  "_revision" : 0
}
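
Instead of reading through the full JSON, the two relevant fields can be pulled out directly. If the output contains multiple Transport Node Profiles, check the one whose display_name matches the cluster's vDS. Assuming the output was saved to /tmp/output.json as in step 1:

# Show the NSX-side host switch name and the profile display name
grep -E '"host_switch_name"|"display_name"' /tmp/output.json
# Example output for this issue:
#   "host_switch_name" : "old-vdsname",
#   "display_name" : "new-vdsname",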
 

Now that we have verified that the names do not match, there are three options to resolve this.

Solution 1 - NSX Manager UI:

  1. In NSX Manager, navigate to System -> Fabric -> Hosts -> Transport Node Profile -> Edit.
  2. Change the name to the expected name (new-vdsname) and then save.

Solution 2 - NSX Manager CLI:

  1. Output the JSON for the existing Transport Node Profile to /tmp/TNPoutput.json:
    • curl -k -u admin -X GET "https://<NSX-Manager-IP>/policy/api/v1/infra/host-transport-node-profiles/<vds-id>" > /tmp/TNPoutput.json
  2. Replace the old switch name with the new switch name in the JSON configuration file:
    • sed -i 's/old-vdsname/new-vdsname/g' /tmp/TNPoutput.json
  3. Push the updated JSON configuration back to the Transport Node Profile:
    • curl -k -u admin -X PUT -H "Content-Type: application/json" -d @/tmp/TNPoutput.json  "https://<NSX-Manager-IP>/policy/api/v1/infra/host-transport-node-profiles/<vds-id>"
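
To confirm the change was applied, the profile can be fetched again and the host switch name checked. This reuses the same placeholders as the commands above:

# Verify that the Transport Node Profile now carries the expected vDS name
curl -k -u admin -X GET "https://<NSX-Manager-IP>/policy/api/v1/infra/host-transport-node-profiles/<vds-id>" | grep '"host_switch_name"'
# Expected: "host_switch_name" : "new-vdsname",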

Solution 3 - Using Postman:

  1. Run the GET API for the selected TNP:
    • GET https://<nsxmanager-ip>/policy/api/v1/infra/host-transport-node-profiles/<vds-id>
  2. Update the host_switch_name in the returned JSON:
    • "host_switch_name": "new-vdsname",
  3. Run the TNP update API, using the updated JSON from step 2 as the request body:
    • PUT https://{{nsxmanager-ip}}/policy/api/v1/infra/host-transport-node-profiles/<vds-id>
  4. Once the changes are made, try adding the ESXi host again; the NEXT option should no longer be greyed out.

Additional Information

If the vDS name is not correct at the vCenter Server level, it needs to be renamed in vCenter Server.
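
The rename can be done in the vSphere Client, or from a workstation with the govc CLI. The command below is only a sketch and assumes govc is already configured against the vCenter Server and that the switch lives in the datacenter's network folder; the datacenter name is a placeholder:

# Rename the distributed switch in vCenter (paths and names are placeholders)
govc object.rename /<datacenter-name>/network/old-vdsname new-vdsname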