/usr/lib/vmware/vm-support/bin/dump-vdl2-info.py
VTEP Count: 2
CDO status: enabled (deactivated)
    VTEP Interface: vmk10
        DVPort ID: a760bff7-####-####-####-71592ef259d1
        Switch Port ID: 67108881
        Endpoint ID: 0
    VTEP Interface: vmk12
        DVPort ID: 2906f7dd-####-####-####-2bf2a8fa4332
        Switch Port ID: 67108883
        Endpoint ID: 2
    Logical Network: 69633
        VTEP Endpoint ID: 1
    Logical Network: 68609
        VTEP Endpoint ID: 1
    Logical Network: 73760
        VTEP Endpoint ID: 1
    Logical Network: 73752
        VTEP Endpoint ID: 0
    Logical Network: 73736
        VTEP Endpoint ID: 1
    Logical Network: 68608
        VTEP Endpoint ID: 0
Note that the two VTEP interfaces report Endpoint IDs 0 and 2, while several logical networks still reference VTEP Endpoint ID 1, which no longer exists on the host: this is the incorrect endpoint mapping.
This issue is resolved in VMware NSX 4.2.0.
Workaround:
To resolve the unordered VTEPs issue, use the following process; it will allow you to reduce the number of VTEPs and avoid incorrect endpoint mapping. This workaround applies to VMware NSX versions 3.2.2 and above.
1. GET /api/v1/transport-nodes/<TN-UUID>/state
   -> this will list the hostswitch endpoints vmk10, vmk11, etc. (a scripted version of this check is sketched after these steps).
2. Reduce the number of VTEPs to one by editing the "uplinks" section of the transport node profile (the example below shows how the list is trimmed).
3. GET /api/v1/transport-nodes/<TN-UUID>/state
   to verify only one VTEP exists on the host.
4. Increase the number of VTEPs back to the required number with the same profile edit.
5. GET /api/v1/transport-nodes/<TN-UUID>/state
   again to verify the correct number and order of VTEPs are now configured on the host.
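For repeated checks, the state call can be scripted. The following is a minimal sketch using Python and the requests library; the manager address, credentials, and the host_switch_states/endpoints field names in the response are assumptions to adapt to your environment.

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager FQDN
TN_UUID = "<TN-UUID>"                            # transport node UUID
AUTH = ("admin", "password")                     # placeholder credentials

# Query the transport node state and print each host switch
# endpoint (vmk10, vmk11, ...) with its IP.
resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes/{TN_UUID}/state",
                    auth=AUTH, verify=False)  # lab only; verify certificates in production
resp.raise_for_status()
for hsw in resp.json().get("host_switch_states", []):
    for ep in hsw.get("endpoints", []):
        print(ep.get("device_name"), ep.get("ip"))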
To edit the profile, first find the <TNP-UUID>:
GET /api/v1/transport-node-profiles
Then retrieve the profile configuration:
GET /api/v1/transport-node-profiles/<TNP-UUID>
"uplinks": [
{
"vds_uplink_name": "Uplink 1",
"uplink_name": "Uplink-1"
},
{
"vds_uplink_name": "Uplink 2",
"uplink_name": "Uplink-2"
},
{
"vds_uplink_name": "Uplink 3",
"uplink_name": "Uplink3"
},
{
"vds_uplink_name": "Uplink 4",
"uplink_name": "Uplink4"
}
"uplinks": [
{
"vds_uplink_name": "Uplink 1",
"uplink_name": "Uplink-1"
},
{
"vds_uplink_name": "Uplink 2",
"uplink_name": "Uplink-2"
}
Then update the profile with the full modified body:
PUT /api/v1/transport-node-profiles/<TNP-UUID>
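If you prefer to script the edit, a minimal sketch is shown below. It assumes the uplinks list sits under host_switch_spec -> host_switches in the profile body (matching the JSON above) and reuses the same placeholder manager and credentials; the PUT sends back the full modified profile, including the _revision returned by the GET.

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager FQDN
TNP_UUID = "<TNP-UUID>"                          # transport node profile UUID
AUTH = ("admin", "password")                     # placeholder credentials

url = f"{NSX_MANAGER}/api/v1/transport-node-profiles/{TNP_UUID}"
profile = requests.get(url, auth=AUTH, verify=False).json()

# Trim the uplinks list on each host switch, e.g. from four entries to two.
for host_switch in profile["host_switch_spec"]["host_switches"]:
    host_switch["uplinks"] = host_switch["uplinks"][:2]

# PUT the complete modified profile body back.
resp = requests.put(url, json=profile, auth=AUTH, verify=False)
resp.raise_for_status()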
Once complete, verify the transport node configuration and state:
GET /api/v1/transport-nodes/<TN-UUID>
GET /api/v1/transport-nodes/<TN-UUID>/state
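The verification can also be scripted end to end. The sketch below resolves the transport node UUID by display name via GET /api/v1/transport-nodes and then prints the VTEP interfaces in order; the host display name and the response field names are assumptions for your environment.

import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder manager FQDN
AUTH = ("admin", "password")                     # placeholder credentials
HOSTNAME = "esxi-host-01"                        # placeholder host display name

# Resolve the transport node UUID from the host's display name.
nodes = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
tn_uuid = next(n["id"] for n in nodes if n.get("display_name") == HOSTNAME)

# Confirm the VTEPs are present and consecutively ordered, e.g. ['vmk10', 'vmk11'].
state = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes/{tn_uuid}/state",
                     auth=AUTH, verify=False).json()
vmks = [ep.get("device_name")
        for hsw in state.get("host_switch_states", [])
        for ep in hsw.get("endpoints", [])]
print(vmks)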
The <TN-UUID> can be found in the NSX-T UI under System > Fabric > Hosts: expand the cluster, click the three dots beside the host, and select Copy ID to Clipboard.

If the above workaround fails, or if you are on a VMware NSX version lower than 3.2.2 and still have incorrect endpoint mapping, you can reboot the host; this will remap the TEP to the correct endpoint and resolve the datapath issue.
Note: In the example given above (in Symptoms) of reducing from 4 to 2 uplinks, the host will continue to use vmk10 and vmk12; however, the reboot will resolve the endpoint mapping issue and there will be no further functional impact.