NSX-T host configuration shows partial success with error, 'Query teaming failed for dvs: <uuid>; LogicalSwitch full-sync realization query skipped on host'

Article ID: 336826

Products

VMware NSX

Issue/Introduction

This article provides a point of reference for troubleshooting the following error when it appears with a Partial Success status in the NSX-T configuration of a host transport node:

Host configuration; Query teaming failed for dvs <uuid> LogicalSwitch full-sync: Logical full-sync realization query skipped

Symptoms:
After upgrading, or after making changes to a host transport node that cause the transport node profile to be reapplied, the following error is seen:

[Screenshot: NSX UI showing Partial Success on the host transport node with the LogicalSwitch full-sync error]

Host configuration; Query teaming failed for dvs <dvs-uuid> LogicalSwitch full-sync: Logical full-sync realization query skipped.

There is no noticeable impact; the error appears to be cosmetic, showing Partial Success in the NSX Configuration view. The transport node profile appears to be applied to the host transport node, as other aspects of NSX-T on the host work as expected.
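
The reported configuration state of the host transport node can also be checked from the NSX Manager API. A minimal sketch, where <nsx-manager>, <username> and <transport-node-id> are placeholders for your environment:

# Query the realized state of the transport node from the NSX Manager API.
# The host configuration state and any reported errors are returned here.
curl -k -u '<username>' "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>/state"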

An example of the log entries generated on the host by this error:

2022-06-26T07:51:13.447Z nsx-opsagent[35750116]: NSX 35750116 - [nsx@6876 comp="nsx-esx" subcomp="opsagent" s2comp="nsxa" tid="35750452" level="WARNING"] Lacp policy for dvs [<uuid1>] is not found.
2022-06-26T07:51:13.447Z nsx-opsagent[35750116]: NSX 35750116 - [nsx@6876 comp="nsx-esx" subcomp="opsagent" s2comp="nsxa" tid="35750452" level="INFO"] Fetched 2 active & 0 standby uplinks for teaming type 555676052 of dvs <uuid1>
2022-06-26T07:51:13.448Z nsx-opsagent[35750116]: NSX 35750116 - [nsx@6876 comp="nsx-esx" subcomp="opsagent" s2comp="nsxa" tid="35750452" level="INFO"] [GetDvsConfigInfo] vdrPort of dvs [<uuid1>] uses teamType 5 and uplinks [Uplink 1,Uplink 2]

2022-06-26T07:51:13.448Z nsx-opsagent[35750116]: NSX 35750116 - [nsx@6876 comp="nsx-esx" subcomp="opsagent" s2comp="nsxa" tid="35750452" level="WARNING"] Lacp policy for dvs [<uuid2>] is not found.
2022-06-26T07:51:13.448Z nsx-opsagent[35750116]: NSX 35750116 - [nsx@6876 comp="nsx-esx" subcomp="opsagent" s2comp="nsxa" tid="35750452" level="ERROR" errorCode="ERR_GLOBAL_PROP_GET_FAILED"] Failed to get teaming for dvs [<uuid2>]: bad0003
2022-06-26T07:51:13.448Z nsx-opsagent[35750116]: NSX 35750116 - [nsx@6876 comp="nsx-esx" subcomp="opsagent" s2comp="nsxa" tid="35750452" level="WARNING"] [GetDvsConfigInfo] Query teaming in dvs [<uuid2>] returned NOT_FOUND

There are two host switches in this example: <uuid1> and <uuid2>.

Fetching the uplink and teaming configuration for the first host switch (<uuid1>) succeeds, but the query for the second host switch (<uuid2>) returns NOT_FOUND. This can be caused by an incorrect teaming configuration on that host switch.
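
The relevant entries and host switch UUIDs can be found directly on the affected ESXi host. A minimal sketch, assuming the default NSX-T opsagent log location on ESXi:

# Search the NSX opsagent log on the ESXi host for the teaming query failure.
# The log path is an assumption based on the default NSX-T location on ESXi.
grep -i "Failed to get teaming for dvs" /var/log/nsx-syslog.log

# List the host switches known to this host, with their UUIDs, to match
# the UUID reported in the error.
net-dvs -l | grep -E "^switch"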

The net-dvs output below shows the teaming configuration for the affected host switch; empty active and standby uplink lists in this output can lead to the error.

switch <uuid2> (vswitch)
    max ports: 9216
    global properties:
        com.vmware.common.opaqueDvs = false , propType = CONFIG
        com.vmware.vrdma.uuid = <vrdma-uuid> , propType = CONFIG
        com.vmware.vds.teamcheck.param: NonIPHASH
        com.vmware.vds.vlanmtucheck.param:
            ranges = 0 51-53
            propType = CONFIG POLICY
        com.vmware.common.alias = <Distributed Switch Name> 87acaf , propType = CONFIG
        com.vmware.common.uplinkPorts:
            uplink1, uplink2
            propType = CONFIG
....
        com.vmware.vswitch.teaming = 0x 3. 0. 0. 0.80. 2. 0. 0. 0. 1. a. 0.34. 0. 0. 0. 1. 0 <repeats 2599 times>
            propType = CONFIG
        com.vmware.vswitch.logicalswitch.teaming = 0x 1. 0. 0. 0.4c.6f.61.64.42.61.6c.61.6e.63.65.64.53.6f.75.72.63.65. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 0. 0. 0.80. 2. 0. 0.34. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.75.70.6c.69.6e.6b.32. 0 <repeats 2585 times>
            propType = CONFIG
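
The output above can be collected on the ESXi host with net-dvs. A minimal sketch, where <uuid2> stands in for the host switch UUID reported in the error:

# Dump the full net-dvs configuration from the host to a file for review.
net-dvs -l > /tmp/net-dvs-full.txt

# Filter for the uplink and teaming related properties of the affected switch.
# The context line count after the match is an arbitrary window for this example.
grep -A 60 "<uuid2>" /tmp/net-dvs-full.txt | grep -E "uplink|teaming|teamcheck"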

Environment

VMware NSX-T Data Center
VMware NSX-T Data Center 3.x

Cause

The DVS referenced in the error is not syncing properly to the host through the host configuration and the host transport node profile.

This is an indication that the host transport node configuration and the transport node profile need to be investigated, to ensure the DVS named in the error is configured properly for the host node it is applied to.

Resolution

Fixed in NSX-T 3.2.2

Workaround:
Take the UUID listed in the Partial Success error and use it to identify the DVS in question.
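
One way to map that UUID to the host's NSX-T configuration is to query the NSX Manager API for the transport node and inspect its host switch specification. A minimal sketch, where <nsx-manager>, <username> and <transport-node-id> are placeholders for your environment:

# Retrieve the transport node configuration from NSX Manager; the host switch
# specification shows which host switches / DVS UUIDs are applied to the host.
curl -k -u '<username>' "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>"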

Review the DVS configuration and determine whether it is still in use.

If the DVS is no longer in use, remove its configuration from the NSX-T host transport node and from the transport node profile applied to it.

Once that is done, trigger a reinstall of NSX-T on the host transport node; the configuration state then shows Success.
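
Depending on the NSX-T version, the host configuration can also be pushed to the host again from the NSX Manager API using the transport node resync action. A minimal sketch using the same placeholders as above:

# Resync the host configuration for the transport node after the change.
curl -k -u '<username>' -X POST "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=resync_host_config"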

If the DVS is still in use, confirm that it is configured properly and functioning as expected, and that there are no problems with its teaming configuration, which was the specific problem in this instance.
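
The DVS and its uplinks, as seen by the ESXi host, can be reviewed quickly with esxcli (a sketch; output fields vary by ESXi version):

# List the distributed switches configured on this ESXi host, including
# their VDS IDs and the uplinks currently attached.
esxcli network vswitch dvs vmware list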

In this instance the DVS was not being used, so it was removed from the hosts in question, which resolved the error.

If you continue to see this error on a DVS that should be functional, and you have corrected any teaming or other configuration problems found in the logs, but the error still appears after reapplying the NSX configuration for both the profile and the host transport node, open a support request with the VMware NSX-T team.

Additional Information

Impact/Risks:
A cosmetic error appears in the UI, showing Partial Success on the host transport nodes.

This may also indicate an issue on the DVS itself that needs to be investigated further.