This issue occurs because of the way ESXi stores the SR-IOV configuration. The number of virtual functions (VFs) is stored in a comma-separated driver module parameter called max_vfs. Each value in the list is the number of VFs to create for one of the physical PCI devices controlled by this driver.
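For example, the VF counts used in the tables below can be configured for the nmlx5_core driver from the ESXi shell (the values are illustrative, and a host reboot is required for the change to take effect):

esxcli system module parameters set -m nmlx5_core -p "max_vfs=1,2,3,4"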
However, this parameter is associated with the driver, not with an individual device. The mapping from list values to devices is therefore implicit: the values are applied in the s:b:d:f-sorted order of the PCI functions controlled by that driver. This initial s:b:d:f sorted order is no longer valid when a NIC is assigned to, or unassigned from, an N-DVS switch in enhanced data path mode.
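The s:b:d:f address and the driver associated with each NIC can be listed with:

esxcli network nic list

In the output, the PCI Device column shows each vmnic's s:b:d:f address and the Driver column shows the driver that controls it.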
For example, assume that multiple Mellanox ConnectX-4 NIC adapters (driver nmlx5_core) are present in multiple slots and present NIC physical functions at the s:b:d:f addresses shown below:
nic       slot    s:b:d:f         max_vfs
===========================================================
vmnic1    2.0     0000:0b:00.0    1
vmnic2    2.1     0000:0b:00.1    2
vmnic3    3.0     0000:0e:00.0    3
vmnic4    3.1     0000:0e:00.1    4
Note: This is just an example. Your system may be different.
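In this initial state, the VFs actually created on a NIC can be confirmed per device. For example, listing the VFs on vmnic3 is expected to show the three VFs configured above:

esxcli network sriovnic vf list -n vmnic3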
Now assume that one of the NICs (say, vmnic3) is used with an N-DVS that is configured in enhanced data path mode. The SR-IOV configuration on vmnic3 then gets re-applied as '1', because it is always the first value of max_vfs that gets applied:
nic       slot    s:b:d:f         max_vfs
===========================================================
vmnic1    2.0     0000:0b:00.0    1
vmnic2    2.1     0000:0b:00.1    2
vmnic3    3.0     0000:0e:00.0    1
vmnic4    3.1     0000:0e:00.1    4
The module parameters for the nmlx5_core driver did not change, but the way ESXi interprets them is now incorrect: the first value of max_vfs, which initially corresponded to vmnic1, is now applied to vmnic3 as well. The same behavior is seen with NICs from other vendors whose drivers support enhanced data path.
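Re-running the same checks illustrates the mismatch: the module parameter list is unchanged, while vmnic3 now reports a single VF instead of the configured three:

esxcli system module parameters list -m nmlx5_core
esxcli network sriovnic vf list -n vmnic3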