A rare race condition between interrupt virtualization and the VMkernel CPU scheduler in VMs with DirectPath I/O devices might result in guest kernel soft lockups.
This issue is resolved in VMware ESXi 8.0 Update 3, available under Download Broadcom products and software.
1. Open an SSH session to the ESXi host.
2. Check the current value of the option vtdEnableIntrVirt in the ESXi boot-time options in the VMkernel.boot.* namespace, using the following command:
# esxcfg-advcfg --get-kernel vtdEnableIntrVirt
3. Set the option to FALSE using the esxcfg-advcfg command:
# esxcfg-advcfg --set-kernel "FALSE" vtdEnableIntrVirt
4. Verify that the option was correctly set:
# esxcfg-advcfg --get-kernel vtdEnableIntrVirt
5. Reboot the ESXi host to apply the changed setting.
6. Once the host is rebooted, run the command esxcfg-advcfg --get-kernel vtdEnableIntrVirt again to verify that the value still shows as FALSE.
7. Repeat the same procedure for the option iovEnablePostedIntr in the ESXi boot-time options in the VMkernel.boot.* namespace: check the current value, set it to FALSE, verify the setting, reboot the host, and then verify that the value still shows as FALSE after the reboot:
# esxcfg-advcfg --get-kernel iovEnablePostedIntr
# esxcfg-advcfg --set-kernel "FALSE" iovEnablePostedIntr
# esxcfg-advcfg --get-kernel iovEnablePostedIntr
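The steps above can be sketched as a small shell loop over both boot-time options. This is a dry-run sketch: it only prints the esxcfg-advcfg commands from this article rather than executing them, so you can review the sequence before running it on a host (the option names and command syntax are taken verbatim from the steps above; everything else is illustrative):

```shell
#!/bin/sh
# Dry-run sketch of the workaround: print the esxcfg-advcfg commands
# for both kernel boot-time options instead of executing them.
print_workaround() {
    for opt in vtdEnableIntrVirt iovEnablePostedIntr; do
        # Show the current value, set it to FALSE, then read it back.
        echo "esxcfg-advcfg --get-kernel $opt"
        echo "esxcfg-advcfg --set-kernel \"FALSE\" $opt"
        echo "esxcfg-advcfg --get-kernel $opt"
    done
    # The new values only take effect after a host reboot.
    echo "reboot"
}

print_workaround
```

To actually apply the workaround, you would run the printed commands in an SSH session on the ESXi host, verifying each value again after the reboot.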
Note: In VMware vSphere ESXi 7.0 Update 2 and later 7.0 versions, the default value of vtdEnableIntrVirt is set to TRUE.