Controlling LUN queue depth throttling in VMware ESXi

Article ID: 311232


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

  • If the ESXi host detects a queue full condition, it may fail SCSI commands
  • A queue full condition may be reported as a QFULL or TASK SET FULL state
  • If QFULL conditions exist, the ESXi VMkernel log may contain entries similar to:
    • H:0x0 D:0x28 P:0x0 Valid sense data: 0x## 0x## 0x##
    • H:0x0 D:0x08 P:0x0 Valid sense data: 0x## 0x## 0x##

The hexadecimal value 0x28 in the first entry is the SCSI status code for the queue full (task set full) state. The value 0x08 in the second entry is the SCSI status code indicating a device busy state.

  • You may also see a device busy error



Environment

VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x

Resolution

VMware vSphere ESXi has an adaptive queue depth algorithm that adjusts the LUN queue depth in the VMkernel I/O stack. This algorithm is activated when the storage array indicates I/O congestion by returning a BUSY or QUEUE FULL status. These status codes may indicate congestion at the LUN level or at the port (or ports) on the array. When congestion is detected, VMkernel throttles the LUN queue depth. The VMkernel attempts to gradually restore the queue depth when congestion conditions subside.

This algorithm is activated by changing the values of the QFullSampleSize and QFullThreshold parameters. When the number of QUEUE FULL or BUSY conditions reaches the QFullSampleSize value, the LUN queue depth is reduced to half of its original value. When the number of good status completions received reaches the QFullThreshold value, the LUN queue depth is increased by one at a time.
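For example (illustrative values only, assuming a LUN whose normal device queue depth is 64 and the commonly recommended settings of QFullSampleSize=32 and QFullThreshold=4): after the host receives 32 QUEUE FULL or BUSY completions for that LUN, the VMkernel cuts the LUN queue depth from 64 to 32. Once congestion clears, each group of 4 good completions raises the queue depth by one, until it climbs back to its original value of 64.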

Note: Careful consideration is needed if multiple hosts access the same LUN or array ports. For the adaptive queue depth algorithm to be effective, all hosts accessing the LUN/port must have some form of adaptive queue depth algorithm. If some hosts run the adaptive queue depth algorithm while other hosts do not, the hosts that are not running the algorithm may consume the resources/slots on the array that are freed up by the adaptive hosts. This causes the hosts running the algorithm to exhibit lower disk I/O throughput. This may also increase the I/O congestion that initially triggered the adaptive algorithm.

If hosts running operating systems other than ESXi are connected to array ports that are being accessed by ESXi hosts, and the ESXi hosts are configured to use the adaptive algorithm, either make sure the operating systems use an adaptive queue depth algorithm or isolate those hosts on different ports on the storage array.
 

These parameters can be set globally, per device, or both, depending on business needs, because different vendors have different optimal values for their arrays. If you apply the global option and also set one of the parameters for a specific device, the setting for the specific device always takes precedence over the global setting.
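For example (illustrative values only): if the global settings are QFullSampleSize=32 and QFullThreshold=4, and one device is explicitly configured with the per-device command shown later in this article to use 16 and 2, that device uses 16/2 while every other device continues to use the global 32/4.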

Note: Devices managed by the global parameters will not report the configured values when queried using the command esxcli storage core device list. Instead, these devices report a value of zero for both fields:
 
Queue Full Sample Size: 0
Queue Full Threshold: 0
 
 
This is expected behavior: the correct global LUN queue depth values are still being applied to these devices. You can confirm this with esxtop; for details, see Checking the queue depth of the storage adapter and the storage device (broadcom.com).
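A quick check from the ESXi shell (a sketch, assuming SSH access; field names can vary slightly by release):
 
esxtop
 
In esxtop, press u to switch to the disk device view; the DQLEN column shows the effective queue depth currently applied to each device.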

Setting the parameters globally from the vCenter Web Client


  1. Select the ESXi host you want to modify.
  2. Select the Configure tab.
  3. Click Advanced System Settings.
  4. Click Edit, select the filter (funnel) icon at the top of the Key column, and search for QFull.

     
  5. Set QFullSampleSize to a value greater than zero. The usable range is 0 to 64.
    • For 3PAR, NetApp and IBM XIV storage arrays, set the QFullSampleSize value to 32.
    • For other storage arrays, contact your storage vendor.
       
  6. Set QFullThreshold to a value less than or equal to QFullSampleSize. The usable range is 1 to 16.
    • For 3PAR storage arrays, set the QFullThreshold value to 4.
    • For NetApp and IBM XIV storage arrays, set the QFullThreshold value to 8.
    • For other storage arrays, contact your storage vendor.
The settings take effect immediately. You do not need to reboot the ESXi host.
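If you prefer the command line for the global settings, the same values can typically be set with esxcli through the host's advanced options (a sketch, assuming the standard /Disk/QFullSampleSize and /Disk/QFullThreshold option paths; confirm the names on your host with esxcli system settings advanced list):
 
esxcli system settings advanced set -o /Disk/QFullSampleSize -i 32
esxcli system settings advanced set -o /Disk/QFullThreshold -i 4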

Setting the parameters per-device from the ESXi command line


Run the following ESXCLI command:
 
esxcli storage core device set --device device_uid --queue-full-threshold Q --queue-full-sample-size S
 
For example:
 
esxcli storage core device set --device device_uid --queue-full-sample-size 32 --queue-full-threshold 4

Settings are persistent across reboots.
You can retrieve the values for a device by using the corresponding list command.

esxcli storage core device list

The command supports an optional --device parameter.

esxcli storage core device list --device device
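To display only the two relevant fields for a single device, you can filter the output (a sketch; naa.xxxxxxxxxxxxxxxx is a placeholder for your device UID, and devices governed only by the global settings will show 0 here, as noted above):
 
esxcli storage core device list --device naa.xxxxxxxxxxxxxxxx | grep -i "queue full"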

The recommended values are the same as in earlier releases.
QFullSampleSize:
  • For 3PAR, NetApp and IBM XIV storage arrays, set the QFullSampleSize value to 32.
  • For other storage arrays, contact your storage vendor.
QFullThreshold:
  • For 3PAR storage arrays, set the QFullThreshold value to 4.
  • For NetApp and IBM XIV storage arrays, set the QFullThreshold value to 8.
  • For other storage arrays, contact your storage vendor.
The settings take effect immediately. You do not need to reboot the ESXi host.


 


Additional Information