Newly presented LUNs randomly report a maximum queue depth of 32, while other LUNs correctly show 128 across all hosts.
All devices originate from the same storage array, and no configuration issue is found on the array side.
Attempts to set the LUN queue depth with esxcli and vsish both fail silently, returning no error and leaving the value unchanged:
esxcli storage core device set -d naa.6000097xxxxxxxxxxxxxx -m 128
vsish > set /storage/scsifw/devices/naa.6000097000022000xxxxxxxx/maxQueueDepth 128
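As a quick check, the effective per-device limit can be read back with esxcli to confirm that neither command took effect (the naa identifier below is a placeholder; substitute the actual device ID):

-> esxcli storage core device list -d naa.6000097xxxxxxxxxxxxxx | grep -i "Max Queue Depth"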
The per-device commands fail because the device maximum cannot exceed the queue depth enforced at the driver layer. vmkload_mod -s nfnic confirms that the nfnic module parameter lun_queue_depth_per_path defaults to 32:

[root@xxx-n151:~] vmkload_mod -s nfnic
vmkload_mod module information
input file: /usr/lib/vmware/vmkmod/nfnic
Version: 5.0.0.43-1OEM.700.1.0.15843807
Build Type: release
License: Proprietary
Required name-spaces:
com.vmware.nvme#0.0.0.1
com.vmware.vmkapi#v2_6_0_0
Parameters:
report_lun_retry_delay: int
report_lun_retry_delay: Default = 0
nvme_io_throttle_count: int
nvme_io_throttle_count: Default = 1024. Range [1 - 1024]
fnic_fdmi_support: int
FDMI support: Default = 1
portchannel_event_handling: int
Enable portchannel event handling: Default = 1
ecpu_ka_timeout: ulong
nfnic ecpu keep alive timeout: Default = 0. Range [10 - 120] seconds. 0 to turn off.
log_throttle_count: ulong
nfnic log throttle count: Default = 64
lun_queue_depth_per_path: ulong
nfnic lun queue depth per path: Default = 32. Range [1 - 1024]
nvme_max_q_count: int
nvme maximum Qcount per I/O controller: Default = 50
To increase the queue depth at the driver layer to 128, run the following command and then reboot the host for the change to take effect.
-> esxcli system module parameters set -p lun_queue_depth_per_path=128 -m nfnic
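After the reboot, verify that the module parameter is set and that the devices now report the higher maximum (the device ID below is again a placeholder):

-> esxcli system module parameters list -m nfnic | grep lun_queue_depth_per_path
-> esxcli storage core device list -d naa.6000097xxxxxxxxxxxxxx | grep -i "Max Queue Depth"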