Configure Storage Polling for High Performance NVMe PCIe Drives From ESXi 8.0

Article ID: 313504


Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

IOPS do not scale with ESXi 7.0 U3 on high performance NVMe PCIe drives.

Note: "High performance NVMe PCIe drives" here usually refers to drives whose 4 KB random read IOPS can exceed 800K according to the drive's specification data sheet.


Environment

VMware vSphere ESXi 7.x
VMware vSphere ESXi 6.0
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.7

Cause

The nvme_pcie driver in ESXi 7.0 U3 processes I/O in interrupt mode, which is one of the IOPS scaling bottlenecks on high performance NVMe PCIe drives.

Resolution

In ESXi 8.0, the nvme_pcie driver enables a polling feature to process I/O, which can scale IOPS effectively. VMware recommends that users with high performance I/O requirements upgrade to ESXi 8.0.
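For example, after upgrading you can confirm which nvme_pcie driver is installed by listing its VIB. This is a minimal sketch; the VIB name "nvme-pcie" is the usual inbox driver name and is assumed here, so it may differ in your build.
```
# List the installed NVMe PCIe driver VIB (the "nvme-pcie" VIB name is an assumption)
esxcli software vib list | grep -i nvme-pcie
```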

To make better use of the polling feature, increase the hardware I/O queue number of the drive with the following esxcli command (REBOOT or module RELOAD REQUIRED). VMware suggests setting "vmknvme_io_queue_num" to 4, 8, or 16.
```
esxcli system module parameters set -m vmknvme -p "vmknvme_io_queue_num=4"
```
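After the host reboots (or the vmknvme module is reloaded), the parameter can be verified. The check below is a minimal sketch; the grep filter is only for readability and output formatting may vary by build.
```
# Confirm the vmknvme module parameter took effect after reboot/reload
esxcli system module parameters list -m vmknvme | grep vmknvme_io_queue_num
```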