vSAN performance diagnostics reports: "One or more disk(s) are not in active use"

Article ID: 326525


Products

VMware vSAN

Issue/Introduction

This article explains the vSAN performance diagnostics issue "One or more disk(s) are not in active use": why it appears and the possible solutions to address it.


Symptoms:
You see a message in vSAN performance diagnostics that says:
One or more disk(s) are not in active use


Cause

This issue means that one or more vSAN backend capacity disks are not seeing any I/O activity. It is observed only for all-flash vSAN clusters or for benchmarks that include some read I/O activity. While some capacity disks may not see read I/O during certain intervals, the best performance is usually achieved when read I/O is spread evenly across all backend capacity disks.

Please also note that write IOPS are triggered on the vSAN backend disks only when the elevator thread runs for the disk group. There may therefore be intervals with no write IOPS on some backend disks (because the elevator did not run during those periods), but the best performance for a write workload still requires write activity on all backend capacity disks.
 
You may ignore this issue under the following circumstances:
  1. A capacity disk shows IOPS during some time periods of the benchmark but not others. This pattern is acceptable; ignore the issue for that disk (a sketch of this check follows the list).
  2. A vSAN stretched cluster uses either a standalone vSAN node or a vSAN Witness Appliance for witness components. In such setups, vSAN performance diagnostics reports this issue for the capacity disks on the witness node, and you can safely ignore it.
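
To illustrate the pattern in item 1, below is a minimal sketch of how per-interval IOPS samples for each capacity disk could be classified. The disk names and sample values are hypothetical placeholders, not output from any vSAN tool:

# Sketch: classify capacity disks by observed I/O activity across sampling
# intervals. Disk names and IOPS values below are hypothetical placeholders.

def classify_disks(iops_by_disk):
    """Return (idle, intermittent) disk lists from per-interval IOPS samples."""
    idle, intermittent = [], []
    for disk, samples in iops_by_disk.items():
        active_intervals = sum(1 for s in samples if s > 0)
        if active_intervals == 0:
            idle.append(disk)          # triggers the diagnostic; investigate
        elif active_intervals < len(samples):
            intermittent.append(disk)  # active in some intervals; ignorable
    return idle, intermittent

samples = {
    "naa.5000000000000001": [120, 95, 110, 130],  # active throughout
    "naa.5000000000000002": [0, 40, 0, 55],       # intermittent: ignore per item 1
    "naa.5000000000000003": [0, 0, 0, 0],         # idle for the whole benchmark
}

idle, intermittent = classify_disks(samples)
print("Idle disks:", idle)
print("Intermittently active disks (ignorable):", intermittent)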
 

Resolution

Here is a list of possible remedies:
  1. If you are running a 100% read workload, such as 100% sequential read or 100% random read, the contents of the capacity disks may be cached entirely in the write buffer of the cache tier. In that case, reads are serviced from the cache tier and never hit the capacity tier. Depending on the specifications of your cache-tier and capacity-tier storage, and the number of capacity disks in your disk group, you may get better read bandwidth if some of the read content resides in the capacity tier. To achieve this, increase the size of the virtual machine disks (VMDKs) so that the working set exceeds 400GB per disk group (see the sizing sketch after this list). In general, for an all-flash vSAN cluster, a 100% read workload performs better with large working sets.
  2. If your benchmark has a write I/O component, you may increase the number of VMDKs so that all backend capacity disks hold some object components. We recommend that a benchmark create one VMDK for each capacity disk on the system and drive I/O to all of these VMDKs concurrently. A safe rule is to create two VMs for every disk group on your system, with eight VMDKs per virtual machine, and to size the VMDKs according to the available cache-tier capacity across all disk groups (see the sizing sketch after this list).
  3. Alternatively, your benchmark may not be issuing I/O to all the VMDKs that were created. Check whether that is the case and, if so, correct it so that I/O targets every VMDK.
  4. If you do not want to increase the number of virtual machines or VMDKs, you may increase the “Number of disk stripes per object” setting (the default value is 1) in the vSAN storage policy with which the VMDKs were created. This value is the number of capacity disks across which each replica of a storage object is striped (a small illustration follows the sizing sketch below). You may apply the policy to existing virtual machines, in which case all existing VMDKs are reconfigured and you must wait for the reconfiguration traffic to finish. Alternatively, you can apply the policy manually to existing or new virtual machines/VMDKs. Also see About the Virtual SAN Default Storage Policy.
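
To make the sizing guidance in remedies 1 and 2 concrete, here is a minimal sketch of the arithmetic. The cluster parameters are hypothetical; only the 400GB working-set figure and the two-VMs-per-disk-group/eight-VMDKs-per-VM rule come from this article:

# Sizing sketch for remedies 1 and 2. Cluster parameters are hypothetical.

disk_groups = 4               # hypothetical: disk groups in the cluster
cache_gb_per_group = 600      # hypothetical: cache-tier device size per group

# Remedy 1: for a 100% read workload, target a working set larger than
# 400GB per disk group so reads are not served entirely from the cache tier.
min_working_set_gb = 400 * disk_groups
print(f"Target total working set: > {min_working_set_gb} GB")

# Remedy 2: two VMs per disk group, eight VMDKs per VM.
vms = 2 * disk_groups
vmdks_per_vm = 8
total_vmdks = vms * vmdks_per_vm
print(f"Recommended layout: {vms} VMs x {vmdks_per_vm} VMDKs = {total_vmdks} VMDKs")

# Size the VMDKs against the available cache tier across all disk groups;
# an even split is one hypothetical way to apply that guidance.
total_cache_gb = cache_gb_per_group * disk_groups
print(f"Suggested VMDK size: ~{total_cache_gb / total_vmdks:.0f} GB each "
      f"({total_cache_gb} GB of cache tier cluster-wide)")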
 
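For remedy 4, here is a small illustration of how the stripe-width setting spreads an object's data across capacity disks, assuming a RAID-1 (mirroring) layout; the FTT and stripe-width values are hypothetical examples:

# Stripe-width sketch for remedy 4. Assumes RAID-1 (mirroring) objects;
# the FTT and stripe-width values are hypothetical examples.

def data_components(ftt, stripe_width):
    """Each of the (ftt + 1) replicas of a RAID-1 vSAN object is striped
    across stripe_width capacity disks."""
    return (ftt + 1) * stripe_width

for stripes in (1, 2, 4):
    n = data_components(ftt=1, stripe_width=stripes)
    print(f"Stripe width {stripes}: data components on up to {n} capacity disks")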


Additional Information


https://core.vmware.com/vsan