Understanding storage device detection in ESXi

Article ID: 311466



Products

VMware vSphere ESXi

Issue/Introduction

This article provides information on how VMware ESXi detects storage devices during operations such as a rescan.


Environment

VMware vSphere ESXi 5.1
VMware vSphere ESXi 5.5
VMware vSphere 6.x
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.7
VMware vSphere ESXi 7.0
VMware vSphere 7.0.x
VMware vSphere ESXi 8.0

Resolution

SCSI Device Detection

  1. Storage Adapters

    Before any storage devices can be detected, all required drivers must be loaded. Once the drivers are loaded, the devices are enumerated and claimed by the corresponding drivers. In addition to physical hardware, there are software-based adapters such as those provided by the software iSCSI initiator and the FCoE software modules. To avoid issues at this stage, ensure that you are using a compatible and functional I/O device with VMware ESXi. For more information about supported hardware adapters, see the VMware Compatibility Guide.

    Note: If the usbarbitrator service is disabled and a USB storage device is connected, an additional storage adapter may appear for that device. This adapter may be used temporarily to access a FAT16 file system, but USB storage is not a supported device for VMFS file systems.
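
    For example, the following commands can be used to confirm which adapters were detected and to trigger a fresh device scan (a minimal illustration; adapter names, drivers, and output vary by host and hardware):

      # List all storage adapters and the drivers that claimed them
      esxcli storage core adapter list

      # Rescan all adapters for newly presented devices
      esxcli storage core adapter rescan --all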

     
  2. Targets

    Before any storage devices can be detected, the detected storage adapters must be able to communicate with the destination that contains the storage devices. The medium between the adapter and the storage device may vary (for example, Fibre Channel, Fibre Channel over Ethernet, or iSCSI), and each storage type has its own method of establishing a connection between the initiator and the target. To avoid issues at this stage, you must ensure that you can establish basic connectivity:
     
    • Fibre Channel

      The Fibre Channel protocol uses World Wide Node Names (WWNN) and World Wide Port Names (WWPN) to identify nodes within the network. Whether nodes can see each other within the Fibre Channel network depends heavily on the zoning configuration of the switches. For more information about zoning, see Section 3.2 of the Fibre Channel Zone Server MIB (RFC 4936).

      The command esxcli storage core path list can be used to list all paths currently detected (see the command examples after this list). Alternatively, you can log into the ESXi host and look in the /proc/scsi/<driver> directory for driver-specific information, although the contents vary greatly depending on the driver in use. If you cannot see the desired WWPN and WWNN addresses for your storage devices, you may have a zoning issue or a problem in the Fibre Channel network or switches.

      For additional assistance, engage your Fibre Channel switch vendor, or see Troubleshooting fibre channel storage array connectivity (1003680).
       
    • iSCSI

      The iSCSI protocol uses iSCSI Qualified Names (IQNs) to identify nodes within the network. The IQN values are used to establish iSCSI sessions between the initiator and the target. For more information about iSCSI, see Internet Small Computer Systems Interface (iSCSI) (RFC 3720).

      The iSCSI protocol depends on an Ethernet network to communicate between initiators and targets. The command vmkping -I <interface> <destination IP> can be used to confirm IP communication to the desired target. The command nc -s <source IP> <destination IP> <port> (ports 860 and 3260 are commonly used for iSCSI) can be used to confirm that the desired ports are accessible on the target; see the examples after this list. If basic network connectivity cannot be established, there may be underlying networking issues that need to be resolved.

      The command esxcli storage core path list can also be used to list all paths discovered by the iSCSI session. If you are able to communicate with the target but do not see the desired list of paths, there may be an issue with the IQN configuration on the initiator or the target.

      For additional assistance, engage your Ethernet switch vendor, or see Troubleshooting iSCSI array connectivity issues (1003681).
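
    The following commands illustrate these basic connectivity checks. The interface name and IP addresses below (vmk1, 10.0.0.10, 10.0.0.50) are placeholder values; substitute the values for your environment. The esxcli storage san fc namespace is available on recent ESXi releases.

      # Fibre Channel: show the WWNN/WWPN of each FC adapter on the host
      esxcli storage san fc list

      # List all paths currently detected (Fibre Channel and iSCSI)
      esxcli storage core path list

      # iSCSI: confirm IP reachability from a specific VMkernel interface
      vmkping -I vmk1 10.0.0.50

      # iSCSI: confirm the target port is reachable from a given source IP
      nc -s 10.0.0.10 10.0.0.50 3260

      # iSCSI: list established sessions between initiator and target
      esxcli iscsi session list
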
  3. SCSI REPORT LUNS command

    A SCSI REPORT LUNS command (operation code A0h) is sent to the target, using one of several possible addressing methods. The target should return a logical unit inventory containing its logical unit numbers (LUNs). If the target cannot provide the list, it returns a check condition with a reason. For more information about the SCSI REPORT LUNS command, see Section 3.33 of the SCSI Commands Reference Manual from Seagate.

    If the storage array responds with a list, but the list does not contain the desired logical units (LUNs), this may indicate an issue associating the initiator with the device. The configuration that determines the association between the storage device and the ESXi host is stored on the storage array, and the array vendor may need to be engaged.
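
    To verify which logical units the host received in the inventory, the detected devices can be listed (a minimal example; device names and details vary by array):

      # List all SCSI devices (LUNs) reported by the attached targets
      esxcli storage core device list
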
  4. SCSI INQUIRY command and Vital Product Data (VPD)

    The SCSI INQUIRY command (operation code 12h) is sent to the target for each logical unit. For more information about the SCSI INQUIRY command, see Section 3.6 of the SCSI Commands Reference Manual from Seagate. The target should respond with information about the logical unit using Vital Product Data (VPD) page codes. For more information about the VPD pages, see Section 5.4 of the SCSI Commands Reference Manual from Seagate.

    Note: The array must provide the same identifier for all the ESXi hosts in the vSphere cluster.


    If multiple hosts receive VPD pages containing different unique identifiers for a common shared device containing a VMFS volume, then the volume may be detected as a snapshot. For more information, see Troubleshooting LUNs detected as snapshot LUNs in vSphere, and Managing Duplicate VMFS Datastores for ESXi.

    Virtual machines with RDMs cannot be migrated with vMotion if the VPD pages reported are not the same across all the hosts in the same DRS cluster. For more information, see Virtual Disk 'X' is a mapped direct access LUN that is not accessible (1016210).

    If the target cannot provide additional information about the logical unit, the target returns a check condition. For more information about check conditions, see Interpreting SCSI sense codes (289902).

    If there is a problem with the information contained in the VPD pages, it may be the result of a misconfigured logical unit, and the array vendor may need to be engaged.
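
    To check the unique identifier a device reports (the naa./eui./t10. name is derived from the VPD pages) and to find volumes flagged as snapshots because of identifier mismatches, commands such as the following can be used. The naa.6006... device name is a placeholder; use a device name from your own host:

      # Show device details, including the unique identifier
      esxcli storage core device list -d naa.60060160a0b13001234567890abcdef0

      # List VMFS volumes detected as unresolved/snapshot copies
      esxcli storage vmfs snapshot list
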
  5. SCSI MODE SENSE command

    The SCSI MODE SENSE command (operation code 1Ah) is sent to the target with a specific page code. For more information about the MODE SENSE command, see Section 3.11 of the SCSI Commands Reference Manual from Seagate. The target responds with parameters describing how the initiator may use the logical unit (LUN).

    If the logical unit does not support the page code provided, the target may return a CHECK CONDITION with the sense key set to ILLEGAL REQUEST and the additional sense code INVALID FIELD IN CDB. For more information about check conditions, see Interpreting SCSI sense codes (289902).
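
    Check conditions returned by the target are recorded in the VMkernel log. A quick way to look for them is sketched below; the exact log format varies by ESXi release, and the sense values shown correspond to the condition described above:

      # Search the VMkernel log for SCSI sense data reported on device I/O
      grep -i "sense data" /var/log/vmkernel.log

      # ILLEGAL REQUEST with INVALID FIELD IN CDB appears as
      # sense key 0x5, ASC 0x24, ASCQ 0x0
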
  6. Multipathing Plug-in (MPP)

    To manage storage multipathing, ESXi uses a special VMkernel layer, Pluggable Storage Architecture (PSA). The PSA is an open modular framework that coordinates the simultaneous operation of multiple multipathing plugins (MPPs). The multipathing plug-in is responsible for:
     
    • Collapsing multiple physical paths into a single logical device.
    • Handling path failures.
    • Handling reservations of logical devices.
    • Load balancing.

    Based on a set of claim rules, the host determines which multipathing plug-in (MPP) should claim the paths:
     
    • Native Multipathing Plug-in

      The Native Multipathing Plug-in (NMP) comes included with ESXi. For the paths managed by the NMP module, a second set of claim rules is applied. These rules determine which Storage Array Type Plug-in (SATP) should be used to manage the paths for a specific array type, and which Path Selection Plug-in (PSP) is to be used for each storage device. For more information about the Native Multipathing Plug-in, see What is Pluggable Storage Architecture (PSA) and Native Multipathing (NMP)? (1011375), VMware Multipathing Module, and Managing Storage Paths and Multipathing Plug-Ins.
       
    • Third Party Multipathing Plug-in

      Pluggable Storage Architecture (PSA) is a collection of VMkernel APIs that allow third party hardware vendors to insert code directly into the ESXi storage I/O path. This allows third party software developers to design their own load balancing techniques and failover mechanisms for a particular storage array. If you are using a third party MPP and are experiencing issues with how the paths handle grouping, load balancing, failover, or reservations, you will need to engage the third party vendor for assistance.
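
    To see which plug-in has claimed each path, and which SATP and PSP the NMP has selected for each device, the following commands can be used (output varies by array and claim rule configuration):

      # Show the claim rules that determine which MPP claims each path
      esxcli storage core claimrule list

      # Show the SATP and PSP selected for each device claimed by NMP
      esxcli storage nmp device list
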
  7. SCSI READ CAPACITY command

    The ESXi host attempts to read the capacity of each logical unit. If the logical unit exceeds the supported maximums for the product, this operation may fail. For more information about the SCSI READ CAPACITY command, see Section 3.22 of the SCSI Commands Reference Manual from Seagate. For more information about our product maximums, see VMware Configuration Maximums.
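
    To confirm the capacity the host has read for a device, checks such as the following can be used. The naa.6006... device name is again a placeholder:

      # Show the partition table and capacity (in sectors) seen by the host
      partedUtil getptbl /vmfs/devices/disks/naa.60060160a0b13001234567890abcdef0

      # The device Size field (reported in MB) also appears in the device details
      esxcli storage core device list -d naa.60060160a0b13001234567890abcdef0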


Additional Information