Setting Up NPIV in a vSphere Environment

Article ID: 313970


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

This article covers the prerequisites for NPIV, the steps to configure it, a practical configuration example, and the behavior to expect once NPIV is configured correctly.

Symptoms:

  • Implementing NPIV in a vSphere environment requires only minor changes within vSphere itself; however, it requires substantial configuration on the SAN switch and storage array side.
  • A common expectation after setting up NPIV is to see the virtual WWPNs on the switch or the storage array. This is not always the case, and the host often logs events similar to the following:
2024-03-20T09:59:46.921Z cpu58:2099254)Registering Vport Device done:[Success]
2024-03-20T09:59:46.921Z cpu58:2099254)Device: 507: psa:driver->ops.scanDevice:0 ms
2024-03-20T09:59:46.926Z cpu58:2099254)ScsiAdapter: 1185: Starting 1 completion worlds for adapter vmhba70
2024-03-20T09:59:46.926Z cpu58:2099254)Device: 395: psa_vport:driver->ops.attachDevice :0 ms
2024-03-20T09:59:46.926Z cpu58:2099254)Device: 400: Found driver psa_vport for device 0x6938430e2b044963
2024-03-20T09:59:46.926Z cpu58:2099254)Device: 685: psa_vport:driver->ops.startDevice:0 ms
2024-03-20T09:59:46.926Z cpu58:2099254)Device: 507: psa_vport:driver->ops.scanDevice:0 ms
2024-03-20T09:59:46.926Z cpu112:2488659)WARNING: ScsiNpiv: 1354: Created vport for world 2488396, vmhba64
2024-03-20T09:59:46.926Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T09:59:47.513Z cpu99:2099246)qlnativefc: vmhba70(81:0.0): hba identifier RPRT_CMD
2024-03-20T09:59:47.515Z cpu99:2099246)qlnativefc: vmhba70(81:0.0): Driver sending EDC and RDF for port 28:#c:##:#c:29:##:##:2c
2024-03-20T09:59:47.517Z cpu132:2099153)qlnativefc: vmhba70(81:0.0): SCM: EDC ELS completed
2024-03-20T09:59:47.518Z cpu132:2099153)qlnativefc: vmhba70(81:0.0): SCM: RDF ELS completed - SCM enabled
2024-03-20T09:59:48.926Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T09:59:50.925Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T09:59:51.512Z cpu170:2099185)qlnativefc: vmhba70(81:0.0): Fabric scan failed on all retries.
2024-03-20T09:59:52.925Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T09:59:54.924Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T09:59:56.924Z cpu112:2488659)ScsiNpiv: 1162: NPIV vport rescan complete, [2:9] (0x430bd12fea80) [0x430bd0a1be80] status=0xbad0040
2024-03-20T09:59:56.924Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T09:59:58.924Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T10:00:00.923Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T10:00:02.923Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T10:00:04.922Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T10:00:06.922Z cpu112:2488659)ScsiNpiv: 1162: NPIV vport rescan complete, [5:9] (0x430bd1839840) [0x430bd0a1be80] status=0xbad0040
2024-03-20T10:00:06.922Z cpu112:2488659)WARNING: ScsiNpiv: 1800: Failed to Create vport for world 2488396, vmhba64, rescan failed, status=bad0001
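
These events are written to the VMkernel log (/var/log/vmkernel.log by default) on the ESXi host. The host's physical FC adapters and their WWNs, which the SAN and storage teams need for the zoning and NPIV checks described below, can be listed with the following command (a minimal sketch; output fields vary by driver and release):

# List the host's FC adapters with their node and port WWNs
esxcli storage san fc list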


Environment

VMware vSphere 8.x
VMware vSphere 7.x

Cause

  • NPIV permits a single FC HBA port to register several unique World Wide Name (WWN) identifiers with the fabric, each of which can be assigned to a specific virtual machine. This capability allows a SAN administrator to monitor and manage storage access on a per-virtual machine basis.
  • Only virtual machines that have RDMs attached to them can have WWNs, and these assignments are used for all RDM traffic.
  • When a WWN is assigned to a virtual machine, the virtual machine's configuration file (.vmx) is updated to include a WWN pair, consisting of a World Wide Port Name (WWPN) and a World Wide Node Name (WWNN). When the virtual machine is powered on, the VMkernel creates a virtual port (VPORT) on the physical HBA to access and probe the LUN. After a successful probe, the VPORT, which appears to the FC fabric as a physical HBA, uses the WWN pair assigned to the virtual machine as its unique identifier (a sketch of how to confirm these .vmx entries appears after this list).
  • Each VPORT is unique to its virtual machine. The VPORT is removed on the host and no longer appears to the FC fabric when the virtual machine is powered off. When a virtual machine is migrated from one host to another, the VPORT is closed on the original host and opened on the destination host.
  • Virtual machines without WWN assignments access storage LUNs using the WWNs of their host’s physical HBAs.
  • However, if the probe fails because of a misconfiguration, the host logs events such as the following, indicating that it cannot create virtual paths:
2024-03-20T09:59:48.926Z cpu112:2488659)WARNING: ScsiPsaDriver: 1316: Failed adapter create path; vport:vmhba70 with error: bad0040
2024-03-20T10:00:06.922Z cpu112:2488659)WARNING: ScsiNpiv: 1800: Failed to Create vport for world 2488396, vmhba64, rescan failed, status=bad0001
  • Generally, this means that the NPIV code in the VMkernel is not able to find any devices on the VPORT.
  • There are a number of possible causes. To track down the cause, ensure that the following checks are performed by the teams responsible for the SAN and storage configuration (vendor or customer):
    • Check the zoning configuration on the switch to be sure the NPIV WWNs have correct access to the LUNs.
    • Check the switch port to be sure it has NPIV capability enabled.
    • Check the LUN’s HostID to be sure it matches the physical HBA and virtual HBA in the storage array.
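
For reference, the WWN pair described above can be confirmed in the virtual machine's .vmx file once it has been assigned. The following is a minimal sketch; the datastore path and VM name are placeholders, and the exact key/value formatting may differ between releases:

# From the ESXi host, show the WWN entries written to the VM's configuration file
grep -i wwn /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx

# Typical keys (values are illustrative placeholders, not real WWNs):
# wwn.node = "..."
# wwn.port = "..."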

Resolution

Prerequisites

If you’re considering enabling NPIV on your virtual machines, it’s important to be aware of the following prerequisites:

  • NPIV is exclusively applicable for virtual machines that have RDM disks presented to them. Virtual machines with standard virtual disks utilize the WWNs of the host’s physical HBAs.
  • Your host’s HBAs must be NPIV-compatible. For more details, consult the VMware Compatibility Guide and your vendor’s documentation.
  • Use HBAs of the same type. VMware does not support the use of heterogeneous HBAs on the same host to access the same LUNs.
  • If a host employs multiple physical HBAs as paths to the storage, all physical paths to the virtual machine should be zoned. This is necessary to facilitate multipathing, even though only one path will be active at a time.
  • Verify that the host's physical HBAs can see all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host (see the example after this list).
  • The fabric switches must be NPIV-aware.
  • When setting up a LUN for NPIV access at the storage level, ensure that the NPIV LUN number and NPIV target ID correspond to the physical LUN and Target ID.
  • Zone the NPIV WWPNs to connect to all storage systems accessible by the cluster hosts, even if the VM does not utilize the storage. If you introduce any new storage systems to a cluster with one or more NPIV-enabled VMs, include the new zones, so the NPIV WWPNs can detect the new storage system target ports.
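
As a quick host-side check for the LUN-visibility prerequisite above, the devices and paths seen by the physical HBAs can be listed from the ESXi host. A minimal sketch (the device identifier is a placeholder):

# List all storage devices visible to the host
esxcli storage core device list

# Show the paths (vmhbaX:C:T:L runtime names) for a specific LUN
esxcli storage core path list -d naa.<device-identifier>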

Capabilities

Discover the specific features of using NPIV with ESXi.

ESXi with NPIV offers the following capabilities:

  • NPIV is compatible with vMotion. When you employ vMotion to migrate a virtual machine, it maintains the assigned WWN.
  • If you transfer an NPIV-enabled virtual machine to a host that lacks NPIV support, the VMkernel reverts to utilizing a physical HBA for I/O routing.
  • If your FC SAN environment allows concurrent I/O on the disks from an active-active array, concurrent I/O to two different NPIV ports is also supported.

Limitations

However, when you employ ESXi with NPIV, the following limitations are present:

  • Without presenting RDMs to the Virtual Machine, NPIV will not function.
  • As NPIV technology is an extension of the FC protocol, it necessitates an FC switch and is incompatible with direct attached FC disks.
  • When you clone a virtual machine or template with a WWN assigned to it, the clones do not retain the WWN.
  • NPIV does not support Storage vMotion.
  • Disabling and then re-enabling the NPIV capability on an FC switch while virtual machines are operational can lead to an FC link failure and halt I/O.

Procedure

  • Select a virtual machine. It is recommended to keep the virtual machine powered off until the configuration is complete.
  • In the VM Hardware panel, click Edit Settings.
  • Click VM Options.
  • Click the Fibre Channel NPIV triangle to expand the NPIV options.
  • Uncheck the Temporarily Disable NPIV for this virtual machine check box.
  • Select an option for assigning WWNs.
    • To leave WWNs unchanged, select Leave unchanged.
    • To have vCenter Server or the ESXi host generate new WWNs, select Generate New WWNs.
    • To remove the current WWN assignments, select Remove WWN assignment.
  • Click OK.
  • Go to the VM settings and add an RDM disk.
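  • Optionally, confirm that the RDM mapping file points at the intended LUN (a minimal sketch; the descriptor path is a placeholder):
vmkfstools -q /vmfs/volumes/<datastore>/<vm-name>/<rdm-descriptor>.vmdk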
  • Validate that NPIV capability is enabled on the switch port:
switch:admin> portcfgshow 0
Area Number:              15
Speed Level:              AUTO(HW)
Fill Word:                0(Idle-Idle)
AL_PA Offset 13:          OFF
Trunk Port                ON
Long Distance             OFF
VC Link Init              OFF
Locked L_Port             OFF
Locked G_Port             ON
Disabled E_Port           OFF
ISL R_RDY Mode            OFF
RSCN Suppressed           OFF
Persistent Disable        OFF
NPIV capability           ON
QOS E_Port                AE
Port Auto Disable:        OFF
Rate Limit                OFF
EX Port                   OFF
Mirror Port               OFF
Credit Recovery           ON
F_Port Buffers            OFF
  • If it is not, enable NPIV on the port:
switch:admin> portCfgNPIVPort --enable 0
  • Add the VM's NPIV WWPN to the aliases of both the active and passive HBAs that have access to the RDM LUN.
switch:admin> aliadd "HBA-Alias-Name" "physical port wwn" "npiv wwpn"

alias: LAB_HBA
                2#:##:##:1#:9b:b1:ed:45; 28:#c:##:#c:29:##:##:2c
  • Save the zone configuration:
switch:admin> cfgsave
 
WARNING!!!
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.

Do you want to save the Defined zoning configuration only?  (yes, y, no, n): [yes]
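  • Optionally, verify that the alias now contains both the physical WWPN and the NPIV WWPN (Brocade FOS syntax matching the earlier example; adjust for other switch vendors):
switch:admin> alishow "LAB_HBA"
 alias: LAB_HBA
                2#:##:##:1#:9b:b1:ed:45; 28:#c:##:#c:29:##:##:2c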
  • Add an initiator under Access > Hosts > Add a Host:

  • Make sure the NPIV initiator WWN has been added for the initiator 
  • Make sure that the NPIV LUN number and NPIV target ID match the physical LUN and Target ID.
  • Once done, power on the VM.

Note: If the NPIV configuration is not done correctly, the VM will fall back to using the host's physical HBAs for RDM access instead of its virtual NPIV WWPNs.
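
After the VM powers on, the host side can be checked to confirm that the vport was created. A minimal sketch (the adapter names in the example logs above, such as vmhba70, will differ per environment):

# Check the VMkernel log for vport creation; a healthy configuration logs
# "Registering Vport Device done:[Success]" without the repeated
# "Failed adapter create path" warnings shown in the Symptoms section
grep -iE 'ScsiNpiv|Vport' /var/log/vmkernel.log

# The virtual adapter created for the vport may also appear in the adapter list
esxcli storage core adapter list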

Additional Information