Changing a LUN to use a different Path Selection Policy (PSP)

Article ID: 318544


Products

VMware vSphere ESXi

Issue/Introduction

Based on your storage array vendor's best practices, you have configured your ESX and ESXi hosts with the Round Robin Path Selection Policy (VMW_PSP_RR) for your storage array.

However, in some configurations, such as MSCS clusters using shared-storage RDMs (Raw Device Mappings), you must change the Path Selection Policy (PSP), because the Round Robin PSP is not supported for MSCS clustering prior to vSphere 5.5. For more information, see MSCS support enhancements in vSphere 5.5 (2052238).


Environment

VMware ESXi 4.1.x Embedded
VMware ESXi 4.0.x Embedded
VMware vSphere ESXi 6.5
VMware vSphere ESXi 6.0
VMware vSphere ESXi 5.0
VMware ESX 4.0.x
VMware ESX 4.1.x
VMware ESXi 4.1.x Installable
VMware vSphere ESXi 5.5
VMware ESXi 4.0.x Installable
VMware vSphere ESXi 5.1
VMware vSphere ESXi 6.7

Resolution

To work around the issue, change the PSP for the LUNs backing the MSCS cluster's RDMs to a PSP other than VMW_PSP_RR. The choice of PSP must match the Storage Array Type. For example, if the array is Active/Passive, do not use the Fixed PSP (VMW_PSP_FIXED); use the Most Recently Used PSP (VMW_PSP_MRU).
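
For example, to confirm which Storage Array Type Plugin (SATP) claims a device before choosing a PSP, you can run a quick check (a sketch using ESXi 5.x syntax and the example NAA ID from the steps below; the SATP shown is illustrative and varies by array):

    # esxcli storage nmp device list -d naa.6006016055711d00cff95e65664ee011 | grep "Storage Array Type:"
    Storage Array Type: VMW_SATP_CX

An Active/Passive SATP such as VMW_SATP_CX suggests that VMW_PSP_MRU is the appropriate choice.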

To change a LUN to use a different PSP:
  1. Log in to the ESX host's console directly or via SSH.
  2. Identify the RDM file name for each shared disk used by the virtual machines in the MSCS cluster nodes.

    Note: This step requires that the virtual machine be powered off first. If you do not power off the virtual machine, you receive the error: Device or resource busy. This applies to all virtual machines accessing the LUN.
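
    If you are unsure which virtual machines still hold the LUN open, on ESXi 5.x you can list the worlds using the device (a hedged check; the NAA ID here is the one identified in step 4 below):

    # esxcli storage core device world list -d naa.6006016055711d00cff95e65664ee011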

    # cd /vmfs/volumes/Clusters_datastore/node1
    # fgrep scsi1 *.vmx | grep fileName
    scsi1:0.fileName = "/vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk"
    scsi1:1.fileName = "/vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/data.vmdk"


    The last two lines of the output show the paths and file names of the two RDMs used as shared storage.

    The text in double quotes is used in later steps of this article.
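
    If the cluster has several nodes, a small shell loop can list the RDM entries from every node's .vmx file in one pass (a sketch assuming the directory layout from the example above; adjust the datastore and folder names to your environment):

    # for vmx in /vmfs/volumes/Clusters_datastore/*/*.vmx; do
    >   echo "== $vmx =="
    >   grep scsi1 "$vmx" | grep fileName
    > done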
     
  3. Identify the logical device name mapped by the first RDM (the quorum disk above).

    # vmkfstools -q /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk
    Disk /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk is a Passthrough Raw Device Mapping
    Maps to: vml.02000100006006016055711d00cff95e65664ee011524149442035

     
  4. Identify the LUN ID (NAA ID) represented by the logical device name (prefixed with vml in the output above).

    # esxcfg-scsidevs -l -d vml.02000100006006016055711d00cff95e65664ee011524149442035 | grep Display
    Display Name: DGC Fibre Channel Disk (naa.6006016055711d00cff95e65664ee011)
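
    Steps 3 and 4 can also be combined in one pass. The sketch below captures the vml identifier into a shell variable (VML is a hypothetical variable name introduced here for illustration):

    # VML=$(vmkfstools -q /vmfs/volumes/4d8008a2-9940968c-04df-001e4f1fbf2a/node1/quorum.vmdk | sed -n 's/^Maps to: //p')
    # esxcfg-scsidevs -l -d "$VML" | grep Display
    Display Name: DGC Fibre Channel Disk (naa.6006016055711d00cff95e65664ee011)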
     
     
  5. Using the NAA ID identified in step 4, list the current PSP used by the LUN.

    In ESXi/ESX 4.1, run this command:

    # esxcli nmp device list -d naa.6006016055711d00cff95e65664ee011 | grep PSP
    Path Selection Policy: VMW_PSP_RR

    For ESXi 5.x, the equivalent is:

    # esxcli storage nmp device list -d naa.6006016055711d00cff95e65664ee011 | grep PSP

    This output shows that the Path Selection Policy is VMW_PSP_RR.
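
    To review the PSP of every device on the host at once (a convenience sketch, ESXi 5.x syntax):

    # esxcli storage nmp device list | grep -E '^naa|Path Selection Policy:'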
     
  6. Change the policy to one appropriate for the array type. This example uses VMW_PSP_MRU.

    In ESXi/ESX 4.1, run this command:

    # esxcli nmp device setpolicy -d naa.6006016055711d00cff95e65664ee011 --psp=VMW_PSP_MRU

    In ESXi 5.x, run this command:

    # esxcli storage nmp device set -d naa.6006016055711d00cff95e65664ee011 --psp=VMW_PSP_MRU
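
    If several LUNs need the same change, a shell loop can apply it to each device in turn (a sketch using ESXi 5.x syntax; the second NAA ID is a hypothetical placeholder for another RDM LUN):

    # for dev in naa.6006016055711d00cff95e65664ee011 naa.60060160XXXXXXXXXXXXXXXXXXXXXXXX; do
    >   esxcli storage nmp device set -d "$dev" --psp VMW_PSP_MRU
    > done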
     
  7. Verify that the change was successful.

    In ESX/ESXi 4.1, run this command:

    # esxcli nmp device list -d naa.6006016055711d00cff95e65664ee011 | grep PSP
    Path Selection Policy: VMW_PSP_MRU

    In ESXi 5.x, run this command:

    # esxcli storage nmp device list -d naa.6006016055711d00cff95e65664ee011 | grep PSP
    Path Selection Policy: VMW_PSP_MRU
     
  8. Repeat Steps 3 through 7 for each RDM identified in Step 2.
  9. Repeat the above steps for each node in the cluster, on every other ESXi/ESX host on which the cluster nodes run.