Virtual Volumes - Empty VP URL for VP

Article ID: 312670


Products

VMware vCenter Server
VMware vSphere ESXi

Issue/Introduction

Symptoms:
From the vCenter HTML Client, Protocol Endpoints are absent.

The host does not display certificate information.

The vVol datastore may report as inaccessible for the ESXi host.

 

[root@ESXI:~]# esxcli storage vvol vasaprovider list

PureNorth-ct0

   VP Name: VPNAME-ct0

   URL: https://x.x.x.x:<port>

   Status: syncError

   Arrays:

         Array Id: com.vpstorage:caxx72ba-d1e3-zxd6a-9a8f-388982afd324

         Is Active: true

         Priority: 200


In the vvold.log, you see errors similar to:
 

--> PeerThumbprint: XX:F2:X8:7D:EX:49:57:XX:D2:XX:59:CA:93:64:61:B1:13:FC:X:17

--> ExpectedThumbprint:                                     

--> ExpectedPeerName: x.x.x.x >>>>><Storage_IP>

--> The remote host certificate has these problems:

-->

--> * unable to get local issuer certificate, using default

<Timestamp> info vvold[2099841] [Originator@6876 sub=Default] VasaSession::Initialize url is empty

<Timestamp> warning vvold[2099841] [Originator@6876 sub=Default] VasaSession::DoSetContext: Empty VP URL for VP (VPNAME-ct0)!

<Timestamp> info vvold[2099841] [Originator@6876 sub=Default] Initialize: Failed to establish connection https://x.x.x.x:8XX4

<Timestamp> error vvold[2099841] [Originator@6876 sub=Default] Initialize: Unable to init session to VPNAME-ct0 state: 0

<Timestamp> info vvold[2099817] [Originator@6876 sub=Default] VasaSession::GetEndPoint: with url  https://x.x.x.x:8XX4

<Timestamp> warning vvold[2099817] [Originator@6876 sub=Default] VasaSession::GetEndPoint: failed to get endpoint, err=SSL Exception: Verification parameters:

--> PeerThumbprint: XX:XX:XX:7D:E7:XX:57:XX:D2:47:59:XX:93:XX:61:X1:13:FC:EB:17

--> ExpectedThumbprint:                                      

--> ExpectedPeerName:  x.x.x.x

--> The remote host certificate has these problems:

-->

--> * unable to get local issuer certificate, using default

------------------------------------

<Timestamp> info vvold[2099843] [Originator@6876 sub=Default] CacheManager::CacheCleanUp [FriendlyNameCache] Periodic cache hits:0 Periodic cache calls:0 Periodic cache hit rate:0 %total cache hits:0 total cache calls:0 total cache hit rate:0 % maxLifetimeCap:false

 

In the vCenter's /var/log/vmware/vmware-sps/sps.log, you will see errors such as:
 
      <Timestamp> [Thread-9] ERROR opId=sps-Main-44515-482 com.vmware.vim.sms.provider.vasa.VasaProviderImpl -SetContext failed!^M
      com.vmware.vim.sms.fault.VasaServiceException: org.apache.axis2.AxisFault: Connection has been shutdown:
       javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path validation failed:
       java.security.cert.CertPathValidatorException: timestamp check failed
VasaServiceException: org.apache.axis2.AxisFault: Connection reset
VasaServiceException: org.apache.axis2.AxisFault: Connection refused (Connection refused)


Environment

VMware vCenter Server 8.0
VMware vSphere ESXi 6.7
VMware vCenter Server Appliance 6.5.x
VMware vCenter Server 6.7.x
VMware vCenter Server 7.0.x
VMware vSphere ESXi 8.0
VMware vSphere ESXi 6.5
VMware vSphere ESXi 7.0.0

Cause


vCenter is unable to push certificates from TRUSTED_ROOTS & TRUSTED_ROOT_CRLS to the ESXi host because:

1. The vCenter certificate mode vpxd.certmgmt.mode is set to Thumbprint.

2. The ESXi host parameter Config.HostAgent.ssl.keyStore.allowSelfSigned is set to false.

3. The Storage/VASA certificate presented to vCenter is incorrectly signed or is missing the Subject Alternative Name (SAN) and/or Common Name (CN).

NOTE: In vSphere 8.0, the default setting of Config.HostAgent.ssl.keyStore.allowSelfSigned is false.



Alternatively, vCenter may have failed to push the TRUSTED_ROOTS & TRUSTED_ROOT_CRLS to the ESXi host due to a temporary connectivity issue or another cause.


There are two reasons for the VASA provider to show syncError:

1. The vvold service on the ESXi host has the old certificate cached and is failing to authenticate with the VASA Provider.

2. vCenter is unable to push the trusted CA certificates to the existing or newly added ESXi hosts.

When the host attempts to authenticate with the VP (VASA Provider), it cannot locate the VP's issuer certificate; because that issuer certificate is missing, the VP URL is never initialized and the host cannot authenticate with the VASA Provider.

Resolution




There are several types of certificates to choose from when creating a Storage provider/VASA certificate. A storage provider is a software component that is offered by VMware or developed by a third party through vSphere APIs for Storage Awareness (VASA); it can also be called a VASA provider. Storage providers integrate with various storage entities, including external physical storage and storage abstractions such as vSAN and Virtual Volumes. Storage providers can also support software solutions, for example, I/O filters.

Please determine the type of certificate you are using (self-signed/intermediate/custom) and then set the vpxd.certmgmt.mode in vCenter to vmca OR custom.  

It's important to check this because it determines the changes we make to the setting vpxd.certmgmt.mode in vCenter. Please read Change the Certificate Mode to understand what this means. 

Two important settings impact the vVol PE: vpxd.certmgmt.mode in vCenter and Config.HostAgent.ssl.keyStore.allowSelfSigned on the ESXi host.

 
Config.HostAgent.ssl.keyStore.allowSelfSigned = False    Only CA-signed certificates can be added to the ESXi trust store; non-CA (non-CRL signed) self-signed certificates, that is, certificates that do not have the CA bit set, are not accepted.
Config.HostAgent.ssl.keyStore.allowSelfSigned = True     Allows the ESXi host to accept any certificate in the trust store (both non-CA and CA).
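If you prefer the CLI, the current value can also be read directly on the host with esxcli; a quick check, assuming the advanced-option path mirrors the setting name:

esxcli system settings advanced list -o /Config/HostAgent/ssl/keyStore/allowSelfSigned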

NOTE: Change the value of vpxd.certmgmt.mode to custom if you intend to manage your own certificates, and to thumbprint if you temporarily want to use thumbprint mode. vSphere 5.5 used thumbprint mode, and this mode is still available as a fallback option for vSphere 6.x. In this mode, vCenter Server checks that the certificate is formatted correctly, but does not check the validity of the certificate. Even expired certificates are accepted.

Do not use thumbprint mode unless you encounter problems that you cannot resolve with one of the other two modes (vmca or custom). Some vCenter 6.x and later services might not work correctly in thumbprint mode. Thumbprint mode is used when you do not want vCenter to push certificates to the ESXi hosts and instead replace the ESXi host certificates manually yourself.

Certificate Management for ESXi Hosts

Certificate Mode Switch Workflows 

1. Check connectivity from the Host to the Storage array - nc -zv <VASA/Array IP> <port>
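Example (assuming the array's VASA service listens on port 8443, as in the openssl example below): nc -zv x.x.x.x 8443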

2. Check connectivity from the vCenter to the Storage array - curl -v telnet://<VASA/Array IP>:<port>
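Example (same port 8443 assumption): curl -v telnet://x.x.x.x:8443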

3. Check the certificate presented by the Storage array to the vCenter by running the following command from the vCenter CLI (SSH).

openssl s_client -connect <Array IP>:<Port #>

Example: openssl s_client -connect X.X.X.X:8443

4. Check certificate validity and completeness using the SSLShopper website.

NOTE: Make sure the certificate contains both the Common Name (CN) and a Subject Alternative Name (SAN).
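As an alternative to the website, the same fields can be inspected directly from the vCenter CLI; a minimal check, assuming port 8443 as in the example above:

echo | openssl s_client -connect x.x.x.x:8443 2>/dev/null | openssl x509 -noout -subject -dates

echo | openssl s_client -connect x.x.x.x:8443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"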


5. First check the Storage Providers in vCenter for the array and vVol datastore in question and ensure they are both online and in sync. If the storage providers are offline or out of sync, try to delete/unregister the providers and then manually register both the CT0 and CT1 Storage Providers.

NOTE: Removing and registering storage providers in vCenter does not impact the vVol data path. Existing VMs running on the vVol datastore will continue to run.

Refresh the Certificates for both Storage Providers (Standby/Active)

6. When the VASA provider is in syncError and the PE is not showing up in the esxcli storage vvol protocolendpoint list output or in the host GUI, first refresh the storage provider certificates, run the vvold ssl_reset, and restart the vvold service. Resetting and restarting the vvold service does not impact the host services and can be executed without putting the host in maintenance mode.

/etc/init.d/vvold ssl_reset && /etc/init.d/vvold restart (Not required for ESXi 7.0.3)
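After the restart, re-check the provider state and PE visibility on the host, for example:

esxcli storage vvol vasaprovider list

esxcli storage vvol protocolendpoint list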

7. Renew and refresh CA certificates on each host showing 'Empty VP URL for VP' in the vvold.log.
 

8. From the vCenter HTML client, go to Configure > Advanced Settings, check the vpxd.certmgmt.mode value, and change it to vmca OR custom depending on the type of certificate you are using for vCenter.


From the ESXi host CLI and the vCenter HTML client, follow these steps respectively when applicable:

 

  • On the ESXi host via CLI, run cd /etc/vmware/ssl and then ls -lah
  • In the vCenter HTML client, right-click the host with issues and go to Certificates > Refresh CA
  • You may get an error similar to: Task failed: A general system error occurred: Unable to push CA certificates and CRLs to host <host name>
  • Go to the ESXi host in vCenter > Advanced System Settings and change the parameter Config.HostAgent.ssl.keyStore.allowSelfSigned to true/false based on the type of certificates. Please check the table above to understand what each option means.
  • In the vCenter HTML client, right-click the host and go to Certificates > Refresh CA
  • On the ESXi host via CLI (SSH), run the command less /etc/vmware/ssl/castore.pem
  • Go back to the vCenter HTML client, right-click the affected ESXi host, and select Renew Certificates
  • Run less /etc/vmware/ssl/castore.pem again to check whether it actually refreshed (see the example after this list).
  • Check whether the Protocol Endpoint(s) are visible now; if not, reboot the host.
  • If the PE is visible, try adding or mounting the vVol datastore.
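To confirm the refresh without reading the full PEM, the certificates in the store can be counted and their subjects listed; a quick check, assuming the openssl binary shipped with ESXi:

grep -c "BEGIN CERTIFICATE" /etc/vmware/ssl/castore.pem

openssl crl2pkcs7 -nocrl -certfile /etc/vmware/ssl/castore.pem | openssl pkcs7 -print_certs -noout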


If the PEs are still not visible after following the steps above, check for and remove expired or expiring certificates.

Manually reviewing certificates in VMware Endpoint Certificate Store for vSphere 6.x and 7.x (2111411)


root@vCenter [  ]# /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store TRUSTED_ROOTS --text | egrep 'Alias|Key Usage' -A 1 | grep -v "Entry type"  
 

Removing Expired CA Certificates from the TRUSTED_ROOTS store in the VMware Endpoint Certificate Store(VECS) (2146011)
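Per the KB above, an expired entry can then be removed from the TRUSTED_ROOTS store by its alias (the alias value comes from the vecs-cli entry list output shown earlier); for example:

/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store TRUSTED_ROOTS --alias <alias_of_expired_certificate>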

NOTE: Offline IOFilter providers showing under Storage Providers in vCenter (belonging to ESXi hosts) do not impact Protocol Endpoint visibility.



Additional Information


Adding an ESXi host to vCenter Server after upgrade to ESXi 6.7 Update 3 and later versions fail (78017)  

ESXi 6.7 U3 or later host newly added to vCenter is unable to access vVOl datastore (79958) 
  

Seamless upgrade to VASA 5 for VMware Virtual Volume with backward compatibility (91387)

Virtual Volumes datastore is inaccessible after moving to another vCenter Server or refreshing CA certificate (67744)

"A general system error occurred: Unable to push CA certificates and CRLs to host", Adding or Reconnecting 6.7 ESXi host to vCenter Server fails (74756)

VMware ESXi 8.0 Update 2 Release Notes 

If you update your vCenter to 8.0 Update 1, but ESXi hosts remain on an earlier version, vSphere Virtual Volumes datastores on such hosts might become inaccessible

Self-signed VASA provider certificates are no longer supported in vSphere 8.0 and the configuration option Config.HostAgent.ssl.keyStore.allowSelfSigned is set to false by default. If you update a vCenter instance to 8.0 Update 1 that introduces vSphere APIs for Storage Awareness (VASA) version 5.0, and ESXi hosts remain on an earlier vSphere and VASA version, hosts that use self-signed certificates might not be able to access vSphere Virtual Volumes datastores or cannot refresh the CA certificate.

Workaround: Update hosts to ESXi 8.0 Update 1. If you do not update to ESXi 8.0 Update 1, see VMware knowledge base article 91387.


Unable to remove the storage provider or is greyed out in vCenter (57863)

Impact/Risks:

When an event impacts the existing connection to the Management Path, there is no disruption to the I/O coming from any currently running vVol VMs. The disruption comes into play with the VASA communication between the ESXi hosts, vCenter, and the storage array. This disruption will prevent powered-off vVol VMs from being powered on; vSphere vMotion and Storage vMotion operations will fail; vVol VMs won't be able to have their settings edited or updated; no new VMs can be created on the vVol datastore; and the vVol datastore will show as inaccessible in the vCenter UI. Essentially, no reconfigurations will be possible until the management path comes back online.

Examples include:

1. Creating new virtual disks.
2. Resizing virtual disks.
3. Deleting virtual disks.
4. Assigning a storage policy to a VM or virtual disk.
5. Powering off a VM.
6. Powering on a VM.
7. Moving a VM.
 

This issue is being checked by Diagnostics for VMware Cloud Foundation.

The check is as follows (a manual grep equivalent is shown after the list):

  • Product: ESXi
  • Log File: vvold.log
  • Log Expression Check "warning" AND "VasaSession::GetEndPoint: failed to get endpoint"
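To run the same check manually on a host, the expression can be searched for in the log; a simple sketch, assuming the default log location on ESXi:

grep warning /var/log/vvold.log | grep "VasaSession::GetEndPoint: failed to get endpoint"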