"Certificate validation failed during pre-upgrade check" while upgrading vCenter Server - "Regenerate certificates for SSO and try again"
Article ID: 324980
Products
VMware vCenter Server 7.0
VMware vCenter Server 8.0
Issue/Introduction
Symptoms:
Pre-upgrade check fails with error "Certificate validation failed during pre-upgrade check"
The /var/log/vmware/upgrade/requirements-upgrade-runner.log file on the vCenter Server contains entries similar to the following:
'description': {'id': 'upgrade.sso.precheck.error.description', 'translatable': 'Certificate has expired', 'localized': 'Certificate has expired'}, 'problemId': None, 'resolution': {'id': 'upgrade.sso.precheck.error.resolution', 'translatable': 'Regenerate certificates for sso and try again', 'localized': 'Regenerate certificates for sso and try again'}}]}},
Pre-check fails when upgrading from vCenter 7 to 8
Legacy Lookup service certificate is "Expired"
Cause
Pre-upgrade checks, introduced in vCenter Server 7.0 Update 1 and later, identify known SSO issues on the vCenter Server Appliance before the upgrade. The check fails because the legacy Lookup Service certificate on port 7444, stored in the STS_INTERNAL_SSL_CERT store, has expired. The expired certificate can be replaced with the machine certificate from the MACHINE_SSL_CERT store, either manually or with the vCert utility, which restores proper communication between vCenter Server and its services; see the Resolution section below.
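To confirm the state of the two stores, the entries and their expiration dates can be listed from an SSH session on the appliance. The following is a read-only sketch using the vecs-cli utility:

# Expiration of the entry used by the legacy Lookup Service (port 7444)
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store STS_INTERNAL_SSL_CERT --text | grep -iE "Alias|Not After"

# Expiration of the machine SSL certificate, for comparison
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store MACHINE_SSL_CERT --text | grep -iE "Alias|Not After"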
Resolution
There are two available options to address this issue:
OPTION 1:
To resolve this issue, replace the certificate in the STS_INTERNAL_SSL_CERT store with the machine certificate from the MACHINE_SSL_CERT store.
Process to replace the STS_INTERNAL_SSL_CERT certificate with the machine certificate from the MACHINE_SSL_CERT store:
Check whether the Lookup Service certificate is expired by running this command: openssl s_client -connect <PSC/VCSA-FQDN/IP>:7444 | less
From the above output, copy the contents starting with "-----BEGIN CERTIFICATE-----" through "-----END CERTIFICATE-----".
Save the contents to a file named 7444-lookup.txt, then rename the file to 7444-lookup.crt.
Open the 7444-lookup.crt file and check whether the certificate is valid or expired.
If it is expired, proceed to Step 2. A quicker way to read the validity dates without saving a file is shown in the sketch below.
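This sketch reads the validity dates directly from the live endpoint; the only assumption is that openssl is available on the machine it is run from (it is present on the vCenter Server Appliance by default):

# Print only the validity dates of the certificate presented on port 7444
echo | openssl s_client -connect <PSC/VCSA-FQDN/IP>:7444 2>/dev/null | openssl x509 -noout -dates

If the notAfter date is in the past, the certificate has expired.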
Replace the expired 7444 certificate stored in the STS_INTERNAL_SSL_CERT store with the machine certificate from the MACHINE_SSL_CERT store by running the commands below one by one:
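A typical replacement sequence using the vecs-cli utility looks like the following sketch. It assumes the default __MACHINE_CERT alias in both stores; confirm the alias on your system with vecs-cli entry list, and take a snapshot of the appliance before making changes.

# Export the current machine SSL certificate and key from MACHINE_SSL_CERT
/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store MACHINE_SSL_CERT --alias __MACHINE_CERT --output /var/tmp/machine-ssl.crt
/usr/lib/vmware-vmafd/bin/vecs-cli entry getkey --store MACHINE_SSL_CERT --alias __MACHINE_CERT --output /var/tmp/machine-ssl.key

# Remove the expired entry from STS_INTERNAL_SSL_CERT
/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store STS_INTERNAL_SSL_CERT --alias __MACHINE_CERT -y

# Re-create the entry using the machine SSL certificate and key
/usr/lib/vmware-vmafd/bin/vecs-cli entry create --store STS_INTERNAL_SSL_CERT --alias __MACHINE_CERT --cert /var/tmp/machine-ssl.crt --key /var/tmp/machine-ssl.key

# Restart services so the Lookup Service on port 7444 presents the new certificate
service-control --stop --all && service-control --start --all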
Re-run the openssl command from Step 1 to validate that the Lookup Service certificate is now valid.
Then retry the vCenter Server upgrade.
If the Lookup Service certificate is valid and the issue persists, check the certificates in VECS as well as the SSO endpoints for possible expiration and replace any that have expired; a read-only check of every VECS store is shown in the sketch below.
To re-check the Lookup Service certificate, run the following command again: openssl s_client -connect <PSC/VCSA-FQDN/IP>:7444 | less
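A quick way to review every VECS store at once is a read-only loop such as the following sketch; it only lists aliases and expiration dates and changes nothing:

# Print the alias and expiration date of every entry in every VECS store
for store in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list); do
    echo "--- Store: $store ---"
    /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store "$store" --text | grep -iE "Alias|Not After"
done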
OPTION 2:
Remove the stale port 7444 registrations by running the lsdoctor tool with the stalefix option (python lsdoctor.py -s), as outlined in the steps and the command sketch below:
Verify that you have taken the appropriate snapshots
Copy and extract lsdoctor to the filesystem of the affected node
Run "python lsdoctor.py -s"
Provide the password for your SSO administrator account when prompted
Once the script completes, restart all services: service-control --stop --all && service-control --start --all
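Assuming the lsdoctor archive was extracted to /root/lsdoctor-master (an example path; use the directory you extracted it to), the complete sequence looks like this sketch:

# Change to the directory where lsdoctor was extracted (example path)
cd /root/lsdoctor-master

# Run the stalefix option; the script prompts for the SSO administrator password
python lsdoctor.py -s

# After the script completes, restart all vCenter services
service-control --stop --all && service-control --start --all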
Note: Re-register any external solutions that were previously pointed to the affected node (SRM, vSphere Replication, NSX-V, etc. – See product documentation for instructions)
If this article did not resolve your issue, raise a case with Broadcom Support.
Note: The latest version of lsdoctor updates STS_INTERNAL_SSL_CERT automatically. If an older version of lsdoctor.py was used, follow the manual steps below to remove STS_INTERNAL_SSL_CERT from VECS.
Remove STS_INTERNAL_SSL_CERT from VECS via shell script and SSH: