# velero backup get
NAME          STATUS   ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
BackupName1   Failed   0        0          2025-04-19 12:50:58 +0530 MST   45m       default            <none>
BackupName1   Failed   0        0          2025-04-20 12:50:58 +0530 MST   45m       default            <none>
# kubectl describe backup <backup-name> -n velero
Name:          Failed-backup-name
Namespace: velero
Failure Reason: error checking if backup already exists in object storage: rpc error: code = Unknown desc = operation error S3: HeadObject, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Head "https://{TARGET_LOCATION_URL}/{TARGET_BUCKET}/01DFC1T###########7D5P6Y7VJ/backups/Failed-backup-name/velero-backup.json": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-04-21T09:30:10Z is after 2025-04-18T19:59:39Z
# kubectl describe deletebackuprequests -n velero deletebackuprequests-Name
Name:         deletebackuprequests-Name
Namespace:    velero
Labels:       velero.io/backup-name=backup-name
              velero.io/backup-uid=55######-####-###-####-#########1f
Annotations:  <none>
API Version:  velero.io/v1
Kind:         DeleteBackupRequest
Status:
  Errors:
    error getting backup's volume snapshots: rpc error: code = Unknown desc = operation error S3: HeadObject, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Head "https://{TARGET_LOCATION_URL}/{TARGET_BUCKET}/01H#############1E7VJ/backups/BACKUP_NAME-full-20250321233025/BACKUP_NAME-full-20250321233025-volumesnapshots.json.gz": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-04-22T04:19:07Z is after 2025-04-18T19:59:39Z
    error to connect backup repo: error to connect to storage: error retrieving storage config from bucket "TARGET_BUCKET": Get "https://{TARGET_LOCATION_URL}/{TARGET_BUCKET}/1H#############1E7VJ/kopia/{NameSpace}/.storageconfig": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-04-22T04:19:15Z is after 2025-04-18T19:59:39Z

# kubectl get backupstoragelocations.velero.io -n velero -o yaml
"apiVersion": "velero.io/v1",
"kind": "BackupStorageLocation",
"objectStorage": {
    "bucket": "{TARGET_BUCKET}",
    "caCert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0F####################################Jd0xraVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t",

- Decode the base64-encoded caCert value using the following command:

# echo LS0tLS1CRUdJTiBDRVJUSUZJQ0F####################################Jd0xraVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t | base64 -d | openssl x509 -noout -dates
notBefore=Apr 18 19:59:39 2024 GMT
notAfter=Apr 18 19:59:39 2025 GMT

Note: The preceding log excerpts are only examples. Date, time, and environment-specific values may vary depending on your environment.
VMware Tanzu Mission Control (TMC)
VMware Tanzu Mission Control - Self Managed (TMC-SM)
VMware Tanzu Kubernetes Grid Management (TKGm)
The root cause is an expired custom CA certificate configured for the Backup Target Location used by Velero. When Velero checks backup metadata or connects to the backup repository, the TLS handshake fails because the certificate has expired.
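A quick way to see which certificate the object storage endpoint presents, and its validity dates, is to query it directly with openssl. This is only a sketch, assuming the endpoint referred to as {TARGET_LOCATION_URL} is reachable on port 443 from the host where the command is run:

# openssl s_client -connect {TARGET_LOCATION_URL}:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

This prints the subject and notBefore/notAfter dates of the server certificate; the custom CA certificate configured in Velero itself is checked in the steps below.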
- Get the BackupStorageLocation resources in YAML format:
# kubectl get backupstoragelocations.velero.io -n velero -o yaml
"apiVersion": "velero.io/v1", "kind": "BackupStorageLocation",
objectStorage": { "bucket": "{TARGET_BUCKET}", "caCert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0F####################################Jd0xraVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t",
- Decode the base64-encoded caCert value using the following command:
# echo LS0tLS1CRUdJTiBDRVJUSUZJQ0F####################################Jd0xraVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t | base64 -d | openssl x509 -noout -dates
notBefore=<New Date> GMT
notAfter=<New Date> GMT
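If the decoded dates show an expired certificate, the caCert value on the BackupStorageLocation has to be replaced with a renewed CA certificate. The commands below are a minimal sketch, assuming the renewed certificate has been saved locally as new-ca.crt and the BackupStorageLocation is named default (both names are placeholders; adjust them to your environment):

# base64 -w 0 new-ca.crt
# kubectl patch backupstoragelocation default -n velero --type merge -p '{"spec":{"objectStorage":{"caCert":"<output of the base64 command above>"}}}'

After updating the caCert, re-run the decode command above to confirm that notBefore and notAfter now reflect the renewed certificate's validity period.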