Error "Resource List: <error getting backup resource list: request failed:" is received during velero back up failures

Article ID: 422505


Updated On:

Products

VMware Tanzu Kubernetes Grid Management

Issue/Introduction

  • Velero backups do not complete successfully

  • Running kubectl describe backuprepository -n <backuprepository namespace> <backuprepository name> shows the following error:

     Message:          failed to prune repo: error to connect backup repo: error to connect to storage: error retrieving storage config from bucket "<backend storage location>": The Access Key Id you provided does not exist in our records.
   Result:            Failed
   Start Timestamp:   2025-11-21T10:45:08Z

  • The Velero logs contain errors similar to the following (a command to locate them is shown after the log entry):

    time="2025-11-21T15:44:20+05:30" level=error msg="Unable to download tarball for backup daily-backup-7dret-20251024030002, skipping associated DeleteItemAction plugins" backup=daily-backup-7dret-20251024030002 controller=backup-deletion deletebackuprequest=velero/daily-backup-7dret-20251024030002-d7v9r error="error copying Backup to temp file: rpc error: code = Unknown desc = error getting object backups/daily-backup-7dret-20251024030002/daily-backup-7dret-20251024030002.tar.gz: InvalidAccessKeyId: The Access Key Id you provided does not exist in our records.\n\tstatus code: 403, request id: 1879FE0D01AE0C1D, host id: ########################################" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/restore_controller.go:840" error.function=github.com/vmware-tanzu/velero/pkg/controller.downloadToTempFile logSource="pkg/controller/backup_deletion_controller.go:265"

 

 

Environment

2.x 

Cause

  • The credentials Velero uses to access the backend storage location are not valid.
  • The Access Key ID (the "username" part of the credential) is incorrect or has been deleted on the storage side.
  • This typically happens when the storage keys were rotated but not updated in the Kubernetes Secret, or when there is a typo in the Secret definition. A quick status check is sketched below.
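
    To confirm this, check the phase of the backup storage location; when the credentials are rejected, it typically reports Unavailable. The commands below are a sketch and assume the default velero namespace.

    # Check the BackupStorageLocation phase (Available/Unavailable)
    kubectl get backupstoragelocation -n velero

    # Or, using the Velero CLI
    velero backup-location get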

Resolution

  1. Identify the Secret that Velero is currently using

    kubectl get deployment velero -n <namespace> -o jsonpath='{.spec.template.spec.volumes[?(@.name=="cloud-credentials")].secret.secretName}'
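
     Note: If the BackupStorageLocation explicitly references a credential, that Secret takes precedence over the one mounted in the Deployment. As an additional check (the spec.credential field is available in recent Velero versions), run:

     kubectl get backupstoragelocation -n <namespace> -o jsonpath='{.items[*].spec.credential}'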

  2. Verify the Secret stored in Kubernetes

     kubectl get secret cloud-credentials -n <namespace> -o jsonpath="{.data.cloud}" | base64 -d

  3. Look for the following values in the output and confirm whether they still match the credentials that are currently valid on the backend storage

    aws_access_key_id = <CHECK-THIS-VALUE>
    aws_secret_access_key = <CHECK-THIS-VALUE>
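
     Optional: if the backend storage is S3-compatible and the aws CLI is available, the key pair can be tested directly against the bucket before changing anything in the cluster. The bucket name and endpoint URL below are placeholders.

     AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> \
       aws s3 ls s3://<bucket-name> --endpoint-url <storage-endpoint-url>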

  4. Back up the existing Secret

    kubectl get secret cloud-credentials -n <namespace> -o yaml > velero-secret-backup.yaml

  5. Create a new file named "credentials-velero" containing the correct credentials, as below:

    [default]
    aws_access_key_id = CORRECT_ACCESS_KEY
    aws_secret_access_key = CORRECT_SECRET_KEY

  6. Delete the existing Secret

    kubectl delete secret cloud-credentials -n <namespace>

  7. Recreate the secret using the updated credentials 

    kubectl create secret generic cloud-credentials --from-file=cloud=credentials-velero -n <namespace>
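
     Note: as an alternative to steps 6 and 7, the Secret can be updated in place by piping a client-side dry-run manifest into kubectl apply, which avoids the brief window in which no Secret exists:

     kubectl create secret generic cloud-credentials --from-file=cloud=credentials-velero \
       -n <namespace> --dry-run=client -o yaml | kubectl apply -f -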

  8. Restart the Velero Pods

    kubectl delete pod -l component=velero -n <namespace>
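
     Deleting the pods is sufficient because the Velero Deployment recreates them with the updated Secret. If a node-agent (or restic) DaemonSet is also deployed and mounts the same credentials, restart those pods as well; the label below assumes a default Velero installation:

     kubectl delete pod -l name=node-agent -n <namespace>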

  9. Check if the Velero pods are recreated and are running

    kubectl get pods -n <namespace>

  10. Run a backup and verify that it completes successfully

    velero backup create test-backup --from-schedule <your-schedule-name>
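
     The backup status and logs can then be reviewed with the Velero CLI, assuming the backup name used above:

     velero backup describe test-backup --details
     velero backup logs test-backup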