Velero scheduled backup fails with "unknown time zone" error when using CRON_TZ

Article ID: 428155


Products

VMware vSphere Kubernetes Service

Issue/Introduction

When creating a Velero backup schedule that uses the CRON_TZ option to specify a time zone (for example, Asia/Tokyo), the Schedule fails with a validation error.

You observe the following behavior when creating a schedule:

# Example - New backup schedule
velero schedule create test-backup-5min --schedule="CRON_TZ=Asia/Tokyo */5 * * * *" --include-namespaces kube-system

# STATUS: FailedValidation
velero schedule get
#> NAME               STATUS             CREATED                         SCHEDULE
#> test-backup-5min   FailedValidation   YYYY-MM-DD hh:mm:ss +0000 UTC   CRON_TZ=Asia/Tokyo */5 * * * *

# Error message - "unknown time zone"
velero schedule describe test-backup-5min
#> Phase:  FailedValidation
#> Validation errors:  invalid schedule: provided bad location Asia/Tokyo: unknown time zone Asia/Tokyo
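
For comparison, the same schedule without a CRON_TZ prefix is not affected, because no time zone lookup is required; the cron expression is then evaluated in the Velero server's local time (UTC in the default container). The schedule name below is only an example:

# Example - equivalent schedule without CRON_TZ (validates successfully)
velero schedule create test-backup-5min-utc --schedule="*/5 * * * *" --include-namespaces kube-system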

Environment

vSphere Kubernetes Service - VKS Standard Package - Velero

Cause

The Velero container image shipped with the VKS Standard Package does not include the timezone definition files.
As a result, Velero cannot resolve the time zone specified with CRON_TZ in the schedule expression.

Reference: GitHub - velero schedule create still doesn't support timezone well
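
To confirm the missing timezone data on a live cluster, you can list the zoneinfo directory inside the running Velero container. This is an optional check that assumes the image ships a shell and ls (an image without them will reject the exec) and that the Velero Deployment runs in the velero namespace; adjust -n if yours differs:

# Optional check - the timezone directory is absent in the affected image
kubectl -n velero exec deploy/velero -- ls /usr/share/zoneinfo
# An error such as "No such file or directory" confirms the missing timezone definition files.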

Resolution

To resolve this issue, apply a YTT overlay to mount the worker node's timezone definition files (/usr/share/zoneinfo) into the Velero container.
This allows Velero to resolve the timezone specified in CRON_TZ.

1. Generate a YTT overlay file

cat > ytt-overlay-velero.yaml <<EOF
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "velero"}})
---
spec:
  template:
    spec:
      containers:
      #@overlay/match by=overlay.subset({"name": "velero"})
      - volumeMounts:
        #@overlay/append
        - mountPath: /usr/share/zoneinfo
          name: zoneinfo
          readOnly: true
      volumes:
      #@overlay/append
      - name: zoneinfo
        hostPath:
          path: /usr/share/zoneinfo
          type: Directory
EOF
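
Optionally, preview the rendered Deployment before applying the overlay. This sketch assumes the ytt CLI is installed on your workstation and that the Velero Deployment runs in the velero namespace; adjust -n if yours differs:

# Optional - render the overlay against the live Deployment manifest
kubectl -n velero get deploy velero -o yaml > velero-deploy.yaml
ytt -f velero-deploy.yaml -f ytt-overlay-velero.yaml | grep -B 2 -A 4 zoneinfo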

2. Identify the Package Namespace

kubectl get pkgi -A | grep -E '(NAME|velero)'
PACKAGE_NS=package-installed   # Replace with the NAMESPACE value shown for the velero PackageInstall
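
If you prefer not to copy the namespace by hand, it can also be captured with a jsonpath query. This assumes the PackageInstall is literally named velero:

PACKAGE_NS=$(kubectl get pkgi -A -o jsonpath='{.items[?(@.metadata.name=="velero")].metadata.namespace}')
echo ${PACKAGE_NS}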

3. Apply the YTT overlay

vcf -n ${PACKAGE_NS} package installed update velero --ytt-overlay-file ytt-overlay-velero.yaml

Reconciliation triggers automatically after the update. If the status does not change, trigger reconciliation manually:

vcf -n ${PACKAGE_NS} package installed kick velero

4. Verify the Status

kubectl -n ${PACKAGE_NS} get pkgi | grep -E '(NAME|velero)' # Reconcile succeeded
velero schedule get                                         # STATUS: Enabled
velero backup get                                           # Check the backup result
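
Optionally, confirm that the zoneinfo volume and mount were added to the rendered Deployment. The commands assume the Velero Deployment runs in the velero namespace; adjust -n if yours differs:

# Optional - both outputs should include the zoneinfo entries
kubectl -n velero get deploy velero -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[*].mountPath}{"\n"}'
kubectl -n velero get deploy velero -o jsonpath='{.spec.template.spec.volumes[*].name}{"\n"}'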

Additional Information

Optional - Reverting the Workaround

Once a Velero container image that includes the timezone definition files is released in a future VKS Standard Package version, remove the temporary YTT overlay to return the package to its standard configuration.

1. Set the environment variable

kubectl get pkgi -A | grep -E '(NAME|velero)'
PACKAGE_NS=package-installed   # Replace with the NAMESPACE value shown for the velero PackageInstall

2. Remove the Annotation

Remove the annotation that links the PackageInstall to the overlay secret.

kubectl -n ${PACKAGE_NS} edit pkgi velero
# In the editor, delete the following line under metadata.annotations:
#   ext.packaging.carvel.dev/ytt-paths-from-secret-name.kctrl-ytt-overlays: velero-package-installed-overlays
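
As a non-interactive alternative to kubectl edit, the same annotation can be removed with kubectl annotate; a trailing dash after the key deletes it:

kubectl -n ${PACKAGE_NS} annotate pkgi velero ext.packaging.carvel.dev/ytt-paths-from-secret-name.kctrl-ytt-overlays-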

3. Delete the Secret

kubectl -n ${PACKAGE_NS} get secrets velero-${PACKAGE_NS}-overlays
kubectl -n ${PACKAGE_NS} delete secrets velero-${PACKAGE_NS}-overlays

4. Trigger Reconciliation

Removing the annotation should automatically trigger a reconciliation. You can check the status or force a reconciliation if needed.

# Check status
vcf -n ${PACKAGE_NS} package installed status velero

# (Optional) Force reconciliation if status does not update
vcf -n ${PACKAGE_NS} package installed kick velero
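
Once the package has reconciled, you can optionally confirm that the zoneinfo volume is gone from the Deployment (again assuming the velero namespace):

kubectl -n velero get deploy velero -o jsonpath='{.spec.template.spec.volumes[*].name}{"\n"}'
# The zoneinfo volume should no longer be listed.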