kubectl get pod -n capi-system | grep capi-controller-manager
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
capi-system   capi-controller-manager-86f86fb9df-fhrwz   1/2     CrashLoopBackOff   14         65m
kubectl logs -n capi-system capi-controller-manager-86f86fb9df-fhrwz -c manager
runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
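If you want more context on the crash loop before changing anything, describing the pod shows its recent events and restart history. This is a standard kubectl check, not specific to this fix:
kubectl describe pod -n capi-system capi-controller-manager-86f86fb9df-fhrwz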
Make a backup of your Velero deployment:
kubectl get deployment velero -n velero -o yaml > velero_deploy-bak.yaml
You can then patch the Velero deployment by adding the `--restore-resource-priorities` flag to the Velero server container's args (under .spec.template.spec.containers) so that the Cluster API resources come at the end of the restore order.
vi velero-patch-file.yaml
spec:
  template:
    spec:
      containers:
      - args:
        - server
        - --features=
        - --restore-resource-priorities=customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods,replicasets.apps,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io
        name: velero
kubectl patch deployment.apps/velero --patch "$(cat velero-patch-file.yaml)" -n velero
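To confirm the patch was applied, you can dump the deployment again and check the server container's args, for example:
kubectl get deployment velero -n velero -o yaml
The relevant portion of the output should look like this: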
. . .
    spec:
      containers:
      - args:
        - server
        - --features=
        - --restore-resource-priorities=customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods,replicasets.apps,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io
        command:
        - /velero
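If the patched deployment needs to be reverted, the backup taken earlier can be re-applied. You may need to strip server-generated fields such as resourceVersion from the saved YAML before doing so:
kubectl apply -f velero_deploy-bak.yaml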