E0425 07:36:36.085306 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: etcdserver: leader changed
E0425 07:37:07.522864 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0425 07:37:12.522172 1 leaderelection.go:330] error retrieving resource lock kube-system/cloud-controller-manager: Get "https://<IP>:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cloud-controller-manager?timeout=5s": context deadline exceeded
I0425 07:37:12.522513 1 leaderelection.go:283] failed to renew lease kube-system/cloud-controller-manager: timed out waiting for the condition
F0425 07:37:12.522644 1 controllermanager.go:234] leaderelection lost
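The messages above show the cloud-controller-manager failing to reach the API server within its renewal deadline and then exiting after losing leader election. To confirm the symptom on your cluster, you can inspect the lease object named in the errors and pull the logs of the restarted container; this is a minimal sketch, and the pod name is a placeholder to substitute for your environment:

# Show the current holder and last renew time of the contested lease
kubectl get lease cloud-controller-manager -n kube-system -o yaml

# Logs from the previous (crashed) instance of the controller
kubectl logs -n kube-system <cloud-controller-manager-pod> --previous

Then check the etcd member logs on the affected control plane node for slow WAL fsync warnings, as shown below.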
kubectl logs -n kube-system etcd-control-plane-1234 --all-containers -f | grep -i "fdatasync"
{"level":"warn","ts":"2025-09-11T15:43:13.341189Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"3.250706104s","expected-duration":"1s"}
{"level":"warn","ts":"2025-09-11T15:43:34.591388Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"21.248814627s","expected-duration":"1s"}
{"level":"warn","ts":"2025-09-11T15:43:35.76005Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.167246894s","expected-duration":"1s"}
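Beyond grepping for WAL warnings, etcd's built-in performance check can give a quick pass/fail view of request latency. The following is a sketch only: it assumes a kubeadm-style static etcd pod whose image ships etcdctl and whose client certificates live under /etc/kubernetes/pki/etcd, so adjust the pod name and paths to your environment.

kubectl exec -n kube-system etcd-control-plane-1234 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  check perf

Note that check perf writes test keys into the cluster, so prefer running it during a quiet period.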
This issue can occur when the underlying storage backing the Control Plane VMs is slow. etcd calls fdatasync on its write-ahead log for every write, so sustained storage latency delays API server responses and causes the cloud-controller-manager's lease renewals to time out, as seen in the logs above.
Review the environment with your storage/infrastructure team to isolate the cause of the storage performance bottleneck.
Once no further latency spikes are observed, confirm whether the restarts still occur.
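To quantify the latency etcd is seeing, a synchronous write benchmark against the disk that holds the etcd data directory is often the quickest check. The command below is an illustrative sketch: it assumes fio is installed on the control plane node and uses a hypothetical scratch directory (/var/lib/etcd-disk-test) created on the same device as etcd's data directory; the small block size and an fdatasync after every write roughly mimic etcd's WAL pattern.

# Small sequential writes, each followed by fdatasync, similar to etcd's WAL
fio --name=etcd-wal-check --directory=/var/lib/etcd-disk-test \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300

In the resulting output, watch the fsync/fdatasync latency percentiles: the etcd tuning guidance suggests keeping the 99th percentile of fdatasync under roughly 10 ms, well below the one-second expected-duration flagged in the warnings above. Remove the scratch directory once the test is complete.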