Example: From an SSH session on the guest cluster control plane node, after deleting the pod 'test-pod-##########-####', the AGE reported for the re-deployed pod is inconsistent across successive checks:
~ k get pods
NAME                       READY   STATUS    RESTARTS   AGE
test-pod-##########-####   1/1     Running   0          3h2m
~ k delete pod test-pod-##########-####;date
pod "test-pod-##########-####" deleted
Wed Jan 22 15:40:04 IST 2025
~ kubectl get pods -n testns;date
NAME                       READY   STATUS    RESTARTS   AGE
test-pod-##########-####   1/1     Running   0          6s
Wed Jan 22 15:40:09 IST 2025
~ kubectl get pods -n testns;date
NAME                       READY   STATUS    RESTARTS   AGE
test-pod-##########-####   1/1     Running   0          9s
Wed Jan 22 15:40:12 IST 2025
~ kubectl get pods -n testns;date
NAME                       READY   STATUS    RESTARTS   AGE
test-pod-##########-####   1/1     Running   0          3m7s
Wed Jan 22 15:40:21 IST 2025
~ kubectl get pods -n testns;date
NAME                       READY   STATUS    RESTARTS   AGE
test-pod-##########-####   1/1     Running   0          22s
Wed Jan 22 15:40:25 IST 2025
On the affected nodes, the clock is reported as not synchronized ("System clock synchronized: no"). The state of the time synchronization service can be checked with:

systemctl status systemd-timesyncd
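To confirm the drift, compare the clock and synchronization state across all nodes of the guest cluster. A minimal sketch, assuming hypothetical node names and the vmware-system-user SSH login; replace these with the values for your environment:

# Hypothetical node names; run against every control plane and worker node
for node in control-plane-node-0 worker-node-0 worker-node-1; do
  ssh vmware-system-user@"$node" 'hostname; date -u; timedatectl | grep "System clock synchronized"'
done

Nodes whose time drifts from the others will show a different date/time output and "System clock synchronized: no".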
This issue is observed in vSphere with Tanzu guest clusters.
This is a known issue caused by the systemd-timesyncd service, which results in time drift across the control plane nodes as well as the worker nodes.
To resolve the issue, stop and disable the systemd-timesyncd service on each affected node:

systemctl stop systemd-timesyncd
systemctl disable systemd-timesyncd
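A minimal sketch for applying this on every node of the guest cluster over SSH; the node names and the vmware-system-user login are assumptions for illustration and should be replaced with the values for your environment:

# Hypothetical node names; stop and disable systemd-timesyncd on each node,
# then confirm the unit is disabled
for node in control-plane-node-0 worker-node-0 worker-node-1; do
  ssh vmware-system-user@"$node" \
    'sudo systemctl stop systemd-timesyncd; sudo systemctl disable systemd-timesyncd; systemctl is-enabled systemd-timesyncd'
done

After the change, systemctl is-enabled should report "disabled" on every node.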