Legacy agents in NotReady status remain after upgrading the VKS Supervisor version

Article ID: 404328


Products

VMware vSphere Kubernetes Service

Issue/Introduction

vCenter 8.0.x.

The Supervisor has been upgraded successfully, but some legacy agent nodes remain in status NotReady:

[root@tanzu-cli-ccdc3 ~]# kubectl get nodes
NAME                               STATUS     ROLES                  AGE    VERSION
###############################1   Ready      control-plane,master   133m   v1.30.6+vmware.wcp.2
###############################2   Ready      control-plane,master   118m   v1.30.6+vmware.wcp.2
###############################3   Ready      control-plane,master   149m   v1.30.6+vmware.wcp.2
vm-agent1                          NotReady   agent                  348d   v1.29.9-sph-2345678
vm-agent1.example.com              Ready      agent                  85m    v1.30.6-sph-1234567
agent2                             NotReady   agent                  348d   v1.29.9-sph-2345678
agent2.example.com                 Ready      agent                  91m    v1.30.6-sph-1234567
agent3                             NotReady   agent                  348d   v1.29.9-sph-2345678
agent3.example.com                 Ready      agent                  98m    v1.30.6-sph-1234567
agent4                             NotReady   agent                  348d   v1.29.9-sph-2345678
agent4.example.com                 Ready      agent                  77m    v1.30.6-sph-1234567
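
If the cluster has many nodes, the legacy agents can be listed on their own by filtering the get nodes output on the NotReady status and the agent role. This one-liner is only a convenience sketch based on the column layout shown above (NAME, STATUS, ROLES, AGE, VERSION):

         kubectl get nodes --no-headers | awk '$2 == "NotReady" && $3 == "agent" {print $1, $5}'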

Environment

vCenter 8.0.x

Cause

These are legacy agent VMs left over from the previous Supervisor version. Verify that the Supervisor upgrade completed successfully.

If it has, the legacy agents can be safely removed.

Resolution

a)  There are 4 physical ESXi hosts, so there are 4 agents (1 agent per ESXi host).
   The agents with version v1.30.6-sph-1234567 are in status Ready, which means the Supervisor upgrade completed successfully.
   The legacy agents still have version v1.29.9-sph-2345678 and remain NotReady; see the check below.
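
As a quick check before removing anything, confirm that each of the 4 ESXi hosts has a Ready agent on the new version. The following one-liner is only a sketch that counts Ready agent nodes whose VERSION column matches the new release shown in the output above:

         kubectl get nodes --no-headers | awk '$3 == "agent" && $2 == "Ready" && $5 ~ /v1\.30\.6/ {count++} END {print count " Ready agent(s) on the new version"}'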

 

b) Delete the legacy NotReady agent nodes:

         kubectl delete node vm-agent1
         kubectl delete node agent2
         kubectl delete node agent3
         kubectl delete node agent4
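
Alternatively, all legacy NotReady agent nodes can be removed in one pass by piping the filtered node list into kubectl delete. This is only a sketch; review the node list printed by the first part of the pipeline before running the delete, then confirm with kubectl get nodes that only Ready nodes remain:

         kubectl get nodes --no-headers | awk '$2 == "NotReady" && $3 == "agent" {print $1}' | xargs kubectl delete node
         kubectl get nodes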