Roles appearing as "<none>" for nodes when running "kubectl get nodes" command on the Cluster

Article ID: 392642

Products

VMware vSphere Kubernetes Service

Issue/Introduction

  • The role appears as <none> for one or more nodes when running the kubectl get nodes command:

    • kubectl get nodes

      NAME                        STATUS   ROLES                  AGE    VERSION
      <Control_Plane1_Hostname>   Ready    control-plane,master   2d1h   v1.26.8+vmware.wcp.1
      <Control_Plane2_Hostname>   Ready    control-plane,master   2d1h   v1.26.8+vmware.wcp.1
      <Control_Plane3_Hostname>   Ready    control-plane,master   2d1h   v1.26.8+vmware.wcp.1
      <Worker_Node1_Hostname>     Ready    <none>                 47h    v1.26.4-sph-79b2bd9
      <Worker_Node2_Hostname>     Ready    agent                  47h    v1.26.4-sph-79b2bd9
      <Worker_Node3_Hostname>     Ready    <none>                 47h    v1.26.4-sph-79b2bd9

    • This can happen after a cluster upgrade or a rolling restart of the nodes. A label-selector query to list only the affected nodes is sketched below.
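
  • To list only the worker nodes that are missing the role label, a label selector can be used. This is a minimal sketch, assuming the affected worker nodes carry the type=virtual-kubelet label shown in the Cause section; the '!<label-key>' syntax is standard kubectl and matches resources that lack the label:

    • kubectl get nodes -l '!node-role.kubernetes.io/agent,type=virtual-kubelet'

      # Prints only the worker nodes whose ROLES column shows <none>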

Cause

  • This occurs because one of the role labels is missing on the impacted node. 

  • This can be verified by describing the node; the following label will be missing: node-role.kubernetes.io/agent=agent

    • kubectl describe node <impacted_node_name> | grep -A 10 'Labels'

      Labels:         beta.kubernetes.io/arch=amd64
                      beta.kubernetes.io/os=CRX
                      kubernetes.io/arch=amd64
                      kubernetes.io/hostname=<worker_node_hostname>
                      kubernetes.io/os=CRX
                      node.kubernetes.io/role=agent
                      type=virtual-kubelet

  • On a healthy node, by contrast, the label is present (a quicker one-line check follows the output below):

    • kubectl describe node <healthy_node_name> | grep -A 10 'Labels'

      Labels:         beta.kubernetes.io/arch=amd64
                      beta.kubernetes.io/os=CRX
                      kubernetes.io/arch=amd64
                      kubernetes.io/hostname=<healthy_node_hostname>
                      kubernetes.io/os=CRX
                      node-role.kubernetes.io/agent=agent
                      node.kubernetes.io/role=agent
                      type=virtual-kubelet
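
  • As a quicker check, the --show-labels flag prints every label on one line; splitting on commas makes the role label easy to spot. A minimal sketch using standard kubectl and shell tools:

    • kubectl get node <node_hostname> --show-labels | tr ',' '\n' | grep 'node-role'

      # A healthy node prints node-role.kubernetes.io/agent=agent;
      # on an impacted node that line is absent.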

Resolution

  • Manually apply the missing label to the impacted worker node with the following command (a batch variant for labeling several nodes at once is sketched below):

    • kubectl label node <node_hostname> node-role.kubernetes.io/agent=agent

  • Run the kubectl get nodes command again; the agent role should now be visible for all worker nodes:

    • NAME                        STATUS   ROLES                  AGE    VERSION
      <Control_Plane1_Hostname>   Ready    control-plane,master   2d1h   v1.26.8+vmware.wcp.1
      <Control_Plane2_Hostname>   Ready    control-plane,master   2d1h   v1.26.8+vmware.wcp.1
      <Control_Plane3_Hostname>   Ready    control-plane,master   2d1h   v1.26.8+vmware.wcp.1
      <Worker_Node1_Hostname>     Ready    agent                  47h    v1.26.4-sph-79b2bd9
      <Worker_Node2_Hostname>     Ready    agent                  47h    v1.26.4-sph-79b2bd9
      <Worker_Node3_Hostname>     Ready    agent                  47h    v1.26.4-sph-79b2bd9
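
  • If several nodes are affected, the label selector from the Issue section can be combined with kubectl label to fix them in one pass. This is a minimal sketch, assuming a POSIX shell and that all affected workers carry the type=virtual-kubelet label:

    • for node in $(kubectl get nodes -l '!node-role.kubernetes.io/agent,type=virtual-kubelet' -o name); do
          kubectl label "$node" node-role.kubernetes.io/agent=agent
      done

      # -o name emits node/<name>, which kubectl label accepts directly;
      # the selector only matches nodes still missing the label, so the
      # command is safe to re-run.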

Additional Information