vSphere 8 Supervisor Workload Cluster Upgrade Stuck with No Nodes on Desired Upgraded Version


Article ID: 378212


Products

VMware vSphere Kubernetes Service, vSphere with Tanzu, Tanzu Kubernetes Runtime

Issue/Introduction

In a vSphere 8.X environment, a vSphere Workload Cluster upgrade is stuck and not progressing.

While connected to the Supervisor context, one or more of the following symptoms are present:

  • All of the affected cluster's control plane machines are in a Healthy, Running state on the previous TKR version
  • All of the affected cluster's worker machines are on the previous version
  • A worker node is continuously recreated every 5-15 minutes and remains in Provisioning state on the previous version (see the example output after this list):
    • kubectl get machines -n <affected cluster namespace>
  • Describing the worker node's corresponding VM does not show HA failover resource issues:
    • kubectl describe vm -n <affected cluster namespace> <worker node vm name>
  • Describing the cluster object reports that the control plane upgrade to the desired version is on hold: Machinedeployment(s) are rolling out:
    • kubectl describe cluster -n <affected cluster namespace> <cluster name>
  • The MachineDeployments (md) for each worker node pool show the previous version:
    • kubectl get md -n <affected cluster namespace>
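
For reference, the recreating machine can be identified from the PHASE and VERSION columns of the machine list. The output below is an illustrative sketch only (columns abbreviated); the cluster, node pool, and version values are placeholders rather than output from a real system:

      kubectl get machines -n <affected cluster namespace>
      NAME                            CLUSTER      PHASE          AGE   VERSION
      <cluster>-control-plane-<id>    <cluster>    Running        30d   <previous version>
      <cluster>-<nodepool>-<id>       <cluster>    Running        30d   <previous version>
      <cluster>-<nodepool>-<id>       <cluster>    Provisioning   9m    <previous version>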


While connected to the affected workload cluster's context, the following symptoms are present:

  • The recreating worker node shows NotReady state on the previous version:
    • kubectl get nodes
  • Pods for antrea, kube-proxy and/or vsphere-csi-node are in Init:ImagePullBackOff state:
    • kubectl get pods -A | grep -v Run
  • Describing one of the pods in Init:ImagePullBackOff state shows an error message similar to the one below, where the missing image's version is the version expected by the cluster's desired upgrade TKR version (a sketch for extracting the failing image reference follows the error message):
    • kubectl describe pod -n <pod namespace> <pod name>


      Failed to pull image "localhost:5000/vmware.io/<vmware-image>:<image version>": rpc error: code = NotFound desc = failed to pull and unpack image "localhost:5000/vmware.io/<vmware-image>:<image version>": failed to resolve reference "localhost:5000/vmware.io/<vmware-image>:<image version>": localhost:5000/vmware.io/<vmware-image>:<image version>: not found
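
As a quick way to confirm which image versions the stuck pods are requesting, the image references can be read directly out of a pod's spec. A minimal sketch, assuming a standard kubectl client against the workload cluster context; the namespace and pod name are placeholders:

      # List pods that are not in a Running state
      kubectl get pods -A | grep -v Run

      # Print every init container and container image the pod references
      kubectl get pod -n <pod namespace> <pod name> -o jsonpath='{range .spec.initContainers[*]}{.image}{"\n"}{end}{range .spec.containers[*]}{.image}{"\n"}{end}'

In this scenario, the printed image tags match the desired upgrade TKR version even though only the previous version's images exist on the node.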


While connected to the recreating worker node that is stuck in Provisioning state on the previous version, the following symptoms are present:

  • Containerd and kubelet are in a healthy, running state:
    • systemctl status containerd
    • systemctl status kubelet
  • Containerd and kubelet logs show repeated error messages similar to the following, referencing images expected by the cluster's desired upgrade TKR version:
    • "failed to pull and unpack image":"failed to resolve reference" "localhost:5000/vmware.io/<vmware-image>:<image version>": localhost:5000/vmware.io/<vmware-image>:<image version>: not found
  • The images present on the node are for the previous TKR version; no images are present for the versions expected by the desired upgrade TKR version (see the combined sketch after this list):
    • crictl images
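
The same evidence can be gathered in one pass from the node itself. A hedged sketch, assuming shell access to the worker node and that containerd logs to the journal; the grep patterns are illustrative:

      # Scan containerd logs for the image pull failures
      journalctl -u containerd --no-pager | grep "failed to pull and unpack image"

      # List the images present on the node; only previous-TKR-version tags appear
      crictl images

      # Optionally filter for one of the failing images (placeholder name)
      crictl images | grep <vmware-image>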


The VMware Tanzu Kubernetes Release Notes for 8.X contain a table for each TKR version listing its expected package image versions:

TKR Release Notes
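
To cross-check which TKR versions the Supervisor offers, the TKR objects can be listed from the Supervisor context. A minimal sketch; tkr is typically available as the short name for the tanzukubernetesreleases resource:

      kubectl get tkr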

Environment

vSphere with Tanzu 8.X

This can occur on a vSphere Workload Cluster regardless of whether or not it is managed by Tanzu Mission Control (TMC).

Cause

Worker nodes are continuously recreated and fail health checks because no containers are able to reach a Running state. The containers cannot start properly because the images for the desired upgrade version are not present on the worker nodes, and those images are absent because the worker nodes are still running the older version.

This issue can occur when an upgrade is initiated while a change made to the cluster has not yet completed. Upgrades begin with rolling redeployments of the control plane nodes, but in this scenario the control plane nodes wait for the worker node change to complete first. The worker node change, in turn, cannot complete because the recreating worker node stuck in Provisioning state references images that are not present on the node: the upgrade process looks for the desired upgrade version of the images to deploy the necessary system pods, but only the previous version's images are available.

The same scenario can occur if any changes requiring redeployment or deletion are performed on the cluster's nodes after an upgrade is initiated and before the control plane nodes have finished upgrading.

This issue can also occur if a workload cluster upgrade was initiated before the system-initiated, mandatory migration from vSphere 7 to vSphere 8.
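
One way to observe this mismatch from the Supervisor context is to compare the cluster's desired topology version against the versions the worker MachineDeployments are still running. A hedged sketch with placeholder names, assuming a ClusterClass-based cluster (the default in vSphere 8):

      # Desired upgrade version recorded on the cluster object
      kubectl get cluster -n <affected cluster namespace> <cluster name> -o jsonpath='{.spec.topology.version}{"\n"}'

      # Versions the worker MachineDeployments are still on
      kubectl get md -n <affected cluster namespace> -o custom-columns=NAME:.metadata.name,VERSION:.spec.template.spec.version

If the first command prints the desired version while the MachineDeployments still report the previous version, the cluster matches the scenario described above.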

Resolution

Please note that this article applies only to a vSphere 8.X environment where none of the cluster's nodes have upgraded to the desired version.

If all control plane nodes have upgraded, please see the following KB:
vSphere Kubernetes Cluster Upgrade Stuck with Control Planes Upgraded but Worker Nodes Stuck Upgrading due to MachineDeployment Version

Otherwise, please open a ticket with VMware by Broadcom Technical Support, referencing this KB, for assistance.

Please provide information on the following: