Unable to add Worker Nodes to an existing Kubernetes Cluster in VMware Cloud Director.

Article ID: 426220

Products

VMware Cloud Director

Issue/Introduction

  • Adding worker nodes to an existing Kubernetes cluster fails; the new nodes remain stuck and are never added to the node pool.

  • The capvcd logs on the Control Plane show errors similar to the following (a quick way to verify the supported API versions is sketched after the excerpt):
    • ERROR   Reconciler error        {"controller": "vcdmachine", "controllerGroup": "infrastructure.cluster.x-k8s.io", "controllerKind": "VCDMachine", "VCDMachine": {"name":"<Cluster Name>","namespace":"<Cluster Name-ns>"}, "namespace": "<Cluster Name-ns>", "name": "<Cluster Name>", "reconcileID": "########-####-####-####-############", "error": "Error creating VCD client to reconcile Cluster [prod-main] infrastructure: error creating VCD client from secrets to reconcile Cluster [prod-main] infrastructure: [unable to get swagger client from secrets: [unable to get bearer token from secrets: [failed to set authorization header: [error finding LoginUrl: could not find valid version for login: API version 36.0 is not supported: version = 36.0 is not supported]]]]", "errorVerbose": "error creating VCD client from secrets to reconcile Cluster [prod-main] infrastructure: [unable to get swagger client from secrets: [unable to get bearer token from secrets: [failed to set authorization header: [error finding LoginUrl: could not find valid version for login: API version 36.0 is not supported: version = 36.0 is not supported]]]]
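
The error indicates that capvcd is requesting VCD API version 36.0, which the VMware Cloud Director cell no longer advertises. You can confirm which API versions the cell still supports by querying the unauthenticated /api/versions endpoint. The following is a minimal Python sketch; the hostname vcd.example.com is a placeholder, and the XML layout assumed here is the standard SupportedVersions document that VCD returns from that endpoint.

    # Sketch: list the API versions a VCD cell advertises and check whether
    # the version capvcd requests (36.0, per the log above) is among them.
    import xml.etree.ElementTree as ET
    import requests

    VCD_HOST = "vcd.example.com"   # placeholder - replace with your VCD FQDN
    REQUESTED = "36.0"             # version taken from the capvcd error

    resp = requests.get(f"https://{VCD_HOST}/api/versions", timeout=30)
    resp.raise_for_status()

    # The response is a SupportedVersions XML document; each VersionInfo
    # element carries the version string and a "deprecated" attribute.
    ns = {"v": "http://www.vmware.com/vcloud/versions"}
    root = ET.fromstring(resp.content)

    supported = []
    for info in root.findall("v:VersionInfo", ns):
        version = info.findtext("v:Version", namespaces=ns)
        supported.append(version)
        print(f"API version {version} (deprecated={info.get('deprecated', 'false')})")

    if REQUESTED not in supported:
        print(f"API version {REQUESTED} is not supported - matches the capvcd error.")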

Environment

VMware Cloud Director 10.6.x
VMware Cloud Director Container Service Extension 4.2.x

Cause

The component versions deployed in the cluster are not compatible with the deployed CSE Server: the cluster's capvcd controller authenticates to VMware Cloud Director using API version 36.0, which VMware Cloud Director 10.6.x no longer supports, so the controller cannot create a VCD client and node reconciliation fails.
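
To confirm which CAPVCD build the cluster is running (and therefore which API version it requests), you can inspect the image tag of the capvcd controller Deployment from inside the cluster. The sketch below uses the official Kubernetes Python client; the namespace and Deployment name (capvcd-system / capvcd-controller-manager) are assumptions based on a default CAPVCD installation and may differ in your environment.

    # Sketch: print the image tag of the capvcd controller, which encodes
    # the CAPVCD version deployed into this cluster.
    from kubernetes import client, config

    config.load_kube_config()  # uses the current kubeconfig context
    apps = client.AppsV1Api()

    # Namespace and name assumed from a default CAPVCD install; adjust if needed.
    dep = apps.read_namespaced_deployment(
        name="capvcd-controller-manager", namespace="capvcd-system"
    )
    for container in dep.spec.template.spec.containers:
        print(f"{container.name}: {container.image}")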

Resolution

To resolve this issue, perform one of the following two actions: