kubectl vsphere login is not setting the correct context when the name of the guest cluster and the name of the namespace are the same

Article ID: 398047


Updated On:

Products

VMware vSphere Kubernetes Service

Issue/Introduction

When logging in to a guest cluster with "kubectl vsphere login", the plugin does not set the correct context when the name of the guest cluster and the name of the namespace are the same.

 

There are no error messages.

 

Verify the problem:

After a successful login to the affected guest cluster, the guest cluster context is not in use.

 

kubectl vsphere login --server=########### --tanzu-kubernetes-cluster-name <guest cluster name> --tanzu-kubernetes-cluster-namespace <guest cluster namespace> --vsphere-username [email protected] --insecure-skip-tls-verify

Note: <guest cluster name> and <guest cluster namespace> are the same.

Logged in successfully.

You have access to the following contexts:

###########

###########

###########

 

kubectl config current-context shows the correct context for the guest cluster being set, but kubectl get nodes shows the nodes of the Supervisor and not those of the guest cluster.

 

This confirms that the guest cluster context is not correct: kubectl is still using the Supervisor context.
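For example, an affected session might look like this (output is illustrative; node names are redacted as elsewhere in this article, and the listed nodes are the Supervisor control plane VMs rather than guest cluster nodes):

debian@jumpbox ~/.kube> kubectl config current-context
<guest cluster name>
debian@jumpbox ~/.kube> kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
###########   Ready    control-plane,master   20d   v1.xx.x
###########   Ready    control-plane,master   20d   v1.xx.x
###########   Ready    control-plane,master   20d   v1.xx.x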

 

 

Environment

vSphere with Tanzu

Cause

The kubectl-vsphere plugin up to version 0.1.11 cannot handle the context information when the name of the guest cluster and the name of the namespace are the same.
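The collision can be pictured as follows: a kubeconfig stores contexts by unique name, and the login plugin appears to name the Supervisor namespace context after the namespace and the guest cluster context after the cluster. When the two names are identical, only one entry with that name survives, and it points at the Supervisor endpoint. A sketch of the resulting entry (illustrative, not plugin source; it mirrors the non-working context shown under the workaround below):

contexts:
- context:
    cluster: ###########          # Supervisor API endpoint, not the guest cluster LoadBalancer
    namespace: <guest cluster name>
    user: wcp:tkgs.gslabs.local:[email protected]
  name: <guest cluster name>      # the guest cluster context would need this same name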

 

Identify the plugin version:

debian@jumpbox ~/.kube> kubectl-vsphere version
kubectl-vsphere: version 0.1.9, build 23754142, change 13167650

Resolution

The fix is confirmed for the kubectl-vsphere plugin in vSphere 8.0U3P06.

 

Workaround:

 

a) Log in to the Supervisor and get the EXTERNAL-IP of the LoadBalancer service of the affected cluster:

root@########### [ ~ ]# k get svc -n <guest cluster namespace> |grep <guest cluster name>
NAME                                             TYPE                CLUSTER-IP    EXTERNAL-IP     PORT(S)    AGE
<guest cluster name>-control-plane-service       LoadBalancer        ###########     ###########      6443/TCP   62m
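Alternatively (assuming the <cluster name>-control-plane-service naming shown above), the EXTERNAL-IP can be read directly with a jsonpath query:

kubectl get svc <guest cluster name>-control-plane-service -n <guest cluster namespace> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'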

 

b) Manually edit the .kube/config file, adding another context for the affected cluster.

 

The non-working context:

- context:
    cluster: ###########
    namespace: <guest cluster name>
    user: wcp:tkgs.gslabs.local:[email protected]
  name: <guest cluster name>

 

Add a new context to the config file:

- context:
    cluster: <External IP of LoadBalancer of the affected cluster>
    namespace: <guest cluster name>
    user: wcp:<External IP of LoadBalancer of the affected cluster>:[email protected]
  name: <guest cluster name>_<External IP of LoadBalancer of the affected cluster>
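Instead of hand-editing the file, the same context entry can be created with kubectl config set-context (a sketch using the placeholders above; <vSphere username> stands for the account used at login). kubectl config set-context creates the entry if it does not exist yet:

kubectl config set-context "<guest cluster name>_<External IP of LoadBalancer of the affected cluster>" \
  --cluster="<External IP of LoadBalancer of the affected cluster>" \
  --namespace="<guest cluster name>" \
  --user="wcp:<External IP of LoadBalancer of the affected cluster>:<vSphere username>"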

 

c) Log in again to the affected guest cluster and switch to the newly added context:

kubectl vsphere login --server=########### --tanzu-kubernetes-cluster-name <guest cluster name> --tanzu-kubernetes-cluster-namespace <guest cluster namespace> --vsphere-username [email protected] --insecure-skip-tls-verify

kubectl config use-context <guest cluster name>_<External IP of LoadBalancer of the affected cluster>

  

d) Verify that the new context is in use and working.

 

kubectl config get-contexts shows that the newly added context is active.

 

kubectl get nodes shows the nodes of the guest cluster.

 

This confirms that kubectl is using the correct context of the guest cluster.
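For example (illustrative output; the asterisk in the CURRENT column marks the active context, and node names are placeholders following the usual guest cluster naming):

debian@jumpbox ~/.kube> kubectl config get-contexts
CURRENT   NAME                                                               CLUSTER         AUTHINFO                  NAMESPACE
          <guest cluster name>                                               ###########     wcp:tkgs.gslabs.local:... <guest cluster name>
*         <guest cluster name>_<External IP of LoadBalancer>                 <External IP>   wcp:<External IP>:...     <guest cluster name>
debian@jumpbox ~/.kube> kubectl get nodes
NAME                                     STATUS   ROLES           AGE   VERSION
<guest cluster name>-control-plane-xxxx  Ready    control-plane   65m   v1.xx.x
<guest cluster name>-workers-xxxx        Ready    <none>          65m   v1.xx.x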