After successful login to Supervisor cluster, getting "You must be logged in to the server (Unauthorized)"



Article ID: 327454


Products

VMware vSphere ESXi
VMware vSphere with Tanzu

Issue/Introduction

Symptoms:

After successfully logging in to a Tanzu Kubernetes guest cluster, any attempt to view resources on the guest cluster fails with "error: You must be logged in to the server (Unauthorized)":

 

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP \
  --tanzu-kubernetes-cluster-name CLUSTER-NAME \
  --tanzu-kubernetes-cluster-namespace NAMESPACE \
  --vsphere-username USER-NAME

kubectl config get-contexts
kubectl config use-context CLUSTER-NAME

kubectl get nodes
error: You must be logged in to the server (Unauthorized)


The same behavior is observed for vSphere and Active Directory (AD) users.
The auth service logs on the TKC show that clients are failing with TLS handshake errors:

http: TLS handshake error from 127.0.0.1:44258: remote error: tls: bad certificate


The kube-apiserver logs will also show TLS handshake errors:

1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Post "https://localhost:5443/tokenreview?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0]
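The legacy Common Name complaint above means the certificate carries no Subject Alternative Name (SAN) extension, which modern Go/Kubernetes TLS clients require. As an illustration only (the file names are invented for this demo, not the auth service's real files), the following sketch builds one certificate with only a CN and one that also carries a SAN, then checks each for the extension:

```shell
# Demo: generate one legacy CN-only cert and one with a SAN, then check
# each for the Subject Alternative Name extension that modern TLS
# clients require. Requires OpenSSL >= 1.1.1 for -addext.
tmp=$(mktemp -d)

# Legacy-style certificate: CN only, no SAN extension.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" \
  -keyout "$tmp/cn-only.key" -out "$tmp/cn-only.crt" 2>/dev/null

# Modern certificate: same CN plus a SAN entry.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost" \
  -keyout "$tmp/san.key" -out "$tmp/san.crt" 2>/dev/null

for crt in cn-only san; do
  if openssl x509 -in "$tmp/$crt.crt" -noout -text \
       | grep -q "Subject Alternative Name"; then
    echo "$crt.crt: has SAN (accepted)"
  else
    echo "$crt.crt: CN only (rejected unless GODEBUG=x509ignoreCN=0)"
  fi
done
```

A certificate that prints "CN only" here would trigger exactly the x509 error shown in the kube-apiserver log above.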


The auth service logs may also show that the key and certificate are not a pair:

Error while beginning to serve TLS on localhost:5443: tls: private key does not match public key
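The "not a pair" condition can be checked with openssl by comparing the public key embedded in the certificate against the public key derived from the private key. A minimal sketch, using a throwaway self-signed pair rather than the real auth-service files (on a TKC node you would point openssl at the actual certificate and key instead):

```shell
# Demo of the key/certificate pairing check with invented file names.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" \
  -keyout "$tmp/svc.key" -out "$tmp/svc.crt" 2>/dev/null

# A certificate and key are a pair iff they share the same public key.
cert_pub=$(openssl x509 -in "$tmp/svc.crt" -noout -pubkey)
key_pub=$(openssl pkey -in "$tmp/svc.key" -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "key and certificate match"
else
  echo "private key does not match certificate"
fi
```

If the two public keys differ, the auth service fails to serve TLS with the "private key does not match public key" error above, and regenerating the pair (per the workaround below) is required.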

 


Environment

VMware vSphere 7.0 with Tanzu

Cause

The certificate and key for the Auth service are not valid and need to be regenerated.

Resolution

This issue is fixed in vCenter Server 7.0.3.


Workaround:

1. SSH to the TKC control plane node (see VMware documentation for details), then check for the guest-cluster-auth-svc-key secret and guest-cluster-auth-svc-cert configmap in the vmware-system-auth namespace:

kubectl --kubeconfig /etc/kubernetes/admin.conf get secret -n vmware-system-auth
kubectl --kubeconfig /etc/kubernetes/admin.conf get configmaps -n vmware-system-auth


2. Delete the auth service configmap and secret:

kubectl --kubeconfig /etc/kubernetes/admin.conf delete configmap -n vmware-system-auth guest-cluster-auth-svc-cert
kubectl --kubeconfig /etc/kubernetes/admin.conf delete secret -n vmware-system-auth guest-cluster-auth-svc-key

 

3. Generate a new key and certificate by running the /usr/lib/vmware-wcpgc-manifests/generate_key_and_csr_updated.sh script with the KUBECONFIG variable set, then verify that the secret and configmap have been recreated:

KUBECONFIG=/etc/kubernetes/admin.conf /usr/lib/vmware-wcpgc-manifests/generate_key_and_csr_updated.sh

kubectl --kubeconfig /etc/kubernetes/admin.conf get secret -n vmware-system-auth
kubectl --kubeconfig /etc/kubernetes/admin.conf get configmaps -n vmware-system-auth

 

4. Restart the guest-cluster-auth-svc pod in the vmware-system-auth namespace by deleting it, then exit the control plane VM:

kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n vmware-system-auth
kubectl --kubeconfig /etc/kubernetes/admin.conf delete pod AUTH-SERVICE-POD -n vmware-system-auth
kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n vmware-system-auth

 

5. Log in to the guest cluster, switch context, and check the status of resources:

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP \
  --tanzu-kubernetes-cluster-name CLUSTER-NAME \
  --tanzu-kubernetes-cluster-namespace NAMESPACE \
  --vsphere-username USER-NAME

kubectl config get-contexts
kubectl config use-context CLUSTER-NAME
kubectl get nodes