kubectl vsphere login --vsphere-username <SSO@username> --server=<Workload Management Supervisor Control Plane Node IP Address>
kubectl vsphere login --server <Workload Management Supervisor Control Plane Node IP Address> --vsphere-username <SSO@username> --tanzu-kubernetes-cluster-namespace <namespace> --tanzu-kubernetes-cluster-name <clusterName>
time="YYYY-MM-DDTHH:MM:SS-00:00" level=fatal msg="Failed to get available workloads, response from the server was invalid."
kubectl vsphere login --server <Workload Management Supervisor Control Plane Node IP Address> --vsphere-username <SSO@username> --tanzu-kubernetes-cluster-namespace <namespace> --tanzu-kubernetes-cluster-name <clusterName> -v10
time="YYYY-MM-DDTHH:MM:SS-00:00" level=debug msg="Creating wcp.Client for <Workload Management Supervisor Control Plane Node IP Address>."
time="YYYY-MM-DDTHH:MM:SS-00:00" level=info msg="Does not appear to be a vCenter or ESXi address."
time="YYYY-MM-DDTHH:MM:SS-00:00" level=debug msg="Got response: \n"
time="YYYY-MM-DDTHH:MM:SS-00:00" level=info msg="Using <SSO@username> as username."
time="YYYY-MM-DDTHH:MM:SS-00:00" level=debug msg="Env variable KUBECTL_VSPHERE_PASSWORD is present \n"
time="YYYY-MM-DDTHH:MM:SS-00:00" level=debug msg="Error while getting list of workloads: bad gateway\nPlease contact your vSphere server administrator for assistance."
cat /var/log/vmware/wcp/wcpsvc.log
err: HTTP request failed: POST, url: https://<VCSA-FQDN/URL>:443/rest/vcenter/tokenservice/token-exchange, code:500, body: '{"type":"com.vmware.vcenter.tokenservice.invalid_grant","value":{"messages":[{"args":[],"default_message":"Invalid Subject token: tokenType=SAML2","id":"com.vmware.vcenter.tokenservice.exceptions.InvalidGrant"},{"args":[],"default_message":"Token expiration date: DAY MON DD HH:MM:SS GMT YYYY is in the past.","id":"com.vmware.identity.saml.InvalidTokenException"},{"args":[],"default_message":"Token expiration date: DAY MON DD HH:MM:SS GMT YYYY is in the past.","id":"com.vmware.vim.sso.client.exception.InvalidTimingException"}}}}'
kubectl get pods -A | grep authproxy
kubectl logs -n <authproxy namespace> <wcp-authproxy pod name>
InternalServerError - occurred on authorization: the SAML token was not exchanged, as it is expired, invalid or absent
For more information on kubectl vsphere login and SSO contexts, please see the documentation below:
TechDocs - kubectl vsphere login to the Supervisor Cluster Context
TechDocs - kubectl vsphere login to the TKG Service Cluster Context (also known as Guest Cluster or vSphere Kubernetes Cluster)
vSphere with Tanzu 7.0
vSphere with Tanzu 8.0
This issue can occur regardless of whether the clusters are managed by Tanzu Mission Control (TMC).
There is an NTP time sync, time skew, or time drift issue between the Supervisor Cluster and the vCenter/VCSA.
This can occur when the components that make up the environment (vCenter, ESXi Host, Supervisor Cluster) point to different time servers or have differing time server configurations.
Time sync issues can occur when there is at least a five-minute difference between components.
The component experiencing the time difference will need to be located and its time re-synced.
This KB provides checks for each component: vCenter, ESXi Host, Supervisor Cluster, vSphere Kubernetes Cluster
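As a quick check, the epoch time reported by any two components can be compared against the five-minute threshold. A minimal sketch, assuming shell access to the components; the hostnames in the comments are placeholders, and the literal values in the examples are illustrative only:

```shell
#!/bin/sh
# Compare epoch seconds from two components and flag a skew above the
# ~5-minute (300 s) threshold that can break SAML token validation.
check_skew() {
    t1="$1"  # epoch seconds from component A, e.g. $(ssh root@<vcenter> date +%s)
    t2="$2"  # epoch seconds from component B, e.g. $(ssh root@<supervisor> date +%s)
    diff=$((t1 - t2))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    if [ "$diff" -ge 300 ]; then
        echo "SKEW ${diff}s"
    else
        echo "OK ${diff}s"
    fi
}

# Example with literal values (normally gathered live via date +%s on each node):
check_skew 1700000000 1700000400   # 400 s apart -> SKEW 400s
check_skew 1700000000 1700000030   # 30 s apart  -> OK 30s
```

Running `date +%s` on each component at roughly the same moment and feeding the values to a comparison like this quickly isolates which node has drifted.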
https://<vCenter-FQDN>:5480
date
date
timedatectl show
timedatectl status
timedatectl timesync-status
timedatectl show-timesync
systemctl status chronyd
cat /etc/chrony.conf
systemctl status systemd-timesyncd
cat /etc/systemd/timesyncd.conf
timedatectl status
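Once the drifted component is identified, its clock can be re-synced. A hedged sketch for appliances running chronyd or systemd-timesyncd; these are generic chrony/systemd commands run on the affected node, and availability may vary by appliance release:

```shell
# For a node using chronyd (check with: systemctl status chronyd)
chronyc sources -v          # list configured NTP sources and their offsets
chronyc tracking            # show the current offset from the reference clock
systemctl restart chronyd   # restart the daemon to force source re-selection
chronyc makestep            # step the clock immediately instead of slewing

# For a node using systemd-timesyncd (check with: systemctl status systemd-timesyncd)
systemctl restart systemd-timesyncd
timedatectl set-ntp true    # ensure NTP synchronization is enabled
```

After re-syncing, re-run `timedatectl status` on each component to confirm the clocks agree before retrying the kubectl vsphere login.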
Generating and populating a kubeconfig does not work in this scenario because the SAML token is considered expired before the user can log in.