Error when scheduling a pod: admission webhook "default.validating.license.supervisor.vmware.com" denied the request



Article ID: 383698


Products

VMware vSphere Kubernetes Service
vSphere with Tanzu

Issue/Introduction

The following error is seen when scheduling a Pod while logged in to a guest cluster context:

$ kubectl apply -f TestPod.yaml
Error when creating "TestPod.yaml": admission webhook "default.validating.license.supervisor.vmware.com" denied the request: workload management cluster uses vSphere Networking, which does not support action on kind Pod


Additionally:

  • After a successful kubectl-vsphere login to a guest cluster, the guest cluster's context shows the Supervisor Cluster IP instead of the guest cluster's own control plane IP.
  • Even with the guest cluster context selected, 'kubectl' commands actually run against the Supervisor Cluster context.
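The symptoms above can be confirmed from the client side. A minimal sketch; the commands below only read the local kubeconfig:

```shell
# Show which context kubectl is actually using.
kubectl config current-context

# Inspect the API server endpoint behind the active context. If the
# names collided, this prints the Supervisor Cluster IP rather than
# the guest cluster's own control plane IP.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```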

Environment

vSphere with Tanzu

vSphere with Tanzu with vDS Networking

Cause

When 'kubectl vsphere login' is run against a specific cluster in a specific vSphere Namespace, it creates several Kubernetes contexts, including a guest cluster context and a vSphere Namespace context.

If the guest cluster name and the vSphere Namespace name are identical, the vSphere Namespace context overwrites the guest cluster context.

In this situation, 'kubectl' commands run with the vSphere Namespace context, which is actually a Supervisor Cluster context.
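The collision can be checked directly in the kubeconfig. A hedged sketch, assuming a guest cluster and a vSphere Namespace that are both named "demo":

```shell
# List all contexts created by kubectl vsphere login. If the guest
# cluster and its vSphere Namespace share the name "demo", only one
# context named "demo" remains -- the namespace context has
# overwritten the guest cluster context.
kubectl config get-contexts

# Compare each context's server endpoint. A context that should point
# at the guest cluster but shows the Supervisor Cluster address
# confirms the overwrite.
kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
```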

Resolution

Ensure that the name of your guest cluster is different from every vSphere Namespace name.
If a guest cluster currently has the same name as one of your vSphere Namespaces, delete that guest cluster and recreate it with a unique name that does not conflict with any existing vSphere Namespace.

1. Make sure your current context points to the Supervisor Cluster rather than a guest cluster; change the context if required:

kubectl config get-contexts
kubectl config use-context <supervisor-cluster-context>

2. List the vSphere Namespaces:

kubectl get ns
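To spot the conflicting name quickly, the namespace list can be compared against the guest clusters in each namespace. A hedged sketch; this assumes the 'clusters' resource is served on the Supervisor Cluster, as used by the delete step below:

```shell
# List guest clusters across all vSphere Namespaces, then compare the
# NAME column against the output of `kubectl get ns`. Any cluster
# whose name matches a namespace name is affected.
kubectl get clusters -A
```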

3. Delete the guest cluster that shares its name with one of the vSphere Namespaces listed above:

kubectl delete cluster <guest-cluster-name> -n <namespace>
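Deletion can take several minutes while the cluster's VMs are removed. A hedged way to confirm it has finished:

```shell
# Poll until the cluster object is gone; the command returns a
# NotFound error once deletion has completed.
kubectl get cluster <guest-cluster-name> -n <namespace>
```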

4. Recreate the cluster using a unique name that does not match any existing vSphere Namespace name:

kubectl apply -f <guest-cluster-manifest.yaml> 
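A minimal manifest sketch for illustration only. The cluster name "tkc-workload-01" is a hypothetical example chosen to differ from every vSphere Namespace name; the API version, VM class, storage class, and Tanzu Kubernetes release below are assumptions and must match your environment:

```shell
# Hypothetical TanzuKubernetesCluster manifest; all <...> values and
# the vmClass/tkr fields are placeholders for your environment.
cat <<'EOF' | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-workload-01        # must not match any vSphere Namespace name
  namespace: <namespace>
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-small
      storageClass: <storage-class>
      tkr:
        reference:
          name: <tkr-name>
    nodePools:
    - name: workers
      replicas: 2
      vmClass: best-effort-small
      storageClass: <storage-class>
      tkr:
        reference:
          name: <tkr-name>
EOF
```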

5. Run `kubectl vsphere login` against the newly created cluster, then confirm that the same Pod is now schedulable.
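Step 5 can be sketched as follows; the server address, username, and cluster name below are placeholders for your environment:

```shell
# Log in to the newly created guest cluster through the Supervisor.
kubectl vsphere login \
  --server=<supervisor-ip> \
  --vsphere-username <user@domain> \
  --tanzu-kubernetes-cluster-namespace <namespace> \
  --tanzu-kubernetes-cluster-name tkc-workload-01

# Switch to the guest cluster context and retry the Pod.
kubectl config use-context tkc-workload-01
kubectl apply -f TestPod.yaml
```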