vSphere Supervisor: Supervisor Service vSphere Pod Traffic to ClusterIP Times Out
Article ID: 312199


Products

VMware vCenter Server VMware vSphere ESXi VMware vSphere Kubernetes Service

Issue/Introduction

This article helps you quickly identify this known problem and fix it using one of the suggested workarounds.

vSphere Pods can be used to run Supervisor Services.


Symptoms:

This issue only happens on a Supervisor cluster deployed with VDS Networking.
There are different possible symptoms:

  • vSphere Pod fails at deployment with the pod state set to ErrImagePull. The command kubectl describe pod <podname> shows a failure when UDP is used for the DNS name lookup during image fetching. For example, this is the kind of message seen when the issue occurs while installing a Supervisor Service:

failed to get images: Image <vsphere pod namespace>/<vsphere pod instance> has failed. Error: Failed to resolve on node <nodename>. Reason: Http request failed. Code 400: ErrorType(2) failed to do request: Head  https://projects.packages.broadcom.com/v2/tkg/<supervisor service>/manifests/sha256:<hash>: dial tcp: lookup projects.packages.broadcom.com: i/o timeout
  • vSphere Pod fails shortly after deployment with the pod state set to CrashLoopBackOff. The command kubectl logs <podname> shows a failure during a TCP connection to a ClusterIP. For example, this is the kind of error message seen when the issue occurs while installing the Contour service:

time="YYYY-MM-DDTHH:MM:SSZ" level=fatal msg="unable to initialize Server dependencies required to start <supervisor service>" error="unable to set up controller manager: Get \"https://<IP>:443/api?timeout=32s\": net/http: TLS handshake timeout"
  • A vSphere Pod times out trying to establish a TCP connection to a ClusterIP, while direct connections to the backend IPs work fine.

  • The image disks containing the desired images for the vSphere Pod/Supervisor service are not present in the Supervisor cluster context:
    kubectl get imagedisks -A -o yaml | grep <supervisor service name>
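
    To confirm the third symptom, a direct connection to a backend IP can be compared with a connection through the ClusterIP. The following is a minimal sketch, not part of the official KB steps; <service>, <namespace>, <podname>, <backend-ip>, <cluster-ip> and <port> are placeholders, and it assumes the pod image ships bash and timeout:

    ```shell
    # Find the ClusterIP and the backend endpoints of the service:
    kubectl get service <service> -n <namespace>     # ClusterIP column
    kubectl get endpoints <service> -n <namespace>   # backend pod IPs

    # Test both paths from the affected pod; bash's built-in /dev/tcp
    # avoids depending on nc or curl being present in the image:
    kubectl exec -n <namespace> <podname> -- timeout 5 \
      bash -c 'exec 3<>/dev/tcp/<backend-ip>/<port>'   # direct: succeeds
    kubectl exec -n <namespace> <podname> -- timeout 5 \
      bash -c 'exec 3<>/dev/tcp/<cluster-ip>/<port>'   # ClusterIP: times out if affected
    ```

    If the first command connects and the second times out, the pod is hitting the ClusterIP routing issue described below.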



Environment

VMware vSphere ESXi 8.0.x
VMware vCenter Server 8.0.x
vSphere Supervisor 8.0 and higher

Cause

On a Supervisor deployed with VDS networking, any UDP or TCP traffic sent from a vSphere Pod to a ClusterIP goes through the gateway of the Supervisor workload network, even if the backend IP behind the ClusterIP is on the same subnetwork as the emitting vSphere Pod. This traffic may be blocked around the gateway for the following possible reasons:

  • On the gateway, incoming UDP or TCP packets must be routed back out through the same network interface they arrived on. This hairpin alone can be denied by security features (e.g. a router's reverse path forwarding (RPF) check).

    • UDP symptom: DNS resolution fails during image fetching
  • While packets from the pod to the ClusterIP go through the gateway, the returning packets do not if the pod and the backend behind the ClusterIP are on the same subnetwork. For a TCP connection, this means the gateway only sees the first and third packets of the TCP three-way handshake. A stateful firewall could block this traffic.

    • TCP symptom: data exchange fails after a TCP connection to a ClusterIP from a vSphere Pod

  • The management network gateway resides on a third-party router and does not allow traffic between kube-dns and the upstream DNS servers. This prevents the vSphere Pods on the ESXi host from successfully completing a DNS lookup through kube-dns. kube-dns is expected to forward name lookups to the upstream DNS servers configured as the Workload Network DNS servers.
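
The RPF and asymmetric-handshake causes above can be observed on the gateway itself. The following is a hedged sketch that assumes a Linux-based workload-network gateway with sysctl and tcpdump available; <workload-iface> and <cluster-ip> are placeholders, and other router platforms have equivalent commands:

```shell
# Check whether strict reverse-path filtering would drop hairpinned packets.
# Values: 0 = off, 1 = strict (may drop this traffic), 2 = loose.
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.<workload-iface>.rp_filter

# Capture the handshake toward the ClusterIP. With asymmetric return
# traffic, only the SYN and the final ACK appear here; the SYN-ACK
# returns directly on the shared subnetwork and never crosses the gateway:
tcpdump -ni <workload-iface> \
  'tcp and host <cluster-ip> and tcp[tcpflags] & (tcp-syn|tcp-ack) != 0'
```

Seeing only two of the three handshake packets in such a capture is a strong indicator that a stateful firewall on this path will drop the connection.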

Resolution

VMware is aware of this issue and is working to resolve it in a future release. vSphere 9.0 will provide more verbose logging and information regarding these issues.

 


Workaround:

The following workarounds are possible:

  • Modify router and firewall configuration to allow any kind of traffic coming from the workload network interface to be routed back to the same network interface.

  • When choosing the gateway of the Supervisor workload network, use an internal gateway with relaxed security rules. For example, instead of using your company's external gateway, you can point to an intermediate internal gateway that is not protected by a firewall and that is just one hop from your external gateway.

  • Allow traffic between upstream DNS servers configured under Workload Network's DNS servers and Kube-DNS.
    • The kube-dns ClusterIP can be found by connecting to the Supervisor cluster context and running the following command:
      kubectl get services -n kube-system | grep -i dns
      kube-dns      ClusterIP      <kube-dns IP>      <none>       <port>/UDP,<port>/TCP,<port>/TCP
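
As a sketch (assuming access to the Supervisor cluster context), the kube-dns ClusterIP can also be extracted with a jsonpath query rather than by parsing the table output, and then used to verify that lookups reach the upstream DNS servers; <domain> is a placeholder for any name served by your upstream DNS:

```shell
# Extract the kube-dns ClusterIP directly from the service spec:
KUBE_DNS_IP=$(kubectl get service kube-dns -n kube-system \
  -o jsonpath='{.spec.clusterIP}')
echo "kube-dns ClusterIP: ${KUBE_DNS_IP}"

# From a host on the workload network, confirm that a lookup through
# kube-dns is forwarded to and answered by the upstream DNS servers:
nslookup <domain> "${KUBE_DNS_IP}"
```

If the lookup times out while the same query against the upstream DNS server directly succeeds, traffic between kube-dns and the upstream servers is still being blocked.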