Cannot connect to the external IP (the EXTERNAL-IP column below) of the Supervisor cluster
root@xxxxxxxxxxxxxxxx [ ~ ]# kubectl get services -A
NAMESPACE            NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
default              kubernetes                                 ClusterIP      ###.##.0.1       <none>        443/TCP                        10d
kube-system          docker-registry                            ClusterIP      ###.##.80.238    <none>        5000/TCP                       10d
kube-system          kube-apiserver-authproxy-svc               ClusterIP      ###.##.124.27    <none>        8443/TCP                       10d
kube-system          kube-apiserver-lb-svc                      LoadBalancer   ###.##.207.150   xxx.xx.0.2    443:30303/TCP,6443:31533/TCP   10d
kube-system          kube-dns                                   ClusterIP      ###.##.0.10      <none>        53/UDP,53/TCP,9153/TCP         10d
kube-system          snapshot-validation-service                ClusterIP      ###.##.104.167   <none>        443/TCP                        10d
kube-system          storage-quota-webhook-service              ClusterIP      ###.##.105.161   <none>        443/TCP                        10d
supervisor           v1beta1-cluster-control-plane-service      LoadBalancer   ###.##.34.114    xxx.xx.0.3    6443/TCP                       9d
svc-tkg-domain-c52   capi-controller-manager-metrics-service    ClusterIP      ###.##.166.48    <none>        9844/TCP                       10d
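A quick way to confirm the symptom is to probe the VIP from a client on the routed network. This is a sketch: xxx.xx.0.2 is the masked EXTERNAL-IP of kube-apiserver-lb-svc above, and the client host is assumed to be a Linux box with nc and curl available.

# test whether the load balancer VIP answers on the API server port at all
nc -zv xxx.xx.0.2 443
# if the port is open, the API server should present a TLS certificate
curl -vk https://xxx.xx.0.2:443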
Deploying this appliance is the equivalent of deploying a piece of L2 networking infrastructure. The IP range selected for the virtual servers is reserved by the load balancer appliance. This means that if the VIP range is ###.###.1.0/24, and there happens to be a gateway on ###.###.1.1, anyone or anything trying to access a host on ###.###.1.0/24 is going to have a bad time. The appliance will argue it owns ###.###.1.1, and any routes that require the gateway ###.###.1.1 will fail as a result.
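One way to see this conflict in practice (a sketch, assuming a Linux host on the same ###.###.1.0/24 segment, with eth0 as its interface) is to check who answers ARP for the gateway address. If the MAC address of the HAProxy appliance responds for ###.###.1.1, the VIP range has swallowed the gateway:

# ask the segment who owns the gateway IP
arping -I eth0 -c 3 ###.###.1.1
# compare the responding MAC against the local neighbor cache
ip neigh show ###.###.1.1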
vSphere with Tanzu
When the HAProxy VM is deployed with only two NICs, the Workload network must also provide the logical networks used to access the load-balanced services. In that configuration, it helps to assign a /16 to the Workload network so there is room for both the workload nodes and the load balancer's VIP range without overlap.
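As an illustration (hypothetical addressing, not taken from the environment above), a /16 Workload network leaves room to carve out non-overlapping ranges on the same L2 segment:

Workload network:         192.168.0.0/16     gateway 192.168.0.1
Workload node range:      192.168.1.0/24     addresses handed out to cluster nodes
Load balancer IP range:   192.168.100.0/24   reserved for HAProxy virtual servers (excludes the gateway)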