In environments where Tanzu Kubernetes Grid Multicloud (TKGm) clusters are deployed through VMware Cloud Director (VCD) using the Container Service Extension (CSE) 4.2.3 plugin, customers may observe that the Contour ingress service fails to obtain a LoadBalancer IP.
In this case, the envoy Service (v1) in the tanzu-system-ingress namespace remains in an “ongoing reconcile” state with an empty LoadBalancer ingress field.
The customer reported the issue while deploying Contour in a Tenant Organization (OSE Org) that was separate from the Solution Organization (CSE Org) hosting CSE and Harbor.
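The symptom is visible directly on the stuck Service object. The sketch below is representative kubectl output, not a capture from the case; only the service name and namespace come from the report:

```yaml
# kubectl get service envoy -n tanzu-system-ingress -o yaml (abridged sketch)
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: tanzu-system-ingress
spec:
  type: LoadBalancer
status:
  loadBalancer: {}   # ingress list stays empty while reconciliation loops
```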
Product: VMware Cloud Director (VCD) with Container Service Extension (CSE) 4.2.3
Cluster Type: Tanzu Kubernetes Grid Multicloud (TKGm) Workload Cluster
Ingress Controller: Contour
Load Balancer: NSX Advanced Load Balancer (Avi)
Orgs Involved:
Solution Org: CSE Org with Harbor registry and CSE appliance
Tenant Org: OSE Org where the TKGm cluster was deployed
Integration: Avi integrated via CSE (AKO not directly deployed)
The root cause was traced to a network and firewall configuration issue combined with the lack of direct AKO (Avi Kubernetes Operator) integration in the Tenant Org.
Key contributing factors:
No AKO instance was deployed in the Tenant Org’s TKGm cluster, meaning LoadBalancer services could not automatically trigger Virtual Service or VIP creation on the Avi Controller.
CSE-managed Avi integration does not dynamically handle VIP allocation for LoadBalancer-type services in TKGm clusters.
The Tenant Org IP Space and network mapping were incomplete, preventing Contour from obtaining a LoadBalancer IP.
Additional physical network and firewall policies prevented proper connectivity between the cluster nodes and the Avi Controller.
As a result, Contour's service reconciliation was stuck in a loop, repeatedly reporting an empty LoadBalancer ingress field.
Steps and actions taken:
1. Network Validation
Verified IP Space association between the Tenant Org and the Avi Controller.
Corrected network and firewall policies that were blocking communication with Avi.
2. Configuration Review
Confirmed that AKO was not managing the cluster; LoadBalancer allocation was handled via CSE.
Verified that CSE configuration supported Avi Load Balancer annotations.
3. Functional Testing (Verification Tests)
Deployed two Nginx test pods and exposed them with an annotated LoadBalancer-type service to confirm VIP allocation through Avi.
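The test workload was along these lines. This is a sketch, not the exact manifests from the case: the names are illustrative, and the annotation key/value is a placeholder because the actual Avi annotation used is not reproduced here.

```yaml
# Two Nginx test pods behind a LoadBalancer service (sketch).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-lb-test
  template:
    metadata:
      labels:
        app: nginx-lb-test
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-test
  annotations:
    example.com/avi-annotation: "placeholder"   # hypothetical; substitute the real annotation
spec:
  type: LoadBalancer
  selector:
    app: nginx-lb-test
  ports:
    - port: 80
      targetPort: 80
```

If the environment is healthy, the service's EXTERNAL-IP moves from pending to a VIP allocated by Avi within a few minutes.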
4. Contour Validation
Redeployed Contour in the Tenant Org.
Confirmed that the LoadBalancer ingress field was successfully populated.
Verified application reachability over both IP and FQDN.
After the above corrective actions, Contour deployed successfully and ingress functionality was restored.
When deploying TKGm clusters via VCD + CSE, AKO does not manage Avi integration automatically. In such environments, LoadBalancer services depend on correct CSE configuration and IP Space association.
Ensure the following before deploying Contour:
The Tenant Org IP Space is properly mapped to the Avi Controller.
Firewall rules allow traffic between cluster nodes, Avi Service Engines, and the Controller.
Sufficient IPs are allocated for VIP creation.
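The firewall-related items in the checklist above can be smoke-tested with a simple TCP preflight before deploying Contour. This is a hedged sketch: the endpoint below is a placeholder (TEST-NET-1 address), not a value from the case, and should be replaced with the actual Avi Controller and Service Engine addresses and ports for the environment.

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

# Placeholder endpoint; replace with the real Avi Controller address.
print("avi-controller:443 ->", tcp_reachable("192.0.2.10", 443, timeout=1.0))
```

A False result from a cluster node indicates a firewall or routing gap on the node-to-Avi path of the kind that caused this incident.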
Reference tests used during triage:
Test 1: Basic VM connectivity using curl to validate the T0–T1–Avi–App path.
Test 2: LoadBalancer test using Nginx pods.
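Test 1 above can equally be scripted; the following is a hedged Python stand-in for the curl check that fetches the application and reports the HTTP status. The URL in the comment is a placeholder, not a value from the case.

```python
import urllib.request

def http_status(url: str, timeout: float = 5.0) -> int:
    """Return the HTTP status code for a GET of the given URL."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

# Example (placeholder VIP): http_status("http://192.0.2.10/")
# Run once against the VIP and once against the FQDN to cover both paths.
```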