Pod incorrectly scheduled to Service Node Pool instead of Management Node Pool in a TCA-deployed TKG Workload Cluster
Article ID: 426676
Products
VMware Telco Cloud Automation
Issue/Introduction
An application-specific Pod is observed running on a node in the Service Node Pool instead of on the expected Management Node Pool.
Environment
TCA 3.2
Cause
The issue is caused by a missing nodeAffinity definition in the Application Override YAML (CSAR inputs) file used during CNF instantiation or update.
Although the Helm Chart supports affinity, the specific keys, or the correct indentation, required to inject the nodeAffinity rule were missing from the active override file.
Consequently, the Kubernetes Scheduler received a Pod manifest without placement restrictions, allowing it to schedule the Pod on any available node with sufficient resources, often a node in the larger Service Node Pool.
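For reference, the kind of nodeAffinity rule that was missing from the rendered Pod specification is sketched below. The label key and value (nodepool / management) are placeholders; the actual label depends on how the Management Node Pool nodes are labeled in the cluster.

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: nodepool          # placeholder label key; use the label applied to Management Node Pool nodes
              operator: In
              values:
              - management           # placeholder label value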
Resolution
To resolve this issue, the Application Override YAML file must be corrected to explicitly define the nodeAffinity for the specific component.
Identify the Missing Configuration:
Review the Helm Chart's values.yaml structure to confirm that the nesting used in the Override YAML matches it.
Ensure the nodeAffinity block is placed under the correct key and references the node labels applied to the Management Node Pool, as sketched below.
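A minimal sketch of the corrected Override YAML, assuming the chart exposes affinity under a component-level key (exampleComponent is a placeholder; use the actual path defined in the chart's values.yaml):

    exampleComponent:              # placeholder; replace with the component key from the chart's values.yaml
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodepool      # placeholder node label key for the Management Node Pool
                operator: In
                values:
                - management       # placeholder label value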
Correct the Override YAML file in the TCA UI and update the CNF:
Log in to the TCA Manager UI.
Navigate to Inventory > Network Functions.
Select the target CNF and click Edit/Update.
In the Input/Override section, paste the corrected YAML containing the full affinity definition (as in the sketch above).
Complete the update workflow.
Once the task finishes, verify that the Pod has been terminated and re-created on the correct node: kubectl get pods -n <namespace> -o wide | grep <pod-name>
Expected Result: The NODE column should now reflect a node from the Management Node Pool.