Step 1: Generate a Template (Optional but Recommended)
Since the TcaNodePool schema is complex, it is safest to export an existing node pool configuration to use as a template.
- Find an existing node pool in your target namespace:
kubectl get tknp -n <your-namespace>
- Export it to a YAML file:
kubectl get tknp <existing-nodepool-name> -n <your-namespace> -o yaml > new-tknp.yaml
- Clean up the YAML file:
Remove metadata.uid, metadata.resourceVersion, metadata.creationTimestamp, metadata.generation, and the status section.
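If yq (v4) is available — an assumption, it is not required by TCA and any text editor works — the cleanup can be done in one command:
# Assumes mikefarah yq v4; strips the server-generated fields in place
yq eval -i 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.generation, .status)' new-tknp.yaml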
Step 2: Define the TcaNodePool Manifest
Create or edit your YAML file (e.g., new-tknp.yaml). Below is a structural example of what the TcaNodePool resource looks like. Adjust the replicas, memory, cpu, and network settings to match your requirements.
apiVersion: telco.vmware.com/v1
kind: TcaNodePool
metadata:
  name: <new-nodepool-name>        # e.g., worker-pool-gpu-01
  namespace: <target-namespace>    # The namespace where the Cluster exists
  labels:
    cluster.x-k8s.io/cluster-name: <cluster-name>  # Must match the Cluster name
spec:
  clusterName: <cluster-name>      # Link to the parent Cluster
  replicas: 3                      # Number of worker nodes
  # Infrastructure definition (simplified example)
  nodeConfig:
    memory: 16384                  # RAM in MB
    cpu: 4                         # vCPUs
    storage: 50                    # Disk in GB
  # Network settings (crucial for TCA)
  networks:
    - name: <network-name>         # e.g., "sriov-network-1"
      label: <network-label>
  # Kubernetes settings
  k8sConfig:
    version: <k8s-version>         # e.g., v1.20.5+vmware.1
Note: The exact fields under spec (like vsphere, networks, or customizations) depend heavily on your underlying infrastructure (vSphere, VCD, OpenStack) and TCA version. Always refer to an existing tknp in your environment for the correct schema.
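If the TcaNodePool CRD publishes a structural schema, kubectl explain can show the fields your cluster actually accepts. The short name tknp is assumed to resolve through API discovery; fall back to the full CRD name if it does not:
kubectl explain tknp.spec
kubectl explain tknp.spec --recursive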
Step 3: Apply the Manifest
Once your YAML file is ready, apply it to the cluster:
kubectl apply -f new-tknp.yaml
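If you want to validate first, a server-side dry run checks the manifest against the API server (and any admission webhooks) without persisting anything:
kubectl apply -f new-tknp.yaml --dry-run=server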
Step 4: Verify Creation
Monitor the status of the new node pool.
1. Check if the resource was accepted:
kubectl get tknp -n <namespace>
2. Watch the associated Machine deployments. TCA (via the Cluster API) will begin creating the actual virtual machines (Machines) corresponding to your new pool:
kubectl get machines -n <namespace> -l cluster.x-k8s.io/deployment-name=<new-nodepool-name>
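To dig deeper while the pool provisions, describe the resource and watch the machines as they come up; the exact status fields reported depend on your TCA version, so treat the output as environment-specific:
kubectl describe tknp <new-nodepool-name> -n <namespace>
kubectl get machines -n <namespace> -w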