Antrea-agent restarting continuously with error "Spec.PodCIDR is empty"

Article ID: 425788


Products

VMware vSphere Kubernetes Service

Issue/Introduction

Antrea-agent pods in a guest cluster are stuck in a continuous CrashLoopBackOff.

# kubectl get pod -A | grep antrea

kube-system                    antrea-agent-8jxtr                                        1/2     CrashLoopBackOff    5 (107s ago)   7m28s
kube-system                    antrea-agent-nrpc8                                        1/2     CrashLoopBackOff    5 (68s ago)    6m51s
kube-system                    antrea-agent-ns9w8                                        2/2     Running             0              160m
kube-system                    antrea-controller-977ddfd7-x6nfb                          1/1     Running             0              160m
vmware-system-antrea           antrea-pre-upgrade-job-g5ssq                              0/1     Completed           0              160m

The antrea-agent pod logs show the following error:

# kubectl logs -n kube-system antrea-agent-8jxtr

E0116 07:50:52.897559       1 agent.go:911] "Spec.PodCIDR is empty for Node. Please make sure --allocate-node-cidrs is enabled for kube-controller-manager and --cluster-cidr specifies a sufficient CIDR range, or nodeIPAM is enabled for antrea-controller" err="context deadline exceeded" nodeName="xxxxxxxxxx"
F0116 07:50:52.897944       1 main.go:54] Error running agent: error initializing agent: Spec.PodCIDR is empty for Node xxxxxxxxxx

Environment

vSphere with Tanzu

Cause

The "Spec.PodCIDR is empty" error occurs when the Kubernetes kube-controller-manager runs out of subnets to assign to new nodes.

In this scenario, the cluster was deployed with a pod CIDR block of 172.X.X.0/24. Because the kube-controller-manager carves a /24 subnet out of the cluster CIDR for each node by default, a /24 cluster CIDR can accommodate only one node. The first control plane node consumes the only available /24 block, leaving no pod CIDR for the remaining nodes.
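
To confirm this, compare each node's PodCIDR allocation with the cluster CIDR configured on the kube-controller-manager. The commands below use standard kubectl options; the component=kube-controller-manager label assumes the usual kubeadm-style control plane labels used by guest clusters.

# kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

# kubectl get pods -n kube-system -l component=kube-controller-manager -o yaml | grep cluster-cidr

Nodes that show <none> in the PODCIDR column are the ones whose antrea-agent pods crash-loop.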

Resolution

Pod CIDR ranges cannot be changed on an existing cluster; the cluster must be redeployed with a larger range or a different mask configuration. A /16 cluster CIDR is recommended, which supports up to 256 nodes at the default /24 per-node mask.
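
For illustration, the network section of a TanzuKubernetesCluster spec (v1alpha2 API) that sets a /16 pod CIDR at deployment time could look like the following sketch. The name, namespace, and address ranges are placeholders, and the required topology and distribution sections are omitted; adjust all values to your environment.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: example-guest-cluster     # placeholder
  namespace: example-namespace    # placeholder
spec:
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
          - 172.16.0.0/16         # /16 instead of /24: room for 256 node subnets at the default /24 mask
      services:
        cidrBlocks:
          - 10.96.0.0/12          # placeholder service CIDR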

Refer to the following KB article for more information on changing the Service and Pod CIDR ranges in a vSphere Kubernetes guest cluster.

Changing Service and Pod CIDR Ranges in vSphere Kubernetes Guest Cluster

Additional Information

Japanese KB: Antrea-agent restarting continuously with error "Spec.PodCIDR is empty"