# kubectl-vsphere login --server <Supervisor_Control_Plane_Node_IP_Address> --vsphere-username <username> --insecure-skip-tls-verify -v 10
DEBU[YYYY-MM-DD HH:MM:SS] User passed verbosity level: 10
DEBU[YYYY-MM-DD HH:MM:SS] Setting verbosity level: 10
DEBU[YYYY-MM-DD HH:MM:SS] Setting request timeout:
DEBU[YYYY-MM-DD HH:MM:SS] login called as: kubectl-vsphere login --server <Supervisor_Control_Plane_Node_IP_Address> --vsphere-username <username> --insecure-skip-tls-verify -v 10
DEBU[YYYY-MM-DD HH:MM:SS] Creating wcp.Client for <Supervisor_Control_Plane_Node_IP_Address>.
INFO[YYYY-MM-DD HH:MM:SS] Got unexpected HTTP error: Head "https://<Supervisor_Control_Plane_Node_IP_Address>/sdk/vimServiceVersions.xml": EOF
ERRO[YYYY-MM-DD HH:MM:SS] Error occurred during HTTP request: Get "https://<Supervisor_Control_Plane_Node_IP_Address>/wcp/loginbanner": EOF
There was an error when trying to connect to the server.
Please check the server URL and try again.
FATA[YYYY-MM-DD HH:MM:SS] Error while connecting to host <Supervisor_Control_Plane_Node_IP_Address>: Get "https://<Supervisor_Control_Plane_Node_IP_Address>/wcp/loginbanner": EOF.
"512 worker_connections are not enough while connecting to upstream" is observed:root@SVCP [ ~ ]# k get pods -A | grep plugin-vsphere
kube-system kubectl-plugin-vsphere-[SVCP-VM2] 1/1 Running 0 7d
kube-system kubectl-plugin-vsphere-[SVCP-VM1] 1/1 Running 0 7d
kube-system kubectl-plugin-vsphere-[SVCP-VM3] 1/1 Running 0 7d
root@SVCP [ ~ ]# k logs -n kube-system kubectl-plugin-vsphere-[SVCP-VM2] | grep "worker_connections are not" | tail -n10
[...]YYYY-MM-DD HH:MM:SS[alert] 6#0: *973680 512 worker_connections are not enough while connecting to upstream, client: x.x.x.x, server: default, request: "GET /apis/vmoperator.vmware.com/v1alpha2/namespaces/<vsphere-namespace>/virtualmachines?allowWatchBookmarks=true&resourceVersion=100400720&watch=true HTTP/2.0", upstream: "https://127.0.0.1:6443/apis/vmoperator.vmware.com/v1alpha2/namespaces/<vsphere-namespace>/virtualmachines?allowWatchBookmarks=true&resourceVersion=100400720&watch=true", host: "x.x.x.x"
The kubectl-plugin-vsphere pods have a default limit of 512 concurrent active HTTP connections. When this limit is exceeded, new connections are blocked and fail.

Workaround
The workaround is to increase the worker_connections limit in the configuration file /etc/vmware/wcp/nginx/nginx.conf on each Supervisor Control Plane node in each affected cluster.
Important: These changes are reverted whenever the node is redeployed, for example during a rolling Supervisor update, and must then be reapplied.
First, back up the current configuration:

cp /etc/vmware/wcp/nginx/nginx.conf /etc/vmware/wcp/nginx/nginx.conf.bak2
Then edit nginx.conf and increase the value of worker_connections in the events block, for example from 512 to 1024:

events {
worker_connections 1024;
}
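The edit above can also be applied with sed instead of a manual edit. This is a minimal sketch, assuming the directive appears exactly as "worker_connections 512;" inside the events block; the /tmp path is only used here to demonstrate the substitution safely before touching the real file on a Supervisor node.

```shell
# Demonstrate the substitution on a scratch copy first; on a real
# Supervisor node the target is /etc/vmware/wcp/nginx/nginx.conf.
cat > /tmp/nginx.conf <<'EOF'
events {
worker_connections 512;
}
EOF

# Raise the per-worker connection limit from 512 to 1024.
# Assumes the directive reads exactly "worker_connections 512;".
sed -i 's/worker_connections[[:space:]]*512;/worker_connections 1024;/' /tmp/nginx.conf

grep worker_connections /tmp/nginx.conf
# -> worker_connections 1024;
```

Verify the resulting file before restarting anything; if the directive is formatted differently on your build, the pattern will not match and the file is left unchanged.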
Restart the kubectl-plugin-vsphere container so that nginx picks up the new configuration:

crictl stop $(crictl ps -q --name kubectl-plugin-vsphere)

Verify that the container is running again:

crictl ps --name kubectl-plugin-vsphere

Follow the logs to confirm that the worker_connections alerts no longer appear:

crictl logs -f --since 1m $(crictl ps -q --name kubectl-plugin-vsphere)