How to set requests and limits for queue-proxy sidecar containers


Article ID: 297894


Products

VMware Tanzu Application Service for VMs

Issue/Introduction

- A workload service pod consists of two containers: the workload container itself and a sidecar called queue-proxy.
$ kubectl get pod tanzu-java-web-app-00002-deployment-868b484b7c-fzhpj \
  -o jsonpath='{.spec.containers[*].name}'

  workload queue-proxy

 - The CPU/memory requests and limits for the workload container can be configured when creating a workload by passing the parameters below. See Workload Apply flags for more details about the parameters.
  • --request-memory
  • --limit-memory
  • --request-cpu
  • --limit-cpu
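As a sketch, these flags can be passed when creating or updating a workload; the workload name and resource values below are illustrative only:

```shell
# Illustrative only: set requests/limits for the *workload* container.
# The workload name and the resource values are examples, not recommendations.
tanzu apps workload apply tanzu-java-web-app \
  --request-cpu 100m \
  --limit-cpu 1000m \
  --request-memory 128Mi \
  --limit-memory 512Mi
```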

 - However, these flags do not configure the queue-proxy container. Instead, an overlay must be applied to the Cloud Native Runtimes (CNRs) package to set the CPU/memory requests and limits for the queue-proxy container.

Environment

Product Version: 1.3

Resolution

1. Add a package overlay entry to tap-values.yaml (refer to Customize a package that was installed by using a profile):
package_overlays:
- name: cnrs
  secrets:
  - name: cnrs-patch-sidecar
2. Create a Secret containing the required requests and limits:
apiVersion: v1
kind: Secret
metadata:
  name: cnrs-patch-sidecar
  namespace: tap-install
stringData:
  cnrs-patch-sidecar.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"ConfigMap","metadata":{"name":"config-deployment","namespace":"knative-serving"}})
    ---
    data:
      #@overlay/match missing_ok=True
      queue-sidecar-cpu-request: "25m"
      #@overlay/match missing_ok=True
      queue-sidecar-cpu-limit: "1000m"
      #@overlay/match missing_ok=True
      queue-sidecar-memory-request: "50Mi"
      #@overlay/match missing_ok=True
      queue-sidecar-memory-limit: "200Mi"
3. Run the command below to apply the changes:
$ tanzu package installed update tap -n tap-install -v $TAP_VERSION -f tap-values.yaml
4. Confirm the changes were applied by running:
$ kubectl get cm config-deployment -n knative-serving -o jsonpath="{.data.queue-sidecar-cpu-request}"

$ kubectl get cm config-deployment -n knative-serving -o jsonpath="{.data.queue-sidecar-cpu-limit}"

$ kubectl get cm config-deployment -n knative-serving -o jsonpath="{.data.queue-sidecar-memory-request}"

$ kubectl get cm config-deployment -n knative-serving -o jsonpath="{.data.queue-sidecar-memory-limit}"
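All four values can also be read in a single call (a convenience sketch):

```shell
# Print the whole data section of the ConfigMap; the queue-sidecar-* keys
# should show the values set by the overlay.
kubectl get cm config-deployment -n knative-serving -o jsonpath='{.data}'
```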
5. If the changes did not take effect, delete the ConfigMap and wait for kapp to recreate it on the cluster with the new values:
$ kubectl delete cm config-deployment -n knative-serving
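Once new workload pods come up, the effective values on the queue-proxy container can be checked directly; the pod name below is a placeholder:

```shell
# <pod-name> is a placeholder for an actual workload pod name.
# Prints the requests/limits applied to the queue-proxy sidecar.
kubectl get pod <pod-name> \
  -o jsonpath='{.spec.containers[?(@.name=="queue-proxy")].resources}'
```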