Scale down the AuthHub Kubernetes cluster for lower environments



Article ID: 257229




VIP Authentication Hub


We are doing a POC for the VIP Auth Hub solution running on Azure Kubernetes Service. The Broadcom requirements for the Kubernetes cluster to run VIP Auth Hub are quite extensive considering this is a DEV/POC environment (the requirement is 6 virtual machines, each with 4 CPUs and 16 GB of RAM). Can you please suggest how we can scale down the Kubernetes cluster for our lower environments?


Release: Oct.04


Here are some recommendations customers can use to scale down the AuthHub environment.

  • We provide sizing calculations, and you can build the cluster based on the recommendations that come out of the sizing sheet provided in this document link.

Sizing the Deployment
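As a point of reference, the baseline figures from the question above work out as follows. This is illustrative arithmetic only; use the sizing sheet linked above for real numbers.

```shell
# Baseline from the question: 6 worker VMs, each 4 vCPU / 16 GB RAM.
NODES=6
CPU_PER_NODE=4
MEM_PER_NODE_GB=16

TOTAL_CPU=$((NODES * CPU_PER_NODE))
TOTAL_MEM_GB=$((NODES * MEM_PER_NODE_GB))

echo "Baseline cluster capacity: ${TOTAL_CPU} vCPU, ${TOTAL_MEM_GB} GB RAM"
# Per the guidance below, a low-volume dev environment may instead fit on
# a single 8-16 vCPU worker node.
```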

  • From the Oct base release onwards, we provide a demo-mode installation that uses minimal resources and scales each pod down to a single replica. Here are the release notes and instructions:

VIP Authentication Hub Oct Release notes
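Demo mode applies this minimal footprint at install time. As a rough manual equivalent on an existing installation, you can scale the deployments down to one replica each; the `ssp` namespace below is an assumption, so substitute the namespace where AuthHub is actually installed.

```shell
# Sketch only: demo mode does the equivalent of this at install time.
# The namespace "ssp" is an assumption -- use your AuthHub namespace.
kubectl -n ssp scale deployment --all --replicas=1
```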

  • Additionally, you can run with only 3 nodes and provision the 4th node when you plan an upgrade, as it may be required at that time.
  • Environments should be sized to support the expected load per the sizing calculation. If a dev environment has very little volume and needs no HA, a much smaller number of CPUs will suffice: with CSP, one 16-CPU VM worker node should be fine for low-volume shared dev usage, perhaps 8 CPUs if no other software is running there.
  • All clusters can be scaled down when not in use, either automatically or manually.
  • Maximize the use of tenants, as each tenant is a full service. All dev environments should be individual tenants in one cluster.
  • Staging cluster VMs used for performance testing can be enabled/disabled as needed.
  • If you are comfortable tweaking the deployment, you can also try fitting more pods on a single node, but this has to be done carefully so resources are utilized properly; otherwise you may start seeing out-of-memory (OOM) errors.
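The last few points above can be sketched with standard Azure CLI and kubectl commands. All names below (resource group, cluster, node pool, namespace, deployment) are placeholders, and the request/limit values are examples only, not tested recommendations.

```shell
# Stop a dev/staging AKS cluster while it is not in use, and start it
# again when needed (manual scale-down of the whole cluster):
az aks stop  --resource-group rg-authhub-dev --name authhub-dev
az aks start --resource-group rg-authhub-dev --name authhub-dev

# Or shrink the node pool instead of stopping the cluster entirely:
az aks nodepool scale --resource-group rg-authhub-dev \
  --cluster-name authhub-dev --name nodepool1 --node-count 1

# To fit more pods per node, lower the CPU/memory requests of a
# deployment. Do this gradually and watch for OOMKilled pods, since
# requests set too low lead to overcommit and out-of-memory errors.
kubectl -n ssp set resources deployment example-authhub-deployment \
  --requests=cpu=250m,memory=512Mi --limits=cpu=1,memory=1Gi
```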