Symptoms:
See the Workaround below for additional information.
Workaround:
Run the following commands to scale the replicas down to zero:
kubectl scale deployment orchestration-ui-app --replicas=0 -n prelude
kubectl scale deployment vco-app --replicas=0 -n prelude
sleep 120
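The sleep allows the pods time to terminate. Optionally, confirm that the pods are gone before scaling back up (a generic check; pod names carry generated suffixes):
kubectl get pods -n prelude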
Run the following commands to scale the replicas back up, using 1 for a single-node deployment or 3 for a clustered deployment (see the clustered example below):
kubectl scale deployment orchestration-ui-app --replicas=1 -n prelude
kubectl scale deployment vco-app --replicas=1 -n prelude
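For a clustered (three-node) deployment, the same commands take --replicas=3:
kubectl scale deployment orchestration-ui-app --replicas=3 -n prelude
kubectl scale deployment vco-app --replicas=3 -n prelude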
Probe configuration excerpt for the vco-server-app container of the vco-app deployment (the leading fields are the tail of a preceding, truncated probe definition):
initialDelaySeconds: 180
timeoutSeconds: 10
periodSeconds: 30
successThreshold: 1
failureThreshold: 20
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /vco/api/health/liveness
    port: 8280
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 30
name: vco-server-app
ports:
- containerPort: 8280
  protocol: TCP
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /vco/api/health/readiness
    port: 8280
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 30
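To review or adjust these probe settings in a running environment, standard kubectl commands can be used (a sketch; the deployment name and namespace match those above):
kubectl get deployment vco-app -n prelude -o yaml
kubectl edit deployment vco-app -n prelude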
Increase the heap memory with the "vracli vro" commands as per the documentation (Additional command line interface configuration options), then run /opt/scripts/deploy.sh to apply the change.
NOTE: If the pods keep going into CrashLoopBackOff and generating heap dumps, it is likely that Orchestrator is automatically retrying the failed workflow when it restarts. In this situation, you will need to cancel all executions:
vracli vro cancel executions
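To check whether the pods have stabilized after cancelling the executions, a generic status check can be used (pod names carry generated suffixes):
kubectl get pods -n prelude | grep vco-app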
For example, if the desired heap memory is 7G, then the vRealize Orchestrator Appliance RAM should be increased by 4G, because the difference between the default heap value of 3G and the desired heap memory of 7G is 4G.
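A minimal worked example of that sizing rule, using the values above:
desired heap   = 7G
default heap   = 3G
RAM increase   = 7G - 3G = 4G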
Impact/Risks:
VMware Aria Automation or Automation Orchestrator fails to properly boot. Workflows will fail to run until this is resolved.