Workaround:
Ensure that valid snapshots have been taken prior to performing any actions. Do not create live snapshots. For vRealize Automation 8.x, take cold (powered-down) snapshots.
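For example, a cold snapshot cycle for one appliance node could be scripted with the govc CLI (govc and the node name vra-node-1 are assumptions for illustration and are not part of the appliance):
govc vm.power -s vra-node-1            # graceful guest OS shutdown; wait for the VM to power off
govc snapshot.create -vm vra-node-1 pre-maintenance
govc vm.power -on vra-node-1
Repeat for every node in the cluster before proceeding.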
It is strongly encouraged to have a rigorous backup procedure that runs on a daily schedule.
It is recommended to have redundant network paths between the ESXi hosts that run the Aria Automation appliance nodes.
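As a quick check of uplink redundancy, the failover policy can be inspected on each ESXi host (vSwitch0 is an assumption; adapt it to the switch that carries the appliance port group):
esxcli network vswitch standard policy failover get -v vSwitch0
Confirm that more than one adapter is listed across the Active and Standby adapter lists.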
Workaround:
- On any one of the vRealize Automation virtual appliances, run the following command once:
vracli cluster exec -- touch /data/db/live/debug
Note: This creates a flag file on all cluster nodes that pauses the database pods when they start, so they can then be worked with manually.
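Optionally, verify that the flag file was created on every node by reusing the same mechanism:
vracli cluster exec -- ls -l /data/db/live/debug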
- Restart the postgres-1 and postgres-2 pods:
kubectl delete pod -n prelude postgres-1; kubectl delete pod -n prelude postgres-2;
Note: This restarts the postgres-1 and postgres-2 pods. Because of the debug flag, they will stop and wait instead of starting vPostgres.
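The restart can be watched until both pods are re-created, for example:
kubectl get pods -n prelude -l name=postgres -w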
- Identify the nodes on which the postgres-1 and postgres-2 pods are now running:
kubectl get pods -n prelude -l name=postgres -o wide
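The NODE column of the output identifies the hosting node. Expect output similar to the following (node names and addresses are illustrative):
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE
postgres-0   1/1     Running   0          5d    10.244.1.10   vra-node-1
postgres-1   0/1     Running   0          2m    10.244.2.11   vra-node-2
postgres-2   0/1     Running   0          2m    10.244.3.12   vra-node-3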
- On the node where postgres-1 is running, execute the following command to remove the debug flag:
rm /data/db/live/debug
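If working from a remote shell rather than the node console, the same removal can be done over SSH (the hostname vra-node-2 is illustrative):
ssh root@vra-node-2 'rm /data/db/live/debug'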
- Run:
kubectl -n prelude logs -f postgres-1
- Monitor the logs and ensure that postgres-1 discovers postgres-0 as the primary, re-syncs from it, and starts working. On success, a message similar to the following will appear in the postgres-1 log:
'[repmgrd] monitoring primary node "postgres-0.postgres.prelude.svc.cluster.local" (ID: 100) in normal state'
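As a convenience, the log stream can be filtered so the command exits as soon as the message appears:
kubectl -n prelude logs -f postgres-1 | grep -m1 'monitoring primary node'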
- Repeat the same steps for postgres-2.
- Finally, remove the /data/db/live/debug file on the node where postgres-0 is running.
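To confirm the cleanup, verify that the flag file is gone on all nodes and that all three pods are Running and Ready:
vracli cluster exec -- ls /data/db/live/debug
(every node should report 'No such file or directory')
kubectl get pods -n prelude -l name=postgres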