| Log location | Details |
| /var/log/vmware/prelude/upgrade-*.log | Upgrade reports; review based on timestamp |
| (one-node and cluster environments) | Package installation details |
| /var/log/bootstrap/postupdate.log | Initialization script details |
| /var/log/bootstrap/everyboot.log | Initialization script details |
| /var/log/vmware/prelude/deploy-*.log | Service startup details |
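When reviewing these logs on the appliance, a quick first pass is to pull error lines from the newest upgrade report. This is a minimal sketch: the `LOGDIR` variable and the `error|fail` search pattern are illustrative assumptions, not part of the product tooling.

```shell
# Point LOGDIR at the live path from the table above, or at a copied
# log bundle. Defaults to the live upgrade-report directory.
LOGDIR="${LOGDIR:-/var/log/vmware/prelude}"

# Newest upgrade report first (the table says to review by timestamp).
latest=$(ls -t "$LOGDIR"/upgrade-*.log 2>/dev/null | head -n 1)

# Print lines that mention errors or failures, if a report exists.
[ -n "$latest" ] && grep -iE 'error|fail' "$latest" || true
```

The same pattern works against the /var/log/bootstrap logs by changing `LOGDIR` and the filename glob.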
SSH to the VMware Aria Automation node indicated in the Aria Suite Lifecycle error.
Validate that there is available disk space in the root partition (/dev/sda4) by running the command vracli disk-mgr:
root@vranode1 [ /tmp ]# vracli disk-mgr
/dev/sda4(/):
    Total size: 47.80GiB
    Free: 33.58GiB(70.2%)
    Available(for non-superusers): 31.13GiB(65.1%)
    SCSI ID: (0:0)
/dev/sdb(/data):
    Total size: 140.68GiB
    Free: 109.54GiB(77.9%)
    Available(for non-superusers): 102.32GiB(72.7%)
    SCSI ID: (0:1)
/dev/sdc(/var/log):
    Total size: 21.48GiB
    Free: 9.09GiB(42.3%)
    Available(for non-superusers): 7.97GiB(37.1%)
    SCSI ID: (0:2)
/dev/sdd(/home):
    Total size: 29.36GiB
    Free: 27.41GiB(93.4%)
    Available(for non-superusers): 25.90GiB(88.2%)
    SCSI ID: (0:3)
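The free-space check can also be scripted with the standard df utility, which reports on the same filesystems vracli disk-mgr shows above. This is a sketch only: the 20% threshold is an assumed example value, not a documented requirement.

```shell
# Compute the free-space percentage on the root partition from df
# (GNU coreutils --output option).
used=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
free=$((100 - used))

# THRESHOLD is an assumed value for illustration only.
THRESHOLD=20
if [ "$free" -lt "$THRESHOLD" ]; then
    echo "WARNING: only ${free}% free on /"
else
    echo "Root free space looks OK: ${free}%"
fi
```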
Run the following command to collect the directories and logs related to the upgrade:
mkdir /tmp/upgradelogs && \
cp -R /var/log/vmware/prelude /tmp/upgradelogs && \
cp -R /opt/vmware/var/log/vami /tmp/upgradelogs && \
cp -R /var/log/bootstrap /tmp/upgradelogs && \
tar -zcvf /tmp/upgradelogs.tar.gz /tmp/upgradelogs
Extract the collected file /tmp/upgradelogs.tar.gz and continue your review for the failure code, or submit this data to Global Services for additional assistance in troubleshooting the upgrade.
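To review the bundle, it can be extracted and the logs listed newest-first. This sketch assumes the archive is at /tmp/upgradelogs.tar.gz as produced above; the destination directory is an arbitrary choice.

```shell
ARCHIVE=/tmp/upgradelogs.tar.gz      # bundle produced by the command above
DEST=/tmp/upgradelogs-review         # arbitrary working directory

if [ -f "$ARCHIVE" ]; then
    mkdir -p "$DEST"
    tar -xzf "$ARCHIVE" -C "$DEST"
    # List the most recently modified logs first to start the review.
    find "$DEST" -name '*.log' -exec ls -t {} + | head
fi
```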
Once the review is complete, remove the collected files:
cd /tmp
rm upgradelogs.tar.gz
rm -r upgradelogs
Build numbers and versions for VMware Aria Automation (formerly VMware vRealize Automation)
Troubleshooting VMware Aria Automation cloud proxies and On-Premises appliance deployments
Upgrade of Cluster VRA 8.x fails with Split brain scenario
Upgrade from vRA or vRO to newer may fail if there are certain records in the known_hosts file of the virtual appliance
vRealize Automation 8.x upgrade failed when iptables.service did not start