VMware Cloud Foundation VRM VM vrm-tcserver log file catalina.out growing to fill root filesystem

Article ID: 330363


Products

VMware Cloud Foundation

Issue/Introduction

Symptoms:
  • Most VMware Cloud Foundation operations fail due to the / filesystem being almost full.
  • In the /home/vrack/vrm/logs/lcm.log file of the SDDC Manager VM, you see entries similar to:
Nov 20, 2017 5:07:33 PM : Upgrade scheduled,
Nov 20, 2017 5:15:12 PM : Upgrade status changed to INPROGRESS,
Nov 20, 2017 5:15:18 PM : Upgrade element resourceType: ESX_HOST resourceId: 1e03e170-e056-4401-9180-e2c4d78d897e status changed to INPROGRESS,
Nov 20, 2017 5:15:18 PM : Upgrade element resourceType: ESX_HOST resourceId: 1e03e170-e056-4401-9180-e2c4d78d897e recorded stage ESX_HOST_UPGRADE_STAGE_MAKE_VIM_CONNECTIONS,
Nov 20, 2017 5:15:20 PM : Successfully ran upgrade stage ESX_HOST_UPGRADE_STAGE_MAKE_VIM_CONNECTIONS,
Nov 20, 2017 5:15:20 PM : Upgrade element resourceType: ESX_HOST resourceId: 1e03e170-e056-4401-9180-e2c4d78d897e recorded stage ESX_HOST_UPGRADE_STAGE_PRECHECK,
Nov 20, 2017 5:15:27 PM : Successfully ran upgrade stage ESX_HOST_UPGRADE_STAGE_PRECHECK,
Nov 20, 2017 5:15:27 PM : Upgrade element resourceType: ESX_HOST resourceId: 1e03e170-e056-4401-9180-e2c4d78d897e recorded stage ESX_HOST_UPGRADE_STAGE_DISABLE_DRS_RULES,
Nov 20, 2017 5:15:27 PM : Successfully ran upgrade stage ESX_HOST_UPGRADE_STAGE_DISABLE_DRS_RULES,
Nov 20, 2017 5:15:27 PM : Upgrade element resourceType: ESX_HOST resourceId: 1e03e170-e056-4401-9180-e2c4d78d897e recorded stage ESX_HOST_UPGRADE_STAGE_TAKE_BACKUP,
Nov 20, 2017 5:15:27 PM : Upgrade element resourceType: ESX_HOST resourceId: 1e03e170-e056-4401-9180-e2c4d78d897e status changed to COMPLETED_WITH_FAILURE,
Nov 20, 2017 5:15:28 PM : Upgrade element resourceType: ESX_HOST resourceId: 89225694-7261-45a1-9dd8-aec90a268d09 status changed to SKIPPED,
Nov 20, 2017 5:15:28 PM : Upgrade element resourceType: ESX_HOST resourceId: 98dcbd68-e9f0-46a8-b592-044036d662d0 status changed to SKIPPED,  
Nov 20, 2017 5:15:28 PM : Upgrade element resourceType: ESX_HOST resourceId: d32311da-32fc-4c01-b663-f9f76e1487a5 status changed to SKIPPED,
Nov 20, 2017 5:15:28 PM : Upgrade status changed to COMPLETED_WITH_FAILURE,

 
  • In the sos.log file of the SDDC Manager VM, you see entries similar to the following during SoS log bundle creation:
rack-1-vrm-1:/home/vrack/bin # /opt/vmware/evosddc-support/sos

Welcome to Supportability and Serviceability(SoS) utility!

Logs : /var/tmp/sos-2017-11-21-13-12-11-30296

Log file : /var/tmp/sos-2017-11-21-13-12-11-30296/sos.log

Progress : 43%, Completed tasks : [AUDIT, ISVM-CASSANDRA, ISVM-ZOOKEEPER, ESX, SWITCH, HEALTH-CHECK, vCenter Serve
Timeout on command: "/bin/bash -c "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no [email protected] 'cd /;tar -ohcvf - opt/vrack/hms/config opt/vrack/hms/logs opt/vrack/upgrade var/log --exclude='temp*.zip'' > /var/tmp/sos-2017-11-21-13-12-11-30296/rack-1/hms.tar ; gzip -f /var/tmp/sos-2017-11-21-13-12-11-30296/rack-1/hms.tar""

Progress : 50%, Completed tasks : [AUDIT, ISVM-CASSANDRA, HMS, ISVM-ZOOKEEPER, ESX, SWITCH, HEALTH-CHECK, vCenter
Progress : 60%, Completed tasks : [AUDIT, ISVM-CASSANDRA, HMS, ISVM-ZOOKEEPER, ESX, SWITCH, LCM, HEALTH-CHECK, vCe
Progress : 66%, Completed tasks : [AUDIT, ISVM-CASSANDRA, HMS, ISVM-ZOOKEEPER, ESX, NSX, SWITCH, LCM, HEALTH-CHECK
Progress : 73%, Completed tasks : [AUDIT, ISVM-CASSANDRA, HMS, ISVM-ZOOKEEPER, PSC, ESX, NSX, SWITCH, LCM, HEALTH-
Progress : 80%, Completed tasks : [AUDIT, ISVM-CASSANDRA, HMS, ISVM-ZOOKEEPER, PSC, ESX, NSX, SWITCH, HMS-ESX-NODE
[Errno 28] No space left on device

Traceback (most recent call last):
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 885, in emit
    self.flush()
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 845, in flush
    self.stream.flush()
IOError: [Errno 28] No space left on device
Logged from file base.py, line 218
Traceback (most recent call last):
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 885, in emit
    self.flush()
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 845, in flush
    self.stream.flush()
IOError: [Errno 28] No space left on device
Logged from file base.py, line 220
Traceback (most recent call last):
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 885, in emit
    self.flush()
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 845, in flush
    self.stream.flush()
IOError: [Errno 28] No space left on device
Logged from file base.py, line 132
Traceback (most recent call last):
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 885, in emit
    self.flush()
  File "/opt/vmware/lib/python2.7/logging/__init__.py", line 845, in flush
IOError: [Errno 28] No space left on device

 
Note: The preceding log excerpts are only examples. Date, time, and environmental variables may vary depending on your environment.


Cause

This issue occurs because the /home/vrack/vrm/logs/catalina.out file is not rotated automatically: it grows until the / (root) filesystem fills, unless the file is manually moved or removed.
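
To confirm that catalina.out is the file consuming the space, compare the root filesystem usage with the size of the log file. For example (illustrative commands; output varies per environment):

df -h /
ls -lh /home/vrack/vrm/logs/catalina.out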

Resolution

This is a known issue affecting VMware Cloud Foundation. Currently, there is no resolution.

Workaround:
To work around this issue on the VRM (2.1) or SDDC Manager Controller (2.2) VM, relocate the /home/vrack/vrm/logs/catalina.out file to another system.
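
Note that the vrm-tcserver service keeps catalina.out open, so simply moving or deleting the file does not free the space until the service is restarted. A safer pattern is to copy the file off the VM and then truncate the original in place. A minimal sketch, assuming a reachable backup host and destination path of your choosing:

# Copy the log to another system (host and path below are placeholders).
scp /home/vrack/vrm/logs/catalina.out user@backup-host:/path/to/archive/
# Truncate the live file in place so the open file handle releases the space.
> /home/vrack/vrm/logs/catalina.out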

To automatically rotate the catalina.out (vrm-tcserver service) log file, add it to the logrotate process configuration:
 


 
  1. SSH to the VM as the root user:
  • For VMware Cloud Foundation 2.1, use the VRM VM (192.168.100.108).
  • For VMware Cloud Foundation 2.2, use the SDDC Manager Controller VM (192.168.100.40).
  2. Create the /etc/logrotate.d/tomcat file using the following command:
echo -e "/home/vrack/vrm/logs/catalina.out {\n\tcopytruncate\n\tdaily\n\trotate 7\n\tcompress\n\tmissingok\n\tsize 500M\n}" > /etc/logrotate.d/tomcat
  3. Verify that the file was created properly by running the cat /etc/logrotate.d/tomcat command. The output looks similar to:
/home/vrack/vrm/logs/catalina.out {
        copytruncate
        daily
        rotate 7
        compress
        missingok
        size 500M
}
 
  4. Set the proper permissions on the /etc/logrotate.d/tomcat file using the following command:
chmod 644 /etc/logrotate.d/tomcat
  5. Either wait until 0:05 UTC for the /etc/cron.daily/logrotate cron job to run automatically, or manually run logrotate using the following command:
/usr/sbin/logrotate /etc/logrotate.conf
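
Optionally, to preview what logrotate would do without making any changes, run it in debug (dry-run) mode against this configuration file:

/usr/sbin/logrotate -d /etc/logrotate.d/tomcat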


About the logrotate configuration for tomcat (created in step 2 and shown in step 3):
  • /home/vrack/vrm/logs/catalina.out – the log file to be rotated
  • copytruncate – copies the log file and then truncates the original in place, so vrm-tcserver can keep writing to the same open file
  • daily – rotates catalina.out daily
  • rotate 7 – keeps at most 7 rotated log files
  • compress – compresses the rotated files
  • missingok – does not report an error if the log file is missing
  • size 500M – rotates if the size of catalina.out is greater than 500 MB
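
To confirm that rotation is working, list the log directory; rotated, compressed copies appear alongside the live file (the file names below are what logrotate typically produces, such as catalina.out.1.gz):

ls -lh /home/vrack/vrm/logs/catalina.out*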
 


Additional Information

This is a known issue with a variety of VMware appliances that use Tomcat for web services (with various log file locations), including vCenter Server Appliance (VCSA) 6.0, as described in /storage/log directory is full in vCenter Server Appliance 6.0.

To be alerted when this document is updated, click the Subscribe to Article link in the Actions box.