The volume had become 100% full, which will almost certainly cause problems.
After deleting all of the CommitLog* files and rebooting the MLS appliance, all containers did indeed transition to a 'Healthy' state.
Despite all containers now being in a 'Healthy' state, there were no registered subscriptions (as reported by SYSVIEW TSDSTATS), so no metric data was flowing.
We suspected that another reboot (after starting the ZMSSTART address spaces) would be enough to re-register the subscriptions. The customer rebooted the MLS appliance this morning: all containers again transitioned to 'Healthy', the subscriptions were re-registered, and metric data flow resumed.
Space used on the /lvm* file systems on the MLS appliance is still between 55% and 76%, so the space issue (believed to be a likely cause of the original data stoppage) remains unaddressed. The customer opened an internal service desk ticket requesting that the VMware team add three additional storage devices to the MLS appliance.
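A quick way to watch the pool usage going forward is a df loop over the three mount points. The /lvm1, /lvm2, /lvm3 paths come from these notes; the awk field positions rely on df -P's portable output format:

```shell
# Report %used for each MLS storage pool; degrades gracefully on hosts
# where the pools are not mounted.
OUT=""
for fs in /lvm1 /lvm2 /lvm3; do
  if [ -d "$fs" ]; then
    # df -P: one header line, then "filesystem size used avail %used mountpoint"
    line=$(df -P "$fs" | awk 'NR==2 {print $6, $5}')
  else
    line="$fs: not mounted on this host"
  fi
  OUT="$OUT$line
"
done
printf '%s' "$OUT"
```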
Three additional 200 GB drives were attached to the customer's MLS appliance x86 VMware guest. We walked through using the moi-diskutil.sh format command to format the drives, and the moi-diskutil.sh lvmadd command to add them to the existing lvm1, lvm2, and lvm3 storage pools. Space used on those pools was between 55% and 85% beforehand; after the drives were added it dropped to the 12-15% range for each pool. The distributed scripts worked flawlessly, letting us dynamically add disk space while MLS was still running. The customer resumed streaming metric data from the remaining production LPARs.
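The procedure above can be sketched as a dry-run script. The device names (/dev/sdd through /dev/sdf), the one-drive-per-pool pairing, and the argument order of moi-diskutil.sh are assumptions; confirm them against the script's own usage output and lsblk before running live:

```shell
# Dry-run sketch: prints each command instead of executing it.
# Replace the echo with "$@" once the arguments are verified.
run() { echo "+ $*"; }

# 1. Format each newly attached 200 GB drive (hypothetical device names)
for dev in /dev/sdd /dev/sde /dev/sdf; do
  run ./moi-diskutil.sh format "$dev"
done

# 2. Add one formatted drive to each existing storage pool
run ./moi-diskutil.sh lvmadd lvm1 /dev/sdd
run ./moi-diskutil.sh lvmadd lvm2 /dev/sde
run ./moi-diskutil.sh lvmadd lvm3 /dev/sdf
```

Keeping it as a dry run first makes it easy to review the exact command sequence before touching a running appliance.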
If commit logs need to be cleaned up before restarting the appliance, delete only the specific commit log files that are causing problems. If the customer does not mind losing all current (unflushed) data, they can instead simply delete all files in the commit log directories.
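The two cleanup options look like the following. This demonstration uses a scratch directory and made-up segment names; on the appliance the commit log directory and the CommitLog-* file names come from the actual Cassandra data path, so substitute the real values:

```shell
# Scratch directory standing in for the real commit log directory.
COMMITLOG_DIR=$(mktemp -d)
touch "$COMMITLOG_DIR/CommitLog-6-001.log" "$COMMITLOG_DIR/CommitLog-6-002.log"

# Preferred: delete only the segment identified as the problem
# (e.g. the file named in the container's error output).
rm "$COMMITLOG_DIR/CommitLog-6-001.log"

# Last resort, only if losing all unflushed data is acceptable:
# clear every remaining segment.
rm "$COMMITLOG_DIR"/CommitLog*

ls "$COMMITLOG_DIR"    # directory is now empty
```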