"Failed to start File System Check on /dev/vg_root_0/lv_root0" error on Photon OS-based virtual appliances

Article ID: 326323

Updated On:

Products

  • VMware Cloud Foundation
  • VMware vCenter Server
  • VMware SDDC Manager
  • VCF Operations/Automation (formerly VMware Aria Suite)
  • VMware Site Recovery Manager 8.x
  • VMware Live Recovery
  • VMware Cloud Director
  • VMware vCenter Server 8.0
  • VCF Operations

Issue/Introduction

  • Following a system reboot, power outage, datastore inaccessibility, or similar disruption, the Photon OS-based virtual appliance boots into emergency mode.

  • The appliance fails to start normally and displays an error similar to the following:

[FAILED] Failed to start File System Check on /dev/vg_root_0/lv_root0
[DEPEND] Dependency failed for /sysroot.
[DEPEND] Dependency failed for Initrd Root File System.
[DEPEND] Dependency failed for Reload Configuration from the Real Root.



  • The system is stuck in emergency mode with the following errors displayed:

    [FAILED] Failed to start File System 0-a2d7-4c0a-###-8c1d##83acf
    [DEPEND] Dependency failed for /var/log/vmware.
    [DEPEND] Dependency failed for Local File Systems.
    You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.

  • During emergency mode, a "Dependency failed" error may be displayed for a specific mount point, such as the /storage/log directory.

Cause

  • This issue occurs when the virtual appliance is left with filesystem inconsistencies after being forcefully halted by a storage failure, power failure, or other crash.
  • The virtual appliance may also enter emergency mode when it loses access to one or more of its disks, for example, when a hard disk has been removed from the vCenter Server virtual appliance.

Resolution

To resolve this issue, scan and correct the filesystem by running the fsck command automatically (preferred) or manually.
 
Note: Before proceeding, take a snapshot of the affected virtual appliance.
  1. Reboot the virtual appliance and, as soon as the GRUB boot menu appears, press 'e' to open the GNU GRUB edit menu.
  2. Locate the line that begins with the word linux.

Option 1:

At the end of the line, add a space followed by fsck.repair=yes, then press F10 to continue booting the appliance. This forces a filesystem check that automatically repairs disk issues. The appliance may silently reboot several times while the repairs are applied.
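For reference, the edited line might end like the following. The kernel path, version, and existing parameters are illustrative and will differ on your appliance; only the appended fsck.repair=yes is the change:

```
linux /boot/vmlinuz-4.19.217-1.ph3 root=/dev/mapper/vg_root_0-lv_root_0 ro loglevel=3 fsck.repair=yes
```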

Option 2:

At the end of the line, add a space followed by systemd.unit=emergency.target, then press F10 to continue booting the appliance.
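As with Option 1, the edited line might end like the following (everything before the appended systemd.unit=emergency.target is your appliance's existing line and is shown here only for illustration):

```
linux /boot/vmlinuz-4.19.217-1.ph3 root=/dev/mapper/vg_root_0-lv_root_0 ro loglevel=3 systemd.unit=emergency.target
```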

1. Find the affected filesystem:

systemctl status systemd-fsck-root
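If the status output is unclear, the failing device can also be read from the last field of the [FAILED] console message. A minimal sketch, using the sample line from the symptoms above:

```shell
# Extract the device path from a systemd-fsck failure message.
# The sample line is the one shown in the symptoms above.
line='[FAILED] Failed to start File System Check on /dev/vg_root_0/lv_root0'
dev="${line##* }"   # keep only the last whitespace-separated field
echo "$dev"         # prints /dev/vg_root_0/lv_root0
```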

2. Run the fsck command against the mount point that has the issue:

fsck -y /dev/<mount>

Note: The -y option automatically answers "yes" to all repair prompts. Replace <mount> with the device backing the affected mount point, for example:

fsck -y /dev/mapper/vg_root_0-lv_root_0
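Appliances such as vCenter Server often carry several logical volumes in the same volume group, and more than one may need a check. The following is a hedged sketch that iterates over every vg_root_0 volume; the device glob assumes the default Photon OS layout, and the script only previews the commands unless DRY_RUN=0 is set:

```shell
#!/bin/sh
# Sketch: check every logical volume in the vg_root_0 volume group.
# The /dev/mapper/vg_root_0-* glob is an assumption based on the default
# Photon OS appliance layout; adjust it for your volume group.
# Commands are previewed by default; set DRY_RUN=0 to actually run them.
DRY_RUN="${DRY_RUN:-1}"
for lv in /dev/mapper/vg_root_0-*; do
    if [ "$DRY_RUN" = "0" ]; then
        fsck -y "$lv"
    else
        echo "would run: fsck -y $lv"
    fi
done
```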

Alternatively, run a filesystem check and repair manually with e2fsck (the partition shown is only an example):

e2fsck -y /dev/vg_root_0/lv_root_0
reboot

3. Power off the virtual appliance:

shutdown -h now

4. Power on the virtual appliance.