"[FAILED] Failed to start File System Check on /dev/vg_root_0/lv_root0" error on Photon OS based virtual appliances

Article ID: 326323


Products

  • VMware Cloud Foundation
  • VMware vCenter Server
  • VMware SDDC Manager
  • VMware Aria Suite
  • VMware Site Recovery Manager 8.x
  • VMware Live Recovery
  • VMware Cloud Director

Issue/Introduction

  • After a reboot, power outage, datastore inaccessibility, or a similar disruption, the Photon OS based virtual appliance enters emergency mode.
  • The appliance fails to start, and an error similar to the following is displayed:

[FAILED] Failed to start File System Check on /dev/vg_root_0/lv_root0
[DEPEND] Dependency failed for /sysroot.
[DEPEND] Dependency failed for Initrd Root File System.
[DEPEND] Dependency failed for Reload Configuration from the Real Root.

 

Note: The preceding log excerpts are only examples. Dates, times, and environment-specific values will vary.

Environment

  • VMware vCenter Server 6.x
  • VMware vCenter Server 7.x
  • VMware vCenter Server 8.x
  • VMware SDDC Manager 4.x
  • VMware SDDC Manager 5.x
  • VMware Aria Suite Lifecycle 8.x
  • VMware Aria Automation 8.x
  • VMware Aria Automation Orchestrator 8.x
  • VMware Identity Manager 3.3.7
  • VMware Live Site Recovery 9.x
  • VMware Site Recovery 8.8
  • VMware vSphere Replication 8.x
  • VMware vSphere Replication 9.x
  • VMware Cloud Director 10.x
  • VMware Aria Operations for Logs 8.x

Cause

  • This issue occurs when the virtual appliance is left with filesystem inconsistencies after being forcefully halted by a storage failure, power failure, or other crash.
  • vCenter Server may also enter emergency mode when it loses access to one or more of its disks, for example when a hard disk is removed from the vCenter Server virtual appliance.

Resolution

To resolve this issue, scan and correct the filesystem by running the fsck command automatically (preferred) or manually.
 
Note: Before proceeding, take a snapshot of the affected virtual appliance.
  1. Reboot the virtual appliance and, as soon as the GNU GRUB menu appears, press 'e' to open the GRUB edit screen.
  2. Locate the line that begins with the word linux.

Option 1:

At the end of the line, add fsck.repair=yes, then press F10 to continue booting the appliance. This forces a filesystem check that automatically repairs disk issues. The appliance may silently reboot several times while repairs are applied.
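For reference, the edited kernel line might look similar to the following sketch. The kernel path and existing boot parameters are illustrative and will differ in your environment; only the trailing fsck.repair=yes is added:

```
linux /boot/vmlinuz-<version> root=/dev/mapper/vg_root_0-lv_root_0 ro ... fsck.repair=yes
```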

Option 2:

At the end of the line, add systemd.unit=emergency.target then press F10 to continue booting the appliance.

1. Find the affected filesystem:

     # systemctl status systemd-fsck-root

2. Run the fsck command against the device that has the issue:

     # fsck -y /dev/<device>

Note: The -y switch answers yes to all repair prompts, fixing inodes automatically. Replace <device> with the device experiencing the issue, for example:

     # fsck -y /dev/mapper/vg_root_0-lv_root_0

3. Power OFF the virtual appliance:

    # shutdown -h now

4. Power ON the virtual appliance.
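If you script the fsck step above, it helps to decode fsck's exit status, which per fsck(8) is a bit mask rather than a plain return code. A minimal sketch (the helper function name is illustrative, not part of the appliance):

```shell
#!/bin/sh
# Decode fsck's exit status bit mask (see fsck(8)):
#   0 - no errors, 1 - errors corrected, 2 - reboot required,
#   4 - errors left uncorrected, 8 - operational error.
interpret_fsck() {
    code=$1
    msgs=""
    [ "$code" -eq 0 ] && { echo "clean"; return; }
    [ $((code & 1)) -ne 0 ] && msgs="${msgs}errors corrected; "
    [ $((code & 2)) -ne 0 ] && msgs="${msgs}reboot required; "
    [ $((code & 4)) -ne 0 ] && msgs="${msgs}errors left uncorrected; "
    [ $((code & 8)) -ne 0 ] && msgs="${msgs}operational error; "
    echo "${msgs%; }"
}

# Example usage (run from the emergency shell):
#   fsck -y /dev/mapper/vg_root_0-lv_root_0; interpret_fsck $?
```

An exit status of 1 ("errors corrected") is the expected outcome of a successful repair; only 4 and above indicate the filesystem still needs attention.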

Additional Information

The steps outlined in Resolution – Option 1 are also applicable when vCenter Server enters emergency mode and displays the following error:

 

Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

Give root password for maintenance
(or press Control-D to continue):