How to resize the / partition of Aria Suite Lifecycle Manager 8.18


Article ID: 415268

Products

VCF Operations/Automation (formerly VMware Aria Suite)

Issue/Introduction

There are instances when the / partition of the appliance needs to be resized to provide additional free space.

Environment

Aria Suite Lifecycle 8.18

Resolution

Recommendations:

1. We recommend first exhausting all other options for freeing disk space; for example, this KB details the steps to find and remove unnecessary files to free disk space.

2. We recommend verifying other details of the environment according to this KB, and opening an SR with support referencing that KB.

3. If all other options are exhausted and you still want to proceed, first delete all snapshots of the appliance. As a precaution for the disk resize activity, create a clone of the appliance as a backup. Make sure to leave the option to power on the clone unchecked (it should already be unchecked by default) when the cloning process completes. Once the disk resize is done and you have confirmed the appliance is behaving as expected, you can take a new snapshot of the appliance and delete the cloned backup appliance.


Part 1: Check current appliance details:

  1. First, we take a look at the current size of the / partition:

    root@vcf-cm-asl [ ~ ]# df -B M
    Filesystem                                  1M-blocks   Used Available Use% Mounted on
    devtmpfs                                           4M     0M        4M   0% /dev
    tmpfs                                           2961M     1M     2961M   1% /dev/shm
    tmpfs                                           1185M     1M     1184M   1% /run
    /dev/mapper/system-system_0                     9989M  5825M     3653M  62% /
    tmpfs                                           5120M     1M     5120M   1% /tmp
    /dev/sda3                                        488M    50M      403M  11% /boot
    /dev/mapper/storage-storage_0                  10051M     3M     9534M   1% /storage
    /dev/sda2                                         10M     3M        8M  22% /boot/efi
    /dev/mapper/vg_alt_root-lv_alt_root            10038M     1M     9507M   1% /storage/alt_root
    /dev/mapper/vg_lvm_snapshot-lv_lvm_snapshot     8022M     1M     7594M   1% /storage/lvm_snapshot
    /dev/mapper/data-data_0                        90461M 60675M    25585M  71% /data
  2. We can also check the details of the logical volume backing the / filesystem:

    root@vcf-cm-asl [ ~ ]# lvdisplay /dev/mapper/system-system_0
      --- Logical volume ---
      LV Path                /dev/system/system_0
      LV Name                system_0
      VG Name                system
      LV UUID                yRBQta-EsDA-IFNW-irvk-wDKH-y7wU-qteDRu
      LV Write Access        read/write
      LV Creation host, time photon-installer, 2024-09-14 10:26:37 +0000
      LV Status              available
      # open                 1
      LV Size                <10.00 GiB
      Current LE             2559
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:1
  3. We can check the structure of the logical volumes:

    root@vcf-cm-asl [ ~ ]# lsblk
    NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    sda                                 8:0    0 10.6G  0 disk
    +-sda1                              8:1    0    4M  0 part
    +-sda2                              8:2    0   10M  0 part /boot/efi
    +-sda3                              8:3    0  512M  0 part /boot
    +-sda4                              8:4    0   10G  0 part
      +-system-system_0               253:1    0   10G  0 lvm  /
    sdb                                 8:16   0 90.1G  0 disk
    +-data-data_0                     253:4    0 90.1G  0 lvm  /data
    sdc                                 8:32   0 10.1G  0 disk
    +-storage-storage_0               253:3    0 10.1G  0 lvm  /storage
    sdd                                 8:48   0  8.1G  0 disk
    +-swap-swap_0                     253:0    0    8G  0 lvm  [SWAP]
    sde                                 8:64   0 10.1G  0 disk
    +-vg_alt_root-lv_alt_root         253:2    0 10.1G  0 lvm  /storage/alt_root
    sdf                                 8:80   0  8.1G  0 disk
    +-vg_lvm_snapshot-lv_lvm_snapshot 253:5    0  8.1G  0 lvm  /storage/lvm_snapshot
    sr0                                11:0    1 1024M  0 rom
  4. We want to collect the SCSI H:C:T:L details to more easily identify the disk in vCenter:

    root@vcf-cm-asl [ ~ ]# lsscsi
    [0:0:0:0]    cd/dvd  NECVMWar VMware SATA CD00 1.00  /dev/sr0
    [32:0:0:0]   disk    VMware   Virtual disk     2.0   /dev/sda
    [32:0:1:0]   disk    VMware   Virtual disk     2.0   /dev/sdb
    [32:0:2:0]   disk    VMware   Virtual disk     2.0   /dev/sdc
    [32:0:3:0]   disk    VMware   Virtual disk     2.0   /dev/sdd
    [32:0:4:0]   disk    VMware   Virtual disk     2.0   /dev/sde
    [32:0:5:0]   disk    VMware   Virtual disk     2.0   /dev/sdf
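
As a quick cross-check before opening vCenter, the whole-disk device behind the root logical volume can be derived in the shell. This is a minimal sketch using the device names from the output above; on a live appliance the backing partition can be obtained with lsblk -no PKNAME /dev/mapper/system-system_0, and lsscsi can then be filtered for the resulting disk.

```shell
# "sda4" is the partition backing the root LV, as seen in the lsblk output above.
PART=sda4
DISK=/dev/${PART%%[0-9]*}   # strip the trailing partition number -> /dev/sda
echo "$DISK"
# With the disk name in hand, "lsscsi | grep $DISK" prints its H:C:T:L row.
```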

Part 2: Edit appliance details in vCenter

  1. Next, log in to vCenter and delete all snapshots of the Aria Suite Lifecycle appliance. This is the point at which we recommend creating a clone of the appliance as a backup, per the recommendation at the beginning of this article.

  2. Then, shut down the guest OS and edit the appliance settings to change the disk size, identifying the disk by the corresponding H:C:T:L of 32:0:0:0 (/dev/sda). The third field is the SCSI target number, which typically matches the unit number of the disk's virtual device node in vCenter, e.g. SCSI(0:0):

  3. We set the new size to 20 GB, save the settings, and power the appliance back on.
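
If you prefer a CLI over the vSphere Client, the same change can be made with govc. This is a hedged sketch rather than part of the official procedure: the vCenter URL, credentials, VM name (vcf-cm-asl), and disk label ("Hard disk 1", which maps to /dev/sda here) are assumptions to verify against your own environment.

```shell
# Hypothetical govc environment -- substitute your own vCenter details.
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='...'
export GOVC_INSECURE=1

# Gracefully shut down the guest, then wait for the VM to power off.
govc vm.power -s vcf-cm-asl

# Grow the first virtual disk to 20 GB, then power the appliance back on.
govc vm.disk.change -vm vcf-cm-asl -disk.label "Hard disk 1" -size 20G
govc vm.power -on vcf-cm-asl
```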


Part 3: Inform appliance of additional disk space:

  1. Once the appliance is back online, SSH to it and confirm the new space has been detected (sda now reports 20.6G, while /dev/sda4 and the / logical volume still show 10G):

    root@vcf-cm-asl [ ~ ]# lsblk
    NAME                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    sda                                 8:0    0 20.6G  0 disk
    +-sda1                              8:1    0    4M  0 part
    +-sda2                              8:2    0   10M  0 part /boot/efi
    +-sda3                              8:3    0  512M  0 part /boot
    +-sda4                              8:4    0   10G  0 part
      +-system-system_0               253:1    0   10G  0 lvm  /
    sdb                                 8:16   0 90.1G  0 disk
    +-data-data_0                     253:2    0 90.1G  0 lvm  /data
    sdc                                 8:32   0 10.1G  0 disk
    +-storage-storage_0               253:4    0 10.1G  0 lvm  /storage
    sdd                                 8:48   0  8.1G  0 disk
    +-swap-swap_0                     253:0    0    8G  0 lvm  [SWAP]
    sde                                 8:64   0 10.1G  0 disk
    +-vg_alt_root-lv_alt_root         253:3    0 10.1G  0 lvm  /storage/alt_root
    sdf                                 8:80   0  8.1G  0 disk
    +-vg_lvm_snapshot-lv_lvm_snapshot 253:5    0  8.1G  0 lvm  /storage/lvm_snapshot
    sr0                                11:0    1 1024M  0 rom
  2. We can also confirm this with fdisk:

    root@vcf-cm-asl [ ~ ]# fdisk -l /dev/sda
    GPT PMBR size mismatch (22151167 != 43123735) will be corrected by write.
    Disk /dev/sda: 20.56 GiB, 22079352832 bytes, 43123736 sectors
    Disk model: Virtual disk
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: B0A21919-B1EA-4DF9-A8F4-DF35F201E42B

    Device       Start      End  Sectors  Size Type
    /dev/sda1     2048    10239     8192    4M BIOS boot
    /dev/sda2    10240    30719    20480   10M EFI System
    /dev/sda3    30720  1079295  1048576  512M Linux filesystem
    /dev/sda4  1079296 22050815 20971520   10G Linux LVM
  3. We can now extend the partition with cfdisk:

    root@vcf-cm-asl [ ~ ]# cfdisk /dev/sda

  4. Scroll down to select /dev/sda4, then choose [Resize] to assign the “Free space” to /dev/sda4:

  5. The remainder of the available space should be detected automatically:

  6. Press Enter to confirm the new size:

  7. Finish by choosing [Write] to commit the new partition table, then [Quit]:

  8. We can confirm the changes with fdisk again:

    root@vcf-cm-asl [ ~ ]# fdisk -l /dev/sda
    Disk /dev/sda: 20.56 GiB, 22079352832 bytes, 43123736 sectors
    Disk model: Virtual disk
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: B0A21919-B1EA-4DF9-A8F4-DF35F201E42B

    Device       Start      End  Sectors  Size Type
    /dev/sda1     2048    10239     8192    4M BIOS boot
    /dev/sda2    10240    30719    20480   10M EFI System
    /dev/sda3    30720  1079295  1048576  512M Linux filesystem
    /dev/sda4  1079296 43123702 42044407   20G Linux LVM
  9. We then need to inform LVM that the physical volume has changed:

    root@vcf-cm-asl [ ~ ]# pvresize /dev/sda4
      Physical volume "/dev/sda4" changed
      1 physical volume(s) resized or updated / 0 physical volume(s) not resized
  10. We extend (-l) the logical volume to use 100% of the FREE logical extents (+100%FREE) available, while also resizing (-r) the filesystem in the logical volume /dev/system/system_0:

    root@vcf-cm-asl [ ~ ]# lvextend -l +100%FREE -r /dev/system/system_0
      Size of logical volume system/system_0 changed from <10.00 GiB (2559 extents) to <20.05 GiB (5132 extents).
      Logical volume system/system_0 successfully resized.
    resize2fs 1.46.5 (30-Dec-2021)
    Filesystem at /dev/mapper/system-system_0 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 2
    The filesystem on /dev/mapper/system-system_0 is now 5255168 (4k) blocks long.
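
The numbers reported above can be sanity-checked with shell arithmetic: the partition's sector count times the 512-byte sector size, the logical extent counts from lvextend (assuming LVM's default 4 MiB physical extent size), and the 4 KiB block count from resize2fs should all converge on roughly the same size.

```shell
# /dev/sda4 after the resize: 42044407 sectors of 512 bytes.
SDA4_BYTES=$((42044407 * 512))
echo $((SDA4_BYTES / 1024 / 1024))   # MiB -> about 20529, i.e. ~20 GiB

# lvextend reported 2559 -> 5132 extents; with 4 MiB extents (LVM default):
echo $((2559 * 4))                   # 10236 MiB, just under 10 GiB ("<10.00 GiB")
echo $((5132 * 4))                   # 20528 MiB, i.e. ~20.05 GiB

# resize2fs reported 5255168 blocks of 4 KiB:
echo $((5255168 * 4 / 1024))         # 20528 MiB again -- consistent
```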

Part 4: Verify new disk space and appliance behavior:

  1. We confirm the new size of the / partition:

    root@vcf-cm-asl [ ~ ]# df -B M
    Filesystem                                  1M-blocks   Used Available Use% Mounted on
    devtmpfs                                           4M     0M        4M   0% /dev
    tmpfs                                           2961M     1M     2961M   1% /dev/shm
    tmpfs                                           1185M     1M     1184M   1% /run
    /dev/mapper/system-system_0                    20114M  5778M    13413M  31% /
    tmpfs                                           5120M     9M     5112M   1% /tmp
    /dev/sda3                                        488M    50M      403M  11% /boot
    /dev/mapper/data-data_0                        90461M 60675M    25585M  71% /data
    /dev/mapper/storage-storage_0                  10051M     3M     9534M   1% /storage
    /dev/sda2                                         10M     3M        8M  22% /boot/efi
    /dev/mapper/vg_alt_root-lv_alt_root            10038M     1M     9507M   1% /storage/alt_root
    /dev/mapper/vg_lvm_snapshot-lv_lvm_snapshot     8022M     1M     7594M   1% /storage/lvm_snapshot
  2. Confirm the appliance is behaving as expected by logging in to the portal.

  3. Finally, you may take a non-memory snapshot of the appliance in its current working state, and delete the clone if one was made.

Additional Information

For example, if /dev/sda4 is not an LVM physical volume (you will know because there is no system-system_0 device in the lsblk output), then after the appliance reboots, simply run cfdisk and follow the resize steps above; then, in place of steps 9 and 10, run resize2fs /dev/sda4 to grow the filesystem directly.
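
The non-LVM fallback can be sketched as a short command sequence. This is a hedged outline, not a verified procedure: it assumes a hypothetical layout in which /dev/sda4 directly holds an ext filesystem mounted at /, and every device name should be double-checked against lsblk output before running anything.

```shell
# Non-LVM fallback (hypothetical layout: /dev/sda4 is the / filesystem itself).
lsblk /dev/sda        # confirm there is no system-system_0 LVM device
cfdisk /dev/sda       # resize /dev/sda4 into the free space, then Write and Quit
resize2fs /dev/sda4   # grow the ext filesystem; replaces steps 9 and 10 above
```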