CNS Volumes failing to hot-add to VMs with error "Cannot open the disk or one of the snapshot disks it depends on. "



Article ID: 410042


Updated On:

Products

VMware vCenter Server

Issue/Introduction

Container (CNS) volumes fail to attach to Kubernetes node VMs due to a disk backing error. This is caused by the filtLib component in the VM process running out of memory.

The following signatures are also seen in the log files listed below.

vsanvcmgmtd.log
2025-01-01T00:00:01.153Z error vsanvcmgmtd[97475] [vSAN@6876 sub=Workflow opId=557d4438] Workflow current action has fault (vim.fault.CnsFault) {
-->    faultCause = (vim.fault.GenericVmConfigFault) {
-->       faultCause = (vmodl.MethodFault) null,
-->       faultMessage = (vmodl.LocalizableMessage) [
-->          (vmodl.LocalizableMessage) {
-->             key = "msg.disk.hotadd.Failed",
-->             arg = (vmodl.KeyAnyValue) [
-->                (vmodl.KeyAnyValue) {
-->                   key = "1",
-->                   value = "scsi1:3"
-->                }
-->             ],
-->             message = "Failed to add disk scsi1:3."
-->          },
-->          (vmodl.LocalizableMessage) {
-->             key = "msg.disk.hotadd.poweron.failed",
-->             arg = (vmodl.KeyAnyValue) [
-->                (vmodl.KeyAnyValue) {
-->                   key = "1",
-->                   value = "scsi1:3"
-->                }
-->             ],
-->             message = "Failed to power on scsi1:3. "
-->          },
-->          (vmodl.LocalizableMessage) {
-->             key = "msg.disk.noBackEnd",
-->             arg = (vmodl.KeyAnyValue) [
-->                (vmodl.KeyAnyValue) {
-->                   key = "1",
-->                   value = "/vmfs/volumes/XXX.vmdk"
-->                }
-->             ],
-->             message = "Cannot open the disk '/vmfs/volumes/XXX.vmdk' or one of the snapshot disks it depends on. "
-->          },
-->          (vmodl.LocalizableMessage) {
-->             key = "msg.iofilter.failure",
-->             arg = <unset>,
-->             message = "Operation failed"
-->          }
-->       ],
-->       reason = "Failed to add disk scsi1:3."
-->       msg = "Failed to add disk scsi1:3."
-->    },
-->    faultMessage = <unset>,
-->    reason = "VSLM task failed"
-->    msg = ""
--> }

vpxa.log
2025-01-01T00:00:00.786Z In(166) Vpxa[2101068]: [Originator@6876 sub=vpxLro opID=XXX] [VpxLRO] -- BEGIN task-27381 -- vm-8 -- vim.VirtualMachine.attachDisk -- xxxx
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->       (vmodl.LocalizableMessage) {
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->          key = "msg.disk.noBackEnd",
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->          arg = (vmodl.KeyAnyValue) [
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->             (vmodl.KeyAnyValue) {
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->                key = "1",
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->                value = "/vmfs/volumes/XXX.vmdk"
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->             }
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->          ],
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->          message = "Cannot open the disk '/vmfs/volumes/XXX.vmdk' or one of the snapshot disks it depends on. "
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->       },
...
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->    reason = "Failed to add disk 'scsi1:3'."
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: -->    msg = "Failed to add disk 'scsi1:3'."
2025-01-01T00:00:00.895Z Er(163) Vpxa[2100334]: --> }
 

vmware.log
## Attaching disk fails with admission check failed
## FiltLib: VMKPrivate_FiltModInitDiskInfo failed: "Admission check failed for memory resource" (195887233).
2025-01-01T00:00:00.813Z In(05) vmx - PluginLdr_Load: Loaded plugin 'libvmiof-disk-spm.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-spm.so'
2025-01-01T00:00:00.813Z In(05) vmx - FiltLib: VMKPrivate_FiltModInitDiskInfo failed: "Admission check failed for memory resource" (195887233).
2025-01-01T00:00:00.813Z In(05) vmx - FiltLib: FiltLibAttachToFiltMod failed with error: "Operation failed" (1).
2025-01-01T00:00:00.813Z Er(02) vmx - DISKLIB-LIB   : DiskLibFiltLibInit: Failed to create filtLib context: Operation failed (334).
2025-01-01T00:00:00.813Z In(05) vmx - DISKLIB-LIB   : DiskLibOpenInt: Failed to create filtLib context: Operation failed (334).

## Attaching disk fails with filtmod out of memory
## FiltLib: VMKPrivate_FiltModInitDiskInfo failed: "Out of memory (ok to retry)" (195887125).
2025-01-01T00:00:00.524Z In(05) vmx - PluginLdr_Load: Loaded plugin 'libvmiof-disk-spm.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-spm.so'
2025-01-01T00:00:00.525Z In(05) vmx - FiltLib: VMKPrivate_FiltModInitDiskInfo failed: "Out of memory (ok to retry)" (195887125).
2025-01-01T00:00:00.525Z In(05) vmx - FiltLib: FiltLibAttachToFiltMod failed with error: "Operation failed" (1).
2025-01-01T00:00:00.525Z Er(02) vmx - DISKLIB-LIB   : DiskLibFiltLibInit: Failed to create filtLib context: Operation failed (334).
2025-01-01T00:00:00.525Z In(05) vmx - DISKLIB-LIB   : DiskLibOpenInt: Failed to create filtLib context: Operation failed (334).
 

Cause

The issue is caused by the VM process not having enough memory to instantiate the iofilter component. It only occurs when attaching VMDKs whose storage policy has IO shares enabled, and the affected VMDKs are generally large (500 GB or more).

# Attaching disk fails with admission check failed
FiltLib: VMKPrivate_FiltModInitDiskInfo failed: "Admission check failed for memory resource" (195887233).

# Attaching disk fails with filtmod out of memory
FiltLib: VMKPrivate_FiltModInitDiskInfo failed: "Out of memory (ok to retry)" (195887125).
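
To confirm whether a node VM is exposed to this condition, compare its memory reservation with its configured memory. The following is a minimal pyVmomi sketch of such a check, assuming direct vCenter access; the vCenter address and credentials are placeholders and not part of this article.

# Illustrative pyVmomi sketch: list VMs whose memory is not fully reserved.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        cfg = vm.config
        if cfg is None:
            continue
        reserved = cfg.memoryAllocation.reservation   # MB currently reserved
        configured = cfg.hardware.memoryMB            # MB configured on the VM
        if not cfg.memoryReservationLockedToMax and reserved < configured:
            print(f"{vm.name}: {reserved} MB of {configured} MB reserved")
finally:
    Disconnect(si)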

Resolution

There are two options to resolve the issue:

- Reserve all memory assigned to the affected VM (see the sketch after this list)
- Remove the IO shares from the storage policy
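
The first option can be scripted. Below is a minimal pyVmomi sketch that sets memoryReservationLockedToMax on a node VM, which is equivalent to enabling "Reserve all guest memory (All locked)" under the VM's memory settings in the vSphere Client; the vCenter address, credentials, and VM name are placeholders and not values from this article.

# Illustrative pyVmomi sketch: reserve all memory assigned to one node VM.
# The vCenter address, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next((v for v in view.view if v.name == "k8s-worker-01"), None)  # placeholder VM name
    if vm is None:
        raise SystemExit("VM not found")
    spec = vim.vm.ConfigSpec()
    spec.memoryReservationLockedToMax = True   # "Reserve all guest memory (All locked)"
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    print(f"Reserved all memory for {vm.name}")
finally:
    Disconnect(si)

The second option is performed by editing the VM storage policy used by the container volumes so that it no longer includes an IO shares rule.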

Additional Information

Broadcom engineering is aware of the issue and is working on a fix for a future release (vSphere 9.2 at the earliest).