ESXi 8.0.3 host disconnects from vCenter due to Dell EMC RecoverPoint iofilterd-emcsplitter crashes.


Article ID: 433552


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

  • VMware ESXi hosts intermittently show as "Disconnected" in vCenter Server despite remaining powered on and accessible via ping. Management operations (vMotion, configuration changes) become unresponsive.

  • Restarting the management agents on the ESXi host temporarily restores the connection.

Environment

VMware ESXi 8.0.3 (Build 24585383) with Dell EMC RecoverPoint Splitter (iofilterd-emcsplitter) installed.

 

Cause

The third-party storage IO filter, Dell EMC RecoverPoint Splitter (iofilterd-emcsplitter), crashes continuously and silently. The ESXi host management daemon (hostd) repeatedly attempts to recover the crashed plugin, entering an infinite reload loop. This loop deadlocks hostd worker threads, preventing hostd from answering API queries from the vCenter agent (vpxa). Consequently, the internal Envoy proxy hits its strict 45-second timeout and terminates the connection to vCenter.

Log analysis confirms the persistent crashing of the storage IO filter and the subsequent hostd lockup:

  • vmkernel.log shows continuous core dumps from the EMC splitter, revealing that the iofilterd-emcsplitter daemon has been highly unstable, repeatedly failing and generating memory core dumps since at least mid-February.

    2026-02-20T13:34:30.434Z In(182) vmkernel: cpu23:)UserDump: 3157: iofltd-emcsplit: Dumping cartel ##### (from world 139941674) to file /var/core/iofilterd-zdump.001 ...
    2026-03-03T13:04:30.329Z In(182) vmkernel: cpu48:)UserDump: 3157: iofltd-emcsplit: Dumping cartel ##### (from world 143316680) to file /var/core/iofilterd-zdump.000 ...
    2026-03-12T13:21:03.625Z In(182) vmkernel: cpu33:)UserDump: 3157: iofltd-emcsplit: Dumping cartel ##### (from world 147947490) to file /var/core/iofilterd-zdump.001 ...
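The core-dump pattern above can be checked quickly from the ESXi shell. The following is a diagnostic sketch, not an official procedure; the function name is illustrative and the default log path is an assumption (on ESXi, /var/log/vmkernel.log typically links to /var/run/log/vmkernel.log):

```shell
# Count EMC splitter core-dump events in a vmkernel log.
# Default path is an assumption; pass a saved log file to run offline
# against a support bundle.
count_splitter_dumps() {
  # Each crash leaves a "Dumping cartel ... to file /var/core/..." entry.
  grep -c 'iofltd-emcsplit: Dumping cartel' "${1:-/var/log/vmkernel.log}"
}
```

A healthy host returns 0; any non-zero count, or fresh iofilterd-zdump files under /var/core/, indicates the splitter is crashing.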

  • hostd.log shows hostd caught in a rapid loop reloading the plugin library. Because the IO filter sits directly in the ESXi storage I/O path, the host's primary management daemon (hostd) continuously attempted to recover the broken EMC plugin, reloading the library up to three times every 30 seconds for hours on end. Under normal, healthy conditions, hostd loads an IO filter plugin exactly once, when the service starts or the host boots. Each time hostd attempted to load the broken module, active worker threads hung waiting for a response from the dead daemon, effectively stalling the management stack.

    2026-03-16T15:40:52.077Z In(166) Hostd[####]: [Originator@6876 sub=Libs opID=###] PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-emcsplitter.so'
    2026-03-16T15:40:54.272Z In(166) Hostd[####]: [Originator@6876 sub=Libs opID=###] PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-emcsplitter.so'
    2026-03-16T15:40:54.317Z In(166) Hostd[####]: [Originator@6876 sub=Libs opID=###] PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-emcsplitter.so'
    .
    .
    2026-03-17T07:01:03.684Z In(166) Hostd[####]: [Originator@6876 sub=Libs opID=###] PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-emcsplitter.so'
    2026-03-17T07:01:05.810Z In(166) Hostd[####]: [Originator@6876 sub=Libs opID=###] PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-emcsplitter.so'
    2026-03-17T07:01:05.856Z In(166) Hostd[####]: [Originator@6876 sub=Libs opID=###] PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-emcsplitter.so'
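The reload loop is easy to spot by tallying plugin loads per minute. Below is a hedged sketch (function name illustrative, default hostd.log path assumed); a healthy host shows a single load at service startup, while the failure mode above shows several loads per minute for hours:

```shell
# Tally hostd reloads of the EMC splitter plugin per minute.
# Groups PluginLdr_Load events by the timestamp's minute prefix.
splitter_loads_per_minute() {
  grep "PluginLdr_Load: Loaded plugin 'libvmiof-disk-emcsplitter.so'" \
      "${1:-/var/log/hostd.log}" \
    | awk '{ print substr($1, 1, 16) }' \
    | sort | uniq -c
}
```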

  • hostd.log confirms the resulting 45-second timeout breach. With hostd locked up by the storage filter, it could no longer respond to health checks or API queries from the vCenter agent (vpxa). As described in KB 401817, the internal ESXi web services enforce a strict 45-second timeout. Once this limit was repeatedly breached, the host's Envoy proxy terminated the connection to vCenter, and the host appeared "Disconnected" in the vSphere Client.

    2026-03-17T03:43:52.133Z In(166) Hostd[######]: ... HTTP Connection has timed out while waiting for further requests; ... N7Vmacore16TimeoutExceptionE(Operation timed out... duration: 00:00:45.588947)

  • The Envoy access log shows the connection drop.

    2026-03-17T03:43:54.631Z In(166) envoy-access[####]: GET /hgw/host-####/vpxa/service 503 upstream_reset_before_response_started{connection_termination}

Resolution

Permanent Resolution:

  1. If Dell EMC RecoverPoint is required: Contact Dell Support to obtain a VIB version compatible with ESXi 8.0 U3.

  2. If Dell EMC RecoverPoint is not in use:

    • Place the ESXi host into Maintenance Mode.

    • Remove the VIB using the following command: esxcli software vib remove -n emcsplitter

    • Reboot the ESXi host to ensure the management stack is cleared.
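The removal steps above can be sketched as the following ESXi shell sequence. This is a sketch under assumptions, not a definitive procedure: the VIB name emcsplitter comes from the command in this article, and you should confirm the exact name with esxcli software vib list before removing anything. Note that esxcli system shutdown reboot requires the host to already be in Maintenance Mode.

```shell
# Sketch: remove the RecoverPoint splitter VIB from an ESXi 8.0.3 host.
# Run only after VMs have been evacuated or powered off.
esxcli system maintenanceMode set --enable true

# Confirm the exact installed VIB name (assumed here: emcsplitter).
esxcli software vib list | grep -i emcsplitter

# Remove the splitter VIB, then reboot to clear the management stack.
esxcli software vib remove -n emcsplitter
esxcli system shutdown reboot --reason "Removed emcsplitter VIB"
```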

Workaround (Short-term mitigation): To prevent immediate disconnections while coordinating the VIB update, increase the vCenter agent (vpxa) timeout:

  1. Log in to the ESXi host via SSH.

  2. Modify the read_timeout_ms parameter in the vpxa configuration (default 45000ms) to 120000ms. Note: Refer to the internal documentation or Broadcom KB 401817 for specific config.xml editing steps.

  3. Restart the management agents: services.sh restart