Virtual machines running on VMFS datastores backed by Pure Storage FlashArray over iSCSI experience high latency. iSCSI connections also drop and reconnect repeatedly.
The symptoms may appear isolated to a single host, which can lead to confusion about whether the problem originates on the ESXi host side.
To match this issue, all of the following signature messages must be present.
The logs below are located in /var/run/log on the ESXi host:
vobd.log shows iSCSI connection messages:
vobd[2097814] [iscsiCorrelator] 3986907780153us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba64:C1:T0
vobd[2097814] [iscsiCorrelator] 3986911055021us: [vob.iscsi.connection.stopped] iScsi connection 0 stopped for vmhba64:C1:T0
vmkernel.log shows these iSCSI messages:
vmkwarning: cpu53:2098361)WARNING: iscsi_vmk: iscsivmk_ConnReceiveAtomic:474: vmhba64:CH:1 T:0 CN:0: Failed to receive data: Connection closed by peer
vmkernel: cpu53:2098361)iscsi_vmk: iscsivmk_ConnRxNotifyFailure:1223: vmhba64:CH:1 T:0 CN:0: Connection rx notifying failure: Failed to Receive. State=Online
syslog.log shows these iSCSI messages:
Er(27) iscsid[2098421] send_pdu failed rc-22
Wa(28) iscsid[2098421] Kernel reported iSCSI connection 3:0 error (1011) state (3)
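The three signatures above can be gathered in one pass. This is a minimal sketch, assuming the standard ESXi log location /var/run/log; the `scan_iscsi_signatures` helper name is our own, and a different directory can be passed as the first argument (useful when reviewing an extracted support bundle).

```shell
# Hypothetical helper: grep each log for its signature message.
# Assumes logs live in /var/run/log unless a directory is given.
scan_iscsi_signatures() {
    logdir="${1:-/var/run/log}"
    grep -H 'vob.iscsi.connection.stopped' "$logdir/vobd.log" 2>/dev/null
    grep -H 'iscsivmk_ConnReceiveAtomic.*Connection closed by peer' "$logdir/vmkernel.log" 2>/dev/null
    grep -H 'iSCSI connection .* error (1011)' "$logdir/syslog.log" 2>/dev/null
}
```

Run `scan_iscsi_signatures` on the affected host, or point it at the var/run/log directory of an extracted log bundle. The issue matches only when all three signatures are present.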
Environment:
vSphere (all versions)
Pure Storage FlashArray
This is a rare, iSCSI-specific, session-based problem on the storage array side, triggered by a particular set of storage I/O workload patterns. It is a known issue that Pure Storage has identified in the Purity software running on FlashArray systems.
Pure Storage has released a fix in Purity version 6.9.1, released on August 29th, 2025.
In addition, Pure Storage support has confirmed that Purity version 6.9.2 includes an advanced setting that helps mitigate this type of behavior.
Contact Pure Storage support for further details if needed.