ESXi PSOD After Upgrading the ESXi Host from 7.0U3m to 7.0U3t

Article ID: 400208


Products

VMware vSAN

Issue/Introduction

Symptoms:

  • After upgrading an ESXi host from version 7.0U3m to 7.0U3t, the host experienced a Purple Screen of Death (PSOD).

  • PSOD backtrace:

    2025-05-10T08:44:13.054Z cpu65:2099340)World: 3072: PRDA 0x420050400000 ss 0x0 ds 0xf50 es 0xf50 fs 0x0 gs 0x0
    2025-05-10T08:44:13.054Z cpu65:2099340)World: 3074: TR 0xf68 GDT 0xfffffffffca02000 (0xffff) IDT 0xfffffffffc408000 (0xffff)
    2025-05-10T08:44:13.054Z cpu65:2099340)World: 3075: CR0 0x80050033 CR3 0x8ac90d3000 CR4 0x156660
    2025-05-10T08:44:13.087Z cpu65:2099340)Backtrace for current CPU #65, worldID=2099340, fp=0x453a2aa1bc00
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bbd0:[0x42000b8ff107]PanicvPanicInt@vmkernel#nover+0x327 stack: 0x453a2aa1bcc8, 0x0, 0x42000b8ff107, 0x453a2aa1bc00, 0x453a2aa1bbd0
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bca0:[0x42000b8ff96e]Panic_vPanic@vmkernel#nover+0x23 stack: 0x4501c91dd928, 0x42000b916651, 0x453a2aa1be08, 0x420000000010, 0x453a2aa1bd20
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bcc0:[0x42000b916650]vmk_PanicWithModuleID@vmkernel#nover+0x41 stack: 0x453a2aa1bd20, 0x453a2aa1bce0, 0x12d2bb3c37, 0x4502cede36d0, 0x42000d1c1758
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bd20:[0x42000d1a57ad]SSDLOGFreeLogInt@LSOMCommon#1+0x172 stack: 0x4535b12b09468b52, 0x42000d1a9a8d, 0x1581e4200b, 0x42000d2225d2, 0x1
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bd40:[0x42000d1a9a8c]SSDLOG_FreeLogEntry@LSOMCommon#1+0x9 stack: 0x1, 0x453a2aa1bde0, 0x4501c003afa8, 0x10, 0x45bffefd7560
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bd50:[0x42000d2225d1][email protected]#0.0.0.1+0x2a stack: 0x4501c003afa8, 0x10, 0x45bffefd7560, 0x42000d238965, 0xffff
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bd80:[0x42000d238964][email protected]#0.0.0.1+0x5d9 stack: 0x3fcf6401000, 0x420000001000, 0x4501cd30a818, 0x45bffefc4d90, 0x45bffefc4db8
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bee0:[0x42000d13ce82][email protected]#0.0.0.1+0x637 stack: 0xa0c94c7866a, 0x8, 0x0, 0xa0c94c7866a, 0x0
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bf90:[0x42000b91e288]vmkWorldFunc@vmkernel#nover+0x49 stack: 0x42000b91e284, 0x0, 0x453a2aa1f000, 0x453a2aa1f000, 0x453a0651f140
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1bfe0:[0x42000bbb4d55]CpuSched_StartWorld@vmkernel#nover+0x86 stack: 0x0, 0x42000b8c4de0, 0x0, 0x0, 0x0
    2025-05-10T08:44:13.087Z cpu65:2099340)0x453a2aa1c000:[0x42000b8c4ddf]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0, 0x0, 0x0, 0x0, 0x0
    2025-05-10T08:44:13.112Z cpu65:2099340)VMware ESXi 7.0.3 [Releasebuild-24585291 x86_64]
    Failed at bora/modules/vmkernel/lsomcommon/ssdlog/ssdopslog.c:724 -- NOT REACHED
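
If the host has already rebooted, the same panic signature can be confirmed after the fact. A minimal sketch, assuming default ESXi log and core dump locations and that the panic output was captured in the logs:

# Search the vmkernel log for the panic signature shown above
grep -iE "NOT REACHED|Backtrace for current CPU" /var/run/log/vmkernel.log

# List any vmkernel core dumps written at the time of the PSOD
ls -lh /var/core/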

Issue Validation:

The impacted node logged disk I/O errors prior to the PSOD:

2025-05-10T07:45:16.875Z cpu14:2097619)Vol3: 2128: Couldn't read volume header from naa.xxxxxxxxxxxxxxxx:1: I/O error
2025-05-10T07:45:16.875Z cpu14:2097619)ScsiDevice: 1524: Could not submit helper request to flush cache on last close of device naa.xxxxxxxxxxxxxxxx - issuing sync call to flush cache

Logs collected after the PSOD also show errors for the same disk:

vmkernel.log:2025-05-12T12:13:16.004Z cpu61:2943180)PLOG: PLOGProbeDevice:6873: Probed plog device <naa.xxxxxxxxxxxxxxxx:1> xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 0x4501c00876a8 exists.. continue with old entry
vmkernel.log:2025-05-12T12:28:28.693Z cpu32:2950011)HPP: HppPluginStateLogger:5930: 1: naa.xxxxxxxxxxxxxxxx (flags=0x00000001, openCount=2):
vmkernel.log:2025-05-12T12:28:30.052Z cpu22:2950011)LSOMCommon: LSOMGetWCEnableSATA:1851: Failure Failure while retrieving SATA drive cache state for naa.xxxxxxxxxxxxxxxx:2
vmkernel.log:2025-05-12T12:28:30.053Z cpu22:2950011)WARNING: PLOG: PLOGVsi_DeviceWCEGet:1805: Get write cache settings on device naa.xxxxxxxxxxxxxxxx failed
vmkernel.log:2025-05-12T12:28:34.856Z cpu34:2950043)HPP: HppPluginStateLogger:5930: 1: naa.xxxxxxxxxxxxxxxx (flags=0x00000001, openCount=2):
vmkernel.log:2025-05-12T12:28:36.224Z cpu16:2950043)LSOMCommon: LSOMGetWCEnableSATA:1851: Failure Failure while retrieving SATA drive cache state for naa.xxxxxxxxxxxxxxxx:2
vmkernel.log:2025-05-12T12:28:36.224Z cpu16:2950043)WARNING: PLOG: PLOGVsi_DeviceWCEGet:1805: Get write cache settings on device naa.xxxxxxxxxxxxxxxx failed
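
To confirm these signatures on an affected host, the vmkernel logs can be searched directly. A minimal sketch (device IDs are placeholders; log paths are ESXi defaults):

# Scan the current vmkernel log for I/O and write-cache errors
grep -iE "I/O error|flush cache|DeviceWCEGet|LSOMGetWCEnableSATA" /var/run/log/vmkernel.log

# Rotated logs are compressed; include them as well
zcat /var/run/log/vmkernel.*.gz | grep -iE "I/O error|DeviceWCEGet"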

Although the disk reports as Active in the vSAN disk configuration, the repeated I/O failures and cache-state retrieval errors indicate degraded health:

Device: naa.5xxxxxxxxxxxxxxx
Is SSD: true
VSAN Disk Group UUID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Deduplication: true
Compression: true
Is Capacity Tier: true
On-disk format version: 15
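
The disk metadata above is consistent with the output of esxcli vsan storage list. To check both the vSAN membership and the storage-layer state of the suspect device (the device ID is a placeholder):

# Show vSAN disk-group membership and metadata for the device
esxcli vsan storage list | grep -B 2 -A 12 "naa.5xxxxxxxxxxxxxxx"

# Cross-check the device state at the storage layer (a healthy device reports "Status: on")
esxcli storage core device list -d naa.5xxxxxxxxxxxxxxx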

Environment

  • vSAN 7.x
  • vSAN 8.x

Cause

The PSOD was triggered by a double-free error: the host attempted to release an SSD log entry that had already been freed, tripping the NOT REACHED assertion in ssdopslog.c shown in the backtrace above. The failure is linked to the degraded capacity-tier disk in the vSAN disk group.

Resolution

The affected disk should be replaced proactively to avoid further host instability.

Action Steps:

  1. Place the affected ESXi host into Maintenance Mode with Ensure Accessibility.
  2. Remove the disk from the vSAN disk group before hardware replacement.
  3. Replace the disk at the hardware level.
  4. Rebuild the disk group or re-add a healthy replacement disk (see the command sketch after these steps).
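
The steps above map onto esxcli as follows. This is a sketch for a disk group where individual disk removal is supported; device IDs are placeholders (see the Note below for Deduplication and Compression):

# Step 1: enter maintenance mode, evacuating with "Ensure Accessibility"
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# Step 2: remove the failed capacity-tier disk from its disk group
esxcli vsan storage remove --disk naa.5xxxxxxxxxxxxxxx

# Step 4: after the hardware swap, re-add the replacement disk
# (--ssd names the disk group's cache-tier device)
esxcli vsan storage add --ssd naa.<cache-device> --disks naa.<new-capacity-device>

# Exit maintenance mode once the disk group reports healthy
esxcli system maintenanceMode set --enable false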

Refer to: How to remove a disk from a vSAN disk group/host

Note: With Deduplication and Compression enabled, removing an individual disk is not supported; the entire disk group must be removed.

Refer to: Adding or Removing Disks with Deduplication and Compression Enabled
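
For such disk groups, removing the cache-tier device removes the entire disk group it fronts. A minimal sketch (device IDs are placeholders):

# Removing the cache-tier SSD removes the whole disk group
esxcli vsan storage remove --ssd naa.<cache-device>

# Recreate the disk group once the replacement hardware is in place
esxcli vsan storage add --ssd naa.<cache-device> --disks naa.<capacity-device>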