PSODs and iLO driver performance problems on HPE servers caused by interrupt issues

Article ID: 414242

Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

  • HPE servers with AMD CPUs running an HPE-customized ESXi image might experience PSODs due to an iLO interrupt storm.
  • The iLO driver may become slow or hang due to lost iLO interrupts.
  • Shared level-triggered (INTx) interrupts from the iLO device are sometimes lost (dropped) or occasionally become stuck and storm.
  • The issue is not observed when a standard ESXi image is used.
  • Sample snippets from the PSOD:

    VMware ESXi 7.0.3 [Releasebuild-23307199 x86_64]
    NMI IPI: Panic requested by another PCPU. RIPOFF(base):RBP:CS [0xdfa70(0x420012000000):0x0:0xf48] (Src 0x1, CPU89)
    cr0=0x80010031 cr2=0x9cc7ca0960 cr3=0x20cdde000 cr4=0x140768
    FMS=17/31/0 uCode=0x830107a LBR: 0x9ee7c9dd79 -> 0x9ee7c9ddb2
    *PCPU89:37036812/ssacli
    PCPU  0: SSSSSSSSSSSSSSSVIISSSIUUISSVSVISSSSSUVUIVSVIVISISSSISUUIVIVIIVIV
    PCPU 64: SUSISUUIIIIIIIISUSIUSISSIUSSSISIVSSVVISVIVVIIVVISSSSSSISSIISIISS
    Code start: 0x420012000000 VMK uptime: DD:HH:MM:SS.NNN
    Saved backtrace from: pcpu 89 Heartbeat NMI
    0x45396649b660:[0x4200120dfa6f]IntrCookie_DoInterrupt@vmkernel#nover+0x3c4 stack: 0x430184e02a00
    0x45396649b710:[0x4200120dfd11]IntrCookie_VmkernelInterrupt@vmkernel#nover+0x3a stack: 0x31
    0x45396649b730:[0x420012155aac]IDT_IntrHandler@vmkernel#nover+0x9d stack: 0x0
    0x45396649b750:[0x42001214e067]gate_entry@vmkernel#nover+0x68 stack: 0x0
    0x45396649b818:[0x420012112245]Timer_GetCycles@vmkernel#nover+0x2 stack: 0x13
    0x45396649b820:[0x4200120c0321]BH_DrainAndDisableInterrupts@vmkernel#nover+0x8a stack: 0x0
    0x45396649b8a0:[0x4200120dfd8a]IntrCookie_VmkernelInterrupt@vmkernel#nover+0xb3 stack: 0x31
    0x45396649b8c0:[0x420012155aac]IDT_IntrHandler@vmkernel#nover+0x9d stack: 0x0
    0x45396649b8e0:[0x42001214e067]gate_entry@vmkernel#nover+0x68 stack: 0x0
    0x45396649b9a8:[0x4200120847a5]Power_ArchPerformWait@vmkernel#nover+0x106 stack: 0x420056400980
    0x45396649b9b0:[0x42001208487e]Power_ArchSetCState@vmkernel#nover+0x8f stack: 0x0
    0x45396649ba00:[0x4200123aef3c]CpuSchedIdleLoopInt@vmkernel#nover+0x275 stack: 0x420056400100
    0x45396649ba70:[0x4200123b3002]CpuSchedDispatch@vmkernel#nover+0x1aff stack: 0x420056400140
    0x45396649bcb0:[0x4200123b3d57]CpuSchedWait@vmkernel#nover+0x2f4 stack: 0x3
    0x45396649be30:[0x4200123c8821]EventQ_WaitShared@vmkernel#nover+0x52 stack: 0x9ea789a3e4
    0x45396649be50:[0x42001250374f]UserThread_QueueWait@vmkernel#nover+0x34 stack: 0x1
    0x45396649be80:[0x420012513ac8]LinuxThread_Futex@vmkernel#nover+0x1dd stack: 0x0
    0x45396649bf10:[0x4200124b4863]User_LinuxSyscallHandler@vmkernel#nover+0x1a4 stack: 0x0
    0x45396649bf40:[0x42001214e067]gate_entry@vmkernel#nover+0x68 stack: 0x0
    base fs=0x0 gs=0x420056400000 Kgs=0x0
    1 other PCPU is in panic.
    YYYY-MM-DDTHH:MM:SS.NNNZ cpu89:37036812)NMI: 712: NMI IPI: RIPOFF(base):RBP:CS [0x112246(0x420012000000):0x420056400000:0xf48] (Src 0x1, CPU89)
    YYYY-MM-DDTHH:MM:SS.NNNZ cpu89:37036812)NMI: 712: NMI IPI: RIPOFF(base):RBP:CS [0x10ab63e(0x420012000000):0x430184e62cb0:0xf48] (Src 0x1, CPU89)
    YYYY-MM-DDTHH:MM:SS.NNNZ cpu99:2097251)NMI: 712: NMI IPI: RIPOFF(base):RBP:CS [0x112246(0x420012000000):0x420058c00000:0xf48] (Src 0x1, CPU99)
    YYYY-MM-DDTHH:MM:SS.NNNZ cpu99:2097251)NMI: 712: NMI IPI: RIPOFF(base):RBP:CS [0x1089f7(0x420012000000):0x1:0xf48] (Src 0x1, CPU99)
    Coredump to file: /vmfs/volumes/######-######-######/vmkdump/####-######-#####.dump
    Finalized dump header (15/15) FileDump: Successful.
    No port for remote debugger.

Environment

VMware vSphere ESXi 7.x
VMware vSphere ESXi 8.x
VMware vSphere ESXi 9.x

Cause

In ESXi, level-triggered shared interrupts require all participating drivers to implement proper acknowledge (ack) callbacks. When an interrupt occurs, the system invokes all registered ack callbacks, issues an End-of-Interrupt (EOI), sets the interrupt flag, and then calls the handlers whose ack callbacks returned VMK_OK.
If any driver fails to correctly acknowledge its device, the level-triggered line can remain asserted and immediately re-interrupt, leading to repeated or persistent interrupt handling.
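
The following minimal C sketch (hypothetical names and types, not the VMkernel API or actual driver code) illustrates this dispatch order and why an ack that fails to quiesce its device leaves the shared line asserted, causing an immediate re-interrupt after EOI:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        const char *name;
        bool asserting;   /* device holds the shared INTx line high until acked */
        bool broken_ack;  /* simulates a driver that never quiesces its device  */
    } device;

    typedef struct {
        device *dev;
        bool    claimed;  /* result of the ack callback for this interrupt      */
    } shared_slot;

    /* Ack callback: claim the interrupt and (normally) deassert the device line. */
    static bool ack(device *d)
    {
        if (!d->asserting)
            return false;            /* not this device's interrupt */
        if (!d->broken_ack)
            d->asserting = false;    /* correctly quiesced          */
        return true;
    }

    static void handler(device *d)
    {
        printf("  handler ran for %s\n", d->name);
    }

    /* Level-triggered: the shared line stays high while any sharer asserts it. */
    static bool line_asserted(shared_slot *s, int n)
    {
        for (int i = 0; i < n; i++)
            if (s[i].dev->asserting)
                return true;
        return false;
    }

    int main(void)
    {
        device nic = { "nic", true, false };
        device ilo = { "ilo", true, true };   /* never deasserts: causes a storm */
        shared_slot slots[] = { { &nic }, { &ilo } };
        const int n = 2;

        /* The line re-interrupts as long as any sharer keeps it asserted;
         * the loop is capped so the simulated storm terminates. */
        for (int round = 1; line_asserted(slots, n) && round <= 3; round++) {
            printf("interrupt, round %d\n", round);
            for (int i = 0; i < n; i++)                /* 1) run all ack callbacks  */
                slots[i].claimed = ack(slots[i].dev);
            printf("  EOI issued\n");                  /* 2) EOI before handlers    */
            for (int i = 0; i < n; i++)                /* 3) handlers for OK acks   */
                if (slots[i].claimed)
                    handler(slots[i].dev);
        }
        if (line_asserted(slots, n))
            printf("line still asserted: interrupt storm\n");
        return 0;
    }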

In addition, current versions of ESXi mask level-triggered IOAPIC pin interrupts (such as PCI INTx interrupts) at the IOAPIC while migrating the interrupt destination from one logical CPU to another. Because the interrupt is level-triggered, ESXi expects that even if a new interrupt arrives while the pin is masked, the IOAPIC will still deliver it to the CPU once ESXi unmasks the pin. This expected behavior is not seen with iLO devices on HPE servers with AMD CPUs, resulting in occasional lost interrupts.
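
As a rough illustration, the simulation below (purely hypothetical, not ESXi code) models the mask / retarget / unmask sequence; the drops_while_masked flag stands in for the behavior observed with iLO on the affected AMD-based platforms, where the assertion does not survive the masked window:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool masked;
        bool line_asserted;      /* level-triggered: stays high until the device is acked */
        bool drops_while_masked; /* models the faulty behavior seen in the field           */
    } ioapic_pin;

    static void device_raises_interrupt(ioapic_pin *pin)
    {
        if (pin->masked && pin->drops_while_masked)
            return;              /* assertion lost: nothing pending after unmask */
        pin->line_asserted = true;
    }

    static void retarget(ioapic_pin *pin, int new_cpu)
    {
        pin->masked = true;                  /* ESXi masks the pin ...             */
        device_raises_interrupt(pin);        /* ... an interrupt arrives meanwhile */
        printf("retargeted pin to CPU %d\n", new_cpu);
        pin->masked = false;                 /* unmask: line should still be high  */
        printf("after unmask: %s\n",
               pin->line_asserted ? "interrupt delivered" : "interrupt LOST");
    }

    int main(void)
    {
        ioapic_pin healthy = { .drops_while_masked = false };
        ioapic_pin faulty  = { .drops_while_masked = true  };

        retarget(&healthy, 12);  /* expected: interrupt delivered */
        retarget(&faulty,  12);  /* observed: interrupt LOST      */
        return 0;
    }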

Resolution

Driver-side fix:
The iLO driver is being updated to use an MSI interrupt instead of an INTx interrupt; because MSI is edge-triggered and not shared, this removes the complex shared level-trigger masking/retargeting problems entirely. The issue is resolved in iLO native driver 10.9.1.
For details, see the HPE advisory: HPE ProLiant DL345 / DL365 / DL385 Gen11 Platforms Running VMware ESXi 8.0 U2 / U3 May Experience the Purple Diagnostic Screen "PSOD error @BlueScreen: NMI IPI: Panic requested by another PCPU. RIPOFF(base)"
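
For illustration only, the sketch below outlines the shape of such a change; every name in it is hypothetical and does not correspond to the VMkernel DDK or the actual iLO driver source. It simply shows a driver preferring an MSI vector and falling back to legacy INTx only if MSI is unavailable:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { INTR_MSI, INTR_INTX } intr_type;

    typedef struct {
        intr_type type;
        int       vector;
    } intr_cookie;

    /* Hypothetical platform helpers: placeholders for whatever the driver SDK provides. */
    static bool plat_alloc_msi(intr_cookie *c)  { c->type = INTR_MSI;  c->vector = 42; return true; }
    static bool plat_alloc_intx(intr_cookie *c) { c->type = INTR_INTX; c->vector = 11; return true; }

    /* Prefer MSI: edge-triggered and per-device, so there is no shared ack/EOI
     * ordering to get wrong and no level-triggered pin to mask while retargeting. */
    static bool ilo_setup_interrupt(intr_cookie *c)
    {
        if (plat_alloc_msi(c)) {
            printf("using MSI vector %d\n", c->vector);
            return true;
        }
        /* Legacy fallback: shared, level-triggered INTx (the problematic path). */
        if (plat_alloc_intx(c)) {
            printf("falling back to INTx line %d\n", c->vector);
            return true;
        }
        return false;
    }

    int main(void)
    {
        intr_cookie c;
        return ilo_setup_interrupt(&c) ? 0 : 1;
    }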


ESXi change:
The ESXi kernel will be updated in a future release so that it no longer masks the INTx interrupt while moving it, thereby increasing the robustness of ESXi interrupt handling.