ESX/ESXi 4.1 host fails with a purple diagnostic screen and the error: Spin count exceeded (VASpace) - possible deadlock with PCPU 2
Article ID: 341340
Products
VMware vSphere ESXi
Issue/Introduction
Symptoms:
- The ESX/ESXi 4.1 host fails with a purple diagnostic screen.
- On the purple diagnostic screen or in the system logs, you see entries similar to the following (a quick log check is sketched after this list):
@BlueScreen: Spin count exceeded (VASpace) - possible deadlock with PCPU 2
2:19:58:08.201 cpu4:428426)Code start: 0x418039000000 VMK uptime: 2:19:58:08.201
2:19:58:08.209 cpu4:428426)Saved backtrace from: pcpu 2 SpinLock spin out NMI
2:19:58:08.216 cpu4:428426)0x417f80017d38:[0x4180392acb36]Power_HaltPCPU@vmkernel:nover+0x27d stack: 0x417f80017de8
2:19:58:08.226 cpu4:428426)0x417f80017e48:[0x4180391cbe1e]CpuSchedIdleLoopInt@vmkernel:nover+0x985 stack: 0x417f80017e88
2:19:58:08.237 cpu4:428426)0x417f80017e58:[0x4180391d15ee]CpuSched_IdleLoop@vmkernel:nover+0x15 stack: 0x2
2:19:58:08.246 cpu4:428426)0x417f80017e88:[0x4180390364f7]Init_SlaveIdle@vmkernel:nover+0x11e stack: 0x0
2:19:58:08.256 cpu4:428426)0x417f80017fe8:[0x4180392b3408]SMPSlaveIdle@vmkernel:nover+0x45f stack: 0x0
2:19:58:08.274 cpu4:428426)FSbase:0x0 GSbase:0x418041000000 kernelGSbase:0xff8881a0
2:19:58:08.042 cpu2:4098)NMI: 2020: NMI IPI recvd. We Halt. eip(base):ebp:cs [0x2acb36(0x418039000000):0x417f80017d38:0x4010](Src0x4, CPU2)
2:19:58:08.293 cpu4:428426)Backtrace for current CPU #4, worldID=428426, ebp=0x417f84c57958
2:19:58:08.302 cpu4:428426)0x417f84c57958:[0x4180390577b5]PanicLogBacktrace@vmkernel:nover+0x18 stack: 0x2032205550435020, 0x4
2:19:58:08.314 cpu4:428426)0x417f84c57a98:[0x418039057a97]PanicvPanicInt@vmkernel:nover+0x24e stack: 0x3000000010, 0x417f84c57
2:19:58:08.325 cpu4:428426)0x417f84c57b78:[0x418039058089]Panic_WithBacktrace@vmkernel:nover+0xa8 stack: 0x417f84c57bc8, 0x200
2:19:58:08.336 cpu4:428426)0x417f84c57be8:[0x418039069ea4]SP_WaitLockIRQ@vmkernel:nover+0x24b stack: 0x417f84c57c68, 0x3908123
2:19:58:08.347 cpu4:428426)0x417f84c57c68:[0x418039080b39]VASpace_Delete@vmkernel:nover+0x60 stack: 0x417f84c57c88, 0x29db0980
2:19:58:08.359 cpu4:428426)0x417f84c57cd8:[0x4180392308ca]VisorFSUnmapMpns@vmkernel:nover+0x99 stack: 0x417f84c57d28, 0x417f84
2:19:58:08.370 cpu4:428426)0x417f84c57d68:[0x418039232a29]VisorFSTarUnmountInt@vmkernel:nover+0x288 stack: 0x200c57e18, 0x1, 0
2:19:58:08.381 cpu4:428426)0x417f84c57eb8:[0x418039232e99]VisorFSTar_MountUVA@vmkernel:nover+0x20c stack: 0x9a, 0x417f84c57f38
2:19:58:08.392 cpu4:428426)0x417f84c57ee8:[0x4180391a0f8b]UWVMKSyscallUnpackVisorFSMountArchive@vmkernel:nover+0x6a stack: 0x6
2:19:58:08.404 cpu4:428426)0x417f84c57f18:[0x418039166a1f]User_UWVMKSyscallHandler@vmkernel:nover+0xb6 stack: 0xff886e98, 0x64
2:19:58:08.415 cpu4:428426)0x417f84c57f28:[0x4180390da747]gate_entry@vmkernel:nover+0x46 stack: 0x0, 0x13b, 0x49a, 0xff886f9d,
- After upgrading vCenter Server from 4.1 to 5.0, the host fails with a purple diagnostic screen.
- When you disconnect and reconnect the host to the same or a different vCenter Server, the host fails with a purple diagnostic screen.
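To check whether a host has previously logged this signature, you can search the vmkernel logs from the service console or an SSH session. This is a sketch only; the log locations used below (/var/log/vmkernel on classic ESX 4.1 and /var/log/messages on ESXi 4.1) assume a default installation and may differ on your hosts.
# grep -i "Spin count exceeded (VASpace)" /var/log/vmkernel /var/log/messages
# grep -i "possible deadlock with PCPU" /var/log/vmkernel /var/log/messages
If the host has already failed, the same strings can also be searched for in the logs contained in a vm-support bundle collected from the host.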
Environment
VMware ESXi 4.1.x Installable
VMware ESX 4.1.x
VMware ESXi 4.1.x Embedded
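To confirm whether a host is running one of the affected 4.1.x releases, check the product and version from the console or an SSH session. The sample output is illustrative only; the actual build number varies by patch level.
# vmware -v
VMware ESXi 4.1.0 build-xxxxxx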