ESXi host fails with a PSOD (purple screen of death) FSSVec_FileIO

Article ID: 318365

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:
An ESXi host fails with a PSOD (purple screen of death) referencing FSSVec_FileIO.

You see a backtrace similar to the one below. Note that desc=0x0 in frame #0: the file descriptor is NULL when it is dereferenced at fsSwitchVec.c:505, which is what triggers the panic:
#0  FSSVec_FileIO (identity=0x0, desc=0x0, fhID=1787690, sgArr=0x4312c5a103d0, token=0x0, ioFlags=(FS_WRITE_OP | FS_AUTO_ALIGNUP | FS_BUFFER_CACHE_IO | FS_RETRY_TIMEOUT), bytesRead=0x451a6431a9e4)
    at bora/vmkernel/filesystems/fsSwitchVec.c:505
505        if (FSS_FD2FILEOPS(desc) && FSS_FD2FILEOPS(desc)->FSS_FileIO) {
(gdb) bt
#0  FSSVec_FileIO (identity=0x0, desc=0x0, fhID=1787690, sgArr=0x4312c5a103d0, token=0x0, ioFlags=(FS_WRITE_OP | FS_AUTO_ALIGNUP | FS_BUFFER_CACHE_IO | FS_RETRY_TIMEOUT), bytesRead=0x451a6431a9e4)
    at bora/vmkernel/filesystems/fsSwitchVec.c:505
#1  0x000042000ac541fe in FSSFileIO (fileHandleID=1787690, sgArr=0x4312c5a103d0, token=0x0, ioFlags=(FS_WRITE_OP | FS_AUTO_ALIGNUP | FS_BUFFER_CACHE_IO | FS_RETRY_TIMEOUT), bytesTransferred=0x451a6431a9e4)
    at bora/vmkernel/filesystems/fsSwitch.c:7777
#2  0x000042000ac54786 in FSS_SGFileIO (fileHandleID=fileHandleID@entry=1787690, sgArr=sgArr@entry=0x4312c5a103d0, ioFlags=ioFlags@entry=(FS_WRITE_OP | FS_AUTO_ALIGNUP | FS_BUFFER_CACHE_IO | FS_RETRY_TIMEOUT),
    bytesTransferred=bytesTransferred@entry=0x451a6431a9e4) at bora/vmkernel/filesystems/fsSwitch.c:5112
#3  0x000042000ac60500 in BCFileIO (fhid=fhid@entry=1787690, sgArr=sgArr@entry=0x4312c5a103d0, ioFlags=(FS_WRITE_OP | FS_AUTO_ALIGNUP | FS_BUFFER_CACHE_IO | FS_RETRY_TIMEOUT),
    ioFlags@entry=(FS_WRITE_OP | FS_AUTO_ALIGNUP)) at bora/vmkernel/filesystems/caches/bufferCache2.c:352
#4  0x000042000ac60ca7 in BCWriteBuffer (file=0x430e033176d0, fhid=1001253, buf=<optimized out>) at bora/vmkernel/filesystems/caches/bufferCache2.c:625
#5  0x000042000ac60f47 in BCFlushFile (file=file@entry=0x430e033176d0, fhid=fhid@entry=1001253, sync=1 '\001') at bora/vmkernel/filesystems/caches/bufferCache2.c:711
#6  0x000042000ac64fca in BC_FlushFHID (fhid=1001253, dataOnly=<optimized out>) at bora/vmkernel/filesystems/caches/bufferCache2.c:3740
#7  0x000042000b2673e7 in LinuxFileDesc_Fsync (fd=3) at bora/vmkernel/user/linuxFileDesc.c:2442
#8  0x000042000b1b42a2 in User_LinuxSyscallHandler (fullFrame=0x451a6431af38) at bora/vmkernel/user/user.c:1963
#9  0x000042000addd076 in gate_entry ()
#10 0x0000007c0e756530 in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
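
If you need to confirm after a reboot that the host took this PSOD, the kernel dump written at panic time contains the same backtrace and should be collected for support. Assuming the default dump configuration, the extracted dump lands under /var/core:

# List kernel dumps; a vmkernel-zdump file with a timestamp matching the
# outage confirms the host panicked at that time
ls -lh /var/core/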


In /var/log/vmkernel.log, you see that the VMFS volume has run out of space and that file close operations fail as a result, with entries similar to:

2018-12-12T18:29:26.314Z cpu8:1001427226)Fil3: HandleOutOfResourcesRetry:9778: Max no space retries (10) exceeded for caller Fil3_FileIOInt (status 'No space left on device')
2018-12-12T18:29:26.429Z cpu3:1001427225)Fil3: HandleOutOfResourcesRetry:9791: SFB totalNumResources: 8388096 numFreeResources: 0 numNotAvailable 4451840
2018-12-12T18:29:26.429Z cpu30:1001427236)Fil3: HandleOutOfResourcesRetry:9791: SFB totalNumResources: 8388096 numFreeResources: 0 numNotAvailable 4451840
2018-12-12T18:29:26.429Z cpu8:1001427226)Fil3: HandleOutOfResourcesRetry:9791: SFB totalNumResources: 8388096 numFreeResources: 0 numNotAvailable 4451840
2018-12-12T18:29:26.452Z cpu3:1001427225)Fil3: HandleOutOfResourcesRetry:9801: LFB totalNumResources: 16383 numFreeResources: 0 numNotAvailable 7688
2018-12-12T18:29:26.452Z cpu8:1001427226)Fil3: HandleOutOfResourcesRetry:9801: LFB totalNumResources: 16383 numFreeResources: 0 numNotAvailable 7688
2018-12-12T18:29:26.452Z cpu30:1001427236)Fil3: HandleOutOfResourcesRetry:9801: LFB totalNumResources: 16383 numFreeResources: 0 numNotAvailable 7688
2018-12-12T18:29:26.452Z cpu8:1001427226)FSS: SGFileIO:5118: status: No space left on device on try: 1
2018-12-12T18:29:26.452Z cpu3:1001427225)FSS: SGFileIO:5118: status: No space left on device on try: 1
2018-12-12T18:29:26.452Z cpu30:1001427236)FSS: SGFileIO:5118: status: No space left on device on try: 1
2018-12-12T18:29:26.452Z cpu17:1001427234)World: ResetToVMKOnPanic:2974: PRDA 0x420044400000 ss 0x0 ds 0x10b es 0x10b fs 0x10b gs 0x0
2018-12-12T18:29:26.452Z cpu8:1001427226)close: UserVmfs: Close:2076: dt: FSS_Close on handle 1787690 failed: 0xbad00d7 <————— Failed with VMK_NO_SPACE
2018-12-12T18:29:26.452Z cpu17:1001427234)World: ResetToVMKOnPanic:2976: TR 0xf58 GDT 0x451a6431f000 (0xf77) IDT 0x42000addf000 (0xfff)
2018-12-12T18:29:26.452Z cpu17:1001427234)World: ResetToVMKOnPanic:2977: CR0 0x8001003d CR3 0x14cf91000 CR4 0x142768
2018-12-12T18:29:26.452Z cpu30:1001427236)close: UserVmfs: Close:2076: dt: FSS_Close on handle 739108 failed: 0xbad00d7
2018-12-12T18:29:26.452Z cpu3:1001427225)ALERT: BC: CloseFile:3133: File ihost_new_w1-hs2-o0506_LZT_60-flat.vmdk closed with dirty buffers. Possible data loss.
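
To check whether a host is hitting this condition, search the vmkernel log for the signatures quoted above and verify how full the VMFS datastore is. A minimal check from the ESXi shell (the log path is the default; rotated logs may also need to be searched):

# Look for the out-of-space and dirty-buffer signatures shown above
grep -E "No space left on device|closed with dirty buffers" /var/log/vmkernel.log

# Check free capacity on all mounted filesystems; a VMFS volume at or near
# zero free space is the trigger condition for this panic
esxcli storage filesystem list
df -h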


Environment

VMware vSphere ESXi 7.0.x
VMware vSphere ESXi 6.7.x

Resolution

This issue is resolved in ESXi 7.0 Update 2. Upgrade affected hosts to ESXi 7.0 Update 2 or later. Until a host can be upgraded, maintaining adequate free capacity on the affected VMFS datastore avoids the out-of-space condition that triggers the panic.
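
To verify whether a host already runs a build that contains the fix, check its version and build from the ESXi shell and compare the build number against the release notes for ESXi 7.0 Update 2:

# Report the ESXi version, update level, and build number of this host
vmware -vl
esxcli system version get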