VMware ESXi 8.0.3 [Releasebuild-24674464 x86_64]
#PF Exception 14 in world 2097517:HELPER_UPLIN IP 0x42002f3d16cc addr 0x4521ccc2201e
PTEs:0x8000082023;0x80e3bbb063;0x80f296d063;0x0;
Module(s) involved in panic: [qfle3 1.4.46.0-10EM.700.1.0.15843807 (External)]
cr0=0x8001003d cr2=0x4521ccc2201e cr3=0x682f6000 cr4=0x14216c
FMS=06/55/7 uCode=0x5003801
frame=0x4539cb69b7c0 ip=0x42002f3d16cc err=0x0 rflags=0x10246
rax=0x4521ccc2201e rbx=0x13 rcx=0x1
rdx=0x0 rbp=0x0 rsi=0x0
rdi=0x4539cb69b958 r8=0x4539cb69b948 r9=0x4c
r10=0x1 r11=0x100 r12=0x113
r13=0x0 r14=0x0 r15=0x431f8713c090
*PCPU49:2097517/HELPER_UPLINK_ASYNC_CALL_QUEUE
PCPU  0: VVVVVVVVUUSVVVVVUUVVSVVVSSUUVVSVVSSVVSVSUUSVVVSVUSVUVVSVUUVUVVVU
PCPU 64: USVVVSSVVUSSUVSV
Code start: 0x42002e000000 VMK uptime: 18:06:30:08.984
0x4539cb69b880:[0x42002f3d16cc]qfle3_sp_helper_func@(qfle3)#<None>+0x4e8 stack: 0x0
0x4539cb69b920:[0x42002f435a93]ecore_queue_state_change@(qfle3)#<None>+0x228 stack: 0x0
0x4539cb69b9a0:[0x42002f3b9db2]qfle3_stop_queue@(qfle3)#<None>+0x85f stack: 0x431f8713c090
0x4539cb69bab0:[0x42002f3d913e]qfle3_rq_stop@(qfle3)#<None>+0x327 stack: 0x4539cb69bb58
0x4539cb69bc10:[0x42002f38dfbd]qfle3_cmd_stop_q@(qfle3)#<None>+0x1a stack: 0xbad0003
0x4539cb69bc20:[0x42002f3c5b8c]qfle3_sm_q_cmd@(qfle3)#<None>+0x145 stack: 0x0
0x4539cb69bc80:[0x42002f3dc799]qfle3_rx_queue_stop@(qfle3)#<None>+0x20a stack: 0x100042001
0x4539cb69bcb0:[0x42002f3dcea9]qfle3_queue_quiesce@(qfle3)#<None>+0x276 stack: 0x4307a5e3f590
0x4539cb69bce0:[0x42002e444a3c]UplinkNetq_NotifyQueueQuiesce@vmkernel#nover+0x89 stack: 0x4307a5e30043
0x4539cb69bd30:[0x42002e32c7fc]NetqueueBalDeactivatePendingRxQueues@vmkernel#nover+0x161 stack: 0x1
0x4539cb69bda0:[0x42002e331540]UplinkNetqueueBal_BalanceCB@vmkernel#nover+0x841 stack: 0xff
0x4539cb69bf30:[0x42002e429e82]UplinkAsyncProcessCallsHelperCB@vmkernel#nover+0xa7 stack: 0x4302bf201220
0x4539cb69bf60:[0x42002e15ba1c]HelperQueueFunc@vmkernel#nover+0x19d stack: 0x430b96201238
0x4539cb69bfe0:[0x42002e6dc88e]CpuSched_StartWorld@vmkernel#nover+0xbf stack: 0x0
0x4539cb69c000:[0x42002e144faf]Debug_IsInitialized@vmkernel#nover+0xc stack: 0x0
base fs=0x0 gs=0x42004c400000 Kgs=0x0
No disk partition configured to dump data.
Finalized dump header (18/18) FileDump: Successful.
No port for remote debugger. "Escape" for local debugger.
YYYY-MM-DDTHH:MM:SS.681Z cpu21:2097985)qfle3: qfle3_get_mf_config:6716: [0000:18:00.4] func 4 is in MF switch-independent mode
YYYY-MM-DDTHH:MM:SS.681Z cpu21:2097985)qfle3: qfle3_get_mf_config:6742: [0000:18:00.4] multi function mode
YYYY-MM-DDTHH:MM:SS.681Z cpu21:2097985)qfle3: qfle3_init_queue_count:7245: [0000:18:00.4] Module param txqueue_nr (10)
YYYY-MM-DDTHH:MM:SS.681Z cpu21:2097985)qfle3: qfle3_init_queue_count:7250: [0000:18:00.4] Module param rxqueue_nr (10)
YYYY-MM-DDTHH:MM:SS.681Z cpu21:2097985)qfle3: qfle3_init_queue_count:7255: [0000:18:00.4] Module param RSS (0)
YYYY-MM-DDTHH:MM:SS.681Z cpu21:2097985)qfle3: qfle3_init_queue_count:7256: [0000:18:00.4] Module param txqueue_nr (10) rxqueue_nr (10).
ESXi 8.0.3
The crash is triggered when a vmnic detects a TX hang condition. To recover, the system attempts to unload and reload the affected vmnic's qfle3 driver, and the panic occurs in this recovery path while the driver is stopping its RX queues (see the backtrace above).
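Before the PSOD, the TX hang is normally reported in vmkernel.log, and the driver/firmware level of the affected NIC is worth confirming. A sketch using standard `esxcli` commands; the vmnic name is a placeholder for the affected uplink:

```shell
# Show driver name, driver version, and firmware version for an uplink
# (vmnic4 is a placeholder; substitute the NIC backed by the qfle3 driver)
esxcli network nic get -n vmnic4

# Look for TX hang / recovery messages from the driver prior to the panic
grep -i "tx hang" /var/log/vmkernel.log
```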
This issue can also occur due to a misconfiguration on the physical switch port, particularly a link-speed mismatch with the vmnic.
To resolve this issue, engage the hardware vendor, run a thorough hardware diagnostic, and perform the following additional steps:
1. Set the physical switch port speed to match the vmnic speed.
2. Set the global bandwidth allocation to 'Default' for all ports, so that the Maximum TX Bandwidth for all partitions defaults to 100.
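Step 1 can be verified and, if required, enforced from the ESXi side as well. A sketch using standard `esxcli` commands; the vmnic name and speed values are placeholders to be matched to the environment:

```shell
# Check the current link speed/duplex reported by each uplink
esxcli network nic list

# Example: force a fixed speed/duplex to match the switch port configuration
# (vmnic4 and 10000 Mb/s full duplex are placeholders)
esxcli network nic set -n vmnic4 -S 10000 -D full

# Or return the uplink to auto-negotiation once the switch port is corrected
esxcli network nic set -n vmnic4 -a
```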