VMware ESXi 5.5, Patch ESXi550-201407401-BG: Updates esx-base

Article ID: 334661

Products

VMware vSphere ESXi

Issue/Introduction

Release date: July 1, 2014

Patch Category: Bugfix
Patch Severity: Critical
Build: For build information, see KB 2077405.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included: VMware:esx-base:5.5.0-1.28.1892794
PRs Fixed: 785233, 1039879, 1142160, 1158936, 1161673, 1163752, 1164812, 1168507, 1172910, 1196941, 1200696, 1204626, 1205311, 1209469, 1209817, 1210014, 1210165, 1213078, 1239407, 1242893, 1242897, 1250103
Related CVE numbers: N/A


Environment

VMware vSphere ESXi 5.5

Resolution

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

  • When you run the dd if=/dev/mem of=/dev/null command on a Linux kernel 3.0 64-bit guest operating system, the virtual machine stops responding, with no visible errors. An error message similar to the following is logged in the vmware.log file:

    2011-11-08T20:10:06.716Z| vcpu-0| MONITOR PANIC: vcpu-0:NOT_IMPLEMENTED devices/pci/pci_monitor.c:244

  • A virtual machine might fail when the e1000 NIC in the guest operating system is placed in D3 (suspended) mode. Error messages similar to the following are logged in the vmware.log file:

    2014-03-17T14:50:08Z[+42.372]| vcpu-0| W110: Caught signal 11-- tid 56247 (addr 0)
    2014-03-17T14:50:08Z[+42.372]| vcpu-0| I120: SIGNAL: rip 0x0 rsp 0x3ffd5b76cf0 rbp 0x0

    This issue occurs in virtual machines that use IP aliasing and where the number of IP addresses exceeds 10.
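
    IP aliasing assigns multiple IPv4 addresses to a single guest interface. A minimal illustration from a Linux guest shell; the addresses and interface name are placeholders:

    # Add secondary (alias) addresses to eth0; repeat for each additional alias.
    ip addr add 192.168.10.11/24 dev eth0 label eth0:1
    ip addr add 192.168.10.12/24 dev eth0 label eth0:2

    # Count the IPv4 addresses currently assigned to eth0.
    ip -4 addr show dev eth0 | grep -c "inet "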

  • When an ESXi host has more CIM-XML indication subscriptions than the host profile attached to it, a compliance check validates the extra subscriptions even though the CIM Indication Subscription profile is disabled, and reports the host profile as noncompliant with a CIM-XML Indication Subscription message.

  • When the management interface with a static IP address is configured on a vSphere Distributed Switch (VDS) and the default gateway is configured manually, the default gateway is not set if you create a new host profile and apply it to a host. As a result, you cannot deploy hosts by using host profiles, because the host cannot communicate on its network.
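
    For reference, a default gateway can be set and verified manually from the ESXi Shell. A minimal sketch; the gateway address is a placeholder:

    # Set the default gateway for the VMkernel management network.
    esxcfg-route 192.168.0.1

    # Verify the resulting IPv4 routing table.
    esxcli network ip route ipv4 list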

  • In VMware View environments with Windows virtual machines that have the reclamation option enabled, an ESXi host might display a purple screen with error messages similar to the following:

    2013-12-23T09:07:19.746Z cpu4:37844)Backtrace for current CPU #4, worldID=37844, ebp=0x41240f51d6d0
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51d6d0:[0x41801f634d0d]GFCPutSectionHeader@esx#nover+0x1 stack: 0x41240f51d750, 0x0, 0x4124
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51d770:[0x41801f645f1d]SEResource_UnlockResources@esx#nover+0x115 stack: 0x0, 0x410cc378090
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51d790:[0x41801f64a29d]SETxnGTEFaultCleanupCompletionArgs@esx#nover+0x65 stack: 0x410cc3780
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51dbd0:[0x41801f64c3de]SETxnHandleGTEFault@esx#nover+0x36a stack: 0x412e89ba6390, 0x0, 0x41
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51dc80:[0x41801f64d206]SETxn_HandleGTEFault@esx#nover+0x22e stack: 0x412e89ba6390, 0x0, 0x4
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51de90:[0x41801f64143a]SESparseAsyncFileIO@esx#nover+0x1c4a stack: 0x50, 0x4109ea364640, 0x
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51def0:[0x41801eaf7735]FDSAsyncIOIssueFn@vmkernel#nover+0x5d stack: 0x41090001e3df, 0x80000
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51df30:[0x41801eaf2ac3]LibAIODoAsyncIO@vmkernel#nover+0x12f stack: 0x0, 0x41240f527000, 0x4
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51dfd0:[0x41801e860f8a]helpFunc@vmkernel#nover+0x6b6 stack: 0x0, 0x0, 0x0, 0x0, 0x0
    2013-12-23T09:07:19.746Z cpu4:37844)0x41240f51dff0:[0x41801ea53242]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0, 0x0, 0x0, 0x0, 0
    2013-12-23T09:07:19.776Z cpu4:37844)VMware ESXi 5.5.0 [Releasebuild-1331820 x86_64]
    #PF Exception 14 in world 37844:helper26-9 IP 0x41801f634d0d addr 0x6c

  • When a function returns an error and the caller function does not release the Internet Group Management Protocol (IGMP) lock, a later attempt to use the same lock causes the ESXi host to display a purple screen with the following error message:

    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d770:[0x41801208cf39]PanicvPanicInt@vmkernel#nover+0x575 stack: 0x412300000008, 0x4123810
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d7d0:[0x41801208d17d]Panic_NoSave@vmkernel#nover+0x49 stack: 0x3ff, 0x41238101d84c, 0x1,
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d830:[0x418012014f23]LockCheckSpinCountInt@vmkernel#nover+0x323 stack: 0x7ef9cfa43965a, 0
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d880:[0x41801209c195]SP_WaitLock@vmkernel#nover+0x135 stack: 0x410a7ac3e000, 0x18, 0x4123
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d8a0:[0x418012945dbc]mtx_lock@<None>#<None>+0xb8 stack: 0x418012a0b7a4, 0x410a7ac3e000, 0
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d960:[0x418012980743]igmp_input@<None>#<None>+0x71b stack: 0x41238101d9b0, 0x41801296b4f9
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d970:[0x418012946a7b]netisr_dispatch@<None>#<None>+0x1b stack: 0x413600000000, 0x410a7ac3
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101d9b0:[0x41801296b4f9]ether_demux@<None>#<None>+0x20d stack: 0x41238101da5f, 0x410a7ac3e00
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101da00:[0x41801296b74f]ether_input@<None>#<None>+0x183 stack: 0x0, 0x0, 0x412300000001, 0x3
    2014-01-01T03:26:20.652Z cpu19:32832)0x41238101da90:[0x418012933d5a]TcpipRx@<None>#<None>+0x16a stack: 0x410006750000, 0x410a7958af40, 0

  • On accessing corrupted VMFS metadata, the ESXi host might fail with a purple diagnostic screen with a backtrace similar to the following:

    2014-01-07T09:01:38.388Z cpu37:1326825)@BlueScreen: #PF Exception 14 in world 1326825:hostd-worker IP 0x4180062fab23 addr 0x41002ba01000
    PTEs:0x405bdfe023;0x1080000023;0x1080058023;0x0;
    2014-01-07T09:01:38.388Z cpu37:1326825)Code start: 0x418005800000 VMK uptime: 14:01:27:08.785
    2014-01-07T09:01:38.390Z cpu37:1326825)0x41233ba5ba30:[0x4180062fab23]Res3_IsValidClusterMeta@esx#nover+0x1d2 stack: 0x40
    2014-01-07T09:01:38.391Z cpu37:1326825)0x41233ba5bb10:[0x4180062fbf07]Res3Stat@esx#nover+0x51e stack: 0x5340
    2014-01-07T09:01:38.392Z cpu37:1326825)0x41233ba5bc70:[0x4180062cbdd6]Vol3_GetAttributes@esx#nover+0x2ed stack: 0x410017bc0000
    2014-01-07T09:01:38.393Z cpu37:1326825)0x41233ba5bd20:[0x418005a2f47c]FSS_GetAttributes@vmkernel#nover+0x157 stack: 0xb637240
    2014-01-07T09:01:38.395Z cpu37:1326825)0x41233ba5be00:[0x418005c923b4]UserFileIoctl@<None>#<None>+0x22f stack: 0x41233ba5be40
    2014-01-07T09:01:38.396Z cpu37:1326825)0x41233ba5be70:[0x418005cca757]UserVmfs_Ioctl@<None>#<None>+0x66 stack: 0x412300000048
    2014-01-07T09:01:38.398Z cpu37:1326825)0x41233ba5beb0:[0x418005c96629]LinuxFileDesc_Ioctl@<None>#<None>+0x68 stack: 0x4180059b84c9
    2014-01-07T09:01:38.399Z cpu37:1326825)0x41233ba5bef0:[0x418005c776fa]User_LinuxSyscallHandler@<None>#<None>+0xe5 stack: 0x0
    2014-01-07T09:01:38.401Z cpu37:1326825)0x41233ba5bf10:[0x4180058a878e]User_LinuxSyscallHandler@vmkernel#nover+0x19 stack: 0x2e7c0b08
    2014-01-07T09:01:38.402Z cpu37:1326825)0x41233ba5bf20:[0x418005910064]gate_entry@vmkernel#nover+0x63 stack: 0x0
    2014-01-07T09:01:38.408Z cpu37:1326825)base fs=0x0 gs=0x418049400000 Kgs=0x0
    2014-01-07T09:01:38.408Z cpu37:1326825)vmkernel 0x0 .data 0x0 .bss 0x0
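
    VMFS metadata consistency can be checked from the ESXi Shell with the voma tool (available from ESXi 5.1), assuming the affected datastore is first quiesced; the device path below is a placeholder:

    # Find the device and partition backing the datastore.
    esxcli storage vmfs extent list

    # Check the VMFS metadata on that partition (read-only analysis).
    voma -m vmfs -d /vmfs/devices/disks/naa.60000000000000000000000000000001:1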

  • An ESXi host might stop responding and display a purple screen with a #DE Exception 0 error due to an exception in the ESXi VOB infrastructure. Messages similar to the following are logged in the vmkernel.log file:

    2014-01-12T15:31:11.773Z cpu6:37879)@BlueScreen: #DE Exception 0 in world 37879:vmx @ 0x4180140cf00e
    2014-01-12T15:31:11.774Z cpu6:37879)Code start: 0x418014000000 VMK uptime: 15:05:14:23.434
    2014-01-12T15:31:11.774Z cpu6:37879)0x41240fddd518:[0x4180140cf00e]VobImplContextAddList@vmkernel#nover+0x37e stack: 0x28
    2014-01-12T15:31:11.774Z cpu6:37879)0x41240fddd578:[0x4180140cfc58]Vob_ImplContextAdd@vmkernel#nover+0x38 stack: 0x410b035e4b38

  • On a host that has multipath settings configured for remote storage, the ESXi installer might not list the disks in the correct order during installation, and a scripted installation might not choose the LUN with the lowest LUN ID as the boot disk. The target disk can be named explicitly in the installation script, as sketched below.
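
    A minimal scripted-install sketch that selects the target disk explicitly instead of relying on disk ordering; the NAA identifier and root password are placeholders:

    # ks.cfg fragment: accept the EULA and install to an explicitly named device.
    vmaccepteula
    install --disk=naa.60000000000000000000000000000001 --overwritevmfs
    rootpw MySecretPassword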

  • If you enable the traffic shaping policy on a vSwitch portgroup, the ESXi host might intermittently fail with a purple diagnostic screen with a backtrace similar to the following:

    2013-01-10T14:41:19.355Z cpu20:14281)0x41229f25b808:[0x418016968ab3]Vmxnet3VMKDevTxComplete@vmkernel#nover+0x1d2 stack: 0x41229f25b838,
    2013-01-10T14:41:19.355Z cpu20:14281)0x41229f25b848:[0x418016968d13]Vmxnet3VMKDevTxCompleteCB@vmkernel#nover+0x116 stack: 0x41229f25b800
    2013-01-10T14:41:19.356Z cpu20:14281)0x41229f25b8e8:[0x41801693cf40]IOChain_Resume@vmkernel#nover+0x247 stack: 0x41229f25b900, 0x0, 0x0,
    2013-01-10T14:41:19.356Z cpu20:14281)0x41229f25b958:[0x41801692b185]Port_IOCompleteList@vmkernel#nover+0x1c4 stack: 0x41229f25b998, 0x14
    2013-01-10T14:41:19.357Z cpu20:14281)0x41229f25bb58:[0x418016e3c139]EtherswitchPortDispatch@<None>#<None>+0x151c stack: 0x412200000018,
    2013-01-10T14:41:19.357Z cpu20:14281)0x41229f25bbc8:[0x41801692b337]Port_InputResume@vmkernel#nover+0x146 stack: 0x41229f25bc18, 0x41001
    2013-01-10T14:41:19.358Z cpu20:14281)0x41229f25bc18:[0x41801692cab2]Port_Input_Committed@vmkernel#nover+0x29 stack: 0x11221ead0, 0x41220
    2013-01-10T14:41:19.359Z cpu20:14281)0x41229f25bc98:[0x41801696b541]Vmxnet3VMKDevTQDoTx@vmkernel#nover+0x2f8 stack: 0x41229f25bd28, 0x14
    2013-01-10T14:41:19.359Z cpu20:14281)0x41229f25bce8:[0x41801696c968]Vmxnet3VMKDev_AsyncTx@vmkernel#nover+0xd7 stack: 0x41229f25bd28, 0x4
    2013-01-10T14:41:19.360Z cpu20:14281)0x41229f25bd58:[0x4180169518a3]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0xe9, 0x41229f25bdf0,
    2013-01-10T14:41:19.360Z cpu20:14281)0x41229f25bed8:[0x41801690af2b]WorldletProcessQueue@vmkernel#nover+0x486 stack: 0x41229f25bf18, 0xb
    2013-01-10T14:41:19.361Z cpu20:14281)0x41229f25bf18:[0x41801690b5a5]WorldletBHHandler@vmkernel#nover+0x60 stack: 0x10041229f25bf48, 0x41
    2013-01-10T14:41:19.361Z cpu20:14281)0x41229f25bf98:[0x4180168207fa]BH_Check@vmkernel#nover+0x185 stack: 0x4180169b84f9, 0x4100346ac188,
    2013-01-10T14:41:19.362Z cpu20:14281)0x41229f25bfe8:[0x4180168f3763]VMMVMKCall_Call@vmkernel#nover+0x27a stack: 0x0, 0x0, 0x0, 0x0, 0x0
    2013-01-10T14:41:19.362Z cpu20:14281)0x4180168c77d8:[0xfffffffffc223a12]__vmk_versionInfo_str@esx#nover+0xe4f024b1 stack: 0x0, 0x0, 0x0,

  • An ESXi host might stop responding when NetFlow is enabled in a VXLAN environment because a null pointer case is not handled properly. A purple screen with messages similar to the following is displayed:

    2014-02-24T20:59:05.678Z cpu9:32791)World: 8773: PRDA 0x418042400000 ss 0x0 ds 0x10b es 0x10b fs 0x0 gs 0x13b
    2014-02-24T20:59:05.678Z cpu9:32791)World: 8775: TR 0x4020 GDT 0x4123805e1000 (0x402f) IDT 0x41802c8f3000 (0xfff)
    2014-02-24T20:59:05.678Z cpu9:32791)World: 8776: CR0 0x80010031 CR3 0xc3a2d4000 CR4 0x2768
    2014-02-24T20:59:05.749Z cpu9:32791)Backtrace for current CPU #9, worldID=32791, ebp=0x4123805dd710
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd710:[0x41802d1e6b0c]IpfixFilterPacket@<None>#<None>+0x80 stack: 0x410889663ab0, 0x1, 0x4
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd740:[0x41802d1e740b]IpfixInputFilter@<None>#<None>+0x4b stack: 0x4123805dd790, 0x4108a5e
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd7e0:[0x41802c992164]IOChain_Resume@vmkernel#nover+0x174 stack: 0x4108a5e74ac0, 0x0, 0x41
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd850:[0x41802c97a703]Port_InputResume@vmkernel#nover+0xc3 stack: 0x4123805dd8b0, 0x100410
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd8a0:[0x41802c97ba39]Port_Input_Committed@vmkernel#nover+0x25 stack: 0x1805dd910, 0x41238
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd930:[0x41802c9cad25]Vmxnet3VMKDevTQDoTx@vmkernel#nover+0x28d stack: 0x412300000000, 0x25
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dd9a0:[0x41802c9cec13]Vmxnet3VMKDev_AsyncTx@vmkernel#nover+0xa3 stack: 0x4123805ddb20, 0x4
    2014-02-24T20:59:05.749Z cpu9:32791)0x4123805dda10:[0x41802c9add70]NetWorldletPerVMCB@vmkernel#nover+0x218 stack: 0x4, 0x417fec8ee090,
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddb70:[0x41802c8eae77]WorldletProcessQueue@vmkernel#nover+0xcf stack: 0x1, 0x200, 0x9, 0x4
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddbb0:[0x41802c8eb93c]WorldletBHHandler@vmkernel#nover+0x54 stack: 0x25a7fa440c063, 0x4100
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddc20:[0x41802c82e5b9]BH_Check@vmkernel#nover+0xc9 stack: 0x0, 0x4100067bc000, 0x2805ddc90
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddc90:[0x41802ca4e72d]CpuSchedIdleLoopInt@vmkernel#nover+0x391 stack: 0x412300000002, 0x0,
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805dddf0:[0x41802ca54930]CpuSchedDispatch@vmkernel#nover+0x1630 stack: 0x2c0, 0x25a7fa4b1c901
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805dde60:[0x41802ca55c65]CpuSchedWait@vmkernel#nover+0x245 stack: 0x412300000001, 0x4123805dd
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddec0:[0x41802ca56437]CpuSched_SleepUntilTC@vmkernel#nover+0xfb stack: 0x0, 0x4123805ddf30
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddfd0:[0x41802c9fc5f0]NetCoalesceWorldFn@vmkernel#nover+0x298 stack: 0x0, 0x0, 0x0, 0x0, 0
    2014-02-24T20:59:05.750Z cpu9:32791)0x4123805ddff0:[0x41802ca53242]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0, 0x0, 0x0, 0x0, 0
    2014-02-24T20:59:05.779Z cpu9:32791)VMware ESXi 5.5.0 [Releasebuild-1331820 x86_64]
    #PF Exception 14 in world 32791:coalesceWorl IP 0x41802d1e6b0c addr 0x8
    PTEs:0xc398f1027;0xc39a79027;0x0;
    2014-02-24T20:59:05.780Z cpu9:32791)cr0=0x8001003d cr2=0x8 cr3=0x8180e000 cr4=0x216c
    2014-02-24T20:59:05.781Z cpu9:32791)frame=0x4123805dd470 ip=0x41802d1e6b0c err=0 rflags=0x10246

  • On an ESXi host, if the numaSched.node heap memory contains obsolete values, NUMA (Non-Uniform Memory Access) scheduling decisions might be affected, resulting in performance degradation.

  • Invoking the extendVirtualDisk function with the eagerZero parameter set to true on a VMDK through the vSphere API might result in data loss after the extend operation. This issue occurs regardless of the base virtual disk's backing type.
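
    For context, the corresponding extend operation is also available from the ESXi Shell through vmkfstools. A sketch, assuming the eagerzeroedthick allocation format and placeholder paths:

    # Extend a VMDK to 40 GB and eagerly zero the newly added space.
    vmkfstools -X 40g -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm.vmdk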

  • When a Solaris 10 virtual machine runs on an ESXi 5.5 host with a virtual hardware version later than 8 (that is, vmx-9 or vmx-10) and has more than one virtual CPU (vCPU) configured, the guest operating system fails to boot and displays an error message similar to the following:

    WARNING: /pci@0,0/pci15ad,1976@10 (mpt1):
    Disconnected command timeout for Target 0
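
    The relevant settings are recorded in the virtual machine's .vmx file; an illustrative fragment with example values:

    # .vmx fragment: virtual hardware version 10 (vmx-10) with two vCPUs.
    virtualHW.version = "10"
    numvcpus = "2"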

  • ESXi hosts running nested virtual machines (for example, virtual ESXi) might stop responding with a purple diagnostic screen (PSOD) due to memory corruption.

  • When you attempt to extend a virtual disk located on a VAAI-NAS datastore, the ESXi host might stop responding with an error similar to the following:

    Assert Failed: "pos <= max" @ bora/vim/hostd/vdisksvc/vdiskManagerImpl.cpp:102

    An error similar to the following is also displayed in vCenter Server:

    An error occurred while communicating with the remote host.
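
    Whether an NFS datastore has VAAI-NAS hardware acceleration active can be checked from the ESXi Shell; a minimal sketch:

    # List NFS datastores; the Hardware Acceleration field reports VAAI-NAS support.
    esxcli storage nfs list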

  • When you set permissions for an Active Directory (AD) user or group on an ESXi host by using a host profile, the permissions might not persist after you reboot the ESXi host with Auto Deploy.

  • During system restart, if the ESXi host is unable to reach Auto Deploy and needs to boot from the local cache, the settings might not persist and the ESXi host might be removed from Active Directory (AD) upon reboot.

  • When you attempt to change the boot sequence of a virtual machine through the bootOrder property of the VirtualMachineBootOptions data object in the vSphere API, the requested boot sequence is not applied because an extra space is introduced into the bios.bootOrder value in the virtual machine's .vmx file.
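
    A hypothetical illustration of the malformed entry in the .vmx file; the device names are examples:

    # Expected entry after setting bootOrder through the API:
    bios.bootOrder = "ethernet0,hdd"

    # Malformed entry produced by this issue (note the extra space in the value):
    bios.bootOrder = " ethernet0,hdd"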

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.

ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
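
A minimal sketch of the manual workflow from the ESXi Shell; the datastore path, bundle file name, and image profile name are placeholders, so list the profiles contained in the bundle before updating:

    # Enter maintenance mode before patching (migrate or shut down running virtual machines first).
    esxcli system maintenanceMode set --enable true

    # Option 1: update the VIBs directly from the downloaded patch bundle.
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi550-201407401-BG.zip

    # Option 2: update by image profile. List the profiles in the bundle first,
    # then apply the chosen one.
    esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi550-201407401-BG.zip
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi550-201407401-BG.zip -p <image-profile-name>

    # This patch requires a host reboot.
    reboot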