VMware ESXi 5.5, Patch ESXi550-201608401-BG: Updates esx-base

Article ID: 328234


Products: VMware

Issue/Introduction

Release date: August 04, 2016

Important: Always upgrade vCenter Server to version 5.5 Update 3b or later before you update ESXi to ESXi 5.5 Patch [ESXi550-201608001], released on 08/04/2016, to avoid interoperability issues related to SSLv3 disablement.

Support for the SSLv3 protocol is disabled by default
Note: In your vSphere environment, update vCenter Server to vCenter Server 5.5 Update 3b or later before updating ESXi to ESXi 5.5 Patch [ESXi550-201608001] released on 08/04/2016. vCenter Server cannot manage hosts running ESXi 5.5 Patch [ESXi550-201608001] released on 08/04/2016 if you update ESXi before updating vCenter Server to version 5.5 Update 3b or later. For more information about the sequence in which vSphere environments need to be updated, see KB 2057795.

VMware strongly recommends updating ESXi hosts to ESXi 5.5 Patch [ESXi550-201608001] released on 08/04/2016 while managing them from vCenter Server 5.5 Update 3b or later.
VMware does not recommend re-enabling SSLv3 because of the POODLE vulnerability. If you must enable SSLv3, you need to enable the SSLv3 protocol for all components. For more information, see KB 2139396.
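
As a general verification step (not specific to this patch), you can confirm the version and build of an ESXi host from the ESXi Shell before and after patching:

# esxcli system version get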

Patch Category: Bugfix
Patch Severity: Critical
Build: For build information, see KB 2144359.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included: VMware:esx-base:5.5.0:3.89.4179633
PRs Fixed: 1383229, 1397634, 1416182, 1427564, 1494682, 1495758, 1503452, 1513465, 1514017, 1516392, 1518156, 1521738, 1526733, 1530370, 1537096, 1548243, 1555227, 1557766, 1558552, 1561813, 1561897, 1565079, 1568399, 1574630, 1580012, 1598321, 1604792, 1615579, 1623172, 1626383, 1627958, 1639847
Related CVE numbers: N/A
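
To confirm that the updated VIB is installed after patching (a general verification step, not specific to this patch), run the following command from the ESXi Shell:

# esxcli software vib get -n esx-base

The output should report esx-base version 5.5.0-3.89.4179633 once the patch is applied.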


Resolution

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:
  • Virtual machine performance metrics are not displayed correctly because the performance counter cpu.system.summation for a virtual machine is always displayed as 0.

  • When you create a virtual machine with the CPU limit and reservation set to maximum and sched.cpu.latencySensitivity set to high, exclusive affinity for the vCPUs might not be enabled.
    In earlier releases, the VM does not display a warning message when the CPU is not fully reserved. For more information, see Setting latency sensitivity to high on a virtual machine does not enable exclusive affinity (2087525).
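    For illustration, the latency sensitivity setting corresponds to the following virtual machine configuration (.vmx) entry; this is a sketch of the relevant option only, and the setting should be changed through the vSphere Web Client rather than by editing the .vmx file directly:
    sched.cpu.latencySensitivity = "high"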

  • Attempts to apply a host profile to an Auto Deployed ESXi host during the first boot might fail. The issue occurs because the host cannot enter maintenance mode if the base image used for Auto Deploy contains an expired evaluation license. For more information, see KB 2116320.

  • Host profile remediation tries to mount NFS volumes before configuring the /etc/hosts file entries. This might cause NFS shares that must be resolved through the hosts file to fail to mount.

  • The ESXi mClock I/O scheduler does not limit the I/Os of a virtual machine under a lighter load, even after you change the IOPS limit of the virtual machine using the vSphere Web Client.

  • Occasionally you might observe a slight delay in LUN path state updates, because an internal API does not issue a scan call to the existing LUNs.
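    As a general diagnostic step (not part of the patch itself), you can trigger a manual rescan and then inspect the path states from the ESXi Shell:
    # esxcli storage core adapter rescan --all
    # esxcli storage nmp path list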

  • In a vSphere Distributed Switch (vDS) environment, vMotion might fail after you remove Link Aggregation Groups (LAG). The vMotion wizard compatibility check shows an error message similar to the following:
    Currently connected network interface 'Network Adapter 1' uses network 'DSwitchName', which is not accessible

  • The ESXi host smartd daemon reports an incorrect SMART result for the disk TEMPERATURE attribute on HGST HUS724030AL and HITACHI HDT721075SLA360 disks, thereby triggering alerts every 30 minutes. An error log similar to the following is logged in the syslog.log file:
    nnnn-nn-nnTnn:nn:nnZ smartd: [warn] naa.5000cca22cc043f6: above TEMPERATURE threshold (176 > 0)
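    To inspect the SMART data that smartd reads for a given disk (the device identifier below is taken from the log example above), run:
    # esxcli storage core device smart get -d naa.5000cca22cc043f6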

  • The Windows 10 VM vmx process might fail with an error message similar to the following:
    NOT_REACHED bora/devices/ahci/ahci_user.c:1530

  • The datastore capacity for vSAN 5.5 might show an incorrect storage value. The reported capacity might drop from 2-3 TB to 1 TB after you add a fourth vSAN node to the cluster or patch the servers.
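    To check the datastore capacity as reported by the host (a general verification step, not specific to this fix), run:
    # esxcli storage filesystem list
    The Size and Free columns show the capacity as seen by the host.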

  • ESXi fails to set an active coredump partition. This happens because the vmhba number of the configured coredump partition changes after you reboot the ESXi host.
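    To review and reset the active coredump partition after a reboot (a general sketch; the --smart option lets the host select a suitable partition automatically), run:
    # esxcli system coredump partition get
    # esxcli system coredump partition set --enable true --smart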

  • An ESXi host displays a purple diagnostic screen when a node in the fcntl lock list is freed without being removed from the list, leading to fcntl heap exhaustion.

  • An ESXi host might take a long time to boot and fail to load the VMW_SATP_ALUA Storage Array Type Plug-In (SATP) module due to stale esx.conf entries for a LUN that has gone into a Permanent Device Loss (PDL) condition.
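    To confirm which SATP modules are loaded and review the claim rules (a general diagnostic step, not specific to this fix), run:
    # esxcli storage nmp satp list
    # esxcli storage nmp satp rule list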

  • Attempts to power on more than 30 Citrix virtual machines per host might make the host temporarily unresponsive.

  • Stateless hosts might fail to apply a host profile during boot. An error message similar to the following is displayed:
    A specified parameter was not correct: dvsName

  • On an ESXi host that has 3D hardware, when hostd detects a power state change of a virtual machine, a function is called to check the virtual machine state. During vMotion, the source virtual machine is powered off before it is unregistered on the source host, so a ManagedObjectNotFound exception is displayed and the hostd service might stop responding.

  • The Hardware Status tab might stop responding when the FRU device is accessed by word and read_cnt is greater than or equal to 1. An error message similar to the following is logged in the syslog.log file:
    Dropped response operation details -- nameSpace: root/cimv2, className: OMC_RawIpmiEntity, Type: 0\

  • Attempts to upgrade ESXi 5.5 to ESXi 6.x in a vSAN cluster might cause permanent loss of data. For more information, see KB 2139969.

  • Attempts to power on a virtual machine after upgrading the hardware version might fail. This issue is seen on ESXi 5.5 Update 3a or later. An error message similar to the following is displayed:
    cannot open the disk vmdk or one of the snapshot disks it depends on. Could not open/create change tracking file.

  • An ESXi 5.5 host might fail with a purple diagnostic screen due to a networking race condition. An error message similar to the following might be displayed:
    @BlueScreen: #PF Exception 14 in world 3346729:vmsyslogd IP 0x0 addr 0x0
    For more information, see KB 2136430.

  • The e1000 vNIC device drops packets if the value of bits 8-12 of the GRE header is non-zero.

  • Dying Disk Handling might unmount slow-performing disks. The disk unmount feature in Dying Disk Handling is disabled by default; therefore, disks are only monitored for slow performance but are not unmounted.

  • ESXi generates shadow NIC virtual MAC addresses that are duplicated across multiple ESXi hosts in the environment. The issue impacts the healthCheck functionality. This patch resolves the issue; however, you need to perform the following steps when you upgrade the ESXi host for the changes to take effect:
    1. Upgrade to ESXi 5.5 P03.
    2. Remove all virtualMac entries from the /etc/vmware/esx.conf file. (Entries look similar to /net/pnic/child[0001]/virtualMac = "00:50:56:df:ff:06".)
    3. Run the esxcfg-nics -r command.
    4. Reboot the host.
    You can also edit the virtualMAC address using the following CLI command:
    # esxcli network nic set -n vmnic0 -V "00:50:56:59:82:e0"
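    To verify the result (a general check; vmnic names vary by host), list the physical NICs and confirm that no stale virtualMac entries remain:
    # esxcli network nic list
    # grep virtualMac /etc/vmware/esx.conf
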
  • During a live VIB installation, esximage creates stage data for the live VIB. If you execute the esxcli software vib get or esxcli software vib list command at the same time, the stage data might get deleted, causing the live VIB installation transaction to fail.
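    For example, avoid running queries such as esxcli software vib list while an installation like the following is in progress (the depot path shown is a placeholder):
    # esxcli software vib install -d /vmfs/volumes/datastore1/ESXi550-201608001.zip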

  • The hostd service might stop responding when you execute esxcli commands using PowerCLI, resulting in memory leaks and memory consumption exceeding the hard limit. An error message similar to the following is logged in the hostd.log file:
    YYYY-MM-DDTHH:MM:SS.135Z [nnnnnnnn error 'UW Memory checker'] Current value 646016 exceeds hard limit 643993. Shutting down process.
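    If hostd has already stopped responding, restarting the management agent from the ESXi Shell is a common recovery step (a general suggestion, not specific to this patch):
    # /etc/init.d/hostd restart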

  • The Path Selection Policy Round Robin (PSP_RR) with iops=1 is now set as the default for arrays with vendor XtremIO and model XtremApp.
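    On hosts without this patch, an equivalent claim rule can be added manually; the following is a sketch of the standard esxcli syntax, which you should verify against your array vendor's documentation:
    # esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V XtremIO -M XtremApp -P VMW_PSP_RR -O iops=1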

  • In the health report of a physical disk or host, the component metadata might be corrupt for some components. The components are in an Invalid State due to the corrupt metadata.

  • Attempts to configure a RHEL virtual machine with GRUB to use the at_keyboard input, so that the AZERTY keymap can be configured, fail to activate the countdown timer on boot.
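    For reference, the guest-side GRUB 2 setting involved looks similar to the following sketch in /etc/default/grub (exact entries depend on the RHEL release, and the GRUB configuration must be regenerated afterwards, for example with grub2-mkconfig):
    GRUB_TERMINAL_INPUT="at_keyboard"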

  • Binding and unbinding a dvfilter on a vNIC might cause the virtual machine to lose network connectivity temporarily.

  • Unable to query memory statistics using SNMP because the Real Memory entry is missing from the hrStorageTable.
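    To reproduce the query (the community name and host below are placeholders), enable the ESXi SNMP agent and walk the hrStorageTable from a client with net-snmp installed:
    # esxcli system snmp set --communities public --enable true
    # snmpwalk -v2c -c public <esxi-host> HOST-RESOURCES-MIB::hrStorageTable
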
  • The hostd service might fail with an error due to a memory leak when various vSAN operations are performed, causing the hostd service to consume increasing amounts of memory until it stops responding.

  • In a cluster of hosts running the latest version of vSphere together with older releases on shared storage, if a VMFS datastore is created from the newer vSphere host, an older-version host might not recognize the new datastore and might allow overwrites on the same LUN, resulting in data loss.
    This patch introduces the ability to detect newer versions of VMFS and thus prevents accidental overwrites.
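    To check the VMFS version of a datastore from the ESXi Shell (the volume name is a placeholder), run:
    # vmkfstools -Ph /vmfs/volumes/<datastore-name>
    The output includes the VMFS version of the file system backing the volume.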
