VMware ESXi 6.0, Patch Release ESXi-6.0.0-20171104001-no-tools

Article ID: 322166


Products

VMware vSphere ESXi

Issue/Introduction

Profile Name: ESXi-6.0.0-20171104001-no-tools
Build: 6922994
Vendor: VMware, Inc.
Release Date: November 9, 2017
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware_bootbank_esx-base_6.0.0-3.79.6921384
  • VMware_bootbank_vsan_6.0.0-3.79.6766495
  • VMware_bootbank_vsanhealth_6.0.0-3000000.3.0.3.79.6769065
  • VMware_bootbank_esx-dvfilter-generic-fastpath_6.0.0-3.79.6921384
  • VMware_bootbank_misc-drivers_6.0.0-3.79.6921384
  • VMware_bootbank_ipmi-ipmi-devintf_39.1-5vmw.600.3.79.6921384
  • VMware_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.600.3.79.6921384
PRs Fixed: 1376775, 1403686, 1501505, 1503588, 1506110, 1594836, 1643035, 1647160, 1708494, 1767332, 1783061, 1788629, 1790776, 1791626, 1798426, 1804693, 1821111, 1821520, 1826091, 1830581, 1831208, 1831305, 1835672, 1836646, 1838528, 1838627, 1839800, 1840101, 1840800, 1849490, 1849683, 1850348, 1850620, 1851705, 1856577, 1857356, 1858656, 1859921, 1861236, 1861251, 1862219, 1862753, 1863155, 1867673, 1868075, 1868598, 1872167, 1872522, 1872685, 1874998, 1875227, 1875292, 1875784, 1876045, 1876472, 1876879, 1877471, 1878085, 1880112, 1884218, 1886426, 1892867, 1893077, 1897001, 1897989, 1901019, 1901818, 1902569, 1902609, 1903312, 1903319, 1904658, 1904747, 1905048, 1905070, 1905114, 1905119, 1905132, 1905462, 1905474, 1905484, 1905487, 1905851, 1906989, 1907792, 1908231, 1909911, 1910118, 1910992, 1911154, 1911171, 1911536, 1912460, 1913514, 1914249, 1915025, 1915222, 1916996, 1918678, 1918771, 1919721, 1919725, 1919770, 1920527, 1921077, 1921509, 1922263, 1922313, 1923046, 1923148, 1923376, 1923502, 1923573, 1924919, 1927016, 1930345, 1933407, 1939316, 1967742, 1968450, 1891971, 1911662, 1913514, 1948937, 1888700, 1878101
Related CVE numbers: N/A


Environment

VMware vSphere ESXi 6.0

Resolution

Summaries and Symptoms

This patch updates the following issues:
  • CPU utilization reported by a Solaris 11 guest OS might differ from the utilization reported by the ESXi host for some workloads, because the higher timer frequency of Solaris 11 prevents idle Solaris 11 virtual machines from being recognized as idle by the host. The default value of the virtual machine parameter has been decreased from 400 microseconds to 75 microseconds for Solaris 11 guests to improve the consistency between the two reports.

  • The device configuration setting Is Shared Clusterwide, which can be configured through the ESXCLI command set, might not persist across reboots and cause host profile compliance issues. You must disable the Is Shared Clusterwide setting for the storage area network (SAN) boot logical unit number (LUN) device on the reference host before you extract the host profile from the reference host.
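    For reference, a minimal sketch of how this setting is typically cleared with ESXCLI follows; the device identifier naa.xxxxxxxx is a placeholder for the SAN boot LUN on your reference host:
        esxcli storage core device setconfig -d naa.xxxxxxxx --shared-clusterwide=false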

  • Attempts to join Active Directory domains might fail intermittently with the error LW_ERROR_KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN: Client not found in Kerberos database.

  • A virtual machine (VM) might fail to boot when you use a third-party NetBoot server with a VM configured to NetBoot Mac OS X Server, OS X or macOS. This update improves the Boot Service Discovery Protocol (BSDP) implementation in the virtual machine’s firmware to be more tolerant of variations in the NetBoot server responses.

  • For latency-sensitive virtual machines, the netqueue load balancer might try to reserve an exclusive Rx queue. If the driver provides queue preemption, the netqueue load balancer uses it to obtain an exclusive queue for latency-sensitive virtual machines. The netqueue load balancer holds a lock and executes the queue preemption callback of the driver. With some drivers, this might result in a purple screen on the ESXi host, especially if the driver implementation involves sleep mode.

  • You might see a compliance error in a host profile on Security.PasswordQualityControl when the PAM password setting in the PAM password profile is different from the advanced configuration option Security.PasswordQualityControl. Because the advanced configuration option Security.PasswordQualityControl is unavailable for host profiles in this release, use the Requisite option in the Password PAM Configuration to change the password policy.

  • An ESXi host might fail to discover all PCI functions of a multi-function device that uses Alternative Routing ID Interpretation (ARI). This problem might occur when the device does not number its functions consecutively. If there is no function with a number that is a multiple of 8, the ESXi host might fail to discover the higher-numbered functions up to the next multiple of 8. For example, if Function 8 is not present, the ESXi host fails to discover Functions 9 through 15; if Function 16 is not present, the ESXi host fails to discover Functions 17 through 23.

  • If the target connected to the ESXi host supports only implicit Asymmetric Logical Unit Access (ALUA) and has only standby paths, the device might get registered, but device attributes related to media access might not get populated. It might take up to 5 minutes for the VMware vSphere Storage APIs Array Integration (VAAI) attributes to refresh if an active path is added after registration. As a result, VMFS volumes configured as ATS-only might fail to mount until the VAAI update.
    NOTE: If the target supports only implicit ALUA and has only standby paths, enable the FailDiskRegistration config option on the host by using the following ESXCLI command: esxcli system settings advanced set -o /Disk/FailDiskRegistration -i 1. For the change to take effect, you must set the config option and reboot the host. This delays the registration of such devices until an active path is available.
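    For example, you can set the option, confirm its value, and then reboot the host; a minimal sketch, using the option path given above:
        esxcli system settings advanced set -o /Disk/FailDiskRegistration -i 1
        esxcli system settings advanced list -o /Disk/FailDiskRegistration
        reboot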

  • You might see a session timeout if automatic space reclamation tasks started with the UnmapVmfsVolumeEx_Task() API method run on an ESXi host and take longer than 30 minutes. You cannot track the progress of the task, which sends UNMAP commands to a storage array to reclaim space from deleted or moved virtual machines on that storage array.
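    As a point of reference, space reclamation can also be started from the ESXi shell rather than through the API; a sketch follows, where datastore1 and the reclaim unit count are placeholders:
        esxcli storage vmfs unmap -l datastore1 -n 200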

  • When a virtual machine has multiple disks with different absolute paths but the same name, and the parent of any delta disk is inaccessible, a snapshot consolidation might reparent a delta disk to the wrong parent, corrupting the disk chain and causing the virtual machine to power off.

  • An ESXi host might stop responding if LUNs are unmapped on the storage array side while they are connected to the host through a Broadcom/Emulex Fibre Channel adapter that uses the lpfc driver and have I/O running.

  • The Load Balancer on the vSphere Distributed Switch (VDS) does not check the state of uplinks, but only the bandwidth. As a result, VDS might consider a physical NIC with down link state as a valid uplink and might use it for load balancing.

  • Slow or inaccessible storage might cause a virtual machine on an ESXi host to become unresponsive and to appear to be in an offline state. As a result, the ESXi host might transition to maintenance mode while the VM is still powered on and its vmx process is still running.

  • When the storage where a running virtual machine resides enters the All-Paths-Down (APD) state, the VM might become inaccessible while its vmx process continues to run. A manual restart of the hostd service then allows the ESXi host to transition to maintenance mode while the VM is still powered on and its vmx process is still running.

  • An SNMP walk (SnmpWalk) might cause the snmpd process to stop responding on hardware servers with more than 128 CPUs. As a result, you must restart the snmpd process.
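    For reference, the snmpd process can typically be restarted from the ESXi shell as follows:
        /etc/init.d/snmpd restart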

  • The VMODL object might be added to the networkInfo configuration with fields that are left unset, resulting in a vCenter Server Agent (vpxa) service failure.

  • NSX Services rely on the vSphere ESX Agent Manager (EAM) to set up permissions for accessing privileged virtual machine information necessary for their operation. When the ESXi host reboots, the permissions might be replaced by invalid values. As a result, the VMs might be denied access to the required information and they cannot perform their tasks.

  • While installing or booting an ESXi host, some systems with particular configurations might fail with a Machine Check Exception or a Non-Maskable Interrupt (Exception 2) error. The firmware logs might indicate that the failure is due to an access to an address immediately above the top of memory. The top-of-memory address can be obtained from the platform firmware memory map.

  • When the NFS 4.1 client on an ESXi host sends a write request with an old stateid, the guest OS might receive lock lost errors and the virtual machine might fail.

  • When you try to clone a virtual machine with digest disks, and those disks are in use by the Content Based Read Cache (CBRC), the digest disk might be cloned as thick provisioned instead of thin provisioned. As a result, the cloning operation might fail, because the VVol storage array does not support thick provisioning.

  • If you provision more than one network adapter card to an ESXi host, the Single Root I/O Virtualization (SR-IOV) feature might not work as expected. When interpreting the module parameter that lists the number of virtual functions to create for each physical function, the kernel might match parameter values from the list to physical functions in an unexpected order. This fix makes changes to the VMkernel Device Manager (vmkdevmgr) so that the physical functions are always considered and matched to max_vfs parameter values in PCI SBDF address order.
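    For reference, a sketch of how the virtual function counts are typically specified per driver module follows; ixgbe and the values are placeholders for your NIC driver module and desired counts:
        esxcli system module parameters set -m ixgbe -p max_vfs=8,8
        esxcli system module parameters list -m ixgbe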

  • All virtual machines on an ESXi host might show input/output operations per second (IOPS) matching the lowest IOPS limit among the virtual machines deployed on a Network File System (NFS) datastore, which overrides any higher IOPS limit set on other VMs on the same NFS datastore.

  • You might see false reports of network packet drops, because IOChain link statistics of Ethernet ports are not synchronized and might not provide accurate information.

  • A temporary connection loss between the storage array and the ESXi host might cause storage connectivity issues, even after the connection is restored. As a result, the virtual machines hosted on an NFS 4.1 datastore might become inaccessible.

  • If the initialization of a session of the Virtual Volumes storage provider (VASA) takes a long time after a reboot of an ESXi host, an attempted removal of a Virtual Volumes datastore might fail due to active bindings. This fix removes the active bindings after the initialization of a VASA session.

  • In the vSphere Web Client, if you change the storage controller selection for the CD/DVD drive to SATA or IDE with the default controller 0 at bus node 0, such as SATA(0:0), the change might not take effect on the actual configuration of the CD/DVD drive.

  • If an ESXi host becomes unresponsive to vCenter Server, the vpxa process shuts down and restarts intermittently, and a dump file of the virtual machine executable (vmx) process is created under the VM folders, the cause might be a failure of the vmx process while processing the message event [msg.mks.noGPUResourceFallback] Hardware GPU resources are not available. The virtual machine will use software rendering. The vmx process passes a message event with an invalid string to the ESXi host agent (hostd), and hostd cannot create a proper response to the GetChange request from vpxa. As a result, vpxa gets an error on deserializing the SOAP response body and ends the task, which fails the vmx process.

  • If you run a query for summary statistics and it finds no data for the requested object, hostd might fail with a log similar to this: Panic: NOT_REACHED bora/vim/hostd/statssvc/performanceManagerImpl.cpp:850. Backtrace:.

  • When performing vSAN health checks, you might see a warning similar to controller driver is VMware certified, indicating that the controller driver version in the vSAN HCL DB does not match the one in vCenter Server even if they are actually the same, and this might create a false alarm.

  • A SCSI Generic IO command passed through an IOCTL function to a SCSI processor type device, especially a Marvell SCSI processor, might fail with the error message Inappropriate ioctl for device. This patch adds support to process the Pass-Through IOCTL commands in the VMkernel file system device driver. 

  • Logins to ESXi hosts by Active Directory domain user accounts ending with a dollar sign might fail because such accounts might be treated by default as machine accounts.  

  • An ESXi host might fail with a purple screen or become unavailable during standard operations with Microsoft Cluster Service (MSCS) such as resource failover and node failover, due to a race condition that depends on a narrow timing window.

  • If a virtual machine is configured with more than one vCPU and a VMXNET3 network interface, PXE booting of that VM might take a long time, because heavy traffic combined with a small network ring size might congest the rings and delay the boot. This fix adds an Rx burst queue for packets so that when the rings are full, packets are queued and processed consecutively in groups rather than dropped. The Rx burst queue starts by default with the PXE boot process and stops after the booting is complete.

  • Delays in processing data in deferred processing mode from lower layers in the local log-structured object manager (LSOM) might cause occasional latency and temporary throughput drops in writes.

  • Hostd might stop responding when attempting to load vSAN metadata from a corrupted disk by using the QueryMissingVsanDisks() method, as it might access an invalid vSAN header.

  • If you try to configure CIM indications through the VMware vSphere API, the hardware monitoring service sfcbd might fail or create invalid pointers. You can observe the issue from the zdump core files in the /var/core/ directory, as well as from diagnostic messages on the Direct Console User Interface (DCUI) and in the log files. To avoid failures, use host profiles to manage CIM indications instead of the API.

  • When you disable all NFS datastores in a host profile, instead of ignoring the datastores, the compliance check of the host profile might identify the datastores as new and remove them from the host at remediation.

  • vSAN Health Service logs might grow at random intervals and consume all log storage space in vCenter Server for Windows.

  • Large I/O operations from virtual machines are split into multiple SCSI commands, and if the VM flags from the parent command are not copied to the child commands, you might see a drop in performance.

  • Some changes in the network configuration of an ESXi host might lead to the deletion and recreation of the ESXi kernel virtual network interface (vmknic) without making vSAN aware of the change, which might affect the transmission of heartbeats of the Cluster Monitoring, Membership, and Directory Service (CMMDS) of vSAN and lead to unexpected vSAN cluster partitions. With this fix, vSAN recognizes the recreation of the vmknic and ensures that CMMDS continues to send heartbeats.

  • After you upgrade an ESXi host to ESXi 6.0 Update 3, active I/O path information for the SCSI devices might not be available in the graphical interface, through either the vSphere Web Client or the vSphere Client.

  • When you upgrade an ESXi host from version 5.5 to 6.0 and join it to Active Directory, the ESX Admins group is not populated automatically in the Permissions tab. If you try to assign the group manually, the operation might fail with the error message Giving an error: Incorrect group. This problem might occur with any other domain user or group.

  • When you deploy an ESXi host with vSphere Auto Deploy and attach a host profile, if the host profile specifies the hostname as a fully qualified domain name (FQDN), adding the ESXi host to the vCenter Server might not work due to a Thumbprint mismatch exception.

  • When you attempt to add a new disk to a virtual machine (VM), you might not be able to save it on a different datastore than the disk on which the VM is located.

  • If the I/O traffic of a virtual machine on an ESXi host in a vSAN cluster comes from application workloads and at the same time, the host attempts to restore components to meet Failures To Tolerate (FTT) level, background traffic can interfere with the VM I/O to an extent that the VM becomes unresponsive. You can watch for high I/O latency in the VM or monitor the vCenter Server performance charts to observe the issue.

  • A virtual machine with Sparse snapshots enabled on a vSAN datastore might become unresponsive, and attempts to reset it might consistently fail, even if you attempt to manually stop the vmx process of the VM.

  • If you reset a virtual machine in a VMware vSphere Distributed Resource Scheduler (DRS) cluster several times in a row, the task might fail with an error message Available memory resources in the parent resource pool are insufficient for the operation.

  • An ESXi host in a vSAN cluster might fail with a purple screen due to a race condition in disk initialization during booting or recreation of a disk group.

  • In a vSAN stretched cluster, if all the hosts in a fault domain are no longer part of the cluster for a reason such as planned maintenance, and these hosts rejoin the cluster later, the host in the active fault domain that has the backup primary node role might be partitioned from the cluster for a short period. If there are virtual machines running on that host, they might lose connectivity, and the high availability service might restart those virtual machines on another host that is still part of the vSAN cluster.
    NOTE: For the fix to take effect, all hosts in a vSAN cluster must be upgraded to this patch.

  • In a vSAN stretched cluster, the cluster primary host is elected from the preferred fault domain; if no hosts are available there, the primary node is elected from the secondary fault domain. If a node from the preferred fault domain later joins the cluster, that host is elected as the primary node through a process called cluster takeover. If for some reason the cluster takeover does not complete, the cluster might get into a bad state and VMs on it might stop responding.
    NOTE: For the fix to take effect, all hosts in a vSAN stretched cluster must be upgraded to this patch.

  • The Host disk-layer aggregate stats full graphs view under the vSAN Disk (deep-dive) tab might report wrong latency values, because the latency values are generated in nanoseconds, but wrongly interpreted as milliseconds.

  • If you change the name of a vSAN datastore, extract the host profile, and attempt to remediate an ESXi host against an upgrade profile, the operation might fail because vSAN might not be enabled before the new name is verified, and it might throw errors similar to Cannot apply the host configuration and ERROR Stack: MethodFault.summary Unable to set datastore name.

  • This fix optimizes concurrent delete and write jobs during vSAN deduplication to prevent a drop in ESXi host performance.

  • In a host decommission operation with RAID5 and RAID6 objects in ensureAccessibility or Full data evacuation mode, vSAN transfers data of a whole object even if a single component is affected. This might cause unnecessary movement of data across the cluster. This fix attempts to reconfigure or rebuild only affected components.

  • This fix adds to a previous fix from Patch Release ESXi600-201611001, that prevented space leaks in vSAN datastores, to reclaim leaked space and make it available.

  • When an ESXi host is attached to a large number of storage LUNs, the ISO installation of ESXi and the configuration of stateless caching through vSphere Auto Deploy take some time to scan all disks, which might delay the ESXi deployment.

  • Under heavy load, when multiple concurrent VVol management operations are in progress, such as taking snapshots, the VVol management daemon of an ESXi host might fail and make VVol management operations temporarily unavailable. This failure of the VVol management daemon might also make management of the host temporarily unavailable and break its connection to the vCenter Server.

  • Entering maintenance mode for an ESXi host might time out after 30 minutes even if the specified timeout is larger than 30 minutes.

  • If you use the ESXCLI command esxcli network nic down -n <vmnic> to disable a vmnic, the SNMP link status with OID 1.3.6.1.2.1.2.2.1.7.4 continues to show that the NIC is down even after you enable the NIC with the command esxcli network nic up -n <vmnic>.
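    For example, you can observe the behavior from a management station with the net-snmp tools while toggling the NIC; a sketch follows, where vmnic0, esxi-host, and the community string public are placeholders:
        esxcli network nic down -n vmnic0
        snmpget -v2c -c public esxi-host 1.3.6.1.2.1.2.2.1.7.4
        esxcli network nic up -n vmnic0
        snmpget -v2c -c public esxi-host 1.3.6.1.2.1.2.2.1.7.4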

  • Third party CIM providers or agents might cause memory corruptions that lead to destabilization of the kernel or drivers of an ESXi host and the host might fail with a purple diagnostic screen. These memory corruptions are usually visible at a later point, not at the exact time of the corruption, and this fix hardens the IPMI driver to recover from or avoid such destabilization states. Exception 13 is a general protection fault error and not all Exception 13 errors would indicate this issue.

  • If you use ESXi 6.0 and vCenter Server 6.5, attempts to add an instant-clone desktop pool might fail with an error similar to The resource <number> is in use. This issue occurs because of an internal design incompatibility in port binding between ESXi 6.0 and vCenter Server 6.5 when instant clone deployment is used.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.

ESXi hosts can be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command. For details, see the vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
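For example, after copying the patch bundle to a datastore on the host, commands of the following form can typically be used from the ESXi Shell; the datastore path and bundle file name are placeholders:

    esxcli software vib update -d /vmfs/volumes/datastore1/<patch-bundle>.zip
    esxcli software profile update -d /vmfs/volumes/datastore1/<patch-bundle>.zip -p ESXi-6.0.0-20171104001-no-tools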