VMware ESXi 6.5, Patch ESXi650-201703401-BG: Updates esx-base, vsanhealth, vsan VIBs


Article ID: 326741



VMware vSphere ESXi


Release date: Mar 9, 2017



Patch Category: Bugfix
Patch Severity: Critical
Build: For build information, see KB 2148989.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_esx-base_6.5.0-0.14.5146846
  • VMware_bootbank_vsan_6.5.0-0.14.5146846
  • VMware_bootbank_vsanhealth_6.5.0-0.14.5146846
PRs Fixed: 1759763, 1763297, 1764075, 1767820, 1768709, 1772892, 1777234, 1778524, 1782130, 1797111, 1798703
Related CVE numbers: N/A



VMware vSphere ESXi 6.5


Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

  • If you hot-add a child disk to a virtual machine, and the path to the parent disk differs from the virtual machine's home directory, the virtual machine unexpectedly stops working, powers off, and you cannot power it on again. The issue does not occur if you hot-add a child disk without specifying a path, or if you specify only the datastore to which the child disk belongs.
  • When the dump file is set by using esxcfg-dumppart or other commands multiple times in parallel, an ESXi host might stop responding and display a purple diagnostic screen with entries similar to the following, as a result of a race condition while the dump block map is freed:

    @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4907 - Corruption in dlmalloc
    Code start: 0xnnnnnnnnnnnn VMK uptime: 234:01:32:49.087
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0x37e stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic_NoSave@vmkernel#nover+0x4d stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DLM_free@vmkernel#nover+0x6c7 stack: 0x8
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Heap_Free@vmkernel#nover+0xb9 stack: 0xbad000e
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Dump_SetFile@vmkernel#nover+0x155 stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SystemVsi_DumpFileSet@vmkernel#nover+0x4b stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x41f stack: 0x4fc
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0x394 stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@<None>#<None>+0xb4 stack: 0xffb0b9c8
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0
  • Tools in the guest operating system might send unmap requests that are not aligned to the VMFS unmap granularity. Such requests are not passed to the storage array for space reclamation. As a result, you might not be able to free space on the storage array.
  • Slow device discovery can prevent the bootbank/scratch partition from being mounted because the LUN is not yet available. A setup-specific issue, for instance a slow login to target ports or a fabric delay, might cause the slow discovery. To compensate for possible delays, you can now configure the device path discovery wait time. For more information, see Knowledge Base Article 2149444.
  • A newly added physical NIC might not have an entry in the esx.conf file even after a host reboot, and as a result the NIC uses the virtual MAC address 00:00:00:00:00:00 during communication.
  • Attempts to cancel snapshot creation for a VM whose VMDKs are on Virtual Volumes datastores might result in virtual disks not getting rolled back properly and consequent data loss. This situation occurs when a VM has multiple VMDKs with the same name and these come from different Virtual Volumes datastores.
  • When a VMDK is configured with I/O filters and the guest OS issues a SCSI unmap command, the command might be reported as successful even though one of the I/O filters fails the operation. As a result, the state of the VMDK and the filter diverge, which can result in data corruption.
  • The ESXi SNMP agent crashes randomly and triggers false host reboot alarms in the SNMP monitoring station. For stateful ESXi hosts the core dump is located in the /var/core directory, and the syslog.log file contains out of memory error messages. For stateless ESXi hosts, see Knowledge Base Article 1032051. The issue results in loss of monitoring of the host.
  • In an environment with Nutanix NFS storage, the secondary Fault Tolerance VM fails to take over when the primary Fault Tolerance VM is down. The issue occurs when an ESXi host does not receive a response to a CREATE call within the timeout period. After you apply this patch, you can configure the CreateRPCTimeout parameter by running the following command:

    esxcfg-advcfg -s 20 /NFS/CreateRPCTimeout

    Note: As the host profile does not capture the CreateRPCTimeout parameter, the value of CreateRPCTimeout is not persistent in stateless environments.
  • A VM with SEsparse-based snapshots might hang during I/O operations when running a specific type of I/O workload in multiple threads, because of interlocking on SEsparse metadata resources.
  • In order to update the last seen time stamp for each LUN on an ESXi host, a process has to acquire a lock on the /etc/vmware/lunTimestamps.log file. The lock is held for longer than necessary by each process. If too many such processes try to update the /etc/vmware/lunTimestamps.log file, they might cause lock contention on this file. If hostd is one of the processes trying to acquire the lock, the ESXi host might get disconnected from vCenter Server or become unresponsive, with lock contention error messages (on the lunTimestamps.log file) in the hostd logs. You might see an error message similar to the following:

    Error interacting with configuration file /etc/vmware/lunTimestamps.log: Timeout while waiting for lock, /etc/vmware/lunTimestamps.log.LOCK, to be released. Another process has kept this file locked for more than 30 seconds. The process currently holding the lock is <process_name>(<PID>). This is likely a temporary condition. Please try your operation again.

    • process_name is the process or service currently holding the lock on /etc/vmware/lunTimestamps.log, for example smartd, esxcfg-scsidevs, or localcli.
    • PID is the process ID of that process or service.
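The lunTimestamps.log contention above follows a common pattern: an advisory file lock held longer than the work it protects. A minimal Python sketch of the corrected pattern, holding the lock only around the actual write (the log format and lock-file name here are illustrative; ESXi's actual implementation is not public):

```python
import fcntl
import time

def update_lun_timestamp(log_path, lun_id):
    """Record a LUN's last-seen time, holding the advisory lock
    only for the duration of the write (illustrative sketch)."""
    lock_path = log_path + ".LOCK"
    with open(lock_path, "w") as lock_f:
        # Blocks while another process holds the exclusive lock.
        fcntl.flock(lock_f, fcntl.LOCK_EX)
        try:
            with open(log_path, "a") as log:
                log.write("%s %d\n" % (lun_id, int(time.time())))
        finally:
            fcntl.flock(lock_f, fcntl.LOCK_UN)
```

Keeping the critical section this small is what prevents the 30-second lock-wait timeout seen in the error message above.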

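The unmap-granularity issue earlier in the list is, at its core, a range-alignment problem: the array can only reclaim whole granularity-sized blocks, so a misaligned guest request covers no complete block and frees nothing. A short illustrative sketch of the alignment arithmetic (not ESXi code; the 1 MiB granularity is an assumed example value):

```python
def align_unmap(start, length, granularity):
    """Shrink an unmap request to the largest inner range aligned to
    the reclaim granularity; returns (start, length), or None if the
    request covers no whole block and nothing can be reclaimed."""
    aligned_start = -(-start // granularity) * granularity   # round up
    aligned_end = (start + length) // granularity * granularity  # round down
    if aligned_end <= aligned_start:
        return None
    return aligned_start, aligned_end - aligned_start

MiB = 1024 * 1024  # assumed example granularity, 1 MiB
```

For example, a 1 MiB request starting at offset 4096 covers no whole 1 MiB block, so `align_unmap(4096, MiB, MiB)` returns None and no space would be reclaimed.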
Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation.
ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIBs by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
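As a sketch, assuming the patch ZIP has been uploaded to a datastore on the host (the datastore path and the image profile name below are placeholders, not values from this article):

```shell
# Install the VIBs directly from the offline depot ZIP:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi650-201703401.zip

# Or apply the full image profile: first list the profiles the depot
# contains, then update to the chosen one.
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi650-201703401.zip
esxcli software profile update -p <image-profile-name> -d /vmfs/volumes/datastore1/ESXi650-201703401.zip
```

Because this patch requires a host reboot, place the host in maintenance mode before installing and reboot afterward.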