VMware ESXi 6.0, Patch ESXi600-201509201-UG: Updates esx-base



Article ID: 334423


Products

VMware vSphere ESXi

Issue/Introduction

Release date: Sep 10, 2015

Patch Category: Bugfix
Patch Severity: Critical
Build: For build information, see KB 2124715.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included: VMware:esx-base:6.0.0-1.17.3029758
PRs Fixed: 1370778, 1374957, 1378321, 1397891, 1398409, 1412807, 1429541, 1431568, 1433014, 1438192, 1442807, 1443749, 1445823, 1446166, 1446597, 1447259, 1447333, 1453060, 1453162, 1454166, 1457323, 1459527, 1460217, 1460630, 1464230, 1466235, 1468905, 1474510, 1475186, 1481577, 1488335, 1491513
Related CVE numbers: N/A


Environment

VMware vSphere ESXi 6.0

Resolution

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

  • On HP systems with ESXi 6.0, you might see excessive logging of VmkAccess messages in vmkernel.log for the following system commands that are executed during runtime:
    • esxcfg-scsidevs
    • localcli storage core path list
    • localcli storage core device list

    Excessive log messages similar to the following are logged in the VmkAccess logs:

    cpu7:36122)VmkAccess: 637: localcli: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)
    cpu7:36122)VmkAccess: 922: VMkernel syscall invalid (1025)
    cpu7:36122)VmkAccess: 637: localcli: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)
    cpu7:36122)VmkAccess: 922: VMkernel syscall invalid (1025)
    cpu0:36129)VmkAccess: 637: esxcfg-scsidevs: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)

  • This release introduces a new VMX option, sched.cpu.latencySensitivity.sysContexts, to address issues on vSphere 6.0 where most system contexts are still worldlets. For each virtual machine, the scheduler uses the sched.cpu.latencySensitivity.sysContexts option to automatically identify the set of system contexts that might be involved in latency-sensitive workloads, and gives each of these system contexts exclusive affinity to one dedicated physical core. The option value denotes how many exclusive cores a low-latency VM can get for its system contexts.
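    As a sketch, the option is set per virtual machine in its .vmx configuration file while the VM is powered off. The core count below is an illustrative assumption, not a value prescribed by this patch; sched.cpu.latencySensitivity must already be set to high for the sysContexts option to be meaningful:

    ```
    # Illustrative .vmx entries for a latency-sensitive VM
    # (the value "2" is an example, not a recommendation):
    sched.cpu.latencySensitivity = "high"
    sched.cpu.latencySensitivity.sysContexts = "2"
    ```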

  • A Linux guest OS booted on EFI firmware might fail to respond to the keyboard and mouse input if any motion of the mouse occurs during the short window of EFI boot time.

  • When the PCI information for a device is collected for passthrough, error reporting for that device is disabled.

    This issue is resolved in this release by providing the VMkernel boot option pcipDisablePciErrReporting, which enables PCI passthrough devices to report errors. By default, the option is set to TRUE, meaning that error reporting is disabled.
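    Assuming the new boot option is exposed through the standard VMkernel settings interface (the option name comes from this patch; the command shape is generic esxcli usage and is an assumption), re-enabling error reporting might look like:

    ```shell
    # Sketch, run on the ESXi host (exposure via esxcli is an assumption).
    # Show the current value of the boot option:
    esxcli system settings kernel list -o pcipDisablePciErrReporting

    # Set it to FALSE so that PCI passthrough devices report errors,
    # then reboot the host for the boot option to take effect:
    esxcli system settings kernel set -s pcipDisablePciErrReporting -v FALSE
    ```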

  • The vFlash cache metric counters such as FlashCacheIOPs, FlashCacheLatency, FlashCacheThroughput might not be available when CBT is enabled on a virtual disk. Error messages similar to the following might be logged in the stats.log file:

    xxxx-xx-xxTxx:xx:xx.200Z [xxxxxxxx error 'Statssvc.vim.PerformanceManager'] CollectVmVdiskStats : Failed to get VFlash Cache stats for vscsi id scsi0:0 for vm 3
    xxxx-xx-xxTxx:xx:xx.189Z [xxxxxxxx error 'Statssvc.vim.PerformanceManager'] GetVirtualDiskVFCStats: Failed to get VFlash Cache stat values for vmdk scsi0:0. Exception VFlash Cache filename not found!

  • Applying a host profile to a stateless ESXi host with a large number of storage LUNs might cause the host to take a long time to reboot when you enable stateless caching with esx as the first disk argument. This happens when you apply the host profile manually or during a reboot of the host.

  • After you upgrade from ESXi 5.5 to 6.0, attempts to add a vmnic to a VMware ESXi host connected to a vSphere Distributed Switch (VDS) might fail. The issue occurs when ipfix is enabled and IPv6 is disabled.

    In the /var/log/vmkernel.log file on the affected ESXi host, you see entries similar to:

    cpu10:xxxxx opID=xxxxxxxx)WARNING: Ipfix: IpfixActivate:xxx: Activation failed for 'DvsPortset-1': Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)WARNING: Ipfix: IpfixDVPortParamWrite:xxx: Configuration failed for switch DvsPortset-1 port xxxxxxxx : Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)WARNING: NetDVS: xxxx: failed to init client for data com.vmware.etherswitch.port.ipfix on port xxx
    cpu10:xxxxx opID=xxxxxxxx)WARNING: NetPort: xxxx: failed to enable port 0x4000002: Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)NetPort: xxxx: disabled port 0x4000002
    cpu10:xxxxx opID=xxxxxxxx)Uplink: xxxx: vmnic2: Failed to enable the uplink port 0x4000002: Unsupported address family

  • You are unable to view the real-time network performance graph for a virtual machine configured with a VMXNET3 adapter in the VMware vSphere Client 6.0, because the option is not available in the Switch to drop-down list.

  • An ESXi 6.0 host might fail with a purple diagnostic screen when multiple vSCSI filters are attached to a VM disk. The purple diagnostic screen or backtrace contains entries similar to the following:

    cpu24:nnnnnn opID=nnnnnnnn)@BlueScreen: #PF Exception 14 in world 103492:hostd-worker IP 0x41802c2c094d addr 0x30
    PTEs:0xnnnnnnnn;0xnnnnnnnnnn;0x0;
    cpu24:nnnnnn opID=nnnnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 21:06:32:38.296
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_GetFilterPrivateData@vmkernel#nover+0x1 stack: 0xnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_IssueInternalCommand@vmkernel#nover+0xc3 stack: 0xnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FileSyncRead@<None>#<None>+0xb1 stack: 0x0
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_DigestRecompute@<None>#<None>+0xnnn stack: 0xnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FilterDigestRecompute@<None>#<None> +0x36 stack: 0x20
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x322 stack: 0xnnnnnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0xef stack: 0x41245111df10
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn] User_UWVMKSyscallHandler@<None>#<None>+0x243 stack: 0xnnnnnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0xnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry@vmkernel#nover+0x64 stack: 0x0

  • When some VIBs are installed on the system, esxupdate constructs a new image in /altbootbank and changes the bootstate in the /altbootbank boot.cfg file to updated. When a live installable VIB is installed, the system saves the configuration change to /altbootbank. A stage operation deletes the contents of /altbootbank unless you perform a remediate operation after the stage operation, so the VIB installation might be lost if you reboot the host after a stage operation.

  • vSphere APIs for I/O Filtering (VAIO) provide a framework that allows third parties to create software components called I/O filters. The filters can be installed on ESXi hosts and can offer additional data services to virtual machines by processing I/O requests that move between the guest operating system of a virtual machine and virtual disks.

  • Attempts to launch virtual machines with a higher display resolution and a multiple-monitor setup from VDI using PCoIP solutions might fail. The VMs fail on launch and go into a powered-off state. In the /var/log/vmkwarning.log file, you see entries similar to:

    cpu3:xxxxxx)WARNING: World: vm xxxxxx: 12276: vmm0:VDI-STD-005:vmk: vcpu-0:p2m update buffer full
    cpu3:xxxxxx)WARNING: VmMemPf: vm xxxxxx: 652: COW copy failed: pgNum=0x3d108, mpn=0x3fffffffff
    cpu3:xxxxxx)WARNING: VmMemPf: vm xxxxxx: 2626: PhysPageFault failed Failure: pgNum=0x3d108, mpn=0x3fffffffff
    cpu3:xxxxxx)WARNING: UserMem: 10592: PF failed to handle a fault on mmInfo at va 0x60ee6000: Failure. Terminating...
    cpu3:xxxxxx)WARNING: World: vm xxxxxx: 3973: VMMWorld group leader = 255903, members = 1
    cpu7:xxxxxx)WARNING: World: vm xxxxxx: 3973: VMMWorld group leader = 255903, members = 1
    cpu0:xxxxx)WARNING: World: vm xxxxxx: 9604: Panic'd VMM world being reaped, but no core dumped.


  • The new Host Profile Plugin is now available to collect the DEBUG log and enable the Trace Log of the ESXi host profile engine when the host is booted through Active Directory.

  • When multiple virtual machines share storage space, the vSphere Client summary page might display incorrect values for the following:

    • Not-shared Storage in the VM Summary Page
    • Provisioned Space in the datastore Summary Page
    • Used Space in the VM tab of the host

  • An ESXi host might stop responding and the virtual machines become inaccessible. Also, the ESXi host might lose connection to vCenter Server due to a deadlock during storage hiccups on Non-ATS VMFS datastores.

  • When resetting a large number of virtual machines with an NVIDIA GRID vGPU device at the same time, some of the virtual machines might fail to reboot. A reboot error similar to the following might be displayed:

    VMIOP: no graphics device is available for vGPU grid_k100.

  • Enabling vMotion on vmk10 or higher might cause vmk1 to have vMotion enabled on reboot of the ESXi host. This issue can cause excessive traffic over vmk1 and result in network issues.
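    After patching, you can verify which VMkernel interfaces actually carry the vMotion tag. The interface names below are illustrative, matching the vmk1/vmk10 scenario described above:

    ```shell
    # Sketch: check and correct vMotion tagging (interface names are examples).
    # Show the tags currently assigned to vmk1:
    esxcli network ip interface tag get -i vmk1

    # If vMotion was enabled on vmk1 unintentionally, remove the tag
    # and confirm it is set on the intended interface instead:
    esxcli network ip interface tag remove -i vmk1 -t VMotion
    esxcli network ip interface tag add -i vmk10 -t VMotion
    ```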

  • Virtual machines using the VMXNET3 virtual adapter might fail when attempting to boot from iPXE (open-source boot firmware).

  • When ServerView CIM Provider and Emulex CIM Provider are installed on the same ESXi host, the Emulex CIM Provider (sfcb-emulex_ucn) might fail to respond resulting in failure to monitor hardware status.

  • Attempts to modify the storage policies of a powered-on virtual machine created from linked clones might fail in vCenter Server with an error message similar to the following:

    The scheduling parameter change failed.

  • Host profiles become non-compliant after a simple change to the SNMP syscontact or syslocation parameter. The issue occurs because the SNMP host profile plug-in applies only a single value to all hosts attached to the host profile. An error message similar to the following might be displayed:

    SNMP Agent Configuration differs

    This issue is resolved in this release by enabling per-host value settings for certain parameters such as syslocation, syscontact, v3targets, v3users, and engineid.
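    With per-host values enabled, each host can carry its own settings through the standard esxcli SNMP namespace. The contact and location strings below are made-up examples:

    ```shell
    # Sketch: set host-specific SNMP values on each ESXi host
    # (contact/location strings are illustrative).
    esxcli system snmp set --syscontact "ops@example.com" \
                           --syslocation "Rack 12, DC-West"

    # Verify the agent configuration:
    esxcli system snmp get
    ```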

  • When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.

  • Attempts to migrate a secondary VM enabled with fault tolerance might fail and the VM might become unresponsive under heavy workload.

  • Unnecessary periodic device and file system rescan triggered by vSAN might cause the ESXi host and virtual machines within the environment to randomly stop responding.

  • When you clone a virtual machine across different storage containers, the VMId of the source Virtual Volume (VVOL) is taken as the initial value for the cloned VVOL VMID.

  • When applying host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC gets tagged as management vmknic.

  • VMFS volume on an ESXi host might remain locked due to failed metadata operations. An error message similar to the following is observed in vmkernel.log file:

    WARNING: LVM: 12976: The volume on the device naa.50002ac002ba0956:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover.

  • When an All Paths Down (APD) event occurs, LUNs connected to ESXi might remain inaccessible after paths to the LUNs recover. You see the following events in sequence in the /var/log/vmkernel.log:

    1. Device enters APD.
    2. Device exits APD.
    3. Heartbeat recovery and filesystem operations on the device fail due to not found.
    4. The APD timeout expires despite the fact that the device exited APD previously.

  • An ESXi host might intermittently lose connectivity, and the e1000e virtual NIC might get reset. An All Paths Down (APD) condition on NFS volumes might also be observed. An error message similar to the following is written to the vmkernel.log file:

    packets completion seems stuck, issuing reset

  • Slow NFS storage performance is observed on virtual machines running on VSA-provisioned NFS storage. This is due to delayed acknowledgements from the ESXi host to NFS read responses.

    This issue is resolved in this release by disabling delayed acks for LRO TCP packets.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see Installing and Administering VMware vSphere Update Manager.

ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated by using the image profile and the esxcli software profile command. For details, see vSphere Command-Line Interface Concepts and Examples and the vSphere Upgrade Guide.
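A manual command-line installation might look like the following sketch. The datastore path and bundle file name are placeholders, not the actual download name for this patch; note that the patch requires a host reboot and VM migration or shutdown, as stated above:

```shell
# Sketch: manual patch installation (paths and file names are placeholders).
# Put the host into maintenance mode first:
esxcli system maintenanceMode set --enable true

# Install the VIBs from the offline bundle, then reboot the host:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi600-patch.zip
reboot
```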