Alert triggered for host configuration: "An error occurred during host configuration: /usr/sbin/esxupdate returned with exit status: 10"



Article ID: 431834


Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms

  • After accepting the prompt with the description "Apply Solution", the following task runs repeatedly in vCenter:
    “An error occurred during host configuration: /usr/sbin/esxupdate returned with exit status: 10”

    The task is initiated by:
    VSPHERE.LOCAL\com.vmware.vr-sa-59c4b753-4c02-####-####-##########

  • All ESXi hosts reporting the frequent alerts were confirmed to have the vmware-hbr-agent VIB installed, verified using the command below:

    #esxcli software vib list | grep hbr

    Sample output:
    vmware-hbr-agent    8.0.3-0.0.24299506    VMware  VMwareCertified   2025-01-02    host
    vmware-hbrsrv       8.0.3-0.0.23305546    VMware  VMwareCertified   2025-01-02    host

        /var/run/log/vpxa.log indicates that the install VIB task on the ESXi host was initiated by Patch Manager:

 In(166) Vpxa[16086607] [Originator@6876 sub=vpxLro opID=fea4a3d3-ae17-4ab9-a71b-d4ee5fcf494d-HMSINT-2610-c2-e6] [VpxLRO] -- BEGIN task-58359 -- patchManager -- vim.host.PatchManager.InstallV2 -- 520ecc61-86db-d798-3e6f-d93c27caaa8f
Wa(164) Vpxa[16083193] [Originator@6876 sub=Vmomi opID=fea4a3d3-ae17-4ab9-a71b-d4ee5fcf494d-HMSINT-2610-c2-e6] VMOMI activation LRO failed; <<520ecc61-86db-d798-3e6f-d93c27caaa8f, <TCP '127.0.0.1 : 8089'>, <TCP '127.0.0.1 : 52183'>>, patchManager, vim.host.PatchManager.InstallV2, <vpxapi.version.v8_0_3_0, official, 8.0.3.0>, (null)>, N3Vim5Fault19PlatformConfigFault9ExceptionE(Fault cause: vim.fault.PlatformConfigFault
 Wa(164) Vpxa[16083179] -->[context]zKq7AVICAgAAACWJdAEPdnB4YQAA48lHbGlidm1hY29yZS5zbwCBT4EdAWxpYnZpbS10eXBlcy5zbwCBk/wdAQJkgx52cHhhAALTxR0CzcQkAue1HAISEB0Cx5UcAtJaHQAe2ywA4P8sADtQUgNSeABsaWJwdGhyZWFkLnNvLjAABD9SD2xpYmMuc28uNgA=[/context]
 In(166) Vpxa[16083193] [Originator@6876 sub=vpxLro opID=fea4a3d3-ae17-4ab9-a71b-d4ee5fcf494d-HMSINT-2610-c2-e6] [VpxLRO] -- FINISH task-58359
 Er(163) Vpxa[16083193] [Originator@6876 sub=Default opID=fea4a3d3-ae17-4ab9-a71b-d4ee5fcf494d-HMSINT-2610-c2-e6] [VpxLRO] -- ERROR task-58359 -- 520ecc61-86db-d798-3e6f-d93c27caaa8f -- patchManager -- vim.host.PatchManager.InstallV2: :vim.fault.PlatformConfigFault
 Er(163) Vpxa[16083179] --> Result:
 Er(163) Vpxa[16083179] --> (vim.fault.PlatformConfigFault) {
 Er(163) Vpxa[16083179] -->    faultCause = (vmodl.MethodFault) null,
 Er(163) Vpxa[16083179] -->    faultMessage = <unset>,
 Er(163) Vpxa[16083179] -->    text = "/usr/sbin/esxupdate returned with exit status: 10"
 Er(163) Vpxa[16083179] -->    msg = "An error occurred during host configuration: /usr/sbin/esxupdate returned with exit status: 10."
 Er(163) Vpxa[16083179] --> }
 Er(163) Vpxa[16083179] --> Args:
 Er(163) Vpxa[16083179] -->
 Er(163) Vpxa[16083179] --> Arg metaUrls:
 Er(163) Vpxa[16083179] --> (string) []
 Er(163) Vpxa[16083179] --> Arg bundleUrls:
 Er(163) Vpxa[16083179] --> (string) []
 Er(163) Vpxa[16083179] --> Arg vibUrls:
 Er(163) Vpxa[16083179] --> (string) [
 Er(163) Vpxa[16083179] -->    "https://VCenter-IP:443/vib/vmware-hbr-agent.vib"

        /var/log/vmware/vpxd/vpxd.log indicates that the task to verify VIB status on the host failed even though the VIBs are already installed on the ESXi host:

 info vpxd[1366293] [Originator@6876 sub=vpxLro opID=ba68e846-fef6-4fe5-8d1c-f35edd22dbc5-HMSINT-2602-16] [VpxLRO] -- BEGIN task-7337829 -- patchManager-1810759 -- vim.host.PatchManager.InstallV2 -- 52d61923-21ef-cf55-eca8-4a74d4f1bd42(522f19a8-a6ac-e203-9767-636c45147a5f)
warning vpxd[1020126] [Originator@6876 sub=Vmomi opID=ba68e846-fef6-4fe5-8d1c-f35edd22dbc5-HMSINT-2602-16] VMOMI activation LRO failed; <<52d61923-21ef-cf55-eca8-4a74d4f1bd42, <TCP '127.0.0.1 : 8085'>, <TCP '127.0.0.1 : 44166'>>, patchManager-1810759, vim.host.PatchManager.InstallV2, <vim.version.v8_0_3_0, official, 8.0.3.0>, (null)>, N3Vim5Fault19PlatformConfigFault9ExceptionE(Fault cause: vim
.fault.PlatformConfigFault
error vpxd[1020126] [Originator@6876 sub=Default opID=ba68e846-fef6-4fe5-8d1c-f35edd22dbc5-HMSINT-2602-16] [VpxLRO] -- ERROR task-7337829 -- 52d61923-21ef-cf55-eca8-4a74d4f1bd42(522f19a8-a6ac-e203-9767-636c45147a5f) -- patchManager-1810759 -- vim.host.PatchManager.InstallV2: :vim.fault.PlatformConfigFault
--> Result:
--> (vim.fault.PlatformConfigFault) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>,
-->    text = "/usr/sbin/esxupdate returned with exit status: 10"
-->    msg = "An error occurred during host configuration: /usr/sbin/esxupdate returned with exit status: 10."
--> }
--> Args:
-->
--> Arg metaUrls:
--> (string) []
--> Arg bundleUrls:
--> (string) []
--> Arg vibUrls:
--> (string) [
-->    "https://VCenter-IP:443/vib/vmware-hbr-agent.vib"
In(14) esxupdate[18677736]: Opening https://VCenter-IP:443/vib/vmware-hbr-agent.vib for download
In(14) esxupdate[18677736]: Skipping installed VIBs VMware_bootbank_vmware-hbr-agent_8.0.3-0.0.24299508

Environment

VMware Live Recovery 9.X

Cause

The install VIBs task on the VMware ESXi host failed due to memory exhaustion in the VMware lifecycle process while loading cached VIBs. Prior to the failure, multiple VIB verification errors with duplicate names were recorded, indicating conflicts in the cached VIB state stored in the lifecycle directory on the boot LUN. Storage analysis further revealed that each host was presented with four boot LUNs: only one device per host was marked Is Boot Device: true, while the remaining LUNs belonged to other hosts due to improper zoning or LUN masking. This misconfiguration caused ESXi to detect multiple lifecycle directories during VIB verification, leading to duplicate metadata conflicts, increased memory consumption, and eventual failure of the VIB verification and installation workflow.

 

     /var/log/esxupdate.log indicates that a memory allocation failure occurred during the verification phase of a cached VIB operation:

 Er(11) esxupdate[9110700]: An esxupdate error exception was caught:
 Er(11) esxupdate[9110700]: Traceback (most recent call last):
 Er(11) esxupdate[9110700]: File "/usr/sbin/esxupdate", line 378, in main
 Er(11) esxupdate[9110700]: cmd.Run()
 Er(11) esxupdate[9110700]: File "/lib64/python3.11/site-packages/vmware/esx5update/Cmdline.py", line 161, in Run
 Er(11) esxupdate[9110700]: res = t.InstallVibsFromSources(viburls, metaurls, None,
 Er(11) esxupdate[9110700]: File "/lib64/python3.11/site-packages/vmware/esximage/Transaction.py", line 943, in InstallVibsFromSources
 Er(11) esxupdate[9110700]: inst, removed, exitstate = self._installVibs(curprofile,
 Er(11) esxupdate[9110700]: File "/lib64/python3.11/site-packages/vmware/esximage/Transaction.py", line 1222, in _installVibs
 Er(11) esxupdate[9110700]: _, exitstate = self._validateAndInstallProfile(
 Er(11) esxupdate[9110700]: File "/lib64/python3.11/site-packages/vmware/esximage/Transaction.py", line 1456, in _validateAndInstallProfile
 Er(11) esxupdate[9110700]: resVibCache = ReservedVibCache()
 Er(11) esxupdate[9110700]: File "/lib64/python3.11/site-packages/vmware/esximage/ImageManager/HostSeeding.py", line 1137, in __init__
 Er(11) esxupdate[9110700]: self._loadCachedVibs()
 Er(11) esxupdate[9110700]: File "/lib64/python3.11/site-packages/vmware/esximage/ImageManager/HostSeeding.py", line 1206, in _loadCachedVibs
 Er(11) esxupdate[9110700]: for vibId in os.listdir(esxioCachePath):
 Er(11) esxupdate[9110700]: OSError: [Errno 12] Cannot allocate memory: '/vmfs/volumes/625d44be-##########-#####-##########/vmware/lifecycle/hostSeed/esxioCachedVibs'

/var/log/hostd.log indicates that multiple duplicate VIB instances were detected, causing the VIB verification process to fail:

In(166) Hostd[2099010]: [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 6544552 : Issue detected on ESXI_host_name in ha-datacenter: DC: ###: Duplicate name 'VMW_bootbank_vmksdhci-esxio_1.0.3-3vmw.803.0.0.24022510.vib' entry in cache.
In(166) Hostd[2099020]: [Originator@6876 sub=Vimsvc.TaskManager opID=76a6cd4f-d66b-4748-86a3-03c79723b17d-HMSINT-2605-82-cd-74c4 sid=523ce760 user=vpxuser:VSPHERE.LOCAL\com.vmware.vr-sa-59c4b753-####-####-####-##########] Task Completed : haTask-ha-host-vim.host.PatchManager.InstallV2-14684313 Status error
 In(166) Hostd[2099020]: [Originator@6876 sub=Hostsvc.PatchManager opID=76a6cd4f-d66b-4748-86a3-03c79723b17d-HMSINT-2605-82-cd-74c4 sid=523ce760 user=vpxuser:VSPHERE.LOCAL\com.vmware.vr-sa-59c4b753-####-####-####-##########] PatchManager InstallV2 task returned: 10

       Multiple boot LUNs mapped to a single VMware ESXi host were identified from the output of the following command:

         #esxcli storage vmfs extent list

Volume Name                                   VMFS UUID                            Extent Number  Device Name                          Partition
--------------------------------------------  -----------------------------------  -------------  -----------------------------------  ---------
OSDATA-62024646-########-####-############    62024646-########-####-############              0  naa.#############################33          7
OSDATA-6203891d-########-####-############    6203891d-########-####-############              0  naa.#############################34          7
OSDATA-625d44be-########-####-############    625d44be-########-####-############              0  naa.#############################35          7
OSDATA-625e84b6-########-####-############    625e84b6-########-####-############              0  naa.#############################37          7

Resolution

To resolve the issue caused by boot LUNs presented from other ESXi hosts, the additional boot LUNs must be detached from the host.

Step 1: Identify the Valid Boot LUN

  • Log in to each ESXi host via SSH.
  • Execute the following command to list all storage devices:

    #esxcli storage core device list

naa.XXXXXXXXXXXXXXXXXXXX:
   Display Name: HITACHI Fibre Channel Disk (naa.XXXXXXXXXXXXXXXXXXXX)
   Has Settable Display Name: true
   Size: 51200
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXXXXXX
   Vendor: HITACHI
   Model: OPEN-V
   Revision: 5001
   SCSI Level: 4
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters: VAAI_FILTER
   VAAI Status: supported
   Other UIDs: vml.#####################################
   Is Shared Clusterwide: true
   Is SAS: false
   Is USB: false
   Is Boot Device: true        -------> true indicates this is the boot LUN
   Device Max Queue Depth: 64
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

 

Identify the device where the parameter Is Boot Device: true is set. This LUN represents the active and valid boot device for that specific host.
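The boot-device check above can be scripted. This is a minimal sketch (not part of the original article) that parses `esxcli storage core device list` output read on stdin and prints the NAA ID of the device whose block contains Is Boot Device: true:

```shell
# boot_lun: read `esxcli storage core device list` output on stdin and print
# the NAA ID of the device marked "Is Boot Device: true".
boot_lun() {
  awk '/^naa\./               { dev = $1; sub(/:$/, "", dev) }  # remember current device header
       /Is Boot Device: true/ { print dev }'                    # emit it if it is the boot LUN
}

# On the host (run from the ESXi shell):
#   esxcli storage core device list | boot_lun
```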

Step 2: Identify extra boot LUNs mapped to the ESXi host

  • Review the remaining LUNs presented to the host.
  • Identify all boot LUNs mapped to the ESXi host from the output of the command below:
     #esxcli storage vmfs extent list
  • From the list, exclude the LUN with Is Boot Device: true.
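The filtering in Step 2 can likewise be sketched as a small shell helper (an illustration, not from the KB): given the valid boot LUN's NAA ID from Step 1, it prints every other naa.* device found in the extent list output:

```shell
# extra_luns: $1 is the valid boot LUN's NAA ID (from Step 1); stdin is
# `esxcli storage vmfs extent list` output. Prints every other naa.* device,
# i.e. the candidate extra boot LUNs to detach.
extra_luns() {
  awk -v boot="$1" '{
    for (i = 1; i <= NF; i++)
      if ($i ~ /^naa\./ && $i != boot) print $i
  }'
}

# On the host:
#   esxcli storage vmfs extent list | extra_luns naa.XXXXXXXXXXXXXXXXXXXX
```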

 

Step 3: Detach extra boot LUNs mapped to the ESXi host

  • From the vSphere Client:
    • Navigate to:
      Host → Configure → Storage → Storage Devices
    • Locate the LUNs using their NAA IDs (as identified in Step 2).
    • Select the device and click Detach.
    • Confirm the operation once all prerequisite checks pass.
  • Repeat this step for all unintended boot LUNs.
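As an alternative to the vSphere Client, the detach can be performed from the ESXi shell. This is a hedged sketch: `esxcli storage core device set --state=off` sets the device state to off, which corresponds to Detach in the client; the NAA ID shown is a placeholder for the ones identified in Step 2:

```shell
# detach_lun: detach a single device from the host by setting its state to off.
detach_lun() {
  esxcli storage core device set --state=off -d "$1"
}

# Example (placeholder NAA ID -- substitute the extra LUNs from Step 2):
#   detach_lun naa.XXXXXXXXXXXXXXXXXXXX
```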

If the Detach LUN operation fails:

If the datastore cannot be detached due to a "resource busy" error and the incorrect boot LUN mapping persists:

  • Reinstall the ESXi host on a dedicated boot LUN.
  • Ensure each host can see only its own boot device.