Unable to Deploy or Create VMs at the Cluster Level
Article ID: 385908


Updated On: 07-14-2025

Products

  • VMware vCenter Server 7.0
  • VMware vCenter Server 8.0

Issue/Introduction

  • The VM deployment task fails, even when attempted directly from the ESXi host UI.

  • The following log snippets may be observed:
    • /var/log/vmware/vsphere-ui/logs/vsphere_client_virgo.log

[YYYY-MM-DDTHH:MM:SS] [ERROR] http-nio-5090-exec-1525      71189500 113157 200764 c.v.v.client.vm.storageDrs.impl.StorageRecommendationsValidator   com.vmware.vim.vmomi.client.exception.ConnectionException: http://localhost:1080/external-vecs/http2/Test.test.test.s.com/443/sdk invocation failed with "java.net.SocketTimeoutException: 600,000 milliseconds timeout on connection http-outgoing-28382 [ACTIVE]"

    • /var/log/vmware/vsphere-ui/logs/opid.log

[YYYY-MM-DDTHH:MM:SS] [ERROR] http-nio-5090-exec-1525      71189500 113157 200764 Invocation of 'void com.vmware.vim.binding.vim.StorageResourceManager.recommendDatastores(com.vmware.vim.binding.vim.storageDrs.StoragePlacementSpec,com.vmware.vim.vmomi.core.Future)' for https://Test.test.test.s.com:443/sdk (guid=3de21686-6c7f-40a3-b237-d2967fc43fe3, id=3000613) for opId 'm2fxe2mf-9889226-auto-5vyks-h5:71189500' failed in 600559 ms, error: com.vmware.vim.vmomi.client.exception.ConnectionException: http://localhost:1080/external-vecs/http2/Test.test.test.s.com/443/sdk invocation failed with "java.net.SocketTimeoutException: 600,000 milliseconds timeout on connection http-outgoing-28382 [ACTIVE]"

[YYYY-MM-DDTHH:MM:SS] [WARN ] http-nio-5090-exec-1525       Request processing for opId 'm2fxe2mf-9889226-auto' to URL /ui/mutation/validate/urn%3Avmomi%3AVirtualMachine%3Avm-3018060%3A3de21686-6c7f-40a3-b237-d2967fc43fe3 took too long: 600578 ms

    • /var/log/vmware/vpxd/vpxd.log

YYYY-MM-DDTHH:MM:SS info vpxd[39097] [Originator@6876 sub=vpxLro opID=m2fxe2mf-9889226-auto-5vyks-h5:71189500-e9] [VpxLRO] -- BEGIN lro-1214853025 -- StorageResourceManager -- vim.StorageResourceManager.recommendDatastores -- 521db0c8-ac86-f3af-b578-26a0a8930554(52ad7d6a-65fa-9f38-13a2-2589464a02a9)

YYYY-MM-DDTHH:MM:SS error vpxd[14140] [Originator@6876 sub=vmomi.soapStub[392875] opID=m2fxe2mf-9889226-auto-5vyks-h5:71189500-e9] Initial service state request failed, disabling pings; /invsvc/vmomi/sdk, , <TCP '127.0.0.1 : 10080'>>>, HTTP Status:400 'Bad Request'

YYYY-MM-DDTHH:MM:SS info vpxd[14066] [Originator@6876 sub=vpxLro opID=sps-Main-328281-438-260643-68] [VpxLRO] -- BEGIN session[5265c884-c1ae-cc2c-2127-8a2431a0fc5a]5295ae6f-c3a1-4a97-566b-dbeae5883cc6 -- CatalogSyncManager -- vim.vslm.vcenter.CatalogSyncManager.queryCatalogChange -- 5265c884-c1ae-cc2c-2127-8a2431a0fc5a(524bf87c-40ea-11db-1a07-d318762e14a7)
YYYY-MM-DDTHH:MM:SS error vpxd[14066] [Originator@6876 sub=HostPicker opID=sps-Main-328281-438-260643-68] Couldn't find any candidate hosts for the provided urls
YYYY-MM-DDTHH:MM:SS warning vpxd[14066] [Originator@6876 sub=Vmomi opID=sps-Main-328281-438-260643-68] VMOMI activation LRO failed; <<5265c884-c1ae-cc2c-2127-8a2431a0fc5a, <TCP '127.0.0.1 : 8085'>, <TCP '127.0.0.1 : 38242'>>, CatalogSyncManager, vim.vslm.vcenter.CatalogSyncManager.queryCatalogChange>, N3Vim5Fault21InaccessibleDatastore9ExceptionE(Fault cause: vim.fault.InaccessibleDatastore
--> )
--> [context]zKq7AVECAQAAAAZLcQEWdnB4ZAAAAto3bGlidm1hY29yZS5zbwAAmXksABdtLQAf6jIBgfBxdnB4ZAABh1B0AUdKlQEGkZWBx6YaAYEBVhoBgv2fPgFsaWJ2aW0tdHlwZXMuc28AgfoBagGB9QRpAYE2BmkBgWQVaQGBx0JoAYGS7GgBAOdJIwB1nyMAwGU3A4d/AGxpYnB0aHJlYWQuc28uMAAEvzYPbGliYy5zby42AA==[/context]
YYYY-MM-DDTHH:MM:SS info vpxd[14066] [Originator@6876 sub=vpxLro opID=sps-Main-328281-438-260643-68] [VpxLRO] -- FINISH session[5265c884-c1ae-cc2c-2127-8a2431a0fc5a]5295ae6f-c3a1-4a97-566b-dbeae5883cc6
YYYY-MM-DDTHH:MM:SS info vpxd[14066] [Originator@6876 sub=Default opID=sps-Main-328281-438-260643-68] [VpxLRO] -- ERROR session[5265c884-c1ae-cc2c-2127-8a2431a0fc5a]5295ae6f-c3a1-4a97-566b-dbeae5883cc6 -- CatalogSyncManager -- vim.vslm.vcenter.CatalogSyncManager.queryCatalogChange: vim.fault.InaccessibleDatastore:
--> Result:
--> (vim.fault.InaccessibleDatastore) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = ,
-->    datastore = 'vim.Datastore:3de21686-6c7f-40a3-b237-d2967fc43fe3:datastore-4770264',
-->    name = "Datastore-Name-14",
-->    detail = "notAccessible"
-->    msg = ""
--> }
--> Args:
-->
--> Arg catalogChangeSpec:
--> (vim.vslm.CatalogChangeSpec) {
-->    datastore = 'vim.Datastore:datastore-4770264',
-->    startVClockTime = (vim.vslm.VClockInfo) {
-->       vClockTime = 0
-->    },
-->    fullSync = false
--> }

    • /var/log/vmware/vmware-sps/sps.log

YYYY-MM-DDTHH:MM:SS [pool-28-thread-5] INFO  opId=sps-Main-328281-438 com.vmware.vim.storage.common.util.OperationIdUtil - OperationID present in invoker thread, adding suffix and re-using it sps-Main-328281-438-260647
YYYY-MM-DDTHH:MM:SS [pool-2-thread-15] INFO  opId= com.vmware.vim.storage.common.task.CustomThreadPoolExecutor - [VLSI-client] Active thread count is: 1, Core Pool size is: 20, Queue size: 0, Time spent waiting in queue: 0 millis
YYYY-MM-DDTHH:MM:SS [pool-2-thread-15] INFO  opId= com.vmware.vim.storage.common.task.CustomThreadPoolExecutor - [VLSI-client] Request took 4 millis to execute.

YYYY-MM-DDTHH:MM:SS [jaeger.RemoteReporter-QueueProcessor] WARN  opId=sps-Main-328281-438 io.jaegertracing.internal.reporters.RemoteReporter - FlushCommand execution failed! Repeated errors of this command will not be logged.
io.jaegertracing.internal.exceptions.SenderException: Failed to flush spans.
        ... 3 more
Caused by: org.apache.thrift.transport.TTransportException: Cannot flush closed transport
        at io.jaegertracing.thrift.internal.reporters.protocols.ThriftUdpTransport.flush(ThriftUdpTransport.java:151) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73) ~[libthrift-0.14.1.jar:0.14.1]
        at org.apache.thrift.TServiceClient.sendBaseOneway(TServiceClient.java:66) ~[libthrift-0.14.1.jar:0.14.1]
        at io.jaegertracing.agent.thrift.Agent$Client.send_emitBatch(Agent.java:70) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at io.jaegertracing.agent.thrift.Agent$Client.emitBatch(Agent.java:63) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at io.jaegertracing.thrift.internal.senders.UdpSender.send(UdpSender.java:84) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at io.jaegertracing.thrift.internal.senders.ThriftSender.flush(ThriftSender.java:114) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        ... 3 more
Caused by: java.net.PortUnreachableException: ICMP Port Unreachable
        at java.net.PlainDatagramSocketImpl.send(Native Method) ~[?:1.8.0_402]
        at java.net.DatagramSocket.send(DatagramSocket.java:693) ~[?:1.8.0_402]
        at io.jaegertracing.thrift.internal.reporters.protocols.ThriftUdpTransport.flush(ThriftUdpTransport.java:149) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73) ~[libthrift-0.14.1.jar:0.14.1]
        at org.apache.thrift.TServiceClient.sendBaseOneway(TServiceClient.java:66) ~[libthrift-0.14.1.jar:0.14.1]
        at io.jaegertracing.agent.thrift.Agent$Client.send_emitBatch(Agent.java:70) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at io.jaegertracing.agent.thrift.Agent$Client.emitBatch(Agent.java:63) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at io.jaegertracing.thrift.internal.senders.UdpSender.send(UdpSender.java:84) ~[jaeger-thrift-1.6.0.jar:1.6.0]
        at io.jaegertracing.thrift.internal.senders.ThriftSender.flush(ThriftSender.java:114) ~[jaeger-thrift-1.6.0.jar:1.6.0]

Environment

  • VMware vCenter Server 8.x
  • VMware vCenter Server 7.x

Cause

This issue occurs when Storage DRS cannot place new virtual machines due to one or more of the following reasons:

  • Datastore utilization exceeds the configured threshold, blocking new VM creation.
  • Storage DRS rules prevent placement because the VM does not yet exist during creation.
  • Datastore is inaccessible or offline.
  • Network or connection timeouts between vCenter, ESXi hosts, or storage.

These conditions stop VM deployment at the cluster level.
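As an illustration of the first cause, the space-utilization check that Storage DRS applies can be sketched as below. This is a simplified model for illustration only, not vCenter's actual implementation; the 80% default matches the Storage DRS space-utilization threshold exposed in the vSphere Client, and the function name and sample values are assumptions.

```python
def exceeds_sdrs_space_threshold(capacity_gb, free_gb, threshold_pct=80):
    """Return True when datastore utilization is above the Storage DRS
    space-utilization threshold (80% by default in the vSphere Client).

    Simplified illustration only; real placement also weighs I/O load,
    rules, and reservations.
    """
    used_pct = 100.0 * (capacity_gb - free_gb) / capacity_gb
    return used_pct > threshold_pct

# Hypothetical example: a 1024 GB datastore with 150 GB free is ~85%
# utilized, so Storage DRS would refuse to place a new VM on it.
print(exceeds_sdrs_space_threshold(1024, 150))   # True
print(exceeds_sdrs_space_threshold(1024, 512))   # False (50% utilized)
```

Datastores that trip this check are the ones to grow or supplement per the Resolution section.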

Resolution

To resolve the issue, follow the steps below:

  • Add a new datastore or increase the size of the existing datastore to ensure sufficient space for VM deployment.
  • Remove or modify any configured rules or policies that are causing Storage DRS faults, such as affinity/anti-affinity rules or storage policies that block VM creation.
  • Alternatively, deploy or create the virtual machine directly on a specific datastore instead of the datastore cluster.
  • Temporarily disable Storage DRS on the datastore cluster, create the VM, and then re-enable Storage DRS.

These actions address both storage capacity constraints and DRS rule conflicts that can prevent VM creation.
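Because the vpxd.log fault above is vim.fault.InaccessibleDatastore, it is also worth confirming that every datastore in the cluster is accessible before re-enabling Storage DRS. A minimal sketch of that check is shown below using plain dictionaries in place of the pyVmomi Datastore.summary objects; the sample datastore names and data are hypothetical.

```python
def find_inaccessible(datastores):
    """Return the names of datastores reporting themselves inaccessible,
    mirroring the 'notAccessible' detail in the vpxd.log fault above.

    Each entry stands in for a pyVmomi Datastore.summary, which exposes
    'name' and 'accessible' fields on a live vCenter connection.
    """
    return [ds["name"] for ds in datastores if not ds["accessible"]]

# Hypothetical summaries for the datastores in one datastore cluster.
cluster = [
    {"name": "Datastore-Name-14", "accessible": False},
    {"name": "Datastore-Name-15", "accessible": True},
]
print(find_inaccessible(cluster))  # ['Datastore-Name-14']
```

Any datastore returned here should be brought back online (or removed from the cluster) before deployment is retried.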