VMware Aria Automation Orchestrator HTTP REST plugin fails with handshake message size error
Article ID: 434240


Products

VCF Automation

Issue/Introduction

When executing requests using the HTTP REST plugin in VMware Aria Automation Orchestrator to a specific device, the workflow fails during the TLS handshake.

The following error message is observed in the logs or the vRO inventory:

Cannot execute the request: ; The size of the handshake message (37832) exceeds the maximum allowed size (32768)

This occurs when the target endpoint sends a TLS handshake message (often containing a large certificate chain) that exceeds the default buffer limit of the Java runtime environment.

Cause


The issue is caused by a default security setting of the underlying Java runtime. The jdk.tls.maxHandshakeMessageSize system property defaults to 32,768 bytes (32 KB).

If the remote device's handshake message is larger than this limit (37,832 bytes in the error above), the vRO server terminates the connection.
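To gauge how close an endpoint's certificate chain comes to the 32 KB limit, you can measure its size in bytes. The sketch below generates a throwaway self-signed certificate so it runs anywhere; against the real device you would instead capture the chain with `echo | openssl s_client -connect <host>:<port> -showcerts`:

```shell
# Sketch: compare a certificate chain's size against the default
# jdk.tls.maxHandshakeMessageSize (32768 bytes).
# Against the real endpoint, capture the chain instead with:
#   echo | openssl s_client -connect <host>:<port> -showcerts > /tmp/chain.pem
# Here a throwaway self-signed certificate stands in for the chain.
openssl req -x509 -newkey rsa:4096 -nodes -days 1 -subj "/CN=example" \
  -keyout /tmp/key.pem -out /tmp/chain.pem 2>/dev/null

SIZE=$(wc -c < /tmp/chain.pem)
echo "chain size: ${SIZE} bytes"
if [ "${SIZE}" -gt 32768 ]; then
  echo "exceeds the default limit; raise jdk.tls.maxHandshakeMessageSize"
else
  echo "within the default limit"
fi
```

Note that the handshake message carries more than just the certificates, so a chain measuring close to the limit can still overflow it.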

Resolution

To resolve this issue, you must increase the maximum allowed handshake message size by modifying the JVM_OPTS in the vRO deployment descriptor.

  1. Open an SSH session to the VMware Aria Automation Orchestrator appliance.

  2. Run the following command to edit the deployment descriptor:

    kubectl -n prelude edit deployment/vco-app
    
  3. Search for the JVM_OPTS environment variable:

    • Press / and type JVM_OPTS, then press Enter.

  4. Add the following property to the end of the existing list of values within the JVM_OPTS string:

    -Djdk.tls.maxHandshakeMessageSize=131072
    

    (This example sets the limit to 131,072 bytes (128 KB), comfortably above the 37,832-byte message reported in the error.)

  5. Save and exit the editor (:wq).

  6. The vco-app pods will automatically terminate and restart to apply the new configuration.

  7. Monitor the status of the pods until they are back in a Running state:

    kubectl -n prelude get pods
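After step 4, the edited portion of the deployment spec might look like the excerpt below. The excerpt is illustrative: any other values already present in your JVM_OPTS string must be kept, and only the -Djdk.tls.maxHandshakeMessageSize entry is the one added above.

```yaml
        env:
        - name: JVM_OPTS
          # ...existing values are preserved; the last entry is the addition
          value: "-Xmx2g -Djdk.tls.maxHandshakeMessageSize=131072"
```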
    

Important Note on Persistence: These JVM settings are applied to the deployment descriptor and are not persistent across product upgrades or patches. You must review and re-apply these configurations manually after any upgrade or patching activity.
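The post-upgrade check can be scripted. The sketch below operates on a dump of the deployment spec; the excerpt it writes to the file is a hypothetical stand-in so the check is reproducible, and on the appliance you would dump the live spec with kubectl instead:

```shell
# Sketch: verify after an upgrade that the property is still configured.
# On the appliance, dump the live spec instead of this illustrative excerpt:
#   kubectl -n prelude get deployment vco-app -o yaml > /tmp/vco-app.yaml
cat > /tmp/vco-app.yaml <<'EOF'
        - name: JVM_OPTS
          value: "-Djdk.tls.maxHandshakeMessageSize=131072"
EOF

if grep -q 'jdk.tls.maxHandshakeMessageSize' /tmp/vco-app.yaml; then
  echo "property present"
else
  echo "property missing: re-apply it as described above"
fi
```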