SSL Handshake Timeout errors when associating a VMware Cloud Director service Instance with an SDDC via VMware Reverse Proxy


Article ID: 325487


Products

VMware Cloud Director

Issue/Introduction

Symptoms:
  • You cannot associate a VMware Cloud Director service instance with an SDDC via VMware Proxy, for example with Google Cloud VMware Engine, Azure VMware Solution, or on-premises infrastructure resources.
  • Running the transporter-status.sh command from the VMware Reverse Proxy Client VM shows an SSL Handshake timeout:
command_channel_1": "DISCONNECTED: Error on channel connect: io.netty.handler.ssl.SslHandshakeTimeoutException: handshake timed out after 10000ms
  • A curl request to the VMware Proxy Service hangs after sending the TLS Client Hello:
curl -v <VMware-Proxy-Service-address>

*   Trying VMware-Proxy-Service-address:443...
* Connected to VMware-Proxy-Service-address (VMware-Proxy-Service-address) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
*  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):


Cause

This issue can occur if the MTU of the underlying network on which the VMware Reverse Proxy client VM is deployed is lower than the expected default of 1500 bytes, for example an MTU of 1460, as is the case in some environments.

This behaviour can occur with Google Cloud VMware Engine environments when routing internet traffic back through an on-premises environment.
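To confirm that path MTU is the culprit, you can probe from the VMware Reverse Proxy client VM using ping with the Don't Fragment bit set. This is a minimal sketch; the target address is a placeholder, and the arithmetic assumes IPv4 (20-byte IP header plus 8-byte ICMP header):

```shell
# For a 1440-byte MTU, the largest unfragmented ICMP payload is
# 1440 - 28 = 1412 bytes (20-byte IPv4 header + 8-byte ICMP header).
MTU=1440
PAYLOAD=$((MTU - 28))
echo "Largest unfragmented ICMP payload for MTU ${MTU}: ${PAYLOAD}"

# -M do sets the Don't Fragment bit; if the path MTU is smaller than
# the packet, ping reports an error instead of a reply.
# Replace the placeholder with your VMware Proxy Service address:
# ping -c 3 -M do -s "${PAYLOAD}" VMware-Proxy-Service-address
```

If a payload sized for a 1500-byte MTU fails while the 1412-byte payload succeeds, the path MTU is below 1500 and the resolution below applies.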

Resolution

To resolve this issue, change the MTU on the VMware Reverse Proxy client VM to a value appropriate for the underlying network in the environment where it is deployed.
For example, Google Cloud VMware Engine recommends an MTU of 1440 as the safest value.

  1. Locate the VMware Reverse Proxy client VM in the vSphere UI where it is deployed.
  2. Log in to the OS of the VMware Reverse Proxy client VM as root.
NOTE: You can find the password for the root user in the vApp properties of the VM, under root-password.
  3. Set the MTU on the uplink NIC (usually eth0) using the netmgr utility:
netmgr link_info --set --interface eth0 --mtu 1440
  4. Verify that the MTU has been updated:
ip a
 
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:50:56:ae:c0:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.145/24 brd 192.168.1.255 scope global dynamic eth0
       valid_lft 7650sec preferred_lft 7650sec
    inet6 fe80::250:56ff:feae:c01b/64 scope link tentative
       valid_lft forever preferred_lft forever
  5. Set the MTU for the Docker daemon by adding {"mtu": 1440} to /etc/docker/daemon.json:
vi /etc/docker/daemon.json
  6. Confirm the change, for example:
cat /etc/docker/daemon.json

{
      "bip": "172.17.0.1/16",
      "mtu": 1440
}
  7. Reboot the VMware Reverse Proxy client VM to complete the changes:
shutdown -r now
  8. Attempt again to associate a VMware Cloud Director instance with an SDDC via VMware Proxy.
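If you prefer to script the daemon.json edit rather than editing it by hand, the following is a minimal sketch, assuming Python 3 is available on the VM. The helper name is illustrative; it merges the "mtu" key into the existing configuration so keys such as "bip" are preserved:

```python
import json
import tempfile
from pathlib import Path

def set_docker_mtu(path: str, mtu: int = 1440) -> dict:
    """Merge an "mtu" key into the Docker daemon config file,
    preserving any existing keys (such as "bip")."""
    p = Path(path)
    config = json.loads(p.read_text()) if p.exists() else {}
    config["mtu"] = mtu
    p.write_text(json.dumps(config, indent=4) + "\n")
    return config

# Demonstration against a throwaway copy; on the Reverse Proxy VM the
# real target is /etc/docker/daemon.json (edit as root, then reboot).
demo = Path(tempfile.mkdtemp()) / "daemon.json"
demo.write_text('{"bip": "172.17.0.1/16"}')
print(set_docker_mtu(str(demo)))
# prints {'bip': '172.17.0.1/16', 'mtu': 1440}
```

Merging rather than overwriting matters here: replacing the whole file would discard the existing "bip" setting shown in the step above.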


Additional Information

How Do I Associate a VMware Cloud Director Instance with an SDDC via VMware Reverse Proxy.
How Do I Troubleshoot the Connection of a VMware Cloud Director Instance to an SDDC through VMware Proxy Service.