To resolve this issue, change the MTU on the VMware Reverse Proxy client VM to the recommended setting.
The MTU should match the underlying networking of the environment where the VM is deployed.
For example, Google Cloud VMware Engine recommends an MTU of 1440 as the safest value.
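NOTE: If you are unsure which MTU is safe for your environment, you can probe the effective path MTU from any Linux host on the same network by sending pings with fragmentation prohibited. The payload size is the candidate MTU minus 28 bytes of IP and ICMP headers, and the target host below is only a placeholder:
ping -M do -s 1412 target.example.com   # 1412 + 28 = 1440; succeeds if the path carries a 1440-byte MTU
ping -M do -s 1472 target.example.com   # 1472 + 28 = 1500; fails with "message too long" if the path MTU is smaller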
- Locate the VMware Reverse Proxy client VM in the vSphere UI where it is deployed.
- Log in to the OS of the VMware Reverse Proxy client VM as root.
NOTE: You can find the password for the root user in the vApp properties of the VM, under root-password.
- Set the MTU on the uplink NIC (usually eth0) using the netmgr network manager CLI:
netmgr link_info --set --interface eth0 --mtu 1440
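NOTE: netmgr is the Photon OS network manager CLI and normally persists this setting via systemd-networkd. If the MTU does not survive a reboot, it can also be set in the .network file for eth0 (the file name below is an assumption; use whichever .network file under /etc/systemd/network/ matches the interface on your appliance), then restart networkd:
/etc/systemd/network/99-dhcp-en.network:
[Link]
MTUBytes=1440

systemctl restart systemd-networkd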
- Verify that the MTU has been updated:
ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc fq_codel state UP group default qlen 1000
link/ether 00:50:56:ae:c0:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.1.145/24 brd 192.168.1.255 scope global dynamic eth0
valid_lft 7650sec preferred_lft 7650sec
inet6 fe80::250:56ff:feae:c01b/64 scope link tentative
valid_lft forever preferred_lft forever
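NOTE: The ip a output above is trimmed to the eth0 entry. As a quicker single-value check, the kernel also exposes the interface MTU directly:
cat /sys/class/net/eth0/mtu
1440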
- Set the MTU for the Docker daemon by adding "mtu": 1440 to /etc/docker/daemon.json:
vi /etc/docker/daemon.json
- After setting the new MTU, confirm the change, for example:
cat /etc/docker/daemon.json
{
  "bip": "172.17.0.1/16",
  "mtu": 1440
}
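NOTE: dockerd refuses to start if /etc/docker/daemon.json contains invalid JSON, so it is worth validating the file before rebooting. One way to do this, assuming python3 is present on the appliance:
python3 -m json.tool /etc/docker/daemon.json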
- Reboot the VMware Reverse Proxy client VM to complete the changes:
shutdown -r now
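NOTE: Once the VM is back up, you can optionally confirm that the Docker bridge network picked up the new MTU (the exact output varies by Docker version):
docker network inspect bridge | grep -i mtu
"com.docker.network.driver.mtu": "1440"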
- Retry the Associate a VMware Cloud Director Instance with an SDDC via VMware Proxy operation.