HCX - Mobility Optimized Networking (MON) Troubleshooting Guide

Article ID: 321599

Products

VMware HCX

Issue/Introduction

Introduction

Mobility Optimized Networking (MON) is a feature available for the HCX Network Extension (NE) service.
It provides control over the location of the default gateway for a VM migrated to the cloud (target) side when connected to an extended segment. The VM can have routing (L3) access to the rest of the network through either the cloud gateway or the on-premises gateway.

Without MON, all routing access over an extended cloud segment is available only through the on-premises gateway. Traffic from a VM migrated to the cloud that communicates with VMs on different segments, also on the cloud side, must traverse the on-premises infrastructure (a.k.a. tromboning), resulting in added latency that can impact application performance.

After MON is enabled, routing between the extended segment and other networks in the cloud (East-West) will be performed locally using the cloud gateway (NSX T1). Additionally, routing access to on-premises networks (North-South) will be performed through the on-premises gateway.

MON can be enabled on a per-extended-segment basis, as well as per VM within a MON-enabled segment.
 

Consider the following diagram for a VMC on AWS deployment:

How It Works

In a regular NE scenario, the local router at the cloud side is not used; the Tier-1 NSX router interface for that segment remains in a "disconnected" state. With MON enabled, the NSX-T Tier-1 router interface is connected and, using host route (/32 prefix) injection, the T1 performs local routing between the attached segments. The optimized path is highlighted as a green line in the above diagram.
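
These injected /32 host routes can be verified programmatically. The following is a minimal sketch, not an HCX tool: it assumes the NSX-T Policy API routing-table endpoint (available in recent NSX-T versions; additional enforcement-point query parameters may be required depending on the deployment), and the manager hostname, Tier-1 gateway ID, and credentials are hypothetical placeholders:

import requests

NSX_MANAGER = "nsx-manager.example.com"  # hypothetical NSX Manager hostname
TIER1_ID = "cgw"                         # hypothetical Tier-1 gateway ID
AUTH = ("admin", "password")             # placeholder credentials

# Query the Tier-1 routing table through the NSX-T Policy API.
url = f"https://{NSX_MANAGER}/policy/api/v1/infra/tier-1s/{TIER1_ID}/routing-table"
resp = requests.get(url, auth=AUTH, verify=False)
resp.raise_for_status()

# Print only the /32 host routes that MON injects for migrated VMs.
for table in resp.json().get("results", []):
    for entry in table.get("route_entries", []):
        if entry.get("network", "").endswith("/32"):
            print(entry["network"], "->", entry.get("next_hop"))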

With MON enabled, the networking paths are:

  1. L3 over T1 cloud gateway (MON Path) – Traffic between VM A on the L2E_Network (cloud) and VM B on the Cloud Network (cloud) is routed locally through the NSX Tier-1 gateway. (green arrow)
  2. L2 over NE bridge (L2 Path) – Traffic between VM A on the L2E_Network (cloud) and VM C on the Extended Network (on-premises) is bridged by the NE appliances (red line) over the encrypted tunnel (brown line). Extended segments retain the same IP subnet.
  3. L3 over on-premises gateway (L3 Path) – Traffic between VM A on the L2E_Network (cloud) and VM D on the Local Network (on-premises) is first routed locally through the NSX Tier-1 gateway (yellow line). Using policy-based routing, the T1 redirects the traffic to the cloud NE appliance using a well-known MAC address. The on-premises NE appliance then forwards the traffic to the local default gateway, based either on the gateway MAC address learned through probing or on the well-known DLR MAC if NSX is used on-premises (yellow line). Return traffic, or traffic originating from the on-premises gateway, is forwarded directly by the cloud NE appliance to the VM.

Enabling MON on a segment

1. Extend a network with MON enabled. Use the Mobility Optimized Networking Toggle button to enable MON while extending a network.



2. Un-extend an existing network with MON enabled.
     a. Select the network in the HCX UI and click the UNEXTEND button.



     b. Make sure to check the "Connect Cloud Network" option if the interface on the cloud T1 gateway must be connected upon removal, to allow for L3 connectivity.



3. Enable MON on a previously extended network.


 
IMPORTANT: While activating MON, connectivity across the network extension may experience a brief outage.



4. Disable MON on a previously extended network.

Enabling MON on a VM

1. Once a network is extended with MON enabled, all member VMs can be seen in the HCX UI by expanding the view.



2. MON activation per VM is based on the migration type:

a. HCX vMotion/RAV-migrated VMs will use the on-premises gateway by default, until transitioned to use the cloud gateway.
b. HCX Bulk-migrated VMs will be MON-optimized to use the cloud gateway by default.
c. VMs created in the cloud will use the on-premises gateway by default, until transitioned to use the cloud gateway.


The VM default gateway location can be toggled by selecting the target router location and clicking Submit.
IMPORTANT: Changing the router location may cause a brief outage for ongoing traffic to/from that VM.

HCX Policy-Routes

1. Upon completion of the first network extension with MON enabled, HCX automatically configures policy routes on the NSX T1 cloud gateway for the subnets defined in RFC 1918. The route list can be viewed and modified through the Network Extension section in the HCX Connector UI:



When the destination network for traffic originated by a VM on the cloud side is not within the SDDC, the policy routes configured by MON are used:

a. If the destination IP matches a policy route, traffic is forwarded to the on-premises gateway via the cloud NE appliance.
b. If the destination IP does NOT match any policy route, traffic is forwarded to the Tier-0 gateway, subject to the NSX-T routing policy.
c. For Internet traffic (non-RFC 1918 destinations), traffic may be forwarded to the NSX-T Tier-0 gateway for external access from the SDDC, but NAT configuration is required, since inbound traffic is not supported with MON.
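
The match logic can be illustrated with a short sketch. This is not HCX code; it simply classifies a destination IP against the default RFC 1918 policy routes to show which egress path would apply:

import ipaddress

# Default policy routes installed by MON (the RFC 1918 private ranges).
POLICY_ROUTES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def mon_path(destination: str) -> str:
    """Classify the egress path for a destination outside the SDDC."""
    ip = ipaddress.ip_address(destination)
    if any(ip in net for net in POLICY_ROUTES):
        return "on-premises gateway via cloud NE appliance (policy route match)"
    return "NSX-T Tier-0 gateway (no policy route match; NAT required)"

print(mon_path("10.1.2.3"))      # private destination -> on-premises gateway
print(mon_path("203.0.113.10"))  # public destination  -> Tier-0 gateway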

Resolution

Symptoms & Diagnostics

For each condition listed under Symptom below, check and verify the requirements listed under Diagnostics, and troubleshoot accordingly.

1. MON option on HCX vCenter plugin or standalone UI
 

Symptom: No checkbox to extend new networks with MON
Diagnostics:
- HCX Cloud Manager is registered with NSX-T Manager
- MON license is enabled on the HCX Connector

Symptom: No toggle button to enable MON on already extended networks
Diagnostics:
- HCX Cloud Manager is registered with NSX-T Manager
- MON license is enabled on the HCX Connector


 2. NSX-T segments on cloud (destination) side
 

Symptom: Segment is not configured with a /32 logical router interface on NSX-T Tier-1
Diagnostics:
- VMC Portal shows the segment as MON enabled
- HCX Connector shows the Extended Network as MON enabled
- HCX Cloud Manager is registered with NSX-T Manager
- NSX-T is correctly configured in HCX
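
As a quick check for this scenario, the gateway connection state of each segment can be read from the NSX-T Policy API. The following is a minimal sketch with hypothetical hostname and credentials; in recent NSX-T versions, the segment's advanced_config.connectivity field ("ON"/"OFF") indicates whether its gateway interface is connected:

import requests

NSX_MANAGER = "nsx-manager.example.com"  # hypothetical NSX Manager hostname
AUTH = ("admin", "password")             # placeholder credentials

# List all segments and report their gateway connectivity state.
url = f"https://{NSX_MANAGER}/policy/api/v1/infra/segments"
resp = requests.get(url, auth=AUTH, verify=False)
resp.raise_for_status()

for seg in resp.json().get("results", []):
    state = seg.get("advanced_config", {}).get("connectivity", "unknown")
    print(seg["display_name"], "- gateway connectivity:", state)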


3. VM connectivity 
 

Symptom: VM created on Extended Network is not MON enabled
Diagnostics:
- vNIC is connected to the extended segment
- An IP address is assigned to the VM
- VMware Tools is reporting the IP configuration to vCenter
- IP address and mask are within the subnet of the extended network

Symptom: VM on cloud extended segment failed to get an IP from DHCP
Diagnostics:
- DHCP client packets are allowed by the Segment Security Policy on cloud NSX-T (destination)
- VM connectivity using a static IP address
- Reachability to the default gateway and DHCP server

Symptom: VM in cloud extended segment cannot reach a VM on a different segment in the cloud
Diagnostics:
- NE appliance health and tunnel status
- IP is as expected and recognized by vCenter
- VM reachability to other VMs on the same segment in the cloud
- VM reachability to the T1 default gateway
- ARP table on the cloud VM has the MAC address of the default gateway
- Status of the NSX T1, and its interface is connected to both cloud segments
- /32 host routes are installed in the T1

Symptom: VM in cloud extended segment cannot reach a VM on the same segment on-premises
Diagnostics:
- NE appliance health and tunnel status
- IP is as expected and recognized by vCenter
- VM reachability to other VMs on the same segment in the cloud
- VM reachability to the T1 default gateway
- ARP table on the cloud VM has the MAC address of the default gateway
- Status of the NSX T1, and its interface is connected to both cloud segments
- /32 host routes are installed in the T1
- MON policy routes are configured and active
- Bridge configuration on the NE appliances
- MAC address table on the NE appliances
- Location of the NE appliance VM in the deployment cluster, and location of the target VM in the service cluster on-premises
- Deployment cluster configuration and access to local network resources on the ESXi host where the NE appliance is deployed on-premises
- Teaming policy configuration for the uplinks of the ESXi host where the NE appliance is deployed on-premises
- Local network infrastructure on-premises

Symptom: VM in cloud extended segment cannot reach the default gateway on-premises
Diagnostics:
- Reachability to other VMs on the same segment on-premises (previous failure scenario)
- Type of router on-premises: external or NSX DLR
- MAC address of the on-premises default gateway identified by the NE appliance
- Use of HSRP or VRRP redundancy for default gateway connectivity on-premises

Symptom: VM in cloud extended segment cannot reach VMs in other networks on-premises
Diagnostics:
- Reachability to the default gateway on-premises (previous failure scenario)
- Routing information on the default gateway on-premises
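
Several of the checks above, such as gateway reachability and the ARP entry for the default gateway, can be scripted from inside the affected VM. The following is a minimal sketch for a Linux guest, assuming a hypothetical gateway IP (Windows guests would use different commands):

import subprocess

GATEWAY_IP = "192.168.10.1"  # hypothetical default gateway of the extended segment

# Check reachability to the default gateway.
ping = subprocess.run(["ping", "-c", "3", GATEWAY_IP],
                      capture_output=True, text=True)
print("Gateway reachable" if ping.returncode == 0 else "Gateway NOT reachable")

# Check whether the neighbor (ARP) table has a MAC entry for the gateway.
neigh = subprocess.run(["ip", "neigh", "show", GATEWAY_IP],
                       capture_output=True, text=True)
entry = neigh.stdout.strip()
print("Neighbor entry:", entry if entry else "none (no MAC learned for the gateway)")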

 

Restrictions and other considerations for MON

1. IP unavailable:
If the IP of the VM is not available or has been removed, HCX cannot track the workload; therefore the /32 host route on the Tier-1 logical router will not be programmed, and traffic originated by the VM will be sent back through the cloud NE appliance toward the on-premises gateway.

2. IP address outside of the extended network:
If the IP of the VM is not in the subnet of the extended network that the vNIC is attached to, the /32 host route on the Tier-1 logical router will not be programmed, and traffic originated by the VM will be sent back through the cloud NE appliance toward the on-premises gateway.

3. Secondary IP and dual vNIC:
If the VM has a secondary IP on a different subnet, or two or more vNICs attached to the same extended segment, the /32 host route on the Tier-1 logical router will not be programmed, and traffic originated by the VM will be sent back through the cloud NE appliance toward the on-premises gateway (see the sketch after this list).

4. IPv6 and dual stack:
MON does not support IPv6; IPv6 traffic will not be optimized.

5. NE appliance upgrade:
During an NE appliance upgrade, traffic disruption between cloud and on-premises is expected until the transport tunnels are re-established. Local traffic at the source and destination sides is not affected.
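
The first three restrictions can be pre-checked before relying on MON optimization for a given VM. The following is a minimal sketch with hypothetical VM data; it flags each case where the /32 host route would not be programmed:

import ipaddress

def mon_route_eligible(vm_ips: list[str], segment_subnet: str) -> bool:
    """Return True if a /32 host route would be programmed for this VM."""
    subnet = ipaddress.ip_network(segment_subnet)

    if not vm_ips:
        # Restriction 1: no IP reported, HCX cannot track the workload.
        print("No IP reported for the VM")
        return False
    if len(vm_ips) > 1:
        # Restriction 3: secondary IP or multiple vNICs on the same segment.
        print("Multiple IPs/vNICs on the extended segment")
        return False
    ip = ipaddress.ip_address(vm_ips[0])
    if ip not in subnet:
        # Restriction 2: IP is outside the extended network's subnet.
        print(f"{ip} is outside {subnet}")
        return False
    return True

print(mon_route_eligible(["192.168.10.25"], "192.168.10.0/24"))  # True
print(mon_route_eligible(["10.0.0.5"], "192.168.10.0/24"))       # False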