Purpose
This article outlines the procedures for the NSX-T design changes that you must implement in VMware Cloud Foundation 3.10 before updating to VMware Cloud Foundation 3.11.
NSX-T workload domains with multiple availability zones in VMware Cloud Foundation 3.10 use NSX-T Data Center 2.5.1. NSX-T Data Center 2.5.x uses a three-N-VDS edge node architecture, with edge nodes pinned to Availability Zone 1 and Availability Zone 2. In NSX-T Data Center 3.x, this design changed to a single-N-VDS architecture. The edge node fabric design also changed so that uplink networks are stretched between availability zones instead of being pinned to the individual zones with different uplinks.
This documentation focuses on addressing two design changes related to NSX-T Data Center in VMware Cloud Foundation 3.11.
Follow this procedure prior to updating the components to the versions described in the bill of materials for VMware Cloud Foundation 3.11.
VMware Software Versions in the Update
Product Name | Product Version in VMware Cloud Foundation 3.10 | Product Version in VMware Cloud Foundation 3.11 |
NSX-T Data Center | 2.5.1 15314288 | 3.0.3.1 19067109 |
Modify the standby uplinks in the existing host uplink profile.
Name | Teaming Policy | Active Uplinks | Standby Uplinks |
Uplink01 | Failover Order | uplink-1 | uplink-2 |
Uplink02 | Failover Order | uplink-2 | uplink-1 |
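If you prefer to script this change instead of using the NSX Manager UI, the following minimal Python sketch updates the teamings of the existing host uplink profile through the NSX-T Manager API. The manager FQDN, credentials, and profile name are placeholders, and the sketch assumes Uplink01 and Uplink02 are named teamings inside one existing host uplink profile; verify the payload fields against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
PROFILE_NAME = "host-uplink-profile"         # placeholder name of the existing profile

# Named teamings with the standby uplink each one gains, per the table above.
teamings = [
    {"name": "Uplink01", "policy": "FAILOVER_ORDER",
     "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
     "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
    {"name": "Uplink02", "policy": "FAILOVER_ORDER",
     "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
     "standby_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
]

profiles = requests.get(f"{NSX}/api/v1/host-switch-profiles",
                        auth=AUTH, verify=False).json()["results"]
profile = next(p for p in profiles if p["display_name"] == PROFILE_NAME)

# Replace the named teamings and push the whole object back; the PUT body
# keeps the _revision returned by the GET, as the Manager API requires.
profile["named_teamings"] = teamings
r = requests.put(f"{NSX}/api/v1/host-switch-profiles/{profile['id']}",
                 auth=AUTH, verify=False, json=profile)
r.raise_for_status()
```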
Create an overlay profile to configure the uplinks of the new NSX-T Edge nodes that are required for the migration to a single N-VDS architecture.
Name | Teaming – Teaming Policy | Teaming – Active Uplinks | Transport VLAN | MTU |
new-overlay-profile | Load Balance Source | uplink-1,uplink-2 | VLANID | 9000 |
The Load Balance Source option represents load balancing that is based on the source port ID.
Configure the following named teaming policies in the profile. They are used later to pin each uplink segment to a single uplink.
Name | Teaming Policy | Active Uplinks |
uplink01 | Failover Order | uplink1 |
uplink02 | Failover Order | uplink2 |
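As an alternative to the UI, the overlay profile can be created through the NSX-T Manager API, as in the following sketch. The manager FQDN, credentials, and transport VLAN are placeholders, and the sketch uses the uplink names uplink-1 and uplink-2 consistently for both the default and the named teamings; adjust the names and verify the payload fields against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials

payload = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "new-overlay-profile",
    "transport_vlan": 1614,                  # placeholder for VLANID
    "mtu": 9000,
    # Default teaming: load balance source with both uplinks active.
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    # Named teamings used later to pin each uplink segment to one uplink.
    "named_teamings": [
        {"name": "uplink01", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
        {"name": "uplink02", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
    ],
}

r = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  auth=AUTH, verify=False, json=payload)
r.raise_for_status()
```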
Create transport zones for uplink traffic.
Setting | Value |
Name | edge-uplink-tz |
N-VDS Name | Enter the N-VDS name |
N-VDS Mode | Standard |
Traffic Type | VLAN |
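A scripted equivalent of this step, using the NSX-T Manager API, is shown below as a minimal sketch. The manager FQDN and credentials are placeholders, and the N-VDS name mirrors the edge switch name used later in this procedure.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials

payload = {
    "display_name": "edge-uplink-tz",
    "host_switch_name": "sfo01-w-nvds01",    # the N-VDS name used by the edge nodes
    "host_switch_mode": "STANDARD",
    "transport_type": "VLAN",                # uplink traffic is VLAN-backed
}

r = requests.post(f"{NSX}/api/v1/transport-zones",
                  auth=AUTH, verify=False, json=payload)
r.raise_for_status()
print("Created transport zone", r.json()["id"])
```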
Create the uplink segments that connect to the new uplink transport zone.
Segment Name | Uplink & Type | Transport Zone | VLAN |
nvds01-uplink01 | None | edge-uplink-tz | 0-4094 |
nvds01-uplink02 | None | edge-uplink-tz | 0-4094
Setting | Value |
Name | nvds01-uplink01 |
Transport Zone | edge-uplink-tz |
VLAN | 0-4094 |
Segment | Uplink Teaming Policy |
nvds01-uplink01 | uplink01
nvds01-uplink02 | uplink02
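The uplink segments can also be created through the NSX-T Policy API, as in the following sketch. The manager FQDN, credentials, and the transport zone path are placeholders; substitute the actual path or ID of the edge-uplink-tz transport zone in your environment.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials

segments = {
    "nvds01-uplink01": "uplink01",           # segment name -> named teaming policy
    "nvds01-uplink02": "uplink02",
}

for name, teaming in segments.items():
    payload = {
        "display_name": name,
        # Placeholder policy path of the VLAN transport zone created above.
        "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                               "/transport-zones/EDGE_UPLINK_TZ_ID",
        "vlan_ids": ["0-4094"],              # trunk range, as in the table above
        "advanced_config": {"uplink_teaming_policy_name": teaming},
    }
    r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/{name}",
                       auth=AUTH, verify=False, json=payload)
    r.raise_for_status()
```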
Deploy a new NSX-T Edge cluster with new edge nodes that use the single N-VDS architecture. Then, migrate the networking components to the new edge nodes and delete the legacy edge nodes.
Setting | Value for en01 | Value for en02 |
Network 3 | Unused | Unused |
Network 2 | nvds01-uplink02 | nvds01-uplink02 |
Network 1 | nvds01-uplink01 | nvds01-uplink01 |
Network 0 | management port group | management port group |
Management IP address | 172.16.41.21 | 172.16.41.22 |
Default gateway | 172.16.41.253 | 172.16.41.253 |
Setting | Value |
Name | Enter the Name for Edge Node01 |
Host Name/FQDN | Enter the FQDN of Edge Node01 |
Form Factor | Medium |
Setting | Value |
CLI "admin" User Password / Confirm Password | nsx_edge_admin_password |
System Root User Password / Confirm Password | nsx_edge_root_password |
CLI "audit" User Password / Confirm Password | nsx_edge_audit_password |
Allow SSH Login | Yes |
Allow Root SSH Login | Yes |
Setting | Value |
Compute Manager | Select the Compute Manager |
Cluster | Select the Cluster |
Datastore | Select the Datastore |
Setting | Value |
IP Assignment | Static |
Management IP | Enter the Management IP |
Default Gateway | Enter the Default Gateway |
Management Interface | Enter the Management Portgroup |
Search Domain Names | Enter DNS Search Names |
DNS Servers | Enter DNS Servers |
NTP Servers | Enter NTP Servers |
Setting | Value |
Transport Zone | edge-uplink-tz, overlay-tz |
Edge Switch Name | sfo01-w-nvds01 |
Uplink Profile | new-overlay-profile |
IP Assignment | Use Static IP List |
Static IP List | Enter Static IPs |
Gateway | Enter Gateway |
Subnet Mask | Enter Subnet Mask |
DPDK Fastpath Interfaces |
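The single-N-VDS switch settings in the table above map to the host_switch_spec of the edge transport node in the NSX-T Manager API. The following sketch is illustrative only: the profile, transport zone, and node IDs, the TEP addressing, and the fp-eth-to-uplink mapping are placeholder assumptions that you must replace with values from your environment; verify the field names against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials

# Single-N-VDS switch configuration for one edge transport node.
host_switch_spec = {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [{
        "host_switch_name": "sfo01-w-nvds01",
        "host_switch_profile_ids": [
            {"key": "UplinkHostSwitchProfile", "value": "NEW_OVERLAY_PROFILE_ID"},
        ],
        "transport_zone_endpoints": [
            {"transport_zone_id": "EDGE_UPLINK_TZ_ID"},
            {"transport_zone_id": "OVERLAY_TZ_ID"},
        ],
        "ip_assignment_spec": {
            "resource_type": "StaticIpListSpec",
            "ip_list": ["172.16.44.21"],       # placeholder static TEP address
            "default_gateway": "172.16.44.253",  # placeholder gateway
            "subnet_mask": "255.255.255.0",
        },
        "pnics": [
            {"device_name": "fp-eth0", "uplink_name": "uplink-1"},  # placeholder mapping
            {"device_name": "fp-eth1", "uplink_name": "uplink-2"},  # placeholder mapping
        ],
    }],
}

# Apply the switch configuration to an existing edge transport node.
node_url = f"{NSX}/api/v1/transport-nodes/EDGE_NODE_01_ID"
node = requests.get(node_url, auth=AUTH, verify=False).json()
node["host_switch_spec"] = host_switch_spec
requests.put(node_url, auth=AUTH, verify=False, json=node).raise_for_status()
```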
Create an anti-affinity rule to ensure that the edge nodes run on different ESXi hosts. If an ESXi host is unavailable, the edge nodes on the other hosts continue to provide support for the NSX management and control planes.
Option | Description |
Name | Select the existing edge anti-affinity rule |
Members | Click Add, select the two new edge nodes, and click OK. |
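If you manage the rule with a script, the following pyVmomi sketch adds the two new edge node virtual machines to the existing edge anti-affinity rule. The vCenter Server FQDN, credentials, cluster name, rule name, and VM names are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",           # placeholder vCenter Server
                  user="administrator@vsphere.local",
                  pwd="vcenter_password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "workload-cluster")   # placeholder
edge_vms = [find_by_name(vim.VirtualMachine, n) for n in ("en01", "en02")]

# Locate the existing edge anti-affinity rule and append the new members.
rule = next(r for r in cluster.configurationEx.rule
            if isinstance(r, vim.cluster.AntiAffinityRuleSpec)
            and r.name == "edge-anti-affinity-rule")       # placeholder rule name
rule.vm = list(rule.vm) + edge_vms

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="edit", info=rule)])
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```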
After the edge nodes are deployed, manually move the NSX-T Edge nodes to the correct resource pool to ensure the correct allocation of resources during times of contention. Move the NSX-T Edge cluster nodes to the available edge node resource pool.
To define a common configuration for NSX-T Edge nodes, you create an edge cluster profile.
Setting | Value |
Name | Enter the Name of Edge Cluster Profile |
BFD Probe (ms) | 1000 |
BFD Allowed Hops | 255 |
BFD Declare Dead Multiple | 3 |
StandBy Relocation Threshold (mins) | 30 |
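A scripted equivalent of this step, using the NSX-T Manager API for edge cluster profiles, might look like the following sketch. The manager FQDN, credentials, and profile name are placeholders; verify the field names against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials

payload = {
    "resource_type": "EdgeHighAvailabilityProfile",
    "display_name": "new-edge-cluster-profile",          # placeholder profile name
    "bfd_probe_interval": 1000,                          # milliseconds
    "bfd_allowed_hops": 255,
    "bfd_declare_dead_multiple": 3,
    "standby_relocation_config": {"standby_relocation_threshold": 30},  # minutes
}

r = requests.post(f"{NSX}/api/v1/cluster-profiles",
                  auth=AUTH, verify=False, json=payload)
r.raise_for_status()
```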
Adding multiple NSX-T Edge nodes to a cluster increases the availability of networking services. An NSX-T Edge cluster is necessary to support the Tier-0 and Tier-1 gateways in the workload domain.
Setting | Value |
Name | Enter the Name for Edge Cluster |
Edge Cluster Profile | Select Name of Edge Cluster Profile |
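The edge cluster itself can also be created through the NSX-T Manager API, as in the following sketch. The manager FQDN, credentials, cluster name, profile ID, and edge transport node IDs are placeholders.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials

payload = {
    "display_name": "new-edge-cluster",                  # placeholder cluster name
    "cluster_profile_bindings": [{
        "resource_type": "EdgeHighAvailabilityProfile",
        "profile_id": "EDGE_CLUSTER_PROFILE_ID",         # ID returned when the profile was created
    }],
    "members": [
        {"transport_node_id": "EDGE_NODE_01_ID"},        # IDs of the two new edge transport nodes
        {"transport_node_id": "EDGE_NODE_02_ID"},
    ],
}

r = requests.post(f"{NSX}/api/v1/edge-clusters",
                  auth=AUTH, verify=False, json=payload)
r.raise_for_status()
```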
The Tier-0 gateway in the NSX-T Edge cluster provides a gateway service between the logical and physical network. The NSX-T Edge cluster can back multiple Tier-0 gateways.
Setting | Value |
Tier-0 Gateway Name | tier0-01 |
High Availability Mode | Active-Active |
Edge Cluster | Select Edge Cluster Newly Created |
Configure the uplink interfaces of the Tier-0 gateway with the following settings.
Name | Type | IP Address/Mask | Connected to (Segment) | Edge Node | MTU |
en01-Uplink01 | External | 172.16.47.2/24 | segment of uplink01 | edge_node1 | 9000 |
en01-Uplink02 | External | 172.16.48.2/24 | segment of uplink02 | edge_node1 | 9000 |
en02-Uplink01 | External | 172.16.47.3/24 | segment of uplink01 | edge_node2 | 9000 |
en02-Uplink02 | External | 172.16.48.3/24 | segment of uplink02 | edge_node2 | 9000 |
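The following Policy API sketch shows how the Tier-0 gateway, its edge cluster binding, and the four uplink interfaces from the table above could be created with a script. The manager FQDN, credentials, edge cluster path, and edge node IDs are placeholders; verify the paths against your environment before use.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
T0 = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-01"
EDGE_CLUSTER = ("/infra/sites/default/enforcement-points/default/"
                "edge-clusters/EDGE_CLUSTER_ID")          # placeholder edge cluster path

# Active-active Tier-0 gateway backed by the new edge cluster.
requests.patch(T0, auth=AUTH, verify=False,
               json={"display_name": "tier0-01",
                     "ha_mode": "ACTIVE_ACTIVE"}).raise_for_status()
requests.patch(f"{T0}/locale-services/default", auth=AUTH, verify=False,
               json={"edge_cluster_path": EDGE_CLUSTER}).raise_for_status()

# One external interface per edge node and uplink segment.
interfaces = [
    ("en01-Uplink01", "172.16.47.2", "nvds01-uplink01", "EDGE_NODE_01_ID"),
    ("en01-Uplink02", "172.16.48.2", "nvds01-uplink02", "EDGE_NODE_01_ID"),
    ("en02-Uplink01", "172.16.47.3", "nvds01-uplink01", "EDGE_NODE_02_ID"),
    ("en02-Uplink02", "172.16.48.3", "nvds01-uplink02", "EDGE_NODE_02_ID"),
]
for name, ip, segment, node_id in interfaces:
    payload = {
        "type": "EXTERNAL",
        "mtu": 9000,
        "subnets": [{"ip_addresses": [ip], "prefix_len": 24}],
        "segment_path": f"/infra/segments/{segment}",
        "edge_path": f"{EDGE_CLUSTER}/edge-nodes/{node_id}",
    }
    requests.patch(f"{T0}/locale-services/default/interfaces/{name}",
                   auth=AUTH, verify=False, json=payload).raise_for_status()
```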
Expand BGP, configure the settings, and click Save.
Setting | Value |
Local AS | bgp_asn |
BGP | On |
Graceful Restart | Disabled |
Inter SR iBGP | On |
ECMP | On |
Multipath Relax | On |
IP Address | BFD | Remote AS/Source Addresses | Hold Down Time | Keep Alive Time | Password | Out Filter | In Filter |
ip_bgp_neighbor1 | Disabled | bgp_asn | 12 | 4 | bgp_password | - | - |
ip_bgp_neighbor2 | Disabled | bgp_asn | 12 | 4 | bgp_password | - | - |
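The BGP settings and the two neighbors can be applied through the Policy API as well. In the following sketch, the manager FQDN, credentials, AS numbers, and neighbor addresses are placeholders for bgp_asn, ip_bgp_neighbor1, and ip_bgp_neighbor2; verify the field names against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
BGP = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-01/locale-services/default/bgp"

# Global BGP settings on the Tier-0 gateway.
bgp = {
    "enabled": True,
    "local_as_num": "65003",                 # placeholder for bgp_asn
    "ecmp": True,
    "inter_sr_ibgp": True,
    "multipath_relax": True,
    "graceful_restart_config": {"mode": "DISABLE"},
}
requests.patch(BGP, auth=AUTH, verify=False, json=bgp).raise_for_status()

# Two BGP neighbors, one per uplink VLAN, without route filters.
for name, ip in (("az1-neighbor1", "172.16.47.1"),      # placeholder for ip_bgp_neighbor1
                 ("az1-neighbor2", "172.16.48.1")):      # placeholder for ip_bgp_neighbor2
    neighbor = {
        "neighbor_address": ip,
        "remote_as_num": "65001",            # placeholder for the neighbor bgp_asn
        "hold_down_time": 12,
        "keep_alive_time": 4,
        "password": "bgp_password",
        "bfd": {"enabled": False},
    }
    requests.patch(f"{BGP}/neighbors/{name}",
                   auth=AUTH, verify=False, json=neighbor).raise_for_status()
```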
NSX-T Data Center Configuration for Availability Zone 2
Configure IP Prefixes in the New Tier-0 Gateway for Availability Zone 2
You configure the default and any IP prefixes on the Tier-0 gateway to permit access to route advertisement by any network and by the 0.0.0.0/0 network. These IP prefixes are used in route maps to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors and to configure the local preference on the learned default route for BGP neighbors in Availability Zone 2.
Setting | Value |
Name | Any |
Network | any |
Action | Permit |
Name | Default Route |
Network | 0.0.0.0/0 |
Action | Permit |
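A scripted version of the two IP prefix lists, using the Policy API, might look like the following sketch. The manager FQDN and credentials are placeholders, and the assumption that a prefix entry without a network value matches any network should be verified against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
T0 = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-01"

# Prefix list matching any network.
requests.patch(f"{T0}/prefix-lists/Any", auth=AUTH, verify=False, json={
    "display_name": "Any",
    "prefixes": [{"action": "PERMIT"}],      # no network set, assumed to match any
}).raise_for_status()

# Prefix list matching only the default route.
requests.patch(f"{T0}/prefix-lists/Default-Route", auth=AUTH, verify=False, json={
    "display_name": "Default Route",
    "prefixes": [{"network": "0.0.0.0/0", "action": "PERMIT"}],
}).raise_for_status()
```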
Configure Route Maps in the New Tier-0 Gateway for Availability Zone 2
To define which routes are redistributed in the workload domain, you configure route maps in the new Tier-0 gateway. First, create a route map named rm-in-az2 for incoming traffic to Availability Zone 2 with the following two entries.
Setting | Value for Default Route | Value for Any |
Type | IP Prefix | IP Prefix |
Members | Default Route | Any |
Local Preference | 80 | 90 |
Action | Permit | Permit |
Repeat step 4 to create a route map for outgoing traffic from availability zone 2 with the following configuration.
Setting | Value |
Route map name | rm-out-az2 |
Type | IP Prefix |
Members | Any |
As Path Prepend | bgp_asn |
Local Preference | 100 |
Action | Permit |
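The two route maps can be created through the Policy API as in the following sketch. The manager FQDN, credentials, and the AS number used for the prepend are placeholders, and the prefix-list paths assume the prefix list names used earlier in this procedure.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
T0 = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-01"
PLIST = "/infra/tier-0s/tier0-01/prefix-lists"

# Incoming route map: lower the local preference of routes learned in AZ2.
requests.patch(f"{T0}/route-maps/rm-in-az2", auth=AUTH, verify=False, json={
    "display_name": "rm-in-az2",
    "entries": [
        {"action": "PERMIT", "prefix_list_matches": [f"{PLIST}/Default-Route"],
         "set": {"local_preference": 80}},
        {"action": "PERMIT", "prefix_list_matches": [f"{PLIST}/Any"],
         "set": {"local_preference": 90}},
    ],
}).raise_for_status()

# Outgoing route map: prepend the local AS to routes advertised from AZ2.
requests.patch(f"{T0}/route-maps/rm-out-az2", auth=AUTH, verify=False, json={
    "display_name": "rm-out-az2",
    "entries": [
        {"action": "PERMIT", "prefix_list_matches": [f"{PLIST}/Any"],
         "set": {"as_path_prepend": "65003",   # placeholder for bgp_asn
                 "local_preference": 100}},
    ],
}).raise_for_status()
```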
Configure BGP in the Tier-0 Gateway for Availability Zone 2
To enable failover from Availability Zone 1 to Availability Zone 2, you configure BGP neighbors on the Tier-0 gateway in the management domain or in a workload domain that is stretched across availability zones. You add route filters to configure the local preference on incoming traffic and the AS path prepend on outgoing traffic.
You configure two BGP neighbors with route filters for the uplink interfaces in availability zone 2.
BGP Neighbors for Availability Zone 2
Setting | BGP Neighbor 1 | BGP Neighbor 2 |
IP Address | ip_bgp_neighbor1 | ip_bgp_neighbor2 |
BFD | Disabled | Disabled |
Remote AS | asn_bgp_neighbor1 | asn_bgp_neighbor2 |
Hold downtime | 12 | 12 |
Keep alive time | 4 | 4 |
Password | bgp_password | bgp_password |
Setting | BGP Neighbor 1 | BGP Neighbor 2 |
IP Address Family | IPV4 | IPV4 |
Enabled | Enabled | Enabled |
Out Filter | rm-out-az2 | rm-out-az2 |
In Filter | rm-in-az2 | rm-in-az2 |
Maximum Routes | - | - |
Setting | Value |
IP address | ip_bgp_neighbor1 |
BFD | Disabled. Note: Enable BFD only if the network supports and is configured for BFD. |
Remote AS | asn_bgp_neighbor1 |
Hold downtime | 12 |
Keep alive time | 4 |
Password | bgp_password |
Setting | Value |
IP address Family | IPV4 |
Enabled | Enabled |
Out Filter | rm-out-az2 |
In Filter | rm-in-az2 |
Maximum Routes | - |
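The Availability Zone 2 neighbors and their route filters can also be applied through the Policy API. In the following sketch, the manager FQDN, credentials, neighbor addresses, and remote AS numbers are placeholders for ip_bgp_neighbor1, ip_bgp_neighbor2, and asn_bgp_neighbor1/2; verify the field names against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
BGP = f"{NSX}/policy/api/v1/infra/tier-0s/tier0-01/locale-services/default/bgp"
RMAPS = "/infra/tier-0s/tier0-01/route-maps"

# Two BGP neighbors for Availability Zone 2, each with the AZ2 route filters.
neighbors = [("az2-neighbor1", "172.27.47.1", "65002"),  # placeholder IPs and ASNs
             ("az2-neighbor2", "172.27.48.1", "65002")]

for name, ip, remote_as in neighbors:
    payload = {
        "neighbor_address": ip,
        "remote_as_num": remote_as,
        "hold_down_time": 12,
        "keep_alive_time": 4,
        "password": "bgp_password",
        "bfd": {"enabled": False},           # enable only if the network supports BFD
        "route_filtering": [{
            "address_family": "IPV4",
            "enabled": True,
            "in_route_filters": [f"{RMAPS}/rm-in-az2"],
            "out_route_filters": [f"{RMAPS}/rm-out-az2"],
        }],
    }
    requests.patch(f"{BGP}/neighbors/{name}",
                   auth=AUTH, verify=False, json=payload).raise_for_status()
```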
The Tier-1 gateway must be migrated to the new Tier-0 gateway and the new NSX-T Edge cluster.
Setting | Value |
Tier-1 Gateway Name | tier1_gateway |
Linked Tier-0 Gateway | new_tier0_gateway |
Edge Cluster | new_edge_cluster |
All the segments in the workload domain are automatically connected to the new Tier-0 gateway.
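Re-linking the Tier-1 gateway can be scripted through the Policy API, as in the following sketch. The manager FQDN, credentials, Tier-1 ID, locale service ID, and edge cluster path are placeholders; use the ID of the existing locale service on your Tier-1 gateway.

```python
import requests

NSX = "https://nsx-mgr.example.local"        # placeholder NSX Manager FQDN
AUTH = ("admin", "nsx_admin_password")       # placeholder credentials
T1 = f"{NSX}/policy/api/v1/infra/tier-1s/tier1_gateway"   # placeholder Tier-1 ID

# Re-link the Tier-1 gateway to the new Tier-0 gateway.
requests.patch(T1, auth=AUTH, verify=False, json={
    "tier0_path": "/infra/tier-0s/tier0-01",
}).raise_for_status()

# Move the Tier-1 services to the new edge cluster.
# "default" is a placeholder; patch the existing locale service ID instead.
requests.patch(f"{T1}/locale-services/default", auth=AUTH, verify=False, json={
    "edge_cluster_path": "/infra/sites/default/enforcement-points/default/"
                         "edge-clusters/NEW_EDGE_CLUSTER_ID",
}).raise_for_status()
```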
To remove the legacy edge cluster, delete the legacy Tier-0 gateway, remove the edge nodes from the edge cluster, and then delete the edge nodes and the edge cluster. You can delete edge nodes only after you remove them from the edge cluster.
In a Web browser, log in to the NSX Manager cluster.
This update does not impact the SDDC design and implementation, ensures interoperability, and introduces bug fixes.
Prerequisites
Before you upgrade the virtual infrastructure layer of the SDDC, verify that your existing VMware Cloud Foundation environment meets certain general prerequisites.