Migration of NSX TEPs to Static IP Pools from DHCP on VCF 5.2


Article ID: 399413


Products

VMware Cloud Foundation

Issue/Introduction

VCF 5.2 supports bring-up and workload domain deployment with either DHCP or Static IP Pools for NSX host TEPs. Customers running VCF 5.2 who want to change the IP assignment method, for instance to replace DHCP with Static IP Pools so that VCF can manage the process, including future cluster expansions, can use this KB article.

The following is a manual procedure to migrate existing host TEPs from DHCP to Static IP Pools, or vice versa. If you use Static IP Pools, the TEP assignment for future cluster expansion workflows will automatically select available IPs from the pools.
This applies to both VCF on Ready Nodes and VCF on VxRail at version 5.2, including stretched clusters.

Environment

VMware Cloud Foundation 5.2

Resolution

Migration of NSX Host TEPs to Static IP Pools from DHCP on VCF 5.2

Two use-cases:

  • Greenfield deployment of VCF 5.2, or upgrade from VCF 4.x to VCF 5.2, in a single site or multiple Availability Zones, where NSX Transport Node Profiles (TNP) are attached to the vSphere clusters
  • Upgrade from VCF 4.x to VCF 5.2 in a single site or multiple Availability Zones, where the NSX Transport Node Profiles (TNP) are detached from the vSphere clusters

 

General notes:

  • SDDC Manager does not store the Transport Node Profile configuration in its database. Therefore, you only need to update the Transport Node Profiles in NSX.
  • Some of these operations disrupt VM traffic. Plan this procedure during a maintenance window; however, hosts do not need to be placed in Maintenance Mode. (A sketch for recording the current TEP assignments before you begin follows these notes.)
  • Both use-cases are compatible with the two vSphere lifecycle management methods: VMware Update Manager (VUM) and vSphere Lifecycle Manager (vLCM) Images. Note that when using the vLCM Image method, a Transport Node Profile (TNP) must be attached.
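
The following Python sketch records the current DHCP-assigned TEP addresses before you begin, so you can compare them after the migration. It assumes access to the NSX Manager API; the NSX FQDN and credentials are placeholders, and the endpoints should be verified against the API guide for your NSX version.

    import requests

    NSX = "https://nsx-mgmt.example.local"   # placeholder NSX Manager FQDN
    session = requests.Session()
    session.auth = ("admin", "<password>")   # placeholder credentials
    session.verify = False                   # lab only; keep certificate checks in production

    # List all host transport nodes, then read each node's runtime state,
    # which reports the TEP addresses currently in use.
    nodes = session.get(f"{NSX}/api/v1/transport-nodes").json().get("results", [])
    for node in nodes:
        state = session.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state").json()
        for hs in state.get("host_switch_states", []):
            for ep in hs.get("endpoints", []):
                print(node["display_name"], hs.get("host_switch_name"), ep.get("ip"))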

 

NSX has Transport Node Profiles (TNP) attached to the vSphere clusters

Note: in the case of multiple Availability Zones, the TNP will include a Sub-TNP.

  1. Create IP Pools in NSX Manager (Networking -> IP Address Pools -> Add IP Address Pool). If the vSphere cluster is not stretched, create one IP Pool. If it is stretched, create two pools, one for each Availability Zone. (A scripted equivalent of Steps 1-8 and 10-14 appears after this list.)
  2. For the first Availability Zone, enter a Name (e.g., Static-TEP-Pool-A), then click on the Set link under the Subnets column.
  3. On the Set Subnets properties page, click the ADD SUBNET button and choose IP Ranges.
  4. On the IP Ranges / Block column, click on the IP Ranges property box, then enter an IP Range that does not conflict with the addresses administered by the DHCP server (e.g., 172.16.30.200-172.16.30.249). Press Enter.

    Note: if you don't have enough addresses available in that network, follow the procedure described later in this article to change the networks used for host TEPs.
  5. In the CIDR property box enter the CIDR (e.g., 172.16.30.0/24).
  6. On the Gateway IP, enter the default gateway address (e.g., 172.16.30.1).
  7. When finished, click ADD, then APPLY, and SAVE on the IP Address Pools page.
  8. Verify that the IP Address Pool Status column indicates Success for the newly created IP Address Pool.
  9. If the cluster is stretched, repeat the steps for the second Availability Zone. For example:
    Name: Static-TEP-Pool-B
    Range: 172.16.31.200-172.16.31.249
    Gateway: 172.16.31.1
  10. Identify the Transport Node Profile applied to the vSphere cluster (System -> Fabric -> Hosts -> Clusters), shown under the Applied Profile column. Then click on Transport Node Profile, find the Transport Node Profile, and click Edit. Then click on the number under Host Switch.
  11. On the Host Switch properties page, click Edit on the host switch definition. On the IPv4 Assignment property drop-down, replace Use DHCP with Use IP Pool. Then, on IPv4 Pool, select the first IP Pool created before (e.g., Static-TEP-Pool-A).
  12. If the vSphere cluster is stretched, scroll down on the same page and expand SUB-TRANSPORT NODE PROFILE, then click on the number for Sub-Transport Node Profile (Sub-TNP). Otherwise, jump to Step 14.
  13. Edit the Sub-TNP associated to the second Availability Zone and repeat Step 11 to replace DHCP with the second IP Pool created (e.g., Static-TEP-Pool-B). Click on ADD and APPLY.
  14. Click on ADD and APPLY. Then SAVE to finish the configuration of the Transport Node Profile.
  15. Click on Clusters and expand the vSphere cluster. The new settings of the Transport Node Profile will be applied to all hosts of the cluster automatically. Monitor the progress under the Node Status column.

    Note: this step can cause minor disruptions to the communications of VMs that use NSX segments while the host TEPs are reconfigured.
  16. Verify that IPs from the IP Pools have been assigned to the hosts in the vSphere cluster. In the Clusters view, expand the vSphere cluster and examine the TEP IP Address column.
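
For reference, Steps 1-8 (pool creation) and Steps 10-14 (TNP update) can also be scripted. The following Python sketch uses the NSX Policy API for the pool and the Manager API for the profile; all names, credentials, and IDs are placeholders, and the payload fields should be verified against the API guide for your NSX version.

    import requests

    NSX = "https://nsx-mgmt.example.local"   # placeholder NSX Manager FQDN
    s = requests.Session()
    s.auth = ("admin", "<password>")         # placeholder credentials
    s.verify = False

    # Steps 1-8: create the IP Pool and its subnet (Policy API).
    s.patch(f"{NSX}/policy/api/v1/infra/ip-pools/Static-TEP-Pool-A",
            json={"display_name": "Static-TEP-Pool-A"})
    s.patch(f"{NSX}/policy/api/v1/infra/ip-pools/Static-TEP-Pool-A/ip-subnets/subnet-1",
            json={"resource_type": "IpAddressPoolStaticSubnet",
                  "allocation_ranges": [{"start": "172.16.30.200",
                                         "end": "172.16.30.249"}],
                  "cidr": "172.16.30.0/24",
                  "gateway_ip": "172.16.30.1"})

    # Steps 10-14: point the TNP host switch at the new pool (Manager API).
    # Read-modify-write: fetch the profile, swap the IP assignment, write it back.
    pools = s.get(f"{NSX}/api/v1/pools/ip-pools").json()["results"]
    pool_id = next(p["id"] for p in pools if p["display_name"] == "Static-TEP-Pool-A")

    tnp_id = "<tnp-uuid>"                    # placeholder: the profile identified in Step 10
    tnp = s.get(f"{NSX}/api/v1/transport-node-profiles/{tnp_id}").json()
    for hs in tnp["host_switch_spec"]["host_switches"]:
        hs["ip_assignment_spec"] = {"resource_type": "StaticIpPoolSpec",
                                    "ip_pool_id": pool_id}
    s.put(f"{NSX}/api/v1/transport-node-profiles/{tnp_id}", json=tnp)

For a stretched cluster, the Sub-TNP inside the profile body needs the same ip_assignment_spec change with the second pool (e.g., Static-TEP-Pool-B), which is simpler to do in the UI as described in Steps 12-13.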


NSX does not have Transport Node Profiles (TNP) attached to the vSphere clusters

  1. Create IP Pools in NSX Manager (Networking -> IP Address Pools -> Add IP Address Pool). If the vSphere cluster is not stretched, create one IP Pool. If it is stretched, create two Pools, one for each Availability Zone.
  2. For the first Availability Zone, enter a Name (e.g., Static-TEP-Pool-A), then click on the Set link under the Subnets column.
  3. On the Set Subnets properties page, click the ADD SUBNET button and choose IP Ranges.
  4. On the IP Ranges / Block column, click on the IP Ranges property box, then enter an IP Range that does not conflict with the addresses administered by the DHCP server (e.g., 172.16.30.200-172.16.30.249). Press Enter.

    Note: if you don't have enough addresses available in that network, follow the procedure described later in this article to change the networks used for host TEPs.
  5. In the CIDR property box enter the CIDR (e.g., 172.16.30.0/24).
  6. On the Gateway IP, enter the default gateway address (e.g., 172.16.30.1).
  7. When finished, click ADD, then APPLY, and SAVE on the IP Address Pools page.
  8. Verify that the IP Address Pool Status column indicates Success for the newly created IP Address Pool.
  9. If the cluster is stretched, repeat the steps for the second Availability Zone. For example:
    Name: Static-TEP-Pool-B
    Range: 172.16.31.200-172.16.31.249
    Gateway: 172.16.31.1
  10. There should be an existing TNP in NSX for each vSphere cluster, created when the cluster was configured and later detached, which can be reused now. Otherwise, create a new Transport Node Profile.
    Note: if you use SDDC Manager to expand the vSphere cluster with additional hosts, it automatically creates a new TNP and attaches it to the cluster. In this case, jump to Step 11 and edit the newly attached TNP.
    Identify the Transport Node Profile used to configure the vSphere cluster (System -> Fabric -> Hosts -> Transport Node Profile). SDDC Manager uses the vSphere cluster name to name TNPs. The TNP will show 0 under the Applied Cluster column.
  11. Click on Edit. Then click on the number under Host Switch.
  12. On the Host Switch properties page, click Edit on the host switch definition. On the IPv4 Assignment property drop-down, replace Use DHCP with Use IP Pool. Then, on IPv4 Pool, select the first IP Pool created before (e.g., Static-TEP-Pool-A).
  13. If the vSphere cluster is stretched, scroll down on the same page and expand SUB-TRANSPORT NODE PROFILE, then click on Set. Otherwise, jump to Step 22.
  14. Click on ADD SUB-TRANSPORT NODE PROFILE. Enter a name for the Sub-TNP, e.g., profile-mgmt-cluster-2. Select the VDS prepared for NSX in the vSphere cluster.
  15. There should be existing Uplink Profiles in NSX for hosts in Availability Zone 2 (AZ2) for every vSphere cluster that was stretched by SDDC Manager. Identify the correct profile, select it in the Uplink Profile drop-down, and jump to Step 20. Otherwise, create a new Uplink Profile for the hosts in AZ2: click on the three vertical dots and select Create New. (A scripted equivalent of Steps 15-17 and 26 appears after this list.)
  16. On the Create Uplink Profile properties page, enter a name (e.g., uplink-profile-mgmt-cluster-2), the host TEP VLAN, and then click on Set for Teamings.
  17. On the Set Teamings properties page, select a Teaming Policy for the Default Teaming, e.g., Load Balance Source. Then, for the Active Uplinks, add as many arbitrary labels, separated by commas, as there are uplinks defined in the VDS selected above. In most cases, you will use the same names used in the main TNP. For instance, if the VDS prepared for NSX has two uplinks, enter uplink-1, uplink-2. Then press Enter, ADD, and APPLY.

    Note: you can add additional named Teamings, although they will not be used by the hosts.
  18. Click on SAVE on the Create Uplink Profile properties page.
  19. Returning to the Sub-Transport Node Profile properties page, ensure that the Uplink Profile drop-down shows the profile just created.
  20. Select IPv4 for IP Address Type and Use IP Pool for IPv4 Assignment. Then, select the second IP Pool created (e.g., Static-TEP-Pool-B).
  21. On the Teaming Policy Uplink Mapping, map the strings entered in Step 17 (new Uplink Profile) or the strings created by SDDC Manager (existing Uplink Profile) to the appropriate VDS Uplinks. Click on ADD and APPLY.
  22. Click on ADD and APPLY. Then SAVE to finish the configuration of the Transport Node Profile.
  23. Click on the Clusters tab and select the vSphere cluster to configure with the updated TNP. If the vSphere cluster is stretched, click on Set in the Sub-cluster column. Otherwise, jump to Step 26.
  24. On the Sub-Cluster properties page, click ADD SUB-CLUSTER. Enter a name (e.g., mgmt-cluster01-profile1). Click on Set under Nodes.
  25. On the Set Host Nodes properties page, select the hosts that belong to the second Availability Zone. Click on APPLY, SAVE and CLOSE.
  26. Returning to the Clusters tab, select the vSphere cluster to configure with the updated TNP again. Click on CONFIGURE NSX.
  27. On the NSX Installation properties page, select the updated TNP.
  28. If the vSphere cluster is stretched, click on the Sub-Cluster Nodes tab, expand the Sub-Cluster name created before (e.g., mgmt-cluster01-profile1) and select the Sub-TNP name created for AZ2 (e.g., profile-mgmt-cluster-2). Click on SAVE.
  29. The new settings of the Transport Node Profile will be applied to all hosts of the cluster automatically. Monitor the progress under the Node Status column.

    Note: this step can cause minor disruptions to the communications of VMs that use NSX segments while the host TEPs are reconfigured.
  30. Verify that IPs from the IP Pools have been assigned to the hosts in the vSphere cluster. In the Clusters view, expand the vSphere cluster and examine the TEP IP Address column.
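
Steps 15-17 (Uplink Profile) and Step 26 (cluster attachment) can likewise be scripted against the NSX Manager API, as in the Python sketch below. The VLAN, names, and IDs are placeholders, and the sub-cluster and Sub-TNP wiring of Steps 23-25 and 28 is simpler to complete in the UI.

    import requests

    NSX = "https://nsx-mgmt.example.local"   # placeholder NSX Manager FQDN
    s = requests.Session()
    s.auth = ("admin", "<password>")         # placeholder credentials
    s.verify = False

    # Steps 15-17: Uplink Profile for the AZ2 hosts with the teaming policy,
    # uplink labels, and host TEP VLAN.
    s.post(f"{NSX}/api/v1/host-switch-profiles", json={
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "uplink-profile-mgmt-cluster-2",
        "transport_vlan": 1631,              # placeholder: AZ2 host TEP VLAN
        "teaming": {"policy": "LOADBALANCE_SRCID",   # "Load Balance Source"
                    "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"},
                                    {"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
    })

    # Step 26: attach the updated TNP to the vSphere cluster (CONFIGURE NSX) by
    # creating a transport node collection for the cluster's compute collection.
    ccs = s.get(f"{NSX}/api/v1/fabric/compute-collections").json()["results"]
    cc_id = next(c["external_id"] for c in ccs
                 if c["display_name"] == "mgmt-cluster01")   # placeholder cluster name
    s.post(f"{NSX}/api/v1/transport-node-collections", json={
        "resource_type": "TransportNodeCollection",
        "compute_collection_id": cc_id,
        "transport_node_profile_id": "<tnp-uuid>",   # placeholder: TNP from Step 10
    })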

 

Procedure to change the networks used by host TEPs

If you cannot reuse the networks and/or VLANs used by the DHCP server, for instance because there are not enough IP addresses available for the host TEPs, you can follow this procedure to change the IP networks.

  1. Contact your Network Administrator and obtain the ranges of new IP addresses and/or VLANs to use for host TEPs. Ensure that those ranges are large enough to accommodate all host TEP addresses.
  2. Create new IP Pools in NSX Manager (Networking -> IP Address Pools -> Add IP Address Pool) as described above with the new IP addresses.
  3. Identify the vSphere clusters in NSX that will require the new networks, their TNPs and Uplink Profiles.
  4. If you need to change the VLANs used for the host TEPs, edit the Uplink Profiles assigned to the TNP and Sub-TNP identified earlier (System -> Fabric -> Profiles -> Uplink Profiles).

    Note: VLAN changes are applied immediately, which can momentarily disrupt VM traffic.
  5. Edit the TNP (and Sub-TNP, if the cluster is stretched) and assign the new IP Pools.
  6. Monitor the progress under the Node Status column of the Clusters view (a polling sketch appears after this list).
  7. Verify that IPs from the IP Pools have been assigned to the hosts in the vSphere cluster. In the Clusters view, expand the vSphere cluster and examine the TEP IP Address column.
  8. After successful completion, VM communications will resume.
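
For Steps 6-7, the reconfiguration can also be monitored from the API rather than the UI. This Python sketch polls each transport node until realization completes and then prints the newly assigned TEP addresses; the FQDN and credentials are placeholders, and the state values should be checked against your NSX version's API guide.

    import time
    import requests

    NSX = "https://nsx-mgmt.example.local"   # placeholder NSX Manager FQDN
    s = requests.Session()
    s.auth = ("admin", "<password>")         # placeholder credentials
    s.verify = False

    nodes = s.get(f"{NSX}/api/v1/transport-nodes").json()["results"]
    for node in nodes:
        # Wait for each node to converge, then report its state and TEP IPs.
        while True:
            state = s.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state").json()
            if state.get("state") in ("success", "failed"):
                break
            time.sleep(10)
        teps = [ep.get("ip")
                for hs in state.get("host_switch_states", [])
                for ep in hs.get("endpoints", [])]
        print(node["display_name"], state.get("state"), teps)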

Additional Information