Interfaces configured on a T0 in the NSX Manager UI are not pushed to the Edge

Article ID: 383500

Updated On:

Products

VMware NSX

Issue/Introduction

  • VRF Lite is used within the environment.
  • From within the NSX Manager UI, an interface is successfully configured on a T0 VRF.
  • On the Edge CLI, within the T0 VRF context, "get interfaces" does not return the configured interface.
  • A route for this directly connected interface does not appear in the T0 VRF routing table.

[Edge](tier0_vrf_sr[X])> get route
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, o - OSPF
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR,
ivs: Inter-VRF-Static, > - selected route, * - FIB route
Total number of routes: 2
t0c> * [IPV4] is directly connected, backplane-[ID], [TIME]
t0c> * [IPV6] is directly connected, loopback-[ID], [TIME]
<<<<< interface configured in the UI should have a directly connected route

  • The segment connected to the problematic interface may be using a name that was previously used.
  • For example, a segment was created with the name "test_segment", then deleted and created again with the same name.
  • The logical port may have been deleted previously via the API, for example:

DELETE /api/v1/logical-ports/{{ item }}?detach=true
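
For reference, a deletion of this form could be issued as in the sketch below. This is illustrative only: the NSX Manager FQDN, credentials, and logical port UUID are placeholders, and certificate verification is disabled purely for brevity.

import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "nsx-manager.example.com"   # placeholder NSX Manager FQDN
LOGICAL_PORT_ID = "<logical-port-uuid>"   # placeholder logical port UUID

# Delete the logical port and detach it from its attachment (detach=true).
response = requests.delete(
    f"https://{NSX_MANAGER}/api/v1/logical-ports/{LOGICAL_PORT_ID}",
    params={"detach": "true"},
    auth=HTTPBasicAuth("admin", "<password>"),   # placeholder credentials
    verify=False,                                # example only; do not disable verification in production
)
response.raise_for_status()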

  • NSX Manager logs show the following error:

/var/log/proton/nsxapi.log
[TIMESTAMP]  WARN workerTaskExecutor-1-31 LogicalRouterWorker 85466 ROUTING [nsx@6876 comp="nsx-manager" level="WARNING" subcomp="manager"] Scheduling retry of unprocessed work-items [WorkItem{identifier=LrPort/[UUID], Timestamp{epoch=28, address=[]}}]
[TIMESTAMP] ERROR workerTaskExecutor-1-31 WorkerInvocationTask 85466 POLICY [nsx@6876 comp="nsx-manager" errorCode="PM0" level="ERROR" subcomp="manager"] Exception occurred while processing work-items
com.vmware.nsx.management.policy.workerframework.worker.exception.RetriableException: null
        at com.vmware.nsx.management.edge.publish.worker.LogicalRouterWorker.process_aroundBody0(LogicalRouterWorker.java:323) ~[?:?]
        at com.vmware.nsx.management.edge.publish.worker.LogicalRouterWorker$AjcClosure1.run(LogicalRouterWorker.java:1) ~[?:?]
        at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:149) ~[?:?]
        at io.micrometer.core.aop.TimedAspect.processWithTimer(TimedAspect.java:119) ~[?:?]
        at io.micrometer.core.aop.TimedAspect.ajc$inlineAccessMethod$io_micrometer_core_aop_TimedAspect$io_micrometer_core_aop_TimedAspect$processWithTimer(TimedAspect.java:1) ~[?:?]
        at io.micrometer.core.aop.TimedAspect.timedMethod(TimedAspect.java:97) ~[?:?]

  • NSX Manager logs also show a reference to a stale trunk port:

/var/log/proton/nsxapi.log
[TIMESTAMP] INFO providerTaskExecutor-1-126 LogicalRouterPortUtilsNsxT 85466 POLICY [nsx@6876 comp="nsx-manager" level="INFO" subcomp="manager"] Trunk port /infra/realized-state/enforcement-points/default/logical-ports/[PORT] is still referenced by 1 active access port(s).

Environment

VMware NSX-T Data Center 3.x
VMware NSX 4.x

Cause

When an access port is deleted on an edge node with member index 0, a check is performed for other access ports that are still using the trunk. If no access ports are found, the trunk port is deleted on the edge. The access port query, however, does not check whether an access port actually has an edge member index set. For access ports that have no edge member index set (and instead have an edge path set), a default value of 0 is returned. The query therefore wrongly matches access ports on other edge nodes, which prevents the trunk from being deleted even though there are no other access ports on the same edge node.
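
The flawed check can be sketched roughly as follows. This is not the actual NSX Manager source code; the names (for example effective_member_index, can_delete_trunk, edge_member_index, and edge_path) are hypothetical and only illustrate how a defaulted member index of 0 causes ports on other edge nodes to be counted.

# Illustrative sketch only -- not NSX Manager source code; all names are hypothetical.

def effective_member_index(port):
    # Flaw: a port identified only by an edge path has no member index,
    # but the query reports a default of 0 for it anyway.
    if port.get("edge_member_index") is not None:
        return port["edge_member_index"]
    return 0  # defaulted even when only "edge_path" is set

def can_delete_trunk(trunk_id, all_access_ports, deleted_port):
    # Intended check: are there other access ports on the SAME edge node
    # (member index 0) still using this trunk?
    remaining = [
        p for p in all_access_ports
        if p["trunk_id"] == trunk_id
        and p["id"] != deleted_port["id"]
        and effective_member_index(p) == 0  # wrongly matches ports on other edge nodes
    ]
    # The false matches keep "remaining" non-empty, so the stale trunk is never deleted.
    return len(remaining) == 0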

Resolution

This issue is resolved in VMware NSX 4.2.1.

Workaround: 
When creating an interface on the T0 VRF, attach a segment with a name that has not been used previously.
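
As a partial sanity check, existing segment display names can be listed via the Policy API before choosing a new name. The sketch below is illustrative only: the manager address, credentials, and candidate name are placeholders, certificate verification is disabled for brevity, and the check only confirms the name is not currently in use; it cannot detect names that belonged to segments deleted earlier.

import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "nsx-manager.example.com"   # placeholder NSX Manager FQDN

# List existing segments and collect their display names (simple sketch; no pagination handling).
resp = requests.get(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments",
    auth=HTTPBasicAuth("admin", "<password>"),   # placeholder credentials
    verify=False,                                # example only
)
resp.raise_for_status()
used_names = {seg["display_name"] for seg in resp.json().get("results", [])}

candidate = "t0_vrf_uplink_segment_v2"           # hypothetical new segment name
if candidate in used_names:
    raise SystemExit(f"Segment name '{candidate}' is already in use; pick another.")
print(f"'{candidate}' is not currently in use.")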

Note: This issue may lead to stale entries in the Corfu DB. If this issue is encountered, please open a case with Broadcom Support and reference this KB article.