NSX-T Gateway Firewall policy sections are in a Down state



Article ID: 324206


Updated On:


VMware NSX Networking


  • The NSX-T Gateway Firewall policy section has an Overall Status of Down
  • In the details section, the Edge Transport Node shows a Failed status with the message "Internal error occurred on transport node <UUID>"
  • Log messages similar to the following examples are observed
  2020-02-05T09:21:30.334Z N00100PNSXM0197 NSX 10090 - [nsx@6876 comp="nsx-manager" level="WARN" reqId="<..>" subcomp="policy" username="admin"] com.vmware.nsx.management.container.exceptions.ConcurrentUpdateException
"failure_message":"Internal error occurred on transport node <UUID>."

  2020-01-14T13:42:53.642Z  WARN Owl-worker-0 StaticRouteTable - - [nsx@6876 comp="nsx-controller" level="WARN" subcomp="NewL3"] Relation remove for fib <UUID> not supported.
  2020-01-14T13:42:53.642Z  WARN Owl-worker-1 Tier1ServiceRouterClusterEntity - - [nsx@6876 comp="nsx-controller" level="WARN" subcomp="NewL3"] Exception while relationship add 00002000-0000-0000-0000-000000000002-> <UUID>: StaticRoute add relationship for a Tier1ServiceRouterCluster is not supported


VMware NSX-T Data Center
VMware NSX-T Data Center 2.x


This issue may occur due to a race condition when the following operations are performed:
 -  A Tier-1 Gateway is disconnected from its Tier-0 Gateway
 -  The Tier-1 Gateway is deleted
When the processing of these events is interleaved, routing information may not be removed correctly from the Controller.
Publication of the Gateway Firewall rules then fails because of the orphaned routing configuration.
Note that in this scenario the firewall rules are still realized correctly; the realization error shown in the UI is caused only by the stale configuration entry.


This is a known issue affecting VMware NSX-T Data Center. Currently, there is no resolution.

The error condition can be cleared by performing the following steps on each NSX-T Manager in turn:

1. SSH to the NSX-T Manager as the admin user
2. > stop service controller
3. > start service controller
4. > get cluster status
5. Confirm that the Controller cluster comes back up before proceeding to the next Manager
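For environments with several Managers, the steps above can be sketched as a small shell helper. This is only an illustrative sketch, not part of the KB: the manager hostnames, the SSH invocation, and the "STABLE" status string it greps for are assumptions you should adapt to your environment.

```shell
#!/bin/sh
# Hypothetical sketch: restart the controller service on each NSX-T Manager
# over SSH, waiting for the cluster to stabilize before moving on.
# The CLI commands come from the KB steps; everything else is an assumption.
restart_controller() {
  mgr="$1"
  ssh admin@"$mgr" "stop service controller"
  ssh admin@"$mgr" "start service controller"
  # Steps 4-5: poll 'get cluster status' until the cluster reports a stable
  # state before touching the next Manager (the status string is an assumption).
  until ssh admin@"$mgr" "get cluster status" | grep -q "STABLE"; do
    sleep 10
  done
}

# Example usage with hypothetical Manager hostnames:
# for mgr in nsx-mgr-01 nsx-mgr-02 nsx-mgr-03; do
#   restart_controller "$mgr"
# done
```

Restarting the Managers one at a time, and verifying cluster status in between, matches the KB's requirement that the Controller cluster be healthy before the next Manager is touched.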