NSX segments in failed status with error can not modify IP/MAC pool id

Article ID: 415545

Updated On:

Products

VMware NSX

Issue/Introduction

When looking at "Networking > Segments," a segment shows a "Failed" status, and clicking on "Failed" displays a message similar to:

Realization Errors: LogicalSwitch LogicalSwitch/########-####-####-####-############ still have logical ports attached, can not modify IP/MAC pool id.

Environment

VMware NSX

Resolution

Re-add the IP pool to the segment through the API:

1. GET https://<manager_ip>/policy/api/v1/infra/segments/seg_####2-2##f-4##3-9##1-6####106c##_0

2. Add the IP pool policy path (address_pool_paths) to the advanced_config section of the segment body returned in step 1:

{
        "_create_time": 1718111334482,
        "_create_user": "pks-b###f1-6##b-4##5-a##c-0####1a",
        "_last_modified_time": 1744147416769,
        "_last_modified_user": "admin",
        "_protection": "REQUIRE_OVERRIDE",
        "_revision": 30,
        "_system_owned": false,
        "admin_state": "UP",
        "advanced_config": {
          "address_pool_paths": [
            "/infra/ip-pools/ipp_d3###32-2##f-4##3-9##1-6#######cf2_0"
          ],
          "connectivity": "ON",
          "hybrid": false,
          "inter_router": false,
          "local_egress": false,
          "urpf_mode": "STRICT"
        },
        "connectivity_path": "/infra/tier-1s/pks-b####f1-6##b-4##5-a##c-0e####801a-cluster-router",
        "display_name": "seg-pks-pks-b####f1-6##b-4##5-a##c-0e####801a-pks-system-0",
        "id": "seg_d####32-2##f-4##3-9##1-62#####6cf2_0",
        "marked_for_delete": false,
        "overlay_id": 65557,
        "overridden": false,
        "owner_id": "e####5f-4##c-4##e-8##9-5######50e",
        "parent_path": "/infra",
        "path": "/infra/segments/seg_d#####32-2##f-4##3-9##1-62#####6cf2_0",
        "realization_id": "a####a5-5##4-4##7-a##7-9d6####09fb",
        "relative_path": "seg_d####32-2##f-4##3-9##1-62#####6cf2_0",
        "remote_path": "",
        "replication_mode": "MTEP",
        "resource_type": "Segment",
        "subnets": [
          {
            "gateway_address": "172.##.5.#/24",
            "network": "##.32.#.0/24"
          }
        ],
        "tags": [
          {
            "scope": "ncp/version",
            "tag": "1.2.0"
          },
          {
            "scope": "ncp/cluster",
            "tag": "pks-b####f1-6##b-4##5-a##c-0e####801a"
          },
          {
            "scope": "ncp/project",
            "tag": "pks-system"
          },
          {
            "scope": "external_id",
            "tag": "d####2-2##f-4##3-9##1-62#####cf2"
          },
          {
            "scope": "ncp/project_uid",
            "tag": "d####2-2##f-4##3-9##1-62#####cf2"
          },
          {
            "scope": "kubernetes.io/metadata.name",
            "tag": "pks-system"
          },
          {
            "scope": "logs",
            "tag": "true"
          },
          {
            "scope": "metrics",
            "tag": "true"
          },
          {
            "scope": "nodeExporter",
            "tag": "true"
          }
        ],
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/b#####81-4##7-4##8-a2##2-07####bdf7",
        "type": "ROUTED",
        "unique_id": "a####5-5##4-4##7-a##7-9######fb"
      }

 

3. PUT https://<manager_ip>/policy/api/v1/infra/segments/seg_d####2-2##f-4##3-9##1-6#####cf2_

Send the modified body from step 2 as the payload of this PUT request.
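The body edit in step 2 can be sketched as follows. This is a minimal illustration only: the segment ID and ip-pool path below are placeholders (the real values are masked in this article), and the trimmed dictionary stands in for the full GET response body.

```python
import json

# Hedged sketch of step 2: re-adding the ip-pool policy path to the segment
# body returned by the GET in step 1. The IDs below are placeholders, not the
# (masked) values from this article; use the IDs from your own environment.
segment_body = {
    "id": "seg_00000000-0000-0000-0000-000000000000_0",
    "resource_type": "Segment",
    "advanced_config": {
        "connectivity": "ON",
        "urpf_mode": "STRICT",
    },
}

# Re-add the address_pool_paths entry that the realization error complains about.
segment_body["advanced_config"]["address_pool_paths"] = [
    "/infra/ip-pools/ipp_00000000-0000-0000-0000-000000000000_0"
]

# The full, modified body is then sent back in step 3 with:
#   PUT https://<manager_ip>/policy/api/v1/infra/segments/<segment_id>
print(json.dumps(segment_body, indent=2))
```

The rest of the body (tags, subnets, revision, and so on) must be kept exactly as returned by the GET, since the PUT replaces the whole segment object.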