"General error has occurred" observed while running upgrade precheck in NSX

Article ID: 402616

Updated On:

Products

VMware NSX

Issue/Introduction

  • An upgrade precheck is being carried out.
  • In the Upgrade Coordinator UI, upgrade precheck fails for all Host Transport Nodes with error:
    Failed to execute Manager connectivity check. [UC] Error in rest call. url= /nsxapi/api/v1/transport-nodes/<host-transport-node-UUID>/status , method= GET , response= { "module_name" : "common-services", "error_message" : "General error has occurred.", "details" : "java.lang.NullPointerException", "error_code" : 100 } , error= 500 : "{<EOL> "module_name" : "common-services",<EOL> "error_message" : "General error has occurred.",<EOL> "details" : "java.lang.NullPointerException",<EOL> "error_code" : 100<EOL>}<EOL>" .
  • When a REST API call is run against any host transport node, it fails with the error "General error has occurred" (a sample call is shown after this list):
    GET /api/v1/transport-nodes/<host-transport-node-UUID>/state
    GET /api/v1/transport-nodes/<host-transport-node-UUID>/status
  • When browsing to the host transport node in the NSX UI, "View Details" is greyed out.
  • In the logs, the following error may be observed:
    /var/log/proton/nsxapi.log 
    2025-06-23T09:28:51.569Z  INFO http-nio-127.0.0.1-7440-exec-37 FabricModuleServiceImpl 85596 FABRIC [nsx@6876 comp="nsx-manager" level="INFO" reqId="########-####-####-####-########034d" subcomp="manager" username="admin"] Fabric module by name: hostprep not found
    2025-06-23T09:28:51.570Z ERROR http-nio-127.0.0.1-7440-exec-37 NsxBaseRestController 85596 SYSTEM [nsx@6876 comp="nsx-manager" errorCode="MP100" level="ERROR" subcomp="manager"]
    java.lang.NullPointerException: null
            at com.vmware.nsx.management.service_fabric.sfm.deployment.service.DeploymentUnitInstanceServiceImpl.getDeploymentUnitInstanceForHostPrepByHostId(DeploymentUnitInstanceServiceImpl.java:536) ~[?:?]
            at com.vmware.nsx.management.service_fabric.sfm.consumer.hostprep.service.HostPrepServiceFabricDeploymentServiceImpl.findDeploymentUnitInstanceFromNodeId(HostPrepServiceFabricDeploymentServiceImpl.java:328) ~[?:?]
            at com.vmware.nsx.management.service_fabric.sfm.consumer.hostprep.service.HostPrepServiceFabricDeploymentServiceImpl.getHostNodeDeploymentStatus(HostPrepServiceFabricDeploymentServiceImpl.java:346) ~[?:?]
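
For reference, a call of the following form reproduces the failure from the command line (illustrative only; the curl client and basic authentication with the admin account are assumptions about your environment, so replace the placeholders with your own values):

    curl -k -u 'admin:<password>' -H 'Accept: application/json' \
      "https://<nsx-manager>/api/v1/transport-nodes/<host-transport-node-UUID>/status"

On an affected system the call returns HTTP 500 with the same body shown in the Upgrade Coordinator error above ("General error has occurred.", error_code 100, details java.lang.NullPointerException).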

Environment

VMware NSX

Cause

  • The API PUT /api/v1/fabric/modules/<fabric-module-UUID> was used in the past to manually modify the fabric modules installed on NSX Manager.
  • This call may have incorrectly altered the existing fabric modules, leaving the fabric module configuration in an inconsistent state (a read-only check is shown below).
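
If you want to review the current state before contacting support, the fabric modules registered on NSX Manager can be listed with the read-only call below (illustrative only; basic authentication with the admin account is an assumption about your environment):

    curl -k -u 'admin:<password>' -H 'Accept: application/json' \
      "https://<nsx-manager>/api/v1/fabric/modules"

In the affected state the returned list may be inconsistent, for example missing the hostprep module that nsxapi.log reports as "not found". Do not attempt to correct the entries with further PUT calls; follow the Resolution section below instead.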

Resolution

If you believe you have encountered this issue, please open a support case with Broadcom Support and refer to this KB article.
For more information, see Creating and managing Broadcom support cases.

Additional Information

If you are contacting Broadcom support about this issue, please provide the following (a combined collection example is shown after this list):

  • NSX Manager support bundles.
  • Output of API: GET https://<nsx-manager>/api/v1/fabric/modules
  • Dump of the FabricModule table from NSX Manager:
    • SSH to any Manager as root.
    • Dump the table into Manager's /tmp directory:
      /opt/vmware/bin/corfu_tool_runner.py --tool corfu-editor -n nsx -o showTable -t FabricModule > /tmp/FabricModule.txt
    • Use scp or a similar tool (for example, WinSCP) to copy the file from the Manager.
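
The requested data can be gathered with commands of the following form (a sketch only; hostnames, credentials, and output paths are placeholders, and basic authentication with the admin account is an assumption about your environment):

    # From a workstation with API access to the NSX Manager:
    curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/fabric/modules" > fabric_modules.json

    # On the NSX Manager, logged in as root over SSH:
    /opt/vmware/bin/corfu_tool_runner.py --tool corfu-editor -n nsx -o showTable -t FabricModule > /tmp/FabricModule.txt

    # Copy the Corfu dump off the Manager (scp shown; WinSCP also works):
    scp root@<nsx-manager>:/tmp/FabricModule.txt .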

For instructions on uploading log bundles, see Handling Log Bundles for offline review with Broadcom support.