Malware Prevention service in the NAPP UI appears down.



Article ID: 399448


Updated On:

Products

VMware NSX, VMware vDefend Firewall, VMware vDefend Firewall with Advanced Threat Prevention

Issue/Introduction

Symptoms:

After logging in to the NSX Manager UI, navigate to System > NSX Application Platform.

You will see that the Malware Prevention service appears to be down.

SSH into the NSX Manager using root credentials and run the following command:

  • napp-k get pods | grep -vi running | grep -vi completed

The output may show pods in a "CrashLoopBackOff" state:

NAME                                                   READY   STATUS             RESTARTS   AGE
cloud-connector-check-license-status-xxx               0/1     CrashLoopBackOff   5 (xxs ago)     22m
cloud-connector-update-license-status-yyy              0/1     CrashLoopBackOff   8 (xxs ago)     22m

 

Describe the failing pod to check the details:

  • napp-k describe pod <cloud-connector-check-license-status-xxx>

 

In the Events section of the describe output, you may see errors similar to the following:

 

Events:
Type     Reason     Age   From               Message
----     ------     ---   ----               -------
Normal   Pulling    22m   kubelet            Pulling image "projects.registry.vmware.com/nsx_application_platform/clustering/nsx-cloud-connector-check-nsx-licensing-status-with-lastline-cloud:xx"
Normal   Scheduled  22m   default-scheduler  Successfully assigned nsx-platform/cloud-connector-check-license-status to nsx-napp-worker-xx
Normal   Pulled     21m   kubelet            Successfully pulled image "projects.registry.vmware.com/nsx_application_platform/clustering/nsx-cloud-connector-check-nsx-licensing-status-with-lastline-cloud:xx" in 11.777s (including waiting)
Warning  Unhealthy  21m   kubelet            Liveness probe failed: 2025-05-22 21:15:09,917 = cli = WARNING - Checking liveness status failed: Failed to read liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  20m   kubelet            Liveness probe failed: 2025-05-22 21:15:40,096 = cli = WARNING - Unable to read from liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  20m   kubelet            Readiness probe failed: 2025-05-22 21:15:40,100 = cli = WARNING - Checking liveness status failed: Failed to read liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  20m   kubelet            Readiness probe failed: 2025-05-22 21:16:09,916 = cli = WARNING - Unable to read from liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  20m   kubelet            Readiness probe failed: 2025-05-22 21:16:10,429 = cli = WARNING - Checking liveness status failed: Failed to read liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  19m   kubelet            Liveness probe failed: 2025-05-22 21:16:40,094 = cli = WARNING - Unable to read from liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  19m   kubelet            Readiness probe failed: 2025-05-22 21:16:40,102 = cli = WARNING - Checking liveness status failed: Failed to read liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  19m   kubelet            Readiness probe failed: 2025-05-22 21:17:09,913 = cli = WARNING - Unable to read from liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Warning  Unhealthy  19m   kubelet            Readiness probe failed: 2025-05-22 21:17:25,413 = cli = WARNING - Checking liveness status failed: Failed to read liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'
Normal   Created    18m   kubelet            Created container check-license-status
Normal   Started    18m   kubelet            Started container check-license-status
Normal   Pulled     18m   kubelet            Successfully pulled image "projects.registry.vmware.com/nsx_application_platform/clustering/nsx-cloud-connector-check-nsx-licensing-status-with-lastline-cloud:xx" already present on machine
Warning  Unhealthy  11m   kubelet            (x50 over 11m) Readiness probe failed: 2025-05-22 21:34:40,089 = cli = WARNING - Checking liveness status failed: Failed to read liveness status file: [Errno 2] No such file or directory: '/tmp/liveness_status'

Check the logs of the failing pod:

  • napp-k logs <pod-name>

The pod logs may contain errors similar to the following:

2025-05-22 21:34:27,798 = default = INFO - Created status file in the config file, use default value: /tmp/liveness_status
2025-05-22 21:34:27,798 = llutils.proc_liveness = INFO - Checking liveness
2025-05-22 21:34:27,799 = nsx_cloud_connector.service.update_cloud_license = INFO - Retrieving serial numbers from NSX manager...

2025-05-22 21:34:30,876 = nsx_cloud_connector.service.common = WARNING - Error while connecting to Lastline cloud. It might be due to a temporary network or client/server side issue. Retrying one more time...
HTTPSConnectionPool(host='nsx.lastline.com', port=443): Max retries exceeded with URL: /nsx/cloud-connector/api/v1/papi/accounting/nsx/update_license.json
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xXXXXXXXXXXXX>: Failed to establish a new connection: [Errno 101] Network is unreachable'))

2025-05-22 21:34:31,912 = PapiClientConfig = ERROR - HTTPSConnectionPool(host='nsx.lastline.com', port=443): Max retries exceeded with URL: /nsx/cloud-connector/api/v1/papi/accounting/nsx/update_license.json
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xXXXXXXXXXXXX>: Failed to establish a new connection: [Errno 101] Network is unreachable'))

[...repeated entries for retries omitted for brevity...]

2025-05-22 21:36:26,120 = main = INFO - Received signal
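
To quickly confirm that both cloud-connector pods report the same connectivity failure, the log check can optionally be narrowed with a filter (a convenience sketch; the pod name is a placeholder):

  • napp-k logs <pod-name> | grep -iE "lastline|unreachable"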

 

Verify Network Connectivity from NSX Manager CLI

Run the following command from the NSX Manager CLI to verify network connectivity to nsx.lastline.com:

  • curl -kv nsx.lastline.com:443

If the response is "Failed to connect to nsx.lastline.com port 443: Network is unreachable", proceed with the checks from the vCenter side as outlined below.
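
Optionally, before moving on, you can separate a DNS failure from a blocked route on the NSX Manager itself (a supplementary check; assumes nslookup is available on the appliance):

  • nslookup nsx.lastline.com

If the name resolves but curl still reports "Network is unreachable", the traffic is being dropped at the routing or firewall layer rather than failing at DNS.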

 

Verify Network Connectivity from the Workload Cluster

To perform this, access the guest (workload) cluster through the supervisor control plane.

  • Log in to the Supervisor Cluster:
    • SSH into the vCenter Server Appliance, log in as root, and switch to shell mode:
      shell
      
    • Retrieve the Supervisor Control Plane (SCP) IP address and credentials:
      /usr/lib/vmware-wcp/decryptK8Pwd.py
      
    • Example output:
      Cluster: domain-c46:def22104-2b40-4048-b049-271b1de46b94  
      IP: 10.99.2.10  
      PWD: 3lnCN5ccPhg0cl1WQTZTGNzL[...]  
      
    • SSH into the Supervisor Cluster using the retrieved IP and password:
      ssh root@<supervisor-control-plane-ip>
  • Log in to the Guest Cluster from the Supervisor Cluster:
    • List available Guest Clusters:
      kubectl get tkc -A
      
    • Retrieve the SSH password for the Guest Cluster:
      kubectl get secrets <guest-cluster-name>-ssh-password -n <namespace> -o yaml
      
    • Example output:
      apiVersion: v1
      data:
        ssh-passwordkey: S2J1OWNCZ01XbXNzMm1JaW1GMmJxMTZnNHV0YjFWYUdYS2FkQjVVcmpUYz0=
      
    • Decode the password:
      echo <copied-ssh-passwordkey> | base64 -d
      
    • Save the decoded password for use (a combined one-liner sketch follows this list).
  • SSH into the Guest Cluster Worker Node:
    • List the machines in the Supervisor Cluster to identify the worker node's IP:
      kubectl get vm -A -o wide
      
    • SSH into the Guest Cluster worker node:
      ssh vmware-system-user@<worker-node-ip>
      
    • Enter the password obtained earlier to access the Guest Cluster.
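
Optionally, the password retrieval and decoding steps above can be combined into a single command on the Supervisor Cluster (a sketch using standard kubectl JSONPath output; the secret name and namespace are placeholders):

  • kubectl get secret <guest-cluster-name>-ssh-password -n <namespace> -o jsonpath='{.data.ssh-passwordkey}' | base64 -d; echo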

Test Connectivity from the Guest Cluster

Once logged in to the Guest Cluster control plane or worker node, run the following command to check network connectivity to nsx.lastline.com:

  • curl -kv nsx.lastline.com:443

You will see a similar error: "Failed to connect to nsx.lastline.com port 443: Network is unreachable".
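
As a lighter-weight alternative to SSHing into the worker node, the same check can also be run from inside the NAPP cluster by executing it in any pod that is still Running (a sketch; assumes curl is present in that pod's image and uses a placeholder pod name):

  • napp-k exec -it <running-pod-name> -- curl -kv nsx.lastline.com:443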

 

If any of the above symptoms do not match, this article does not apply to your issue.

Environment

VMware vDefend Firewall with Advanced Threat Prevention

NAPP 4.2.0.1

Cause

This issue can occur when network access from the NSX Manager and the workload network to nsx.lastline.com on TCP port 443 is blocked by a firewall, proxy, or security device. The NSX Application Platform requires this connection for malware signature updates and threat intelligence. Without it, the cloud-connector pods fail their health checks and crash repeatedly.

Resolution

Collaborate with Network/Security Teams

  • Allow network access from NSX Manager and workload network to nsx.lastline.com on port 443.

  • If a proxy is used, ensure it is configured to permit this traffic.

  • Whitelist the domain on any firewall or security gateway.
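
Once the change is in place, re-running the earlier connectivity check from the NSX Manager CLI should succeed (illustrative; exact output varies):

  • curl -kv nsx.lastline.com:443

A successful check reports "Connected to nsx.lastline.com" instead of "Network is unreachable".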

After restoring access, check pod status again:

napp-k get pods | grep -vi running | grep -vi completed

Pods should transition to a Running state.
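
Optionally, check the cloud-connector pods specifically (a placeholder filter; the license-status pods are short-lived jobs and may show Completed rather than Running):

  • napp-k get pods | grep -i cloud-connector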

Confirm Platform Status

Log in to the NSX Manager UI and navigate to:

System > NSX Application Platform

Confirm that Malware Prevention is up and the platform status is stable.

Additional Information

If issues persist after validating network connectivity, please contact Broadcom Support.