VMware NSX NDR (Network Detection and Response) UI returns error "400 Bad Request: Request Header Or Cookie Too Large"

Article ID: 322654

Products

VMware NSX
VMware vDefend Network Detection and Response

Issue/Introduction

  • A VMware NSX version between 3.2 and 4.1.2 is running.
  • VMware NSX is configured with LDAP or vIDM integration, and a user from one of these sources is logged in.
  • When accessing the NDR pages of the VMware NSX UI, the pages fail to load and display an error like the one below:
    "400 Bad Request Request Header Or Cookie Too Large nginx/1.18.0 (Ubuntu)"
  • The same error may also be seen in other UI pages for NSX Malware Prevention.

Environment

VMware NSX 4.x
VMware NSX-T Data Center 3.x

Cause

This issue is caused by an HTTP request header that is too large for the NDR components to process. It occurs when an LDAP or vIDM user is a member of many groups, which inflates the size of the headers sent with each request.

Resolution

This issue is resolved in VMware NSX 4.2.0.

Workaround:

To work around this issue on an NSX installation, you will need to edit the configuration of two Kubernetes Deployment objects in the NSX Application Platform, in the nsxi-platform namespace (a command to confirm both are present follows the list):

  • cloud-connector-proxy
  • cloud-connector-file-server
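
Optionally, before editing, confirm that both deployments are present:

# List both deployments; each should appear with its current replica status
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform get deploy cloud-connector-proxy cloud-connector-file-server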

Both of these deployments include an nginx server that, by default, supports a maximum request header size of 8k.
To increase this limit to, for example, 32k, we need to add the "large_client_header_buffers" option to the nginx configuration. The value "4 32k" used below allows nginx to use up to four buffers of 32 KB each when reading large client request headers.

In the commands below, the location of the <kubeconfig file> may vary depending on where NAPP was deployed; it may be /config/vmware/napps/.kube/config on the NSX Manager.
When NAPP is deployed, the napp-k alias is set up on the NSX Manager and can be used there as the root user. If you use it, substitute 'napp-k' for 'kubectl --kubeconfig <kubeconfig file>' in the commands below.
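
For reference, napp-k behaves like a shell alias of the following form. This is a sketch assuming the kubeconfig path mentioned above; the actual definition on your manager may differ:

# Assumed equivalent of the napp-k alias (kubeconfig path per the note above)
alias napp-k='kubectl --kubeconfig /config/vmware/napps/.kube/config'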

Before making changes to the deployments, we will save a copy of their current configuration to the file system:

kubectl --kubeconfig <kubeconfig file> -n nsxi-platform get deploy cloud-connector-proxy -o yaml > cloud-connector-proxy.orig.yaml
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform get deploy cloud-connector-file-server -o yaml > cloud-connector-file-server.orig.yaml
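
If the workaround later needs to be undone, these saved files can be used to restore the original configuration. A sketch, assuming the file names above (if kubectl reports a resourceVersion conflict, delete the resourceVersion line from the saved file and retry):

# Restore the saved deployment configuration
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform replace -f cloud-connector-proxy.orig.yaml
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform replace -f cloud-connector-file-server.orig.yaml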

Next, apply the same change to both deployments by using the "kubectl edit deploy" command:

kubectl --kubeconfig <kubeconfig file> -n nsxi-platform edit deploy cloud-connector-proxy
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform edit deploy cloud-connector-file-server


Each of these commands opens an editor (the one set in the KUBE_EDITOR or EDITOR environment variable, falling back to vi) that allows us to modify the deployment.
In both deployments, locate the TILLER_YAML_PROXY_NGINX_SETTINGS section and add the following two lines:

service_configs:
  large_client_header_buffers: "4 32k"


In context, the modified section will look like this:

  - name: TILLER_YAML_PROXY_NGINX_SETTINGS
    value: |
      global:
        proxy:
          nginx:
            service_configs:
              large_client_header_buffers: "4 32k"
            server_name: "cloud-connector-file-server"

Note: Make sure to indent the added lines correctly (with "service_configs" at same level as "server_name").
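
To confirm the change was applied, you can inspect the rendered variable; the grep context size here is illustrative:

# Show the modified environment variable in the live deployment
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform get deploy cloud-connector-proxy -o yaml | grep -A 6 TILLER_YAML_PROXY_NGINX_SETTINGS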

After the edits are saved, these deployments will be automatically restarted.
Once the restart is complete, the symptoms should be resolved.
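
The progress of the restart can be followed with kubectl's rollout status command:

# Wait for each deployment to finish rolling out its new pods
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform rollout status deploy/cloud-connector-proxy
kubectl --kubeconfig <kubeconfig file> -n nsxi-platform rollout status deploy/cloud-connector-file-server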

Note: This workaround may be reverted by certain configuration changes or upgrade operations, in which case it would need to be applied once again.