How to identify the worker's network interface that is being used by a particular pod (in NSX-T env)


Article ID: 298614


Updated On:

Products

VMware Tanzu Kubernetes Grid Integrated Edition

Issue/Introduction

This article describes how to identify the worker node's network interface used by a particular pod on TKGI with NSX-T. The procedure does not require you to exec into a running container, so it is particularly useful when the container image does not include a shell (e.g., bash, sh, zsh, csh).

If you are not using NSX-T, or if the container does have a shell available, you may also find the following KB useful and easier:
How to get tcpdump for containers inside Kubernetes pods

Environment

Product Version: 1.7

Resolution

Follow the steps below to identify the specific network interface name on each worker node for the pods in a given namespace.

 

1) List the pods along with the nodes they are running on

    $ kubectl -n xtestxx get pods -o wide
    NAME                    READY  STATUS   RESTARTS  AGE    IP           NODE                                  NOMINATED NODE  READINESS GATES
    nginx-6db489d4b7-v7lv9  1/1    Running  0         33m    172.##.##.#  8567fa9b-8d56-4543-8773-688d1a16f0c5  <none>          <none>
    redis-5c7c978f78-ppscp  1/1    Running  0         8m10s  172.##.##.#  8567fa9b-8d56-4543-8773-688d1a16f0c5  <none>          <none>
    $
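
If you already know the pod name, an optional shortcut is to print just its node directly with a jsonpath query (the pod name below is the example pod from this article):

    $ kubectl -n xtestxx get pod nginx-6db489d4b7-v7lv9 -o jsonpath='{.spec.nodeName}'
    8567fa9b-8d56-4543-8773-688d1a16f0c5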

 

2) List the nodes and correlate them with the node(s) hosting the pods from step 1

    $ kubectl get nodes -o wide
    NAME                                  STATUS  ROLES   AGE   VERSION           INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION      CONTAINER-RUNTIME
    8567fa9b-8d56-4543-8773-688d1a16f0c5  Ready   <none>  6d2h  v1.17.8+vmware.1  172.##.###.#  172.##.###.#  Ubuntu 16.04.7 LTS  4.15.0-112-generic  docker://19.3.5
    $
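
If you only need a node's internal IP address (used to match against the bosh vms output in the next step), a jsonpath query such as the following can be used; the node name is the example node from step 1:

    $ kubectl get node 8567fa9b-8d56-4543-8773-688d1a16f0c5 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
    172.##.###.#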

 

3) SSH into each node that hosts the pods of interest. To find the right worker instance, correlate the INTERNAL-IP from the node list (step 2) with the IPs column of the bosh vms output.

    $ bosh -d service-instance_2fc6b439-876e-4a6d-b3cf-64948f3682bd vms
    Using environment '172.33.0.11' as client 'ops_manager'
    Task 2132. Done
     
    Deployment 'service-instance_2fc6b439-876e-4a6d-b3cf-64948f3682bd'
    Instance                                    Process State AZ  IPs          VM CID                                  VM Type     Active Stemcell
    master/a9c86ecd-3209-4bcd-ae33-02d6f20769af running       az1 172.##.###.# vm-31fc8878-####-####-b1ec-47028ec94b6c medium.disk true   bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.82
    worker/1fbbf452-f90f-4ee9-a788-9ecc3bb86f52 running       az1 172.##.###.# vm-1d904d17-####-####-801d-e3b8efe24783 medium.disk true   bosh-vsphere-esxi-ubuntu-xenial-go_agent/621.82
    2 vms
    Succeeded
    $
     
    $ bosh -d service-instance_2fc6b439-####-####-b3cf-64948f3682bd ssh worker/1fbbf452-f90f-4ee9-a788-9ecc3bb86f52
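
If the deployment has many workers, you can optionally narrow the bosh vms output to the node identified in step 2 by filtering on its internal IP (the grep pattern below uses the masked example address from this article):

    $ bosh -d service-instance_2fc6b439-876e-4a6d-b3cf-64948f3682bd vms | grep 172.##.###.#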

 

4) Become root, then run the following nsxcli command to retrieve the pod's container cache entry. Make sure to replace 'xtestxx.nginx' with '<namespace>.<pod-name-prefix>'.

    $ sudo -i
     
    $ /var/vcap/jobs/nsx-node-agent/bin/nsxcli -c get container-caches | grep xtestxx.nginx -A 9
    nsx.xtestxx.nginx-6db489d4b7-v7lv9:
       interfaces:
           eth0:
               cif_id: b8cdb021-2022-4da7-a338-c23b66d9e06b
               default: True
               gateway_ip: 172.34.16.1/24
               ip: 172.##.##.#/24
               mac: 04:50:56:00:e8:8f
               vlan_id: 16
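
If you are not sure of the exact pod name prefix, an optional variation is to list all cached containers for the namespace and pick the right entry from there:

    $ /var/vcap/jobs/nsx-node-agent/bin/nsxcli -c get container-caches | grep nsx.xtestxx -A 9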

 

5) Run the following to get the interface name. Make sure to replace 'nginx-6db489d4b7-v7lv9' with the pod name shown on the first line of the output from step 4.

    $ alias ovs-vsctl="/var/vcap/packages/openvswitch/bin/ovs-vsctl --db=unix:/var/vcap/sys/run/openvswitch/db.sock"
    $ ovs-vsctl show | grep nginx-6db489d4b7-v7lv9 -A 2
    Port "nginx-6db489d4b7-v7lv9_7e6fc54af32cf20"
       tag: 16
       Interface "7e6fc54af32cf20"
     
    $ ifconfig 7e6fc54af32cf20
    7e6fc54af32cf20 Link encap:Ethernet HWaddr ae:cb:2e:1b:6c:f7
             UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
             RX packets:50 errors:0 dropped:0 overruns:0 frame:0
             TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:5016 (5.0 KB) TX bytes:43521 (43.5 KB)
    $
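
As an optional cross-check, the iproute2 tooling can be used instead of ifconfig to confirm that the interface exists and is up (the interface name is the example from this article):

    $ ip link show 7e6fc54af32cf20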

 

6) You can now run tcpdump on this interface to capture the pod's traffic.

    $ tcpdump -i 7e6fc54af32cf20
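
For longer captures or offline analysis in Wireshark, you may prefer to limit the number of packets and write the capture to a file; the packet count and file path below are arbitrary examples. The resulting pcap can then be copied off the worker (for example with bosh scp) for analysis:

    $ tcpdump -i 7e6fc54af32cf20 -nn -c 1000 -w /tmp/nginx-pod.pcap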