How a container to container (C2C) network works


Article ID: 298076


Updated On:

Products

VMware Tanzu Application Service for VMs

Issue/Introduction

This article explains how a container-to-container (C2C) network works.

Resolution

Developers and operators using container-to-container (C2C) networking on Tanzu Application Service (TAS) for VMs often have questions like:

  • Is my C2C traffic encrypted?
  • Can the traffic be monitored by other containers? 
  • Can the traffic be monitored on VMs? 

The quick answers to the questions above are No, No, and Yes, respectively. This Knowledge Base (KB) article demonstrates how C2C networking works between containers.

Deploy source and test apps

1. Create a temporary directory and an empty file to use as the app bits:

$ mkdir test 
$ cd test && touch file


2. `cf push` two test apps whose start command is implemented with the `nc` utility:

$ cf push source-app -p . -b binary_buildpack -u none -c "while true; do echo -e \"HTTP/1.1 200 OK\n\n\" | nc -l -p \$PORT; done"

$ cf push dest-app -p . -b binary_buildpack -u none -c "while true; do echo -e \"HTTP/1.1 200 OK\n\n\" | nc -l -p \$PORT; done"
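The start command above turns each app into a trivial HTTP responder: for every incoming connection, `nc` writes a minimal 200 response and loops. As a local sanity check (no `cf` or TAS foundation needed), you can inspect the bytes that loop emits:

```shell
# Reproduce the response the nc loop writes for each request. A minimal
# HTTP/1.1 response is a status line followed by a blank line that ends the
# header block; curl -I tolerates the missing \r characters.
response="$(printf 'HTTP/1.1 200 OK\n\n')"
printf '%s\n' "$response" | head -n 1
```

This is why `curl -I` against either app later in this article reports `HTTP/1.1 200 OK` with no further headers.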


3. Fetch app GUIDs:

$ cf app source-app --guid
6b817f74-####-7fe5290ffb61

$ cf app dest-app --guid
20e39f68-####-d432d5716f29


4. Fetch container IPs:

$ cf ssh source-app -c "ip addr | grep inet"
    inet 127.0.0.1/8 scope host lo
    inet 10.##.##.4 peer 169.##.##.1/32 scope link eth0

$ cf ssh dest-app -c "ip addr | grep inet"
    inet 127.0.0.1/8 scope host lo
    inet 10.##.##.7 peer 169.##.##.1/32 scope link eth0


5. Fetch Diego Cell IPs:

$ cf curl /v2/apps/$(cf app source-app --guid)/stats | grep host
         "host": "10.##.##.101",

$ cf curl /v2/apps/$(cf app dest-app --guid)/stats | grep host
         "host": "10.##.##.102",


6. (Optional) Map an internal-domain route to the destination app:

$ cf map-route dest-app apps.internal --hostname dest-app


C2C Network Architecture

The traffic from the source app (container B on host 1) to the destination app (container C on host 2) traverses the overlay network between the two Diego Cells.

Allow C2C Traffic with Network Policy

By default, no iptables rules are configured to allow access between containers. To enable access from the source app to the destination app, create the corresponding network policies, which are ultimately applied on the Diego Cells as iptables rules.

(Port 61443 is the internal Envoy TLS port, proxied to 8080; port 8080 is the app's non-TLS port.)

$ cf add-network-policy source-app --destination-app dest-app --port 61443 --protocol tcp
$ cf add-network-policy source-app --destination-app dest-app --port 8080 --protocol tcp
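To verify which policies are in effect, or to revoke access again, the cf CLI provides matching commands. A hypothetical check sequence (assumes a logged-in cf CLI against a TAS foundation; output is illustrative, not shown here):

```shell
# List the C2C policies configured for the source app; the output columns
# are source, destination, protocol, and ports.
cf network-policies --source source-app

# Policies are removed symmetrically when access should be revoked.
cf remove-network-policy source-app --destination-app dest-app --port 8080 --protocol tcp
```

Removing a policy deletes the corresponding iptables rules from the Diego Cells, and traffic between the two apps is dropped again.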


Check iptables on Diego Cells

1. On the Diego Cell which hosts the source app, run the following command:

$ sudo iptables -L
MARK       all  --  10.##.##.4        anywhere             /* src:6b817f74-####-7fe5290ffb61 */ MARK set 0x1


2. On the Diego Cell which hosts the destination app, run the same command:

ACCEPT     tcp  --  anywhere             10.##.##.7        tcp dpt:http-alt mark match 0x1 /* src:6b817f74-####-7fe5290ffb61_dst:20e39f68-####-d432d5716f29 */
ACCEPT     tcp  --  anywhere             10.##.##.7        tcp dpt:61443 mark match 0x4 /* src:6b817f74-####-7fe5290ffb61_dst:20e39f68-####-d432d5716f29 */


The iptables rules added above by the Garden CNI plugin guarantee that only traffic marked with the source app's GUID can reach the destination app on ports 8080 (http-alt) and 61443 (Envoy proxy) over TCP.


Confirm C2C traffic over HTTP and HTTPS

$ cf ssh source-app -c "curl -s -I http://10.##.##.7:8080"
HTTP/1.1 200 OK
$ cf ssh source-app -c "curl -k -s -I https://10.##.##.7:61443"
HTTP/1.1 200 OK
$ cf ssh source-app -c "curl -s -I http://dest-app.apps.internal:8080"
HTTP/1.1 200 OK
$ cf ssh source-app -c "curl -k -s -I https://dest-app.apps.internal:61443"
HTTP/1.1 200 OK


Overlay Implementation

In this test case, packets leaving the source app container are marked with an ID identifying the source CF app (6b817f74-####-7fe5290ffb61, matching the comments in the iptables rules above). The packets can therefore only be routed to the destination app when an iptables rule for that source/destination pair allows it.


About Encryption

TLS and mTLS are not in the scope of C2C networking itself; encryption can be implemented:

  • via the Envoy proxy
  • with a customized port

TLS via Envoy Proxy

By default, each app container opens three ports on the C2C network:

  • 61001 - Envoy mTLS port, proxied to backend port 8080 (app)
  • 61002 - Envoy SSH port, proxied to backend port 2222 (sshd)
  • 61443 - Envoy port reserved for C2C TLS, proxied to backend port 8080 (app)

By accessing the destination app at an Envoy proxy port, the traffic is TLS-encrypted on the wire; the Envoy proxy then communicates with the app process inside the container over plain text. Please be aware of some limitations with this approach.


Container certificate SAN does not contain app internal route

If you are using an app internal domain, the container certificate's SAN will not contain the internal route. The long-term fix is the in-flight work to provide automatic, transparent mTLS everywhere via a sidecar proxy. For now, the workarounds are to disable hostname checking, to use the IP SAN (which means resolving DNS to an IP in your app code), or to skip TLS certificate validation.

$ curl -I https://dest-app.apps.internal:61443
curl: (51) SSL: no alternative certificate subject name matches target host name 'dest-app.apps.internal'
$ curl -k -I https://dest-app.apps.internal:61443
HTTP/1.1 200 OK


(Optional) TLS with a customized port

Open a port on the destination app (for example, 443) on the C2C network and serve TLS on it. The source app connects to the destination app on that port and completes the TLS handshake end to end. With this approach, developers must implement TLS in both the source and destination apps.