Netflow on VeloCloud Edges

Article ID: 323779


Products

VMware SD-WAN by VeloCloud

Issue/Introduction


NetFlow is one of the most widely used standards for flow data statistics. It was developed to monitor and record all traffic as it passes into or out of an interface. Some of the information we can find includes:

  •     Source IP
  •     Destination IP
  •     Source port
  •     Destination port
  •     Class of service
  •     Layer 3 protocol type
  •     Interface

You can find NetFlow in different versions, the most common are:
 
  •     NetFlow version 5 – the most common version available, although it does not support IPv6 traffic, MAC addresses, VLANs, or other extension fields.

  •     NetFlow version 9 – a template-based standard described in RFC 3954. It supports IPv6 as well as the fields missing from version 5.

  •     NetFlow version 10, aka IPFIX – Internet Protocol Flow Information Export (IPFIX) was created by the IETF. Previously, network professionals relied on the proprietary Cisco NetFlow standard; IPFIX is a more flexible successor to the NetFlow format and allows us to extend flow data with more information about network traffic. It extends version 9 to support variable-length fields (e.g., HTTP hostname or HTTP URL) as well as enterprise-defined fields. IPFIX is backwards compatible with version 9, adding element fields that were not previously available.

VeloCloud supports version 10 ONLY. Edges send the information that can be found in the monitoring charts (link stats and/or application stats) to third-party network monitoring applications so it can be analyzed; this information may help determine the source and destination of the traffic and the amount that was generated.
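Collectors distinguish the export format by the two-byte version field at the start of every message. As a minimal sketch (the function name and field names here are illustrative, not a VMware tool), this parses the 16-byte IPFIX message header defined in RFC 7011: version (always 10 for IPFIX), total length, export time, sequence number, and observation domain ID:

```python
import struct

# IPFIX (NetFlow v10) message header, per RFC 7011 section 3.1:
# version (2B) = 10, length (2B), export time (4B),
# sequence number (4B), observation domain ID (4B)
IPFIX_HEADER = struct.Struct("!HHIII")

def parse_ipfix_header(datagram: bytes) -> dict:
    """Parse the 16-byte IPFIX message header from an exported datagram."""
    version, length, export_time, seq, domain_id = IPFIX_HEADER.unpack_from(datagram)
    if version != 10:
        raise ValueError(f"not IPFIX: version field is {version}")
    return {
        "version": version,
        "length": length,              # total message length in bytes
        "export_time": export_time,    # seconds since the UNIX epoch
        "sequence": seq,
        "observation_domain_id": domain_id,
    }

# Example with a synthetic header for a 308-byte message:
hdr = IPFIX_HEADER.pack(10, 308, 1700000000, 42, 0)
print(parse_ipfix_header(hdr)["length"])  # → 308
```

A version 9 or version 5 datagram fails this check immediately, which is why a v10-only exporter such as a VeloCloud Edge must be paired with a collector that understands IPFIX.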

Environment

VMware SD-WAN by VeloCloud

Resolution

NetFlow can be tested to and from:

  • NetFlow collector reachable through Edge2Edge
  • NetFlow collector reachable through a LAN interface
  • NetFlow collector reachable through WAN
  • NetFlow collector in an OPG subnet

To configure a collector, go to the Netflow Settings area and click the New button at the right side of the Collector table:

  • In the Collector Name text box, enter a unique name for the collector.
  • In the Collector IP text box, enter the IP address of the collector.
  • In the Collector Port text box, enter the port ID of the collector.

In this case we are exporting to the DEMO server:



You do not need to have SNMP enabled on the Edge to utilize NetFlow (it is not directly related to packet sampling), although some collectors do require SNMP access to the device. Enabling SNMP alongside NetFlow helps the collector determine interface names; if SNMP connectivity is not enabled, interfaces may show up only by number (e.g., "Interface 1") instead of by name (e.g., "br-network1").

Some of the top NetFlow collectors that utilize SNMP for collecting additional information are:

  • Solarwinds NPM
  • OpManager by ManageEngine
  • PRTG
  • Nagios

Among others.

In the following example, SolarWinds is the collector, so SNMPv3 was configured as well. Note that the credential test passed and, per the previous image, we are collecting over port 4739:



Very little information is collected, as this is a demo:



Now, here is the configuration from the Edge, first NetFlow and then SNMP:


Useful commands from CLI:


We can verify this information on the CLI using the following commands. Please note that these commands must be run from Secure CLI access, which is available from release x onwards.


edge:CR-AZURE-DEMO:~# debug --netflow_collectors
CollectorId   SegmentId           IP  Port  AllowAll     SourceIP  SourceInterface
0                     0  192.168.1.8  4739      True  192.168.1.4              GE2

 


And:

edge:CR-AZURE-DEMO:~# debug --netflow_intervals
{
  "app_table": 300,
  "flow_link_stats_table": 60,
  "flow_table": 60,
  "interface_table": 300,
  "link_table": 300,
  "tunnel_table": 60,
  "vrf_table": 300
}


Capture of the traffic going out:
edge:CR-AZURE-DEMO:~# tcpdump -i eth1 host 192.168.1.8 and port 4739
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
17:10:36.849818 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1350
17:10:36.849838 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1370
17:10:36.849841 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 308
17:11:41.857742 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1350
17:11:41.857762 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1370
17:11:41.857765 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 308
17:12:26.857981 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1395
17:12:26.858004 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 286
17:12:46.867239 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1350
17:12:46.867268 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1370
17:12:46.867272 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 308
17:13:51.875322 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1350
17:13:51.875341 IP 192.168.1.4.16002 > 192.168.1.8.4739: UDP, length 1385
^C
13 packets captured
13 packets received by filter
0 packets dropped by kernel
edge:CR-AZURE-DEMO:~#
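The tcpdump above confirms datagrams leave the Edge; to confirm they actually arrive, you can run a throwaway listener on the collector host before involving a full collector. A minimal sketch, assuming Python 3 is available on that host (`receive_one_export` is an illustrative name, not a VMware or collector tool):

```python
import socket

def receive_one_export(port: int = 4739, timeout: float = 120.0):
    """Wait for one exported datagram on the collector port; return
    the sender address, payload length, and the NetFlow version field."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))      # listen on the configured collector port
        sock.settimeout(timeout)   # allow at least one export interval
        data, addr = sock.recvfrom(65535)
    # The first two bytes of every NetFlow/IPFIX message carry the
    # version field; VMware SD-WAN Edges export version 10 (IPFIX).
    return addr, len(data), int.from_bytes(data[:2], "big")
```

Run it on the collector host and wait up to one export interval (60 seconds for the flow tables, per the `debug --netflow_intervals` output above); a returned version of 10 and a source address matching the Edge confirm that IPFIX export is reaching the collector.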