How can I find out what NetFlow version my nfcapd is capturing?

What version are my NetFlows?
I have an appliance exporting NetFlow to my collector, which is collecting with nfcapd. The only information I can find is that nfcapd captures different NetFlow versions "transparently".
My network appliance doesn't tell me which version it is exporting, and I need to evaluate a different NetFlow collector, so I'm trying to get an idea of my requirements.
I could contact the vendor of the network appliance, but I have several appliances exporting NetFlow, so I would prefer to check on the collector end which version these flows are. Is there a way to do this with the nfsen/nfcapd/nfdump tools? I'm not having any luck.

There are really only two versions it's likely to be: NetFlow v5 or NetFlow v9 (IPFIX is essentially a standardized successor to v9 and carries version number 10). The version number is included in the datagram, so the easiest way to find out which version the appliance is exporting is to sniff the traffic with something like Wireshark, which will dissect it as CFLOW. The first two bytes of each datagram are the version number.
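
If you'd rather not run Wireshark on the collector, a few lines of Python can read that version field directly. This is a minimal sketch rather than part of the nfdump tool set; it assumes your exporters send to UDP port 9995 (nfcapd's default) and that nfcapd is stopped so the port is free:

import socket
import struct

# Minimal sketch: bind to the NetFlow export port (9995 is nfcapd's default)
# and print the version field of each incoming datagram. Stop nfcapd first,
# otherwise it will already own the port.
LISTEN_PORT = 9995  # assumption: change to whatever port your exporters use

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LISTEN_PORT))

while True:
    data, (src, _) = sock.recvfrom(65535)
    if len(data) >= 2:
        # The version is the first big-endian 16-bit field of the header:
        # 5, 7 or 9 for NetFlow, 10 for IPFIX.
        version = struct.unpack("!H", data[:2])[0]
        print(f"{src} is exporting version {version}")

Each exporter only needs to send one datagram for its version to show up, so a few seconds of listening is usually enough.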

Related

storing NetFlow v9 in a time series database

I am looking for a recommendation for a free Linux tool that can collect NetFlow v9 traffic and store the parsed data in a time series database for further analysis. I don't need analysis capabilities, just good, reliable collection and storage performance.

How to collect binary [log] files over https from 1000+ Win/Linux machines?

I need to collect performance/tracing/config binary [log] files (they use proprietary formats, like winperf .BLG or sqlserverxe .XEL) over https from 1000+ Win/Linux servers distributed across public and private clouds (i.e. not in the same LAN/VPN/Active Directory). For political reasons I'm nailed to these binary formats as well as to https and I'm not allowed to transform the files on site and send out the resulting data. I can ensure the files are rolled over every ~10 minutes, to keep their sizes ~manageable (up to 100 MB with average < 10 MB).
QUESTION: Is there any FOSS solution out there which can be modified or is already capable of collecting all these files in a centralized location?
I'm looking for something that already covers as many as possible of these requirements:
ensure upload channel security
endure network outages/problems of all kinds
pass through corporate proxies in all ways that a web browser can
throttle upload speed based on local network usage or at least based on preconfigured limits (to avoid overloading the upload server as well as avoid local network problems induced when hogging all upload bandwidth)
be configurable on how to select which files to upload (e.g. upload all files in a directory ordered by X rule, except the newest file / any file modified in the last 2 minutes).
Basically, I'm looking for MS BITS on steroids, open sourced and able to run on Linux.
This is a DevOps question. The solution I'm looking for is nowhere near as simple as writing a script around wput and running it from cron. It's more like writing a custom ELK Beats module or Logstash variant, and that's why I'm asking here.

how to find my netflow data version number?

Is there any way to find out the version number of my NetFlow data?
I have a pcap file generated using tcpdump. Then, using an open-source tool (which depends on tshark), I converted the pcap data into NetFlow.
I am not able to find out which version of NetFlow it is: v5, v7, ... or IPFIX.
Is there any way to tell netflow version by looking at the data?
If you are using the PCAP file to generate and export NetFlow over the wire, then the version number is in the first two bytes of the UDP payload (in practice it shows up in the second byte, since the first byte is zero). The value will be 5, 7, 9, or 10 (0x0A in the case of IPFIX).
However, if you have used a textual format to dump the records to disk, then they are technically not versioned NetFlow until you export them over the wire in one of those formats.
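
If you capture the exported NetFlow traffic itself into a pcap (for example with tcpdump on the export port), you can also check the version offline. The sketch below is just one possible approach, not something from the original answer; it uses scapy and assumes the exporter sends to UDP port 2055, so adjust the port and filename to your setup:

from collections import Counter
from scapy.all import rdpcap, UDP

EXPORT_PORT = 2055               # assumption: your exporter's destination port
CAPTURE = "netflow-export.pcap"  # assumption: a capture of the export traffic

# Count the version values seen in the first two bytes of each UDP payload
# sent to the export port.
versions = Counter()
for pkt in rdpcap(CAPTURE):
    if UDP in pkt and pkt[UDP].dport == EXPORT_PORT:
        payload = bytes(pkt[UDP].payload)
        if len(payload) >= 2:
            versions[int.from_bytes(payload[:2], "big")] += 1

print(versions)  # e.g. Counter({9: 120}) means every export packet was v9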

Netflow sample data sets

Does anyone know of an open NetFlow data set? I want to use it to run a little experiment and analyse some of the flows. I looked around, but I couldn't find anything. Alternatively, is there a good method to capture NetFlow data without actually having a Cisco router?
Thanks!
Your best/quickest option is to generate NetFlow with a software exporter that uses live capture (see, for instance, nProbe: http://www.ntop.org/products/netflow/nprobe/ or FlowTraq's free exporter: http://www.flowtraq.com/corporate/product/flow-exporter/).
Both of these exporters can also generate NetFlow from PCAP files. This is convenient if you already have PCAP files or download PCAP datasets, which are much more widely available than NetFlow datasets.

Nfcapd to pcap conversion?

I've got a few NetFlow dumps captured by the nfcapd daemon. Is there any way to convert them to .pcap format so I can analyse them with my software?
Basically no; most of the information from the packets is lost, including the entire payloads. NetFlow summarizes the header information from all the packets in a given session, which could be a dozen packets or thousands. The NetFlow dumps do not (to my recollection) include partial updates either. So, you can go one way (convert from pcap to NetFlow) but not the other way.
That said, if all you need for your analysis are the IP headers of the first packets, you might be able to fake something. But I don't know of any tool that does it.
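
To give an idea of what such faking could look like, here is a rough sketch rather than an existing tool. It assumes you can dump the flow records to CSV with nfdump (e.g. nfdump -r <nfcapd file> -o csv > flows.csv) and that the header row contains sa, da, sp, dp and pr columns for the addresses, ports and protocol; if your nfdump version labels the fields differently, adjust the names. It then writes one placeholder packet per flow with scapy, so only addresses, ports and protocol survive; packet sizes, timing, flags and payloads are lost or made up:

import csv
import ipaddress
from scapy.all import IP, TCP, UDP, wrpcap

packets = []
with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            src = ipaddress.ip_address(row["sa"].strip())
            dst = ipaddress.ip_address(row["da"].strip())
            if src.version != 4 or dst.version != 4:
                continue  # keep the sketch IPv4-only
            proto = row["pr"].strip().upper()
            ip = IP(src=str(src), dst=str(dst))
            if proto == "TCP":
                pkt = ip / TCP(sport=int(row["sp"]), dport=int(row["dp"]))
            elif proto == "UDP":
                pkt = ip / UDP(sport=int(row["sp"]), dport=int(row["dp"]))
            else:
                pkt = ip  # other protocols: keep just a bare IP header
            packets.append(pkt)
        except (KeyError, ValueError):
            continue  # skip summary or malformed lines at the end of the CSV

# One synthetic packet per flow; this is a fake, not a reconstruction.
wrpcap("fake_flows.pcap", packets)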