Nfcapd to pcap conversion? - pcap

I've got a few NetFlow dumps captured by the nfcapd daemon. Is there any way to convert them to .pcap format so I can analyse them with my software?

Basically, no; most of the information from the packets is lost, including the entire payloads. NetFlow summarizes the header information from all the packets in a given session, which could be a dozen packets or thousands. The NetFlow dumps do not (to my recollection) include partial updates either. So you can go one way (convert from pcap to NetFlow) but not the other.
That said, if all you need for your analysis are the IP headers of the first packets, you might be able to fake something. But I don't know of any tool that does it.

Related

how to find my netflow data version number?

Is there any way to know the version number of my NetFlow data?
I have a pcap file generated using tcpdump. Then, using an open-source tool (which depends on tshark), I converted the pcap data into NetFlow.
I am not able to find out which version of NetFlow it is: NetFlow v5, v7, ... or IPFIX.
Is there any way to tell netflow version by looking at the data?
If you are using the PCAP file to generate and export NetFlow over the wire, then the version number is in the first two bytes of the UDP payload (a big-endian 16-bit field, so for these small values the first byte is zero and the second byte carries the value). The value will be 5, 7, 9, or 10 (0x0A, in the case of IPFIX).
However, if you have used a textual format to dump the records to disk, then they are technically not really versioned NetFlow until you export them somehow over the wire.
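That two-byte version field is easy to check programmatically. A minimal Python sketch, assuming you have the raw UDP payload bytes of an export datagram (e.g. extracted from your pcap):

```python
import struct

def netflow_version(udp_payload: bytes) -> int:
    """Return the NetFlow/IPFIX version from the first two bytes
    of an export datagram's UDP payload (big-endian uint16)."""
    if len(udp_payload) < 2:
        raise ValueError("payload too short to contain a version field")
    (version,) = struct.unpack("!H", udp_payload[:2])
    return version

# Fabricated examples: a v5 header starts with 0x00 0x05,
# an IPFIX message header with 0x00 0x0A (decimal 10).
print(netflow_version(b"\x00\x05" + b"\x00" * 22))  # → 5
print(netflow_version(b"\x00\x0a" + b"\x00" * 14))  # → 10
```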

Netflow sample data sets

Does anyone know of an open NetFlow data set? I want to run a little experiment on it and analyse some of the flows. I looked around, but there is nothing. Alternatively, is there a good method to capture NetFlow data without actually having a Cisco router?
Thanks!
Your best/quickest option is to generate NetFlow through a software exporter that uses live capture (see for instance nProbe: http://www.ntop.org/products/netflow/nprobe/ or FlowTraq's free exporter: http://www.flowtraq.com/corporate/product/flow-exporter/).
Both of these software exporters can also generate NetFlow from PCAP files. This is convenient if you already have PCAP files, or download PCAP datasets, which are much more widely available than NetFlow datasets.

How can I find out what NetFlow version my nfcapd is capturing?

What version are my NetFlows?
I have an appliance that is exporting NetFlow to my NetFlow collector. My collector is collecting with nfcapd. The only information I can find is that nfcapd will capture different NetFlow versions "transparently".
My network appliance doesn't tell me in what version it is exporting flows. I need to explore a different NetFlow collector so I'm trying to get an idea of my requirements.
I could contact the vendor of the network appliance but I have several appliances exporting NetFlow so I would prefer to check on the collector end what version these flows are. Is there a way to do this with nfsen/nfcapd/nfdump tools? I'm not having any luck.
There are really only two versions it's likely to be: NetFlow v5 or NetFlow v9 (IPFIX, which reports version 10, is essentially an evolution of v9). The version number is included in the datagram, so the easiest way to find out which version is being exported is to sniff the traffic with something like Wireshark, which will list it as CFLOW. The first two bytes of each datagram are the version number.
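If you'd rather not fire up Wireshark, a few lines of Python can do the same check on the collector: bind to the export port, read one datagram, and decode the two-byte version field. This is a sketch; port 2055 is a common default but your nfcapd may listen elsewhere, and you'd need to stop nfcapd (or mirror the traffic to a spare port) while running it:

```python
import socket
import struct

def flow_version(datagram: bytes) -> int:
    """The first two bytes of a NetFlow/IPFIX export datagram are a
    big-endian version number: 5, 7, 9, or 10 (IPFIX)."""
    (version,) = struct.unpack("!H", datagram[:2])
    return version

def sniff_one(port: int = 2055) -> None:
    """Receive one export datagram and report the exporter's version.
    Adjust `port` to whatever your nfcapd instance listens on."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    datagram, (src, _) = sock.recvfrom(65535)
    print(f"{src} exports NetFlow version {flow_version(datagram)}")
```

With several appliances exporting, you could loop on `recvfrom` and build a dict mapping each source address to its version.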

how to convert binary data into string format in j2me

I am using the gzip algorithm in J2ME. After compressing the string, I tried to send the compressed data as a text message, but the size increased drastically. So I used Base64 encoding to convert the compressed binary to text, but the encoded output is still larger than the original. Please suggest an encoding technique that keeps the data size the same.
I tried sending a binary SMS, but as its limit is 134 characters, I want to compress the data before sending.
You have some competing requirements here.
The fact you're considering using SMS as a transport mechanism makes me suspect that the data you have to send is quite short to start with.
Compression algorithms (in general) work best with large amounts of data and can end up creating a longer output than you started with if you start with something very short.
There are very few useful encoding changes that will leave you with output the same length as when you started. (I'm struggling to think of anything really useful right now.)
You may want to consider alternative transmission methods or alternatives to the compression techniques you have tried.
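To see the effect concretely, here is a small Python demonstration (zlib standing in for the J2ME gzip implementation): compressing a short string and then Base64-encoding it yields *more* characters than you started with, while a long, repetitive string does benefit:

```python
import base64
import zlib

def sms_ready(text: str) -> str:
    """Compress, then Base64-encode so the result is SMS-safe text.
    zlib here is a stand-in for the gzip library in the question."""
    compressed = zlib.compress(text.encode("utf-8"))
    return base64.b64encode(compressed).decode("ascii")

short = "Meet at 5pm"            # 11 characters
long_repetitive = "status=OK;" * 40  # 400 characters, highly compressible

# Short input: zlib adds header/checksum overhead, Base64 adds ~33% more,
# so the result is longer than the original.
print(len(short), "→", len(sms_ready(short)))

# Long repetitive input: compression wins by far more than Base64 costs.
print(len(long_repetitive), "→", len(sms_ready(long_repetitive)))
```

This is why there is no general-purpose encoding that leaves arbitrary data the same size: Base64 always expands by a third, and compression only recoups that on inputs with enough redundancy.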

Best way to generate a million TCP connections

I need to find the best way to generate a million TCP connections (more is good, less is bad), as quickly as the machine allows :D
Why do I need this? I am testing a NAT, and I want to load it with as many entries as possible.
My current method is to generate a subnet on a dummy eth and serially connect from that dummy through the actual eth, to the LAN, to the NAT, to the host.
subnetnicfake----routeToRealEth----RealEth---cable---lan----nat---host.
|<-------------on my machine-------------------->|
One million simultaneous TCP sessions might be difficult: if you rely on the standard connect(2) sockets API to create the connections, you're going to use a lot of physical memory: each session will require a struct inet_sock, which includes a struct sock, which includes a struct sock_common.
Some quick guesses at sizes: struct sock_common requires roughly 58 bytes, struct sock roughly 278 bytes, and struct inet_sock roughly 70 bytes more.
For one million sessions, that's roughly 387 megabytes of data before you have receive and send buffers. (See tcp_mem, tcp_rmem, tcp_wmem in tcp(7) for some information.)
If you choose to go this route, I'd suggest setting the per-socket memory controls as low as they go. I wouldn't be surprised if 4096 is the lowest you can set it. (SK_MEM_QUANTUM is PAGE_SIZE, stored into sysctl_tcp_rmem[0] and sysctl_tcp_wmem[0].)
That's another eight gigabytes of memory -- four for receive buffers, four for send buffers.
And that's in addition to what the system requires for your programs to open one million file descriptors. (See /proc/sys/fs/file-max in proc(5).)
All of this memory is not swappable -- the kernel pins its memory -- so you're really only approaching this problem on a 64-bit machine with at least eight gigabytes of memory. Probably 10-12 would do better.
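The arithmetic above is easy to reproduce as a back-of-envelope calculation. Note the struct sizes are this answer's rough guesses, not exact values for any particular kernel:

```python
# Rough per-connection kernel bookkeeping, using the guessed struct
# sizes from the answer above (actual sizes vary by kernel version
# and build configuration).
SOCK_COMMON = 58   # struct sock_common, bytes
SOCK = 278         # struct sock, bytes
INET_SOCK = 70     # struct inet_sock fields beyond struct sock, bytes

connections = 1_000_000
struct_bytes = (SOCK_COMMON + SOCK + INET_SOCK) * connections
print(f"socket structs: {struct_bytes / 2**20:.0f} MiB")  # ≈ 387 MiB

# Minimum per-socket buffers if rmem/wmem are floored at one page:
PAGE = 4096
buffer_bytes = 2 * PAGE * connections  # one receive + one send buffer each
print(f"buffers: {buffer_bytes / 2**30:.1f} GiB")  # ≈ 7.6 GiB
```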
One approach taken by the Paketto Keiretsu tools is to open a raw connection, perform all the TCP three-way handshakes using a single raw socket, and try to compute whatever is needed, rather than store it, to handle much larger amounts of data than usual. Try to store as little as possible for each connection, and don't use naive lists or trees of structures.
The Paketto Keiretsu tools were last updated around 2003, so they still might not scale into the million range well, but they would definitely be my starting point if this were my problem to solve.
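The "compute rather than store" idea is the same trick SYN cookies use: derive the connection's initial sequence number from the 4-tuple and a secret, so nothing needs to be stored per connection until the handshake completes. A hypothetical Python sketch of just the derivation (not the actual Paketto code; the secret and the hashing scheme are placeholders):

```python
import hashlib

SECRET = b"per-run-secret"  # hypothetical; regenerate per test run

def stateless_isn(src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int) -> int:
    """Derive a 32-bit initial sequence number from the 4-tuple,
    SYN-cookie style. Given the same tuple, the sender can later
    recompute the ISN instead of keeping per-connection state."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + key).digest()
    return int.from_bytes(digest[:4], "big")
```

When the SYN/ACK comes back, its acknowledgment number minus one can be checked against the recomputed ISN, validating the connection without a lookup table.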
After searching for many days, I found the answer. Apparently this problem is well studied, and it should be, since it is so fundamental. The problem was that I didn't know what it was called. Among those in the know, it is apparently called the C10K problem; what I wanted is the C1M problem. However, there has been some effort to reach C500K, i.e. 500k concurrent connections.
http://www.kegel.com/c10k.html AND
http://urbanairship.com/blog/2010/09/29/linux-kernel-tuning-for-c500k/
#deadalnix: read the links above and enlighten yourself.
Have you tried using tcpreplay? You could prepare - or capture - one or more PCAP network capture files with the traffic that you need, and have one or more instances of tcpreplay replay them to stress-test your firewall/NAT.
As long as you have only 65536 ports available in TCP, this is impossible to achieve unless you have an army of servers to connect to.
So, then, what is the best way? Just open as many connections as you can to the servers and see what happens.
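A minimal sketch of that brute-force approach in Python: open non-blocking connections in a loop, cycling over several source IPs to get around the ~64k ephemeral ports available per source address (the asker's dummy-interface subnet provides exactly such a pool). The destination and source-IP list are placeholders for your own setup:

```python
import socket

def open_many(dst: tuple[str, int], n: int, src_ips: list[str]) -> list:
    """Open up to n TCP connections to dst, rotating across multiple
    source IPs so no single address exhausts its ephemeral ports.
    Non-blocking connect lets the kernel complete handshakes in the
    background, which is fine for NAT load generation."""
    socks = []
    for i in range(n):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((src_ips[i % len(src_ips)], 0))  # port 0 = any free port
        s.setblocking(False)
        try:
            s.connect(dst)
        except BlockingIOError:
            pass  # EINPROGRESS: handshake continues asynchronously
        socks.append(s)
    return socks

# Hypothetical usage: source IPs must already be configured on an
# interface (e.g. the dummy eth from the question).
# socks = open_many(("192.0.2.10", 80), 1_000_000, ["10.1.0.1", "10.1.0.2"])
```

Remember the file-descriptor and socket-memory limits discussed in the answer above; a million sockets from one process also needs `ulimit -n` and `fs.file-max` raised accordingly.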