Does pcap_t *pcap_open_offline(const char *fname, char *errbuf) from libpcap read the whole pcap file into memory?

Does
pcap_t *pcap_open_offline(const char *fname, char *errbuf)
from libpcap read the whole pcap file into memory? If not, do I have to use tcpslice or similar tools to split the pcap file up?
Thanks.

A strange way of wording your question, but I'll try and answer what I can.
pcap_open_offline() takes a .dump file (or similarly named output from tcpdump, tcpslice, or libpcap's pcap_dump_open() + pcap_dump() functions) as an input.
The resulting pcap_t is the same in format and function as one from a live trace of a network device, i.e., you can use it with pcap_next(), pcap_loop(), etc.
Note that altering a dump file in a way that breaks the pcap format will render it unreadable by pcap_open_offline(); tools such as tcpslice or Wireshark that write valid capture files produce output that pcap_open_offline() can still read.
However, it does not load the entire file into memory at any one time. It streams the file, just as you would stream packets from a live trace.
To summarize: pcap_open_offline() opens a tcpdump/tcpslice dump and reads it like a live stream. It does not load the entire file into memory, as dumps can get quite large! Instead it walks through the file, loading only one packet's worth of data at a time.
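A minimal sketch of that streaming behaviour (the file name here is an illustrative assumption, not anything from the question):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline("capture.pcap", errbuf);  /* hypothetical file */
    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    struct pcap_pkthdr *hdr;
    const u_char *data;
    long count = 0;
    /* pcap_next_ex() hands back one packet per call; libpcap reads the
       savefile incrementally rather than slurping it into memory. */
    while (pcap_next_ex(p, &hdr, &data) == 1)
        count++;
    printf("%ld packets\n", count);
    pcap_close(p);
    return 0;
}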

Related

dumpcap, save to text file and line separated

I'm trying to build a solution where dumpcap saves to text file in the format:
timestamp_as_detailed_as_possible, HEX-raw-packet
My goal is to have this continuously streaming each single data packet to the file, separated by newline.
Two questions:
Is it possible for dumpcap to take care of fragmented packets, so I'm guaranteed each line contains 1 single full packet?
Is it OK to have another thread afterwards running and reading lines from the same file, do something with the data and then delete the line when processed - without this interfering with dumpcap?
Is it OK to have another thread afterwards running and reading lines from the same file, do something with the data and then delete the line when processed - without this interfering with dumpcap?
No. But this is the wrong approach anyway. What you actually want here is a pipe: have dumpcap write to the pipe and the analyzing process read from it, i.e.
dumpcap -w - | analyzer
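A sketch of what the analyzer side of that pipeline could look like in C, assuming the "timestamp, hex" line format from the question; libpcap accepts "-" as a file name meaning standard input (and libpcap 1.1.0 and later can also read the pcapng files dumpcap writes by default):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline("-", errbuf);  /* "-" = read from stdin */
    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }
    struct pcap_pkthdr *hdr;
    const u_char *data;
    while (pcap_next_ex(p, &hdr, &data) == 1) {
        /* microsecond-resolution timestamp, then the raw bytes as hex */
        printf("%ld.%06ld,", (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec);
        for (bpf_u_int32 i = 0; i < hdr->caplen; i++)
            printf("%02x", data[i]);
        putchar('\n');
    }
    pcap_close(p);
    return 0;
}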
Is it possible for dumpcap to take care of fragmented packets, so I'm guaranteed each line contains 1 single full packet?
No, and it is also unclear what exactly you expect here. There is usually no fragmentation at the IP level at all, since TCP tries to keep segment sizes within the MTU anyway. And TCP should be treated as a byte stream only, i.e. don't expect everything you pass to a single send to end up in a single packet, or multiple sends to result in multiple packets.
I'm trying to build a solution where dumpcap saves to text file
Dumpcap doesn't save to text files; it saves to binary pcap or pcapng files.
You might want to consider using tcpdump instead, although you'd have to pipe it to a separate program/script to massage its output into the format you want.

Merging two pcap files with libpcap

I already know how to read a pcap file and get the packets it contains. But how can I write packets into a new pcap file? I need this to merge two pcap files into one.
As per my comment, libpcap/WinPcap is a library, not a program, so to use libpcap/WinPcap to merge capture files, you'd have to write your own code to do the merging, using libpcap/WinPcap to read the input files and write the output files.
You could use an existing tool, such as tracemerge or Wireshark's mergecap, to merge the captures.
Assuming the goal is to merge the two files' packets by time stamp, then, if you wanted to write your own code, you'd do the following (a rough sketch in C follows the list):
attempt to open the two files, and fail if you can't;
if the two files have different link-layer header types or snapshot lengths, fail (you'd have to write a pcap-ng file to handle that, and libpcap/WinPcap don't support that yet);
if the files have the same link-layer header types and snapshot lengths, open an output file using one of the pcap_ts (it doesn't matter which one; all the pcap_t does is tell pcap_dump_open() what link-layer header type and snapshot length to use);
and have a loop where you:
if there's no packet already read from the first file, and the first file is still open, read a packet from it - if that gets an EOF, close the first file;
if there's no packet already read from the second file, and the second file is still open, read a packet from it - if that gets an EOF, close the second file;
if you have two packets, write out the one with the older time stamp and mark that packet as no longer being there, so you read another packet from the file from which it came;
if you have only one packet, write it out and mark it as no longer being there, so you read another packet from the file from which it came;
if you have no packets, you're done - exit the loop;
and then, when you exit the loop, close the dump file. At that point, you're done.
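Here is a hedged sketch of that loop in C; error handling is abbreviated, and it assumes plain pcap inputs with matching link-layer types and snapshot lengths:

#include <pcap/pcap.h>
#include <stdio.h>

struct input {
    pcap_t *p;
    struct pcap_pkthdr *hdr;  /* pending packet's header, NULL if none */
    const u_char *data;       /* pending packet's bytes */
    int eof;
};

/* Read the next packet if none is pending; close the file on EOF. */
static void refill(struct input *in) {
    if (in->eof || in->hdr != NULL)
        return;
    if (pcap_next_ex(in->p, &in->hdr, &in->data) != 1) {
        in->hdr = NULL;       /* EOF or error: treat both as "done" here */
        in->eof = 1;
        pcap_close(in->p);    /* the dumper keeps its own FILE*, so this is safe */
        in->p = NULL;
    }
}

/* True if a's time stamp is not later than b's. */
static int older(const struct pcap_pkthdr *a, const struct pcap_pkthdr *b) {
    if (a->ts.tv_sec != b->ts.tv_sec)
        return a->ts.tv_sec < b->ts.tv_sec;
    return a->ts.tv_usec <= b->ts.tv_usec;
}

int main(int argc, char **argv) {
    char errbuf[PCAP_ERRBUF_SIZE];
    if (argc != 4) {
        fprintf(stderr, "usage: %s in1.pcap in2.pcap out.pcap\n", argv[0]);
        return 2;
    }
    struct input in[2] = { {0}, {0} };
    for (int i = 0; i < 2; i++) {
        in[i].p = pcap_open_offline(argv[i + 1], errbuf);
        if (in[i].p == NULL) {
            fprintf(stderr, "%s: %s\n", argv[i + 1], errbuf);
            return 1;
        }
    }
    if (pcap_datalink(in[0].p) != pcap_datalink(in[1].p) ||
        pcap_snapshot(in[0].p) != pcap_snapshot(in[1].p)) {
        fprintf(stderr, "inputs differ in link-layer type or snaplen\n");
        return 1;
    }
    /* Either pcap_t will do: it only tells pcap_dump_open() what
       link-layer type and snapshot length to write in the header. */
    pcap_dumper_t *out = pcap_dump_open(in[0].p, argv[3]);
    if (out == NULL) {
        fprintf(stderr, "%s: %s\n", argv[3], pcap_geterr(in[0].p));
        return 1;
    }
    for (;;) {
        refill(&in[0]);
        refill(&in[1]);
        struct input *pick;
        if (in[0].hdr && in[1].hdr)
            pick = older(in[0].hdr, in[1].hdr) ? &in[0] : &in[1];
        else if (in[0].hdr)
            pick = &in[0];
        else if (in[1].hdr)
            pick = &in[1];
        else
            break;                      /* no packets left: done */
        pcap_dump((u_char *)out, pick->hdr, pick->data);
        pick->hdr = NULL;               /* consumed; refill() will read again */
    }
    pcap_dump_close(out);
    return 0;
}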
This can be done using joincap.
go get -u github.com/assafmo/joincap
To merge 1.pcap and 2.pcap:
joincap 1.pcap 2.pcap > merged.pcap
I wrote joincap to overcome what I believe is bad error handling by mergecap and tcpslice.
For more details go to https://github.com/assafmo/joincap.

Perl and wireshark export file dialog

I am interested in opening a capture file in Wireshark and then exporting the data in "C arrays" format (Wireshark provides that option in its GUI; one can do it via "File->Export->as C arrays file" from the main menu). My question is: how can I do this in Perl? Can someone help me with a script for this?
I would like to parse each and every packet of the Wireshark capture. So I thought I would first convert each packet to an array and then parse it. Do you have any suggestions on this? My capture consists entirely of IEEE 802.11 frames.
If you want to do all the parsing yourself, i.e. look at the raw packet data, I would suggest writing your own program using libpcap to read pcap-format capture files (on UN*X, libpcap 1.1.0 and later can also read pcap-ng-format capture files, which is what Wireshark 1.8.0 and later write by default). No need to write stuff out as C arrays.
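As a starting point, here is a hedged sketch of such a program in C; parse_frame() is a placeholder for your own 802.11 parsing, not an existing API, and note that many 802.11 captures carry a radiotap header (DLT_IEEE802_11_RADIO) rather than raw DLT_IEEE802_11 frames:

#include <pcap/pcap.h>
#include <stdio.h>

/* Placeholder: your own parsing of one raw 802.11 frame goes here. */
static void parse_frame(const u_char *data, bpf_u_int32 caplen) {
    (void)data;
    (void)caplen;
}

int main(int argc, char **argv) {
    char errbuf[PCAP_ERRBUF_SIZE];
    if (argc != 2) {
        fprintf(stderr, "usage: %s capture.pcap\n", argv[0]);
        return 2;
    }
    pcap_t *p = pcap_open_offline(argv[1], errbuf);
    if (p == NULL) {
        fprintf(stderr, "%s: %s\n", argv[1], errbuf);
        return 1;
    }
    /* Check the link-layer type before assuming raw 802.11 headers. */
    if (pcap_datalink(p) != DLT_IEEE802_11) {
        fprintf(stderr, "not a raw 802.11 capture\n");
        return 1;
    }
    struct pcap_pkthdr *hdr;
    const u_char *data;
    while (pcap_next_ex(p, &hdr, &data) == 1)
        parse_frame(data, hdr->caplen);
    pcap_close(p);
    return 0;
}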

ExtAudioFileSeek and ExtAudioFileWrite together on the same file

I have a situation where I can save a post-processing pass through the audio by taking some manipulated buffers from the end of the track and writing them to the beginning of my output file.
I originally thought I could do this by resetting the write pointer using ExtAudioFileSeek, and was about to implement it when I saw this line in the docs:
Ensure that the file you are seeking in is open for reading only. This function’s behavior with files open for writing is undefined.
Now I know I could close the file for writing then reopen it, but the process is a little more complicated than that. Part of the manipulation I am doing is reading from buffers that are in the file I am writing to. The overall process looks like this:
Read buffers from the end of the read file
Read buffers from the beginning of the write file
Process the buffers
Write the buffers back to the beginning of the write file, overwriting the buffers I read in step 2
Logically, this can be done in 1 pass no problem. Programmatically, how can I achieve the same thing without corrupting my data, becoming less-efficient (opposite of my goal) or potentially imploding the universe?
Yes, using a single audio file for both reading and writing may, as you put it, implode the universe, or at least lead to other nastiness. I think that the key to solving this problem is in step 4, where you should write the output to a new file instead of trying to "recycle" the initial write file. After your processing is complete, you can simply scrap the intermediate write file.
Or have I misunderstood the problem?
Oh, and also, you should use ExtAudioFileWriteAsync instead of ExtAudioFileWrite for your writes if you are doing this in realtime. Otherwise the I/O load will cause audio dropouts.
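For what it's worth, a minimal sketch of that suggestion; the container type, format description, and function name here are illustrative assumptions, not code from the question:

#include <AudioToolbox/ExtendedAudioFile.h>

/* Open a fresh output file for the processed audio instead of seeking
   back into the file being written. CAF is just an example container;
   fmt describes the client's PCM format. */
static ExtAudioFileRef open_output(CFURLRef url,
                                   const AudioStreamBasicDescription *fmt) {
    ExtAudioFileRef out = NULL;
    OSStatus err = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, fmt,
                                             NULL, kAudioFileFlags_EraseFile,
                                             &out);
    if (err != noErr)
        return NULL;
    /* Prime the async writer: Apple's docs ask for one initial call
       with 0 frames and NULL data before the real async writes begin. */
    ExtAudioFileWriteAsync(out, 0, NULL);
    return out;
}

Each processed AudioBufferList then goes out through ExtAudioFileWriteAsync(out, frameCount, &bufferList), which buffers the data and performs the disk I/O off the calling thread; that is why it is safe on a realtime thread where ExtAudioFileWrite would block.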

iPhone file corruption

Is it possible (on iPhone/iPod Touch) for a file written like this:
if (FILE* file = fopen(filename, "wb")) {
    fwrite(buf, buf_size, 1, file);
    fclose(file);
}
to get corrupted, e.g. when app is forced to terminate?
From what I know, fwrite should be an atomic operation, so when I write the whole file with one call, no corruption should occur. I could not find any information on the net that would say otherwise.
Since when is fwrite atomic? I can't find any references for that. Anyway, even if fwrite were atomic, the span between fopen and fwrite is not, so if your app is forced to terminate between those calls, you'll end up with an empty file.
As you're writing for iPhone OS, you can use -[NSData writeToFile:atomically:] to ensure the whole open-write-close procedure is atomic (it works by writing to a temporary file, then replacing the original one).
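For reference, a sketch in plain C of the temp-file-and-rename pattern that writeToFile:atomically: uses under the hood (the tmp_path parameter is an assumption for illustration):

#include <stdio.h>

/* Write buf to tmp_path, then atomically swap it into place.
   rename() replaces the destination atomically on POSIX systems,
   so readers never see a half-written file. Note that fclose()
   flushes stdio buffers but not OS caches; an fsync() before
   fclose() would be needed for durability across power loss. */
static int write_file_atomically(const char *path, const char *tmp_path,
                                 const void *buf, size_t buf_size) {
    FILE *file = fopen(tmp_path, "wb");
    if (file == NULL)
        return -1;
    if (fwrite(buf, buf_size, 1, file) != 1) {
        fclose(file);
        remove(tmp_path);
        return -1;
    }
    if (fclose(file) != 0) {
        remove(tmp_path);
        return -1;
    }
    return rename(tmp_path, path);
}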
You could make things easier for yourself and write the data using the NSData class, which has a writeToFile:atomically: method waiting for you. Wrapping the raw buffer with NSData is not hard; there are the dataWithBytes:length: or dataWithBytesNoCopy:length:freeWhenDone: initializers.
Data written with fwrite is buffered, so a sudden termination might not flush the buffers. fclose will flush the buffer, but that does not imply the bytes are also written to the disk (due to OS-level caches), AFAIK.