dumpcap: save to a text file, one packet per line

I'm trying to build a solution where dumpcap saves to a text file in the format:
timestamp_as_detailed_as_possible, HEX-raw-packet
My goal is to have this continuously stream each data packet to the file, one packet per line.
Two questions:
Is it possible for dumpcap to take care of fragmented packets, so that each line is guaranteed to contain one complete packet?
Is it OK to have another thread running afterwards that reads lines from the same file, does something with the data, and then deletes each line once processed - without this interfering with dumpcap?

Is it OK to have another thread running afterwards that reads lines from the same file, does something with the data, and then deletes each line once processed - without this interfering with dumpcap?
No. But this is the wrong approach anyway: a pipe is what you should use here, i.e. dumpcap writing to a pipe and the analyzing process reading from it:
dumpcap -w - | analyzer
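If the analyzer uses libpcap, it can read the capture stream directly from standard input. Here is a minimal sketch of such an analyzer (assuming a pcap stream on stdin; the one-line timestamp-plus-hex output format is illustrative, not anything dumpcap produces itself):
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Read the capture stream that dumpcap writes to the pipe. */
    pcap_t *p = pcap_fopen_offline(stdin, errbuf);
    if (p == NULL) {
        fprintf(stderr, "stdin: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    while (pcap_next_ex(p, &hdr, &data) == 1) {
        /* One line per packet: microsecond timestamp, then the captured bytes in hex. */
        printf("%ld.%06ld,", (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec);
        for (bpf_u_int32 i = 0; i < hdr->caplen; i++)
            printf("%02x", data[i]);
        putchar('\n');
    }
    pcap_close(p);
    return 0;
}
Compiled as analyzer, this runs as dumpcap -w - | ./analyzer.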
Is it possible for dumpcap to take care of fragmented packets, so that each line is guaranteed to contain one complete packet?
No, and it is also unclear what exactly you expect here. Usually there is no fragmentation done at the IP level at all, since TCP tries to adjust the segment size so that it is no larger than the MTU anyway. And TCP should be treated as a byte stream only, i.e. don't expect anything you send to end up in a single packet, or that multiple sends will actually result in multiple packets.

I'm trying to build a solution where dumpcap saves to a text file
Dumpcap doesn't save to text files, it saves to binary pcap or pcapng files.
You might want to consider using tcpdump instead, although you'd have to pipe it to a separate program/script to massage its output into the format you want.
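For instance, something close to the format asked about can be had from tcpdump's -tt (raw epoch timestamps) and -xx (hex dump including the link-layer header) options:
dumpcap -w - | tcpdump -r - -tt -xx
Note that tcpdump spreads the hex over several indented lines per packet, so a small script would still be needed to join each packet onto a single line.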

Related

Is there a way to replay a gzip-compressed log file in kdb+ without uncompressing it first?

Streaming execute, -11!, does not work on named pipes, so an obvious solution of redirecting the gzip -cd output to a named pipe and passing it to -11! does not work.
-11! accepts a compressed file and streams it so long as it was compressed with -19! (using 2 as the compression algorithm parameter, which is gzip).
The only difference between a normal gzipped file and a kdb compressed one is a few bytes at the beginning of the file.
EDIT (see comment): thanks, this isn't true - the bytes also differ at the end of the file.
So a possible solution is to prepend your gzipped files (if they weren't produced by -19!) with the appropriate byte array first.
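For reference, a kdb-compressed copy of a file is produced with -19! along these lines (a sketch; the file names are placeholders, and 17/2/6 are a typical logical block size, the gzip algorithm number, and a compression level):
q)-19!(`:plain.log;`:compressed.log;17;2;6)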
For anyone using kdb+ v3.4 or later, streaming execution for named pipes was introduced via the function .Q.fps.
Here is a simple example of .Q.fps in action. First create the named pipe and write to it from the command line (echo blocks until a reader opens the fifo, hence the &):
mkfifo test.pipe
echo "aa" > test.pipe &
Next in a q session:
q).Q.fps[0N!]`:test.pipe
,"aa"
Where 0N! is a function that prints its argument (here, the contents read from the pipe) and returns it unchanged; it is used to display the contents of the file.

Merging two pcap files with libpcap

I already know how to read a pcap file and get the packets it has. But how can I write the packets into a new pcap file? I need this to merge two pcap files into one.
As per my comment, libpcap/WinPcap is a library, not a program, so to use libpcap/WinPcap to merge capture files, you'd have to write your own code to do the merging, using libpcap/WinPcap to read the input files and write the output files.
You could use an existing tool, such as tracemerge or Wireshark's mergecap, to merge the captures.
Assuming the goal is to merge the two files' packets by time stamp, then, if you wanted to write your own code, you'd (a C sketch of the whole procedure follows the list):
attempt to open the two files, and fail if you can't;
if the two files have different link-layer header types or snapshot lengths, fail (you'd have to write a pcap-ng file to handle that, and libpcap/WinPcap don't support that yet);
if the files have the same link-layer header types and snapshot lengths, open an output file using one of the pcap_ts (it doesn't matter which one; all the pcap_t does is tell pcap_dump_open() what link-layer header type and snapshot length to use);
and have a loop where you:
if there's no packet already read from the first file, and the first file is still open, read a packet from it - if that gets an EOF, close the first file;
if there's no packet already read from the second file, and the second file is still open, read a packet from it - if that gets an EOF, close the second file;
if you have two packets, write out the one with the older time stamp and mark that packet as no longer being there, so you read another packet from the file from which it came;
if you have only one packet, write it out and mark it as no longer being there, so you read another packet from the file from which it came;
if you have no packets, you're done - exit the loop;
and then, when you exit the loop, close the dump file. At that point, you're done.
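If you wanted to write that loop in C, a minimal sketch might look like the following (untested; error handling is kept short, and the file names come from the command line):
#include <pcap/pcap.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *in[2];
    struct pcap_pkthdr *hdr[2] = { NULL, NULL };
    const u_char *data[2] = { NULL, NULL };

    if (argc != 4) {
        fprintf(stderr, "usage: %s in1.pcap in2.pcap out.pcap\n", argv[0]);
        return 2;
    }
    /* Open both inputs; fail if either can't be opened. */
    for (int i = 0; i < 2; i++) {
        in[i] = pcap_open_offline(argv[i + 1], errbuf);
        if (in[i] == NULL) {
            fprintf(stderr, "%s: %s\n", argv[i + 1], errbuf);
            return 1;
        }
    }
    /* Same link-layer header type and snapshot length required. */
    if (pcap_datalink(in[0]) != pcap_datalink(in[1]) ||
        pcap_snapshot(in[0]) != pcap_snapshot(in[1])) {
        fprintf(stderr, "inputs differ in link-layer type or snaplen\n");
        return 1;
    }
    /* Either pcap_t works here; it only supplies the type and snaplen. */
    pcap_dumper_t *out = pcap_dump_open(in[0], argv[3]);
    if (out == NULL) {
        fprintf(stderr, "%s: %s\n", argv[3], pcap_geterr(in[0]));
        return 1;
    }
    for (;;) {
        /* Refill each empty slot from its file, closing it on EOF. */
        for (int i = 0; i < 2; i++) {
            if (in[i] != NULL && hdr[i] == NULL &&
                pcap_next_ex(in[i], &hdr[i], &data[i]) != 1) {
                pcap_close(in[i]);
                in[i] = NULL;
                hdr[i] = NULL;
            }
        }
        int pick;
        if (hdr[0] != NULL && hdr[1] != NULL)
            /* Two packets: write the one with the older time stamp. */
            pick = (hdr[0]->ts.tv_sec < hdr[1]->ts.tv_sec ||
                    (hdr[0]->ts.tv_sec == hdr[1]->ts.tv_sec &&
                     hdr[0]->ts.tv_usec <= hdr[1]->ts.tv_usec)) ? 0 : 1;
        else if (hdr[0] != NULL)
            pick = 0;
        else if (hdr[1] != NULL)
            pick = 1;
        else
            break;              /* no packets left: done */
        pcap_dump((u_char *)out, hdr[pick], data[pick]);
        hdr[pick] = NULL;       /* consumed; refill on the next iteration */
    }
    pcap_dump_close(out);
    return 0;
}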
This can be done using joincap.
go get -u github.com/assafmo/joincap
To merge 1.pcap and 2.pcap:
joincap 1.pcap 2.pcap > merged.pcap
I wrote joincap to overcome what I believe is bad error handling by mergecap and tcpslice.
For more details go to https://github.com/assafmo/joincap.

Snort: Reporting packet numbers

I am making use of Snort to match packets in a pcap file against a set of rules, and I want to log the results. I looked at the log file produced at /var/log/snort, but I want to know which packet numbers, corresponding to the original Wireshark pcap file, have reported matches. Which command will do that?
You can use the test logger. When running from the command line, add the option '-A test'. The alert's output will have the format
(packet_number) (gid) (sid) (rev).
packet_number corresponds to the pcap's packet number. You can use the other three pieces of information to determine the rule which was triggered.
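For example, an invocation along these lines (the config and capture paths are placeholders) reads a capture file against your rules and prints those alerts:
snort -q -A test -c /etc/snort/snort.conf -r capture.pcap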

Does pcap_t *pcap_open_offline(const char *fname, char *errbuf) from libpcap read the whole pcap file into memory?

Does
pcap_t *pcap_open_offline(const char *fname, char *errbuf)
from libpcap read the whole pcap file into memory? If not, do I have to use tcpslice or similar tools to split the pcap file up?
Thanks.
A strange way of wording your question, but I'll try and answer what I can.
pcap_open_offline() takes a .dump file (or similarly named output from tcpdump, tcpslice, or libpcap's pcap_dump_open() + pcap_dump() functions) as an input.
This file is exactly the same in format and function as a live trace of a network device, i.e. you can use this pcap_t object in pcap_next, pcap_loop, etc.
Altering a dump file in any way (e.g. stripping information or parsing out only what you want with tcpslice or Wireshark) will render it unreadable by pcap_open_offline(), as it will not be formatted in the manner of a live packet trace.
However, it does not load the entire file at any one time into memory. It streams the file, as you would stream packets from a live trace.
To summarize: pcap_open_offline() opens an unaltered tcpdump/tcpslice dump and reads it like a live stream. It does not load the entire file into memory, as dumps can get quite large! Instead it goes through the file, loading only one packet's worth of data at a time.
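To illustrate the streaming behavior, a minimal libpcap reader only ever holds one packet's header and data at a time, no matter how large the file is (a sketch; the file name comes from the command line):
#include <pcap/pcap.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.pcap\n", argv[0]);
        return 2;
    }
    pcap_t *p = pcap_open_offline(argv[1], errbuf);
    if (p == NULL) {
        fprintf(stderr, "%s\n", errbuf);
        return 1;
    }
    struct pcap_pkthdr *hdr;
    const u_char *data;
    /* Each call reads exactly one packet from the file. */
    while (pcap_next_ex(p, &hdr, &data) == 1)
        printf("%ld.%06ld: %u bytes captured\n",
               (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec, hdr->caplen);
    pcap_close(p);
    return 0;
}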

ExtAudioFileSeek and ExtAudioFileWrite together on the same file

I have a situation where I can save a post-processing pass through the audio by taking some manipulated buffers from the end of the track and writing them to the beginning of my output file.
I originally thought I could do this by resetting the write pointer using ExtAudioFileSeek, and was about to implement it when I saw this line in the docs
Ensure that the file you are seeking in is open for reading only. This function’s behavior with files open for writing is undefined.
Now I know I could close the file for writing then reopen it, but the process is a little more complicated than that. Part of the manipulation I am doing is reading from buffers that are in the file I am writing to. The overall process looks like this:
Read buffers from the end of the read file
Read buffers from the beginning of the write file
Process the buffers
Write the buffers back to the beginning of the write file, overwriting the buffers I read in step 2
Logically, this can be done in 1 pass no problem. Programmatically, how can I achieve the same thing without corrupting my data, becoming less-efficient (opposite of my goal) or potentially imploding the universe?
Yes, using a single audio file for both reading and writing may, as you put it, implode the universe, or at least lead to other nastiness. I think that the key to solving this problem is in step 4, where you should write the output to a new file instead of trying to "recycle" the initial write file. After your processing is complete, you can simply scrap the intermediate write file.
Or have I misunderstood the problem?
Oh, and also, you should use ExtAudioFileWriteAsync instead of ExtAudioFileWrite for your writes if you are doing this in realtime. Otherwise the I/O load will cause audio dropouts.
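As a sketch of the new-file approach (untested; the CAF output type, the float32 client format, and the function name are assumptions for illustration), the loop reads from the source, manipulates the buffers, and writes to a freshly created file, so nothing ever seeks within a file that is open for writing:
#include <AudioToolbox/AudioToolbox.h>

static OSStatus processToNewFile(CFURLRef srcURL, CFURLRef dstURL)
{
    ExtAudioFileRef src = NULL, dst = NULL;
    OSStatus err = ExtAudioFileOpenURL(srcURL, &src);
    if (err != noErr) return err;

    /* Assumed processing format: 44.1 kHz stereo float32. */
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(Float32);
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    fmt.mFramesPerPacket  = 1;

    err = ExtAudioFileSetProperty(src, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(fmt), &fmt);
    if (err == noErr)
        err = ExtAudioFileCreateWithURL(dstURL, kAudioFileCAFType, &fmt, NULL,
                                        kAudioFileFlags_EraseFile, &dst);

    enum { kFramesPerChunk = 4096 };
    Float32 samples[kFramesPerChunk * 2];
    while (err == noErr) {
        AudioBufferList abl;
        abl.mNumberBuffers = 1;
        abl.mBuffers[0].mNumberChannels = fmt.mChannelsPerFrame;
        abl.mBuffers[0].mDataByteSize   = sizeof(samples);
        abl.mBuffers[0].mData           = samples;

        UInt32 frames = kFramesPerChunk;
        err = ExtAudioFileRead(src, &frames, &abl);
        if (err != noErr || frames == 0)     /* 0 frames means EOF */
            break;

        /* ... manipulate `samples` in place here ... */

        /* On a realtime thread, ExtAudioFileWriteAsync would go here instead. */
        err = ExtAudioFileWrite(dst, frames, &abl);
    }
    if (dst) ExtAudioFileDispose(dst);
    if (src) ExtAudioFileDispose(src);
    return err;
}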