I have two pcapng files. Each one is a traffic capture that occurred at the same router but on different interfaces.
Since I want to study the behavior of the router's protocols globally, I thought of merging these two files into one, so it would be easier to study the different protocols.
I've used the tool mergecap, like this:
mergecap -w new_file.pcapng file1.pcapng file2.pcapng
According to the mergecap manual, the files will be merged chronologically, based on the timestamp of each packet within file1.pcapng and file2.pcapng.
The problem I'm facing is that after the merge, packets from file1.pcapng no longer appear with the same timestamp in new_file.pcapng.
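For intuition, what mergecap does chronologically is a k-way merge by timestamp. A minimal sketch in Python, using in-memory `(timestamp, packet)` tuples as a hypothetical stand-in for pcapng files:

```python
import heapq

def merge_captures(*captures):
    """Merge packet lists chronologically, as mergecap does.

    Each capture is a list of (timestamp, packet) tuples that is
    already sorted by timestamp (a stand-in for one pcapng file).
    """
    return list(heapq.merge(*captures, key=lambda pkt: pkt[0]))

file1 = [(1.0, "a1"), (3.0, "a2"), (5.0, "a3")]
file2 = [(2.0, "b1"), (4.0, "b2")]

merged = merge_captures(file1, file2)
print(merged)
# [(1.0, 'a1'), (2.0, 'b1'), (3.0, 'a2'), (4.0, 'b2'), (5.0, 'a3')]
```

Note that the merge preserves each packet's absolute timestamp; only the file order changes, which is why differences you see in Wireshark are usually a display-format issue rather than modified timestamps.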
Has anyone done something like this before? I'm using mergecap 2.0.2.
Thanks!
Lucas
By default Wireshark displays packet times relative to the first captured packet. Since you merged two capture files, you have two packets that were each the start of a capture, but only one of them is the first packet in the merged file. Aligning packets by time relative to the first captured packet does not make sense for a merged capture.
To be fair, it could make sense if Wireshark ordered all packets chronologically before picking which packet counts as the first. Currently, the first packet in the file is the time reference (see time references) by default.
Thankfully, Wireshark stores each packet time as a timestamp since the Unix epoch. This lets you align the packets in a merged file chronologically using the options under View > Time Display Format.
Captures from different machines
The above has one limitation: since the timestamps are epoch-based, if you capture packets on different machines you need to make sure the clocks of those machines are synchronized.
If your capture files originate from different machines whose clocks are not synchronized, you need to shift the timestamps of one capture before merging. That, in turn, can be accomplished with Wireshark's Edit > Time Shift.
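The time-shift step amounts to adding a constant clock-skew offset to every timestamp of one capture before the merge. A minimal sketch, again using `(timestamp, packet)` tuples as a hypothetical in-memory stand-in for a capture file:

```python
def shift_capture(capture, offset_seconds):
    """Return a copy of a capture with every packet timestamp shifted.

    `capture` is a list of (timestamp, packet) tuples; the offset is
    the measured clock skew between the two machines (the same idea
    as Wireshark's Edit > Time Shift).
    """
    return [(ts + offset_seconds, pkt) for ts, pkt in capture]

# Suppose machine B's clock runs 2.5 s ahead of machine A's:
capture_b = [(102.5, "syn"), (103.25, "ack")]
aligned_b = shift_capture(capture_b, -2.5)
print(aligned_b)  # [(100.0, 'syn'), (100.75, 'ack')]
```

Once shifted, the two captures can be merged and the packets will interleave correctly.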
Related
Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is sent to some host over a 5G network (omnetpp + inet + simu5g). Video is about 5000 packets of 1400 bytes each, LIDAR about 7500 packets of 1240 bytes, and so on. Each flow is carried in UDP packets.
So in an OMNeT++ module's handleMessage method I have two sendTo calls, each scheduled "as soon as possible", i.e. with no delay, which corresponds to the idea of multiple parallel streams. How does OMNeT++ handle the situation where it needs to send two different packets at the same time from the same module to the same module (some client that receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, effectively allowing only one packet send per handleMessage call, or is that wrong? I want to optimize data transmission and experiment with packet sizes and maybe sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams: if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, with each such packet containing a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). Any call to sendTo() just queues the provided message into the future event set (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() call, they will be queued in that order. The packets will then be delivered one by one to the requested destination modules when the requested simulation time is reached. So you can send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method.

But beware: even if the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That's why OMNeT++, while being a single-threaded application that executes events sequentially, can still simulate any number of parallel running systems.
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, a model library created to simulate Internet protocols and networks. INET's core entity is a node, which has one or more network interfaces (and queues belonging to them). Transmission between nodes is properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be transmitted.
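The effect of that interface queue can be sketched with a simple FIFO serialization model. This is illustrative arithmetic, not INET's actual implementation: packets that arrive while the line is busy wait, so their departure times spread out by one transmission time each:

```python
def departure_times(arrivals, bits_per_packet, link_bps):
    """FIFO interface queue: only one packet on the wire at a time.

    arrivals: sorted list of arrival times in seconds; returns the
    time each packet finishes transmitting.
    """
    tx_time = bits_per_packet / link_bps
    free_at = 0.0
    out = []
    for t in arrivals:
        start = max(t, free_at)   # wait until the line is idle
        free_at = start + tx_time
        out.append(free_at)
    return out

# Three 1400-byte packets handed to the interface at once,
# on a 100 Mbit/s link:
print(departure_times([0.0, 0.0, 0.0], 1400 * 8, 100e6))
```

Even though all three packets were "sent" at the same simulation instant by the application, the link serializes them, which is exactly the interference that shapes delay and jitter.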
This is actually the core problem of Time-Sensitive Networking: given a set of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the streams at their destinations, and how can you configure the source and network gate scheduling to achieve desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream data over 5G, then TSN is not what you need; however, you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.
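For intuition on why application-level muxing buys nothing, here is a sketch of the fragmentation the lower layers already perform. The MTU value and payload size are illustrative, not taken from any specific configuration:

```python
def fragment(payload: bytes, mtu: int):
    """Split a payload into MTU-sized fragments, as lower layers do."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(fragments):
    """Inverse operation performed at the receiver."""
    return b"".join(fragments)

data = bytes(3000)              # e.g. one sensor burst's worth of bytes
frags = fragment(data, 1400)
print([len(f) for f in frags])  # [1400, 1400, 200]
assert reassemble(frags) == data
```

Whatever packing you do at the application layer, the stack below re-packs it anyway to fit the link, so the wire-level behavior ends up the same.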
I have a question regarding TcpStream in Rust's standard library. I want to read data from a server. The problem is that it is not guaranteed that the data arrives in a single TCP packet.
And here comes my question:
Is the read method capable of reading more than one packet, or do I have to call it more than once? Is there any best practice?
From user space, TCP packets are not visible and their boundaries don't matter. Instead, user space only reads from and writes to a byte stream. Packetizing is done at a lower level, in a way that is optimal for latency and bandwidth. It might well happen that multiple writes from user space end up in the same packet, and it might also happen that a single write results in multiple packets. The same is true for read: it might get part of a packet, or the payload taken from multiple consecutive packets.
Any packet boundaries of the underlying transport are no longer visible from user space. Thus protocols using TCP must implement their own message semantics on top of the byte stream.
All of this is not specific to Rust, but applies to other programming languages too.
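One common way to implement such message semantics is length-prefix framing: write each message's size before its bytes, and on the receiving side loop on read() until the full message has been collected. A sketch in Python (since, as noted, the idea is language-independent; the same loop translates directly to Rust's `read`):

```python
import io
import struct

def write_message(stream, payload: bytes):
    """Prefix each message with its 4-byte big-endian length."""
    stream.write(struct.pack(">I", len(payload)) + payload)

def read_exact(stream, n: int) -> bytes:
    """Keep calling read() until exactly n bytes are collected,
    regardless of how the transport split them into packets."""
    buf = b""
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:
            raise EOFError("stream closed mid-message")
        buf += chunk
    return buf

def read_message(stream) -> bytes:
    (length,) = struct.unpack(">I", read_exact(stream, 4))
    return read_exact(stream, length)

# Two messages in one contiguous byte stream; boundaries survive:
stream = io.BytesIO()
write_message(stream, b"hello")
write_message(stream, b"world!")
stream.seek(0)
m1 = read_message(stream)
m2 = read_message(stream)
print(m1, m2)  # b'hello' b'world!'
```

The `read_exact` loop is the "best practice" part: a single read() may return fewer bytes than requested, so robust code always reads in a loop until it has a complete message.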
I have a REST API that also serves SSE's to send events to clients. The expected load can be anywhere up to 10k concurrent. Luckily since the client never sends data we don't have to worry about polling the connections, however our next bottleneck becomes sending data to the FDs.
We broadcast a static payload out to either all the connections or a subset of them. Is there a good way to do this without spending over 60% of our profiled CPU time inside syscalls?
I've seen some talk around TCP splicing, where we vmsplice our payload into a pipe, then tee it into n pipes, then splice into n sockets. But is this the ideal way of achieving it?
Could something like memfd_create + sendfile work?
Is going to these lengths even worth it to save copying the payload ~10k times?
I'm using Mido for Python, working on parsing MIDI files into <start_time, duration, program, pitch> tuples, and I've run into some problems.
Some files that I parse have multiple note_on messages, resulting in notes at the same pitch and same program being opened more than once.
Some files contain multiple note_off messages, resulting in attempts to close notes that are no longer on because they were already closed (assuming only one note at the same program and pitch can be on).
Some tracks do not have a program_change at the beginning of the track (or, even worse, do not have one anywhere in the track).
Some files have more than one track containing set_tempo messages.
What should I do in each of these cases to ensure I get the correct interpretation?
In general, to get a correct MIDI message stream, you have to merge all tracks in a type 1 file. What matters for a synthesizer are not tracks, but channels.
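The merge works by converting each track's delta times to absolute ticks and interleaving; Mido provides mido.merge_tracks() for real files, but the mechanism can be sketched with plain tuples (the message strings below are illustrative placeholders):

```python
def merge_tracks(tracks):
    """Merge type-1 tracks into one stream ordered by absolute tick.

    Each track is a list of (delta_ticks, message) pairs, the way a
    Standard MIDI File stores them.
    """
    absolute = []
    for i, track in enumerate(tracks):
        tick = 0
        for delta, msg in track:
            tick += delta          # delta time -> absolute time
            absolute.append((tick, i, msg))
    absolute.sort()                # ties keep track order via index i
    return [(tick, msg) for tick, _, msg in absolute]

tempo_track = [(0, "set_tempo 500000"), (480, "set_tempo 250000")]
note_track  = [(0, "note_on C4"), (240, "note_off C4")]
merged = merge_tracks([tempo_track, note_track])
print(merged)
```

After merging, every message carries an absolute tick, so per-channel state (program, active notes, tempo) can be tracked in a single pass.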
The MIDI specification says:
ASSIGNMENT OF NOTE ON/OFF COMMANDS
If an instrument receives two or more Note On messages with the same key number and MIDI channel, it must make a determination of how to handle the additional Note Ons. It is up to the receiver as to whether the same voice or another voice will be sounded, or if the messages will be ignored. The transmitter, however, must send a corresponding Note Off message for every Note On sent. If the transmitter were to send only one Note Off message, and if the receiver in fact assigned the two Note On messages to different voices, then one note would linger. Since there is no harm or negative side effect in sending redundant Note Off messages this is the recommended practice.
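Following the spec's recommendation above, a robust parser can keep a per-(channel, pitch) counter of sounding voices: note_on increments it, note_off decrements it, and redundant note_offs are silently ignored. A minimal sketch:

```python
from collections import Counter

active = Counter()  # (channel, pitch) -> number of sounding voices

def note_on(channel, pitch):
    active[(channel, pitch)] += 1

def note_off(channel, pitch):
    if active[(channel, pitch)] > 0:
        active[(channel, pitch)] -= 1
    # else: redundant note_off; the spec says these are harmless,
    # so simply ignore it

note_on(0, 60)
note_on(0, 60)       # second voice on the same key
note_off(0, 60)
note_off(0, 60)
note_off(0, 60)      # redundant, ignored
print(active[(0, 60)])  # 0
```

With a counter (rather than a boolean), overlapping note_on/note_off pairs at the same pitch close in the right order, and a stray note_off never drives the count negative.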
The General MIDI System Level 1 Developer Guidelines say that in response to a “GM System On” message, a device should set Program Change to 0. So you can assume this to be the initial value for channels that have notes without a preceding Program Change.
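That GM default makes the missing-program_change case easy to handle: track the program per channel with 0 as the implicit initial value, for instance with a defaultdict:

```python
from collections import defaultdict

program = defaultdict(int)  # channel -> program; GM default is 0

def program_change(channel, number):
    program[channel] = number

program_change(9, 48)
print(program[9], program[0])  # 48 0  (channel 0 never saw a change)
```

Channels that never receive a program_change simply report program 0, matching the GM guideline.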
The Standard MIDI Files specification says that
tempo information should always be stored in the first MTrk chunk.
But "should" is not "must".
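Since "should" is not "must", the safe approach is to collect set_tempo events from all tracks into one tempo map keyed by absolute tick, with the SMF default tempo (500000 µs per quarter note, i.e. 120 BPM) as a fallback. A sketch using illustrative tuples rather than real Mido messages:

```python
def build_tempo_map(tracks):
    """Collect set_tempo events from *all* tracks into one map.

    Each track is a list of (delta_ticks, message_type, value)
    tuples; scanning every track guards against files that put tempo
    changes outside the first MTrk chunk.
    """
    tempo_map = []
    for track in tracks:
        tick = 0
        for delta, mtype, value in track:
            tick += delta
            if mtype == "set_tempo":
                tempo_map.append((tick, value))
    tempo_map.sort()
    return tempo_map or [(0, 500000)]   # SMF default: 120 BPM

tracks = [
    [(0, "set_tempo", 500000)],
    [(0, "note_on", 60), (960, "set_tempo", 250000)],
]
print(build_tempo_map(tracks))  # [(0, 500000), (960, 250000)]
```

When converting ticks to seconds, look up the latest tempo entry at or before each event's absolute tick.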
I am using the latest version of Veins. I have been playing with it for a while and understand the basics now. I followed the TicToc tutorial for OMNeT++, but I still couldn't figure out how to solve the following problem:
I want vehicles and RSUs to send messages to each other. I want these messages to be sent in all four categories. When a message is received, I want to measure the time it took to travel from source to destination.
By default Veins can send data, and based on this post I know that I have to change some parts of TraCIDemo11p, but I couldn't figure out what. It would be great if someone could provide an answer.
To answer my own question: I modified BaseWaveAppLayer.cc to accomplish my goal (though this is not the right way to do it; the right way would be to extend the class and make your changes in the subclass, but since I just wanted to make changes quickly I chose the quicker way). I modified the method for sending beacons, since beacons are scheduled to be sent based on a time the user can specify in the .ini file. Now, every time a beacon is scheduled to be sent, I randomly generate a priority from the range [0-4) and assign it to the packet. This way I get to send beacons with different priorities over the network.
I also had a requirement of sending each priority at a different rate. To achieve this, I implemented the random generation function such that certain numbers in the range are generated more often than others; it's biased. As an example, in the .ini file I would specify that priorities 0-2 should each be sent at a rate of 0.2, while priority 3 should be sent at a rate of 0.4 (this can be interpreted as the sending rate for each priority). The random generation function would then generate 3 twice as often as any other number, while 0, 1 and 2 would be generated the same number of times.
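The biased generation described above is just weighted random sampling. A sketch of the idea in Python (the C++ code in BaseWaveAppLayer.cc would use OMNeT++'s own RNG calls instead; the rate dictionary below is a hypothetical stand-in for the .ini parameters):

```python
import random
from collections import Counter

def make_priority_picker(rates, seed=None):
    """Pick a beacon priority with probability proportional to its rate.

    `rates` maps priority -> configured sending rate.
    """
    rng = random.Random(seed)
    priorities = list(rates)
    weights = [rates[p] for p in priorities]
    return lambda: rng.choices(priorities, weights=weights)[0]

# Priorities 0-2 at rate 0.2 each, priority 3 at rate 0.4:
pick = make_priority_picker({0: 0.2, 1: 0.2, 2: 0.2, 3: 0.4}, seed=1)
counts = Counter(pick() for _ in range(10000))
print(counts[3] > counts[0])  # priority 3 drawn roughly twice as often
```

Using explicit weights keeps the bias configurable from one place instead of hard-coding which number is duplicated in the generator.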