What is the difference between the two lengths in pcap's packet header? - pcap

Here is the structure of the packet header in pcap:
struct pcap_pkthdr {
    struct timeval ts;   /* time stamp */
    bpf_u_int32 caplen;  /* length of portion present */
    bpf_u_int32 len;     /* length this packet (off wire) */
};
I wonder what the real difference between caplen and len is. And where are they used?

len is the actual length of the packet on the wire. caplen is the length that was actually captured and is therefore present in the pcap file. caplen can be equal to len, but it can also be smaller, never larger.
How many bytes of a packet get captured can be specified, for example, in tcpdump with -s size. While on many systems tcpdump captures up to 64 KiB per packet by default, on OpenBSD, for example, it captures only 116 bytes by default.
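As a minimal sketch of where the two fields matter when reading a capture back with libpcap (the file name capture.pcap is just a placeholder):
#include <pcap.h>
#include <stdio.h>

/* Called once per packet; only h->caplen bytes of 'bytes' are valid,
   even if the original packet (h->len) was longer. */
static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user; (void)bytes;
    if (h->caplen < h->len)
        printf("truncated: %u of %u bytes captured\n", h->caplen, h->len);
    else
        printf("full packet: %u bytes\n", h->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline("capture.pcap", errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_loop(p, 0, handler, NULL);  /* 0 = process all packets */
    pcap_close(p);
    return 0;
}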

Related

UDP header is 16 bytes but there's actually 24 bytes

I am looking to understand the UDP header and I see that it's actually 24 bytes, seen as
struct sockaddr_in {
    short sin_family;         // e.g. AF_INET              //4 bytes
    unsigned short sin_port;  // e.g. htons(3490)          //4 bytes
    struct in_addr sin_addr;  // see struct in_addr, below //8 bytes
    char sin_zero[8];         // zero this if you want to  //8 bytes
};
struct in_addr {
    unsigned long s_addr;     // load with inet_aton()
};
According to this explanation it's 16 bytes. Since sin_zero[8] isn't used anywhere, is it 16 bytes? (See the UDP header reference linked above.) The struct size is still 24 bytes. Am I missing something?
Thanks!
What you have in your question are the C structures for expressing a socket address.
That is a different animal from what actually gets sent on the wire as a UDP header.
Basically, the IPv4 header is 20 bytes and the UDP header on top of that is 8 bytes, as is also explained by the Geeks for Geeks reference in your question.
I recommend looking at https://en.wikipedia.org/wiki/User_Datagram_Protocol, or installing Wireshark and capturing a UDP packet to see what it looks like.
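For reference, a sketch of what the on-wire UDP header looks like when written out as a C struct (this is an illustration, not a system header; every field is 16 bits wide and stored in network byte order):
#include <stdint.h>

/* The 8-byte UDP header as it appears on the wire. */
struct udp_header {
    uint16_t source_port;  /* sending port */
    uint16_t dest_port;    /* receiving port */
    uint16_t length;       /* UDP header + payload length, in bytes */
    uint16_t checksum;     /* optional for IPv4; 0 if unused */
};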

How to handle fields inside network headers whose lengths are not multiples of 8 bits

I just started learning socket programming, and I'm trying to implement TCP/UDP protocols using raw sockets.
IP Header
 0      7 8     15 16    23 24    31
+--------+--------+--------+--------+
|Ver.|IHL|DSCP|ECN|   Total length  |
+--------+--------+--------+--------+
|  Identification |Flags|  Offset   |
+--------+--------+--------+--------+
|  TTL   |Protocol| Header Checksum |
+--------+--------+--------+--------+
|         Source IP address         |
+--------+--------+--------+--------+
|      Destination IP address       |
+--------+--------+--------+--------+
When writing the IP header, in the Flags and Offset part, the length of Offset is not a multiple of 8 bits, so I take Flags and Offset together as a whole.
uint8_t flags = 0;
uint16_t offset = htons(6000); // more than 1 byte, so we need to use htons
// In C, we can left-shift offset by 3 bits (since it's in big endianness),
// then convert flags to uint16_t, and then merge them together.
// In some other languages, for example Haskell,
// htons-like functions may return a bytestring which is not an instance of Bits,
// so we need to unpack it back into a list of uint8 in order to use bitwise operations.
This method is not very clean. I'm wondering what the usual way is to construct a bytestring when its components are longer than 1 byte and their endianness also needs to be taken into account.
In C, the usual way would be to declare a uint16_t, uint32_t, or uint64_t temporary variable, use bitwise operators to assemble the bits within that variable, and then use htons() or htonl() to convert the bits into network (aka big-endian) order.
For example, the Flags and Offset fields, taken together, constitute a 16-bit word. So:
uint8_t flags = /* some 3-bit value */;
uint16_t offset = /* some 13-bit value */;
/* Flags occupy the top 3 bits of the word, the fragment offset the lower 13;
   the combined word sits at byte offset 6 of the IPv4 header. */
uint16_t flagsAndOffsetBigEndian = htons(((uint16_t)flags << 13) | offset);
memcpy(&header[6], &flagsAndOffsetBigEndian, sizeof(uint16_t));
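As a usage sketch (the values are just for illustration, and header is assumed to be the raw IPv4 header buffer used above): to set only the Don't Fragment flag (value 2 of the 3-bit Flags field) with a zero fragment offset:
uint16_t df = htons((uint16_t)2 << 13);  /* 0x4000 on the wire: DF set, offset 0 */
memcpy(&header[6], &df, sizeof(uint16_t));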

Variable sized i2c reads Raspberry

I am trying to interface an A71CH with a Raspberry Pi 3 over I2C. The device requires repeated starts, and when a read request is made, the first byte the device sends is always the length of the whole message. When making a read, instead of reading a fixed-size message, I want to read the first byte and then send a NACK to the slave once the number of bytes indicated by that first byte has been received. I used the following code but could not get the results I expected, because it only reads one byte and then sends a NACK, as you can see below.
struct i2c_rdwr_ioctl_data packets;
struct i2c_msg messages[2];
int r = 0;
int i = 0;

if (bus != I2C_BUS_0) // change if bus 0 is not the correct bus
{
    printf("axI2CWriteRead on wrong bus %x (addr %x)\n", bus, addr);
}

messages[0].addr  = axSmDevice_addr;
messages[0].flags = 0;
messages[0].len   = txLen;
messages[0].buf   = pTx;

// NOTE:
// By setting the 'I2C_M_RECV_LEN' bit in 'messages[1].flags' one ensures
// the I2C Block Read feature is used.
messages[1].addr  = axSmDevice_addr;
messages[1].flags = I2C_M_RD | I2C_M_RECV_LEN | I2C_M_IGNORE_NAK;
messages[1].len   = 256;
messages[1].buf   = pRx;
messages[1].buf[0] = 1;

// NOTE:
// By passing the two message structures via the packets structure as
// a parameter to the ioctl call one ensures a Repeated Start is triggered.
packets.msgs  = messages;
packets.nmsgs = 2;

// Send the request to the kernel and get the result back
r = ioctl(axSmDevice, I2C_RDWR, &packets);
Is there any way to make variable-sized I2C reads? What can I do to make it work? Thanks for looking.
The Raspberry Pi doesn't support SMBus Block Reads; the only way to overcome this is to do bit-banging on the GPIO pins. As @Ian Abbott mentioned above, I managed to modify the bbI2CZip function to fit my needs by checking the first byte of the received message and updating the read length afterwards.
I had a similar issue with the rpi3. I wanted to read exactly 32 bytes of data from a register on a slave device, but i2c_smbus_read_block_data() was returning -71 and errno 71 EPROTO.
The solution was to use i2c_smbus_read_i2c_block_data() instead of i2c_smbus_read_block_data().
/* Until kernel 2.6.22, the length is hardcoded to 32 bytes. If you
   ask for less than 32 bytes, your code will only work with kernels
   2.6.23 and later. */
extern __s32 i2c_smbus_read_i2c_block_data(int file, __u8 command, __u8 length,
                                           __u8 *values);
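A minimal usage sketch (the device path, slave address, and register below are placeholders; depending on the distribution the SMBus helpers come from <i2c/smbus.h> in libi2c-dev or as inline functions in <linux/i2c-dev.h>):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

int main(void)
{
    __u8 buf[32];
    int file = open("/dev/i2c-1", O_RDWR);   /* I2C bus 1 on the rpi3 */
    if (file < 0) { perror("open"); return 1; }
    if (ioctl(file, I2C_SLAVE, 0x48) < 0) {  /* 0x48 is a placeholder slave address */
        perror("ioctl");
        return 1;
    }
    /* Read 32 bytes starting at register 0x00 (placeholder register). */
    __s32 n = i2c_smbus_read_i2c_block_data(file, 0x00, 32, buf);
    if (n < 0)
        perror("i2c_smbus_read_i2c_block_data");
    else
        printf("read %d bytes\n", n);
    return 0;
}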

using FFmpeg, how to decode H264 packets

I'm new to FFmpeg and struggling to decode H264 packets which can be obtained as an array of uint8_t.
After many investigations, I think it should be possible to just put the array into an AVPacket like the one below
AVPacket *avpkt = (AVPacket *)malloc(sizeof(AVPacket) * 1);
av_init_packet(avpkt);
avpkt->data = ct; // ct is the array
avpkt->size = .... // note: the AVPacket field is named size, not length
and decode it with avcodec_decode_video2().
A part of the code is like:
...
codec = avcodec_find_decoder(CODEC_ID_H264);
gVideoCodecCtx = avcodec_alloc_context();
gFrame = avcodec_alloc_frame();
avcodec_decode_video2(gVideoCodecCtx, gFrame, &frameFinished, packet);
...
I guess I set all the required properties properly, but this function only returns -1.
I just found the -1 is coming from
ret = avctx->codec->decode(avctx, picture, got_picture_ptr, avpkt);
in avcodec_decode_video2().
Actually, what I'm wondering is how I can decode H264 packets (without the RTP header) with avcodec_decode_video2().
Updated:
OK, I'm still trying to find a solution. What I'm doing now is the below
** The H264 stream in this RTP stream is encoded by FU-A
Receive an RTP packet
Check whether the second byte of the RTP header is > 0, which means it's the first packet (and possibly will be followed by more)
See if the next RTP packet also has > 0 at its second byte; if so, the previous frame was a complete NAL, otherwise the packet should be appended to the previous packet
Remove all the RTP headers of the packets so that each one only contains FU indicator | FU header | NAL
Try to decode it with avcodec_decode_video2()
But it's only returning -1.....
Am I supposed to remove FU indicator and header too??
Any suggestion will be very appreciated
Actually, what I'm wondering is if I can decode H264 packets (without RTP header) by avcodec_decode_video2().
You may need to pre-process the RTP payload(s) (re-assemble fragmented NALUs, split aggregated NALUs) before passing NAL units to the decoder if you use packetization modes other than single NAL unit mode. The NAL unit types (STAP, MTAP, FU) allowed in the stream depend on the packetization mode. Read RFC 6184 for more info on packetization modes.
Secondly, while I am not that familiar with FFmpeg, it could be more of a general H.264 decoding issue: you must always initialise the decoder with the H.264 sequence parameter set (SPS) and picture parameter set (PPS) before you will be able to decode other frames. Have you done that?
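A rough sketch of the FU-A re-assembly plus Annex B framing (start codes) that the decoder typically expects; the reassembly buffer and feed_to_decoder() are hypothetical placeholders, and error/bounds checks are omitted:
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint8_t nal_buf[1 << 20];  /* reassembly buffer, size chosen arbitrarily */
static size_t  nal_len;

void feed_to_decoder(const uint8_t *data, size_t size);  /* placeholder: wrap in an AVPacket */

/* payload points just past the RTP header; len is the payload length. */
void handle_fu_a(const uint8_t *payload, size_t len)
{
    static const uint8_t start_code[4] = { 0, 0, 0, 1 };
    uint8_t fu_indicator = payload[0];
    uint8_t fu_header    = payload[1];

    if (fu_header & 0x80) {                  /* S bit: first fragment of this NAL unit */
        /* Rebuild the original NAL header from the FU indicator (F and NRI bits)
           and the FU header (NAL unit type), and prepend an Annex B start code. */
        uint8_t nal_header = (fu_indicator & 0xE0) | (fu_header & 0x1F);
        memcpy(nal_buf, start_code, 4);
        nal_len = 4;
        nal_buf[nal_len++] = nal_header;
    }
    /* Append the fragment payload, skipping the FU indicator and FU header bytes. */
    memcpy(nal_buf + nal_len, payload + 2, len - 2);
    nal_len += len - 2;

    if (fu_header & 0x40)                    /* E bit: last fragment of this NAL unit */
        feed_to_decoder(nal_buf, nal_len);
}
The SPS and PPS NAL units, framed the same way, would be fed to the decoder (or placed in the codec context's extradata) before the first coded frame.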
I don't think that you will be able to decode H264 packets without the RTP header, as quite a bit of video stream information is embedded in the RTP headers. At the same time, I guess it is possible that all the video stream information can be duplicated in the RTP video packets. So it also depends on how the stream is generated.
This is my working code
bool VideoDecoder::decode(const QByteArray &encoded)
{
    AVPacket packet;
    av_new_packet(&packet, encoded.size());
    memcpy(packet.data, encoded.data(), encoded.size());
    //TODO: use AVPacket directly instead of Packet?
    //TODO: some decoders might in addition need other fields like flags&AV_PKT_FLAG_KEY
    int ret = avcodec_decode_video2(d->codec_ctx, d->frame, &d->got_frame_ptr, &packet);
    av_free_packet(&packet);
    if ((ret < 0) || (!d->got_frame_ptr))
        return false;

    d->sws_ctx = sws_getCachedContext(d->sws_ctx
        , d->codec_ctx->width, d->codec_ctx->height, d->codec_ctx->pix_fmt
        , d->width, d->height, d->pix_fmt
        , (d->width == d->codec_ctx->width && d->height == d->codec_ctx->height) ? SWS_POINT : SWS_BICUBIC
        , NULL, NULL, NULL
        );

    int v_scale_result = sws_scale(
        d->sws_ctx,
        d->frame->data,
        d->frame->linesize,
        0,
        d->codec_ctx->height,
        d->picture.data,
        d->picture.linesize
        );
    Q_UNUSED(v_scale_result);

    if (d->frame->interlaced_frame)
        avpicture_deinterlace(&d->picture, &d->picture, d->pix_fmt, d->width, d->height);
    return true;
}

Decoding ima4 audio format

To reduce the download size of an iPhone application I'm compressing some audio files. Specifically I'm using afconvert on the command line to change .wav format to .caf format w/ ima4 compression.
I've read this (wooji-juice.com) awesome post about this exact topic. I'm having trouble w/ the "decoding ima4 packets" step. I've looked at their sample code and I'm stuck. Please help w/ some pseudo code or sample code that can guide me in the right direction.
Thanks!
Additional info:
Here is what I've completed and where I'm having trouble...
I can play .wav files in both the simulator and on the phone.
I can compress .wav files to .caf w/ ima4 compression using afconvert on the command line. I'm using the SoundEngine that came w/ CrashLanding (I fixed one memory leak).
I modified the SoundEngine code to look for the mFormatID 'ima4'.
I don't understand the blog post linked above starting w/ "Calculating the size of the unpacked data". Why do I need to do this? Also, what does the term "packet" refer to? I'm very new to any sort of audio programming.
After gathering all the data from Wooji-Juice, Multimedia Wiki and Apple, here is my proposal (it may need some experimentation):
File structure
Apple IMA4 files are made of packets of 34 bytes. This is the packet unit used to build the file.
Each 34-byte packet has two parts:
the first 2 bytes contain the preamble: an initial predictor and a step index
the remaining 32 bytes contain the sound nibbles (a 4-bit nibble is used to retrieve a 16-bit sample)
Each packet has 32 bytes of compressed data, which represent 64 16-bit samples.
If the sound file is stereo, the packets are interleaved (one for the left, one for the right); there must be an even number of packets.
Decoding
Each packet of 34 bytes will lead to the decompression of 64 16-bit samples, so the size of the uncompressed data is 128 bytes per packet.
The decoding pseudo code looks like:
int[] ima_index_table = ... // Index table from [Multimedia Wiki][2]
int[] ima_step_table = ...  // Step table from [Multimedia Wiki][2]
byte[] packet = ...         // A packet of 34 compressed bytes
short[] output = ...        // The output buffer of 128 bytes (64 16-bit samples)
int preamble = (packet[0] << 8) | packet[1];
int predictor = (short)(preamble & 0xFF80); // Upper 9 bits, sign-extended; see [Multimedia Wiki][2]
int step_index = preamble & 0x007F;         // Lower 7 bits; see [Multimedia Wiki][2]
int step = ima_step_table[step_index];
int diff;
int i;
int j = 0;
for (i = 2; i < 34; i++) {
    byte data = packet[i];
    int lower_nibble = data & 0x0F;
    int upper_nibble = (data & 0xF0) >> 4;

    // Decode the lower nibble: bits 0-2 are the magnitude, bit 3 is the sign
    step_index += ima_index_table[lower_nibble];
    if (step_index < 0) step_index = 0;
    if (step_index > 88) step_index = 88;
    diff = (int)(((lower_nibble & 7) + 0.5f) * step / 4);
    predictor += (lower_nibble & 8) ? -diff : diff;
    step = ima_step_table[step_index];
    // Clamp the predictor so it stays in the 16-bit sample range
    if (predictor > 32767) predictor = 32767;
    if (predictor < -32768) predictor = -32768;
    output[j++] = (short) predictor;

    // Decode the upper nibble the same way
    step_index += ima_index_table[upper_nibble];
    if (step_index < 0) step_index = 0;
    if (step_index > 88) step_index = 88;
    diff = (int)(((upper_nibble & 7) + 0.5f) * step / 4);
    predictor += (upper_nibble & 8) ? -diff : diff;
    step = ima_step_table[step_index];
    // Clamp the predictor so it stays in the 16-bit sample range
    if (predictor > 32767) predictor = 32767;
    if (predictor < -32768) predictor = -32768;
    output[j++] = (short) predictor;
}
The term "packet" refers to a group of compressed audio samples with a header. You need the header to decode the data immediately following. If you consider your ima4 file to be a book, then each packet is a page. At the top are the values needed to decode that page, followed by the compressed audio.
That's why you need to calculate the size of the unpacked data (and then make space for it) -- since it's compressed, you need to convert data from compressed audio to uncompressed audio before you can output it. In order to allocate an output buffer, you need to know how big it has to be (note: you may need to output in chunks that are larger than a single packet at a time).
It looks like the typical structure, per the earlier "Overview" section, is that sets of 64 samples, each 16 bits (so 128 bytes) are translated to a 2-byte header and a 32-byte set of compressed samples (34 bytes in all). So, in the typical case, you can produce your expected output datasize by taking the input data size, dividing by 34 to get the number of packets, then multiplying by 128 bytes for the uncompressed audio per packet.
You shouldn't do that, though. It looks like you should instead query kAudioFilePropertyDataFormat to get the mBytesPerPacket -- this is the "34" value above, and mFramesPerPacket -- this is the 64, above, that gets multiplied by 2 (for 16-bit samples) to make 128 bytes of output.
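A sketch of that query (error handling omitted; audioFile is assumed to be an AudioFileID obtained earlier, e.g. from AudioFileOpenURL):
#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

void printPacketSizes(AudioFileID audioFile)
{
    AudioStreamBasicDescription asbd;
    UInt32 size = sizeof(asbd);
    AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &size, &asbd);

    /* For ima4 these are typically 34 and 64, respectively. */
    UInt32 bytesPerPacket  = asbd.mBytesPerPacket;
    UInt32 framesPerPacket = asbd.mFramesPerPacket;

    /* Each frame decodes to one 16-bit sample per channel. */
    UInt32 outBytesPerPacket = framesPerPacket * sizeof(SInt16) * asbd.mChannelsPerFrame;
    printf("%u compressed bytes -> %u uncompressed bytes per packet\n",
           (unsigned)bytesPerPacket, (unsigned)outBytesPerPacket);
}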
Then, for each packet, you will need to run through the decoding described in the post. In somewhat longer pseudo C-code, assuming you are getting arrays of bytes, to handle the header:
packet = GetPacket();
Header = (packet[0] << 8) | packet[1]; //Big-endian 16-bit value
step_index = Header & 0x007f; //Lower seven bits
predictor = Header & 0xff80; //Upper nine bits
for (i = 2; i < mBytesPerPacket; i++)
{
    nibble = packet[i] & 0x0f; //Low Nibble
    /* process that nibble, per the blog post -- be careful with sign-extension! */
    nibble = (packet[i] & 0xf0) >> 4; //High Nibble
    /* process that nibble, per the blog post -- be careful with sign-extension! */
}
The sign-extension above refers to the fact that the post involves handling each nibble both in an unsigned and a signed way. If the high bit of a nibble (bit 3) is a 1, then it is negative; additionally the bit-shift may do sign-extension. This is not handled in the above pseudocode.
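As an illustration of that sign handling (a hedged sketch, not the blog post's exact code): bit 3 of each nibble carries the sign and bits 0-2 the magnitude, so one way to apply a nibble to the running predictor is:
#include <stdint.h>

/* Apply one 4-bit ADPCM nibble to the running predictor.
   'step' is the current value taken from the step table. */
static int32_t apply_nibble(int32_t predictor, uint8_t nibble, int32_t step)
{
    int32_t magnitude = nibble & 0x07;                      /* bits 0-2 */
    int32_t diff = (int32_t)((magnitude + 0.5f) * step / 4);
    if (nibble & 0x08)                                      /* bit 3: sign */
        predictor -= diff;
    else
        predictor += diff;
    /* Clamp to the signed 16-bit sample range */
    if (predictor > 32767)  predictor = 32767;
    if (predictor < -32768) predictor = -32768;
    return predictor;
}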