I'm new to FFmpeg and struggling to decode H264 packets, which I obtain as an array of uint8_t.
After much investigation, I think it should be possible to just put the array into an AVPacket, like below:
AVPacket *avpkt = (AVPacket *)malloc(sizeof(AVPacket) * 1);
av_init_packet(avpkt);
avpkt->data = ct; // ct is the array
avpkt->size = ....
and decode it with avcodec_decode_video2().
Part of the code looks like this:
...
codec = avcodec_find_decoder(CODEC_ID_H264);
gVideoCodecCtx = avcodec_alloc_context();
gFrame = avcodec_alloc_frame();
avcodec_decode_video2(gVideoCodecCtx, gFrame, &frameFinished, packet);
...
I think I set all the required properties properly, but this function only returns -1.
I found that the -1 comes from
ret = avctx->codec->decode(avctx, picture, got_picture_ptr, avpkt);
inside avcodec_decode_video2().
Actually, what I'm wondering is: how can I decode H264 packets (without the RTP header) with avcodec_decode_video2()?
Updated:
OK, I'm still trying to find a solution. What I'm doing now is the following:
** The H264 stream in this RTP stream is packetized with FU-A
Receive an RTP packet
Check whether the second byte of the RTP header is > 0, which means it's the first packet (and will possibly be followed by more)
See if the next RTP packet also has > 0 at its second byte; if so, the previous frame was a complete NAL, and if it is < 0, the packet should be appended to the previous packet
Remove the RTP headers from the packets so each one looks like: FU indicator | FU header | NAL
Try to decode it with avcodec_decode_video2()
But it's only returning -1.....
Am I supposed to remove the FU indicator and FU header too?
Any suggestion will be very much appreciated.
You may need to pre-process the RTP payload(s) (re-assemble fragmented NALUs, split aggregated NALUs) before passing NAL units to the decoder if you use a packetization mode other than single NAL unit mode. The NAL unit types (STAP, MTAP, FU) allowed in the stream depend on the packetization mode. Read RFC 6184 for more info on packetization modes.
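As a rough illustration, here is a minimal sketch of FU-A re-assembly under RFC 6184 (the function name and the caller-managed out/out_len buffer are my own assumptions, not code from the question). Each call takes one RTP payload with the RTP header already stripped and accumulates a single Annex B NAL unit:
#include <stdint.h>
#include <string.h>

static const uint8_t start_code[4] = {0, 0, 0, 1};

/* Feed one FU-A payload (RTP header removed). Returns the complete
   NAL unit length when the end bit is seen, else 0. The caller must
   preserve *out_len between calls for fragments of the same NAL. */
size_t handle_fu_a(const uint8_t *payload, size_t len,
                   uint8_t *out, size_t *out_len)
{
    uint8_t fu_indicator = payload[0]; /* F | NRI | type (28 = FU-A) */
    uint8_t fu_header    = payload[1]; /* S | E | R | original NAL type */

    if (fu_header & 0x80) { /* S bit: first fragment */
        /* Rebuild the original NAL header from the FU indicator's
           F/NRI bits and the FU header's type bits, and prepend an
           Annex B start code for the decoder. */
        memcpy(out, start_code, 4);
        out[4] = (fu_indicator & 0xE0) | (fu_header & 0x1F);
        *out_len = 5;
    }
    memcpy(out + *out_len, payload + 2, len - 2); /* fragment data */
    *out_len += len - 2;

    return (fu_header & 0x40) ? *out_len : 0; /* E bit: last fragment */
}
This also answers the question above: yes, the FU indicator and FU header bytes must be removed; only the reconstructed NAL header plus the fragment payloads go to the decoder.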
Secondly, while I am not that familiar with FFmpeg, this could be more of a general H.264 decoding issue: you must always initialise the decoder with the H.264 sequence parameter set (SPS) and picture parameter set (PPS) before you can decode other frames. Have you done that?
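As a concrete example, here is a minimal sketch (not the asker's code; sps_nal, pps_nal, idr_nal and their lengths are hypothetical buffers standing in for the stream's real parameter sets and first IDR slice). One common approach is to prepend the Annex B SPS and PPS NAL units to the first IDR frame before decoding:
uint8_t buf[BUF_SIZE]; /* BUF_SIZE: hypothetical, large enough for all three NAL units */
size_t pos = 0;
memcpy(buf + pos, sps_nal, sps_len); pos += sps_len; /* 00 00 00 01 67 ... */
memcpy(buf + pos, pps_nal, pps_len); pos += pps_len; /* 00 00 00 01 68 ... */
memcpy(buf + pos, idr_nal, idr_len); pos += idr_len; /* 00 00 00 01 65 ... */
avpkt->data = buf;
avpkt->size = (int)pos;
/* the codec context must have been opened (avcodec_open2, or avcodec_open
   on older FFmpeg) before any decode call */
avcodec_decode_video2(gVideoCodecCtx, gFrame, &frameFinished, avpkt);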
I don't think you will be able to decode H264 packets without the RTP header, as quite a bit of video stream information is embedded in the RTP headers. At the same time, I guess it is possible that all the video stream information is duplicated in the RTP video packets, so it also depends on how the stream is generated.
This is my working code
bool VideoDecoder::decode(const QByteArray &encoded)
{
AVPacket packet;
av_new_packet(&packet, encoded.size());
memcpy(packet.data, encoded.data(), encoded.size());
//TODO: use AVPacket directly instead of Packet?
//TODO: some decoders might in addition need other fields like flags&AV_PKT_FLAG_KEY
int ret = avcodec_decode_video2(d->codec_ctx, d->frame, &d->got_frame_ptr, &packet);
av_free_packet(&packet);
if ((ret < 0) || (!d->got_frame_ptr))
return false;
d->sws_ctx = sws_getCachedContext(d->sws_ctx
, d->codec_ctx->width, d->codec_ctx->height, d->codec_ctx->pix_fmt
, d->width, d->height, d->pix_fmt
, (d->width == d->codec_ctx->width && d->height == d->codec_ctx->height) ? SWS_POINT : SWS_BICUBIC
, NULL, NULL, NULL
);
int v_scale_result = sws_scale(
d->sws_ctx,
d->frame->data,
d->frame->linesize,
0,
d->codec_ctx->height,
d->picture.data,
d->picture.linesize
);
Q_UNUSED(v_scale_result);
if (d->frame->interlaced_frame)
avpicture_deinterlace(&d->picture, &d->picture, d->pix_fmt, d->width, d->height);
return true;
}
Here is the structure of the packet header in pcap:
struct pcap_pkthdr {
struct timeval ts; /* time stamp */
bpf_u_int32 caplen; /* length of portion present */
bpf_u_int32 len; /* length this packet (off wire)*/
};
I wonder what the real difference between caplen and len is, and where they are used?
len is the actual length of the packet on the wire. caplen is the length that was captured and is thus present in the pcap file. caplen can be equal to len, but it can also be smaller.
How many bytes of a packet get captured can be specified, for example, in tcpdump with -s size. While on many systems tcpdump will capture up to 64k by default, on OpenBSD, for example, it will only capture 116 bytes by default.
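To make the difference concrete, here is a minimal sketch (the interface name and snaplen are assumptions) showing where both fields surface in libpcap; with a small snaplen you will see caplen < len for large packets:
#include <pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    printf("on wire: %u bytes, captured: %u bytes%s\n",
           h->len, h->caplen, h->caplen < h->len ? " (truncated)" : "");
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* snaplen of 64 bytes, like `tcpdump -s 64` */
    pcap_t *p = pcap_open_live("eth0", 64, 1, 1000, errbuf);
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }
    pcap_loop(p, 10, on_packet, NULL);
    pcap_close(p);
    return 0;
}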
I'm using Arduino for encoding the message. I have tried a required field and successfully encoded and decoded it back, but with a repeated field, after I encode it, the size of the buffer is 0, so I can't send my buffer to the other Arduino.
here is my code
file.ino
{
for(int i=0;i<7;i++)
message.header[i]=i+1;
//this is my variable; I declare it in the .proto as: repeated int32 header = 4 [(nanopb).max_count = 10, (nanopb).fixed_length = true];
stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
bool status = pb_encode(&stream, Message_fields, &message);
Serial.println(stream.bytes_written);
//when I print this after encoding, the data is lost, but when the field type is required, it shows some data bytes
}
Your header variable is a fixed-length array of 10 entries. That should be OK. If it were not a fixed-length one, there would be a separate header_count field that you would have to set to the actual number of entries. You can look inside the generated .pb.h to double-check that there is no header_count field.
Your code does not show the length of the buffer you have allocated. Is it perhaps too short? Though that message should only take about 14 bytes.
You could also check whether status is true, i.e. whether encoding was successful. If it was not, you can find more information in stream.errmsg.
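Putting those checks together, here is a minimal sketch (assuming the Message type and Message_fields descriptor generated from your .proto, and a buffer size chosen by me):
uint8_t buffer[64];                        /* make sure this is large enough */
Message message = Message_init_zero;
for (int i = 0; i < 7; i++)
    message.header[i] = i + 1;             /* fixed_length array: no header_count to set */
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
if (!pb_encode(&stream, Message_fields, &message)) {
    Serial.println(stream.errmsg);         /* says why encoding failed */
} else {
    Serial.println(stream.bytes_written);  /* should be around 14 bytes here */
}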
I am trying to interface an A71CH with a Raspberry Pi 3 over I2C. The device requires repeated starts, and when a read request is made, the first byte the device sends is always the length of the whole message. When making a read, instead of reading a fixed-size message, I want to read the first byte and then send a NACK to the slave once the number of bytes indicated by that first byte has been received. I used the following code but could not get the results I expected, because it only reads one byte and then sends a NACK, as you can see below.
struct i2c_rdwr_ioctl_data packets;
struct i2c_msg messages[2];
int r = 0;
int i = 0;
if (bus != I2C_BUS_0) // change if bus 0 is not the correct bus
{
printf("axI2CWriteRead on wrong bus %x (addr %x)\n", bus, addr);
}
messages[0].addr = axSmDevice_addr;
messages[0].flags = 0;
messages[0].len = txLen;
messages[0].buf = pTx;
// NOTE:
// By setting the 'I2C_M_RECV_LEN' bit in 'messages[1].flags' one ensures
// the I2C Block Read feature is used.
messages[1].addr = axSmDevice_addr;
messages[1].flags = I2C_M_RD | I2C_M_RECV_LEN | I2C_M_IGNORE_NAK;
messages[1].len = 256;
messages[1].buf = pRx;
messages[1].buf[0] = 1;
// NOTE:
// By passing the two message structures via the packets structure as
// a parameter to the ioctl call one ensures a Repeated Start is triggered.
packets.msgs = messages;
packets.nmsgs = 2;
// Send the request to the kernel and get the result back
r = ioctl(axSmDevice, I2C_RDWR, &packets);
Is there any way to make variable-sized I2C reads? What can I do to make it work? Thanks for looking.
The Raspberry Pi doesn't support SMBus Block Reads; the only way to overcome this is to bit-bang the protocol on the GPIO pins. As @Ian Abbott mentioned above, I managed to modify the bbI2CZip function to fit my needs by checking the first byte of the received message and updating the read length afterwards.
I had a similar issue with the RPi 3. I wanted to read exactly 32 bytes of data from a register on a slave device, but i2c_smbus_read_block_data() was returning -71 (errno 71, EPROTO).
The solution was to use i2c_smbus_read_i2c_block_data() instead of i2c_smbus_read_block_data().
/* Until kernel 2.6.22, the length is hardcoded to 32 bytes. If you
ask for less than 32 bytes, your code will only work with kernels
2.6.23 and later. */
extern __s32 i2c_smbus_read_i2c_block_data(int file, __u8 command, __u8 length,
__u8 *values);
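For reference, a minimal sketch of the working call (assumptions: file is an open /dev/i2c-* descriptor whose slave address was set with ioctl(file, I2C_SLAVE, addr), and 0x00 is the register to read; on recent systems the function comes from <i2c/smbus.h> in libi2c rather than being inlined in <linux/i2c-dev.h>):
__u8 values[32];
__s32 n = i2c_smbus_read_i2c_block_data(file, 0x00, 32, values);
if (n < 0)
    perror("i2c block read failed");
/* on success, n is the number of bytes actually read (up to 32) */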
After a detailed review of WWDC 2014, Session 513, I am trying to write an app on iOS 8.0 to decode and display a live H.264 stream. First of all, I construct an H.264 parameter set successfully. When I get an I frame with a 4-byte start code, like "0x00 0x00 0x00 0x01 0x65 ...", I put it into a CMBlockBuffer. Then I construct a CMSampleBuffer using the previous CMBlockBuffer. After that, I put the CMSampleBuffer into an AVSampleBufferDisplayLayer. Everything is OK (I checked the values returned) except the AVSampleBufferDisplayLayer does not show any video image. Since these APIs are fairly new to everyone, I couldn't find anybody who could resolve this problem.
I'll give the key code below, and I'd really appreciate it if you can help figure out why the video image can't be displayed. Thanks a lot.
(1) AVSampleBufferDisplayLayer initialised.
dspLayer is declared as a property of my main view controller.
@property(nonatomic,strong) AVSampleBufferDisplayLayer *dspLayer;
if(!_dspLayer)
{
_dspLayer = [[AVSampleBufferDisplayLayer alloc]init];
[_dspLayer setFrame:CGRectMake(90,551,557,389)];
_dspLayer.videoGravity = AVLayerVideoGravityResizeAspect;
_dspLayer.backgroundColor = [UIColor grayColor].CGColor;
CMTimebaseRef tmBase = nil;
CMTimebaseCreateWithMasterClock(NULL,CMClockGetHostTimeClock(),&tmBase);
_dspLayer.controlTimebase = tmBase;
CMTimebaseSetTime(_dspLayer.controlTimebase, kCMTimeZero);
CMTimebaseSetRate(_dspLayer.controlTimebase, 1.0);
[self.view.layer addSublayer:_dspLayer];
}
(2) In another thread, I get one H.264 I frame.
//construct h.264 parameter set ok
CMVideoFormatDescriptionRef formatDesc;
OSStatus formatCreateResult =
CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL, ppsNum+1, props, sizes, 4, &formatDesc);
NSLog([NSString stringWithFormat:@"construct h264 param set:%ld",formatCreateResult]);
//construct cmBlockbuffer .
//databuf points to H.264 data. starts with "0x00 0x00 0x00 0x01 0x65 ........"
CMBlockBufferRef blockBufferOut = nil;
CMBlockBufferCreateEmpty (0,0,kCMBlockBufferAlwaysCopyDataFlag, &blockBufferOut);
CMBlockBufferAppendMemoryBlock(blockBufferOut,
dataBuf,
dataLen,
NULL,
NULL,
0,
dataLen,
kCMBlockBufferAlwaysCopyDataFlag);
//construct cmsamplebuffer ok
size_t sampleSizeArray[1] = {0};
sampleSizeArray[0] = CMBlockBufferGetDataLength(blockBufferOut);
CMSampleTimingInfo tmInfos[1] = {
{CMTimeMake(5,1), CMTimeMake(5,1), CMTimeMake(5,1)}
};
CMSampleBufferRef sampBuf = nil;
formatCreateResult = CMSampleBufferCreate(kCFAllocatorDefault,
blockBufferOut,
YES,
NULL,
NULL,
formatDesc,
1,
1,
tmInfos,
1,
sampleSizeArray,
&sampBuf);
//put it into the AVSampleBufferDisplayLayer, just one frame. But I can't see any video frame in my view
if([self.dspLayer isReadyForMoreMediaData])
{
[self.dspLayer enqueueSampleBuffer:sampBuf];
}
[self.dspLayer setNeedsDisplay];
Your NAL unit start codes 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01 need to be replaced by a length header.
It was clearly stated in the WWDC session you are referring to that the Annex B start code needs to be replaced by an AVCC-conformant length header. You are basically remuxing from Annex B stream format to MP4 file format on the fly here (a simplified description, of course).
Your NAL unit header length when creating the parameter set is "4", so you need to prefix your VCL NAL units with a 4-byte length prefix. You have to specify it because in AVCC format the length header can be shorter.
Whatever you put inside the CMSampleBuffer will be OK; there is no sanity check of whether the contents can be decoded, just that you met the required parameters for combining arbitrary data with timing information and a parameter set.
Basically, with the data you put in, you told it that the VCL NAL units are 1 byte long: the start code 0x00 0x00 0x00 0x01, read as a 4-byte big-endian length, is 1. The decoder doesn't get the full NAL unit and bails out with an error.
Also make sure that when you create the parameter set, the PPS/SPS do not have a length byte added and that the Annex B start codes are stripped there as well.
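For illustration, here is a minimal sketch of that Annex B to AVCC conversion, reusing the question's dataBuf/dataLen and assuming the buffer holds exactly one NAL unit behind a 4-byte start code:
uint32_t nalLength = (uint32_t)(dataLen - 4);                /* NAL size without start code */
uint32_t bigEndianLength = CFSwapInt32HostToBig(nalLength);
memcpy(dataBuf, &bigEndianLength, sizeof(bigEndianLength));  /* overwrite 00 00 00 01 in place */
/* dataBuf is now AVCC-framed and can go into the CMBlockBuffer as before */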
Also, I recommend not using AVSampleBufferDisplayLayer but going through a VTDecompressionSession, so you can do things like colour correction or other processing that is needed inside a pixel shader.
It might be an idea to use VTDecompressionSessionDecodeFrame initially, as this will give you some feedback on the success of the decoding. If there is an issue with the decoding, the AVSampleBufferDisplayLayer doesn't tell you; it just doesn't display anything. I can give you some code to help with this if required; let me know how you get on, as I am attempting the same thing :)
I know this should be easy but...
I'm trying to get the MIDI channel number from a midiStatus message.
I have MIDI information coming in:
MIDIPacket *packet = (MIDIPacket*)pktList->packet;
for(int i = 0; i<pktList->numPackets; i++){
Byte midiStatus = packet->data[0];
Byte midiCommand = midiStatus>>4;
if(midiCommand == 0x80){} ///note off
if(midiCommand == 0x90){} ///note on
}
I tried
Byte midiChannel = midiStatus - midiCommand
but that did not seem to give me the correct values.
First of all, not all MIDI messages have channels in them. (For instance, clock messages and sysex messages don't.) Messages with channels are called "voice" messages.
In order to determine whether an arbitrary MIDI message is a voice message, you need to check the top 4 bits of the first byte. Then, once you know you have a voice message, the channel is in the low 4 bits of the first byte.
Voice messages are between 0x8n and 0xEn, where n is the channel.
Byte midiStatus = packet->data[0];
Byte midiCommand = midiStatus & 0xF0; // mask off all but top 4 bits
if (midiCommand >= 0x80 && midiCommand <= 0xE0) {
// it's a voice message
// find the channel by masking off all but the low 4 bits
Byte midiChannel = midiStatus & 0x0F;
// now you can look at the particular midiCommand and decide what to do
}
Also note that MIDI channels are between 0-15 in the message, but are normally presented to users as being between 1-16. You'll have to add 1 before you show the channel to the user, or subtract 1 if you take values from the user.
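For example (userChannel is a hypothetical value taken from your UI):
Byte displayChannel = midiChannel + 1; // 0-15 on the wire -> 1-16 for the user
Byte wireChannel = userChannel - 1;    // 1-16 from the user -> 0-15 on the wire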