V4L2: difference between Enqueue, Dequeue and Queue(ing) of the buffer?

I am a noob in V4L2 and trying to find out the difference between the various ioctl calls made during camera image capture. I am following this pdf from the linuxtv.org site.
I wanted to know the difference between the following :
Query, Enqueue, Dequeue and Queue(ing) of a buffer. Is there a particular sequence for fetching the raw data from the camera? Does the sequence vary between streaming and capture mode?
Can anyone please explain?

The following state machine describes a V4L2 buffer's life cycle:
The sequence is the same for streaming and for capture.
It's just that during capture one performs the Q/DQ cycle just once to obtain a single buffer (i.e. a single "frame"), while streaming repeats it continuously.
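Here is a minimal sketch of that sequence for a single mmap'ed buffer (device path and format negotiation assumed done, error handling omitted; treat it as illustrative, not production code):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main() {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) return 1;

    // REQBUFS: ask the driver to allocate one mmap-able buffer
    v4l2_requestbuffers req = {};
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    // QUERYBUF: ask where buffer 0 lives, then map it
    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    ioctl(fd, VIDIOC_QUERYBUF, &buf);
    void* mem = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, buf.m.offset);

    // QBUF (enqueue): hand the empty buffer to the driver
    ioctl(fd, VIDIOC_QBUF, &buf);

    // STREAMON: start capturing
    v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    // DQBUF (dequeue): block until the driver has filled the buffer
    ioctl(fd, VIDIOC_DQBUF, &buf);
    printf("captured %u bytes\n", buf.bytesused);

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}

For streaming, the tail of the loop re-queues the dequeued buffer with VIDIOC_QBUF and dequeues the next one, repeating until VIDIOC_STREAMOFF.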
Detailed info is available in this series of V4L2 articles:
Part 1: The Video4Linux2 API
Part 2: registration and open()
Part 3: Basic ioctl() handling
Part 4: Inputs and Outputs
Part 5a: Colors and formats
Part 5b: Format negotiation
Part 6a: Basic frame I/O
Part 6b: Streaming I/O
Part 7: Controls

Related

MFCreateFMPEG4MediaSink does not generate MSE-compatible MP4

I'm attempting to stream an H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream.
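A sketch of one way to configure such a writer (this assumes the sink writer path with MF_TRANSCODE_CONTAINERTYPE, which is one common way to get MFTranscodeContainerType_FMPEG4 output onto an IMFByteStream; names and setup details here are assumptions, not necessarily the asker's exact code):

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// pByteStream is the IMFByteStream connected to the web server.
HRESULT CreateFragmentedWriter(IMFByteStream* pByteStream,
                               IMFSinkWriter** ppWriter) {
    IMFAttributes* pAttr = nullptr;
    HRESULT hr = MFCreateAttributes(&pAttr, 3);
    if (FAILED(hr)) return hr;
    // Request fragmented MPEG4 output with low-latency, HW-accelerated encoding
    pAttr->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_FMPEG4);
    pAttr->SetUINT32(MF_LOW_LATENCY, TRUE);
    pAttr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
    hr = MFCreateSinkWriterFromURL(nullptr, pByteStream, pAttr, ppWriter);
    pAttr->Release();
    return hr;
}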
Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing
Below is an mp4dump of a moof/mdat fragment in the MPEG4 stream. It clearly shows that the tfhd contains an "illegal" base data offset parameter:
[moof] size=8+200
  [mfhd] size=12+4
    sequence number = 3
  [traf] size=8+176
    [tfhd] size=12+16, flags=1
      track ID = 1
      base data offset = 36690
    [trun] size=12+136, version=1, flags=f01
      sample count = 8
      data offset = 0
[mdat] size=8+1624
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating a MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
Based on roman-r's advice, I managed to fix the problem myself by intercepting the generated MPEG4 stream and performing the following modifications (sketched below):
Modify the Track Fragment Header box (tfhd):
  remove the base_data_offset parameter (reduces stream size by 8 bytes)
  set the default-base-is-moof flag
Add the missing Track Fragment Decode Time box (tfdt) (increases stream size by 20 bytes):
  set the baseMediaDecodeTime parameter
Modify the Track Fragment Run box (trun):
  adjust the data_offset parameter
The field descriptions are documented in https://www.iso.org/standard/68960.html (free download).
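A hypothetical sketch of the flag part of that interception, assuming each intercepted write delivers one complete moof box in a byte buffer (box layout per ISO/IEC 14496-12; actually removing the 8-byte base_data_offset, fixing the parent box sizes, adding tfdt and re-pointing trun data_offset are elided here):

#include <cstdint>
#include <cstring>
#include <vector>

static uint32_t ReadU32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

static void WriteU32(uint8_t* p, uint32_t v) {
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}

// Finds the first child box of the given type inside [begin, end).
static uint8_t* FindBox(uint8_t* begin, uint8_t* end, const char type[4]) {
    while (begin + 8 <= end) {
        uint32_t size = ReadU32(begin);
        if (size < 8) return nullptr;  // malformed or extended size; bail out
        if (std::memcmp(begin + 4, type, 4) == 0)
            return begin;
        begin += size;
    }
    return nullptr;
}

void PatchTfhdFlags(std::vector<uint8_t>& moof) {
    uint8_t* traf = FindBox(moof.data() + 8, moof.data() + moof.size(), "traf");
    if (!traf) return;
    uint8_t* tfhd = FindBox(traf + 8, traf + ReadU32(traf), "tfhd");
    if (!tfhd) return;
    // tfhd layout: size(4) type(4) version(1) flags(3) track_ID(4) ...
    uint32_t verFlags = ReadU32(tfhd + 8);
    verFlags &= ~0x000001u;   // clear base-data-offset-present
    verFlags |=  0x020000u;   // set default-base-is-moof
    WriteU32(tfhd + 8, verFlags);
    // TODO: drop the 8-byte base_data_offset payload, shrink the moof/traf/tfhd
    // size fields, insert tfdt, and adjust trun data_offset accordingly.
}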
Switching to MSE-based video streaming reduced the latency from ~2.0 to 0.7 sec. The latency was further reduced to 0-1 frames by calling IMFSinkWriter::NotifyEndOfSegment after each IMFSinkWriter::WriteSample call.
There's a sample implementation available on https://github.com/forderud/AppWebStream
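The per-frame flush pattern looks roughly like this (a sketch; writer setup is assumed elsewhere, see the linked repo for a complete implementation):

#include <mfreadwrite.h>

HRESULT WriteOneFrame(IMFSinkWriter* pWriter, DWORD streamIndex,
                      IMFSample* pSample) {
    HRESULT hr = pWriter->WriteSample(streamIndex, pSample);
    if (SUCCEEDED(hr))
        hr = pWriter->NotifyEndOfSegment(streamIndex); // forces fragment output
    return hr;
}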
I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play an fmp4 via MSE. The fmp4 had been created from an mp4 using the following ffmpeg command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
Based on this question, I was able to find out that to get the fmp4 working in Chrome I had to add the "default_base_moof" flag. So, after creating the fmp4 with the following command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
I was able to successfully play the video using Media Source Extensions.
This Mozilla article helped me find the missing flag:
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE
The 0.7 sec latency mentioned in your Status Update is caused by Media Foundation's MFTranscodeContainerType_FMPEG4 containerizer, which (for unknown reasons) gathers roughly 1/3 second worth of frames and outputs them in one MP4 moof/mdat box pair. This means you need to wait 19 frames before getting any output from MFTranscodeContainerType_FMPEG4 at 60 FPS.
To output a single MP4 moof/mdat pair per frame, simply lie that MF_MT_FRAME_RATE is 1 FPS (or any rate whose frame duration exceeds 1/3 sec). To play the video at the correct speed, use Media Source Extensions' <video>.playbackRate or, better, update the timescale (i.e. multiply by the real FPS) of the mvhd and mdhd boxes in your MP4 stream interceptor to get a correctly timed MP4 stream.
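A sketch of that frame-rate "lie" on the output media type (assuming a real rate of 60 FPS; the timescale fix-up happens later in the stream interceptor):

#include <mfapi.h>
#include <mfidl.h>

HRESULT LieAboutFrameRate(IMFMediaType* pOutputType) {
    // Real stream runs at 60 FPS; claim 1 FPS toward the MP4 containerizer
    // so it emits one moof/mdat pair per sample instead of batching ~1/3 sec.
    return MFSetAttributeRatio(pOutputType, MF_MT_FRAME_RATE, 1, 1);
}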
Doing that, the latency can be squeezed to under 20 ms. This is barely perceptible when you see the output side by side on localhost in chains such as Unity (research) -> NvEnc -> MFTranscodeContainerType_FMPEG4 -> WebSocket -> Chrome Media Source Extensions display.
Note that MFTranscodeContainerType_FMPEG4 still introduces one frame of delay (1st frame in, no output; 2nd frame in, 1st frame out; ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing your own FMPEG4 containerizer, but that is an order of magnitude more complex than intercepting Media Foundation's MP4 streams.
The problem was solved by following roman-r's advice and modifying the generated MPEG4 stream. See the answer above.
Another way to do this is to use the same code @Fredrik mentioned, but write your own IMFByteStream and check the chunks written to it.
FFmpeg writes the atoms almost one at a time, so you can check the atom name and make the modifications. It is the same thing. I wish there were an MSE-compliant Windows sink.
Is there one that can generate .ts files for HLS?

File types used for MISB KLV Encoding

I'm curious as to what file types are used for Motion Imagery Standards Board KLV (Key-Length-Value) encoding. I've read the documentation at the MISB site, which is quite huge. To my understanding, it indicates that MPEG-2 is usually used, so I tried to get an idea of what to look for in file extensions to recognize files that are capable of embedding KLV metadata.
My question is: if a file has an extension like these, *.TS or *.mpg, does that indicate potential KLV embedding? Are there any more types? Can an active video stream from a camera contain KLV?
Any response or elaboration is appreciated. Thanks in advance!
As the MISB docs state: only MPEG video streams for full motion video, and NITF images for high resolution wide area coverage.
Mostly MPEG-TS streams, AFAIK.
You can scan your files for strings like "KLVA" or some of the keys defined in the standard to quickly detect such files with high confidence.
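A rough illustration of such a scan (the 4-byte 06 0E 2B 34 prefix below is the SMPTE universal-label prefix that MISB keys start with; treat this as a heuristic sketch, not a validator):

#include <cstdio>
#include <cstring>
#include <vector>

bool LooksLikeKlvCarrier(const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    // SMPTE universal-label prefix; real detection should match the full
    // 16-byte keys defined in the MISB standard.
    const unsigned char keyPrefix[4] = {0x06, 0x0E, 0x2B, 0x34};
    std::vector<unsigned char> buf(1 << 20);
    size_t n;
    bool found = false;
    while (!found && (n = std::fread(buf.data(), 1, buf.size(), f)) >= sizeof keyPrefix) {
        // NOTE: a key split across two reads would be missed; acceptable
        // for a quick heuristic scan.
        for (size_t i = 0; i + sizeof keyPrefix <= n; ++i)
            if (std::memcmp(&buf[i], keyPrefix, sizeof keyPrefix) == 0) {
                found = true;
                break;
            }
    }
    std::fclose(f);
    return found;
}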
By scanning the titles of documents on the MISB site, one can glean that KLV can be encoded in AES3 Serial Digital Audio Streams, SMPTE 291 Ancillary Data Packets, and SDP (Session Description Protocol) [streams]. Reading a quick bit of the guidance .pdf also reveals that:
...MPEG-2 Transport Streams are a common multiplexing device for multiple data streams...
...but this does not limit KLV to only the MPEG-2 TS.
As a direct answer to your question: if the KLV stream is contained within an MPEG-2 Transport Stream, then yes - either *.ts or *.mpg files would qualify. (As would any other file capable of storing/containing an MPEG-2 Transport Stream.)
If you have further questions, reach out to me.

RTSP Source Filter with GDCL MP4 Muxer incompatibility

I'm trying to use the GDCL MP4 Muxer with my RTSP source filter. They work fine together, except that after stopping the graph, the muxer doesn't finalize the file and write the required tables to the end of the file via the file writer (some parts are written, starting from moov, but not the time table values). When I try another RTSP source filter (for which I don't have the source code), the table values are created with the GDCL MP4 Muxer.
But when I try Elecard's MP4 Muxer, it works fine with my RTSP source filter. So there is an incompatibility. I examined GDCL's source code but couldn't find what it was expecting from me. I already calculate and set timestamp values on the samples using the SetTime method, but GDCL still doesn't finalize the file. Is it caused by missing information, or a missing signal when the graph stops? What could the problem be, any ideas?
One thing you should be aware of regarding Geraint's MP4 mux is that it checks that incoming media samples have both start and stop times. You might be setting only .tStart/AM_SAMPLE_TIMEVALID, which still makes sense for video, but this would be a problem here.
So the samples have to have a stop time, or you need to fix this in the multiplexer code.
A typical symptom for the problem is that generated files are empty or of zero duration.
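The fix on the source-filter side is roughly this (a sketch; frameDuration is an assumed per-frame duration in 100 ns units):

#include <streams.h>  // DirectShow base classes

HRESULT StampSample(IMediaSample* pSample, REFERENCE_TIME start,
                    REFERENCE_TIME frameDuration) {
    REFERENCE_TIME stop = start + frameDuration;
    return pSample->SetTime(&start, &stop);  // set both start AND stop time
}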

Issues with iPhone Http Streaming with concatenated video files

We are seeing this when "tying" two video files together.
For example, we have an ad video that is segmented, and a content file that is also segmented.
We create a new file that has both the ad and content segment information together. However, we are seeing an issue where either the ad content is truncated or the content starts having A/V sync issues.
Both ad and content are segmented the same way (5-second segments). However, since ads are variable length, the resulting file may have a leftover segment, something like:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
fileSequence6.ts
#EXTINF:5,
fileSequence7.ts
#EXTINF:4,
fileSequence8.ts
#EXTINF:5,
fileSequence0.ts
#EXTINF:5,
fileSequence1.ts
#EXTINF:5,
fileSequence2.ts
#EXTINF:3,
fileSequence3.ts
Is this the proper way to play two files one after the other without rebuffering?
Should generate-variant-plist be used to create a playlist of the two files?
When you have a break in the stream to switch to a commercial, ad, or alternate video source, you want to introduce the discontinuity tag before the start of the next segment, for example:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
movie0.ts
#EXTINF:2,
movie1.ts
#EXT-X-DISCONTINUITY
#EXTINF:5,
commercial0.ts
#EXTINF:5,
commercial1.ts
#EXTINF:3,
commercial2.ts
This gets a little more complicated if you encrypt the streams, because they use progressive encryption based on the prior segment's encryption state and the sequence number, which together form an "initialization vector". If you break the stream, you have to reset the initialization vector so that encryption/decryption can continue uninterrupted. This is an involved process, so it's best to search for "Initialization Vector" in Apple's docs.
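For illustration, one way this looks in the playlist is to pin an explicit IV on the key tag at the discontinuity (the key URI and IV value here are made up):

#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/commercial.key",IV=0x00000000000000000000000000000003
#EXTINF:5,
commercial0.ts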

Playback skipping/seeking in an MP4 file

I'm trying to figure out the proper technique for skipping ahead or seeking within an mp4 (or m4a) audio file while playing it using the AudioFileStream and AudioQueue APIs on the iPhone.
If I pass the complete mp4 header (up to the mdat box) to an open AudioFileStream, the underlying audio file type is properly identified (in my case, AAC), and when I then pass the actual mdat data portion of the file, the AudioFileStream correctly begins generating audio packets; these can be sent to the AudioQueue, and playback works.
However, if I try a random-access approach to playing back the file, I can't seem to get it to work properly unless I always send the first frame of the mdat box to the AudioFileStream. If, instead, after sending the mp4 header to the AudioFileStream, I attempt to skip ahead to a later frame in the mdat by first calling AudioFileStreamSeek() and then passing the data for the associated packets, the AudioFileStream appears to generate audio packets, but when I pass these on to the AudioQueue and call AudioQueuePrime(), I always get a 'nope' error returned.
My question is this: am I always required to at least pass in the first packet of the mdat box before attempting to do random playback of other packets in the mp4 file?
I can't seem to find any documentation on random playback of sections of an mp4 file using an AudioFileStream and an AudioQueue. I've found Apple's QuickTime File Format pdf, which describes the technique of randomly seeking within an mp4 file, but it's just a high-level description and doesn't mention specific APIs (such as AudioFileStream).
Thanks for any insights.
It turns out the approach I was using with AudioFileStreamSeek() is valid; I just wasn't sending the full initial mp4 header to the AudioFileStreamParseBytes() routine.
The problem was that I had assumed the packets began immediately after the mdat box tag. By examining the data offset value (kAudioFileStreamProperty_DataOffset) returned by the AudioFileStream property listener callback, I discovered the true start of the packet data was 18 bytes later.
Those 18 bytes are considered part of the initial mp4 header and must be sent to the AudioFileStream parser before sending the data of arbitrary packets after calls to AudioFileStreamSeek().
If these extra bytes are left out, then the AudioQueuePrime() call will always fail with a 'nope' error, even though you may have sent valid parsed audio packets to the AudioQueue.
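For reference, the offset can be read inside the property-listener callback roughly like this (a sketch; the 18-byte figure was specific to the file above):

#include <AudioToolbox/AudioToolbox.h>

static SInt64 gPacketDataOffset = 0;  // true start of packet data in the file

static void PropertyListener(void* inClientData,
                             AudioFileStreamID inStream,
                             AudioFileStreamPropertyID inPropertyID,
                             AudioFileStreamPropertyFlags* ioFlags) {
    if (inPropertyID == kAudioFileStreamProperty_DataOffset) {
        UInt32 size = sizeof(gPacketDataOffset);
        // Everything before this offset counts as "header" and must be fed
        // to AudioFileStreamParseBytes() before any AudioFileStreamSeek().
        AudioFileStreamGetProperty(inStream, kAudioFileStreamProperty_DataOffset,
                                   &size, &gPacketDataOffset);
    }
}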