x264 IDR access unit with an SPS and a PPS - iPhone

I am trying to encode video in H.264 so that, when split with Apple's HTTP Live Streaming tools media file segmenter, it will pass the media file validator. I am getting two errors on the split MPEG-TS file:
WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS.
WARNING: 7 samples (17.073 %) do not have timestamps in track 257 (avc1).
After hours of research I think the "IDR" warning relates to not having keyframes in the right places in the segmented MPEG-TS file, so in my ffmpeg command I set -keyint_min 1 to ensure keyframes were at every frame, but this didn't work.
Although it would be great to get an answer, if anyone can shed any light on what an "IDR access unit with an SPS and a PPS" is, or what the timestamps warning means, I would be very grateful. Thanks.

The fix can be found in this thread: https://devforums.apple.com/thread/45830?tstart=15
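For reference, a hedged sketch of the usual approach: an IDR access unit is a keyframe from which decoding can start cold, and the segmenter wants one (preceded by its SPS/PPS parameter sets, which the MPEG-TS muxer normally repeats before each IDR) at the start of every segment. With a recent ffmpeg, forcing an IDR at each 10-second segment boundary looks roughly like this (the input name, the GOP length, and the 10 s segment duration are assumptions):

ffmpeg -i input.mov -c:v libx264 -g 100 -keyint_min 100 -sc_threshold 0 \
       -force_key_frames "expr:gte(t,n_forced*10)" -f mpegts output.ts

Note that -keyint_min only sets the minimum distance between keyframes; it does not force them, which is why setting it to 1 had no effect.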

MFCreateFMPEG4MediaSink does not generate MSE-compatible MP4

I'm attempting to stream an H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, and with MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream.
Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing
An mp4dump of a moof/mdat fragment in the MPEG4 stream clearly shows that the TFHD contains an "illegal" base-data-offset parameter:
[moof] size=8+200
  [mfhd] size=12+4
    sequence number = 3
  [traf] size=8+176
    [tfhd] size=12+16, flags=1
      track ID = 1
      base data offset = 36690
    [trun] size=12+136, version=1, flags=f01
      sample count = 8
      data offset = 0
[mdat] size=8+1624
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating a MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
Based on roman-r's advice, I managed to fix the problem myself by intercepting the generated MPEG4 stream and performing the following modifications (see the sketch after this list):
Modify the Track Fragment Header box (tfhd):
- remove the base_data_offset parameter (reduces stream size by 8 bytes)
- set the default-base-is-moof flag
Add the missing Track Fragment Decode Time box (tfdt) (increases stream size by 20 bytes):
- set the baseMediaDecodeTime parameter
Modify the Track Fragment Run box (trun):
- adjust the data_offset parameter
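A minimal C++ sketch of those three edits, assuming each moof arrives as one contiguous buffer holding a single traf with one tfhd and one trun (as in the mp4dump above). PatchMoofForMse and its helpers are hypothetical names, box layouts follow ISO/IEC 14496-12, and error handling is omitted:

#include <cstdint>
#include <cstring>
#include <vector>

static uint32_t ReadU32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8) | uint32_t(p[3]);
}
static void WriteU32(uint8_t* p, uint32_t v) {
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}

// Rewrites one moof box; the caller appends the unmodified mdat afterwards.
std::vector<uint8_t> PatchMoofForMse(const uint8_t* moof, size_t moofSize,
                                     uint64_t baseMediaDecodeTime) {
    std::vector<uint8_t> out(moof, moof + moofSize);

    // Walk the children of moof (skipping mfhd) to find the traf box.
    size_t traf = 8;
    while (traf + 8 <= out.size() && std::memcmp(&out[traf + 4], "traf", 4) != 0)
        traf += ReadU32(&out[traf]);
    size_t tfhd = traf + 8;  // tfhd is the first child of traf

    // 1) tfhd: clear base-data-offset-present (0x000001), set
    //    default-base-is-moof (0x020000), drop the 8-byte offset field.
    uint32_t flags = ReadU32(&out[tfhd + 8]) & 0x00FFFFFF;
    if (flags & 0x000001) {
        WriteU32(&out[tfhd + 8], (flags & ~0x000001u) | 0x020000u);
        out.erase(out.begin() + tfhd + 16, out.begin() + tfhd + 24);
        WriteU32(&out[tfhd], ReadU32(&out[tfhd]) - 8);  // shrink tfhd by 8
    }

    // 2) Insert a 20-byte version-1 tfdt (carrying baseMediaDecodeTime)
    //    directly after tfhd.
    size_t tfdtPos = tfhd + ReadU32(&out[tfhd]);
    uint8_t tfdt[20] = {0, 0, 0, 20, 't', 'f', 'd', 't', 1, 0, 0, 0};
    for (int i = 0; i < 8; ++i)  // 64-bit big-endian decode time
        tfdt[12 + i] = uint8_t(baseMediaDecodeTime >> (8 * (7 - i)));
    out.insert(out.begin() + tfdtPos, tfdt, tfdt + sizeof(tfdt));

    // 3) trun: with default-base-is-moof the data offset counts from the
    //    first byte of moof, so the first sample starts right after the
    //    patched moof plus the 8-byte mdat header.
    size_t trun = tfdtPos + sizeof(tfdt);
    WriteU32(&out[trun + 16], uint32_t(out.size() + 8));

    // Fix up the enclosing box sizes (net growth: -8 + 20 = +12 bytes).
    WriteU32(&out[traf], ReadU32(&out[traf]) + uint32_t(out.size() - moofSize));
    WriteU32(&out[0], uint32_t(out.size()));
    return out;
}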
The field descriptions are documented in https://www.iso.org/standard/68960.html (free download).
Switching to MSE-based video streaming reduced the latency from ~2.0 to 0.7 sec. The latency was further reduced to 0-1 frames by calling IMFSinkWriter::NotifyEndOfSegment after each IMFSinkWriter::WriteSample call.
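In code, that per-sample flush looks roughly like this (a hedged sketch; writer, streamIndex, and sample are assumed to be set up elsewhere):

// Flush the containerizer after every sample so each frame is emitted
// as its own moof/mdat pair instead of being buffered.
HRESULT hr = writer->WriteSample(streamIndex, sample);  // IMFSinkWriter*
if (SUCCEEDED(hr))
    hr = writer->NotifyEndOfSegment(streamIndex);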
There's a sample implementation available on https://github.com/forderud/AppWebStream
I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play a fragmented MP4 (fMP4) via MSE. The fMP4 had been created from an MP4 using the following ffmpeg command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
Based on this question I was able to find out that, to have the fMP4 working in Chrome, I had to add the "default_base_moof" flag. So, after creating the fMP4 with the following command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
I was able to successfully play the video using Media Source Extensions.
This Mozilla article helped me find the missing flag:
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE
The mentioned 0.7 sec latency (in your Status Update) is caused by Media Foundation's MFTranscodeContainerType_FMPEG4 containerizer, which (for unknown reasons) gathers roughly 1/3 second of frames and outputs them as one MP4 moof/mdat box pair. This means that you need to wait 19 frames before getting any output from MFTranscodeContainerType_FMPEG4 at 60 FPS.
To output a single MP4 moof/mdat pair per frame, simply lie that MF_MT_FRAME_RATE is 1 FPS (or anything longer than 1/3 sec per frame). To play the video at the correct speed, use Media Source Extensions' <video>.playbackRate, or better, update the timescale (i.e. multiply by the real FPS) of the mvhd and mdhd boxes in your MP4 stream interceptor to get a correctly timed MP4 stream.
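A hedged sketch of that frame-rate trick (mediaType is the IMFMediaType handed to the sink writer; the 60 FPS figure is an assumption):

#include <mfapi.h>

// Advertise 1 FPS so MFTranscodeContainerType_FMPEG4 emits one moof/mdat
// pair per sample instead of batching ~1/3 s of frames.
const UINT32 realFps = 60;
MFSetAttributeRatio(mediaType, MF_MT_FRAME_RATE, 1, 1);
// Later, in the MP4 stream interceptor, multiply the 32-bit timescale
// fields of the mvhd and mdhd boxes by realFps to restore correct timing.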
Doing that, the latency can be squeezed to under 20 ms. This is barely noticeable when you see the output side by side on localhost in chains such as Unity (research) -> NvEnc -> MFTranscodeContainerType_FMPEG4 -> WebSocket -> Chrome Media Source Extensions display.
Note that MFTranscodeContainerType_FMPEG4 still introduces a 1-frame delay (1st frame in, no output; 2nd frame in, 1st frame out; ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing your own fMP4 containerizer, but that is an order of magnitude more complex than intercepting Media Foundation's MP4 stream.
The problem was solved by following roman-r's advice and modifying the generated MPEG4 stream. See the answer above.
Another way to do this is to use the same code @Fredrik mentioned, but write my own IMFByteStream and check the chunks written to it.
FFmpeg writes the atoms roughly one at a time, so you can check the atom name and do the modifications there. It is the same thing. I wish there were an MSE-compliant Windows sink.
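A hedged sketch of that inspection inside a custom IMFByteStream::Write override (MyByteStream and ForwardToWebServer are hypothetical names; it assumes whole atoms arrive per call, as described above):

#include <cstring>
#include <mfobjects.h>

// Peek at the 4-character box type (bytes 4-7 of each write) and patch
// the boxes that need it before passing the data downstream.
HRESULT MyByteStream::Write(const BYTE* pb, ULONG cb, ULONG* pcbWritten) {
    if (cb >= 8 && std::memcmp(pb + 4, "moof", 4) == 0) {
        // patch tfhd/tfdt/trun here (see PatchMoofForMse above)
    }
    return ForwardToWebServer(pb, cb, pcbWritten);  // hypothetical downstream call
}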
Is there one that can generate .ts files for HLS?

What's the difference when I set -frag and -dash with MP4Box?

I have read the MP4Box documentation about MPEG-DASH, but I don't clearly understand the difference between "MP4Box -dash 10000 -frag 2000 largeFile.mp4" and "MP4Box -dash 10000 -frag 1000 largeFile.mp4". When I open the *.mpd file I find the duration of the SegmentList is 10023 (about 10 sec). Is the -frag 2000 or 1000 not used?
I'm designing an HTML5 video player (like this sample), and I'm using the MP4Box tool to create the DASH video.
But I don't clearly understand the difference when I convert my video with -frag 2000 versus -frag 1000. For example, I don't understand what it means for my video to have 10-second segments and 1-second fragments. Maybe my video player does not need this option?
GPAC contributor here. It is difficult to help you without a full example. I strongly recommend describing bugs on our bug tracker (https://github.com/gpac/gpac/issues).
When I open the *.mpd file I find the duration of the SegmentList is 10023 (about 10 sec). Is the -frag 2000 or 1000 not used?
Three points:
1) You probably get 10023 ms (instead of 10000 ms) because you may be using an old version of MP4Box. Please consider using the latest version.
2) Fragments are an MP4 feature that is not seen at the MPEG-DASH level. Segments are also an MP4 feature (basically a segment contains fragments) that is seen by MPEG-DASH. Therefore you can't see the fragment duration in the MPD, but it may have consequences on your playback (see the example after these points).
3) The blog article you mention (http://gpac.io/2011/02/02/mp4box-fragmentation-segmentation-splitting-and-interleaving/) contains all the information you need. If you think we can improve it, please leave a message there. Thanks!
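As a hedged illustration of point 2 (the durations come from your commands; the interpretation is the usual one, but check it against the blog article):

MP4Box -dash 10000 -frag 2000 largeFile.mp4

produces 10-second DASH segments that each contain five 2-second moof/mdat fragments, whereas -frag 1000 would put ten 1-second fragments in each segment. The MPD reports only the 10-second segment duration either way, which is why the two commands look identical at the manifest level; smaller fragments mainly let a player start decoding a segment before it has been fully downloaded.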

File types used for MISB KLV Encoding

I'm curious as to what file types are used for Motion Imagery Standards Board (MISB) KLV (Key-Length-Value) encoding. I've read the documentation at the MISB site, which is quite extensive. To my understanding, it indicates that MPEG-2 is usually used, so I tried to get an idea of what file extensions to look for to recognise files capable of embedding KLV metadata.
My question is: if a file has an extension like *.TS or *.mpg, does that indicate potential KLV embedding? Are there any more types? Can an active video stream from a camera contain KLV?
Any response or elaboration is appreciated. Thanks ahead!
As the MISB docs state, only MPEG video streams are used for full motion video, and NITF images for high resolution wide area coverage.
Mostly MPEG-TS streams, AFAIK.
You can scan your files for strings like "KLVA" or some of the keys defined in the standard to quickly detect such files with high confidence.
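A hedged C++ sketch of such a scan (the ASCII tag "KLVA" comes from the answer above; the 4-byte prefix 06 0E 2B 34 shared by SMPTE universal label keys is an assumption to verify against the MISB standard you target):

#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

// Returns true if the file contains either the ASCII tag "KLVA" or the
// 4-byte prefix common to SMPTE universal label keys (06 0E 2B 34).
bool LooksLikeKlvCarrier(const char* path) {
    std::ifstream f(path, std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    static const char ascii[] = "KLVA";
    static const char ul[] = {0x06, 0x0E, 0x2B, 0x34};
    for (size_t i = 0; i + 4 <= data.size(); ++i)
        if (std::memcmp(&data[i], ascii, 4) == 0 ||
            std::memcmp(&data[i], ul, 4) == 0)
            return true;
    return false;
}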
By scanning the titles of documents on the MISB site, one can glean that KLV can be encoded in AES3 serial digital audio streams, SMPTE 291 ancillary data packets, and SDP (Session Description Protocol) [streams]. Reading a quick bit of the guidance PDF also reveals that:
...MPEG-2 Transport Streams are a common multiplexing device for multiple data streams...
...but this does not limit KLV to only the MPEG-2 TS.
As a direct answer to your question: if the KLV stream is contained within an MPEG-2 Transport Stream, then yes - either *.ts or *.mpg files would qualify. (As would any other file capable of storing/containing an MPEG-2 Transport Stream.)
If you have further questions, reach out to me.

RTSP Source Filter with GDCL MP4 Muxer incompatibility

I'm trying to use the GDCL MP4 Muxer with my RTSP source filter. They work fine together, except that after stopping the graph the muxer doesn't finalize the file and write the required tables to the end of the file via the file writer (some parts are written, starting from moov, but not the time table values). When I try another RTSP source filter (for which I don't have the source code), the table values are created by the GDCL MP4 Muxer.
But when I try Elecard's MP4 Muxer, it works fine with my RTSP source filter, so there is an incompatibility. I examined GDCL's source code but couldn't find what it was expecting from me. I already calculate and set timestamp values on the samples using the SetTime method, but GDCL still doesn't finalize the file. Is it caused by missing information or a missing signal when the graph stops? What could the problem be, any ideas?
One thing you should be aware of regarding Geraint's MP4 mux is that it checks incoming media samples for both start and stop times. You might be setting only .tStart/AM_SAMPLE_TIMEVALID, which still makes sense for video, but this would be a problem.
So the samples have to have a stop time, or you need to fix this in the multiplexer code.
A typical symptom of the problem is that generated files are empty or of zero duration.
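A hedged DirectShow sketch of delivering both timestamps on the source filter's output samples (pSample, frameIndex, and isKeyFrame are assumed to exist; the 25 fps duration is an assumption):

// Give each media sample both a start and a stop time so the GDCL muxer
// can account for its duration (REFERENCE_TIME is in 100 ns units).
REFERENCE_TIME frameDuration = 400000;              // 40 ms per frame at 25 fps
REFERENCE_TIME tStart = frameIndex * frameDuration;
REFERENCE_TIME tStop  = tStart + frameDuration;
pSample->SetTime(&tStart, &tStop);                  // IMediaSample*
pSample->SetSyncPoint(isKeyFrame);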

Issues with iPhone Http Streaming with concatenated video files

We are seeing this when "tying" two video files together.
For example, we have an ad video that is segmented and a content file which is also segmented.
We create a new playlist which has both the ad and the content segment information together. However, we are seeing an issue where either the ad content is truncated or the content starts having A/V sync issues.
Both the ad and the content are segmented the same way, with 5-second segmentation. However, since ads are of variable length, the resulting file may have a leftover segment, something like:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
fileSequence6.ts
#EXTINF:5,
fileSequence7.ts
#EXTINF:4,
fileSequence8.ts
#EXTINF:5,
fileSequence0.ts
#EXTINF:5,
fileSequence1.ts
#EXTINF:5,
fileSequence2.ts
#EXTINF:3,
fileSequence3.ts
Is this the proper way to play 2 files one after the other without rebuffering?
Should generate-variant-plist be used to create a playlist of the 2 files?
When you have a break in the stream to switch to a commercial, ad, or alternate video source, you want to introduce the discontinuity tag before the start of the next segment, for example:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
movie0.ts
#EXTINF:2,
movie1.ts
#EXT-X-DISCONTINUITY
#EXTINF:5,
commercial0.ts
#EXTINF:5,
commercial1.ts
#EXTINF:3,
commercial2.ts
This gets a little more complicated if you encrypt the streams, because they use progressive encryption based on the prior segment's encryption state and the sequence number, which come together to form an "initialization vector". If you break the stream, you have to reset the initialization vector so that the encryption/decryption can continue uninterrupted. This is an involved process, so it's best to just search on "initialization vector" in Apple's docs.
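For illustration, a hedged playlist excerpt (the key URI and IV value are made up): declaring an explicit IV in the EXT-X-KEY tag after the discontinuity removes the dependence on the prior segment sequence:

#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/key2",IV=0x000102030405060708090A0B0C0D0E0F
#EXTINF:5,
commercial0.ts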