Issues with iPhone HTTP Streaming with concatenated video files

We are seeing this issue when "tying" two video files together. For example, we have an ad video that is segmented, and a content file that is also segmented.
We create a new playlist file that contains both the ad and the content segment information together. However, we are seeing an issue where either the ad content is truncated or the content starts having A/V sync issues.
Both ad and content are segmented the same way, with 5-second segments. However, since ads are variable length, the resulting playlist may have a leftover shorter segment, something like:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
fileSequence6.ts
#EXTINF:5,
fileSequence7.ts
#EXTINF:4,
fileSequence8.ts
#EXTINF:5,
fileSequence0.ts
#EXTINF:5,
fileSequence1.ts
#EXTINF:5,
fileSequence2.ts
#EXTINF:3,
fileSequence3.ts
Is this the proper way to play two files one after the other without rebuffering?
Should generate-variant-plist be used to build a playlist of the two files?

When you have a break in the stream to switch to a commercial, ad, or alternate video source, you want to introduce the discontinuity tag before the start of the next segment, for example:
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5,
movie0.ts
#EXTINF:2,
movie1.ts
#EXT-X-DISCONTINUITY
#EXTINF:5,
commercial0.ts
#EXTINF:5,
commercial1.ts
#EXTINF:3,
commercial2.ts
This gets a little more complicated if you encrypt the streams, because they use progressive encryption based on the prior segment's encryption state and the sequence number, which together form an "initialization vector". If you break the stream, you have to reset the initialization vector so that encryption/decryption can continue uninterrupted. This is an involved process, so it is best to search for "Initialization Vector" in Apple's docs.
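For illustration, here is a minimal sketch of what that can look like in the playlist itself. The optional IV attribute of the #EXT-X-KEY tag states the initialization vector explicitly, so you can reset it at the discontinuity (the key URIs and IV values below are made-up placeholders):
#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/movie.key",IV=0x00000000000000000000000000000000
#EXTINF:5,
movie0.ts
#EXTINF:2,
movie1.ts
#EXT-X-DISCONTINUITY
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/ad.key",IV=0x00000000000000000000000000000001
#EXTINF:5,
commercial0.ts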

Related

Is it possible to reorder and recombine fragments in fmp4?

I have a fragmented MP4 that I want to serve to users over HLS. It works if I just send it as is, but I need the ability to reorder the fragments in the video.
For example, given the initial video, which looks like this:
[original video format]
I want to reorganize the fragments and get this:
[expected video format]
I tried doing it locally, and it works in the VLC player (over HLS). For this I modified the sequence number of the fragments in the moof (mfhd). But when I try to play it remotely (HLS), it does not work. I think some (JavaScript) players expect additional information from each fragment, probably a time offset for example, but I cannot find which atom (box) contains this information. I have spent a lot of time searching and I am still at the very beginning of the problem.
I tried to modify the fragment sequence number, but it doesn't work.
The "Track Fragment Media Decode Time Box" (tfdt) stores the baseMediaDecodeTime which is the accumulative decode time.
Consider the following...
baseMediaDecodeTime must increase monotonically for each chunk.
This means you must update (replace) the tfdt entry of chunk with expected next tftd entry.
When you naively reorder the chunks, the baseMediaDecodeTime will be invalid.
The "Track Fragment Media Decode Time Box" (tfdt) is located inside each moof header at:
moof --> traf --> tfdt
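As a rough illustration of what "update the tfdt entry" means in practice, here is a minimal TypeScript sketch (a simplified box walk, not a production parser; the function names are my own) that overwrites baseMediaDecodeTime inside moof --> traf --> tfdt:

function findBox(buf: Uint8Array, type: string, start: number, end: number): number {
  // Walk sibling boxes in [start, end) and return the offset of the first
  // box whose 4-character type matches, or -1 if not found.
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  let pos = start;
  while (pos + 8 <= end) {
    const size = view.getUint32(pos);
    const name = String.fromCharCode(buf[pos + 4], buf[pos + 5], buf[pos + 6], buf[pos + 7]);
    if (name === type) return pos;
    if (size < 8) break; // size 0/1 (to-end-of-file / 64-bit size) not handled in this sketch
    pos += size;
  }
  return -1;
}

function patchTfdt(fragment: Uint8Array, newDecodeTime: number): void {
  const view = new DataView(fragment.buffer, fragment.byteOffset, fragment.byteLength);
  const moof = findBox(fragment, "moof", 0, fragment.length);
  if (moof < 0) throw new Error("moof not found");
  const traf = findBox(fragment, "traf", moof + 8, moof + view.getUint32(moof));
  if (traf < 0) throw new Error("traf not found");
  const tfdt = findBox(fragment, "tfdt", traf + 8, traf + view.getUint32(traf));
  if (tfdt < 0) throw new Error("tfdt not found");
  const version = fragment[tfdt + 8]; // full box header: 1 byte version + 3 bytes flags
  if (version === 1) {
    view.setBigUint64(tfdt + 12, BigInt(newDecodeTime)); // 64-bit baseMediaDecodeTime
  } else {
    view.setUint32(tfdt + 12, newDecodeTime); // 32-bit baseMediaDecodeTime
  }
}

You would call patchTfdt on each reordered fragment with the running total of the previous fragments' durations (in the track's timescale), and keep the mfhd sequence numbers increasing as well.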

In the Web Audio API, how do I obtain an array (e.g. a Float32Array) from a stream (e.g. a microphone stream) for several seconds?

I would like to fill an array from a stream for around ten seconds (I wish to do some processing on the data). So far I can:
(a) obtain the microphone stream using mediaRecorder
(b) use an analyser and analyser.getFloatTimeDomainData(dataArray) to obtain an array, but it is size-limited to only a little over half a second of data. I can also successfully output the data after processing back onto a stream and to outDestination.
(c) I have also experimented with obtaining a 'chunks' array from mediaRecorder directly, but the problem then is that I can't find any MIME type that would give me a simple array of values, i.e. an uncompressed, sample-by-sample, single-channel set of values: a longer version of 'dataArray' from (b).
I am wondering if I am missing a simple way around this problem.
Solutions I have seen tend to use step (b), poll regularly, and then reassemble a longer array; however, the timing seems a bit tricky.
I've also seen suggestions to use audio worklets; I might have to do that, but I would prefer a simpler solution!
Or again, if someone knows how to drive mediaRecorder to output the chunks array as a simple single-channel Float32 array, that would do the trick.
Or maybe I'm missing something simpler?
I have code showing the steps that have been successful and will upload it if anyone requests.
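In case it helps, here is a minimal TypeScript sketch of the polling idea, accumulating raw Float32 samples from the microphone into one long array as each processing block arrives. It uses the deprecated (but widely supported) ScriptProcessorNode; an AudioWorklet version follows the same accumulate-per-block pattern. The buffer size and function name are my own assumptions:

async function recordSamples(seconds: number): Promise<Float32Array> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1); // mono in, mono out

  const target = Math.ceil(seconds * ctx.sampleRate); // total samples wanted
  const samples = new Float32Array(target);
  let written = 0;

  return new Promise((resolve) => {
    processor.onaudioprocess = (e) => {
      const block = e.inputBuffer.getChannelData(0); // ~4096 raw Float32 samples
      const n = Math.min(block.length, target - written);
      samples.set(block.subarray(0, n), written);
      written += n;
      if (written >= target) {
        processor.disconnect();
        source.disconnect();
        stream.getTracks().forEach((t) => t.stop()); // release the microphone
        resolve(samples);
      }
    };
    source.connect(processor);
    processor.connect(ctx.destination); // some browsers require a connected output
  });
}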

RTSP Source Filter with GDCL MP4 Muxer incompatibility

I'm trying to use the GDCL MP4 Muxer with my RTSP Source Filter. They work fine together, except that after stopping the graph the muxer doesn't finalize the file and write the required tables to the end of the file via the file writer (some parts are written, starting from moov, but not the time table values). When I try another RTSP source filter (for which I don't have the source code), the table values are created by the GDCL MP4 Muxer.
But when I try Elecard's MP4 Muxer, it works fine with my RTSP Source Filter. So there is an incompatibility. I examined GDCL's source code but couldn't find what it was expecting from me. I already calculate and set timestamp values on the samples using the SetTime method, but GDCL still doesn't finalize the file. Is it caused by missing information, or a missing signal when the graph stops? What could the problem be? Any ideas?
One thing you should be aware of regarding Geraint's MP4 mux is that it checks incoming media samples for both a start and a stop time. You might have only .tStart/AM_SAMPLE_TIMEVALID, which still makes sense for video, but it would be a problem here.
So the samples have to have a stop time, or you need to fix this in the multiplexer code.
A typical symptom of the problem is that the generated files are empty or have zero duration.

x264 IDR access unit with a SPS and a PPS

I am trying to encode H.264 video such that, when split with Apple's HTTP Live Streaming media file segmenter tool, it will pass the media file validator. I am getting two warnings on the split MPEG-TS file:
WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS.
WARNING: 7 samples (17.073 %) do not have timestamps in track 257 (avc1).
After hours of research, I think the "IDR" warning relates to not having keyframes in the right places in the segmented MPEG-TS file, so in my ffmpeg command I set -keyint_min 1 to ensure keyframes occurred at every frame, but this didn't work.
Although it would be great to get an answer, if anyone can shed any light on what an "IDR access unit with a SPS and a PPS" is, or what the timestamps warning means, I would be very grateful. Thanks.
A fix can be found in this thread: https://devforums.apple.com/thread/45830?tstart=15
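For reference, one common way to address the IDR warning with ffmpeg/libx264 is to force an IDR keyframe at every segment boundary rather than at every frame (a sketch; adjust the 5-second value to your segment length, and the file names are placeholders):

ffmpeg -i input.mp4 -c:v libx264 -force_key_frames "expr:gte(t,n_forced*5)" -sc_threshold 0 -c:a aac -f mpegts output.ts

Note that -keyint_min 1 only lowers the minimum allowed keyframe interval; it does not guarantee that each 5-second cut point starts with an IDR frame carrying SPS/PPS, which is what the segmenter's warning is about. Disabling scene-cut keyframes with -sc_threshold 0 keeps the forced keyframes aligned with the segment boundaries.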

Need a tool for video processing

I have a 2 GB MPEG file of people running, jogging, walking, etc. I will use it in an image classification project, but I need to segment the video per person and per action.
For example:
There are 25 people in the video, who repeat these actions in order:
1st person
-runs
-walks
2nd person
-runs
-walks
and so on...
What I want is two different MPEG files for each person, such as:
such as;
firstperson_runs.mpeg
firstperson_waves.mpeg
So I need a tool to split the big file into these smaller files. The splitting will be based on time, such as:
pick t1: start of action
pick t2: end of action
create a new video from the big file for the interval t1 to t2
Of course, I will select the time intervals for each video.
OS: Windows XP Pro
If it can be done with MATLAB, can you describe how?
Any help?
I imagine there are a number of tools available to do this without MATLAB, but if you really want to use MATLAB I would check out these submissions on The MathWorks File Exchange:
Gerald Dalley's videoIO Toolbox for Matlab
Micah Richert's mmread
David Foti's mpgread and mpgwrite
EDIT:
As mentioned by M456, you can also use the built-in function MMREADER for creating a multimedia reader object for your movie file (and subsequently reading selected movie frames from it with the READ method). However, I don't know which version of MATLAB this function was introduced in. It is in versions 7.7 and 7.8 (R2008b and R2009a, respectively), but it is not in version 7.1.
MATLAB can do such video splitting operations. There are two built-in functions (aviread and mmreader) for reading video files. Both create objects containing the individual frames of the video. You can save these as separate frames or make a new video out of them using avifile.
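And if a non-MATLAB tool is acceptable, the pick-t1/pick-t2 workflow in the question maps directly onto a command-line cut, for example with ffmpeg (file names and times here are placeholders):

ffmpeg -ss 00:01:30 -i bigfile.mpg -t 00:00:20 -c copy firstperson_runs.mpg

With -c copy, ffmpeg cuts on keyframes without re-encoding, so it is fast but only roughly frame-accurate; drop -c copy and let it re-encode if the interval boundaries need to be exact.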