Using hls.js to play a live stream with a rolling live window, how do you keep WebVTT captions aligned?

I can play a live stream with the hls.js player using the playlists master.m3u8, video.m3u8 and metadata.m3u8. The video is created with ffmpeg's HLS output and uses a rolling live window with the args:
-hls_list_size 20 -hls_time 2 -hls_flags delete_segments
This creates video fragments starting at video0.ts through video19.ts, then starts removing the oldest fragments as it adds new ones. The video.m3u8 eventually looks like...
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:25
#EXTINF:2.002133,
video25.ts
#EXTINF:2.002133,
video26.ts
...
My metadata.m3u8 playlist looks similar, though I am creating it from a separate source. The video and metadata playlists are kept in sync and play fine from the start of my live stream.
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:25
#EXTINF:2.000
sub_25.vtt
#EXTINF:2.000
sub_26.vtt
...
The problem starts when I reload the player page. On a reload, the player loads the playlists and will play correctly at the current live point.
I see it load fragments around video45.ts and sub_45.vtt. This seems correct: the media sequence in video.m3u8 is at 25, and adding the 20-fragment playlist size puts the live position around the 45th fragment, which is about 90 seconds into the live stream.
However, the media time in the player shows 40 seconds. It seems to use only the number of fragments in the playlist to arrive at 40 seconds, even though the real live time is 90 seconds.
The resulting problem is that the 40-second media time is used to look up the text track cues, so the player shows the captions for the 40-second mark, not the 90-second mark where the video actually is.
Is there a way to get the player to reflect the 'real' time, despite the rolling live window, so that the captions (which are loaded correctly) display at the correct time?
Or is rolling-window live playback simply not going to work with WebVTT subtitles?
If I disable the rolling window, I can reload the live stream many times; the 'full' live time loads and the captions line up OK.
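For reference, my player setup is just the standard hls.js bootstrap, roughly the sketch below (the element id, playlist URL and logging are illustrative placeholders, not my exact code); the FRAG_CHANGED logging is one way to see the fragment number against the media time I describe above.

// Minimal hls.js setup (sketch; element id and URL are illustrative).
var video = document.getElementById('video');
if (Hls.isSupported()) {
  var hls = new Hls();
  hls.loadSource('https://example.com/live/master.m3u8'); // master references video.m3u8 and metadata.m3u8
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, function () {
    video.play();
  });
  // Log which fragment is playing versus the player's media time,
  // to see the ~90s live position reported as ~40s after a reload.
  hls.on(Hls.Events.FRAG_CHANGED, function (event, data) {
    console.log('fragment', data.frag.sn, 'media time', video.currentTime);
  });
}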

Related

How to get the duration of a specific audio in AudioSource with just_audio in Flutter?

I am building a project that plays audio with just_audio.
I have a list of audio items put into an AudioSource, and I need to create a control dash (a play button and a progress bar) for each item in the list, instead of using a common one for the whole playlist as usual.
But I have no idea how to get the duration (or a duration stream) for each item in the list, and how to make it so that when I click the play button of a specific item, only that item is played.
In the picture, I can only get the duration of the current state, using player.durationStream, and when I click the play button, only the current audio of the sequenceState is played.
Please help, thanks a lot!
This feature isn't yet supported, but there is an open feature request for it:
https://github.com/ryanheise/just_audio/issues/141
However, this may not be what you actually want. Be aware that querying the duration directly from the media file can be inefficient in some cases, and so if you actually know the duration in advance, it may be better for the app to maintain its own local database of metadata which includes the durations:
If the audio is coming from a podcast, note that the podcast feed should report a medium-fidelity duration for each item in the XML file, measured in seconds(*).
If the audio was recorded by your app, you could save the duration metadata into your database at the same time the recording is made.
If the audio is on the device, you can query it using flutter_audio_query.
If the audio is an asset packaged with the app, then the durations are known by implication and can also be packaged with the app (i.e. hard coded).
(*) If the podcast feed omitted the duration field, you can still query it by extracting just enough of the audio file to read its duration and then disposing of the temporary player:
final disposablePlayer = AudioPlayer(); // just_audio's player class
// setAudioSource returns the source's duration once its metadata has loaded.
final duration = await disposablePlayer.setAudioSource(...);
await disposablePlayer.dispose(); // release the temporary player

iPhone HTML5 video - how to start from a different time

What is the correct way to begin playback of a video from a specific time?
Currently, the approach we use is to check at an interval whether it's possible to seek via currentTime, and then seek. The problem with this is that when the video's fullscreen view pops up, it plays from the beginning for up to a second before seeking.
I've tried events such as loadedmetadata and canplay, but those seem to fire too early.
Added information:
It seems the very best I can do is to set a timer that tries to set currentTime repeatedly as soon as play() is called; however, this is not immediate enough. The video loads from the beginning and, after about a second depending on the device, jumps. This is a problem for me as it provides an unsatisfactory experience to the user.
It seems like there can be no solution which does better, but I'm trying to see if there is either:
a) something clever/undocumented that I have missed which allows you to either seek before loading, or otherwise indicate that the video needs to start not from 00:00 but from an arbitrary point
b) something clever which allows you to hide the video while it's playing and not display it until it has seeked (so you would see a longer delay on the phone before the fullscreen video window pops up, but it would start immediately where I need it to instead of seeking)
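For reference, the timer/interval workaround I describe above looks roughly like this (a sketch; the element id and start offset are placeholders):

// Poll until the video is seekable, then jump to the desired offset (sketch).
var video = document.getElementById('video');
var startOffset = 30; // seconds; illustrative value

var seekTimer = setInterval(function () {
  // readyState >= HAVE_METADATA means duration/seekable ranges are known
  if (video.readyState >= 1 && video.seekable.length > 0) {
    video.currentTime = startOffset;
    clearInterval(seekTimer);
  }
}, 100);
video.play();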
Do something like this:
var video = document.getElementById("video"); // grab the <video> element
video.currentTime = starttimeoffset;          // jump to the start offset (in seconds)
More information can be found on this page dedicated to video time offset how-tos.
For desktop Chrome/Safari, you can append #t=starttimeoffsetinseconds to your video src URL to make it start from a certain position.
For iOS devices, the best we can do is listen for the timeupdate event and do the seek in there. I guess this is the same as your original approach of using a timer.
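Something along these lines, as a rough sketch (the start offset and element id are illustrative):

// Desktop Chrome/Safari: start position via a media fragment in the src URL.
// video.src = 'movie.mp4#t=30';

// iOS: seek once, on the first timeupdate after playback begins.
var video = document.getElementById('video');
var startOffset = 30; // seconds; illustrative value
video.addEventListener('timeupdate', function onFirstTimeUpdate() {
  video.removeEventListener('timeupdate', onFirstTimeUpdate);
  video.currentTime = startOffset;
});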
-R

Record video in a cocos2d iOS game: low resolution for the video and high resolution for normal gameplay

I am using cocos2d's CCRenderTexture to record video of my game. But recording video at Retina display resolution costs a lot of CPU and memory, so I want to use a low resolution for the video recording while keeping the Retina resolution for normal gameplay. Is this possible?
I've tried [[CCDirector sharedDirector] enableRetinaDisplay:NO]; while recording video, but it doesn't seem to work; the generated output is totally wrong.
This is not feasible.
You'd have to render each frame twice, once on the screen, then onto the render texture. A serious drop in framerate is inevitable even if you lower the resolution of the render texture somehow.
The reason is simply that you'll also have to write each render texture as an image to flash memory. This is extremely slow. You'll also end up with a huge amount of data: if each (PNG/JPG) image file ends up being a reasonably small 50 KB, then one second of recorded data at 60 fps will consume 3 megabytes of flash memory, and one minute would be around 180 megabytes.
To record a demo of your game, most games follow the simple principle of recording the user input and then playing it back as if the user had issued these commands. This requires careful planning, no breaking changes when updating the app (or old demos will be invalidated), and no use of non-deterministic randomizers (i.e. randomizers seeded with the current time).
If you need to record a demo for making a trailer video, there's plenty of screengrabbing solutions around. Some even specialize in grabbing iPhone video, either from the device (usually requires a source code/library component) or from the Simulator.
You should check out the Kamcord SDK for recording gameplay. See http://kamcord.com/
Kamcord has a built-in gameplay video and audio recording technology for iOS. It allows you, the game developer, to capture gameplay videos with an API. Your users can then replay and share these gameplay videos via YouTube, Facebook, Twitter, and email.

Playing a real time video stream from iPhone camera on a 20 second delay

I am trying to see if it is possible to record video from the iPhone's camera and write it to a file, and then have the video start playing on the screen a set time later. This all needs to happen continuously. For example, I want the video on the screen to always be 20 seconds behind what the camera is recording.
Some background:
I have a friend who is a coach and would like for his players to be able to see their last play. This could be accomplished by a feed going to a TV from an iPad always 20 seconds behind what is recorded. This needs to continually run until practice is over. (I would connect the iPad to the TV either with a cable or AirPlay to an Apple TV). The video would never need to be saved and should just be discarded after playing.
Is this even possible with the APIs AVFoundation offers? Will the iPhone let you write to a file and read from a file at the same time to accomplish this? Any other better way to accomplish this?
Thanks for your time.
Instead of writing to a file, how about saving your frames in a circular buffer big enough to hold X seconds of video?
The way I would start to do this would be to look at what's provided in AVCaptureVideoDataOutput and its delegate methods (where you can get the frame data from).
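To illustrate the buffer logic only, here is a sketch in plain JavaScript (this is not AVFoundation code; in the real app you would store the sample buffers delivered by the AVCaptureVideoDataOutput delegate instead of generic "frame" objects):

// Ring buffer holding the last N seconds of frames; once full, the slot about to be
// overwritten holds the frame captured N seconds ago, i.e. the one to display now.
function FrameBuffer(seconds, fps) {
  this.capacity = seconds * fps;
  this.frames = new Array(this.capacity);
  this.writeIndex = 0;
  this.count = 0;
}
FrameBuffer.prototype.push = function (frame) {
  this.frames[this.writeIndex] = frame;                    // overwrite the oldest slot
  this.writeIndex = (this.writeIndex + 1) % this.capacity;
  if (this.count < this.capacity) this.count++;
};
FrameBuffer.prototype.oldest = function () {
  if (this.count < this.capacity) return null;             // delay not built up yet
  return this.frames[this.writeIndex];
};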

Why does my MPMoviePlayerController suddenly change load state to unknown?

My app is a digital magazine consisting of many pages. Some pages have video embedded in them.
I am using MPMoviePlayerViewController for the video playback.
The problem is that once I get to any page where the next page also contains a video, the video appears briefly and then blanks out.
I've switched out video files, so I know it isn't the files themselves.
When the page loads offscreen, ready to be scrolled into view, the load state changes to 3 = MPMovieLoadStatePlayable + MPMovieLoadStatePlaythroughOK (all good). Then, when a second MPMoviePlayerController is loaded, the load state of the original suddenly changes to 0 (MPMovieLoadStateUnknown).
In the docs for MPMoviePlayerController it says:
Note: Although you can create multiple MPMoviePlayerController objects and present their views in your interface, only one movie player at a time can play its movie.
I read this to mean I can't play more than one at a time, but does it also mean I can't even load more than one movie at a time?
It turns out you can't even load more than one MPMoviePlayerController at a time. You can create the objects, but if you supply a URL to more than one, one of them will get unloaded.