How to use Media Segmenter to split video? - iPhone

I have read many documents but am still very confused about HTTP Live Streaming, though I keep working toward a solution. I have converted my video to .ts format with ffmpeg.
Now I know that I have to split my video and create a playlist using mediasegmenter.
But I don't know where mediasegmenter is or how to use it to split the video.
I am very new to this, so sorry for the silly question.
Any help would be appreciated!
Thanks in advance!

Download 35703_streamingtools_beta.dmg, go to http://connect.apple.com/ and search for "HTTP Live Streaming", or download the tools from https://developer.apple.com/streaming/. Usage:
mediafilesegmenter -t 10 myvideo-iphone.ts
This will generate one .ts file for every 10 seconds of the video, plus a .m3u8 playlist file pointing to all of them.
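For reference, the generated .m3u8 index is just a plain-text playlist along these lines (the segment names and durations here are illustrative, not the tool's exact output):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
fileSequence0.ts
#EXTINF:10,
fileSequence1.ts
#EXT-X-ENDLIST
```

Each #EXTINF line gives the duration of the segment named on the following line, and #EXT-X-ENDLIST marks the playlist as complete (i.e. video-on-demand rather than live).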

If you use FFmpeg, it's very easy to split files with it; you don't need Media Segmenter.
Simply write something like this:
ffmpeg.exe -i YourFile.mp4 -ss 00:10:00 -t 00:05:00 OutFile.mp4
where -ss 00:10:00 is the time offset and -t 00:05:00 is the duration of OutFile.mp4.
This creates OutFile.mp4, which contains the 5-minute portion of YourFile.mp4
from 00:10:00 to 00:15:00.
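If you need a series of consecutive chunks rather than a single slice, the offsets can be generated in a loop. A minimal dry-run sketch (it only prints the ffmpeg commands instead of running them; the 15-minute source length and 5-minute chunk size are assumptions):

```shell
#!/bin/sh
# Print one ffmpeg split command per chunk (dry run; nothing is encoded).
split_cmds() {
    total=$1   # total source length in seconds (assumed known in advance)
    chunk=$2   # chunk length in seconds
    src=$3
    start=0
    i=0
    # Same HH:MM:SS duration for every chunk
    dur=$(printf '%02d:%02d:%02d' $((chunk / 3600)) $((chunk % 3600 / 60)) $((chunk % 60)))
    while [ "$start" -lt "$total" ]; do
        # Render the -ss offset of this chunk as HH:MM:SS
        offset=$(printf '%02d:%02d:%02d' $((start / 3600)) $((start % 3600 / 60)) $((start % 60)))
        echo "ffmpeg -i $src -ss $offset -t $dur Out$i.mp4"
        start=$((start + chunk))
        i=$((i + 1))
    done
}
split_cmds 900 300 YourFile.mp4
```

Piping the printed lines into sh would actually run the splits once you are happy with them.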
You can also create an .ASX playlist, which can reference streams and is very simple.
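For illustration, an .ASX playlist is just a small XML file; the titles and URLs below are placeholders:

```
<asx version="3.0">
  <title>My playlist</title>
  <entry>
    <title>Part 1</title>
    <ref href="http://example.com/OutFile1.mp4" />
  </entry>
  <entry>
    <title>Part 2</title>
    <ref href="http://example.com/OutFile2.mp4" />
  </entry>
</asx>
```

Each entry element points at one clip or stream, and compatible players walk the entries in order.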

Related

Is there a way to include a transition video when using ffmpeg's select filter?

I wanted to take some segments of an input video and make a highlight reel automatically using ffmpeg. I originally experimented with trimming and concatenating, but it was difficult because the audio kept falling out of sync, and I also have a variable number of highlights.
I discovered from the top comment here that the 'select' filter is very powerful. I have a Python program that inserts all the parts I want selected into the command, and when I run it in the terminal it works perfectly. The only issue is that I want a quick transition video to play in between each of these highlights. Is this not possible with select? Do I have to return to using trim and concat? Thank you!
Edit: for reference, here is an example ffmpeg command I am running
ffmpeg -y -i video.mp4 -i audio.mp4 -vf "select='between(t,56,69)+between(t,60,135)+between(t,73,132)+between(t,152,163)+between(t,251,278)+between(t,600,700)+between(t,774,872)', setpts=N/FRAME_RATE/TB " -af "aselect='between(t,56,69)+between(t,60,135)+between(t,73,132)+between(t,152,163)+between(t,251,278)+between(t,600,700)+between(t,774,872)',asetpts=N/SR/TB" output.mp4

MP4Box MP4 concatenation not working

I download lectures in mp4 format from Udacity, but they're often broken down into 2-5 minute chunks. I'd like to combine the videos for each lecture into one continuous stream, which I've had success with on Windows using AnyVideo Converter. I'm trying to do the same thing on Ubuntu 15, and most of my web search results suggest MP4Box, whose documentation and all the online examples I can find offer the following syntax:
MP4Box -cat vid1.mp4 -cat vid2.mp4 -cat vid3.mp4 -new combinedfile.mp4
This creates a new file with working audio, but the video doesn't work. When I open it with Ubuntu's native video player, I get the error "No valid frames decoded before end of stream." When I open it with VLC, I get the error "Codec not supported: VLC could not decode the format 'avc3' (No description for this codec)." I've tried using the -keepsys switch as well, but I get the same results.
All the documentation and online discussion makes it sound as though what I'm trying to do should be really simple, but I can't find information relevant to the specific errors I'm getting. What am I missing?
Use the -force-cat option.
For example,
MP4Box -force-cat -add in1.mp4 -cat in2.mp4 -cat in3.mp4 ... -new out.mp4
From the MP4Box documentation:
-force-cat
skips media configuration check when concatenating file.
The presence of 'avc3' indicates that these videos are encoded with H.264/AVC. There are several modes for concatenating such streams. If the video streams have compatible encoder configurations (frame size, etc.), only one configuration description is used in the file (signaled by 'avc1'). If the configurations are not fully compatible, MP4Box uses 'inband' storage of those configurations (signaled by 'avc3'). The other way would be to use multiple sample description entries (stream configurations), but that is not well supported by players and not yet possible with MP4Box. There is no other way unless you want to re-encode your videos. On Ubuntu, you should be able to play 'avc3' streams with the player that ships with MP4Box: MP4Client.
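If 'avc3' playback is a dead end for your players, the remaining option mentioned above is re-encoding everything to one common configuration first, so MP4Box can use a single 'avc1' description. A dry-run sketch (it only prints the commands; the file names, resolution, and codec settings are assumptions you would adjust):

```shell
#!/bin/sh
# Print, without running, one re-encode command per input so that every
# file ends up with the same H.264 configuration before concatenation.
normalize_cmds() {
    for f in "$@"; do
        echo "ffmpeg -i $f -c:v libx264 -vf scale=1280:720 -c:a aac norm_$f"
    done
}
normalize_cmds vid1.mp4 vid2.mp4 vid3.mp4
```

After re-encoding, the plain MP4Box -cat invocation from the question should produce an 'avc1' file that standard players accept.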

How do I strip initial offsets from OGG files?

=== BACKGROUND ===
Some time ago I ripped a lot of music from an internet radio station. Unfortunately something seems to have gone wrong: the length of most files is displayed as several hours, but they start playing at the correct position.
Example: if a file is really 3 minutes long and is displayed as 3 hours, playback starts at 2 hours and 57 minutes.
Before I upgraded my system, gstreamer was an older version and behaved as described above, so I didn't pay too much attention. Now I have a new version of gstreamer which cannot handle these files correctly: it "plays" the whole initial offset.
=== /BACKGROUND ===
So here is my question: how can I modify an OGG/Vorbis file to get rid of the useless initial offset? Although I tried several tag-editing programs, none of them would let me edit these values. (Interestingly enough, easytag displays both times, but writes the wrong one...)
I finally found a solution! Although it wasn't quite what I expected...
After trying several other options I ended up with the following code:
#!/bin/sh
# Re-encode every OGG file under the given directory into a parallel
# "<dirname>.new" directory, then copy the Vorbis comments across.
cd "${1}" || exit 1
OUTDIR="../`basename "${1}"`.new"
# Set IFS to a newline only, so paths containing spaces survive the read loop
IFS="
"
find . -wholename '*.ogg' | while read filepath;
do
    # Create the destination directory
    mkdir -p "${OUTDIR}/`dirname "${filepath}"`"
    # Re-encode OGG to OGG; rewriting the stream drops the bogus offset
    avconv -i "${filepath}" -f ogg -acodec libvorbis -vn "${OUTDIR}/${filepath}"
    # Copy the tags from the original file to the re-encoded one
    vorbiscomment -el "${filepath}" | vorbiscomment -ew "${OUTDIR}/${filepath}"
done
This code recursively re-encodes all OGG files and then copies all Vorbis comments. It's not a very efficient solution, but it works nevertheless.
As for what the problem was: I suspect it has something to do with this output from ogginfo:
...
New logical stream (#1, serial: 74a4ca90): type vorbis
WARNING: Vorbis stream 1 does not have headers correctly framed. Terminal header page contains additional packets or has non-zero granulepos
Vorbis headers parsed for stream 1, information follows...
Version: 0
Vendor: Xiph.Org libVorbis I 20101101 (Schaufenugget)
...
Which disappears after reencoding the file...
At the rate at which I'm currently encoding, it will probably take several hours until my whole media library is completely re-encoded... but at least I verified with several samples that it works :)

FFMPEG RTMP streaming to FMS without stop?

I have some .mov files I want to stream to Flash Media Server. I have already tried streaming a single .mov with an FFmpeg command in the terminal, and it works; FMS can display what I'm streaming live.
ffmpeg -re -i file1.mov -vcodec libx264 -f flv rtmp://localhost/livepkgr/livestream
Now I want to stream multiple files.
I tried using the above command on one file after another,
but it seems Flash Media Server stops the stream when file1 finishes,
then starts a new stream for file2.
That makes the player stop when file1 ends, and the page has to be refreshed to continue with file2.
I am invoking the FFmpeg command from a C program on Linux. Is there any way to prevent FMS from stopping when I switch the file source in FFmpeg? Or can FFmpeg deliver a continuous stream from multiple source files without stopping when one file finishes?
Remux your source files to TS or MPEG or another "concatenable" container. Then you can either use ffmpeg's concat protocol or just "cat" the files together yourself.
I found something like this that may be useful for you:
I managed to stream a static playlist of videos by using one named pipe per video (e.g. vid1.mp4 -> pipe1, vid2.mp4 -> pipe2, etc.). Then I write them into a single named pipe called "stream" with cat pipe1 pipe2 pipe3 > stream, and use the "stream" pipe as the input to FFmpeg to publish my stream.
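The named-pipe idea above can be sketched in a few lines of shell. Here plain echo writers stand in for the per-file ffmpeg remux processes so the sketch is self-contained; in the real setup each writer would be something like ffmpeg -re -i vid1.mp4 -c copy -f mpegts pipe1:

```shell
#!/bin/sh
# Concatenate the output of several writers through named pipes into one
# continuous stream that a single reader (here a file; in the real setup
# the publishing ffmpeg process) consumes without interruption.
dir=$(mktemp -d)
mkfifo "$dir/pipe1" "$dir/pipe2"
# Stand-in writers; each blocks until cat opens its pipe for reading.
echo "part one" > "$dir/pipe1" &
echo "part two" > "$dir/pipe2" &
# cat drains the pipes in order, producing one uninterrupted stream.
cat "$dir/pipe1" "$dir/pipe2" > "$dir/stream"
wait
```

Because the reader sees a single uninterrupted input, the published RTMP stream never ends between source files.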

Can we segment more than one movie file using mediafilesegmenter tool - HTTP Live Streaming

Is there any way to segment more than one movie file using mediafilesegmenter? I want to create one prog_index.m3u8 file from multiple movie files.
If mediafilesegmenter doesn't support this, can anyone suggest an alternate approach?
Thanks in advance to all the viewers who take the time to look into this query.
Thanks
Sudheer
I couldn't find any way to segment more than one video file using mediafilesegmenter, but I found a solution to my issue.
Since mediafilesegmenter generates a prog_index.m3u8 file by default after segmenting a movie file, I create a new index file with the contents of that prog_index.m3u8 appended, and update the new index file whenever another movie file is segmented. This solved my issue.
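A sketch of that appending idea in shell. The two input playlists here are tiny hand-written stand-ins for real mediafilesegmenter output, and note that a #EXT-X-DISCONTINUITY tag is generally needed at the join so players reset decoder state between the two sources:

```shell
#!/bin/sh
# Merge two HLS index files into one combined playlist.
# Create two illustrative input playlists (stand-ins, not real tool output).
cat > a.m3u8 <<'EOF'
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10,
a0.ts
#EXT-X-ENDLIST
EOF
cat > b.m3u8 <<'EOF'
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:10,
b0.ts
#EXT-X-ENDLIST
EOF
{
  grep -v '^#EXT-X-ENDLIST' a.m3u8   # first playlist, minus its end tag
  echo '#EXT-X-DISCONTINUITY'        # decoder reset at the join
  grep -E '^#EXTINF|^[^#]' b.m3u8    # only segment entries from the second
  echo '#EXT-X-ENDLIST'              # single end tag for the merged list
} > combined.m3u8
```

The resulting combined.m3u8 lists both files' segments in order with a discontinuity marker between them.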
mediafilesegmenter is not meant for combining more than one file; it's used for segmenting video files.
If you want to combine multiple files, you can use ffmpeg. It's a very simple and efficient tool for performing various operations on video files.
From ffmpeg documentation,
Create a file mylist.txt with all the files you want to have concatenated in the following form:
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
Note that these can be either relative or absolute paths. Then you can stream copy or re-encode your files:
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output
The -safe 0 above is not required if the paths are relative.
You can find more on concatenation in the ffmpeg documentation.
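Writing mylist.txt by hand gets tedious with many chunks; a small helper can emit it in the format the concat demuxer expects (the part*.mp4 names are placeholders):

```shell
#!/bin/sh
# Emit one "file '...'" line per argument, in the format the ffmpeg
# concat demuxer expects, and save the result as mylist.txt.
make_concat_list() {
    for f in "$@"; do
        printf "file '%s'\n" "$f"
    done
}
make_concat_list part1.mp4 part2.mp4 part3.mp4 > mylist.txt
```

Then run the ffmpeg -f concat command shown above against the generated mylist.txt.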
Once you have combined the files, segment the result using mediafilesegmenter; you don't need to manually append index files into prog_index.m3u8.