Generating 64 kbps audio-only MPEG-TS for the HTTP Live segmenter to meet the 64 kbps audio-only requirement - iPhone

I am trying to convert our mp4 files into MPEG-TS and segment them into .ts files for my iPhone app to play. I am using Carson McDonald's HTTP-Live-Video-Stream-Segmenter-and-Distributor to do that.
I got his stuff compiled and working correctly. I am currently trying to meet Apple's requirement that I provide a baseline 64 kbps audio-only stream in my m3u8 playlist.
Carson doesn't seem to have a profile for that.
I need to generate a 64 kbps audio-only stream from the mp4, turn it into MPEG-TS, and feed it to the segmenter to produce the .ts files. I am trying to find the right ffmpeg command that will validate without problems using Apple's mediastreamvalidator.
So far I have modified an existing encoding profile to try to reach 64 kbps total:
ffmpeg -er 4 -i %s -f mpegts -acodec libmp3lame -ar 22050 -ab 32k -s 240x180 -vcodec libx264 -b 16k -flags +loop+mv4 -cmp 256 -partitions +parti4x4+partp8x8+partb8x8 -subq 7 -trellis 1 -refs 5 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 64k -maxrate 16k -bufsize 16k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 4:3 -r 10 -g 30 -async 2 - | %s %s %s %s %s
but when I try to validate it using mediastreamvalidator, it reports errors after a few segments:
Playlist Validation: OK
Segments:
sample_cell_4x3_64k-00001.ts:
WARNING: Media segment exceeds target duration of 10.00 seconds by 1.30 seconds (segment duration is 11.30 seconds)
sample_cell_4x3_64k-00002.ts:
WARNING: Media segment exceeds target duration of 10.00 seconds by 1.40 seconds (segment duration is 11.40 seconds)
....
....
sample_cell_4x3_64k-00006.ts:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
sample_cell_4x3_64k-00007.ts:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
....
....
Average segment duration: 10.26 seconds
Average segment bitrate: 376797.92 bps
Average segment structural overhead: 349242.17 bps (92.69 %)
Is there some way I can generate this correctly with just audio totalling 64 kbps and turn it into MPEG-TS, ready to be segmented and validated correctly?
Am I approaching the problem the right way?

I don't remember all the details of Carson's Ruby scripts, but the first thing I would do to get an audio-only stream is to disable video processing (-vn). So something like this:
ffmpeg -er 4 -i %s -f mpegts -acodec libmp3lame -ar 22050 -ab 32k -vn - | %s %s %s %s %s
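If you also want to use the full 64 kbps budget rather than 32 kbps, a variant along these lines might work (a sketch, not something I've run against mediastreamvalidator; it assumes libfaac is available in your ffmpeg build, since AAC is what the other HLS profiles in this thread use, and it keeps Carson's trailing pipe into the segmenter):
ffmpeg -er 4 -i %s -vn -f mpegts -acodec libfaac -ac 2 -ar 44100 -ab 64k - | %s %s %s %s %s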

Related

Streaming from Icecast to Facebook Live with ffmpeg on Ubuntu 16.04

I have a webradio streamed by Liquidsoap+Icecast on a DigitalOcean droplet (Ubuntu 16.04), and I want to use ffmpeg to combine this audio stream with a simple JPEG image, turn it into a video stream, and send it to Facebook Live.
Facebook Live specifications:
Video Format:
We accept video in maximum 720p (1280 x 720) resolution, at 30 frames per second (or 1 key frame every 2 seconds). You must send an I-frame (keyframe) at least once every two seconds throughout the stream. Recommended max bit rate is 4000 Kbps. Titles must be less than 255 characters otherwise the stream will fail. The Live API accepts H264 encoded video and AAC encoded audio only.
Video Length:
240 minute maximum length, with the exception of continuous live (see above). 240 minute maximum length for preview streams (either through Live dialog or publisher tools). After 240 minutes, a new stream key must be generated.
Advanced Settings:
Pixel Aspect Ratio: Square. Frame Types: Progressive Scan. Audio Sample Rate: 44.1 KHz. Audio Bitrate: 128 Kbps stereo. Bitrate Encoding: CBR.
And the ffmpeg command I tried:
ffmpeg -loop 1 -i radio-background.jpg -thread_queue_size 20480 -i http://localhost:8000/radio -framerate 30 -r 30 -acodec aac -strict -2 -c:v libx264 -strict experimental -b:a 128k -pix_fmt yuvj444p -x264-params keyint=60 -b:v 256k -minrate 128k -maxrate 512k -bufsize 768k -f flv 'rtmp://rtmp-api.facebook.com:80/rtmp/<fb-streaming-key>'
This is actually working, as Facebook receives the live video and allows me to publish it. But I can't figure out why there is a lag almost every 2 or 3 seconds. I asked different people to watch the test video, and everyone gets the same problem: every 2 or 3 seconds the playback "freezes" for half a second and seems to buffer; I can even see the loading icon spinning on the screen.
I tried different combinations of values for the following options: -thread_queue_size / -b:v / -minrate / -maxrate / -bufsize. Nothing seems to produce any change.
Video streaming is new to me, and I'm not really comfortable with the options listed above, so I think I'm missing something here...
Also, note that the Icecast audio stream works perfectly, and according to DigitalOcean graphs the server is not overloaded. So I think my ffmpeg command is wrong.
What ffmpeg parameters would be working for that case?
Specify a frame rate for the image; this goes before the input item.
-r 30 -loop 1 -i radio-background.jpg
If your radio stream is already AAC you can just stream copy; there is no need to re-encode the audio. You can use -c:a copy.
-c:a copy
If you still want to re-encode to AAC, you should switch to libfdk_aac. ffmpeg uses a 128k audio bitrate by default, so there is no need to specify -b:a.
-c:a libfdk_aac
ffmpeg will use the input frame rate of the first input for the output by default, so you don't need to specify any more frame rates. (You have the output frame rate specified twice; -framerate 30 and -r 30 are the same.)
Use the ultrafast preset for better CPU performance, plus tune and pixel format. You can also use -g for the keyframe interval.
-c:v h264 -preset ultrafast -tune stillimage -pix_fmt yuvj444p -g 60
Set the profile, profile level, and B-frames.
-profile:v high444 -level 4.2
Use either -b:v or -minrate/-maxrate/-bufsize, but not both.
-b:v 768k
And out we go:
-f flv rtmp://rtmp-api.facebook.com:80/rtmp/streamkey
Now to put it all together:
ffmpeg -r 30 -loop 1 -i radio-background.jpg \
-i http://localhost:port/mount -c:a libfdk_aac -c:v h264 -b:v 768k \
-preset ultrafast -tune stillimage -pix_fmt yuvj444p -g 60 \
-profile:v high444 -level 4.2 -f flv rtmp://rtmp-api.facebook.com:80/rtmp/streamkey

FFmpeg: Videos converted from FLV to MP4 do not play on iPod but work on iPhone

I used the command below to convert videos from FLV and M4V to MP4.
ffmpeg -y -i video_1336406262.flv -vcodec libx264 -vpre slow -vpre ipod640 -b 250k -bt 50k -acodec libfaac -ac 2 -ar 48000 -ab 64k -s 480x320 video_1336406262.mp4
The videos converted from M4V to MP4 play fine on both iPhone and iPod, but the videos converted from FLV to MP4 do not work on iPod, although they do on iPhone.
In the video area of the HTML5 page, the iPod does not even show the play symbol.
Could someone help here?
I am using the same command to convert from both FLV and M4V to MP4.
Thanks
I would recommend using HandBrakeCLI in order to convert videos to MP4.
Handbrake has a few built-in presets that allow precise compatibility targeting, see https://trac.handbrake.fr/wiki/BuiltInPresets
The built-in ipod preset has a few differences with the format you require, so your invocation can be translated to a HandBrake call in the following way:
HandBrakeCLI -i video_1336406262.flv -e x264 -a 1 -E faac -6 dpl2 -R Auto -D 0.0 -f mp4 -I -m -x level=30:bframes=0:weightp=0:cabac=0:ref=1:vbv-maxrate=768:vbv-bufsize=2000:analyse=all:me=umh:no-fast-pskip=1:subme=6:8x8dct=0:trellis=0 -b 250 -B 64 -R 48 -X 480 -w 480 -l 320 -2 -o video_1336406262.mp4
I can't certify this is exactly what you need, but that should be close enough.
The conversion m4v -> mp4 makes no sense. m4v is just another extension for mp4.
MP4 is not a video format but a container for audio/video/subtitles/metadata.
In my opinion the problem comes from the profile. Depending on the iPhone generation, not all h264 profiles are supported.
Try adding -coder 0 to your command and it should work.
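That is, something like your original command with just -coder 0 added (untested on my side; everything else is kept exactly as you had it):
ffmpeg -y -i video_1336406262.flv -vcodec libx264 -vpre slow -vpre ipod640 -coder 0 -b 250k -bt 50k -acodec libfaac -ac 2 -ar 48000 -ab 64k -s 480x320 video_1336406262.mp4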
I think you got your command from here, and I noticed there is another command that should do what you want:
iPod-iPhone 640 width, without preset:
ffmpeg -i INPUT -s 640x480 -r 30000/1001 -b 200k -bt 240k -vcodec libx264 -coder 0 -bf 0 -refs 1 -flags2 -wpred-dct8x8 -level 30 -maxrate 10M -bufsize 10M -acodec libfaac -ac 2 -ar 48000 -ab 192k output.mp4
In my experience, if you want a good quality/size ratio, you should prefer 2-pass encoding:
ffmpeg -y -i input -r 30000/1001 -s 480x272 -aspect 480:272 -vcodec libx264 -b 512k -bt 1024k -maxrate 4M -flags +loop -cmp +chroma -me_range 16 -g 300 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -rc_eq "blurCplx^(1-qComp)" -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -coder 0 -refs 1 -bufsize 4M -level 21 -partitions parti4x4+partp8x8+partb8x8 -subq 5 -f mp4 -pass 1 -an -title "Title" output.mp4
The encoding process is longer but worth the time!
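Note that the command above is only the first pass (-pass 1 -an, no audio); a second pass along these lines would then produce the final file (the audio settings here are my own assumption, so adjust the AAC bitrate to taste):
ffmpeg -y -i input -r 30000/1001 -s 480x272 -aspect 480:272 -vcodec libx264 -b 512k -bt 1024k -maxrate 4M -flags +loop -cmp +chroma -me_range 16 -g 300 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -rc_eq "blurCplx^(1-qComp)" -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -coder 0 -refs 1 -bufsize 4M -level 21 -partitions parti4x4+partp8x8+partb8x8 -subq 5 -f mp4 -pass 2 -acodec libfaac -ab 128k -title "Title" output.mp4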
One last thing: instead of using ffmpeg directly, I prefer to use MEncoder, which is a wrapper for ffmpeg (more codec support). A nice GUI for MEncoder is MeGUI for Windows; it really makes the encoding process easier!

MPEG-TS Segments HTTP Live Streaming

I'm trying to interleave MPEG-TS segments but failing. One set of segments was captured using the laptop's built-in camera, then encoded using FFmpeg with the following command:
ffmpeg -er 4 -y -f video4linux2 -s 640x480 -r 30 -i %s -isync -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
And the other set comes from an AVI file that was encoded using the following command:
ffmpeg -er 4 -y -f avi -s 640x480 -r 30 -i ./DSCF2021.AVI -vbsf dump_extra -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
Then the output is segmented into ts segments using an open source segmenter.
If both sets come from the same source (both from the camera) they work fine. However, in this case the second set of segments freezes: time passes, but the video does not move.
So I think it's an encoding problem. My question is, how should I change the ffmpeg command for this to work?
By interleave I mean having a playlist with the first set of segments and another playlist with the other set, and having the client play one and then the other (HTTP Live Streaming).
The ffprobe output of one of the first set of segments:
Input #0, mpegts, from 'live1.ts':
Duration: 00:00:09.76, start: 1.400000, bitrate: 281 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 29.92 fps, 29.92 tbr, 90k tbn, 59.83 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 111 kb/s
The ffprobe output of one of the second set of segments:
Input #0, mpegts, from 'ad1.ts':
Duration: 00:00:09.64, start: 1.400000, bitrate: 578 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 22 kb/s
Thank you,
I have seen quite a few questions on this subject - see:
HTTP Live Streaming MPEG TS segment and
Update .m3u8 playlist file for HTTP Live streaming?
I am not sure of the exact problem, but I think most people complain that when you mix content from two different sources there is freezing.
I think this situation may arise if the PTS and/or PCR is discontinuous and the player is not recognizing this or flushing it. You could identify the sequence of timestamps and see whether fixing that solves the problem.
Also, see section 3.3.11 of https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07: EXT-X-DISCONTINUITY
The EXT-X-DISCONTINUITY tag indicates an encoding discontinuity between the media segment that follows it and the one that preceded it. The set of characteristics that MAY change is:
o file format
o number and type of tracks
o encoding parameters
o encoding sequence
o timestamp sequence
So a discontinuity tag in the playlist file might help if the problem is any of the above; see the example playlist below. Please try some of this and post more details. I guess this will help a lot of other people as well.
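For example, a combined playlist that switches from your camera segments to the AVI-sourced segments could carry the tag like this (segment names taken from your ffprobe output; the 10-second durations are just placeholders):
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
live1.ts
#EXT-X-DISCONTINUITY
#EXTINF:10,
ad1.ts
#EXT-X-ENDLIST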

What H.264 format loads on Android AND iOS?

Theoretically both iOS and Android will play H.264 files, but I can't figure out a setting to encode them so they actually work cross-platform. Does anybody know how to encode one file that works on both Android and iOS?
P.S. I know all about HTML5 video and fallback sources; I just don't want to encode and host a new video for every device that comes down the pike.
Here's the ffmpeg command line we use to transcode to MPEG-4 h.264 in our production environment. We've tested the output on several Android devices, as well as iOS. You can use this as a starting point, just tweaking things like frame size/frame rate and qfactor.
ffmpeg -y
-i #{input_file}
-s 432x320
-b 384k
-vcodec libx264
-flags +loop+mv4
-cmp 256
-partitions +parti4x4+parti8x8+partp4x4+partp8x8
-subq 6
-trellis 0
-refs 5
-bf 0
-flags2 +mixed_refs
-coder 0
-me_range 16
-g 250
-keyint_min 25
-sc_threshold 40
-i_qfactor 0.71
-qmin 10 -qmax 51
-qdiff 4
-acodec libfaac
-ac 1
-ar 16000
-r 13
-ab 32000
-aspect 3:2
#{output_file}
Some of the important options affecting Android compatibility are:
-coder 0 Uses CAVLC rather than CABAC entropy encoding (CABAC is not supported on Android)
-trellis 0 Should be shut off, requires CABAC
-bf 0 Turns off B-frames, not supported on Android or other h.264 Baseline Profile devices
-subq 6 Determines what algorithms are used for subpixel motion searching. 7 applies to B-frames, not supported on Android.
-refs 5 Determines how many frames are referenced prior to the current frame. Increasing this number could affect compatibility
After we encode our video with this ffmpeg recipe, we also pass the video through qt-faststart. This step rechunks the video for streaming. We stream it over HTTP to an embedded VideoView within our Android app. No problems streaming to any Android device we're aware of.
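In case it helps, the qt-faststart invocation is just input then output (the file names here are placeholders):
qt-faststart video.mp4 video-streamable.mp4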
Update 2013-06-17: I just wanted to add a note that it's best to stick with "baseline" profile for H.264 encoding for maximum compatibility across all Android devices. The above command line doesn't explicitly specify an H.264 profile, but ffmpeg does have a -profile command line flag that is useful if you are using its presets. You probably shouldn't mess with -profile. I have encoded videos for my ASUS Transformer 300 tablet (Android 4.2) using "main" rather than "baseline" profile (via Handbrake). The "main" profile gave problems with audio getting out of sync with video on playback.
I used this to make an Android and iOS app with embedded videos. The videos played in both versions. (Android example) (iOS example)
Supplemental answer
This answer is a supplement to the accepted answer explaining some of the parameters.
ffmpeg
-y # Overwrite output files without asking.
-i input_filename # input file name
-s 432x320 # size of output file
-b:v 384k # bitrate for video
-vcodec libx264 # use H.264 video codec
-flags +loop+mv4 # use loop filter and four motion vector by macroblock
-cmp 256 # ??? Full pel motion estimation compare function
-partitions +parti4x4+parti8x8+partp4x4+partp8x8 #???
-subq 6 # determines algorithms for subpixel motion searching and partition decision
-trellis 0 # optimal rounding choices
-refs 5 # number of frames referenced prior to current frame
-bf 0 # turn off B-frames, something to do with H.264 and Baseline Profile
-flags2 +mixed_refs # ??? gave me an error so I just deleted it
-coder 0 # turn off the CABAC entropy encoder
-me_range 16 # max range of the motion search
-g 250 # GOP length (250 is the recommended default)
-keyint_min 25 # Minimum GOP length (25 is the recommended default)
-sc_threshold 40 # adjusts sensitivity of x264's scenecut detection (default is 40)
-i_qfactor 0.71 # Qscale difference between I-frames and P-frames (0.71 is the recommended default)
-qmin 10 -qmax 51 # min and max quantizer (10 and 51 are the recommended defaults)
-qdiff 4 # max QP step (4 is recommended default)
-c:a aac # Set the audio codec to use AAC
-ac 1 # number of audio channels
-ar 16000 # audio sampling frequency
-r 13 # frames per second
-ab 32000 # audio bitrate
-aspect 3:2 # display aspect ratio
output_filename # name of the output file
Feel free to edit this if you can fill in some of the details I wasn't sure about.
Here it is again in a cut-and-paste format. (I also had to add the -strict -2 parameter to get aac to work on my computer.)
ffmpeg -y -i input_file.avi -s 432x320 -b:v 384k -vcodec libx264 -flags +loop+mv4 -cmp 256 -partitions +parti4x4+parti8x8+partp4x4+partp8x8 -subq 6 -trellis 0 -refs 5 -bf 0 -coder 0 -me_range 16 -g 250 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -qmin 10 -qmax 51 -qdiff 4 -c:a aac -ac 1 -ar 16000 -r 13 -ab 32000 -aspect 3:2 -strict -2 output_file.mp4
Further Study
Most of this information I found at the following links:
ffmpeg Documentation
x264 FFmpeg Options Guide
FFMPEG An Intermediate Guide/Flags
See also
Android VideoView example
See Android Supported Media Formats, which states that h.264 is only supported in Android 3.0+. Earlier versions of Android support h.263. EDIT: As mportuesisf mentions below, I misinterpreted the linked table. Ignore this answer.

iPhone HTTP Streaming .m3u8 and .ts files - how to create using ffmpeg

I'm trying to get apple-validated http media streams using ffmpeg and am getting errors. Here are some error examples:
WARNING: Playlist Content-Type is 'application/x-mpegurl', but should be one of 'application/vnd.apple.mpegurl', 'audio/x-mpegurl' or 'audio/mpegurl'.
WARNING: 258 samples (88.966 %) do not have timestamps in track 256 (avc1). 4: us2-1.ts
WARNING: Media segment duration outside of expected duration by 47.733 % (5.23 vs. 10.00 seconds, limit is 20 %). 40: us2-19.ts
Average segment duration: 10.16 seconds
Average segment bitrate: 320.12 kbit/s
Average segment structural overhead: 175.89 kbit/s (54.94 %)
Video codec: avc1
Video resolution: 320x320 pixels
Video frame rate: 29.72, 29.78, 29.82, 30.00, 29.64 fps
Average video bitrate: 100.66 kbit/s
H.264 profile: Baseline
H.264 level: 3.0
Audio codec: aac
Audio sample rate: 48000 Hz
Average audio bitrate: 43.57 kbit/s
Here is the end file I've been submitting: http://files.chesscomfiles.com/images_users/using/us2.m3u8
Here is the file I used to create this: http://files.chesscomfiles.com/images_users/using/using-computers-1.mp4
I've tried these commands, among others:
ffmpeg -i using-computers-1.mp4 -f mpegts -acodec libfaac -ar 48000 -ab 64k -s 320x320 -vcodec libx264 -vbsf h264_mp4toannexb -b 96k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 2 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 96k -bufsize 96k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -g 30 -async 2 us2.ts
ffmpeg -i using-computers-1.mp4 -f mpegts -acodec libfaac -ar 48000 -ab 64k -s 320x320 -vcodec libx264 -vbsf h264_mp4toannexb -b 96k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 96k -bufsize 96k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -g 30 -async 2 us1.ts
ffmpeg -i using-computers-1.mp4 -vbsf h264_mp4toannexb -acodec copy -vcodec copy -f mpegts output.ts
If someone can help me figure out what ffmpeg commands I should be running I'd really appreciate it!
Regarding the first warning:
WARNING: Playlist Content-Type is 'application/x-mpegurl', but should be one of 'application/vnd.apple.mpegurl', 'audio/x-mpegurl' or 'audio/mpegurl'.
It could be from the server setup. Follow the instructions from Step 4 of this Ion Cannon post:
Prepare the HTTP server. Upload a set of files that represent the stream and a stream definition file (ts and m3u8). Those files can be uploaded to a web server at this point, but there is another important step to take that ensures they will be downloaded correctly, and that is setting up MIME types. There are two MIME types that are important for the streaming content:
.m3u8 application/x-mpegURL
.ts video/MP2T
If you are using Apache you would want to add the following to your httpd.conf file:
AddType application/x-mpegURL .m3u8
AddType video/MP2T .ts
If you are using lighttpd you would want to put this in your configuration file (if you have other MIME types defined, make sure you just add these and don't overwrite them):
mimetype.assign = ( ".m3u8" => "application/x-mpegURL", ".ts" => "video/MP2T" )
Regarding the third warning:
WARNING: Media segment duration outside of expected duration by 47.733 % (5.23 vs. 10.00 seconds, limit is 20 %). 40: us2-19.ts
This usually happens if a segment's actual duration differs from the duration listed for that segment in the playlist (m3u8). For example, the playlist below has one segment, which the playlist lists as 10 seconds long. If the actual duration of that segment differs by too much (more than 20%), the validator will complain.
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
med0.ts
#EXT-X-ENDLIST
Usually the last segment in a playlist will differ a bit from the target, and this warning can be ignored.
And, as a general rule these "WARNING" messages can be ignored, but "ERROR" messages need to be taken seriously.
However, the second warning looks more serious, and could possibly lead to a rejection from Apple. It could be your segmenter commands (are you using mediastreamsegmenter?).
Also, I'm not using "-vbsf h264_mp4toannexb". And, I'm using "-async 50".
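Applied to your first command, that would mean something like the following (only the bitstream filter dropped and -async changed; everything else kept as you had it, and I haven't run this exact line myself):
ffmpeg -i using-computers-1.mp4 -f mpegts -acodec libfaac -ar 48000 -ab 64k -s 320x320 -vcodec libx264 -b 96k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 2 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 96k -bufsize 96k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -g 30 -async 50 us2.ts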
Btw, the link to your playlist is invalid.