MPEG-TS Segments HTTP Live Streaming - encoding

I'm trying to interleave MPEG-TS segments but failing. One set of segments was captured using the built-in camera in the laptop, then encoded using FFmpeg with the following command:
ffmpeg -er 4 -y -f video4linux2 -s 640x480 -r 30 -i %s -isync -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
And the other one is an AVI file that was encoded using the following command:
ffmpeg -er 4 -y -f avi -s 640x480 -r 30 -i ./DSCF2021.AVI -vbsf dump_extra -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
Then the output is segmented into ts segments using an open source segmenter.
If both come from the same source (both from the camera) they work fine. However, in this case, the second set of segments freezes: time passes, but the video does not move.
So I think it's an encoding problem. My question is: how should I change the ffmpeg command to make this work?
By interleave I mean having a playlist with the first set of segments, and another playlist with the other set of segments, and having the client play one and then the other (HTTP Live Streaming).
The ffprobe output of one of the first set of segments:
Input #0, mpegts, from 'live1.ts':
Duration: 00:00:09.76, start: 1.400000, bitrate: 281 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 29.92 fps, 29.92 tbr, 90k tbn, 59.83 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 111 kb/s
The ffprobe output of one of the second set of segments:
Input #0, mpegts, from 'ad1.ts':
Duration: 00:00:09.64, start: 1.400000, bitrate: 578 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 22 kb/s
Thank you,

I have seen quite a few questions on the subject - see:
HTTP Live Streaming MPEG TS segment and
Update .m3u8 playlist file for HTTP Live streaming?
I am not sure of the exact problem, but I think most people report that when content from two different sources is mixed, the playback freezes.
I think this situation can arise if the PTS and/or PCR is discontinuous and the player does not recognize this or flush its buffers. You could inspect the timestamp sequence and see whether fixing it solves the problem.
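To make the idea concrete, here is a minimal sketch of spotting such a discontinuity, assuming you have already extracted the packet PTS values (e.g. with ffprobe). The function name and threshold are illustrative, not part of any ffmpeg API:

```python
# Sketch: flag jumps in a 90 kHz PTS sequence that go backwards or
# exceed a tolerance, which is how a stitched-together stream reveals
# a discontinuity. find_discontinuities and max_gap_90khz are
# illustrative names, not part of any real tool.

def find_discontinuities(pts_values, max_gap_90khz=90000):
    """Return indices where the PTS goes backwards or jumps by more than max_gap_90khz."""
    breaks = []
    for i in range(1, len(pts_values)):
        delta = pts_values[i] - pts_values[i - 1]
        if delta < 0 or delta > max_gap_90khz:
            breaks.append(i)
    return breaks

# Segment set 1 ends near PTS 903000; segment set 2 restarts near
# 126000 - a backward jump a player may not handle gracefully.
pts = [897000, 900000, 903000, 126000, 129000]
print(find_discontinuities(pts))  # → [3]
```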
Also, see 3.3.11. of https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07 : EXT-X-DISCONTINUITY
The EXT-X-DISCONTINUITY tag indicates an encoding discontinuity
between the media segment that follows it and the one that preceded
it. The set of characteristics that MAY change is:
o file format
o number and type of tracks
o encoding parameters
o encoding sequence
o timestamp sequence
So a discontinuity tag in the playlist file might help if the problem is any of the above. Please try some of these suggestions and post more details; I suspect this will help a lot of other people as well.
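For example, if the two sets of segments are joined into one playlist, the tag marks the boundary between them. A sketch, with segment names and durations taken from the ffprobe output above (the combined layout itself is just an illustration):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:9.76,
live1.ts
#EXT-X-DISCONTINUITY
#EXTINF:9.64,
ad1.ts
#EXT-X-ENDLIST
```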

Related

Can stream to Facebook using ffmpeg but not when using tee

I'm trying to stream to YouTube and Facebook simultaneously using ffmpeg.
I can do them individually, but I want to use "tee" to send the results of encoding to two places.
If I do:
ffmpeg -re -i pipe:0 -acodec libfdk_aac -bsf:a aac_adtstoasc \
-ar 44100 -b:a 128k -pix_fmt yuv420p -profile:v baseline \
-s 720x480 -bufsize 2048k -vb 1300k -maxrate 4000k -deinterlace \
-vcodec libx264 -g 25 -r 25 \
-f flv "rtmp://rtmp-api.facebook.com:80/rtmp/key"
It works just fine.
But if I do:
ffmpeg -re -i pipe:0 -acodec libfdk_aac -bsf:a aac_adtstoasc \
-ar 44100 -b:a 128k -pix_fmt yuv420p -profile:v baseline \
-s 720x480 -bufsize 2048k -vb 1300k -maxrate 4000k -deinterlace \
-vcodec libx264 -g 25 -r 25 \
-f tee -map 0:v -map 0:a \
"[f=flv]rtmp://rtmp-api.facebook.com:80/rtmp/key"
Then I get a rtmp 104 error.
If that would work then I could just do:
"[f=flv]rtmp://rtmp-api.facebook.com:80/rtmp/key|[f=flv]rtmp://youtube.etc"
And that would stream to both.
I did find out that I needed "-bsf:a aac_adtstoasc" otherwise the encoder broke, complaining about malformed bits.
Any ideas?
The error is only with Facebook. YouTube works fine.
Console output:
Metadata:
encoder : Lavf57.72.101
Stream #0:0: Video: h264 (libx264), yuv420p, 720x480 [SAR 8:9 DAR 4:3], q=-1--1, 1300 kb/s, 29.97 fps, 29.97 tbn, 29.97 tbc
Metadata:
encoder : Lavc57.95.101 libx264
Side data:
cpb: bitrate max/min/avg: 2000000/0/1300000 buffer size: 2048000 vbv_delay: -1
Stream #0:1: Audio: aac (libfdk_aac), 44100 Hz, stereo, s16, 128 kb/s
Metadata:
encoder : Lavc57.95.101 libfdk_aac
frame= 61 fps= 30 q=25.0 size=N/A time=00:00:01.97 bitrate=N/A speed=0.961x
WriteN, RTMP send error 104 (136 bytes)
The FLV format requires global headers. When ffmpeg outputs to FLV directly with -f flv, the encoder is signaled to produce global headers. But when -f tee is the primary/parent muxer, that flag isn't set, so it has to be set manually via -flags +global_header.
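A sketch of the corrected command, assuming everything else stays as in the question; the one addition is -flags +global_header, and the tee output lists both destinations from the question:

```
ffmpeg -re -i pipe:0 -acodec libfdk_aac -bsf:a aac_adtstoasc \
-ar 44100 -b:a 128k -pix_fmt yuv420p -profile:v baseline \
-s 720x480 -bufsize 2048k -vb 1300k -maxrate 4000k -deinterlace \
-vcodec libx264 -g 25 -r 25 -flags +global_header \
-f tee -map 0:v -map 0:a \
"[f=flv]rtmp://rtmp-api.facebook.com:80/rtmp/key|[f=flv]rtmp://youtube.etc"
```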

Streaming from Icecast to Facebook Live with ffmpeg on Ubuntu 16.04

I have a webradio streamed by Liquidsoap+Icecast on a DigitalOcean droplet (Ubuntu 16.04), and I want to combine this audio stream with a simple jpeg image with ffmpeg, transform it to a video stream and send it to Facebook live.
Facebook Live specifications :
Video Format :
We accept video in maximum 720p (1280 x 720) resolution, at 30 frames
per second. (or 1 key frame every 2 seconds). You must send an I-frame
(keyframe) at least once every two seconds throughout the stream.
Recommended max bit rate is 4000 Kbps. Titles must be less than 255
characters otherwise the stream will fail. The Live API accepts H264
encoded video and AAC encoded audio only.
Video Length :
240 minute maximum length, with the exception of continuous live (see
above). 240 minute maximum length for preview streams (either through
Live dialog or publisher tools). After 240 minutes, a new stream key
must be generated.
Advanced Settings :
Pixel Aspect Ratio: Square. Frame Types: Progressive Scan. Audio
Sample Rate: 44.1 KHz. Audio Bitrate: 128 Kbps stereo. Bitrate
Encoding: CBR.
And the ffmpeg command I tried :
ffmpeg -loop 1 -i radio-background.jpg -thread_queue_size 20480 -i http://localhost:8000/radio -framerate 30 -r 30 -acodec aac -strict -2 -c:v libx264 -strict experimental -b:a 128k -pix_fmt yuvj444p -x264-params keyint=60 -b:v 256k -minrate 128k -maxrate 512k -bufsize 768k -f flv 'rtmp://rtmp-api.facebook.com:80/rtmp/<fb-streaming-key>'
This is actually working, as Facebook receives the live video and allows me to publish it. But I can't figure out why there is a lag almost every 2 or 3 seconds. I asked different people to watch the test video, and everyone gets the same problem: every 2 or 3 seconds the playback "freezes" for half a second and seems to buffer; I can even see the loading icon spinning on the screen.
I tried different combinations of values for the following options : -thread_queue_size / -b:v / -minrate / -maxrate / -bufsize. Nothing seems to produce any change.
Video streaming is new to me, and I'm not really comfortable with the options listed above, so I think I'm missing something here...
Also, note that the icecast audio stream perfectly works, and according to DigitalOcean graphs, the server is not overloaded. So I think my ffmpeg command is wrong.
What ffmpeg parameters would be working for that case?
Specify a frame rate for the image; this option goes before the input item:
-r 30 -loop 1 -i radio-background.jpg
If your radio stream is already AAC you can just stream copy; there is no need to re-encode the audio. You can use -c:a copy:
-c:a copy
If you still want to re-encode to AAC, you should switch to libfdk_aac. ffmpeg uses a 128k audio bitrate by default, so there is no need to specify -b:a:
-c:a libfdk_aac
ffmpeg will use the frame rate of the first input for the output by default, so you don't need to specify any more frame rates. (You had the output frame rate specified twice; -framerate 30 and -r 30 do the same thing.)
Use the ultrafast preset for better CPU performance, plus a tune and pixel format. You can also use -g for the keyframe interval:
-c:v h264 -preset ultrafast -tune stillimage -pix_fmt yuvj444p -g 60
Set the profile and profile level:
-profile:v high444 -level 4.2
Use either -b:v or -minrate/-maxrate/-bufsize, but not both:
-b:v 768k
And out we go:
-f flv rtmp://rtmp-api.facebook.com:80/rtmp/streamkey
Now to put it all together:
ffmpeg -r 30 -loop 1 -i radio-background.jpg \
-i http://localhost:port/mount -c:a libfdk_aac -c:v h264 -b:v 768k \
-preset ultrafast -tune stillimage -pix_fmt yuvj444p -g 60 \
-profile:v high444 -level 4.2 -f flv rtmp://rtmp-api.facebook.com:80/rtmp/streamkey

FFMPEG Transcode so that I can view from iPhone Safari

I've been searching all day for a way to transcode uploaded files into something that iPhones can handle (in Safari), without any success. I've read that it's best to use QuickTime for iPhone with the H.264 codec, but I am struggling to find either the correct dependencies or the correct syntax. I have already managed to convert to MP4 and WebM.
MP4:
'ffmpeg -i '.$input.' -strict experimental -s 1024x760 -ab 128k -vcodec libx264 -mbd 2 -flags +mv4+aic -trellis 2 -cmp 2 -subcmp -2 '.$filepath.'/'.$filename.'.mp4'
WebM:
'ffmpeg -i '.$input.' -b 600 -s 1024x760 -ab 128k -vcodec libvpx -ab 128k -acodec libvorbis '.$filepath.'/'.$filenamewithoutext.'.webm'
Anyone know how to get these videos available for Safari (on iPhone/Pad)?
In fact, there are many more options that can be set for the input file as well as for the output file.
However, I have found this German site: http://www.quadhead.de/videos-mit-ffmpeg-fur-das-iphone-konvertieren-und-streamen/ with this command:
ffmpeg.exe -i "%~1" -r 29.97 -vcodec libx264 -s 480x320 -flags +loop -cmp +chroma -deblockalpha 0 -deblockbeta 0 -b 400k -bufsize 4M -bt 256k -refs 1 -coder 0 -me_range 16 -subq 4 -partitions +parti4x4+parti8x8+partp8x8 -g 250 -keyint_min 25 -level 30 -qmin 10 -qmax 51 -qcomp 0.6 -trellis 2 -sc_threshold 40 -i_qfactor 0.71 -acodec aac -ab 80k -ar 48000 -ac 2 -strict experimental -y "%~1".mp4
Yes, I'm German ;) that's the reason for my bad English. So feel free to correct my posts. But hey... I like ffmpeg too.
Have a nice day ;)
According to the official ffmpeg documentation at http://trac.ffmpeg.org/wiki/x264EncodingGuide, this is my suggestion for encoding a video file to at least Apple QuickTime compatibility:
ffmpeg -i INPUT -c:v libx264 -movflags +faststart -profile:v main -pix_fmt yuv420p -c:a aac -cutoff 15000 -b:a 128k OUTPUT.mp4
Have a nice day ;)

Generating 64kbps audio-only mpegts for HTTP Live segmenter to meet 64kbps audio only requirement

I am trying to convert our MP4 files into MPEG-TS and segment them into .ts files for my iPhone app to play. I am using Carson McDonald's HTTP-Live-Video-Stream-Segmenter-and-Distributor to do that.
I got his stuff compiled and working correctly. I am currently trying to meet Apple's requirement that I provide a baseline 64 kbps audio-only stream in my m3u8 playlist.
Carson doesn't seem to have a profile for that.
I need to be able to generate a 64 kbps audio-only stream from the MP4 and turn it into MPEG-TS for the segmenter. I am trying to find the right ffmpeg command that will validate cleanly with Apple's mediastreamvalidator.
So far I modified an existing encoding profile to try to achieve 64 kbps total:
ffmpeg -er 4 -i %s -f mpegts -acodec libmp3lame -ar 22050 -ab 32k -s 240x180 -vcodec libx264 -b 16k -flags +loop+mv4 -cmp 256 -partitions +parti4x4+partp8x8+partb8x8 -subq 7 -trellis 1 -refs 5 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 64k -maxrate 16k -bufsize 16k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 4:3 -r 10 -g 30 -async 2 - | %s %s %s %s %s
but then when I try to validate it using mediastreamvalidator, it gives errors after a few segments:
Playlist Validation: OK
Segments:
sample_cell_4x3_64k-00001.ts:
WARNING: Media segment exceeds target duration of 10.00 seconds by 1.30 seconds (segment duration is 11.30 seconds)
sample_cell_4x3_64k-00002.ts:
WARNING: Media segment exceeds target duration of 10.00 seconds by 1.40 seconds (segment duration is 11.40 seconds)
....
....
sample_cell_4x3_64k-00006.ts:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
sample_cell_4x3_64k-00007.ts:
ERROR: (-1) Unknown video codec: 1836069494 (program 0, track 0)
ERROR: (-1) failed to parse segment as either an MPEG-2 TS or an ES
....
....
Average segment duration: 10.26 seconds
Average segment bitrate: 376797.92 bps
Average segment structural overhead: 349242.17 bps (92.69 %)
Is there some way I can generate this correctly with just audio totaling 64 kbps, and turn it into MPEG-TS ready to be segmented and validated correctly?
Am I approaching the problem right?
I don't remember all the details of Carson's ruby scripts, but the first thing I would do to get an audio-only stream is to stop the video processing (-vn). So something like this:
ffmpeg -er 4 -i %s -f mpegts -acodec libmp3lame -ar 22050 -ab 32k -vn - | %s %s %s %s %s
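Once an audio-only rendition validates, Apple's requirement is met by listing it in the variant (master) playlist as a low-BANDWIDTH entry. A sketch with placeholder URIs and illustrative BANDWIDTH values (adjust to your actual renditions):

```
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
audio_64k/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000
video_400k/prog_index.m3u8
```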

iPhone HTTP Streaming .m3u8 and .ts files - how to create using ffmpeg

I'm trying to get Apple-validated HTTP media streams using ffmpeg and am getting errors. Here are some error examples:
WARNING: Playlist Content-Type is 'application/x-mpegurl', but should be one of 'application/vnd.apple.mpegurl', 'audio/x-mpegurl' or 'audio/mpegurl'.
WARNING: 258 samples (88.966 %) do not have timestamps in track 256 (avc1). 4: us2-1.ts
WARNING: Media segment duration outside of expected duration by 47.733 % (5.23 vs. 10.00 seconds, limit is 20 %). 40: us2-19.ts
Average segment duration: 10.16 seconds
Average segment bitrate: 320.12 kbit/s
Average segment structural overhead: 175.89 kbit/s (54.94 %)
Video codec: avc1
Video resolution: 320x320 pixels
Video frame rate: 29.72, 29.78, 29.82, 30.00, 29.64 fps
Average video bitrate: 100.66 kbit/s
H.264 profile: Baseline
H.264 level: 3.0
Audio codec: aac
Audio sample rate: 48000 Hz
Average audio bitrate: 43.57 kbit/s
Here is the end file I've been submitting: http://files.chesscomfiles.com/images_users/using/us2.m3u8
Here is the file I used to create this: http://files.chesscomfiles.com/images_users/using/using-computers-1.mp4
I've tried these commands, among others:
ffmpeg -i using-computers-1.mp4 -f mpegts -acodec libfaac -ar 48000 -ab 64k -s 320x320 -vcodec libx264 -vbsf h264_mp4toannexb -b 96k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 2 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 96k -bufsize 96k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -g 30 -async 2 us2.ts
ffmpeg -i using-computers-1.mp4 -f mpegts -acodec libfaac -ar 48000 -ab 64k -s 320x320 -vcodec libx264 -vbsf h264_mp4toannexb -b 96k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 96k -bufsize 96k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -g 30 -async 2 us1.ts
ffmpeg -i using-computers-1.mp4 -vbsf h264_mp4toannexb -acodec copy -vcodec copy -f mpegts output.ts
If someone can help me figure out what ffmpeg commands I should be running I'd really appreciate it!
Regarding the first warning:
WARNING: Playlist Content-Type is 'application/x-mpegurl', but should be one of 'application/vnd.apple.mpegurl', 'audio/x-mpegurl' or 'audio/mpegurl'.
It could be from the server setup. Follow the instructions from Step 4 of this Ion Cannon post:
Prepare the HTTP server: Upload a set of files that represent the stream and a stream definition file (ts and m3u8). Those files can be uploaded to a web server at this point, but there is another important step to take that ensures they will be downloaded correctly, and that is setting up MIME types. There are two MIME types that are important for the streaming content:
.m3u8 application/x-mpegURL
.ts video/MP2T
If you are using Apache you would want to add the following to your httpd.conf file:
AddType application/x-mpegURL .m3u8
AddType video/MP2T .ts
If you are using lighttpd you would want to put this in your configuration file (if you have other mime types defined, make sure you just add these and don't set them):
mimetype.assign = ( ".m3u8" =>
"application/x-mpegURL", ".ts" =>
"video/MP2T" )
Regarding the third warning:
WARNING: Media segment duration outside of expected duration by 47.733 % (5.23 vs. 10.00 seconds, limit is 20 %). 40: us2-19.ts
This usually happens when a segment's actual duration differs from the duration listed for that segment in the playlist (m3u8). For example, the playlist below has one segment, listed as 10 seconds. If the actual duration of this segment differs by too much (more than 20%), then the validator will complain.
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
med0.ts
#EXT-X-ENDLIST
Usually the last segment in a playlist will differ a bit from the target, and this warning can be ignored.
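The validator's tolerance check quoted above can be sketched roughly like this (the function name is illustrative, and the 20 % threshold is taken from the warning text, not from Apple's tool):

```python
# Sketch: compare a segment's actual duration against the duration the
# playlist declares for it (#EXTINF) and flag deviations over 20 %.

def out_of_tolerance(declared_s, actual_s, limit=0.20):
    """True if the actual duration deviates from the declared one by more than `limit`."""
    return abs(actual_s - declared_s) / declared_s > limit

# From the warning above: declared 10.00 s, actual 5.23 s -> 47.7 % off.
print(out_of_tolerance(10.00, 5.23))   # → True
# A typical last segment, e.g. 10.16 s against a 10 s target, passes.
print(out_of_tolerance(10.00, 10.16))  # → False
```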
And, as a general rule these "WARNING" messages can be ignored, but "ERROR" messages need to be taken seriously.
However, the second warning looks more serious and could possibly lead to a rejection from Apple. It could be your segmenter command (are you using mediastreamsegmenter?).
Also, I'm not using "-vbsf h264_mp4toannexb". And, I'm using "-async 50".
Btw, the link to your playlist is invalid.