ffmpeg RTMP webcam live stream to iPhone/iPad: segment size too big

I'm transcoding an RTMP stream from a Red5 server in order to live stream to an iPhone or iPad device. I built the latest FFmpeg version from the git repo and use the built-in segmenter to create .ts files and an m3u8 playlist file, as follows:
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
This works fine, but I can't get the segment duration below about 12 seconds, even when it is set to 3 (-segment_time 3). It seems to be caused by the libx264 video codec.
Am I missing any flag?
By the way, you can run the ffmpeg command above successfully by simply starting the Red5 SimpleBroadcaster example.

I suspect it is because of the GOP size: the segmenter needs an I-frame boundary to be able to cut a segment, and libx264's default keyframe interval (250 frames) is much longer than 3 seconds.
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -g 90 -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
I added -g 90, which could help: at ~30 fps that forces a keyframe every 3 seconds, matching your -segment_time.
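Alternatively, keyframes can be forced by time rather than GOP length, which keeps segment boundaries aligned even if the source frame rate drifts. A sketch of the same command using -force_key_frames instead of -g (untested against Red5, but the option exists in recent builds):
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -force_key_frames "expr:gte(t,n_forced*3)" -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts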

Related

ffmpeg monochrome rawvideo

I am trying to generate a raw video stream with luma-only (monochrome, YUV400) 8-bit pixel data using the following command:
ffmpeg -i input.mp4 -vcodec rawvideo -pix_fmt gray raw.yuv
After that I want to H.264-encode the raw stream with one of the profiles that support monochrome pixel data (e.g. high):
ffmpeg -f rawvideo -vcodec rawvideo -pix_fmt gray -s 640x512 -r 60 -i raw.yuv -codec:v libx264 -profile:v high -c:a copy out.mp4
However, I always get the following error, which indicates that the raw stream is not in the monochrome format that I expected:
x264 [error]: high profile doesn't support 4:4:4
I am new to ffmpeg and video formats in general. Can somebody please point out what I am missing?
Thank you!
Edit:
I also tried to use the following filter to extract only the luma channel. Unfortunately, the end result was the same.
ffmpeg -i input.mp4 -vcodec rawvideo -pix_fmt gray -filter_complex 'extractplanes=y[y]' -map '[y]' raw.yuv
The ffmpeg version installed was quite old (3.4.7). After installing 4.2.3 everything worked fine.
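For reference, a minimal sketch of the two-step pipeline on a 4.2+ build; the 640x512 size and 60 fps are taken from the question, and the encode stays monochrome by repeating -pix_fmt gray on the output side:
# step 1: decode to luma-only raw video (one byte per pixel)
ffmpeg -i input.mp4 -f rawvideo -pix_fmt gray raw.yuv
# step 2: encode the gray raw stream; high profile permits 4:0:0 (monochrome)
ffmpeg -f rawvideo -pix_fmt gray -s 640x512 -r 60 -i raw.yuv -c:v libx264 -profile:v high -pix_fmt gray out.mp4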

Text streaming with RTMP?

I'm trying to get the output of a bash script into an RTMP stream.
I've successfully done it with FFmpeg using a filter, but the stream stops at random intervals.
I assume that it's FFmpeg reading null data from the file.
I already write to another file, "output.txt", delete "input.txt" (which FFmpeg is reading) and rename "output.txt" to "input.txt".
Is there any way to make this more atomic in bash so it will work? Or is there a more elegant way to turn changing text (updated at most once per second) into an FFmpeg stream?
Here is my current script:
ffmpeg -s 1920x1080 -f rawvideo -pix_fmt rgb24 -r 10 -i /dev/zero -f lavfi -i anullsrc -vcodec h264 -pix_fmt yuv420p -r 10 -b:v 2500k -qscale:v 3 -b:a 712000 -bufsize 512k -vf "drawtext=fontcolor=0xFFFFFF:fontsize=15:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:textfile=input.txt:x=0:y=0:reload=1" -f flv "rtmp://example.com/key"
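The drawtext documentation itself warns that a textfile used with reload=1 must be updated atomically, or it may be read partially or even fail. A rename on the same filesystem is atomic, so a minimal writer sketch looks like this (the date call is just a stand-in for whatever produces your text):
while true; do
  date '+%H:%M:%S' > input.txt.tmp   # write the new text to a temporary file
  mv input.txt.tmp input.txt         # rename is atomic on the same filesystem
  sleep 1
done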

FFmpeg - Down-mix AC3 5.1 to Fraunhofer FDK AAC 2.1

I'm trying to re-encode some of my old videos to "archive" them.
I do not need to keep the 5.1 audio, but I would like to down-mix it to 2.1 instead of stereo, which sounds just too dull.
This is the relevant part, which takes care of the down-mix to stereo and re-encodes the audio; I would like to adjust it to down-mix to 2.1:
-ac 2 -c:a libfdk_aac -vbr 3
I did some research and it seems that there is a -layouts switch which lists 2.1 as a supported layout, but I don't know how to use it. Which channel should go where?
Just for illustration, so you get the whole picture, I'm currently using this script:
#!/bin/bash
for i in *.mkv; do
    # Output new files by prepending "x265_" to the names
    /cygdrive/c/media-autobuild_suite/local32/bin-video/ffmpeg.exe -y -i "$i" -c:v libx265 -preset slow -b:v 512k -x265-params pass=1 -c:s copy -c:a copy -f matroska NUL && \
    /cygdrive/c/media-autobuild_suite/local32/bin-video/ffmpeg.exe -i "$i" -c:v libx265 -preset slow -b:v 512k -x265-params pass=2 -c:s copy -ac 2 -c:a libfdk_aac -vbr 3 x265_"$i"
done
The FDK AAC encoder does not support 2.1, but FFmpeg's native AAC encoder does.
ffmpeg -i "$i" ... -c:s copy -af pan=2.1 -c:a aac x265_"$i"
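If your build rejects the bare pan=2.1 form, pan also accepts explicit per-channel definitions. A sketch with made-up mix coefficients (in.mkv/out.mkv are placeholders, and this is not a calibrated down-mix; tune to taste):
ffmpeg -i in.mkv -c:v copy -c:s copy \
-af "pan=2.1|FL=FC+0.30*FL+0.30*BL|FR=FC+0.30*FR+0.30*BR|LFE=LFE" \
-c:a aac out.mkv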

Encoding 25 MP video

I have a 25MP uncompressed video file of 100 frames.
I tried to encode it with ffmpeg and the H.264 encoder into an MP4 file, but the encoding got stuck around the 10th frame.
This is the script:
avconv -y -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 1 -c:a libfdk_aac -b:a 5000K -f mp4 /dev/null && \
avconv -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 2 -c:a libfdk_aac -b:a 5000K output.mp4
I am running it on a Jetson TK1 with an NVIDIA GPU. Is there any way to use accelerated encoding to make the encoding possible?
Please, if you can, give me a sample script of something that might work.
Right now I don't care how much time the encoding takes, as long as it works.
Thank you in advance! :)
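One avenue, purely as a sketch: if your FFmpeg build is compiled with NVENC support, the h264_nvenc encoder moves the encode onto the GPU. Whether the TK1's driver stack actually exposes NVENC to FFmpeg is something you would need to verify on the device:
# hypothetical: requires an FFmpeg build with NVENC enabled and a supported driver
ffmpeg -y -i input.avi -c:v h264_nvenc -b:v 5000k -an output.mp4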

Dividing, processing and merging files with ffmpeg

I am trying to build an application that will divide an input video file (usually mp4) into chunks so that I can apply some processing to them concurrently and then merge them back into a single file.
To do this, I have outlined 4 steps:
1. Forcing keyframes at specific intervals to make sure that each chunk can be played on its own. For this I am using the following command:
ffmpeg -i input.mp4 -force_key_frames "expr:gte(t,n_forced*chunk_length)" keyframed.mp4
where chunk_length is the duration of each chunk.
2. Dividing keyframed.mp4 into multiple chunks. Here is where I have my problem. I am using the following command:
ffmpeg -i keyframed.mp4 -ss 00:00:00 -t chunk_length -vcodec copy -acodec copy test1.mp4
to get the first chunk from my keyframed file, but it isn't capturing the output correctly, since it appears to miss the first keyframe. On other chunks, the duration of the output is also sometimes slightly less than chunk_length, even though I am always using the same -t chunk_length option.
3. Processing each chunk. For this task, I am using the following commands:
ffmpeg -y -i INPUT_FILE -threads 1 -pass 1 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -an -f mp4 -movflags faststart /dev/null
ffmpeg -y -i INPUT_FILE -threads 1 -pass 2 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -acodec libfaac -ac 2 -ar 48000 -ab 128k -f mp4 -movflags faststart OUTPUT_FILE.mp4
These commands are not allowed to be modified, since my goal here is to parallelize this process.
4. Finally, to merge the files I am using concat and a list of the outputs of the 2nd step, as follows:
ffmpeg -f concat -i mylist.txt -c copy final.mp4
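For completeness, the concat demuxer expects mylist.txt to contain one file directive per line; a sketch, assuming the test%02d.mp4 chunk names used above (generating the list in bash keeps it in sync with the chunks):
for f in test*.mp4; do echo "file '$f'"; done > mylist.txt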
In conclusion, I am trying to find a way to solve the problem in step 2, and also to get some opinions on whether there is a better way to do this.
I found a solution with the following command, which segments the file without needing to force keyframes (it cuts on the nearest keyframe) or run multiple commands:
ffmpeg -i test.mp4 -f segment -segment_time chunk_length -reset_timestamps 1 -c copy test%02d.mp4