Text streaming with RTMP? - encoding

I'm trying to get the output of a bash script into an RTMP stream.
I've successfully done it with FFmpeg using a filter, but the stream stops at random intervals.
I assume that FFmpeg is reading NULL data from the file.
Currently I write a second file "output.txt", delete "input.txt" (which FFmpeg is reading), and rename "output.txt" to "input.txt".
Is there a more atomic way to do this in bash so it will work? Or is there a more elegant way to turn a changing text file (updated at most once per second) into an FFmpeg stream?
Here is my current script:
ffmpeg -s 1920x1080 -f rawvideo -pix_fmt rgb24 -r 10 -i /dev/zero -f lavfi -i anullsrc -vcodec h264 -pix_fmt yuv420p -r 10 -b:v 2500k -qscale:v 3 -b:a 712000 -bufsize 512k -vf "drawtext=fontcolor=0xFFFFFF:fontsize=15:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:textfile=input.txt:x=0:y=0:reload=1" -f flv "rtmp://example.com/key"
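On the atomicity question: within a single filesystem, mv performs a rename(2), which replaces the target atomically, so the delete step can be dropped entirely. A minimal sketch (the overlay text here is illustrative, standing in for whatever the bash script produces):

```shell
#!/bin/bash
# Atomically replace the file that drawtext (with reload=1) is reading.
# Writing to a temp file and then mv-ing it over input.txt means FFmpeg
# never sees a missing or half-written file.
update_overlay() {
    printf '%s\n' "$1" > input.txt.tmp   # write the new contents to a temp file
    mv -f input.txt.tmp input.txt        # rename(2): atomic replace, no delete window
}

update_overlay "hello"
update_overlay "world"
```

The temp file must live on the same filesystem as input.txt (e.g. the same directory); otherwise mv falls back to copy-and-delete and is no longer atomic.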

Related

Better way to use ffmpeg with vidstab and encoding 2 pass

I scan old 8mm films,
so I have folders each containing a set of JPEGs.
I turn them into films using ffmpeg (I chose x264 2-pass encoding).
# On every folder that starts with 1, run x264 pass 1
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 1 -an -f mp4 /dev/null; cd ..; done
# On every folder that starts with 1, run x264 pass 2
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 2 ../"${PWD##*/}.mp4"; cd ..; done
Before, I have a set of folders with JPEGs:
1965-FamilyStuff01\img1111.jpg,..,img9999.jpg
1965-FamilyStuff02\img1111.jpg,..,img9999.jpg
and I get
1965-FamilyStuff01.mp4
1965-FamilyStuff02.mp4
Then I discovered vidstab, which also needs 2 passes.
# Stabilize every video in a folder
mkdir stab
for f in ./*.mp4 ; do
    echo "Stabilize $f"
    ffmpeg -i "$f" -vf vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2 -y -f mp4 /dev/null
    ffmpeg -i "$f" -vf vidstabtransform=smoothing=30:input="transforms.trf":interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4 -y "stab/$f"
done
rm transforms.trf
But I wonder whether the order is correct, whether there is a way to do the encoding with vidstab in fewer than 4 passes (2 passes for the x264 encode, then 2 for vidstab), or whether the order should be changed to optimize the quality of the output film.
You will need to run two commands to use vidstab, but x264 does not need two passes for best quality: two-pass encoding is used to target a specific output file size. Just use a single pass with the -crf option.
So you only need to use two commands:
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2" -f null -
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabtransform=smoothing=30:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" -crf 23 -preset medium output.mp4
See FFmpeg Wiki: H.264 for more info on -crf and -preset.
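Applied to the asker's per-folder layout, the two commands might be combined into one loop like this (a sketch: folder and file naming follow the question, and the filter and -crf settings follow the answer):

```shell
#!/bin/bash
# For each folder starting with 1, stabilize and encode in two ffmpeg runs,
# using single-pass -crf instead of two-pass x264.
for d in 1*/ ; do
    name="${d%/}"
    # vidstab pass 1: analyze motion; writes transforms.trf in the current dir
    ffmpeg -y -r 18 -i "$name/img%05d.jpg" \
        -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3" \
        -f null -
    # vidstab pass 2: apply the transforms and encode once with CRF
    ffmpeg -y -r 18 -i "$name/img%05d.jpg" \
        -vf "scale=1200:-2,vidstabtransform=smoothing=30:input=transforms.trf:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" \
        -c:v libx264 -crf 23 -preset medium "$name.mp4"
    rm -f transforms.trf
done
```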

FFmpeg - Down-mix AC3 5.1 to Fraunhofer FDK AAC 2.1

I'm trying to re-encode some of my old videos to "archive" them.
I do not need to keep the 5.1 audio, but I would like to down-mix it to 2.1 instead of stereo, which sounds just too dull.
This is the relevant part that takes care of the down-mix to stereo and re-encodes the audio; I would like to adjust it to down-mix to 2.1 instead:
-ac 2 -c:a libfdk_aac -vbr 3
I did some research and it seems that there is a -layouts switch which shows that 2.1 is supported, but I don't know how to use it. Which channel should go where?
Just for illustration and for you to get the whole picture - I'm currently using this script:
#!/bin/bash
for i in *.mkv;
do
#Output new files by prepending "x265" to the names
/cygdrive/c/media-autobuild_suite/local32/bin-video/ffmpeg.exe -y -i "$i" -c:v libx265 -preset slow -b:v 512k -x265-params pass=1 -c:s copy -c:a copy -f matroska NUL && \
/cygdrive/c/media-autobuild_suite/local32/bin-video/ffmpeg.exe -i "$i" -c:v libx265 -preset slow -b:v 512k -x265-params pass=2 -c:s copy -ac 2 -c:a libfdk_aac -vbr 3 x265_"$i"
done
The FDK AAC encoder does not support the 2.1 layout, but the native AAC encoder does:
ffmpeg -i "$i" ... -c:s copy -af pan=2.1 -c:a aac x265_"$i"
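If an explicit mapping is wanted, the pan filter can spell out which input channel goes where. The 0.707 center/surround gains below are conventional downmix coefficients shown for illustration, not something from the answer; the short form pan=2.1 lets ffmpeg choose the mapping itself.

```shell
# Sketch: explicit 5.1 (FL FR FC LFE BL BR) -> 2.1 (FL FR LFE) downmix
# with the native AAC encoder; the channel gains are illustrative.
ffmpeg -i input.mkv -c:v copy -c:s copy \
    -af "pan=2.1|FL=FL+0.707*FC+0.707*BL|FR=FR+0.707*FC+0.707*BR|LFE=LFE" \
    -c:a aac output.mkv
```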

Encoding 25mp video

I have a 25MP uncompressed video file of 100 frames.
I tried to encode it with ffmpeg and h264 encoder into a .mp4 file, but the encoding got stuck around the 10th frame.
This is the script:
avconv -y -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 1 -c:a libfdk_aac -b:a 5000K -f mp4 /dev/null && \
avconv -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 2 -c:a libfdk_aac -b:a 5000K output.mp4
I am running it on a Jetson TK1 with an NVIDIA GPU; is there any way to use accelerated encoding to make the encode possible?
Please, if you can, give me a sample script of something that might work.
Right now I don't care how much time the encoding takes, as long as it works.
Thank you in advance! :)

Dividing, processing and merging files with ffmpeg

I am trying to build an application that will divide an input video file (usually mp4) into chunks so that I can apply some processing to them concurrently and then merge them back into a single file.
To do this, I have outlined 4 steps:
1. Forcing keyframes at specific intervals so that each chunk can be played on its own. For this I am using the following command:
ffmpeg -i input.mp4 -force_key_frames "expr:gte(t,n_forced*chunk_length)" keyframed.mp4
where chunk_length is the duration of each chunk.
2. Dividing keyframed.mp4 into multiple chunks. Here is where I have my problem. I am using the following command:
ffmpeg -i keyframed.mp4 -ss 00:00:00 -t chunk_length -vcodec copy -acodec copy test1.mp4
to get the first chunk from my keyframed file, but it isn't capturing the output correctly, since it appears to miss the first keyframe. On other chunks, the duration of the output is also sometimes slightly less than chunk_length, even though I always use the same -t chunk_length option.
3. Processing each chunk. For this task, I am using the following commands:
ffmpeg -y -i INPUT_FILE -threads 1 -pass 1 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -an -f mp4 -movflags faststart /dev/null
ffmpeg -y -i INPUT_FILE -threads 1 -pass 2 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -acodec libfaac -ac 2 -ar 48000 -ab 128k -f mp4 -movflags faststart OUTPUT_FILE.mp4
These commands are not allowed to be modified, since my goal here is to parallelize this process.
4. Finally, to merge the files I am using concat with a list of the outputs of the processing step, as follows:
ffmpeg -f concat -i mylist.txt -c copy final.mp4
In conclusion, I am trying to find a way to solve the problem with step 2, and also to get opinions on whether there is a better way to do this.
I found a solution with the following command, which segments the file without needing to force keyframes (it cuts on the nearest keyframe) and without multiple commands.
ffmpeg -i test.mp4 -f segment -segment_time chunk_length -reset_timestamps 1 -c copy test%02d.mp4
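Putting the segment-muxer solution together with steps 3 and 4 above, the whole pipeline might be sketched as follows (process_chunk is a hypothetical wrapper around the unmodifiable two-pass commands; the chunk and list names are illustrative):

```shell
#!/bin/bash
chunk_length=10

# 1. Split on existing keyframes into self-contained chunks (no re-encode).
ffmpeg -i input.mp4 -f segment -segment_time "$chunk_length" \
    -reset_timestamps 1 -c copy "chunk%02d.mp4"

# 2. Process the chunks concurrently; process_chunk is a hypothetical
#    function wrapping the fixed two-pass commands from step 3.
for c in chunk*.mp4; do
    process_chunk "$c" "out_$c" &
done
wait

# 3. Build the concat list from the processed chunks and merge them.
: > mylist.txt
for c in out_chunk*.mp4; do
    printf "file '%s'\n" "$c" >> mylist.txt
done
ffmpeg -f concat -i mylist.txt -c copy final.mp4
```

One caveat for the concurrent step: two-pass encoding writes shared pass-log files (ffmpeg2pass-*.log by default) in the working directory, so parallel runs should each use -passlogfile with a distinct prefix or run in separate directories.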

ffmpeg rtmp webcam live stream iphone/pad segment size too big

I'm transcoding an RTMP stream from a Red5 server for live streaming on an iPhone or iPad device. I built the latest ffmpeg version from the git repo and use the built-in segmenter to create .ts files and an m3u8 playlist file, as follows:
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
This works fine, but I can't get the segment size below about 12 seconds, even when it is set to 3 (-segment_time 3). It seems to be caused by the libx264 vcodec.
Am I missing any flag?
By the way, you can simply run the ffmpeg command above by starting the Red5 SimpleBroadcaster example.
I suspect it is because of the GOP size: the segmenter needs an I-frame boundary to be able to create segments.
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -g 90 -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
I added -g 90, which could help.
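Alternatively (my suggestion, not from the answer), -force_key_frames can pin keyframes exactly to the segment interval instead of relying on a fixed GOP length:

```shell
# Sketch: force an I-frame every 3 s so the segmenter can cut at -segment_time 3.
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
    -c:v libx264 -b:v 128k -force_key_frames "expr:gte(t,n_forced*3)" \
    -vpre ipod320 -flags -global_header -map 0 \
    -f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
    -segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
```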