FFmpeg - Two-pass VP9 encoding generates an empty output file for the first pass

When I try to encode a video file with two-pass encoding in ffmpeg using VP9, the output file of the first pass is empty, so I cannot proceed with the second pass.
Code for the two passes:
Pass 1:
ffmpeg -y -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -crf 20 -pass 1 -an -f avi NULL && \
Pass 2:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -pass 2 -b:v 1000K -f avi out.avi
Any help would be greatly appreciated. Thanks.

You don't need to generate a real file for the first pass. Its purpose is simply to send the frames to the encoder so that it can log stats; the encoded output itself can be discarded by sending it to the null muxer.
So, Pass 1:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -pass 1 -an -f null -
Pass 2:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -pass 2 -b:v 1000k out.avi
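Both passes can also be chained in one shell line, cleaning up the stats file afterwards (a minimal sketch, assuming the default ffmpeg2pass-0.log stats file name in the current directory):
# Pass 1 logs stats to ffmpeg2pass-0.log; pass 2 reads them; then remove the log
ffmpeg -y -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -pass 1 -an -f null - && \
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -pass 2 out.avi && \
rm -f ffmpeg2pass-0.log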

Related

Better way to use ffmpeg with vidstab and 2-pass encoding

I scan old 8mm films, so I have folders, each containing a set of JPEGs.
I turn them into films using ffmpeg (I chose x264 2-pass encoding):
# For every folder starting with "1", run x264 pass 1
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 1 -an -f mp4 /dev/null; cd ..; done
# For every folder starting with "1", run x264 pass 2
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 2 ../"`echo ${PWD##*/}`.mp4"; cd ..; done
Before, I have a set of folders with JPEGs:
1965-FamilyStuff01\img1111.jpg,..,img9999.jpg
1965-FamilyStuff02\img1111.jpg,..,img9999.jpg
and I get:
1965-FamilyStuff01.mp4
1965-FamilyStuff02.mp4
Then I discovered vidstab, which also needs two passes:
# Stabilize every video in a folder
mkdir stab; for f in ./*.mp4 ; do echo "Stabilize $f";
ffmpeg -i "$f" -vf vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2 -y -f mp4 /dev/null;
ffmpeg -i "$f" -vf vidstabtransform=smoothing=30:input="transforms.trf":interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4 -y "stab/$f";
done; rm transforms.trf
But I wonder whether the order is correct, or whether there is a way to do the encoding with vidstab in fewer than 4 passes (2 passes for x264 encoding, then 2 passes for vidstab), or whether the order should be changed to optimize the quality of the output film.
You will need to run two commands to use vidstab. But x264 does not need two passes for best quality: two-pass encoding is used to target a specific output file size. Just use a single pass with the -crf option.
So you only need two commands:
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2" -f null -
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabtransform=smoothing=30:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" -crf 23 -preset medium output.mp4
See FFmpeg Wiki: H.264 for more info on -crf and -preset.
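Adapted to the per-folder workflow from the question, the whole job collapses to two ffmpeg runs per film (a sketch under the question's assumptions about folder layout and frame numbering; vidstabdetect writes transforms.trf into the current directory by default):
# For every folder starting with "1": detect shake on the JPEG sequence,
# then stabilize and encode once with CRF
for f in 1*/ ; do
  name="${f%/}"
  ffmpeg -y -r 18 -i "${f}img%05d.jpg" -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3" -f null -
  ffmpeg -y -r 18 -i "${f}img%05d.jpg" -vf "scale=1200:-2,vidstabtransform=smoothing=30:input=transforms.trf:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" -c:v libx264 -crf 23 -preset medium "$name.mp4"
  rm -f transforms.trf
done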

Text streaming with RTMP?

I'm trying to send the output of a bash script to an RTMP stream.
I've successfully done it with FFmpeg using a drawtext filter, but the stream stops at random intervals.
I assume it's FFmpeg reading NULL data from the text file.
Currently I write a second file, "output.txt", delete "input.txt" (which FFmpeg is reading), and rename "output.txt" to "input.txt".
Is there a way to do this more atomically in bash so it will work? Or is there a more elegant way to turn a changing text (updated at most once per second) into an FFmpeg stream?
Here is my current script:
ffmpeg -s 1920x1080 -f rawvideo -pix_fmt rgb24 -r 10 -i /dev/zero -f lavfi -i anullsrc -vcodec h264 -pix_fmt yuv420p -r 10 -b:v 2500k -qscale:v 3 -b:a 712000 -bufsize 512k -vf "drawtext=fontcolor=0xFFFFFF:fontsize=15:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:textfile=input.txt:x=0:y=0:reload=1" -f flv "rtmp://example.com/key"
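On POSIX filesystems, renaming over an existing file is atomic, so the delete-then-rename window can be closed by writing to a temporary file on the same filesystem and moving it into place (a minimal sketch; generate_text stands in for whatever produces the new text and is hypothetical):
# Update input.txt atomically, about once per second
while true; do
  generate_text > input.txt.tmp   # hypothetical producer of the new text
  mv input.txt.tmp input.txt      # rename(2) replaces the file atomically
  sleep 1
done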

FFmpeg VP9 - Different Quantisation Parameters but same output files

I want to encode a video with VP9 using different quantisation parameters (qp=[16,20,24,28,32]). Unfortunately, the output files all have the same data rate after encoding and show no quality differences.
This is my code for qp=20:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_3840x1920_30fps_8bit_420_erp.yuv -c:v libvpx-vp9 -qp 20 -f avi out.avi
Many thanks for any pointers you can give me.
-qp only works for internal mpegvideoenc-derived encoders, such as FFmpeg's built-in MPEG-1/2/4 encoders. libvpx, like x264/x265, uses -crf for this instead. See the Wiki for more details. You can also run ffmpeg -h encoder=libvpx-vp9:
$ ffmpeg -h encoder=libvpx-vp9
[..]
-crf <int> E..V.... Select the quality for constant quality mode (from -1 to 63) (default -1)
So for qp=20, you would use ffmpeg -s:v 3840x1920 -framerate 30 -i video_3840x1920_30fps_8bit_420_erp.yuv -c:v libvpx-vp9 -crf 20 -b:v 0 out.avi. The -b:v 0 is needed so that -crf acts as a true constant-quality target instead of being constrained by a default bitrate.
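To produce one output per quantisation value from the question, the constant-quality command can simply be looped (a minimal sketch):
# Encode once per CRF value; -b:v 0 enables constant-quality mode in libvpx-vp9
for q in 16 20 24 28 32; do
  ffmpeg -y -s:v 3840x1920 -framerate 30 -i video_3840x1920_30fps_8bit_420_erp.yuv -c:v libvpx-vp9 -crf "$q" -b:v 0 "out_crf$q.avi"
done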

Encoding 25MP video

I have a 25MP uncompressed video file of 100 frames.
I tried to encode it with ffmpeg and the h264 encoder into an .mp4 file, but the encoding got stuck around the 10th frame.
This is the script:
avconv -y -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 1 -c:a libfdk_aac -b:a 5000K -f mp4 /dev/null && \
avconv -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 2 -c:a libfdk_aac -b:a 5000K output.mp4
I am running it on a Jetson TK1 with an NVIDIA GPU. Is there any way to use accelerated encoding to make this possible?
Please, if you can, give me a sample script of something that might work.
Right now, I don't care how long the encoding takes, as long as it works.
Thank you in advance! :)
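For comparison, a single-pass software encode with a faster preset is far less demanding than the two-pass command above (a minimal sketch, not specific to the TK1; whether a hardware-accelerated encoder is available depends on the local ffmpeg/avconv build):
# Single pass, CRF rate control, fastest preset; audio dropped for simplicity
ffmpeg -i input.avi -c:v libx264 -preset ultrafast -crf 23 -an output.mp4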

Dividing, processing and merging files with ffmpeg

I am trying to build an application that will divide an input video file (usually mp4) into chunks so that I can apply some processing to them concurrently and then merge them back into a single file.
To do this, I have outlined 4 steps:
1. Forcing keyframes at specific intervals, to make sure that each chunk can be played on its own. For this I am using the following command:
ffmpeg -i input.mp4 -force_key_frames "expr:gte(t,n_forced*chunk_length)" keyframed.mp4
where chunk_length is the duration of each chunk.
2. Dividing keyframed.mp4 into multiple chunks. Here is where I have my problem. I am using the following command:
ffmpeg -i keyframed.mp4 -ss 00:00:00 -t chunk_length -vcodec copy -acodec copy test1.mp4
to get the first chunk from my keyframed file, but it isn't capturing the output correctly: it appears to miss the first keyframe. On other chunks, the duration of the output is also sometimes slightly less than chunk_length, even though I always use the same -t chunk_length option.
3. Processing each chunk. For this task, I am using the following commands:
ffmpeg -y -i INPUT_FILE -threads 1 -pass 1 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -an -f mp4 -movflags faststart /dev/null
ffmpeg -y -i INPUT_FILE -threads 1 -pass 2 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -acodec libfaac -ac 2 -ar 48000 -ab 128k -f mp4 -movflags faststart OUTPUT_FILE.mp4
These commands cannot be modified, since my goal here is to parallelize this process.
4. Finally, to merge the files, I am using concat with a list of the outputs of the 2nd step, as follows:
ffmpeg -f concat -i mylist.txt -c copy final.mp4
In conclusion, I am trying to find a way to solve the problem in step 2, and I would also welcome opinions on whether there is a better way to do this.
I found a solution with the following command, which segments the file without needing forced keyframes or multiple commands (it cuts at the nearest keyframe):
ffmpeg -i test.mp4 -f segment -segment_time chunk_length -reset_timestamps 1 -c copy test%02d.mp4
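After processing, the chunks can be rebuilt into the list file that the concat demuxer in step 4 expects (a sketch assuming the processed chunks keep the test%02d.mp4 naming from the segment command above):
# Generate the concat list, then merge without re-encoding
for f in test*.mp4; do echo "file '$f'"; done > mylist.txt
ffmpeg -f concat -i mylist.txt -c copy final.mp4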