I am trying to achieve partial transcode using ffmpeg.
The command I am using currently is:
ffmpeg.exe -ss start-time -i sourcefile -t duration -y -s 640x360 -b:v 1024k -vcodec libx264 -r 29.7 -movflags faststart -pix_fmt yuv420p outputfile
In the ffmpeg documentation, I read about the -to option:
-to position (output) Stop writing the output at position. position may be a number in seconds, or in hh:mm:ss[.xxx] form.
-to and -t are mutually exclusive and -t has priority.
But when I tried -to in place of -t, the output was the same: the value after -to was treated as the duration of the output video. I thought it would be treated as the end time. Am I missing something?
From the FFmpeg Wiki:
Note that if you specify -ss before -i only, the timestamps will be reset to zero, so -t and -to have the same effect:
ffmpeg -ss 00:01:00 -i video.mp4 -to 00:02:00 -c copy cut.mp4
ffmpeg -i video.mp4 -ss 00:01:00 -to 00:02:00 -c copy cut.mp4
Here, the first command will cut from 00:01:00 to 00:03:00 (in the original), whereas the second command would cut from 00:01:00 to 00:02:00, as intended.
So, make sure you put -ss after the input, so that the timestamps aren't reset.
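Applied to the command from the question (the start-time/end-time placeholders are kept from the original, so treat this as a sketch rather than a drop-in line), moving -ss after the input lets -to act as an absolute end time:
ffmpeg.exe -i sourcefile -ss start-time -to end-time -y -s 640x360 -b:v 1024k -vcodec libx264 -r 29.7 -movflags faststart -pix_fmt yuv420p outputfile
The trade-off is speed: with -ss after -i, ffmpeg decodes and discards everything before start-time instead of seeking directly into the input, which can be slow for long files.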
I scan old 8mm films, so I have folders each containing a set of JPEGs. I turn them into films using ffmpeg (I chose x264 two-pass encoding).
# In every folder whose name starts with 1, run x264 pass 1
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 1 -an -f mp4 /dev/null; cd ..; done
# In every folder whose name starts with 1, run x264 pass 2
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 2 ../"`echo ${PWD##*/}`.mp4"; cd ..; done
Before, I have a set of folders with JPEGs:
1965-FamilyStuff01\img1111.jpg,..,img9999.jpg
1965-FamilyStuff02\img1111.jpg,..,img9999.jpg
and I get
1965-FamilyStuff01.mp4
1965-FamilyStuff02.mp4
Then I discovered vidstab, which also needs two passes:
# Stabilize every video in a folder
mkdir stab; for f in ./*.mp4; do echo "Stabilize $f";
ffmpeg -i "$f" -vf vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2 -y -f mp4 /dev/null;
ffmpeg -i "$f" -vf vidstabtransform=smoothing=30:input="transforms.trf":interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4 -y "stab/$f";
done; rm transforms.trf
But I wonder whether the order is correct, whether the encoding with vidstab could be done in fewer than four passes (two for the x264 encode, then two for vidstab), or whether the order should be changed to optimize the quality of the output film.
You will need to run two commands to use vidstab. But x264 does not need two passes for best quality: two-pass encoding is used to target a specific output file size. Just use a single pass with the -crf option.
So you only need to use two commands:
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2" -f null -
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabtransform=smoothing=30:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" -crf 23 -preset medium output.mp4
See FFmpeg Wiki: H.264 for more info on -crf and -preset.
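Adapted to the per-folder JPEG workflow from the question (a sketch: the loop, frame rate, image pattern, and vidstab/unsharp parameters come from the original commands; -crf 23 -preset medium are just the defaults suggested above):
# pass A: analyze shake per folder, writing transforms.trf into that folder
# pass B: stabilize, sharpen, and encode in a single CRF pass
for f in 1*/ ; do cd "$f";
ffmpeg -y -r 18 -i img%05d.jpg -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3" -f null -;
ffmpeg -y -r 18 -i img%05d.jpg -vf "scale=1200:-2,vidstabtransform=smoothing=30:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" -c:v libx264 -crf 23 -preset medium "../${PWD##*/}.mp4";
cd ..; done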
I'm trying to get the output of a bash file to an RTMP stream.
I've successfully done it with FFmpeg using a filter, but the stream stops at random intervals.
I assume that FFmpeg is reading NULL data from the file.
Currently I write to another file, "output.txt", delete "input.txt" (which FFmpeg is reading), and rename "output.txt" to "input.txt".
Is there a way to do this more atomically in bash so that it works? Or is there a more elegant way to turn a changing text file (updated at most once per second) into an FFmpeg stream?
Here is my current script:
ffmpeg -s 1920x1080 -f rawvideo -pix_fmt rgb24 -r 10 -i /dev/zero -f lavfi -i anullsrc -vcodec h264 -pix_fmt yuv420p -r 10 -b:v 2500k -qscale:v 3 -b:a 712000 -bufsize 512k -vf "drawtext=fontcolor=0xFFFFFF:fontsize=15:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:textfile=input.txt:x=0:y=0:reload=1" -f flv "rtmp://example.com/key"
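A minimal sketch of the usual fix, assuming input.txt and the temporary file are on the same filesystem: never delete the file FFmpeg is reading; instead, write the new text to a temporary file and mv it over input.txt. A same-filesystem mv is a rename(2), which replaces the file atomically, so drawtext's reload=1 always sees either the old or the new complete file ($new_text is a hypothetical variable holding the updated text):
# write the new contents to a temp file, then atomically replace input.txt
printf '%s' "$new_text" > input.txt.tmp
mv -f input.txt.tmp input.txt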
I made a PowerShell function to recode video with some extra parameters. It basically does a Get-ChildItem in the directory and feeds every file it finds to a foreach loop. This worked well as long as I had default values inside my function that get fed into the ffmpeg invocation in the loop whenever I don't provide anything on the command line (like number of passes, audio quality, etc.).
Now I wanted to integrate the option to use ffmpeg's -vf filter option. My problem is that I usually don't need it, so there is no sane default I could use, which means I can't just put something like -vf $filteroption in my command line. So I am trying to figure out how to get that "-vf" inside the variable without PowerShell or ffmpeg getting in the way: at the moment I either get an error about a missing - in what ffmpeg sees (I guess PowerShell parses it away), or, when I escape the -, I see it in the ffmpeg command line but ffmpeg does not recognize it as a single parameter.
examples which work:
&$encoder -hide_banner -i $i -c:v libvpx-vp9 -b:v 0 -crf $quality -tile-columns 6 -tile-rows 2 -threads 8 -speed 2 -frame-parallel 0 -row-mt 1 -c:a libopus -b:a $bitrate -af aformat=channel_layouts=$audio -c:s copy -auto-alt-ref 1 -lag-in-frames 25 -y $outfile;
Here I provide $quality, $audio, etc. to the function as PowerShell parameters, like -quality 31 -audio stereo, and it all works.
But now I need to get something like "-vf scale=1920:-1" (or nothing at all) into that line, and that does not work with just this:
&$encoder -hide_banner -i $i -c:v libvpx-vp9 -b:v 0 -crf $quality -tile-columns 6 -tile-rows 2 -threads 8 -speed 2 -frame-parallel 0 -row-mt 1 -c:a libopus -b:a $bitrate -af aformat=channel_layouts=$audio -c:s copy -auto-alt-ref 1 -lag-in-frames 25 -y $extra $outfile;
When I call the function with RecodeVP9 -extra -vf scale=1920:-1, PowerShell strips the -; if I try escaping the -, ffmpeg complains, saying "Unable to find a suitable output format for '-vf'". I also tried various quoting combinations with similar results. So it seems that either PowerShell or ffmpeg gets in my way.
So to sum it up:
I need a way to pass extra ffmpeg arguments, WITH the parameter name itself (like -vf scale=1920:-1), from the PowerShell command line into my PowerShell function.
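A sketch of one way to do this (the function and variable names mirror the question, but the exact parameter wiring is illustrative): declare -extra as a string array, quote the tokens at the call site so PowerShell's parser doesn't eat the leading -, and splat the array into the ffmpeg call with @extra. An empty array expands to nothing, so no default value is needed.
function RecodeVP9 {
    param(
        [string[]]$extra = @()   # e.g. '-vf','scale=1920:-1'; empty by default
    )
    # ... existing parameter handling and Get-ChildItem loop ...
    # @extra splats each array element as a separate argument for ffmpeg
    &$encoder -hide_banner -i $i -c:v libvpx-vp9 -b:v 0 -crf $quality @extra -y $outfile;
}
# quote the tokens so PowerShell passes them through verbatim:
RecodeVP9 -extra '-vf','scale=1920:-1'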
When I try to encode a video file with two passes in ffmpeg using VP9, the output file of the first pass is empty, so I cannot proceed with the second pass.
Code for the two-pass:
Pass 1:
ffmpeg -y -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -crf 20 -pass 1 -an -f avi NULL && \
Pass 2:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -pass 2 -b:v 1000K -f avi out.avi
Any help would be greatly appreciated. Thanks.
You don't need to generate a playable file for the first pass; its only purpose is to send the frames to the encoder so that it can log stats. So skip the real muxer and send the output to the null muxer instead.
So, Pass 1
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -pass 1 -an -f null -
Pass 2
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -pass 2 -b:v 1000K out.avi
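If you run several two-pass encodes in the same directory (or in parallel), the default ffmpeg2pass stats file can collide between jobs; -passlogfile gives each encode its own stats file. A sketch, with the vp9stats name chosen here purely for illustration:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -passlogfile vp9stats -pass 1 -an -f null -
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -passlogfile vp9stats -pass 2 out.avi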
I am trying to build an application that will divide an input video file (usually mp4) into chunks so that I can apply some processing to them concurrently and then merge them back into a single file.
To do this, I have outlined 4 steps:
Forcing keyframes at specific intervals, so as to make sure that each chunk can be played on its own. For this I am using the following command:
ffmpeg -i input.mp4 -force_key_frames "expr:gte(t,n_forced*chunk_length)" keyframed.mp4
where chunk_length is the duration of each chunk.
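For example, with 10-second chunks the command becomes (chunk_length substituted just to make the expression concrete):
ffmpeg -i input.mp4 -force_key_frames "expr:gte(t,n_forced*10)" keyframed.mp4
The expression forces a keyframe whenever the timestamp t reaches the next multiple of 10 seconds; n_forced is the count of keyframes forced so far.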
Dividing keyframed.mp4 into multiple chunks. Here is where I have my problem. I am using the following command:
ffmpeg -i keyframed.mp4 -ss 00:00:00 -t chunk_length -vcodec copy -acodec copy test1.mp4
to get the first chunk from my keyframed file, but it isn't capturing the output correctly, since it appears to miss the first keyframe. On other chunks, the duration of the output is also sometimes slightly less than chunk_length, even though I am always using the same -t chunk_length option.
Processing each chunk. For this task, I am using the following commands:
ffmpeg -y -i INPUT_FILE -threads 1 -pass 1 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -an -f mp4 -movflags faststart /dev/null
ffmpeg -y -i INPUT_FILE -threads 1 -pass 2 -s 1280x720 -preset medium -vprofile baseline -c:v libx264 -level 3.0 -vf "format=yuv420p" -b:v 2000k -maxrate:v 2688k -bufsize:v 2688k -r 25 -g 25 -keyint_min 50 -x264opts "keyint=50:min-keyint=50:no-scenecut" -acodec libfaac -ac 2 -ar 48000 -ab 128k -f mp4 -movflags faststart OUTPUT_FILE.mp4
These commands are not allowed to be modified, since my goal here is to parallelize this process.
Finally, to merge the files I am using concat and a list of the outputs of the 2nd step, as follows:
ffmpeg -f concat -i mylist.txt -c copy final.mp4
In conclusion, I am trying to find out a way to solve the problem with step 2 and also get some opinions if there is a better way to do this.
I found a solution with the following command, which segments the file without needing to force keyframes (it cuts at the nearest keyframe) or to run multiple commands:
ffmpeg -i test.mp4 -f segment -segment_time chunk_length -reset_timestamps 1 -c copy test%02d.mp4
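The processed segments can then be merged back with the concat demuxer from step 4. A sketch of the list file, assuming the chunks kept the test%02d.mp4 naming from the command above (the file names are illustrative):
file 'test00.mp4'
file 'test01.mp4'
file 'test02.mp4'
ffmpeg -f concat -i mylist.txt -c copy final.mp4
Note that -reset_timestamps 1 makes each segment start at timestamp zero, which is also what lets every chunk play on its own.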