How to capture a continuous real-time progress stream of output (from ffmpeg) - PowerShell

I am having trouble showing the progress of ffmpeg through my script. I compile my script to an exe with ps2exe, and ffmpeg writes its progress on standard error instead of standard out.
So I used the pipe:1 option.
My script.ps1 now is:
# $nb_of_frames= #some_int
& $ffmpeg_path -progress pipe:1 -i input.mp4 -c:v libx264 -pix_fmt yuv420p -crf 25 -preset fast -an output.mp4
Then I compile it with ps2exe. (To reproduce this you don't need to compile; just run the above command with pipe:1 directly in cmd or PowerShell and you will get the same behavior.)
Normally ffmpeg gives you interactive progress reporting: one line containing the information, which keeps getting updated in place without spamming the console with hundreds of lines. It looks like this:
frame= 7468 fps=115 q=22.0 size= 40704kB time=00:05:10.91 bitrate=1072.5kbits/s speed= 4.8x
But this does not appear in the compiled version of my script, so after digging I added -progress pipe:1 to make the progress appear on standard out.
Now I get continuous output every second that looks like this:
frame=778
fps=310.36
stream_0_0_q=16.0
bitrate= 855.4kbits/s
total_size=3407872
progress=continue
...
frame=1092
fps=311.04
stream_0_0_q=19.0
bitrate= 699.5kbits/s
total_size=3932160
progress=continue
I would like to print some sort of updatable percentage out of this. I can compute a percentage easily once I capture that frame number, but I don't know how to capture real-time output like this, nor how to make my progress reporting update a single line of percentage in real time (or a progress bar drawn with symbols) instead of spamming many lines.
(Or, if there is a way to make the default progress of ffmpeg appear in the compiled version of my script, that would work too.)
edit: a suggestion based on the below answer
#use the following lines instead of Write-Progress if using with ps2exe
#$a = [math]::Floor($frame * 100 / $maxFrames)
#$str = "#" * $a
#$str2 = "-" * (100 - $a)
#Write-Host -NoNewLine "`r$a% complete | $str $str2|"
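For completeness, a minimal end-to-end sketch that combines the frame capture from the answer below with this single-line text bar (the ffmpeg path and frame count are placeholders; adjust them to your input):

$ffmpeg_path = 'C:\tools\ffmpeg\ffmpeg.exe'   # placeholder path
$maxFrames = 12345                            # placeholder frame count

& $ffmpeg_path -progress pipe:1 -i input.mp4 -c:v libx264 -pix_fmt yuv420p -crf 25 -preset fast -an output.mp4 |
    Select-String 'frame=(\d+)' | ForEach-Object {
        $frame = [int] $_.Matches.Groups[1].Value
        # clamp to 0..100 so a slightly-off $maxFrames cannot break the bar
        $a = [int][math]::Min(100, [math]::Floor($frame * 100 / $maxFrames))
        $str = "#" * $a
        $str2 = "-" * (100 - $a)
        # `r moves the cursor back to the start of the line, so the bar redraws in place
        Write-Host -NoNewline "`r$a% complete | $str $str2|"
    }
Write-Host ""   # drop to a fresh line once ffmpeg finishes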
Thanks

Here is an example of how to capture the current frame number from the ffmpeg output, calculate a percentage, and pass it to Write-Progress:
$maxFrames = 12345
& $ffmpeg_path -progress pipe:1 -i input.mp4 -c:v libx264 -pix_fmt yuv420p -crf 25 -preset fast -an output.mp4 |
    Select-String 'frame=(\d+)' | ForEach-Object {
        $frame = [int] $_.Matches.Groups[1].Value
        Write-Progress -Activity 'ffmpeg' -Status 'Converting' -PercentComplete ($frame * 100 / $maxFrames)
    }
Remarks:
The Select-String parameter is a regular expression that captures the frame number with the group (\d+) (where \d matches a digit and + requires at least one digit). See this Regex101 demo.
ForEach-Object runs the given script block for each match produced by Select-String. Here $_.Matches.Groups[1].Value extracts the value matched by the first regex group. Then we convert it to an integer so it can be used in calculations.
Finally, we calculate the percentage and pass it to Write-Progress.
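One note beyond the original answer: the example assumes $maxFrames is already known. If you need to compute it, ffprobe (shipped alongside ffmpeg) can count the packets of the video stream, which is a reasonable proxy for the frame count. A sketch, assuming ffprobe is on PATH and input.mp4 is your source:

# count the packets of the first video stream; nb_frames is often missing
# from the container, while -count_packets always produces a value
$maxFrames = [int](& ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 input.mp4)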

Related

Download a part of a YouTube video using a PowerShell script

I'm writing this PowerShell script:
$URL = "https://www.youtube.com/watch?v=KbuwueqEJL0"
$from = "00:06:15"
$to = "00:09:17"
$cmdOutput = (youtube-dl --get-url $URL)
ffmpeg -ss $from -to $to -i <video_url> -ss $from -to $to -i <audio_url> output.mkv
This script's purpose is to download a part of a YouTube video. I've set the variable $URL to the YouTube URL, while $from and $to are the start and end times of the part I want to download.
$cmdOutput captures the stream URLs. The output has two lines: the first is the URL for the video stream, while the second is the audio stream URL.
Currently, I don't know how to use the output as a variable and pick the right line of $cmdOutput for each stream. I guess <video_url> and <audio_url> would be replaced by something like $cmdOutput[line 1] and $cmdOutput[line 2], though I know those are incorrect.
I've consulted this answer, and it is handy for me to write this script. I've also read Boris Lipschitz's answer on how to do the same thing with Python, but his answer does not work.
In that script, the -ss <start_time> flag sets the seeking point, and the -t <duration> flag tells FFmpeg to stop encoding after the specified duration. For example, if the start time is 00:02:00 and the duration is 00:03:00, FFmpeg downloads from 00:02:00 to 00:05:00, which is not the expected outcome. For some reason, his Python script also skips the first 5 seconds of output, even if I replace the -t flag with -to <end_time>. I've tried to edit his script, but it does not work unless you explicitly specify the time for both the video and audio streams, as well as their respective stream URLs.
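Not part of the original question, but a minimal sketch of the indexing it asks about: PowerShell captures the output of a native command as an array of lines indexed from 0, so (assuming youtube-dl prints exactly two lines, video URL first):

$URL = "https://www.youtube.com/watch?v=KbuwueqEJL0"
$from = "00:06:15"
$to = "00:09:17"
$cmdOutput = (youtube-dl --get-url $URL)
$videoUrl = $cmdOutput[0]   # first output line: video stream URL
$audioUrl = $cmdOutput[1]   # second output line: audio stream URL
ffmpeg -ss $from -to $to -i $videoUrl -ss $from -to $to -i $audioUrl output.mkv

One caveat: if the requested format is a single muxed stream, --get-url prints only one line and PowerShell returns a plain string instead of an array, so it is worth checking $cmdOutput.Count before indexing.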

ffmpeg blackdetect, start_black value to mkv chapter?

I'm trying to detect chapters automatically with blackdetect in ffmpeg.
When I use blackdetect I get a result, but what is the result? It's not frames? Also, is it possible to write a script/bat file (for Windows 10, PowerShell, or cmd) that converts the result to an "mkv XML file" so it can be imported with mkvtoolnix?
ffmpeg -i "movie.mp4" -vf blackdetect=d=0.232:pix_th=0.1 -an -f null - 2>&1 | findstr black_duration > output.txt
result:
black_start:2457.04 black_end:2460.04 black_duration:3
black_start:3149.46 black_end:3152.88 black_duration:3.41667
black_start:3265.62 black_end:3268.83 black_duration:3.20833
black_start:3381.42 black_end:3381.92 black_duration:0.5
black_start:3386.88 black_end:3387.38 black_duration:0.5
black_start:3390.83 black_end:3391.33 black_duration:0.5
black_start:3824.29 black_end:3824.58 black_duration:0.291667
black_start:3832.71 black_end:3833.08 black_duration:0.375
black_start:3916.29 black_end:3920.29 black_duration:4
Please see the documentation on this function here. Specifically this line:
Output lines contains the time for the start, end and duration of the detected black interval expressed in seconds.
If you read further down the page, you will see another function, blackframe, which does a similar thing but outputs the frame number rather than seconds.
Since the mkvtoolnix XML file format has a chapter definition facility, you can create a script that takes the ffmpeg output and dumps it into the correct format, in any language of your choice.
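As a concrete illustration of that last point, here is a PowerShell sketch (not a tested tool; check the chapter element names against the mkvtoolnix documentation before relying on it) that turns each black_start value from the output.txt produced above into a chapter timestamp:

$chapters = @()
$i = 0
foreach ($line in Get-Content output.txt) {
    if ($line -match 'black_start:([\d.]+)') {
        $i++
        # blackdetect reports seconds; chapter XML wants HH:MM:SS.mmm
        $stamp = '{0:hh\:mm\:ss\.fff}' -f [TimeSpan]::FromSeconds([double]$Matches[1])
        $chapters += "    <ChapterAtom>"
        $chapters += "      <ChapterTimeStart>$stamp</ChapterTimeStart>"
        $chapters += "      <ChapterDisplay><ChapterString>Chapter $i</ChapterString></ChapterDisplay>"
        $chapters += "    </ChapterAtom>"
    }
}
$xml = @('<?xml version="1.0"?>', '<Chapters>', '  <EditionEntry>') + $chapters + @('  </EditionEntry>', '</Chapters>')
$xml | Set-Content chapters.xml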

Writing __fish_is_first_arg, but includes the arg's parameter

I'm writing a completions file for cwebp, Google's to-webp converter. Its help says that -preset should come before all other arguments. With that in mind, I tried restricting its availability with __fish_is_first_arg, like this:
complete -c cwebp -x -n '__fish_is_first_arg' -o preset -a 'default photo picture drawing icon text' -d 'Preset setting'
This would make it so cwebp -o -pres<Tab> would not suggest -preset, which is what I wanted.
Meanwhile, cwebp -pres<Tab> would fill out the argument to its full -preset, which is also what I wanted.
However, when I press the Tab key at cwebp -preset <Tab>, the only suggestions given are the files and directories in the current directory. This is not what I wanted.
With this in mind, I figured I had to write a "is first or second option" function. However, it's not going well. Here's what I have so far:
function __fish_cwebp_is_first_option_or_its_argument
    set -l tokens (commandline -co)
    # line alpha
    switch (count tokens)
        case 1
            return 0
        case 2
            if test \( "$tokens[2]" = '-preset' \)
                return 0
            end
            return 1
        case '*'
            # line beta
            breakpoint
            return 1
    end
end
This function body, as far as I can tell, behaves the same as a bare return 0 (i.e., always true). No matter what, -pres<Tab> completes to -preset, even when the line looks like cwebp -h -H -version -pres<Tab>.
When I put a breakpoint on line alpha, I can echo $tokens and see all the tokens that I've fully typed out (there needs to be at least one space between the last token and the cursor). However, when I have only a breakpoint on line beta as shown here, I can't get the breakpoint to trigger at all, not even with cwebp -h -H -version -pres<Tab> as mentioned above.
What am I doing wrong?
switch (count tokens)
should be:
switch (count $tokens)
(For others reading this: the $ activates variable expansion. count $tokens expands the variable tokens and counts its values, while count tokens counts only the single literal word "tokens".)

ffmpeg: edit metadata and automatically increment their name + set the value of "Title" based on "Name"

This PowerShell code divides a large audio file (sound1) into 5-minute parts and saves them as sound1_001.mp3, sound1_002.mp3...
ffmpeg -i $file_name_complete -f segment -segment_time 300 -c copy $fileNameOnly"_"%03d$fileExtensionOnly
How can I set the metadata title to be the same as the file name?
And how can I also (at the same time) edit the Album metadata with an incremental name (it's useless, but it helps me understand how this works)? It should be alb_1, alb_2.
I have seen on the docs that I should use:
-metadata title="my title"
But:
should I repeat -metadata for each piece of metadata? EDIT: yes, according to this
how can I increment the number, given that the title needs to be quoted (-metadata title="$fileNameOnly""_"%03d won't work since the last quote is missing)?
how can I set the title field so it takes the same value as the Name?
This did not work:
ffmpeg -i $file_name_complete -f segment -segment_time 300
-metadata title="$fileNameOnly""_"%03d album="test"
-c copy $fileNameOnly"_"%03d$fileExtensionOnly
I get this error:
-metadata : The term '-metadata' is not recognized as the name of a cmdlet
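Two things seem to be going on here. The immediate PowerShell error appears because the command is split over three lines with no backtick continuation characters, so PowerShell runs each line as a separate command and treats -metadata as a command name. Beyond that, -metadata is applied once per output file and ffmpeg does not expand %03d inside tag values (pattern expansion applies to output file names), so per-segment titles need a second pass. A sketch of that approach (variable names follow the question; the two-pass structure is my assumption, not a confirmed answer):

# pass 1: split into 5-minute segments, as in the question
ffmpeg -i $file_name_complete -f segment -segment_time 300 -c copy "${fileNameOnly}_%03d$fileExtensionOnly"

# pass 2: stamp each segment's tags in a stream-copy remux (fast, no re-encode)
$n = 0
Get-ChildItem "${fileNameOnly}_*$fileExtensionOnly" | Sort-Object Name | ForEach-Object {
    $n++
    $tmp = Join-Path $_.DirectoryName ("tagged_" + $_.Name)
    # title = file name without extension; album gets the incremental name
    ffmpeg -y -i $_.FullName -c copy -metadata title="$($_.BaseName)" -metadata album="alb_$n" $tmp
    Move-Item -Force $tmp $_.FullName
}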

How to encode a series of .dpx files using X264

I am a complete newbie to video encoding. I am trying to encode a series of .dpx files into one single encoded video output file in any of the following formats (.mp4, .avi, .h264, .mkv, etc.).
I have tried 2 different approaches. The first one works and the second one does not.
I would like to know the difference between the two. Any help / input would be much appreciated.
1) Using FFmpeg along with the x264 library; it works well and I am able to produce the desired output:
ffmpeg -start_number 0 -i frame%4d.dpx -pix_fmt yuv420p -c:v libx264 -crf 28
-profile:v baseline fromdpx.h264
2) I first try to concatenate all the dpx files into a single file using the concat protocol in ffmpeg, and then use x264 to encode the concatenated file.
Here I see that the size of the concatenated file is the sum of all the files concatenated. But when I use the x264 command to encode the concatenated file, I get a green screen (basically not the desired output).
ffmpeg -i "concat:frame0.dpx|frame01.dpx|frame2.dpx etc" -c copy output.dpx
then
x264 --crf 28 --profile baseline -o encoded.mp4 --input-res 1920x1080 --demuxer raw
output.dpx
I also tried to encode the concatenated file using ffmpeg as follows:
ffmpeg -i output.dpx -pix_fmt yuv420p -c:v libx264 -crf 28 -profile:v baseline fromdpx.h264
This also gives me a blank video.
Could someone please point out to me what is going on here? Why does the first method work while the second does not?
Thank you.
In the second approach you specify a DPX file as raw input for x264 (--demuxer raw), but DPX is not a raw format (it is more of a container, with headers, metadata, and optionally RLE compression), so it needs decoding first. x264 supports only these raw formats (--input-csp): i420, yv12, nv12, i422, yv16, nv16, i444, yv24, bgr, bgra, rgb. All of these formats can have 8..16 bits per component (--input-depth).
Also, I doubt the DPX format supports concatenation at all, because it is an image format, not a video format, so your result after concat is probably already broken.
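If you do want a two-step pipeline, the missing piece is decoding the DPX frames to a format the raw demuxer actually understands. A sketch (the resolution and frame rate are assumptions; set them to match your footage):

# step 1: decode the DPX sequence to raw planar 4:2:0 YUV
ffmpeg -start_number 0 -i frame%4d.dpx -pix_fmt yuv420p -f rawvideo frames.yuv

# step 2: encode the raw stream; resolution, colorspace and fps must be given
# explicitly because a raw file has no header describing itself
x264 --demuxer raw --input-csp i420 --input-res 1920x1080 --fps 24 --crf 28 --profile baseline -o encoded.mp4 frames.yuv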