ffmpeg blackdetect, start_black value to mkv chapter? - powershell

I'm trying to detect chapters automatically with ffmpeg's blackdetect filter.
When I use blackdetect I get a result, but what is the result? It's not frames, is it? Also, is it possible to write a script/bat file (for Windows 10, PowerShell or cmd) that converts the result to an MKV chapter XML file so it can be imported with MKVToolNix?
ffmpeg -i "movie.mp4" -vf blackdetect=d=0.232:pix_th=0.1 -an -f null - 2>&1 | findstr black_duration > output.txt
result:
black_start:2457.04 black_end:2460.04 black_duration:3
black_start:3149.46 black_end:3152.88 black_duration:3.41667
black_start:3265.62 black_end:3268.83 black_duration:3.20833
black_start:3381.42 black_end:3381.92 black_duration:0.5
black_start:3386.88 black_end:3387.38 black_duration:0.5
black_start:3390.83 black_end:3391.33 black_duration:0.5
black_start:3824.29 black_end:3824.58 black_duration:0.291667
black_start:3832.71 black_end:3833.08 black_duration:0.375
black_start:3916.29 black_end:3920.29 black_duration:4

Please see the ffmpeg documentation for the blackdetect filter. Specifically this line:
Output lines contains the time for the start, end and duration of the detected black interval expressed in seconds.
If you read further down the page you will see another filter, blackframe, which does a similar thing but outputs the frame number rather than a time in seconds.
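For example, an invocation analogous to the one in the question, but using blackframe instead, might look like this (the amount/threshold values shown are roughly the filter's defaults and may need tuning for your material):
ffmpeg -i "movie.mp4" -vf blackframe=amount=98:threshold=32 -an -f null - 2>&1 | findstr blackframe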
MKVToolNix does have an XML chapter format, so you can write a script in any language of your choice that takes the ffmpeg output and dumps it into that format.
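As a rough, untested PowerShell sketch (the file names are placeholders, and the exact XML layout should be double-checked against the MKVToolNix chapter documentation), something along these lines could turn the output.txt from the question into a chapter file, treating each black_start as a chapter mark:
# Read the blackdetect log and emit a simple Matroska chapter XML file.
# One chapter at 0 plus one per detected black_start; adjust to taste.
$starts = @(0.0) + @(Get-Content output.txt | ForEach-Object {
    if ($_ -match 'black_start:([\d.]+)') { [double]$Matches[1] }
})
$xml = @('<?xml version="1.0" encoding="UTF-8"?>', '<Chapters>', '  <EditionEntry>')
$i = 1
foreach ($s in $starts) {
    # Convert seconds to the HH:MM:SS.nnnnnnnnn timestamp form used by Matroska chapters.
    $t = [TimeSpan]::FromSeconds($s)
    $ts = '{0:00}:{1:00}:{2:00}.{3:000}000000' -f $t.Hours, $t.Minutes, $t.Seconds, $t.Milliseconds
    $xml += '    <ChapterAtom>'
    $xml += "      <ChapterTimeStart>$ts</ChapterTimeStart>"
    $xml += "      <ChapterDisplay><ChapterString>Chapter $i</ChapterString><ChapterLanguage>eng</ChapterLanguage></ChapterDisplay>"
    $xml += '    </ChapterAtom>'
    $i++
}
$xml += '  </EditionEntry>', '</Chapters>'
$xml | Set-Content chapters.xml -Encoding UTF8
The resulting chapters.xml can then be loaded in the MKVToolNix chapter editor or passed to mkvmerge with --chapters chapters.xml. Whether a chapter should start at black_start, black_end or somewhere in between is a matter of taste.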

Related

How to capture a continuous real-time progress stream of output (from ffmpeg)

I am having trouble showing the progress of ffmpeg through my script. I compile my script to an exe with ps2exe, and ffmpeg writes its output to standard error instead of standard out.
So I used the pipe:1 option.
My script.ps1 is now:
# $nb_of_frames= #some_int
& $ffmpeg_path -progress pipe:1 -i input.mp4 -c:v libx264 -pix_fmt yuv420p -crf 25 -preset fast -an output.mp4
Then I compile it with ps2exe. (To reproduce this you don't need to compile; just run the above command with pipe:1 directly in cmd or PowerShell and you will get the same behavior.)
Normally ffmpeg gives you interactive progress reporting: one line containing the information that keeps being updated in place, without spamming the console with hundreds of lines. It looks like this:
frame= 7468 fps=115 q=22.0 size= 40704kB time=00:05:10.91 bitrate=1072.5kbits/s speed= 4.8x
But this does not appear in the compiled version of my script, so after digging I added -progress pipe:1 to get the progress to appear on stdout.
Now I get a continuous output every second that looks like this:
frame=778
fps=310.36
stream_0_0_q=16.0
bitrate= 855.4kbits/s
total_size=3407872
progress=continue
...
frame=1092
fps=311.04
stream_0_0_q=19.0
bitrate= 699.5kbits/s
total_size=3932160
progress=continue
I would like to print some sort of updatable percentage out of this. I can compute a percentage easily if I can capture the frame number, but I don't know how to capture real-time output like this, nor how to make my progress reporting update a single line with the percentage in real time (or some progress bar made of symbols) instead of spamming many lines.
(Or, if there is a way to make the default progress of ffmpeg appear in the compiled version of my script, that would work too.)
Edit: a suggestion based on the answer below:
#use the following lines instead of write-progress if using with ps2exe
#$a=($frame * 100 / $maxFrames)
#$str = "#"*$a
#$str2 = "-"*(100-$a)
#Write-Host -NoNewLine "`r$a% complete | $str $str2|"
Thanks
Here is an example of how to capture the current frame number from the ffmpeg output, calculate a percentage and pass it to Write-Progress:
$maxFrames = 12345
& $ffmpeg_path -progress pipe:1 -i input.mp4 -c:v libx264 -pix_fmt yuv420p -crf 25 -preset fast -an output.mp4 |
    Select-String 'frame=(\d+)' | ForEach-Object {
        $frame = [int] $_.Matches.Groups[1].Value
        Write-Progress -Activity 'ffmpeg' -Status 'Converting' -PercentComplete ($frame * 100 / $maxFrames)
    }
Remarks:
The Select-String parameter is a regular expression that captures the frame number in the group (\d+) (where \d means a digit and + requires at least one digit). You can test the pattern at regex101.com.
ForEach-Object runs the given script block for each match from Select-String. There $_.Matches.Groups[1].Value extracts the matched value of the first regex group, which we then convert to an integer so it can be used in calculations.
Finally we calculate the percentage and pass it to Write-Progress.
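If the script is compiled with ps2exe and Write-Progress does not render, the same pipeline can drive a plain single-line text bar instead, along the lines of the edit in the question. A minimal sketch (same hypothetical $maxFrames placeholder as above):
$maxFrames = 12345
& $ffmpeg_path -progress pipe:1 -i input.mp4 -c:v libx264 -pix_fmt yuv420p -crf 25 -preset fast -an output.mp4 |
    Select-String 'frame=(\d+)' | ForEach-Object {
        # Clamp to 100 in case the frame estimate is slightly off.
        $pct = [Math]::Min(100, [int]([int]$_.Matches.Groups[1].Value * 100 / $maxFrames))
        $bar = ('#' * $pct) + ('-' * (100 - $pct))
        # `r returns the cursor to the start of the line so it updates in place.
        Write-Host -NoNewline "`r$pct% complete |$bar|"
    }
Write-Host ''   # finish with a newline once the conversion is done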

How can I show output of a command in real time using Swift?

None of the solutions I've found so far work with a find command like the one below. For some reason the output doesn't get flushed immediately, even with line separators '\n'.
Example for command: find ~/ ! -path '/Users/username//Library/*' -type f -size +2G
I just want to get real-time output from that command. The output can be seen in the terminal in real time, but with all the methods I've tried (like the ones mentioned above) it waits until the end of execution and then flushes all the output.

Perl script to copy logs with timestamp for every hour and paste into different file

First of all, I'm very new to programming, so I would need your help writing a Perl script to do the following on Windows.
I have a big log file (1 GB) with timestamps, and it's difficult to read because it takes a long time to open. My requirement is to copy one hour of logs from the big log file into another file, then copy the next hour of data into a different file (so we will have 24 files for a day). The next day the data in these files needs to be overwritten, or the files deleted and recreated.
Sample log :
09092016-00:02:00,..................
09092016-00:02:08,..................
09092016-00:02:15,..................
09092016-00:02:18,..................
Please help me with this and thanks for your help in advance.
Thanks,
A simpler solution would be to use the split command to break the file into manageable pieces:
split -l 1000 logfile logpart.
This will split your logfile into smaller files (logpart.aa, logpart.ab, ...) of 1000 lines each.
You can then use grep to find which of those files contain the day you need:
grep -l 09092016 logpart.*
for example:
logfile="./log"
while read -r d m y h; do
grep "^$d$m$y-$h" "$logfile" > "partial-${y}${m}{$d}-${h}.log"
done < <(sed -n 's/\(..\)\(..\)\(....\)-\(..\)\(.*\)/\1 \2 \3 \4/p' "$logfile" | sort -u)
Easy, but not efficient: it reads the whole big logfile 25 times for the split (once to gather the distinct ddmmyyyy-hh prefixes in the log, and once more for every date-hour found).
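Since the question mentions Windows, here is a rough PowerShell sketch of the same per-hour split (not the requested Perl, and assuming every line starts with a ddmmyyyy-hh:mm:ss timestamp as in the sample); it streams the file once instead of re-reading it for every hour:
$logfile = '.\log'
Get-Content $logfile | ForEach-Object {
    # Take the ddmmyyyy date and the hour from the start of each line.
    if ($_ -match '^(\d{8})-(\d{2}):') {
        $_ | Add-Content -Path ('partial-{0}-{1}.log' -f $Matches[1], $Matches[2])
    }
}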

Is there a way to tell if a default stream is open?

There is a line in a library that I can't take out:
put oResults format "x(80)" skip.
I have a program that calls the library but doesn't have a default output destination, so this line errors out.
I know I can just send output somewhere in my program, but I want to fix it so you don't have to have an output. Maybe with the SEEK function?
EDIT: 10.2B
I only get the error on Unix.
In a Unix environment this line:
put oResults format "x(80)" skip.
errors out, but if you guard it with:
if seek(output) <> ? then
put oResults format "x(80)" skip.
it doesn't error.
You are running in batch mode. You should always be redirecting your output at the OS level when you run in batch mode. Something like this:
bpro -p test.p > errors.out 2>&1
Not redirecting output will pretty much always lead to the error that you are seeing.
If you are embedding the bpro, mbpro or _progres -b (or whatever) command in a script that needs to show that output or otherwise work with it, you would typically use "cat" or "tail -f" on the output file.

Can kdb read from a named pipe?

I hope I'm doing something wrong, but it seems like kdb can't read data from named pipes (at least on Solaris). It blocks until they're written to but then returns none of the data that was written.
I can create a text file:
$ echo Mary had a little lamb > lamb.txt
and kdb will happily read it:
q) read0 `:/tmp/lamb.txt
enlist "Mary had a little lamb"
I can create a named pipe:
$ mkfifo lamb.pipe
and trying to read from it:
q) read0 `:/tmp/lamb.pipe
will cause kdb to block. Writing to the pipe:
$ cat lamb.txt > lamb.pipe
will cause kdb to return the empty list:
()
Can kdb read from named pipes? Should I just give up? I don't think it's a permissions thing (I tried setting -m 777 on my mkfifo command but that made no difference).
As of release kdb+ v3.4, q has support for named pipes. Depending on whether you want to implement a streaming algorithm or just read from the pipe, use either .Q.fps or read1 on the fifo:
To implement streaming you can do something like:
q).Q.fps[0N!]`:lamb.pipe
Then $ cat lamb.txt > lamb.pipe
will print
,"Mary had a little lamb"
in your q session. More meaningful algorithms can be implemented by replacing 0N! with an appropriate function.
To read the contents of your file into a variable do:
q)h:hopen`:fifo://lamb.pipe
q)myText: `char$read1(h)
q)myText
"Mary had a little lamb\n"
See the kdb+ documentation on named pipes for more.
When read0 fails, you can frequently fake it with system"cat ...". (I found this originally when trying to read stuff from /proc, which also doesn't cooperate with read0.)
q)system"cat /tmp/lamb.pipe"
<blocks until you cat into the pipe in the other window>
"Mary had a little lamb"
q)
Just be aware there's a reasonably high overhead (as such things go in q) for invoking system: it spawns a whole shell process just to run whatever your command is.
You might also be able to do it directly with a custom C extension, probably calling read(2) directly.
The source for read0 is not available to see what it is doing under the hood but, as far as I can tell, it expects a finite stream rather than a continuous one, so it will block until it receives EOF.
Streaming from a pipe is supported from v3.4.
Detailed steps:
1. Remove any existing pipe file with the same name:
rm -f /path/dataPipeFileName
2. Create the named pipe:
mkfifo /path/dataPipeFileName
3. Feed data into the pipe by running the producing command in the background (from the shell, or via system from within q):
command_to_fetch_data > /path/dataPipeFileName &
4. Connect to the pipe from kdb with .Q.fps:
q).Q.fps[0N!]`$":/path/",dataPipeFileName;
Reference:
.Q.fps (streaming algorithm)
Syntax: .Q.fps[x;y], where x is a unary function and y is a filepath.
.Q.fps is .Q.fs for pipes (since V3.4): it reads conveniently sized lumps of complete "\n"-delimited records from a pipe and applies a function to each record. This enables you to implement a streaming algorithm to convert a large CSV file into an on-disk kdb+ database without holding the data in memory all at once.