I'm working on a video-processing script which has to process large video files (100-500 GB) using ffmpeg.
The video file contains 9 streams:
1 video stream and 8 audio streams.
Now I'd like to calculate the SHA-256 hash for each stream. This works fine for a single stream, like this:
ffmpeg -i /my/video/file -map 0:v -f hash -hash sha256 -
This generates the SHA-256 hash of stream 0 (the video stream).
I could loop that command over all streams, but that would mean reading and processing the file 9 times.
Is there a way to read the file once and hash all 9 bitstreams in parallel?
I'd like to avoid reading that file from disk again and again.
Using a recent* git version of ffmpeg, run
ffmpeg -i in -map 0 -f streamhash -hash sha256 -
This will print one hash per stream, in the form:
0,v,SHA256=84dd5b99e1b5fa8877e3365d1a24056ae37c7b3e17a7ab314ec33dbd5034687d
1,a,SHA256=c2ac1a155d451405dbedb0a999e801676e45fb2d17c8025da7c035cc1e8fff92
*(built after 20 Sep 2019)
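For comparison, the per-stream loop described in the question would read the input once per stream. A rough sketch, assuming the 9 streams are indexed 0 to 8 and using the question's placeholder path:

# Nine full reads of a 100-500 GB file; this is what streamhash avoids
for i in $(seq 0 8); do
  ffmpeg -nostdin -i /my/video/file -map "0:$i" -f hash -hash sha256 -
done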
I am successfully using ffmpeg via PowerShell to compress video files; however, I can't get the compression to happen in a single location. I only succeed when I use separate input and output paths.
For example, this command will be successful:
ffmpeg -y -i \\path\$x -vf scale=1920:1080 \\diff_path\$x
this will do nothing, or will corrupt the file:
ffmpeg -y -i \\path\$x -vf scale=1920:1080 \\path\$x
I think I understand why this doesn't work, but I'm having a hard time finding a solution. I want the script to take a file and compress it in its current location, leaving only a single compressed video file.
Thanks all
Not possible. Not the answer you want, but FFmpeg is not able to perform in-place file editing, which means it has to make a new output file.
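The usual workaround is to encode to a temporary file and replace the original only after the encode succeeds. A minimal sketch in shell syntax, with placeholder file names:

# Encode to a temp name, then move it over the original on success
ffmpeg -y -i "input.mp4" -vf scale=1920:1080 "input.tmp.mp4" \
  && mv "input.tmp.mp4" "input.mp4"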
I have a list of .JPG files on a Mac. I want to export them to a format taking less than 500 kilobytes per image.
I know how to do that using the Preview application one image at a time, but I want to be able to do the same in batch, meaning on several files at once. Is there a command-line way to do it, so I could write a script and run it in the terminal?
Or some other way that I could use?
Here is a command-line example using convert (from ImageMagick; see brew info imagemagick), converting all *.jpg images in a directory to .png:
$ for i in *.jpg; do
convert "$i" "${i%.jpg}.png"
done
To test first (a dry run), you can use echo in place of the command:
$ for i in *.jpg; do
echo "$i" "${i%.jpg}.png"
done
This searches the directory for files with the .jpg extension, then runs convert with the file name $i as input and, as output, the same file name with the extension replaced by .png. The replacement is done using:
"${i%.jpg}.png"
The double quotes are there in case a file name contains spaces; see shell parameter expansion for more details.
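For instance, with a hypothetical file name containing a space:

$ i="holiday photo.jpg"
$ echo "${i%.jpg}.png"
holiday photo.png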
For example, to just change the quality of the file you could use:
convert "$i" -quality 80% "${i%.jpg}-new.jpg"
Or if no need to keep the original:
mogrify -quality 80% *.jpg
The main difference is that convert tends to be used for working on individual images, whereas mogrify is for batch-processing multiple files.
Install ImageMagick. (Really, it's lightweight and amazing.) It's nice to install using Homebrew. Then...
Open terminal.
cd [FilepathWithImages] && mogrify -define jpeg:extent=60kb -resize 400 *.JPG
Wait until the process is complete (may take a few minutes if you have many images)
To check file sizes, try du -sh * to see the size of each file in the directory you're in.
NOTE: the case of the pattern must match your file names; *.JPG only matches files whose extension is uppercase.
How this works:
cd [yourfilepath] will navigate to the directory you want to be in
&& is used for chaining commands
mogrify is used when you want to keep the same filename
-define jpeg:extent=60kb sets the maximum filesize to 60kb
-resize 400 sets the width to 400 pixels (the height scales to preserve the aspect ratio)
*.JPG is for all files in the directory you're in.
There are many additional commands you can use with imagemagick convert and mogrify. After installing it, you can use man mogrify to see the commands you can chain to it.
According to the docs, "Restrict the maximum JPEG file size, for example -define jpeg:extent=400KB. The JPEG encoder will search for the highest compression quality level that results in an output file that does not exceed the value. The -quality option also will be respected starting with version 6.9.2-5. Between 6.9.1-0 and 6.9.2-4, add -quality 100 in order for the jpeg:extent to work properly. Prior to 6.9.1-0, the -quality setting was ignored."
Install ImageMagick from Homebrew or MacPorts or from https://imagemagick.org/script/download.php#macosx. Then use mogrify to process all files in a folder using -define jpeg:extent=500KB saving to JPG.
I have two files in folder test1 on my desktop. Processing will put them into folder test2 on my desktop
Before Processing:
mandril.tif 3.22428MB (3.2 MB)
zelda.png 726153B (726 KB)
cd
cd desktop/test1
mogrify -path ../test2 -format jpg -define jpeg:extent=500KB *
After Processing:
mandril.jpg 358570B (359 KB)
zelda.jpg 461810B (462 KB)
See https://imagemagick.org/Usage/basics/#mogrify
The * at the end means process all files in the folder. If you want to restrict it to JPG files only, change it to *.jpg. The -format jpg option means you intend the output to be JPG.
DISCLAIMER: BE CAREFUL BECAUSE THE FOLLOWING SOLUTION IS A "DESTRUCTIVE" COMMAND, FILES ARE REPLACED WITH LOWER QUALITY DIRECTLY
Now that you have read my disclaimer, I recommend getting cwebp, which you can download here.
You will also need parallel (sudo apt-get install -y parallel). I then came up with the following script:
parallel cwebp {} -resize 0 640 -m 6 -sns 100 -q 80 -preset photo -segments 4 -f 100 -o {} ::: *.jpg && \
find . -name "*.jpg" | parallel 'f={}; mv -- "$f" "${f:0:-3}webp"'
640 is the resulting image height in pixels, and the 0 before it means the width will adapt to preserve the aspect ratio.
I reduced the quality to 80% (-q 80); you will not notice much difference.
The second line finds all the files that have been converted but still have the wrong extension (.jpg); it removes the last 3 characters (jpg) and adds webp instead.
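In case that substring expansion is unfamiliar, a quick illustration (hypothetical file name; bash 4.2 or later is assumed for the negative length):

$ f="image.jpg"
$ echo "${f:0:-3}webp"
image.webp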
I went from 5 MB to about 50 KB per file (the .jpg images were 4000x4000 pixels), saving 20 GB of storage. I hope you enjoy it!
If you don't want to bother with the WebP format, you can use the following instead (you may need to install ImageMagick):
parallel convert {} -resize x640 -sampling-factor 4:2:0 -strip -quality 85 \
-interlace JPEG -colorspace RGB -define jpeg:dct-method=float {} ::: *.jpg
I would like an FFmpeg command that will create a file containing only the audio frames/packets from a file, without decompressing them.
Example:
ffmpeg -i in.mp3 --no-decompress out
Something like this would be fantastic.
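As far as I know there is no --no-decompress flag, but stream copy does what this describes: it copies packets to the output without decoding them. A sketch:

# Copy the audio packets as-is (no decode/re-encode); -vn drops any video or cover-art stream
ffmpeg -i in.mp3 -vn -c:a copy out.mp3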
Does anyone know how to concatenate two AAC audio files into one? I can do this with MP3 using ffmpeg, but no luck with files that have an .m4a extension.
I tried the following with no luck: ffmpeg -i concat:"file1.m4a|file2.m4a" -c copy output.m4a
Just use cat:
cat file1.m4a file2.m4a >output.m4a
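One caveat: .m4a is an MP4 container, and byte-level concatenation usually produces a file players reject; cat is only reliable for raw ADTS .aac streams. For .m4a, FFmpeg's concat demuxer is the usual route. A sketch with placeholder names:

# Build a list of inputs, then concatenate without re-encoding
printf "file '%s'\n" file1.m4a file2.m4a > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.m4a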
I am trying to parse large pcap files with libpcap, but there is a file-size limitation, so my files are split at 2 GB. I have 10 files of 2 GB each and I want to parse them in one shot. Is there a way to feed this data to an interface sequentially (each file separately) so that libpcap can parse them all in the same run?
I am not aware of any tools that will allow you to replay more than one file at a time.
However, if you have the disk space, you can use mergecap to merge the ten files into a single file and then replay that.
Mergecap supports merging the packets either in chronological order of each packet's timestamp in each file, or ignoring the timestamps and performing what amounts to a packet version of cat: writing the contents of the first file to the output, then the next input file, and so on.
Mergecap is part of the Wireshark distribution.
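For reference, a sketch of the mergecap invocation (file names are placeholders): -w names the output file, and -a appends the inputs in the order given instead of merging by timestamp:

# Merge the captures by packet timestamp (the default)
mergecap -w merged.pcap capture-1.pcap capture-2.pcap
# Or keep file order and ignore timestamps (a packet-level cat)
mergecap -a -w merged.pcap capture-1.pcap capture-2.pcap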
I had multiple 2 GB pcap files. I used the following one-liner to go through each pcap file sequentially, applying a display filter. This worked without merging the pcap files (avoiding extra disk space and CPU):
for i in /mnt/tmp1/tmp1-pcap-ens1f1-tcpdump* ; do tcpdump -nn -r "$i" host 8.8.8.8 and tcp ; done
Explanation:
for # a shell for loop over each matching file
/mnt/tmp1/tmp1-pcap-ens1f1-tcpdump* # path to the files, with * as a wildcard
do tcpdump -nn -r "$i" host 8.8.8.8 and tcp # tcpdump without resolving IP addresses or port numbers, reading each file in sequence
done # end of the loop
Note: Please remember to adjust the file path and display filter according to your needs.