I have a very large JPEG image (10800x7497) that I want to resize down to 50% of its resolution. I have already reduced the image to 64 colors using -define jpeg:colors=64, but when I try resizing the image, ImageMagick takes very long to process it, probably 20 minutes or more (I stopped the process when no output image had been saved within 20 minutes, although Task Manager showed ImageMagick still processing). How can I speed up the resizing of this large image? I have tried the following commands, but they still take too long:
magick -define jpeg:size=10800x7500 "image1.jpg" -resize 5400x3750 "image1-resized.jpg"
magick -define jpeg:size=5400x3750 "image1.jpg" -resize 5400x3750 "image1-resized.jpg"
magick -depth 5 "image1.jpg" -resize 50% "image1-resized.jpg"
It sounds like your ImageMagick is swapping to disc. You probably need to adjust your policy.xml.
This is the file containing the limits for the amount of memory and disc magick is allowed to use. The magick docs have some notes, but check /etc/ImageMagick-7/policy.xml and look for lines like:
<policy domain="resource" name="memory" value="256MiB"/>
256MiB of memory is far too small -- change it to something like:
<policy domain="resource" name="memory" value="8GiB"/>
You'll see quite a few other similar lines, adjust them to fit your hardware.
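You can check which limits are actually in effect with something like this (the exact binary name depends on your install; with ImageMagick 7 on Windows it's the magick front end):
# print the resource limits ImageMagick is actually using
identify -list resource
# or, via the ImageMagick 7 front end
magick identify -list resource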
I would simply use resize. The -define hint will turn on JPEG shrink-on-load, which will lose quality and will probably cause noticeable moiré fringing. Plus, for a 50% shrink there's no speed benefit.
$ identify big.jpg
big.jpg JPEG 10800x7497 10800x7497+0+0 8-bit sRGB 13.4733MiB 0.000u 0:00.000
$ /usr/bin/time -f %M:%e convert big.jpg -resize 5400x3750 x.jpg
1130340:0.92
So 0.92s and 1.1GB of memory with ImageMagick 6. ImageMagick 7 is usually about half the speed and twice the memory use, so I'd expect about 2s and 2GB.
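For comparison, a rough sketch of timing the equivalent ImageMagick 7 invocation (I haven't measured these exact numbers, so treat the 2s / 2GB estimate above as a guess):
# same 50% resize with IM7, timed the same way
/usr/bin/time -f %M:%e magick big.jpg -resize 5400x3750 x.jpg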
As Mark says, vipsthumbnail is likely to be quicker. I see:
$ /usr/bin/time -f %M:%e vipsthumbnail big.jpg -s 5400 -o x.jpg
295460:0.69
So 300MB of memory and 0.7s. This PC has a stupid number of cores (32!) and you really can't get that much parallelism out of basic JPEG shrinking, so you see a useful speedup and lower memory use if you turn the number of threads down:
$ /usr/bin/time -f %M:%e vipsthumbnail big.jpg -s 5400 -o x.jpg --vips-concurrency=3
77744:0.43
78MB of memory and 0.43s.
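If you'd rather not pass the flag each time, libvips should also honour the VIPS_CONCURRENCY environment variable, so something like this ought to be equivalent:
# cap libvips at 3 worker threads via the environment
VIPS_CONCURRENCY=3 /usr/bin/time -f %M:%e vipsthumbnail big.jpg -s 5400 -o x.jpg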
I would like to have the best compression ratio of a sequence of similar grayscale images. I note that I need an absolute lossless solution (meaning I should be able to check it with an hash algorithm).
What I tried
I had the idea to convert my images into a video because there is a chronology between the images. The encoding algorithm would exploit the fact that not everything in the scene changes between two pictures. So I tried using ffmpeg, but I ran into several problems due to the sRGB -> YUV colorspace conversion. I didn't understand all of it, but it seems like a nightmare.
Example of code used :
ffmpeg -i %04d.png -c:v libx265 -crf 0 video.mp4 #To convert into video
ffmpeg -i video.mp4 %04d.png #To recover images
My second idea was to do it by hand with ImageMagick. So I took the first image as a reference and created a new image that is the difference between image 1 and image 2. Then I tried to add the difference image to image 1 (trying to recover image 2), but it didn't work. Judging by the size of the recreated picture, it's clear that the image is not the same. I think there was an unwanted compression during the process.
Example of code used :
composite -compose difference 0001.png 0002.png diff.png #To create the diff image
composite -compose difference 0001.png diff.png recover.png #To recover image 2
Do you have any idea about my problem?
And why don't I manage to do a perfect recovery with ImageMagick?
Thanks ;)
Here are 20 samples images : https://cloud.damien.gdn/d/f1a7954a557441989432/
I tried a few ideas with your dataset and summarise what I found below. My calculations and percentages assume that 578kB is a representative image size.
Method 1 - crush - 69%
I just ran pngcrush on one of your images like this:
pngcrush -bruteforce input.png crushed.png
The output size was 400kB, so your image is now only taking 69% of the original space on disk.
Method 2 - rotate and crush - 34%
I rotated your images through 90 degrees and crushed the result:
magick input.png -rotate 90 result.png
pngcrush -bruteforce result.png crushed.png
The rotated crushed image takes 34% of the original space on disk.
Method 3 - rotate and difference - 24%
I rotated your images with ImageMagick, then differenced two adjacent images in the series and saved the result. I then "pngcrushed" that which resulted in 142kB, or 24% of the original space.
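The exact commands weren't kept, but a plausible reconstruction of Method 3 (using the sample file names from the question) would be:
# rotate two adjacent images, difference them, then crush the difference
magick 0001.png -rotate 90 r1.png
magick 0002.png -rotate 90 r2.png
magick r1.png r2.png -compose difference -composite diff.png
pngcrush -bruteforce diff.png diff_crushed.png
# note: an absolute difference is not reversible on its own, as the question found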
Method 4 - combined to RGB - 28%
I combined three of your single channel images into a 3-channel RGB image and pngcrushed the result:
magick 000[123].png -combine result.png
pngcrush -bruteforce result.png crushed.png
That resulted in a 490kB file containing 3 images, i.e. 163kB per image or 28% of the original size.
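If you go this route you will eventually need to split the channels apart again; ImageMagick's -separate should do that, something like:
# split the 3-channel file back into three grayscale images
magick crushed.png -separate recovered_%d.png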
I suspect video with "motion" estimation/detection would yield the best results if you are able to do it losslessly.
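If you want to try the video route losslessly, a sketch, assuming the sources are 8-bit grayscale PNGs: use a genuinely lossless codec such as FFV1 and keep a grayscale pixel format, which sidesteps the sRGB -> YUV conversion you ran into, then verify the round trip by hashing the decoded pixel data:
# encode losslessly with FFV1, staying in 8-bit grayscale (no YUV conversion)
ffmpeg -i %04d.png -c:v ffv1 -level 3 -pix_fmt gray video.mkv
# decode back to PNGs
ffmpeg -i video.mkv recovered_%04d.png
# compare pixel-data signatures - they should match if the round trip is lossless
identify -format "%f %#\n" 0001.png recovered_0001.png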
You might get some gain out of MNG, which is intended for lossless animation compression. You can use libmng to try it out.
I am searching for a faster way to blur an image than to use the GaussianBlur.
The solution I am looking for can be a command line solution, but I prefer code in perl notation.
Actually, we use the Perl image magick API to blur images:
# $image is our Perl object holding a imagemagick perl image
# level is a natural number between 1 and 10
$image->GaussianBlur('x' . $level);
This works fine, but as the level gets higher, the amount of time it consumes seems to grow exponentially.
Question: How can I improve the time used for the bluring operation?
Is there another faster approach to blur images?
I found that the suggested method of resizing the image to imitate blur makes the output look very pixelated for very large values of sigma, like 25 or more. So I finally came to the idea of downscale-blur-enlarge, which gives a very nice result (almost indistinguishable from a simple blur with a large sigma):
# plain slow blur
convert sample.jpg -blur 0x25 blurred_slow.jpg
# much faster
convert sample.jpg -scale 10% -blur 0x2.5 -resize 1000% blurred_fast.jpg
On my i5 2.7GHz it shows up to a 10x speed-up.
The documentation speaks of the difference between Blur and GaussianBlur.
There has been some confusion as to which operator, "-blur" or "-gaussian-blur", is better for blurring images. First of all, "-blur" is faster, but it does this using a two-stage technique: first in one axis, then in the other. The "-gaussian-blur" operator, on the other hand, is more mathematically correct as it blurs in all directions simultaneously. The speed cost between the two can be enormous, by a factor of 10 or more, depending on the amount of blurring involved.
[...]
In summary, the two operators are slightly different, but only minimally. As "-blur" is much faster, use it. I do in just about all the examples involving blurring.
That would simply be:
$image->Blur( 'x' . $level );
But the Perl ImageMagick documentation has the same description for both Blur and GaussianBlur. I can't try it right now; you would have to benchmark it yourself.
Blur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
GaussianBlur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
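If you do want to benchmark the two operators from the command line, a rough sketch using ImageMagick's built-in rose: image (upscaled so the timings are measurable) could be:
# crude timing comparison; exact numbers depend on your build and hardware
time convert rose: -resize 2000x -blur 0x10 blur.png
time convert rose: -resize 2000x -gaussian-blur 0x10 gaussian.png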
An alternative that the documentation also lists is resizing the image to be very tiny, and then enlarging again.
Using large sigma values for image blurring is very slow. But one technique can be used to speed up this process. This however is only a rough method and could use some mathematical rigor to improve results. Essentially, the reason large blurs are slow is because you need a large window or 'kernel' to merge lots of pixels together, for each and every pixel in the image. However, resize (making the image smaller) does the same thing but generates fewer pixels in the process. The technique is basically: shrink the image, then enlarge it again to generate the heavily blurred result. The Gaussian filter is especially useful for this as you can directly specify a Gaussian sigma define.
The example command line code is this:
convert rose: -blur 0x5 rose_blur_5.png
convert rose: -filter Gaussian -resize 50% \
-define filter:sigma=2.5 -resize 200% rose_resize_5.png
Not sure if I can still help the OP with this, but I recently tried the same thing for a blurred lock-screen picture.
I found that omitting the -blur part saves even more computation time and still delivers great results for a 4K picture:
convert in.png -scale 2.5% -resize 4000% out.png
# real: 0.174s user: 0.144s size: 1.2MiB
convert in.png -scale 10% -blur 0x2.5 -resize 1000% out.png
# real: 0.136s user: 2.117s size: 1.2MiB
convert in.png -blur 0x25 out.png
# real: 2.425s user: 21.408s size: 1KiB
However, I couldn't go lower than 2.5% with a 3840x2160 source; below that the output comes back at a slightly different pixel size, because the percentages no longer round-trip exactly. I guess the minimum percentage differs for pictures of other sizes.
It should be noted that the resulting file sizes differ noticeably!
Unable to convert a JPEG image into a 300 DPI PNG image using ImageMagick.
After conversion the PNG image is 72 DPI only. I'm using ImageMagick 6.9.0-0 Q16 x86 and Ghostscript v9.15.
Below is the line I use in my Perl script:
system("\"$imagemagick\" -set units PixelsPerInch -density 300 \"$jpg\" \"$png\"");
Adjusting the units & density will not alter the underlying image data, but updates the meta info for rendering libraries. Important for vector to raster, but not very useful for raster to raster. To adjust the DPI of an image, use the -resample operation.
convert source.jpg -resample 300 out.png
You can verify the DPI resolution with the following:
identify -format "%[resolution.x] %[resolution.y]\n" out.png
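Conversely, if you only want to stamp 300 DPI into the metadata without resampling any pixels, something like this should do it (the pixel dimensions stay the same):
# set the DPI metadata only; the image data is untouched
convert source.jpg -units PixelsPerInch -density 300 out.png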
I'm wondering where the 72dpi is coming from. Assuming you are using X and some kind of Unix, ImageMagick defaults to using the screen resolution (72 dpi). I'm not sure what it does under OSX/XQuartz but it's likely similar. Is your screen resolution set to 72dpi (!?).
I'm with @emcconville and @ikegami - just do this straight from ImageMagick on the command line - passing the right options to be sure.
There are image manipulation modules that you can use from perl without having to resort to system commands as well such as Imager::Transformations, Image::Magick, and GD. Here's how to convert with GD.
perl -MGD -E 'my $imgjpg = GD::Image->newFromJpeg("img.jpg");
open my $imgpng, ">", "img.png" or die; binmode $imgpng; print $imgpng $imgjpg->png();'
With most image manipulation packages the original resolution should be maintained during conversion - though some (including GD) will default to lower color depths (8-bit) unless passed a Truecolor flag.
e.g. GD::Image->newFromJpeg("img.jpg", 1);
I'd like to batch process several folders of 1000's of images to downsize any images with a long side greater than 1440 pixels down to 1440 while ignoring any files that are already smaller than that.
I was looking at sips and can't tell if it skips upsizing by default or if there is a way to filter it using getProperty perhaps? (I'm not the best at deciphering CLI options from man pages).
I was thinking maybe I could use a find or sips query first and then pipe it into another sips call to resize; I'm not sure exactly how, though, and I don't think find can search by image size.
(also open to something other than sips to handle this, just seemed the quickest way)
Using Spotlight to filter the results to images larger than a particular size works perfectly:
mdfind -0 -onlyin . "kMDItemPixelHeight > 1440 || kMDItemPixelWidth > 1440" | xargs -0 sips -Z 1440
This finds images recursively from the current directory with width OR height greater than 1440 pixels and resizes them down to 1440. Files under 1440 get left alone.
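If you just want to see the dimensions sips itself reports (the getProperty route you mentioned), something like this should work:
# print the pixel dimensions sips sees for one file
sips -g pixelWidth -g pixelHeight image.jpg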
Currently I'm trying to use Perl/ImageMagick and/or Ghostscript to convert scanned text documents stored as TIFFs into an 8.5″×11″ (ANSI A “Letter” size) PDF file.
I've tried many of the ImageMagick filters with resize and still find that some files that were perfectly legible before are now illegible. Often these images are at 72 dpi, and when converted to 8.5″×11″ the result is something like 612×792 pixels. The original was 1700×2200; as you can see, quite a few pixels are lost in the resize.
Should I be using something else besides resize? Could it be something like ImageMagick is reporting the image is 72 dpi when it's really something like 200 dpi? Would re-sampling the image into the highest dpi that would fit in the 8.5″×11″ area help?
Does anyone have any other options to ultimately create a PDF file with all pages being 8.5″×11″?
(Mantra: 'Use the right tool for the job...')
You possibly shouldn't use ImageMagick for the job, but rather LibTIFF's tiff2pdf commandline utility:
tiff2pdf \
-z \
-o output.pdf \
-p letter \
-F \
input.tiff
-z is for (lossless) Zip/Flate compression.
-o defines the output filename.
-p sets the media size.
-F fills the page.
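If you do want to stay with ImageMagick, a minimal sketch, assuming your 1700×2200 scans are really 200 dpi (1700 / 8.5 = 200): set the density metadata instead of resizing the pixels, and the PDF page comes out at 8.5″×11″ with no pixels lost.
# keep every pixel, just declare 200 dpi so the page is 8.5x11 inches
convert input.tiff -units PixelsPerInch -density 200 output.pdf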