Lossless compression of a sequence of similar grayscale images - encoding

I would like to get the best possible compression ratio for a sequence of similar grayscale images. Note that I need an absolutely lossless solution (meaning I should be able to verify it with a hash algorithm).
What I tried
I had the idea of converting my images into a video, because there is a chronology between the images. The encoder would then exploit the fact that not all of the scene changes between two pictures. So I tried ffmpeg, but I ran into several problems caused by the sRGB -> YUV colorspace conversion. I didn't understand all of it, but it seems like a nightmare.
Example of the code used:
ffmpeg -i %04d.png -c:v libx265 -crf 0 video.mp4 #To convert into video
ffmpeg -i video.mp4 %04d.png #To recover images
My second idea was to do it by hand with ImageMagick. So I took the first image as a reference and created a new image that is the difference between image 1 and image 2. Then I tried to add the difference image to image 1 (trying to recover image 2), but it didn't work. Judging by the size of the recreated picture, it's clear that the image is not the same. I think there was some unwanted compression during the process.
Example of the code used:
composite -compose difference 0001.png 0002.png diff.png #To create the diff image
composite -compose difference 0001.png diff.png recover.png #To recover image 2
Do you have any idea about my problem?
And why can't I manage a perfect recovery with ImageMagick?
Thanks ;)
Here are 20 samples images : https://cloud.damien.gdn/d/f1a7954a557441989432/

I tried a few ideas with your dataset and summarise what I found below. My calculations and percentages assume that 578kB is a representative image size.
Method 1 - crush - 69%
I just ran pngcrush on one of your images like this:
pngcrush -bruteforce input.png crushed.png
The output size was 400kB, so your image is now only taking 69% of the original space on disk.
Method 2 - rotate and crush - 34%
I rotated your images through 90 degrees and crushed the result:
magick input.png -rotate 90 result.png
pngcrush -bruteforce result.png crushed.png
The rotated crushed image takes 34% of the original space on disk.
Method 3 - rotate and difference - 24%
I rotated your images with ImageMagick, then differenced two adjacent images in the series and saved the result. I then "pngcrushed" that which resulted in 142kB, or 24% of the original space.
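For completeness, here is one way to build a reversible difference image. This is only a sketch, not necessarily the exact command used above for Method 3. The point is to use a wrapping (modulus) subtract rather than -compose difference, which stores |a-b| and so cannot be undone. Worth double-checking the source/destination order of the compose operators and verifying bit-exactness on your own data; the final identify compares pixel signatures rather than file bytes:
# wrapping difference between two adjacent frames (sketch; swap the two inputs
# if the recovered image comes out wrong on your build)
magick 0002.png 0001.png -compose ModulusSubtract -composite diff.png
# undo it: add the difference back onto image 1 to get image 2 again
magick diff.png 0001.png -compose ModulusAdd -composite recover.png
# compare pixel data, not file bytes
magick identify -format "%#\n" 0002.png recover.png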
Method 4 - combine to RGB - 28%
I combined three of your single channel images into a 3-channel RGB image and pngcrushed the result:
magick 000[123].png -combine result.png
pngcrush -bruteforce result.png crushed.png
That resulted in a 490kB file containing 3 images, i.e. 163kB per image or 28% of the original size.
I suspect video with "motion" estimation/detection would yield the best results if you are able to do it losslessly.
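If you do want to try that, below is a sketch of one lossless route (not tested on your data). As far as I know, -crf 0 is not lossless with libx265 (it needs lossless=1), and the PNG -> YUV 4:2:0 conversion loses data anyway; encoding with FFV1 and a single-channel gray pixel format avoids both problems, assuming your PNGs really are 8-bit single-channel grayscale. The framemd5 output compares decoded pixels rather than file bytes, since the recovered PNGs may be byte-different even when the pixels are identical.
# encode the PNG sequence with the lossless FFV1 codec, keeping a gray pixel
# format so no colorspace conversion takes place
ffmpeg -i %04d.png -pix_fmt gray -c:v ffv1 video.mkv
# decode back to PNG
ffmpeg -i video.mkv recovered_%04d.png
# compare decoded pixel data of an original and a recovered frame
ffmpeg -i 0001.png -f framemd5 -
ffmpeg -i recovered_0001.png -f framemd5 -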

You might get some gain out of MNG, which is intended for lossless animation compression. You can use libmng to try it out.
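If your ImageMagick build was compiled with the MNG coder, a quick (untested) way to experiment without touching libmng directly might be:
# sketch: pack the frames into an MNG and unpack them again for checking
magick 00*.png sequence.mng
magick sequence.mng frame_%04d.png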

Related

Speed up reducing size of large JPEG image using ImageMagick

I have a very large JPEG image (10800x7497) that I want to resize down to 50% of its resolution. I have already reduced the image to 64 colors using -define jpeg:colors=64, but when I try resizing it, ImageMagick takes very long to process the image, probably 20 minutes or more (I stopped the process when no output image had been saved within 20 minutes, although Task Manager showed ImageMagick still processing the image). How can I speed up the resizing of this large image? I have tried the following commands, but they still take too long:
magick -define jpeg:size=10800x7500 "image1.jpg" -resize 5400x3750 "image1-resized.jpg"
magick -define jpeg:size=5400x3750 "image1.jpg" -resize 5400x3750 "image1-resized.jpg"
magick -depth 5 "image1.jpg" -resize 50% "image1-resized.jpg"
It sounds like your ImageMagick is swapping to disc. You probably need to adjust your policy.xml.
This is the file containing the limits on the amount of memory and disc space magick is allowed to use. The magick docs have some notes, but check /etc/ImageMagick-7/policy.xml and look for lines like e.g.:
<policy domain="resource" name="memory" value="256MiB"/>
256MiB of memory is far too small -- change it to something like:
<policy domain="resource" name="memory" value="8GiB"/>
You'll see quite a few other similar lines, adjust them to fit your hardware.
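After editing, you can check which limits ImageMagick is actually applying with:
# prints the effective memory/map/disk/area/thread limits
identify -list resource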
I would simply use resize. The -define hint will turn on jpeg shrink-on-load, which will lose quality and cause (probably) noticeable moire fringing. Plus for a 50% shrink there's no speed benefit.
$ identify big.jpg
big.jpg JPEG 10800x7497 10800x7497+0+0 8-bit sRGB 13.4733MiB 0.000u 0:00.000
$ /usr/bin/time -f %M:%e convert big.jpg -resize 5400x3750 x.jpg
1130340:0.92
So 0.92s and 1.1GB of memory with ImageMagick 6. ImageMagick 7 is usually about half the speed and twice the memory use, so I'd expect about 2s and 2GB.
As Mark says, vipsthumbnail is likely to be quicker. I see:
$ /usr/bin/time -f %M:%e vipsthumbnail big.jpg -s 5400 -o x.jpg
295460:0.69
So 300MB of memory and 0.7s. This PC has a stupid number of cores (32!) and you really can't get that much parallelism out of basic JPEG shrinking, so you see a useful speedup and lower memory use if you turn the number of threads down:
$ /usr/bin/time -f %M:%e vipsthumbnail big.jpg -s 5400 -o x.jpg --vips-concurrency=3
77744:0.43
78MB of memory and 0.43s.

Difference between 'display_aspect_ratio' and 'sample_aspect_ratio' in ffprobe [duplicate]

I am trying to change the dimensions of a video file with FFmpeg.
I want to convert any video file to 480*360.
This is the command that I am using...
ffmpeg -i oldVideo.mp4 -vf scale=480:360 newVideo.mp4
After this command 1280*720 dimensions are converted to 640*360.
I have also attached a video; it will take less than a minute for any experts out there. Is there anything wrong?
You can see it here. (In the video, after 20 seconds, jump directly to 1:35; the rest is just processing time.)
UPDATE :
I found the command from this tutorial
Every video has a Sample Aspect Ratio associated with it. A video player will multiply the video width with this SAR to produce the display width. The height remains the same. So, a 640x720 video with a SAR of 2 will be displayed as 1280x720. The ratio of 1280 to 720 i.e. 16:9 is labelled the Display Aspect Ratio.
The scale filter preserves the input's DAR in the output, so that the output does not look distorted. It does this by adjusting the SAR of the output. The remedy is to reset the SAR after scaling.
ffmpeg -i oldVideo.mp4 -vf scale=480:360,setsar=1 newVideo.mp4
Since the DAR may no longer be the same, the output can look distorted. One way to avoid this is by scaling proportionally and then padding with black to achieve target resolution.
ffmpeg -i oldVideo.mp4 -vf scale=480:360:force_original_aspect_ratio=decrease,pad=480:360:(ow-iw)/2:(oh-ih)/2,setsar=1 newVideo.mp4
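To see the two fields the question title asks about, before and after scaling, ffprobe can print them directly, e.g.:
# show width, height, SAR and DAR of the first video stream
ffprobe -v error -select_streams v:0 \
        -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio \
        -of default=noprint_wrappers=1 newVideo.mp4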

ImageMagick: Looking for a fast way to blur an image

I am searching for a faster way to blur an image than using GaussianBlur.
The solution can be a command-line one, but I would prefer code in Perl notation.
Currently, we use the Perl ImageMagick API to blur images:
# $image is our Perl object holding a imagemagick perl image
# level is a natural number between 1 and 10
$image->GaussianBlur('x' . $level);
This works fine, but the higher the level, the more time it consumes; the time seems to grow exponentially.
Question: How can I improve the time used for the bluring operation?
Is there another faster approach to blur images?
I found that the suggested method of resizing the image to imitate a blur makes the output look very pixelated for very large values of sigma, like 25 or more. So I finally came to the idea of downscale-blur-enlarge, which gives a very nice result (almost indistinguishable from a plain blur with a large sigma):
# plain slow blur
convert sample.jpg -blur 0x25 blurred_slow.jpg
# much faster
convert sample.jpg -scale 10% -blur 0x2.5 -resize 1000% blurred_fast.jpg
On my i5 2.7GHz it shows up to a 10x speed-up.
The documentation speaks of the difference between Blur and GaussianBlur.
There has been some confusion as to which operator, "-blur" or
"-gaussian-blur", is better for blurring images. First of all "-blur"
is faster, but it does this using a two-stage technique: first in one
axis, then in the other. The "-gaussian-blur" operator on the other
hand is more mathematically correct, as it blurs in all directions
simultaneously. The speed cost between the two can be enormous, by a
factor of 10 or more, depending on the amount of blurring involved.
[...]
In summary, the two operators are slightly different, but only
minimally. As "-blur" is much faster, use it. I do in just about all
the examples involving blurring.
That would simply be:
$image->Blur( 'x' . $level );
But the Perl ImageMagick documentation has the same text on both Blur and GaussianBlur (emphasis mine). I can't try now, you would have to benchmark it yourself.
Blur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
GaussianBlur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
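If you want to check the speed difference yourself before touching the Perl code, a rough command-line comparison (using the built-in rose: test image enlarged to a workable size; absolute numbers will vary) could be:
# same sigma through both operators; compare the reported times
time magick rose: -resize 800% -gaussian-blur 0x5 gauss.png
time magick rose: -resize 800% -blur 0x5 separable.png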
An alternative that the documentation also lists is resizing the image to be very tiny, and then enlarging again.
Using large sigma values for image blurring is very slow. But one
technique can be used to speed up this process. This however is only a
rough method and could use some mathematical rigor to improve
results. Essentially, the reason large blurs are slow is because you
need a large window or 'kernel' to merge lots of pixels together, for
each and every pixel in the image. However resize (making the image
smaller) does the same thing but generates fewer pixels in the
process. The technique is basically to shrink the image, then enlarge it
again to generate the heavily blurred result. The Gaussian Filter is
especially useful for this as you can directly specify a Gaussian
Sigma define.
The example command line code is this:
convert rose: -blur 0x5 rose_blur_5.png
convert rose: -filter Gaussian -resize 50% \
-define filter:sigma=2.5 -resize 200% rose_resize_5.png
Not sure if I can still help the OP with this, but I recently tried the same thing for a blurred screen-lock picture.
I found that omitting the -blur part saves even more calculation time and still delivers great results for a 4K picture:
convert in.png -scale 2.5% -resize 4000% out.png
# real: 0.174s user: 0.144s size: 1.2MiB
convert in.png -scale 10% -blur 0x2.5 -resize 1000% out.png
# real: 0.136s user: 2.117s size: 1.2MiB
convert in.png -blur 0x25 out.png
# real: 2.425s user: 21.408s size: 1KiB
However, you couldn't go lower than 2.5% with 3840x2160, or the output comes back at a different size. I guess this minimum percentage differs for pictures of other sizes.
It should be noted that the resulting image sizes differ noticeably!

Effective JPEG compression for HF content?

I have an image with a grayscale background and 2 thin lines (1 pixel wide) drawn in color. I'm trying to use various JPEG compression tables (luminance and chrominance) to get the best possible result while staying under a certain file size.
The grayscale background compresses well and looks decent. The thin vertical and horizontal color lines get mutilated / smeared. The current JPEG algorithm uses 2x2 sub-sampling on the Cb and Cr channels and the chrominance compression table is fairly aggressive (high compression).
Is there any way to embed "BMP"-type data into a JPEG image? Basically, specify a color for specific pixels to be applied after the JPEG is decompressed?
Are there any other ways to clean up how thin color lines get encoded/decoded in JPEG without increasing the overall file size by a lot?
P.S. I'm testing all this stuff in Matlab.
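Not a full answer, but since the smearing described above comes largely from the 2x2 chroma subsampling, one quick experiment (shown here with ImageMagick rather than Matlab, as a sketch) is to write the same image with and without subsampling and compare the lines and the file sizes:
# 2x2 chroma subsampling (4:2:0), as described above
magick input.png -quality 85 -sampling-factor 2x2 out_420.jpg
# no chroma subsampling (4:4:4); thin colour lines usually survive much better,
# at some cost in file size
magick input.png -quality 85 -sampling-factor 1x1 out_444.jpg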

Why a smaller PNG image takes up more space than the original after getting resized by GraphicsMagick

The original PNG image is 800x1200 and takes up about 34K. After the image is resized by GraphicsMagick to 320x480, the resulting image takes up approximately 37K. (For comparison, if the image is resized with Paint on Windows 7, the resulting image is 40K.) What gives? The whole point of resizing an image was to save space. How should GraphicsMagick be used to shrink the image size?
PNG is a lossless format that compresses the image data by first performing a step called prediction and then applying the same algorithm used in zlib. The prediction step is crucial for compressing the file effectively, and it is based on the values of earlier neighboring pixels.
So, suppose you have a large PNG in black & white (by that I really mean only black and white; some people sometimes confuse that with grayscale). Also suppose it is not a tiny checkerboard pattern. In many regions of this image, you will have a relatively large white region, then a relatively large black region, and so on. When the predictor is inside one of these large regions, it has no trouble correctly predicting that the current pixel intensity is exactly equal to the previous one. This makes the data describing your image much easier to compress.
Now, let us downscale this black & white image using some resampling filter other than nearest neighbor (say, Lanczos). This is very likely to turn your black & white image into a grayscale one, which has a much greater intensity range. This potentially makes the predictor's job much harder, and thus the final file size might be larger.
For instance here is a black & white 256x256 PNG image which takes 5440 bytes, a resizing of it (using 3-lobed Lanczos) to 120x120 which now takes 7658 bytes, and another resizing (using nearest neighbor) to 120x120 which occupies 2467 bytes.
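To reproduce that comparison on your own file with GraphicsMagick, a sketch along these lines should do it (filenames are placeholders):
# smooth (Lanczos) downscale - tends to create many intermediate grey levels
gm convert original.png -filter Lanczos -resize 320x480 lanczos.png
# nearest-neighbour downscale - keeps the original set of pixel values
gm convert original.png -filter Point -resize 320x480 point.png
ls -l lanczos.png point.png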
PNG is a compressed format. Sometimes trying to compress an already maximally compressed item actually results in a larger item. So if the 800x1200 image is resized to a smaller size, but the result retains everything that was in the original, and the original is already as small as it can get, you could see this happen. To demonstrate this, try using 7zip to compress some data with ultra compression, then try compressing the compressed file. Often the second compressed file will be larger than the first.
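For example, with the 7-Zip command-line tool (somedata.bin being any reasonably large file):
# first pass with ultra compression
7z a -mx=9 first.7z somedata.bin
# second pass: compress the already-compressed archive
7z a -mx=9 second.7z first.7z
ls -l first.7z second.7z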