JBIG2 encoder giving a lower compression ratio than JBIG1?

I am currently working on the compression of binary images. There is an algorithm that I am trying to test, and I want to compare its compression ratio to that of JBIG2-encoded images.
I want to compute the compression ratio of specific images using my algorithm and compare them to the JBIG1 and JBIG2 standards. For JBIG1 I am using Markus Kuhn's JBIG-KIT, and for JBIG2 compression I am using agl's jbig2enc. However, the size of the encoded JBIG2 file (.jb2) is similar to that of the JBIG1 (.jbg) file. This should not happen, as JBIG2 should give results at least 2-3 times smaller.
I am giving a .pbm image as input to both encoders, and in some cases the JBIG1 encoder even produces a smaller file.
To generate the JBIG2-encoded file I am using the command:
$ jbig2 -s a.pbm > a.jb2
The version currently installed on my computer is jbig2enc 0.28.
For the compression ratio, I am directly using the sizes of the .jbg and .jb2 files.
So, please let me know if I am doing something wrong here.
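For what it's worth, a trivial way to compute the ratios consistently is to compare the file sizes on disk, as in the sketch below (the file names are just placeholders; if you want the ratio against the raw bitmap rather than the .pbm container, subtract the small PBM header first):

#include <stdio.h>
#include <sys/stat.h>

/* Return the size of a file in bytes, or -1 on error. */
static long file_size(const char *path)
{
    struct stat st;
    return stat(path, &st) == 0 ? (long)st.st_size : -1;
}

int main(void)
{
    long pbm = file_size("a.pbm");  /* uncompressed input (placeholder name) */
    long jbg = file_size("a.jbg");  /* JBIG1 output */
    long jb2 = file_size("a.jb2");  /* JBIG2 output */

    if (pbm <= 0 || jbg <= 0 || jb2 <= 0) {
        fprintf(stderr, "missing file\n");
        return 1;
    }
    printf("JBIG1 ratio: %.2f\n", (double)pbm / (double)jbg);
    printf("JBIG2 ratio: %.2f\n", (double)pbm / (double)jb2);
    return 0;
}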

Related

Best practice to compress bitmap with LZ4

I'm packing some image resources for my game, and since this is a typical "compress once, decompress many times" scenario, LZ4 High Compression suits me well (LZ4HC takes longer to compress, but decompresses very fast).
I compressed a bitmap from 7.7MB down to 3.0MB, which looked good to me, until I found that the PNG version is only 1.9MB.
I know that LZ4HC does not reach the ratio that deflate (which is used by PNG) does, but 2.55 vs 4.05 doesn't look right.
I searched and found that before compressing, the PNG format performs a filtering operation. I don't know the details, but it looks like the filtering step rearranges the data so that it suits the compression algorithm better.
So my question is:
Do I need to perform a filtering pass before compressing with LZ4?
If yes, where can I get a library (or code snippet) to perform filtering?
If not, is there any way to make PNG (or another lossless image format) compress slowly but decompress fast?
The simplest filtering in PNG is just taking the difference of subsequent pixels: the first pixel is sent as-is, the next pixel is sent as the difference between that pixel and the previous one, and so on. That would be quite fast, and would provide a good part of the compression gain of filtering.
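For what it's worth, here is a minimal sketch of that idea in C, assuming an 8-bit-per-channel interleaved pixel buffer (the function names are only illustrative): apply a per-row delta in place before handing the buffer to LZ4HC, and undo it after decompression.

#include <stddef.h>
#include <stdint.h>

/* Replace each byte with its difference to the byte `bpp` positions to the
 * left, per row (the first `bpp` bytes of a row are kept as-is). Differences
 * wrap modulo 256, so the step is exactly reversible. */
static void sub_filter(uint8_t *row, size_t row_bytes, size_t bpp)
{
    for (size_t i = row_bytes; i-- > bpp; )
        row[i] = (uint8_t)(row[i] - row[i - bpp]);
}

/* Inverse transform, applied after LZ4 decompression. */
static void sub_unfilter(uint8_t *row, size_t row_bytes, size_t bpp)
{
    for (size_t i = bpp; i < row_bytes; i++)
        row[i] = (uint8_t)(row[i] + row[i - bpp]);
}

/* bpp = bytes per pixel, e.g. 3 for RGB24, 4 for RGBA. */
void filter_image(uint8_t *pixels, size_t width, size_t height, size_t bpp)
{
    for (size_t y = 0; y < height; y++)
        sub_filter(pixels + y * width * bpp, width * bpp, bpp);
    /* The filtered buffer can now be compressed, e.g. with LZ4_compress_HC(). */
}

Because the transform is byte-wise and reversible, it adds almost nothing to decompression time, so the "slow compress, fast decompress" property is preserved.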

Is it safe to compute a hash on an image compressed in a lossless format such as PNG, GIF, etc.?

I was wondering if any lossless image compression format such as PNG comes with some kind of uniqueness guarantee, i.e. that two different compressed binaries always decode to different images.
I want to compute the hash of images that are stored in a lossless compression format and am wondering if computing the hash of the compressed version would be sufficient.
(There are some good reasons to compute the hash on the uncompressed image, but they are out of the scope of my question here.)
No, that's not true for PNG. The compression procedure has many parameters (the filtering type used for each row, the ZLIB compression level and settings), so a single raw image can result in many different PNG files. Even worse, PNG allows ancillary data (chunks) with miscellaneous info to be included (for example, textual comments).
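If what you actually want is a hash that identifies the picture rather than one particular encoding of it, the usual workaround is to decode first and hash the raw pixels together with the dimensions and pixel format. A rough sketch, assuming the image has already been decoded into a buffer by whatever library you use (FNV-1a is used here only to keep the example self-contained; in practice you would feed the same bytes to SHA-256 or similar):

#include <stddef.h>
#include <stdint.h>

/* 64-bit FNV-1a, a stand-in for a real cryptographic hash. */
static uint64_t fnv1a(uint64_t h, const void *data, size_t len)
{
    const uint8_t *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Hash the decoded image: dimensions and channel count are mixed in so that
 * e.g. an RGB and an RGBA rendition of the same pixels do not collide. */
uint64_t hash_decoded_image(const uint8_t *pixels, uint32_t width,
                            uint32_t height, uint32_t channels)
{
    uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
    h = fnv1a(h, &width, sizeof width);
    h = fnv1a(h, &height, sizeof height);
    h = fnv1a(h, &channels, sizeof channels);
    return fnv1a(h, pixels, (size_t)width * height * channels);
}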

Understanding webp encoder options

I'm currently experimenting with the webp encoder (no WIC) in a Windows 64 environment. My samples are 10 JPG stock photos depicting landscapes and houses, and the photos are already optimized with jpegtran. I do this because my goal is to optimize the images of a whole website where the images have already been compressed with Photoshop's Save for Web command at various quality values and then optimized with jpegtran.
I found out that using values smaller than -q 85 has a visual impact on the quality of the webp images, so I'm playing with values above 90, where the difference is smaller. I also concluded that I have to use -jpeg_like, because without it the output is sometimes bigger than the original, which is not acceptable. I also use -m 6 -f 100 -strong because I really don't mind how long the encoder takes, and I'm trying to achieve the smoothest results. I tried several values for these and concluded that -m 6 -f 100 -strong gives the best output regarding quality and size.
I also tried -preset photo without any other parameter except -q, but the output gets bigger.
What I don't understand from https://developers.google.com/speed/webp/docs/cwebp#options are the options -sns and -segments, which seem to have a great impact on the output size. Sometimes the output is bigger and sometimes smaller for the same options, but I haven't yet worked out why, or how to use them properly.
I also don't understand the -sharpness option, which doesn't have any impact on the output size, at least for me.
My approach is less a scientific method than trial and error, and if anybody knows how to use those options for this specific kind of input and can explain them for optimum results, I would appreciate such feedback.
-strong and -sharpness only change the strength of the filtering recorded in the header of the compressed bitstream; the filtering is applied at decoding time. That's why you don't see a change in file size for these.
-sns controls the choice of filtering strength and quantization values within each segment. A segment is just a group of macroblocks in the picture that are believed to share similar properties regarding complexity and compressibility. A complex photo should likely use the maximum of 4 segments allowed (which is the default).
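If you drive the encoder through the libwebp API instead of the cwebp command line, it can be easier to sweep these settings programmatically; as far as I can tell, the flags correspond to the WebPConfig fields shown below (the values are only placeholders, not recommendations):

#include <webp/encode.h>

/* Configure the encoder roughly like
 * "-q 90 -m 6 -sns 80 -segments 4 -f 100 -strong -sharpness 0";
 * returns 0 on failure. */
int make_config(WebPConfig *config)
{
    if (!WebPConfigInit(config))        /* fills in library defaults */
        return 0;

    config->quality          = 90;      /* -q */
    config->method           = 6;       /* -m, slowest/most thorough */
    config->sns_strength     = 80;      /* -sns, spatial noise shaping */
    config->segments         = 4;       /* -segments, max macroblock groups */
    config->filter_strength  = 100;     /* -f, deblocking filter strength */
    config->filter_type      = 1;       /* -strong (1) vs -nostrong (0) */
    config->filter_sharpness = 0;       /* -sharpness */

    return WebPValidateConfig(config);  /* range-checks the values */
}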

Alternative to sws_scale

I am encoding a capture of the Windows screen with x264 using libavcodec. Since the input is RGB, I am converting it to YUV to make it compatible with x264. I am using the sws_scale function for this.
My question is whether there is any alternative to this function, since I don't need any scaling in my case. It would also be useful if someone could shed some light on how this function works.
P.S.: I am assuming x264 operates only in the YUV color space. If this assumption is incorrect, please let me know.
Thanks in advance.
I could not find an alternative to swscale, and it seems that, apart from the fast bilinear algorithm (for scaling), all the other algorithms used in the library introduce only a fairly negligible color shift.
Also, it is mathematically impossible to convert from RGB to the YUV color space without any color shift (due to the approximations in the equations).
P.S.: I could not get the RGB version of libx264/libavcodec working. If you have details on how to implement it and how to build a corresponding version on Windows, please post links/info.
P.S.: I am assuming x264 operates only in the YUV color space. If this assumption is incorrect, please let me know.
libx264 supports I420/YV12/NV12/I422/YV16/NV16/I444/YV24/BGR24/BGR32/RGB24 input colorspaces, which are encoded as YUV 4:2:0/YUV 4:2:2/YUV 4:4:4/RGB (the output colorspace should be specified in the encoder params). But anything except YUV 4:2:0 will need support from the decoder, because the others are not part of the High profile but of newer profiles (High 4:2:2 and High 4:4:4).
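For completeness, when no scaling is involved the RGB-to-YUV 4:2:0 conversion itself is small enough to write by hand. Below is a minimal, unoptimized sketch using the common BT.601 limited-range fixed-point approximation (chroma is simply taken from the top-left pixel of each 2x2 block rather than averaged, to keep it short); this is not x264's or FFmpeg's own code, just the textbook formula:

#include <stdint.h>

/* Convert packed RGB24 (R,G,B per pixel) to planar I420 (YUV 4:2:0),
 * BT.601 limited range. width and height are assumed to be even. */
void rgb24_to_i420(const uint8_t *rgb, int width, int height,
                   uint8_t *y_plane, uint8_t *u_plane, uint8_t *v_plane)
{
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            const uint8_t *p = rgb + (j * width + i) * 3;
            int r = p[0], g = p[1], b = p[2];

            y_plane[j * width + i] =
                (uint8_t)(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);

            /* Subsample chroma: one U and one V value per 2x2 block. */
            if ((j & 1) == 0 && (i & 1) == 0) {
                int idx = (j / 2) * (width / 2) + (i / 2);
                u_plane[idx] =
                    (uint8_t)(((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128);
                v_plane[idx] =
                    (uint8_t)(((112 * r - 94 * g - 18 * b + 128) >> 8) + 128);
            }
        }
    }
}

The resulting planes can then be wrapped in an I420 picture for the encoder (e.g. X264_CSP_I420 or AV_PIX_FMT_YUV420P). Note that sws_scale does the same conversion with SIMD optimizations, so a plain loop like this will usually be slower.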

image and video compression

What are similar compressors to the RAR algorithm?
I'm interested in compressing videos (for example, avi) and images (for example, jpg)
WinRAR reduced an AVI video (1 frame/sec) to 0.88% of its original size (i.e. it was 49.8MB, and it went down to 442KB).
It finished the compression in less than 4 seconds.
So, I'm looking for a similar (open) algorithm. I don't care about decompression time.
Compressing "already compressed" formats are meaningless. Because, you can't get anything further. Even some archivers refuse to compress such files and stores as it is. If you really need to compress image and video files you need to "recompress" them. It's not meant to simply convert file format. I mean decode image or video file to some extent (not require to fully decoding), and apply your specific models instead of formats' model with a stronger entropy coder. There are several good attempts for such usages. Here is a few list:
PackJPG: an open-source and fast JPEG recompressor.
Dell's experimental MPEG1 and MPEG2 compressor: closed source and proprietary, but you can at least test how strong that experimental compressor is.
Precomp: closed-source free software (but it will be opened in the near future). It recompresses GIF, BZIP2, JPEG (using PackJPG) and Deflate streams (only those generated with the ZLIB library).
Note that recompression is usually a very time-consuming process, because you have to ensure bit-identical restoration. Some programs (like Precomp) even check every possible parameter to ensure stability. Also, their models have to become ever more complex to gain even a small amount.
Compressed formats like JPG can't really be compressed any further, since they have essentially reached their entropy limit; however, uncompressed formats like BMP, WAV, and (uncompressed) AVI can.
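One quick way to check this for yourself is to estimate the byte-level entropy of a file: a JPEG or RAR archive usually comes out close to 8 bits per byte (nothing left for a byte-oriented compressor to gain), while a BMP or WAV is typically well below that. A small sketch (compile with -lm; the file name is passed on the command line):

#include <math.h>
#include <stdio.h>

/* Print an estimate of the Shannon entropy of a file, in bits per byte.
 * Values near 8.0 mean a byte-oriented compressor has little left to gain. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned long long counts[256] = {0}, total = 0;
    int c;
    while ((c = fgetc(f)) != EOF) { counts[c]++; total++; }
    fclose(f);

    double entropy = 0.0;
    for (int i = 0; i < 256; i++) {
        if (counts[i]) {
            double p = (double)counts[i] / (double)total;
            entropy -= p * log2(p);
        }
    }
    printf("%.3f bits per byte\n", entropy);
    return 0;
}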
Take a look at LZMA