Which one produces the smallest 9-bit depth grayscale images using LibPNG?
16-bit grayscale
8-bit grayscale with alpha, with the 9th bit stored as alpha
Any other suggestion?
Also, from the documentation it looks like in 8-bit GRAY_ALPHA, the alpha channel is 8 bits as well. Is it possible to have 8 bits of gray with only one bit of alpha?
If all 256 possible gray levels are present (or are potentially present), you'll have to use 16-bit G8A8 pixels. But if one or more gray levels is not present, you can use that spare level for transparency, and use 8-bit indexed pixels or grayscale plus a tRNS chunk to identify the transparent value.
Libpng doesn't provide a way of checking for whether a spare level is available or not, so you have to do it in your application. ImageMagick, for example, does that for you:
$ pngcheck -v rgba32.png
File: rgba32.png (178 bytes)
chunk IHDR at offset 0x0000c, length 13
64 x 64 image, 32-bit RGB+alpha, non-interlaced
chunk IDAT at offset 0x00025, length 121
zlib: deflated, 32K window, maximum compression
chunk IEND at offset 0x000aa, length 0
$ magick rgba32.png im_optimized.png
$ pngcheck -v im_optimized.png
File: im_optimized.png (260 bytes)
chunk IHDR at offset 0x0000c, length 13
64 x 64 image, 8-bit grayscale, non-interlaced
chunk tRNS at offset 0x00025, length 2
gray = 0x00ff
chunk IDAT at offset 0x00033, length 189
zlib: deflated, 8K window, maximum compression
chunk IEND at offset 0x000fc, length 0
There is no G8A1 format defined in the PNG specification. But the alpha channel, being all 0's or 255's, compresses very well, so it's nothing to worry about. Note that in this test case (a simple white-to-black gradient), the 32-bit RGBA file is actually smaller than the "optimized" 8-bit grayscale+tRNS file.
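Since libpng won't do the spare-level check for you, here is a minimal application-side sketch in C: it scans an 8-bit grayscale buffer for a gray level that never occurs, which could then be declared transparent (for instance via libpng's png_set_tRNS when writing). The function name and interface are just illustrative.

#include <stddef.h>

/* Scan 8-bit gray pixels for a level that never occurs.
   Returns the spare level (0..255), or -1 if all 256 levels are used. */
int find_spare_gray_level(const unsigned char *pixels, size_t count)
{
    int used[256] = { 0 };
    for (size_t i = 0; i < count; i++)
        used[pixels[i]] = 1;
    for (int level = 0; level < 256; level++)
        if (!used[level])
            return level;
    return -1;    /* every gray level present: fall back to G8A8 or G16 */
}

If it returns -1, there is no spare level and you are back to one of the 16-bit-per-pixel formats.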
Which one produces the smallest 9-bit depth grayscale images using LibPNG?
16-bit grayscale
8-bit grayscale with alpha, with the 9th bit stored as alpha
The raw byte layout of both formats is very similar: G2-G1 G2-G1 ... in one case (most significant byte of each 16-bit value first), G-A G-A ... in the other. Because the filtering/prediction is done at the byte level, little or no difference is to be expected between the two alternatives. Since 16-bit grayscale is the more natural fit for your scenario, I'd opt for it.
If you go the other route, I'd suggest experimenting with putting either the most significant bit or the least significant bit in the alpha channel (see the sketch below).
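For illustration, here is a small sketch of how a single 9-bit sample could map to two bytes in each layout; the gray+alpha split shown is the least-significant-bit-in-alpha variant, and the names are arbitrary.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t v = 0x155;                 /* a 9-bit sample, 0..511 */

    /* 16-bit grayscale: PNG stores the most significant byte first */
    unsigned char g16[2] = { (unsigned char)(v >> 8), (unsigned char)(v & 0xff) };

    /* 8-bit gray + alpha: top 8 bits in gray, 9th (least significant) bit in alpha */
    unsigned char ga8[2] = { (unsigned char)(v >> 1), (v & 1) ? 0xff : 0x00 };

    printf("G16: %02x %02x\n", g16[0], g16[1]);   /* 01 55 */
    printf("GA8: %02x %02x\n", ga8[0], ga8[1]);   /* aa ff */
    return 0;
}

Either way the per-pixel byte stream presented to the filters looks very similar, which is why the compressed sizes end up close.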
Also, from the documentation it looks like in 8-bit GRAY_ALPHA, alpha is 8 bits as well. Is it possible to have 8 bits of gray with only one bit of alpha?
No. But 1 bit of alpha would mean totally opaque/totally transparent, hence you could opt to add a tRNS chunk to declare a special colour as totally transparent (as pointed out in the other answer, this disallows the use of that colour).
If my input is less than a multiple of 512 bits, I need to append padding bits and length bits to it so that it becomes a multiple of 512 bits.
https://infosecwriteups.com/breaking-down-sha-256-algorithm-2ce61d86f7a3
But what if my input is already a multiple of 512 bits? Is it still required to do the padding? For example, if my message is already 512 bits long, do I need to pad it to become 1024 bits long?
And what if my input is less than a multiple of 512 bits, but too long to allow appending the length bits? For example, my input is 504 bits long.
It seems to explain it rather clearly:
The number of bits we add is calculated as such so that after addition of these bits the length of the message should be exactly 64 bits less than a multiple of 512.
If there are 512 bits, you have to pad to 960 bits, then add 64 bits of length for a total of 1024. The same applies to 504, since 504 > 512 - 64. Otherwise you'd have a situation where the last 64 bits are sometimes the length bits and sometimes not, which doesn't seem right.
The only case where you wouldn't add any padding is if the data is already 512*n-64, e.g. if it were 448 bits. Then you add no padding but still add the length bits, and end up with 512.
According to RFC 6234, you should always pad the data with at least one bit set to 1 and append the length (64-bit).
Even if the input is 448 bits long, you should pad it, resulting in two 512-bit chunks:
[448 bits of input]['1']['0' x 511][64 bits representing input length]
So this is wrong:
The only case where you wouldn't add any padding is if the data is already 512*n-64, e.g. if it were 448 bits. Then you add no padding but still add the length bits, and end up with 512.
Note: if you accept input only as a byte array, the minimum padding is one byte set to 10000000 (binary).
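A small sketch of the resulting rule, so the numbers from the examples above can be checked: the padded length is the message length plus the mandatory '1' bit plus the 64-bit length field, rounded up to a multiple of 512. The helper name is made up.

#include <stdint.h>
#include <stdio.h>

/* Total length in bits after SHA-256 padding: message + one '1' bit +
   zero-or-more '0' bits + 64-bit length, rounded up to a multiple of 512. */
uint64_t sha256_padded_bits(uint64_t msg_bits)
{
    uint64_t needed = msg_bits + 1 + 64;
    return (needed + 511) / 512 * 512;
}

int main(void)
{
    printf("448 -> %llu\n", (unsigned long long)sha256_padded_bits(448));  /* 1024 */
    printf("504 -> %llu\n", (unsigned long long)sha256_padded_bits(504));  /* 1024 */
    printf("512 -> %llu\n", (unsigned long long)sha256_padded_bits(512));  /* 1024 */
    return 0;
}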
64-bit architectures like x86-64 have a word size of 64 bits. In that case, if a memory access crosses a word boundary, it requires double the time to access the data, so alignment is required. That is what I know; correct me if I am wrong.
Now, GCC uses 16-byte alignment (MSVC at least uses 8-byte alignment) for long double, whose non-padding size is 10 bytes. But with 8-byte alignment it already requires 2 read cycles, and the same is true with 16-byte alignment. So why the stricter 16-byte alignment? What is the purpose of alignment other than what I mentioned above?
Also, since the non-padding part of long double (the 80-bit x87 extended FP format) is 10 bytes, 4-byte alignment would actually be sufficient: the data can still be read within 2 read cycles (split either 4+6 or 8+2). So please also explain where this assumption goes wrong.
(The actual sizeof(long double) is 12 in the i386 System V ABI, 16 in x86-64 System V. Multiples of their respective alignof() of 4 and 16)
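If you want to see what your own compiler and ABI do, a quick check is the following (values will differ between i386 System V, x86-64 System V, and MSVC, where long double is typically just double):

#include <stdio.h>
#include <stdalign.h>

int main(void)
{
    printf("sizeof(long double)  = %zu\n", sizeof(long double));
    printf("alignof(long double) = %zu\n", (size_t)alignof(long double));
    return 0;
}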
I am making a simple PNG image from scratch. I already have the scanline data for it. Now I want to put it into a zlib stream without compressing it. How can I do that? I have read the "ZLIB Compressed Data Format Specification version 3.3" at https://www.ietf.org/rfc/rfc1950.txt but I still don't understand it. Could someone give me a hint about setting the bytes in the zlib stream?
Thanks in advance!
As mentioned in RFC1950, the details of the compression algorithm are described in another castle RFC: DEFLATE Compressed Data Format Specification version 1.3 (RFC1951).
There we find
3.2.3. Details of block format
Each block of compressed data begins with 3 header bits
containing the following data:
first bit BFINAL
next 2 bits BTYPE
Note that the header bits do not necessarily begin on a byte
boundary, since a block does not necessarily occupy an integral
number of bytes.
BFINAL is set if and only if this is the last block of the data
set.
BTYPE specifies how the data are compressed, as follows:
00 - no compression
[... a few other types]
which is the one you want. These 2 BTYPE bits, in combination with the last-block marker BFINAL, are all you need to write "uncompressed" zlib-compatible data:
3.2.4. Non-compressed blocks (BTYPE=00)
Any bits of input up to the next byte boundary are ignored.
The rest of the block consists of the following information:
0 1 2 3 4...
+---+---+---+---+================================+
| LEN | NLEN |... LEN bytes of literal data...|
+---+---+---+---+================================+
LEN is the number of data bytes in the block. NLEN is the
one's complement of LEN.
So, the pseudo-algorithm is:
set the initial 2 bytes to 78 9c ("default compression").
for every block of 32768 or fewer bytesᵃ
if it's the last block, write 01, else write 00
... write [block length] [COMP(block length)]ᵇ
... write the immediate data
repeat until all data is written.
Don't forget to add the Adler-32 checksum at the end of the compressed data, in big-endian order, after 'compressing' it this way. The Adler-32 checksum verifies the uncompressed, original data. In the case of PNG images, that data has already been processed by the PNG filters and has a filter byte prepended to each row – and that is "the" data that gets compressed by this DEFLATE-compatible scheme.
ᵃ This is a value that happened to be convenient for me at the time; it ought to be safe to write blocks as large as 65535 bytes (just don't try to cross that line).
ᵇ Both as words with the low byte first, then high byte. It is briefly mentioned in the introduction.
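Putting the whole recipe together, here is a C sketch that wraps already-prepared (filtered) data in a zlib stream using only stored blocks; the names, block size, and buffer handling are illustrative, not a fixed API.

#include <stdint.h>
#include <stdlib.h>

/* Wrap `len` bytes of data in a zlib stream built from stored (BTYPE=00)
   deflate blocks. Returns a malloc'd buffer; *out_len receives its size. */
unsigned char *zlib_store(const unsigned char *data, size_t len, size_t *out_len)
{
    const size_t BLOCK = 32768;                 /* anything up to 65535 works */
    size_t nblocks = (len + BLOCK - 1) / BLOCK;
    if (nblocks == 0) nblocks = 1;              /* always emit at least one block */
    size_t total = 2 + nblocks * 5 + len + 4;   /* header + block headers + data + Adler-32 */
    unsigned char *out = malloc(total), *p = out;
    if (out == NULL) return NULL;

    *p++ = 0x78; *p++ = 0x9c;                   /* zlib header ("default compression") */

    uint32_t s1 = 1, s2 = 0;                    /* Adler-32 state */
    size_t pos = 0;
    for (size_t b = 0; b < nblocks; b++) {
        size_t n = (len - pos < BLOCK) ? len - pos : BLOCK;
        *p++ = (b == nblocks - 1) ? 0x01 : 0x00;    /* BFINAL + BTYPE=00 */
        *p++ = n & 0xff; *p++ = (n >> 8) & 0xff;    /* LEN, low byte first */
        *p++ = ~n & 0xff; *p++ = (~n >> 8) & 0xff;  /* NLEN = one's complement of LEN */
        for (size_t i = 0; i < n; i++) {
            unsigned char c = data[pos + i];
            *p++ = c;
            s1 = (s1 + c) % 65521;              /* Adler-32 over the raw data */
            s2 = (s2 + s1) % 65521;
        }
        pos += n;
    }
    uint32_t adler = (s2 << 16) | s1;
    *p++ = (unsigned char)(adler >> 24);        /* checksum, big-endian */
    *p++ = (unsigned char)(adler >> 16);
    *p++ = (unsigned char)(adler >> 8);
    *p++ = (unsigned char)(adler);

    *out_len = (size_t)(p - out);
    return out;
}

For a PNG, the buffer this produces is exactly what goes into the IDAT chunk(s).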
I have a number that has about 540,000 digits and I want to compress this number to a reasonable length since 540,000 is kinda absurd. What would be the best compression algorithm for this and how small can I compress it to?
A little background: basically I have a picture that is 200 pixels wide and 300 pixels tall. I'm extracting the red, green, and blue values of each pixel, so each pixel is represented with 9 digits (because each red/green/blue value is a number between 0 and 255). The picture has 60,000 pixels in total, so representing it as a single number gives 9 x 60,000 = 540,000 digits.
That's not a number. That's an image. There are many ways to compress an image. For lossless compression, look at PNG, JPEG-2000, and BCIF.
You don't gain anything by converting an image into a number. The entropy is the same, and any good lossless compression algorithm would perform the same irrespective of the encoding you use.
If you want a textual version of the image as opposed to a binary format, consider encoding it in Base64 rather than Base10. This way each pixel is represented with 4 characters rather than 9, since every 3 bytes of pixel data encode to 4 Base64 characters.
https://en.wikipedia.org/wiki/Base64
Further compression can be achieved by encoding the image as a PNG and then taking the Base64 representation of that.
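As an illustration of the 3-bytes-to-4-characters ratio, here is a small, self-contained Base64 encoder sketch in C (the function name is arbitrary; in practice you would use an existing library):

#include <stdio.h>
#include <stdlib.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode `len` bytes into a newly allocated NUL-terminated Base64 string. */
char *base64_encode(const unsigned char *in, size_t len)
{
    size_t out_len = 4 * ((len + 2) / 3);
    char *out = malloc(out_len + 1);
    if (out == NULL) return NULL;
    size_t i, j = 0;
    for (i = 0; i + 2 < len; i += 3) {          /* full 3-byte groups */
        unsigned v = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
        out[j++] = B64[(v >> 18) & 63];
        out[j++] = B64[(v >> 12) & 63];
        out[j++] = B64[(v >> 6) & 63];
        out[j++] = B64[v & 63];
    }
    if (i < len) {                              /* 1 or 2 trailing bytes */
        unsigned v = in[i] << 16;
        if (i + 1 < len) v |= in[i + 1] << 8;
        out[j++] = B64[(v >> 18) & 63];
        out[j++] = B64[(v >> 12) & 63];
        out[j++] = (i + 1 < len) ? B64[(v >> 6) & 63] : '=';
        out[j++] = '=';
    }
    out[j] = '\0';
    return out;
}

int main(void)
{
    unsigned char pixel[3] = { 200, 17, 42 };   /* one R,G,B pixel */
    char *s = base64_encode(pixel, sizeof pixel);
    printf("%s\n", s);                          /* 3 bytes -> 4 characters ("yBEq") */
    free(s);
    return 0;
}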
I have a binary image, and I need to compress it using run-length encoding (RLE). I used the regular RLE algorithm with a maximum run count of 16.
Instead of reducing the file size, it is increasing it. For example, in a 5x5 matrix, 10 values have a repeat count of one, which makes the file bigger.
How can I avoid this? Is there a better way to apply RLE only partially to the matrix?
If it's for your own use only, you can create your own custom image file format, and in the header you can mark whether RLE is used or not, the range of X and Y coordinates, and possibly the bit planes for which it is used. But if you want to produce an image file that follows some defined image file format that uses RLE (.pcx comes to mind), you must follow that format's specification. If I remember correctly, .pcx had no option to disable RLE partially.
If you are not required to use RLE and you are only looking for an easy-to-implement compression method, before using any compression I suggest that you first check how many bytes your 5x5 binary matrix file takes. If the file size is 25 bytes or more, then you are storing it with at least one byte (8 bits) per element (or alternatively you have a lot of data which is not matrix content). If you don't need to store the size, a 5x5 binary matrix takes 25 bits, which is 3 bytes and 1 bit, so practically 4 bytes. I'm quite sure that there's no compression method that is generally useful for files of about 4 bytes. If you have matrices of different sizes, you can use e.g. unsigned 16-bit integer fields (2 bytes each) for a maximum matrix width/height of 65535, or unsigned 32-bit integer fields (4 bytes each) for a maximum width/height of 4294967295.
For example, a 100x100 binary matrix takes 10000 bits, which is 1250 bytes. Add 2 x 2 = 4 bytes for 16-bit size fields, or 2 x 4 = 8 bytes for 32-bit size fields. After that, you can plan what the best compression method would be.
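For reference, here is a small sketch of that plain bit-packed layout (a 16-bit width and height header, then 8 matrix cells per byte, no RLE); the field order and the big-endian choice here are just one possible convention:

#include <stdint.h>
#include <stdio.h>

/* Pack a w x h binary matrix (one 0/1 value per element in `cells`) into
   `out`: 2+2 header bytes, then the bits packed MSB-first. Returns the
   total packed size in bytes. `out` must be large enough. */
size_t pack_binary_matrix(const unsigned char *cells, uint16_t w, uint16_t h,
                          unsigned char *out)
{
    size_t bits = (size_t)w * h;
    size_t nbytes = (bits + 7) / 8;

    out[0] = w >> 8; out[1] = w & 0xff;      /* 16-bit size fields, big-endian */
    out[2] = h >> 8; out[3] = h & 0xff;

    for (size_t i = 0; i < nbytes; i++) out[4 + i] = 0;
    for (size_t i = 0; i < bits; i++)
        if (cells[i])
            out[4 + i / 8] |= 1u << (7 - i % 8);

    return 4 + nbytes;
}

int main(void)
{
    unsigned char m[25] = { 1,0,1,0,1, 0,1,0,1,0, 1,1,1,0,0, 0,0,0,1,1, 1,0,1,0,1 };
    unsigned char buf[4 + (25 + 7) / 8];     /* 4-byte header + 4 packed bytes */
    size_t n = pack_binary_matrix(m, 5, 5, buf);
    printf("packed size: %zu bytes\n", n);   /* 8 bytes for the whole 5x5 matrix */
    return 0;
}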