How many bits per component do I have to specify for pngcrush-optimized 24-bit PNG files with alpha transparency? - iphone

CGBitmapContextCreate takes a parameter that's not very obvious to me:
For example, for a 32-bit pixel format and an RGB color space, you would specify a value of 8 bits per component.
I have created 24-bit PNG files with alpha transparency and added them to Xcode. At compile time, Xcode optimizes those PNGs with pngcrush.
So, when trying to make a graphics context out of such an image file on iPhone OS, I need to specify the bits per component.
In this case, I would say they're 4 bits per component, although I don't know if alpha counts as a component.

It's 8 bits per component:
Red:8;
Green:8;
Blue:8;
Alpha:8;
That adds up to 32 bits per pixel. Your 24-bit PNG with transparency is 24 bits for RGB, plus 8 bits for transparency (the 'alpha channel').
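For reference, a minimal sketch of that call (in Swift; the underlying C function CGBitmapContextCreate takes the same parameters, and the width/height here are placeholders):

    import CoreGraphics

    // Sketch: a 32-bit RGBA bitmap context. bitsPerComponent is 8 regardless of
    // whether the source PNG was saved as "24-bit" (RGB) or "32-bit" (RGBA);
    // it describes one channel, not the whole pixel.
    func makeRGBAContext(width: Int, height: Int) -> CGContext? {
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        return CGContext(
            data: nil,                       // let Core Graphics allocate the buffer
            width: width,
            height: height,
            bitsPerComponent: 8,             // 8 bits each for R, G, B and A
            bytesPerRow: width * 4,          // 4 bytes (32 bits) per pixel
            space: colorSpace,
            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
        )
    }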

Related

Trying to understand how 1-bit BMP image is drawn

As can be seen in this example, each channel (R, G, B) in a BMP file takes an input. A 24-bit BMP image has 8 bits for R, 8 bits for G, and 8 bits for B. I saved an image in MS Paint as monochrome (black and white). Its properties say the image's depth is 1 bit. The question is: who gets this 1 bit - R, G or B? Isn't it mandatory that all three channels get some value? I am not able to understand how MS Paint has drawn this BMP image using 1 bit.
Thanks in advance for your replies.
There are multiple ways to store a bitmap. In this case, the important distinction is RGB versus indexed.
In an RGB bitmap, every pixel is associated with three separate values, one for red, another for green, and another for blue. Depending on the "bitness" (bit depth) and the specific pixel format, the different colour channels can have different amounts of bits allocated to them - the simplest case is typical true colour with 8 bits for each of the channels, plus another (optional) 8 bits for the alpha channel. However, some pixel formats allocate the bits differently - the idea is that the human eye has different sensitivity to each of those channels, so you can save space and improve visual quality by allocating the bits in a smarter way. For example, one of the more popular pixel formats is BGR-565 - that is, 16 bits total, 5 bits for blue, 6 bits for green and 5 bits for red.
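To make the 565 packing concrete, here is a small sketch (mine, not part of the original answer; the exact channel order varies between formats) that unpacks one 16-bit BGR-565 pixel into 8-bit channels:

    // Assumed layout: blue in the top 5 bits, green in the middle 6, red in the bottom 5.
    func unpackBGR565(_ pixel: UInt16) -> (r: UInt8, g: UInt8, b: UInt8) {
        let b5 = (pixel >> 11) & 0x1F          // 5 bits of blue
        let g6 = (pixel >> 5) & 0x3F           // 6 bits of green
        let r5 = pixel & 0x1F                  // 5 bits of red
        // Scale each channel up to the full 0...255 range.
        let r = UInt8((UInt32(r5) * 255) / 31)
        let g = UInt8((UInt32(g6) * 255) / 63)
        let b = UInt8((UInt32(b5) * 255) / 31)
        return (r, g, b)
    }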
In an indexed bitmap, the value stored with each of the pixels is an index (hence "indexed bitmap") into a palette (a colour table). The palette is usually a simple table of colours, using RGB "pixel" formats to assign each index some specific colour. For example, index 0 might mean black, 1 might mean turquoise, etc.
In this case, the bit depth doesn't exactly map to colour quality - you're not trying to cover the whole colour space, you're focusing on some subset of the possible colours instead. For example, if you have 256 shades of grey (say, from black to white), a true-colour bitmap would need at least three bytes per pixel (and each of those three bytes would have the same value), while you could use an indexed bitmap with a palette of all the grey colours, requiring only one byte per pixel (plus the cost of the palette - 256 * 3 bytes). There are a lot of benefits to using indexed bitmaps, and a lot of tricks to improve the visual quality further without using more bits per pixel, but that would be way beyond the scope of this question.
This also means that you only need as many possible values as you want to show. If you only need 16 different colours, you only need four bits per pixel. If you only need a monochromatic bitmap (that is, a pixel is either "on" or "off"), you only need one bit per pixel - and that's exactly your case. Given the number of distinct colours you need, you can get the required bit depth by taking a base-2 logarithm (e.g. log2 256 = 8).
So let's say you have an image that only uses two colours - black and white. You'll build a palette with two colours, black and white. And for each of the pixels in the bitmap, you either save 0 if it's black, or 1 if it's white.
Now, when you want to draw a bitmap like this, you simply read the palette (0 -> RGB(0, 0, 0), 1 -> RGB(255, 255, 255) in this case), and then you read one pixel after another. If the bit is zero, paint a black pixel. If it's one, paint a white pixel. Done :)
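A rough sketch of that loop in Swift (a hypothetical two-entry palette, with pixels packed eight to a byte, most significant bit first, as BMP and PNG do):

    let palette: [(r: UInt8, g: UInt8, b: UInt8)] = [
        (0, 0, 0),          // index 0 -> black
        (255, 255, 255),    // index 1 -> white
    ]

    // Decode one 1-bit-per-pixel row into RGB triples via the palette.
    func decodeOneBitRow(_ rowBytes: [UInt8], pixelCount: Int) -> [(r: UInt8, g: UInt8, b: UInt8)] {
        var pixels: [(r: UInt8, g: UInt8, b: UInt8)] = []
        for i in 0..<pixelCount {
            let byte = rowBytes[i / 8]
            let bit = (byte >> (7 - i % 8)) & 1   // MSB-first within each byte
            pixels.append(palette[Int(bit)])
        }
        return pixels
    }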
No, it depends on the type of data you chose to save as. Because you chose to save as monochrome, the RGB mapping is not used here; the mapping used is one byte per pixel, ranging from white to black.
Each type has its own mapping: saving as 24-bit gives you RGB mapping; saving as 256 colors maps a byte to each pixel, where each value represents a color (you can find the table on the internet); as for monochrome, you'll have the same as a 256-color bitmap, but the color table will only have white and black.
Sorry for the mistake - the scheme I explained for monochrome is actually used by grayscale. Monochrome uses one bit to indicate whether the pixel is black or white, depending on the value of each bit; no mapping table is used.

iOS 256 Colors (VGA) to RGB

I'd like to convert a VGA color (256 colors; 8-bit) to an RGB color on iOS.
Is it possible to compute this, or do I have to use color tables (using CGColorSpaceCreateIndexed)?
UIColor does not support 256 colors.
Thanks :)
Somewhere, the title you're porting should have set the palette. On the VGA, the 256 colours are mapped through a table that the programmer has previously set to convert them into 18 bit RGB colour (at a uniform 6 bits per channel). If you're running the original title through emulation then watch for writes to ports 0x3c6, 0x3c8 and 0x3c9 or calls to the BIOS via int 10h, with ax = 0x1010 (to set a single colour) or 0x1012 (to set a range). If you have the original code, obviously look for the source of the palette table.
In drawing terms, you can keep the palette yourself, for example as a C-style array of 256 CGColorRefs, or use CGColorSpaceCreateIndexed as you suggest (ignore Apple's slight documentation error; the colour table can contain up to 256 entries, not up to 255) probably with a bitmap context to just pass your buffer off to CoreGraphics and forget about it.
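If you go the CGColorSpaceCreateIndexed route, a rough Swift sketch (names and buffer layout are my assumptions: paletteRGB is 256 * 3 packed RGB bytes with the VGA DAC values already expanded from 6 to 8 bits per channel, and pixels is the raw 8-bit-per-pixel framebuffer) might look like this:

    import CoreGraphics

    func makeIndexedImage(pixels: Data, width: Int, height: Int, paletteRGB: [UInt8]) -> CGImage? {
        let baseSpace = CGColorSpaceCreateDeviceRGB()
        guard let indexedSpace = paletteRGB.withUnsafeBufferPointer({ table -> CGColorSpace? in
            // The colour table is copied, so the pointer only needs to live for this call.
            CGColorSpace(indexedBaseSpace: baseSpace,
                         last: 255,              // highest valid index, i.e. 256 entries
                         colorTable: table.baseAddress!)
        }),
        let provider = CGDataProvider(data: pixels as CFData) else { return nil }

        return CGImage(width: width,
                       height: height,
                       bitsPerComponent: 8,
                       bitsPerPixel: 8,          // one palette index per pixel
                       bytesPerRow: width,
                       space: indexedSpace,
                       bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                       provider: provider,
                       decode: nil,
                       shouldInterpolate: false,
                       intent: .defaultIntent)
    }

The resulting CGImage can then be drawn into a normal RGB context; Core Graphics resolves the palette when it renders the image.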
I expect the remapping will be performed on the CPU, so if that gets a bit too costly then consider using GL ES 2.x and writing a suitable pixel shader — you'd upload your actual image as, say, a luminance (ie, single channel) texture, plus a 256x1 texture where the colour at each spot is a palette entry, then write a shader that reads from the first texture for the current texture coordinates and uses that value to index the second.

Best practice for PNG optimization?

I'd like to prepare my PNGs for the best optimization, so I can get the best image quality (lossless if possible) and the smallest size.
From what I understand, I should use: PNG, 72 dpi, RGB, but what else?
Here is what we find in the iPhone HIG:
Note: The standard bit depth for icons and images is 24 bits (8 bits each for red, green, and blue), plus an 8-bit alpha channel. The PNG format is recommended, because it preserves color depth and supports an embedded alpha channel.
I guess this means we should save the images as PNG-24 and create them in 8-bit mode? But I also read about 32 bits for best quality?
The interlacing scheme (which adds to the file size) allows PNGs to display faster. Does this apply to the iPhone?
Thanks.
I suggest using ImageAlpha (lossy) on as many images as you can, because it greatly reduces their size.
Optimize all images with ImageOptim — it will remove all invisible junk and re-compress the data.
Disable Xcode conversion, because it undoes other optimisations and can make images much slower to load.
24-bit is red, green and blue with 8 bits each. 32-bit is RGB plus an 8-bit alpha channel. So if you need (semi-)transparent images, you should go for 32-bit PNG, otherwise 24-bit.
You don't have to compress/crush the PNGs yourself, Xcode's build steps will automatically use pngcrush and re-order the color channels for the iPhone's BGR memory alignment.
For my background app I am using JPEG (Export for web in photoshop) with quality 70.
The other day I heard about pngcrush; I tried it, but the file size is the same...
http://pmt.sourceforge.net/pngcrush/
Take a look at this previous post:
Understanding 24 bit PNG generated with Photoshop

Understanding 24 bit PNG generated with Photoshop

Does a 24-bit .png file with transparency, like those that can be generated with Photoshop, really have 24 bits distributed across each color plus the alpha? Or does the 24-bit refer only to the colors and ignore the alpha (RGBA 8888)?
Is there any tool to examine a PNG file and verify this kind of information? Does Photoshop have any options to verify or configure this?
24 bit + alpha is actually 32 bits per pixel. Meaning you have the Red, Green, Blue and Alpha channels, each being 8 bit, allowing for 256 shades per channel translating to 256 x 256 x 256 x 256 possible colour combinations. That's what the "millions of colours" and "billions of colours" mean in certain graphics and video software.
As I understand it, there are three kinds of "24 bit" pngs:
24 bits with no transparency. No alpha information, truly 24 bits per pixel.
24 bits per pixel with alpha transparency. This would be 24 bits of color information with 8 bits of alpha (allows for various levels of transparency) - 32 bits per pixel total.
24 bits per pixel with binary transparency. This would be 24 bits of color information with 1 bit of alpha (transparent or not transparent) - 25 bits per pixel total.
24 bit PNG doesn't say much. An image has a pixel format. The pixel format describes the Colorspace used (such as CMYK, RGB) and bits per channel information (i.e. how many bits are allocated to represent each channel of the colorspace in use).
Go to File > File Info > Advanced. That should tell you what you are looking for.
After dissecting the exported file myself (from Photoshop CS6), I found that the "24 bit" file generated by Photoshop is actually still 8 bit. The RGBA is still one byte per channel. The IHDR PNG chunk still says that it's 8 bits per channel.
It's an 8 bit PNG.
The exported PNG also contains about 825 bytes of useless marketing text data (per PNG image).
See the image (with the byte for "bits per channel" selected):
See the specification for more details:
http://www.libpng.org/pub/png/spec/1.2/png-1.2.pdf
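If you want to check a file yourself without Photoshop, here is a small sketch that reads those IHDR bytes directly (the offsets follow the spec linked above):

    import Foundation

    // IHDR is always the first chunk: 8-byte signature, 4-byte length,
    // 4-byte "IHDR" tag, then width (4), height (4), bit depth (1), color type (1), ...
    func printPNGHeader(at url: URL) throws {
        let data = try Data(contentsOf: url)
        let signature = Data([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])
        guard data.count > 26, data.prefix(8) == signature else {
            print("Not a PNG file")
            return
        }
        let bitDepth = data[24]    // bits per channel: 8 for a "24-bit" RGB or "32-bit" RGBA PNG
        let colorType = data[25]   // 2 = truecolor, 3 = indexed (palette), 6 = truecolor with alpha
        print("bit depth: \(bitDepth), color type: \(colorType)")
    }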

PNG8+Alpha from Fireworks (colormap) are different/smaller than from elsewhere (RGBA). Why?

In Fireworks, when you export a PNG8 file with alpha transparency, the resulting file will be something like this:
png8-fireworks.png: PNG image data, 500 x 500, 8-bit colormap, non-interlaced
If you convert a 32-bit PNG using other tools (PNGOUT, Smush.it) the result looks like this:
png24-smushit.png: PNG image data, 500 x 500, 8-bit/color RGBA, non-interlaced
png8-pngout.png: PNG image data, 500 x 500, 8-bit/color RGBA, non-interlaced
What exactly is the difference? They both have alpha transparency, but the Fireworks file is 8KB while the others are 20KB. The Fireworks file is noticeably lower quality, though (namely, banding on gradients).
For some images the PNG8+alpha from Fireworks works great and has a super small file size comparatively. I just haven't been able to figure out what Fireworks is doing and how it is different than the other methods.
The PNG8 file is a very efficient format. It finds the unique colors in the image and only saves those in a small palette. The cool part is that it also saves alpha transparency in the palette with each color. (If you have three pure reds (#FF0000) in your image, but each has a different alpha value, let's say 255, 128, 65, it will save three entries in the palette.)
In Fireworks you can also choose to limit the palette size to a power of 2, so you can reduce the colors used for more savings. Often a 256-color image will look fine at 64 colors and save a lot of weight.
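To make that concrete, a small sketch (mine, not from the answer) that counts how many palette entries an RGBA buffer would need, treating each distinct (R, G, B, A) combination as one entry:

    // `pixels` is assumed to be raw RGBA data, 4 bytes per pixel.
    func paletteEntryCount(pixels: [UInt8]) -> Int {
        var seen = Set<UInt32>()
        for i in stride(from: 0, to: pixels.count - 3, by: 4) {
            let rgba = (UInt32(pixels[i]) << 24) |
                       (UInt32(pixels[i + 1]) << 16) |
                       (UInt32(pixels[i + 2]) << 8) |
                        UInt32(pixels[i + 3])
            seen.insert(rgba)
        }
        // The image only fits in a PNG8 colormap if this is <= 256.
        return seen.count
    }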
From the sites of both tools:
PNGOUT:
It won't convert an image to a color type or bit depth that cannot losslessly store the image.
It won't reduce the number of colors being used in an image, or convert the colors to grayscale unless all the colors correspond to PNG grayscale values already.
Smush.it:
It is a "lossless" tool […]
Neither gives you a 256-entry paletted PNG: that's the difference between "colormap" (= palette) and "RGBA" (truecolor = R of 2^8 x G of 2^8 x B of 2^8 x alpha of 2^8, with 2^8 = 256).
Fireworks does.
PNG-8 means 8 bits per pixel, which means it can only display 256 different colours (from a palette).
24 and 32 bits per pixel allow you to use far more colours (and hence get nice smooth gradients), but come at the cost of file size.