JPEG encoding question

24 bits are available per pixel.
Assuming
1. eyes are more sensitive to brightness than to color.
2. eyes are more sensitive to red & green than to blue.
What kind of encoding can I choose?
I thought about it, but didn't get anywhere. Y'CbCr with 4:2:0 subsampling takes care of the brightness part, but what about the color?

That's already accounted for. YUV 4:2:0 means that the color components are subsampled by a factor of two both horizontally and vertically, so each chroma plane carries a quarter of the samples of the luminance plane (the two chroma planes together hold half as much data as the luminance). Also, the quantization tables are different for the color components, which improves the compression rate further.
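For intuition, here is a minimal sketch of what the 4:2:0 step does to the chroma planes (plain numpy; a real encoder uses better resampling filters than a box average):

    import numpy as np

    def subsample_420(cb: np.ndarray, cr: np.ndarray):
        """Average each 2x2 block: 2x subsampling both horizontally and vertically."""
        def pool(c):
            c = c.astype(np.float32)
            return ((c[0::2, 0::2] + c[0::2, 1::2] +
                     c[1::2, 0::2] + c[1::2, 1::2]) / 4).astype(np.uint8)
        return pool(cb), pool(cr)  # each plane keeps 1/4 of its samples

Each chroma plane ends up with a quarter of the luma plane's samples, so Cb and Cr together cost half as much as Y.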

Related

How to convert a black and white photo that was originally colored, back to its original color?

I've converted a colored photo to black and white and bolded the edges. Now I need to convert it back to its original color, keeping the bolded edges. Is there any function in MATLAB that allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
If what you are trying to do is to run some filter over the B/W image and then combine that with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b* if you need lightness that is perceptually uniform with respect to human judgments of difference) and apply your filter only to the lightness channel.
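A rough sketch of that suggestion in Python with scikit-image; the Sobel-based edge "bolding" and the scale factor 50 are stand-ins for whatever filter you actually use:

    import numpy as np
    from skimage import color, filters

    def bold_edges_keep_color(rgb):
        """Filter only the L channel of L*a*b* so hue and chroma survive."""
        lab = color.rgb2lab(rgb)             # rgb: float image in [0, 1]
        L = lab[..., 0]                      # lightness, roughly 0..100
        edges = filters.sobel(L)             # stand-in for the edge filter
        lab[..., 0] = np.clip(L - 50 * edges, 0, 100)  # darken the edges
        return np.clip(color.lab2rgb(lab), 0, 1)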

Transparency with JPEGs

JPEGs are smaller in size than PNGs. So I thought that if I could make a specific region in a JPEG file transparent, with some code, maybe I could save some bytes.
So does anyone know how to achieve this with for example PHP or JavaScript?
No, you can't do this. JPEGs do not support alpha channels, and they have no capacity to designate certain colors as transparent either (GIF-style).
There are several issues with this, all of which come down to the fact that JPEG is a lossy compression format. The JPEG format is optimized for natural images, so sharp edges get blurred. If you intend a specific pixel to have the value #d67fff, there is no guarantee that after color conversion, FDCT, quantization, IDCT, and the color conversion back, the pixel will still have that value. There is also a strong possibility that this pixel value will occur in areas you don't want.
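You can watch the drift happen with a quick Pillow round-trip (the exact values you get back depend on the encoder and the quality setting):

    from PIL import Image

    img = Image.new("RGB", (16, 16), "#d67fff")   # the would-be "transparent key"
    img.save("key.jpg", quality=75)
    back = Image.open("key.jpg").convert("RGB")
    print(back.getpixel((0, 0)))   # usually no longer exactly (214, 127, 255)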
No. JPEG does not support transparency and is not likely to do so any time soon. http://www.faqs.org/faqs/jpeg-faq/part1/section-12.html
You cannot do that: the client renders the image and doesn't know that you want it to treat that color as transparent (plus, the various compression methods in JPEG wouldn't work well with transparency anyway).
I believe you can go with an 8-bit custom-palette PNG, which should save you a lot of space. Otherwise a 24-bit PNG is your only high-color option.
You can convert your image to an SVG containing the color information as a JPEG and the alpha channel as a grayscale mask. Here is a tool I wrote to do it: https://github.com/igrmk/transpeg
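The shape of that trick is roughly the following (a hypothetical helper sketched in Python; the real tool also deals with quality settings and mask compression):

    import base64

    def jpeg_with_alpha_svg(color_jpeg, mask_jpeg, w, h):
        """Wrap a color JPEG and a grayscale-mask JPEG in one SVG.
        In an SVG mask, luminance becomes alpha: white = opaque, black = transparent."""
        def data_uri(path):
            with open(path, "rb") as f:
                return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()
        return (
            f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'xmlns:xlink="http://www.w3.org/1999/xlink" width="{w}" height="{h}">'
            f'<mask id="m"><image width="{w}" height="{h}" '
            f'xlink:href="{data_uri(mask_jpeg)}"/></mask>'
            f'<image width="{w}" height="{h}" mask="url(#m)" '
            f'xlink:href="{data_uri(color_jpeg)}"/></svg>'
        )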

Convert print, CMYK images to tiled, RGB images for iPhone?

I was given some high-res images, which were originally made for a printed magazine, to show in an iPhone app, like Xcode's PhotoScroller sample app (similar to the iPhone's native Photos viewer). I'm downsizing them to 1024 x 1536 px and I'm going to slice them up for use with UIScrollView and CATiledLayer.
When I'm resizing them, should I also convert them from CMYK to RGB?
I think so, because RGB is for digital, right? But they also looked fine on the iPhone as CMYK. Why do they say to use RGB for digital?
What's the best way to resize them to 1/2 & 1/4 and slice all 3 sizes up?
1024/4 = 256, so I'm thinking of making every tile (except the edge ones) 256 x 256 px. I tried Tile Cutter, which worked, but I have 20 images, so I'd have to run it 20 times. Plus, it doesn't generate multiple zoom levels, so I'd also have to resize each image twice in Photoshop, which makes 60 images to run through the cutter. It shouldn't take too long, but odds are I'll be doing this again, so I'd like a better solution (see the sketch below). Ideally it'd be cool to do this on the iPhone itself, but for now I think I'll use Paul Alexander's Tile Ruby script unless you suggest a better option. I might also try Zoomify.
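For reference, the resize-and-slice step is easy to script. A minimal sketch with Pillow; the output naming, the three zoom levels, and the 256 px tile size are assumptions taken from the question:

    import os
    from PIL import Image

    def make_tiles(path, out_dir, tile=256):
        """Build 1/1, 1/2 and 1/4 scale levels and slice each into tiles."""
        src = Image.open(path)
        for level, scale in enumerate([1, 2, 4]):
            img = src.resize((src.width // scale, src.height // scale))
            for y in range(0, img.height, tile):
                for x in range(0, img.width, tile):
                    box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
                    name = f"level{level}_{x // tile}_{y // tile}.png"
                    img.crop(box).save(os.path.join(out_dir, name))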
RGB has a wider range of colors than CMYK.
CMYK is the range of colors that can be printed on white paper. It stands for Cyan, Magenta, Yellow, and blacK (think of the four colors in your printer cartridges: CMY for the colors and K for black).
Mixing CMY gives you a very, very dark grey. Each component goes on a scale of 0-100%.
RGB is how monitors make color; it's the way LCDs and CRTs process it, with Red, Green, and Blue. Each component goes on a scale of 0-255, and 255 for all three makes white.
Since monitors are backlit, they can make bright colors printers can't, like shiny green or shiny pink.
A CMYK picture will look fine on screen. An RGB one will lose color in print (those shiny greens will become matte).
For the iPhone, work in RGB. Reasons:
- It processes RGB values directly
- You'll get accurate colors
- RGB takes less memory than CMYK (three channels instead of four)
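For what it's worth, the usual naive CMYK-to-RGB conversion (ignoring the ICC profiles a real print workflow would use) is just:

    def cmyk_to_rgb(c, m, y, k):
        """Naive conversion: CMYK in 0..1 each, RGB out in 0..255."""
        r = 255 * (1 - c) * (1 - k)
        g = 255 * (1 - m) * (1 - k)
        b = 255 * (1 - y) * (1 - k)
        return round(r), round(g), round(b)

    print(cmyk_to_rgb(1.0, 0.0, 0.0, 0.0))  # pure cyan -> (0, 255, 255)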

Histogram equalization with color correction (iPhone/objective-C)

I am trying to implement a histogram equalization (HE) method for a UIImage in my iPhone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
1. Grab the UIImage data and convert it from RGB to HSL
2. Apply HE on the luminance channel
3. Convert the data back to RGB
4. Create a new UIImage from the data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24-bit data differently, given that I have no idea what kind of image will be used with my app? Or can I assume 24-bit for images on the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!
Yes, you can do it this way; that will work. It will "cost more" since you have to do the conversion back and forth, but that's the price you pay if you don't want to affect the hue and saturation. Whether it's worth it depends on your application: are you OK with a hit in performance in exchange for the best quality? You will likely only have to deal with 8-bit color components; you can assume "24-bit" for images, but that's 3 x 8-bit components. The only way to know for sure, though, is to try.
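A minimal sketch of those four steps in Python/numpy terms, using HSV instead of HSL for brevity (an Objective-C version would walk the bitmap the same way; assumes a non-constant image):

    import numpy as np
    from PIL import Image

    def equalize_value_channel(img):
        """Histogram-equalize only the V channel of HSV; hue and saturation stay put."""
        hsv = np.asarray(img.convert("HSV"))
        h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
        hist = np.bincount(v.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0].min()
        # classic HE mapping: stretch the CDF over the full 0..255 range
        lut = np.clip((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min), 0, 255).astype(np.uint8)
        out = np.stack([h, s, lut[v]], axis=-1)
        return Image.fromarray(out, "HSV").convert("RGB")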
I recommend using the YUV color space, both for accuracy and for computational simplicity (it's a linear combination of RGB).
One method would be to apply histogram equalization to all three channels of the RGB image (call the result Image2).
Then let the user choose: apply it to luminosity only, or to all three channels.
For the first choice, take the U and V channels of the original image together with the Y channel of the equalized image and convert back to RGB (see the sketch below).
For the second choice, just give the user Image2.
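The "luminosity only" branch, sketched with OpenCV's YUV conversion (function and variable names are made up for illustration):

    import cv2

    def equalize_luminosity_only(bgr):
        """Equalize all three channels, then keep only the equalized Y;
        U and V come from the original image."""
        image2 = cv2.merge([cv2.equalizeHist(ch) for ch in cv2.split(bgr)])
        yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
        yuv[..., 0] = cv2.cvtColor(image2, cv2.COLOR_BGR2YUV)[..., 0]
        return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)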
Since after the transformation you deal with the intensity/value channel as continuous values, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore speed this up by using fewer bins.
I first wrote code applying HE to each of the RGB components. Although there is more computation with three components, the speed is OK. In most cases the contrast is improved, but the "look" of the image changes. So I agree it's better to transform RGB into another color space and then apply HE. I am still looking for the formula and the right color space for HE; which color space is easiest?
I implemented HE on the iPad, but I found that after opening a big image taken with my Canon, the whole program crashes after the UIPopoverController and UIImagePickerController calls. I think it may be because I'm pushing too hard on memory: iOS allocates only a limited amount of memory to each app, and if an app uses more than that, iOS kills it right away. So take care with the size of the input image, release unused memory, and watch for leaks. Checking for leaks with Xcode's Instruments tool is a must.

Histogram of image

I have two images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has intensities only in the lowest and highest bins. Why would this be? Wouldn't the second image then appear binary (that's not the case)?
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
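A quick numpy illustration of two images with the same average brightness but completely different histograms:

    import numpy as np

    flat = np.full((100, 100), 128, dtype=np.uint8)                  # uniform mid-gray
    checker = (np.indices((100, 100)).sum(axis=0) % 2 * 255).astype(np.uint8)

    for img in (flat, checker):
        hist = np.bincount(img.ravel(), minlength=256)
        print("mean:", img.mean(), "occupied bins:", np.flatnonzero(hist))
    # both means are ~128, but the occupied bins are [128] vs [0, 255]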
Never trust your eyes! They will always lie to you.
Consider this silly example, which is illustrative here: an X-ray 'photo' is nothing more than black and white dots, but since the dots are small and mixed throughout the image, your eyes see different shades of gray.
The same can happen in a digital image: although the pixels may all be the same size, they can be black and white yet 'distributed' in the image in such a way that you see more gray levels. This is called halftoning.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that's not visible to the naked eye.