Histogram equalization with color correction (iPhone/Objective-C)

I am trying to implement a histogram equalization method (HE) for a UIImage in my iPhone app.
I read the following:
http://en.wikipedia.org/wiki/Histogram_equalization
But it says:
Still, it should be noted that applying the same method on the Red, Green, and Blue components of an RGB image may yield dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab color space, or HSL/HSV color space in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to the hue and saturation of the image.
So would this be a feasible approach?
1. Grab the UIImage data and convert from RGB to HSL
2. Apply HE on the luminance channel
3. Convert the data back to RGB
4. Create a new UIImage from the data
Will this be slow, I wonder? Also, will I have to deal with 8/16/24-bit data differently, given that I have no idea what kind of image will be used with my app? Or can I assume 24-bit images on the iPhone?
I would appreciate any pointers to objective-C code that does color corrected histogram equalization.
I have looked at the library below, but it does not do any color correction for HE:
http://code.google.com/p/simple-iphone-image-processing/source/browse/#svn/trunk/Classes%3Fstate%3Dclosed
Thanks!

Yes, you can do it this way, and it will work. It will "cost more" since you have to do the conversion back and forth, but that's the price you pay if you don't want to affect the hue and saturation. Whether that is worth it for the images you're correcting depends on your application: are you OK with a hit in performance in exchange for the best quality? You will likely only have to deal with 8-bit color components; you can assume "24 bit" for images, but that is 3 x 8-bit components. The only way to know for sure, though, is to try.

I recommend using the YUV colorspace, both for accuracy and for computational simplicity (it is a linear combination of RGB).
One method would be to apply the histogram equalization to the RGB image (call the result Image2).
Then let the user choose what he wants: apply it only to the luminosity, or to all 3 channels.
For the first choice, take the UV channels of the original image together with the Y channel of the equalized image and convert back to RGB.
For the second choice, just give the user Image2.
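Not Objective-C, but here is a minimal sketch of that idea in Python/NumPy (function names and the simplified luma/chroma-difference transform are mine, assuming an 8-bit RGB image); the same logic translates directly to a raw pixel buffer on the iPhone:
import numpy as np

def equalize_channel(ch):
    # Histogram-equalize a single uint8 channel via its cumulative distribution.
    hist = np.bincount(ch.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[ch].astype(np.uint8)           # map each pixel through the CDF

def equalize_luminance(rgb):
    # Equalize only the luma; keep the chroma differences untouched.
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b     # BT.601 luma
    u = b - y                                 # simple chroma differences
    v = r - y
    y_eq = equalize_channel(np.clip(y, 0, 255).astype(np.uint8)).astype(np.float32)
    r2 = y_eq + v
    b2 = y_eq + u
    g2 = (y_eq - 0.299 * r2 - 0.114 * b2) / 0.587
    out = np.stack([r2, g2, b2], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
Applying equalize_channel to each of R, G and B separately gives the "all 3 channels" option, at the cost of possible color shifts.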

Since after the transformation you deal with I/V as a continuous value, you will have to apply some binning strategy, which results in a stepped histogram for the quantity you wish to equalize. You might therefore be able to speed this up by reducing the number of bins.

I just wrote the code applying HE to each of the RGB components. Although there is a lot of calculation for the 3 components, the speed is OK. In most cases the contrast is improved, but the "look" of the image changes. So I agree it is better to transform the RGB into another space and then apply the HE there. I am still looking for the formula and the most suitable color space for HE. Which color space is easiest?
I wrote the HE on the iPad platform, but I found that after opening a big image taken with my Canon, the whole program crashes after the UIPopoverController / UIImagePickerController calls. I think it may be because I am pushing the OS too hard, or because the OS allocates only a limited amount of memory for each app; if an app uses more than that, iOS just kills it right away. So you must take care of the size of the input image, release unused memory, and watch for leaks. Checking for leaks with Xcode's Instruments tool is a must.


artifacts in processed images

This question is related to my previous post, Image Processing Algorithm in Matlab, on Stack Overflow, for which I already got the results I wanted.
But now I am facing another problem and getting some artifacts in the processed images. In my original images (a stack of 600 images) I can't see any artifacts; please see the original image from the fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they come from the camera's sensor, why can't I see them in my original images? Any idea?
Edit:
I have added the following code suggested by @Jonas. It reduces the artifacts, but does not completely remove them.
% averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im,D{i}(:,:));
end
im = im/100;
imshow(im,[]);
% subtract the average from each image
for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:),im(:,:));
end
@belisarius has asked for more images, so I am going to upload 4 images of my finger with the speckle pattern and 4 images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may want to post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since CMOS sensors have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can be amplified more than from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; %(maybe you need to re-cast the meanImg first)
Here is an answer that, in my opinion, will remove the lines more gently than the methods mentioned above:
im = imread('image.png'); % Original image
imFiltered = im; % The filtered image will end up here
imChanged = false(size(im));% To document the filter performance
% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)),0:255);
colMean = mean(histograms,2);
colStd = std(histograms,0,2);
% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel - that is, columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:)>colMean(grayLevel+1)+3*colStd(grayLevel+1));
    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if(~isempty(lineColumns))
        for col = lineColumns
            imFiltered(:,col) = im(:,col).*uint8(~(im(:,col)==grayLevel));
            imChanged(:,col) = im(:,col)==grayLevel;
        end
    end
end
imshow(imChanged)
figure,imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
This is a side-effect because your processing reduced the dynamic range of the picture, just like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with this problem before the user ever saw or got their hands on the image. A couple of questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is it simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is the point to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are two reasons for lines like these to show up in an image. One would be a bright scene where the op-amps for the columns get saturated, causing whole columns of your image to take on the brightest value the camera can output. Another reason could be bad op-amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, since there is normally essentially one ADC for the whole sensor, which would make the whole image bad; that is not your case). The saturation case is actually much more difficult to deal with, and I don't think it is your problem. Note: too much saturation on a sensor can cause bad pixels and columns to arise in your sensor (which is why they say never to point your camera at the sun).
The bad-column problem can be dealt with. In another answer above, someone had you averaging images. While this may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are (and you would have to average while pointing the camera at black, white, and essentially solid colors), it isn't the correct way to get rid of them. By the way, what I am describing with the black and white frames, averaging, finding bad pixels and so on is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns, and even which single pixels, are bad.
If you have this data, one way that you could erase the columns is:
for each bad column
    for each pixel (x, y) on the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace each bad pixel with a new pixel that is the average of the 6 remaining good pixels around it. The above is an over-simplified version of the algorithm. There are certainly cases where a single bad pixel could be right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One could imagine that you would calculate the values for a bad column, then consider that column good in order to move on to the next bad column, and so on.
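For illustration only (this is not the poster's code), here is a small Python/NumPy version of that averaging, assuming you already have a list of bad column indices from the calibration step:
import numpy as np

def repair_bad_columns(img, bad_cols):
    # Replace each pixel in a bad column with the average of the six
    # neighbours in the adjacent columns, as described above.
    out = img.astype(np.float32)
    h, w = img.shape
    for c in bad_cols:
        left = max(c - 1, 0)             # crude clamping at the image border
        right = min(c + 1, w - 1)
        for y in range(h):
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            neighbours = [out[y0, left], out[y, left], out[y1, left],
                          out[y0, right], out[y, right], out[y1, right]]
            out[y, c] = sum(neighbours) / len(neighbours)
    return out.astype(img.dtype)
As the answer notes, adjacent bad columns, or bad pixels next to a bad column, would need extra handling before the neighbours can be trusted.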
Now, the reason I asked about RAW versus B/W or RGB: if you were processing a RAW image, then depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you could detect this, then you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo and you shot it in RAW, then you may be able to calculate your own grayscale pixels. Many sensors, when returning a grayscale image from an RGB sensor, simply pass back the green pixel as the overall pixel, because it effectively serves as the luminance of the image. This is why most image sensors implement two green sub-pixels for every red or blue sub-pixel. If this is what your sensor is doing (not ALL sensors do this), then you may have better luck getting rid of just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
Apologies for the long-winded answer, but I hope this is still informative to someone :-)
Since you cannot see the lines in the original image, they are either there with a low intensity difference compared to the original range of the image, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option... (unless you have an algorithm that processes each row separately).
It seems like your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtract it from the photo of the finger before any other processing. Make sure the background is uniform before taking both images.
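A hedged sketch of that background subtraction in Python/NumPy (the file names are placeholders); the cast to a signed type avoids uint8 wrap-around where the background is brighter than the finger image:
import numpy as np
from PIL import Image

finger = np.asarray(Image.open('finger.png'), dtype=np.int16)          # placeholder file names
background = np.asarray(Image.open('background.png'), dtype=np.int16)

# Subtract the fixed column pattern and clip instead of wrapping around.
corrected = np.clip(finger - background, 0, 255).astype(np.uint8)
Image.fromarray(corrected).save('corrected.png')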

Perl - Ratio of homogeneous areas of an image

I would like to check whether an image has a lot of homogeneous areas. Therefore I would like to compute some kind of value for an image that expresses a ratio depending on the amount/size of its homogeneous areas (e.g. the value could range from 0 to 5).
Instead of a value, some kind of classification would work as well.
[many homogeneous areas -> value/class 5 ; few homogeneous areas -> value/class 0]
I would like to do this in Perl. Is there a package/function or something like that?
What you want seems to be an area of image processing research which I am not familiar with. However, GraphicsMagick's mogrify utility has a -segment option:
Use -segment to segment an image by analyzing the histograms of the color components and identifying units that are homogeneous with the fuzzy c-means technique. The scale-space filter analyzes the histograms of the three color components of the image and identifies a set of classes. The extents of each class is used to coarsely segment the image with thresholding. The color associated with each class is determined by the mean color of all pixels within the extents of a particular class. Finally, any unclassified pixels are assigned to the closest class with the fuzzy c-means technique.
I don't know if this is of any use to you. You might have to hit the library on this one and read some research. You have access to this through PerlMagick as well; however, it does not look like it gives access to the internals, but just produces an image based on the parameters.
In my tests (without really understanding what the parameters do), photos turned entirely black, whereas PNG images with large areas of similar colors were reduced to a sort of average color. Whether you can use that fact to develop a measure is an open question I am not going to investigate ;-)
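If full segmentation turns out to be more than you need, one rough alternative is to score the fraction of pixels whose local standard deviation is small and scale it to the 0-5 range mentioned in the question. Sketched here in Python rather than Perl (the window size and threshold are arbitrary), just to show the idea:
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def homogeneity_score(path, window=9, std_threshold=5.0):
    # Returns 0..5: 5 = mostly flat/homogeneous, 0 = highly textured.
    gray = np.asarray(Image.open(path).convert('L'), dtype=np.float32)
    mean = uniform_filter(gray, window)
    mean_sq = uniform_filter(gray * gray, window)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    flat_fraction = np.mean(local_std < std_threshold)   # share of "flat" pixels
    return 5.0 * flat_fraction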

Count black spots in an image - iPhone - Objective C

I need to count the number of black spots in an image (not the percentage of black spots, but the count). Can anyone suggest a step-by-step procedure used in image manipulation to count the spots?
Objective: Count black spots in an image
What I've done so far:
1. Converted image to grayscale
2. Read the pixels for their intensity values
3. Set a threshold to find darker areas
Other implementations:
1. Gaussian blur
2. Histogram equalisations
What I have looked at:
Flood fill algorithms, Water shed algorithms
Thanks a lot..
You should first "label" the image, then count the number of labels you have found.
The label operation is the first step of a blob analysis: it groups similar adjacent pixels into a single object and assigns a value to this object. The condition for grouping is generally a background/foreground distinction: the label operation groups adjacent pixels which are part of the foreground, where the background is defined as pure black or pure white and the foreground is any pixel whose color is not the background color.
The label operation is pretty easy to implement and does not require many resources.
(See the Wikipedia article, or this page, for more information on labelling. A good paper on the implementation of the label operation is "Two Strategies to Speed up Connected Component Labeling Algorithms" by Kesheng Wu, Ekow Otoo and Kenji Suzuki.)
After labelling, count the number of labels (you can even count them while labelling), and you have the number of "black spots".
The next step is defining what a black spot is: converting your input image into a grayscale image (by converting it to HSL and using the luminance plane, for example) and then applying a threshold should do it. If the illumination of your input image is not even, you may need a better thresholding algorithm (a form of adaptive threshold).
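Not iPhone code, but here is a minimal Python/SciPy sketch of the threshold-then-label pipeline described above (the threshold and the minimum spot size are arbitrary):
import numpy as np
from PIL import Image
from scipy.ndimage import label

gray = np.asarray(Image.open('spots.png').convert('L'))   # placeholder file name
foreground = gray < 60                    # dark pixels are candidate spots
labels, num_spots = label(foreground)     # group adjacent foreground pixels into blobs

sizes = np.bincount(labels.ravel())[1:]          # pixels per blob (index 0 is background)
num_spots = int(np.count_nonzero(sizes >= 10))   # ignore tiny specks
print('black spots:', num_spots)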
It sounds like you want to label the black spots (blobs) using a binary image labelling algorithm. This should give you a place to start.

Transparency with JPEGs

JPEGs are smaller in size than PNGs. So I thought that if I could make a specific region of a JPEG file transparent with some code, maybe I could save some bytes.
So does anyone know how to achieve this with, for example, PHP or JavaScript?
No. You can't do this. JPGs do not support alpha channels and have no capacity to designate certain colors as transparent either (GIF-style).
There are several issues with this, all of which have to do with JPEG being a lossy compression format. The JPEG format is optimized for natural images, and sharp edges get blurred. If you intend a specific pixel to have the value #d67fff, there's no guarantee that after color conversion, FDCT, quantization, IDCT and the color conversion back, the pixel will still have that value. There's also a strong possibility that that pixel value will occur in areas where you don't want it.
No. JPEG does not support transparency and is not likely to do so any time soon.
http://www.faqs.org/faqs/jpeg-faq/part1/section-12.html
You cannot do that; the client renders the image and doesn't know that you want it to treat that color as transparent (plus the various compression methods in JPEG wouldn't work well with transparency anyway).
I believe you can go with an 8-bit custom-palette PNG, which should save you a lot of space. Otherwise, 24-bit PNG is your only high-color option.
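For reference, a quick sketch of that palette reduction with Python/Pillow (file names are placeholders; how well it looks depends heavily on the image):
from PIL import Image

img = Image.open('input.png').convert('RGBA')
# Reduce to a 256-colour palette; FASTOCTREE also quantizes the alpha channel.
paletted = img.quantize(colors=256, method=Image.FASTOCTREE)
paletted.save('output-8bit.png', optimize=True)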
You can convert your image to an SVG containing the color information as a JPEG and the alpha channel as a grayscale mask. Here is a tool I wrote to do it: https://github.com/igrmk/transpeg
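The linked tool does this automatically; conceptually, the split looks something like this Python/Pillow sketch (file names are placeholders, and the SVG wrapping itself is omitted):
from PIL import Image

rgba = Image.open('input.png').convert('RGBA')
color = rgba.convert('RGB')      # colour data, stored lossily as JPEG
alpha = rgba.split()[3]          # 8-bit alpha channel as a grayscale image

color.save('color.jpg', quality=85)
alpha.save('mask.png')
# An SVG can then reference color.jpg and use mask.png as a luminance mask,
# which is roughly what transpeg produces.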

How does one embed a file inside of an image? iOS iPhone

There is an app on the App Store called Active Photo (http://itunes.apple.com/us/app/active-photo/id366798464?mt=8) that allows you to embed a hidden image or .exe file inside an image. I would like to know how to do this with respect to adding images to images, kind of like sub-images inside the original image.
I've been looking into metadata, but no tag seems to be big enough to hold an NSData representation of the second picture.
How would one go about adding any type of file to an image, either through embedding or metadata, so that the image can be sent through email and/or text message and still retain the data?
Thank you.
This is known as steganography.
I would imagine the simplest way of hiding a file inside a JPEG image is to alter its pixel data in such a way that the compression doesn't damage it but the change is subtle enough that an interceptor can't detect the hidden data.
I don't think that is possible with JPEG, because its lossy compression would end up corrupting the embedded file. PNG, however, uses Deflate compression, which is lossless.
I have started writing a program like this. The idea is to hide bytes of data by splitting them into the least significant bits of the pixels' color channels. Let me give some examples.
An RGB-8 image represents a pixel with 3 bytes: one for red, one for green and one for blue. I store 3 bits in the red channel, 2 in the green (the human eye is more sensitive to green) and 3 in the blue, so I embed one byte per pixel. Similarly, with an RGBA-8 image I do 2-2-2-2. This of course involves some bitwise operations.
Things become more interesting with RGB(A)-16 images, where there are two bytes per channel. I use the entire least significant byte of every channel with minimal distortion (worst case 255 / 65535 = ~0.4%) and store up to 3 or 4 bytes of data per pixel. Not bad!
Moreover, there are no complex bitwise operations in this case; a single assignment does the job.
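For illustration only (this is not the poster's code), here is a minimal NumPy sketch of the 3-2-3 split for RGB-8 pixels described above:
import numpy as np

def embed_byte(pixel, byte):
    # Hide one byte in an RGB-8 pixel: 3 bits in R, 2 in G, 3 in B.
    r, g, b = int(pixel[0]), int(pixel[1]), int(pixel[2])
    r = (r & ~0b111) | (byte >> 5)            # top 3 bits of the byte
    g = (g & ~0b011) | ((byte >> 3) & 0b11)   # next 2 bits
    b = (b & ~0b111) | (byte & 0b111)         # low 3 bits
    return np.array([r, g, b], dtype=np.uint8)

def extract_byte(pixel):
    # Recover the byte hidden by embed_byte().
    r, g, b = int(pixel[0]), int(pixel[1]), int(pixel[2])
    return ((r & 0b111) << 5) | ((g & 0b11) << 3) | (b & 0b111)
A PNG round-trip preserves these bits exactly because the compression is lossless; saving as JPEG would destroy them.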
There is a lot of room for improvement. I thought of asking the user for a password, hashing it and seeding a secure pseudo-random number generator with it, then no longer moving pixel by pixel but instead asking the generator for a new random index each time.
The drawback of this solution is that the more data has already been embedded, the slower it becomes, because the generator will return more and more already-occupied indices. But it is much more secure this way. To make it even safer, I thought of introducing noise data into the untouched pixels, in order to hide the positions of the true data.
As you can see, you can do a lot with PNG images! If you are interested, I can share the code I have written so far.