Image Enhancement using a combination of SVD and Wavelet Transform - MATLAB

My objective is to handle illumination and expression variations on an image. So I tried to implement MATLAB code that works with only the important information within the image, in other words, only the "USEFUL" information. To do that, it is necessary to remove all the unimportant information from the image.
Reference: this paper
Let's see my steps:
1) Apply histogram equalization to get histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.
2) Apply SVD approximations on the histo_equalized_image. Before doing that, I apply the SVD decomposition [L, D, R] = svd(histo_equalized_image); the singular values are then used to build the derived image J = L * power(D, i) * R', where i varies between 1 and 2.
3) Finally, the derived image is combined with the original image as C = (MyGrayImage + a*J) / (1 + a), where a varies from 0 to 1.
4) But all the steps above are not able to perform well under varying conditions, so finally a wavelet transform should be used to handle those variations (we use only the LL image block). The low-frequency component contains the useful information; also, the unimportant information gets lost in this component. The (LL) component is therefore insensitive to illumination changes and expression variations.
I wrote MATLAB code for that, and I would like to know if my code is correct or not (and if not, how to correct it). Furthermore, I am interested to know if I can optimize these steps. Can we improve this method? If yes, how? I need help, please.
Now let's see my MATLAB code:
% Read the RGB image (avoid naming the variable "image", which shadows a built-in)
rgb_image = imread('img.jpg');
% Convert it to grayscale
image_gray = rgb2gray(rgb_image);
% Convert it to double
image_double = im2double(image_gray);
% Apply histogram equalization
histo_equalized_image = histeq(image_double);
% Apply the SVD decomposition
[U, S, V] = svd(histo_equalized_image);
% Calculate the derived image (power() is elementwise, so the zero
% off-diagonal entries of S stay zero)
P = U * power(S, 5/4) * V';
% Linearly combine both images; everything is already double, so no single()
% cast is needed. Note: step 3 above says to combine with the ORIGINAL image,
% so image_double may be intended here instead of histo_equalized_image.
J = (histo_equalized_image + (0.25 * P)) / (1 + 0.25);
% Apply the DWT
[c, s] = wavedec2(J, 2, 'haar');
a1 = appcoef2(c, s, 'haar', 1); % I need only the LL block

You need to define what you mean by "USEFUL" or "important" information, and only then take these steps.
Histogram equalization is a global transformation, which gives different results on different images. You can run an experiment: apply histeq to an image that benefits from it. Then make two copies of the original image, draw a black square (30% of the image area) in one and a white square in the other, apply histeq to each, and compare the results.
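Here is a minimal sketch of that experiment; the test image (pout.tif, shipped with the Image Processing Toolbox) and the placement of the squares are my assumptions:
% The same global histeq gives different results once the histogram
% is dominated by a large uniform square.
I = imread('pout.tif');                  % an image that benefits from histeq
[h, w] = size(I);
side = round(sqrt(0.30 * h * w));        % square covering ~30% of the area
Iblack = I; Iblack(1:side, 1:side) = 0;  % copy with a black square
Iwhite = I; Iwhite(1:side, 1:side) = 255; % copy with a white square
figure;
subplot(2,3,1), imshow(I),              title('original');
subplot(2,3,2), imshow(Iblack),         title('black square');
subplot(2,3,3), imshow(Iwhite),         title('white square');
subplot(2,3,4), imshow(histeq(I)),      title('histeq original');
subplot(2,3,5), imshow(histeq(Iblack)), title('histeq black');
subplot(2,3,6), imshow(histeq(Iwhite)), title('histeq white');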
"The low-frequency component contains the useful information; also, the unimportant information gets lost in this component."
Really? Edges and shapes, which are (at least for me) quite important, live in the high frequencies. Again, we need a definition of "useful" information.
I cannot see the theoretical background for why and how your approach would work. Could you explain a little why you chose this method?
P.S. I'm not sure if these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.

Related

Image Parameters (Standard Deviation, Mean and Entropy) of an RGB Image

I couldn't find an answer for RGB images.
How can someone get the mean, SD, and entropy of an RGB image using MATLAB?
From http://airccse.org/journal/ijdms/papers/4612ijdms05.pdf (TABLE 3), it seems he got a single value per image, so did he take the average of the RGB values?
I'm really in need of any help.
After reading the paper: because you are dealing with colour images, you have three channels of information to access. This means that you could alter one of the channels of a colour image and it could still affect the information it's trying to portray. The author wasn't very clear on how they obtained just a single value to represent the overall mean and standard deviation. Quite frankly, because this paper was published in a no-name journal, I'm not surprised they managed to get away with it. If this had been submitted to a better-known venue (IEEE, ACM, etc.), it would probably have been rejected outright due to that very ambiguity.
As for how I interpret this procedure, averaging the three channels doesn't make sense, because you want to capture the differences over all channels, and averaging smears that information so those differences get lost. Practically speaking, if you averaged all the channels and one channel changed its intensity by 1 at a single pixel, the reported average would change by only 1/(3·W·H), which would probably not register as a meaningful difference.
In my opinion, what you should perhaps do is treat the entire RGB image as a 1D signal, then compute the mean, standard deviation and entropy of that signal. Given an RGB image stored in image_rgb, you can unroll the entire image into a 1D array like so:
image_1D = double(image_rgb(:));
The double casting is important because you want to maintain floating point precision when calculating the mean and standard deviation. Images are usually stored as an unsigned integer type, and without the cast, intermediate calculations can saturate or get clamped at the limits of that data type, so you won't get the right answer. With the cast in place, you can calculate the mean, standard deviation and entropy like so:
m = mean(image_1D);
s = std(image_1D);
e = entropy(image_1D);
entropy is a function in MATLAB that calculates the entropy of images, so you should be fine here. As noted by @CitizenInsane in his answer, entropy unrolls a grayscale image into a 1D vector and applies the Shannon definition of entropy to it. By the same token, you can do the same thing with an RGB image; we have already unrolled the signal into a 1D vector anyway, so that vector is exactly what entropy expects.
I have no idea how the author actually did it. But what you could do is treat the image as a 1D array of size W×H×3 and then simply calculate the mean and standard deviation.
I don't know if Table 3 was obtained in the same way, but at least judging by the entropy routine in MATLAB's Image Processing Toolbox, the RGB values are vectorized into a single vector:
I = imread('rgb.jpg');  % read the RGB image (a file extension is needed here)
I = I(:);               % vectorize the RGB values
p = imhist(I);          % histogram over all channel values
p(p == 0) = [];         % remove zero entries in p
p = p ./ numel(I);      % normalize p so that sum(p) is one
E = -sum(p .* log2(p));
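A quick sanity check, assuming the Image Processing Toolbox is available: the toolbox function should return the same value as the manual computation above, since it vectorizes internally in the same way.
Irgb = imread('rgb.jpg');  % same image as above
E_toolbox = entropy(Irgb); % should match E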

Compute the combined image of SVD perturbations

I know how to generate a combined image:
STEP1: I = imread('image.jpg');
STEP2: Ibw = single(im2double(I));
STEP3: [U S V] = svd(Ibw); % where U and V are the left and right odd vectors, respectively, and S the diagonal matrix of particular values
% calculate derived image
STEP4: P = U * power(S, i) * V'; % where i is between 1 and 2
%To compute the combined image of SVD perturbations:
STEP5: J = (single(I) + (alpha*P))/(1+alpha); % where alpha is between 0 and 1
So by integrating P into I, we get a combined image J which keeps the main information of the original image and is expected to be more robust against minor changes of expression, illumination, and occlusion.
I have some questions:
1) I would like to know in detail: what is the motivation for applying STEP3, and what are we perturbing here?
2) In STEP3, what was meant by "particular values"?
3) Can the derived image P also be called "the perturbed image"?
Any help will be much appreciated!
This method originated from this paper that can be accessed here. Let's answer your questions in order.
If you want to know why this step is useful, you need a bit of theory about how the SVD works. SVD stands for Singular Value Decomposition. What the SVD does is transform your N-dimensional data so that its dimensions are ordered by how much variation they exhibit, in decreasing order (SVD experts and math purists... don't shoot me, this is how I understand the SVD). The singular values, in this particular context, give you a weighting of how much each dimension of your data contributes to the overall decomposition.
Therefore, by applying that particular step (P = U * power(S, i) * V';), you are giving more emphasis to the "variation" in your data: raising the singular values to a power i > 1 amplifies the dominant ones relative to the small ones, so the most important features in your image will stand out while the unimportant ones will "fade" away. This is really the only rationale that I can see behind why they're doing this.
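Here is a small numerical sketch of that re-weighting effect; the test image and the exponent are my choices, not from the paper:
A = im2double(imread('cameraman.tif')); % grayscale test image (an assumption)
[U, S, V] = svd(A);
i = 5/4;                                % an exponent between 1 and 2
P = U * power(S, i) * V';               % power() is elementwise; zeros stay zero
sv = diag(S);                           % the singular values
% The ratio between a large and a small singular value grows after the
% re-weighting, i.e. dominant components get relatively more emphasis:
ratio_before = sv(1) / sv(10)
ratio_after  = sv(1)^i / sv(10)^i       % strictly larger than ratio_before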
The "particular" values are the singular values. These values are part of the S matrix and those values appear in the diagonals of the matrix.
I wouldn't call P the derived image, but an image that locates which parts of the image are more important in comparison to the rest. By mixing this with the original image, the features that you should concentrate on are emphasized, while the parts of the image that most people wouldn't pay attention to get de-emphasized in the overall result.
I would recommend you read that paper that you got this algorithm from as it explains the whole process fairly well.
Some more references for you
Take a look at this great tutorial on the SVD here. Also, this post may answer more questions regarding the inner workings of the algorithm.

Weighted Lucas Kanade - Gaussian Function MATLAB

I implemented the basic Lucas-Kanade optical flow algorithm in MATLAB, using the algorithm from Wikipedia.
Since I want to improve this basic optical flow algorithm, I tried adding a weighting function which makes certain pixels in the neighbourhood more or less important (see also Wikipedia).
I basically calculated the following weight for the center pixel and every neighbourhood pixel:
sigma = 10;
weight(s) = (1/(2*pi*sigma^2)) * exp(-((first-x)^2+(second-y)^2)/(2*sigma^2))
x, y is the center pixel; it always stays the same.
first, second is the current neighbourhood pixel.
Since I am using a 5x5 neighbourhood, (first-x) and (second-y) will always be one of: -2, -1, 0, 1, 2.
I then apply the weight values to each term of the sums (see the sketch below).
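For reference, here is a minimal sketch of the 5x5 weight matrix this produces (the final normalization is my addition, not from the post):
sigma = 10;
[dx, dy] = meshgrid(-2:2, -2:2); % the offsets (first - x) and (second - y)
W = (1/(2*pi*sigma^2)) * exp(-(dx.^2 + dy.^2) / (2*sigma^2));
W = W / sum(W(:));               % normalize so the weights sum to 1
% Each term of the Lucas-Kanade sums (Ix.^2, Ix.*Iy, Iy.^2, Ix.*It, Iy.*It)
% is multiplied by the corresponding entry of W.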
Problem: with sigma = 10 I don't get a better result for the optical flow than without the weighting function. With smaller sigmas it's not better either; in the end there is no difference between the output vectors with or without the Gaussian function.
Is there a way to improve this Gaussian function to actually make the vectors more accurate than without weighting?
Thank you sooo much.
I'm not sure how you apply the values, but weighting usually does make a small difference. Note that with sigma = 10 a 5x5 Gaussian is essentially flat (the corner weight is exp(-8/200) ≈ 0.96 of the center weight), so after normalization it is almost identical to uniform weighting; a sigma around 1 would give a noticeably peaked window.
For a better optical flow you could:
pre-smooth the images with a Gaussian (see the sketch after this list)
use a spatiotemporal Lucas-Kanade method
or use a more advanced algorithm
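A minimal sketch of the first suggestion; the frame names I1, I2 and the kernel parameters are assumptions:
% Pre-smooth both frames before computing the Lucas-Kanade derivatives.
G   = fspecial('gaussian', [5 5], 1); % 5x5 Gaussian kernel, sigma = 1
I1s = imfilter(I1, G, 'replicate');   % I1, I2: consecutive grayscale frames
I2s = imfilter(I2, G, 'replicate');
% Then compute Ix, Iy and It from I1s and I2s as in the basic algorithm.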

Matlab image centroid simulation

I was given this task; I am a noob and need some pointers to get started with centroid calculation in MATLAB:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise, and plot the intensities. The position of the centroid changes due to the noise, and I need to bring it back to its original position by: a clipping level to get rid of the noise, noise reduction by clipping or smoothing, a sliding average (a low-pass averaging filter over 3-5 samples), calculating the means, or using a convolution filter kernel, i.e. matrix operations on the 2-D image.
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked: give you pointers, and you'll have to read the related documentation:
a) to produce a 2-D Gaussian, use meshgrid or ndgrid
b) to add noise to the image, look into rand, randn or randi, depending on what exactly you need
c) to plot the intensities, use imagesc
d) to find the centroid there are several ways; search SO further and you'll find many discussions, and you can also check the TMW File Exchange for different implementations. A small end-to-end sketch of a)-d) follows.
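A minimal end-to-end sketch of these pointers; the grid size, noise level and clipping threshold are assumptions you will want to tune:
[X, Y] = meshgrid(linspace(-5, 5, 101));     % a) grid for a 2-D Gaussian
G = exp(-(X.^2 + Y.^2) / 2);                 %    centred at (0, 0)
N = G + 0.05 * randn(size(G));               % b) add random noise
imagesc(N); axis image; colorbar;            % c) plot the intensities
C = max(N - 0.1, 0);                         % d) clip the noise floor (threshold 0.1)
cx = sum(C(:) .* X(:)) / sum(C(:));          %    intensity-weighted centroid
cy = sum(C(:) .* Y(:)) / sum(C(:));
fprintf('centroid: (%.3f, %.3f)\n', cx, cy); % should land near (0, 0)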

MATLAB image processing HELP!

I am trying to find the area of some regions on an image.
[image: http://img821.imageshack.us/img821/7541/cell1.jpg]
For example, I want to find the area of the dark, large region on the upper left side, and I want to find the area of any of the closed regions in the image.
How can I do that in MATLAB?
I looked online and I tried regionprops(), but it didn't identify the different regions.
Filter your image using imfilter; use fspecial to define your filter. Then use an active contour model to segment the large objects (google 'active contour matlab'). Use the polygon area function (polyarea) to find the area of the enclosed contours.
I can recommend a few ways to do that:
a) Arithmetic mean filter:
f = imfilter(g, fspecial('average', [m n]));
b) Geometric mean filter:
f = exp(imfilter(log(g + eps), ones(m, n), 'replicate')) .^ (1/(m*n));
c) Harmonic mean filter:
f = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');
where m and n are the size of the mask (for instance m = 3, n = 3); the eps guards against log(0) and division by zero
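A short usage sketch of these three filters on a toolbox test image; the image and the 3x3 mask size are assumptions:
g = im2double(imread('coins.png')); % grayscale test image (an assumption)
m = 3; n = 3;
f_arith = imfilter(g, fspecial('average', [m n]));
f_geo   = exp(imfilter(log(g + eps), ones(m, n), 'replicate')) .^ (1/(m*n));
f_harm  = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');
figure;
subplot(1,4,1), imshow(g),       title('original');
subplot(1,4,2), imshow(f_arith), title('arithmetic mean');
subplot(1,4,3), imshow(f_geo),   title('geometric mean');
subplot(1,4,4), imshow(f_harm),  title('harmonic mean');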
To add to hkf's answer, you might want to apply some pre-processing to your image to make it easier to handle.
I think you're on the right track with noise reduction. Your contours look relatively easy to detect: maybe you could simply binarize your image, apply combinations of imdilate, imclose and imerode to take care of artifacts (this is mostly trial and error), then try detecting the contours. A sketch of that pipeline is below.
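A minimal sketch of that pipeline; the threshold choice and the structuring-element sizes are assumptions that need tuning per image:
I  = rgb2gray(imread('cell1.jpg')); % the image from the post
bw = im2bw(I, graythresh(I));       % binarize with Otsu's threshold
bw = imclose(bw, strel('disk', 3)); % close small gaps in the boundaries
bw = imerode(bw, strel('disk', 1)); % strip thin artifacts
stats = regionprops(~bw, 'Area');   % dark regions are 0 in bw, so invert
areas = [stats.Area];               % areas (in pixels) of the dark regions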
Then, of course, the challenge is to find a recipe that works for all images, and not just one sample.
I think you can use contour methods for this problem. Finally, you can extract the data with the help of a contour-data extraction function; search the MathWorks website and you will find one.