MATLAB image processing HELP!

I am trying to find the area of some regions on an image.
(image: http://img821.imageshack.us/img821/7541/cell1.jpg)
For example, I want to find the area of the large dark region on the upper left side,
and I want to find the area of any of the closed shapes in the image.
How can I do that in MATLAB?
I looked online and I tried regionprops(), but it didn't identify the different regions.

Filter your image using imfilter; use fspecial to define your filter. Then use an active contour model to segment the large objects (google 'active contour matlab'). Use a polygon area function such as polyarea to find the area of the enclosed contours.
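Recent MATLAB releases also ship a built-in activecontour in the Image Processing Toolbox. A minimal sketch, assuming a local copy of the linked image and a hand-placed initial mask (the mask coordinates are made up; adjust them to your region):
I = rgb2gray(imread('cell1.jpg'));   % hypothetical local copy of the linked image
mask = false(size(I));
mask(30:120, 30:120) = true;         % rough initial mask over the upper-left region (assumed coordinates)
bw = activecontour(I, mask, 300);    % evolve the contour for up to 300 iterations
area_px = nnz(bw)                    % area of the segmented region, in pixels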

I can recommend a few ways to do that:
a) Arithmetic mean filter:
f = imfilter(g, fspecial('average', [m n]))
b) Geometric mean filter
f = exp(imfilter(log(g), ones(m, n), 'replicate')) .^ (1/(m*n))
c) Harmonic mean filter
f = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');
where m and n are the size of the mask (for instance, you can set m = 3, n = 3).
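A runnable sketch of all three, assuming a local copy of the linked image (the eps guard against log(0) is the only addition to the formulas above; drop rgb2gray if the file is already grayscale):
g = im2double(rgb2gray(imread('cell1.jpg')));   % hypothetical local copy of the image
m = 3; n = 3;                                   % mask size
f_arith = imfilter(g, fspecial('average', [m n]));                           % arithmetic mean
f_geom  = exp(imfilter(log(g + eps), ones(m, n), 'replicate')) .^ (1/(m*n)); % geometric mean
f_harm  = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');        % harmonic mean
imshow([f_arith, f_geom, f_harm]);              % side-by-side comparison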

To add to hkf's answer, you might want to apply some pre-processing to your image to make it easier to handle.
I think you're on the right track with noise reduction. Your contours look relatively easy to detect - maybe you could simply binarize your image, apply combinations of imdilate, imclose and imerode to take care of artifacts (this is mostly trial and error), then try detecting the contours.
Then, of course, the challenge is to find a recipe that works for all images, and not just one sample.
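A hedged sketch of that pipeline (the threshold, structuring-element radius and filename are assumptions to tune; on releases before R2016a use im2bw(I, graythresh(I)) instead of imbinarize):
I = im2double(rgb2gray(imread('cell1.jpg')));  % hypothetical local copy; drop rgb2gray if already grayscale
bw = ~imbinarize(I);                 % threshold, then invert so dark regions become foreground
bw = imclose(bw, strel('disk', 2));  % bridge small gaps (radius is trial and error)
bw = imfill(bw, 'holes');            % make enclosed regions solid
stats = regionprops(bw, 'Area');
areas = [stats.Area]                 % area of each connected region, in pixels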

I think you can use contour methods for this problem. You can then extract the contour data with a contour-data extraction function; if you search the MathWorks site, you will find one.

Related

Discrete Wavelet Transform Matlab

I am trying to use the functions provided in the MATLAB Wavelet Toolbox to create a multi-level discrete wavelet decomposition of an image, extract the coefficients, manipulate them, and recompose them back into the image.
I tried using a number of functions, but none of them seem to do what I need. These are the steps:
1) Use wavedec2 to decompose the image into [C,S].
[C,S] = wavedec2(X,N,Lo_D,Hi_D)
2) I then must use detcoef2 to extract the detail coefficients from [C,S]. [C,S] is the 'wavelet decomposition structure'; it does not represent the actual coefficients such as cD, cH, cV.
[H,V,D] = detcoef2('all',C,S,N)
3) Manipulate the data.
4) Reconstruct [C,S] ???? no function does this.
5) Use waverec2 to recompose the original image.
X = waverec2(C,S,Lo_R,Hi_R)
The problem is with step 4. There is no function that recreates the [C,S] and I can't call the function waverec2 because it needs the manipulated version of C and S.
Do I not need wavedec2 and waverec2? Perhaps I should just use detcoef2 and upcoef2?
Someone with some experience with the DWT could solve this in a minute; I am fairly new to it.
Thanks
I'm curious as to why you can't use dwt2 for computing the 2D DWT of images. What you have there is a lot more work than you should be doing; dwt2 is much more suitable for what you want. You'd call dwt2 like so:
[LL,LH,HL,HH] = dwt2(X,Lo_D,Hi_D);
X is your image, and Lo_D and Hi_D are the low-pass and high-pass filters you want to apply to the image. LL is the low-passed version of the image, where both the horizontal and vertical directions are low-passed; LH is where the vertical direction is low-passed and the horizontal direction is high-passed; HL is where the vertical direction is high-passed and the horizontal direction is low-passed; and HH is where both directions are high-passed. As such, LH, HL and HH are the detail coefficients while LL contains the structure.
You can also specify the filter you want with a string as the second parameter:
[LL,LH,HL,HH] = dwt2(X,'wname');
'wname' is a string that specifies what filter you want. You can type in help wfilters to see what filters are available.
For example, using cameraman.tif from MATLAB's system path, we can do a one-level 2D DWT (using the Haar wavelet) and show all of the components like so:
im = imread('cameraman.tif');
[LL, LH, HL, HH] = dwt2(im2double(im), 'haar');
imshow([LL LH; HL HH], []);
I use im2double to convert the image to double precision to ensure accuracy. The displayed result tiles the four components together: LL top left, LH top right, HL bottom left and HH bottom right.
Note that each component is subsampled by 2 in order to produce the decompositions of LL, LH, HL and HH.
Once you have these components, you can certainly manipulate them to your heart's content. Once you manipulate them, you can simply use idwt2 like so:
Y = idwt2(LL,LH,HL,HH,Lo_R,Hi_R); % or
Y = idwt2(LL,LH,HL,HH,'wname');
The four components are assumed to be double, so you can convert the result back to whatever type originally represented the image. Assuming your image was uint8, you can do Y = im2uint8(Y); to convert back.
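Putting the pieces together, a minimal round-trip sketch; the 50% attenuation of the detail bands is just a placeholder for whatever manipulation you have in mind:
im = imread('cameraman.tif');
X = im2double(im);
[LL, LH, HL, HH] = dwt2(X, 'haar');        % one-level decomposition
LH = 0.5*LH;  HL = 0.5*HL;  HH = 0.5*HH;   % manipulate: attenuate the detail bands
Y = idwt2(LL, LH, HL, HH, 'haar');         % recompose
imshow(im2uint8(Y));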
This should hopefully be what you're looking for!

Compute the combined image of SVD perturbations

I know how to generate a combined image:
STEP1: I = imread('image.jpg');
STEP2: Ibw = single(im2double(I));
STEP3: [U S V] = svd(Ibw); % where U and V are the left and right odd vectors, respectively, and S the
% diagonal matrix of particular values
% calculate derived image
STEP4: P = U * power(S, i) * V'; % where i is between 1 and 2
%To compute the combined image of SVD perturbations:
STEP5: J = (single(I) + (alpha*P))/(1+alpha); % where alpha is between 0 and 1
So by integrating P into I, we get a combined image J which keeps the main information of the original image and is expected to work better against minor changes of expression, illumination and occlusion.
I have some questions:
1) I would like to know in detail: what is the motivation for applying STEP3, and what are we perturbing here?
2) In STEP3, what was meant by "particular values"?
3) Can the derived image P also be called "the perturbed image"?
Any help will be very appreciated!
This method originated from this paper that can be accessed here. Let's answer your questions in order.
If you want to know why this step is useful, you need to know a bit of theory about how the SVD works. SVD stands for Singular Value Decomposition. The SVD transforms your N-dimensional data in such a way that it is ordered according to which dimension exhibits the most variation, with the remaining dimensions ordered by decreasing variation (SVD experts and math purists... don't shoot me. This is how I understand the SVD to be). The singular values in this particular context give you a weighting of how much each dimension of your data contributes to its overall decomposition.
Therefore, by applying that particular step (P = U * power(S, i) * V';), you are giving more emphasis to the "variation" in your data so that the most important features in your image will stand out while the unimportant ones will "fade" away. This is really the only rationale that I can see behind why they're doing this.
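To see that emphasis numerically, here is a small sketch; magic(4) is just an arbitrary test matrix and i = 1.5 an arbitrary exponent in the paper's 1-to-2 range:
A = magic(4);          % arbitrary test matrix
[U, S, V] = svd(A);
disp(diag(S)');        % original singular values
i = 1.5;               % exponent between 1 and 2
Si = power(S, i);      % element-wise power widens the gap between large and small singular values
disp(diag(Si)');
P = U * Si * V';       % the "perturbed" reconstruction from STEP4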
The "particular" values are the singular values. These values are part of the S matrix and those values appear in the diagonals of the matrix.
I wouldn't call P the derived image, but rather an image that locates which parts of the image are more important in comparison to the rest. By mixing this with the original image, the features you should concentrate on are emphasized, while the other parts of the image that most people wouldn't pay attention to get de-emphasized in the overall result.
I would recommend you read that paper that you got this algorithm from as it explains the whole process fairly well.
Some more references for you
Take a look at this great tutorial on the SVD here. Also, this post may answer more questions regarding the insight of the algorithm.

Image Enhancement using combination between SVD and Wavelet Transform

My objective is to handle illumination and expression variations in an image. So I tried to implement MATLAB code that works with only the important information within the image; in other words, with only the "USEFUL" information. To do that, it is necessary to remove all unimportant information from the image.
Reference: this paper
Let's see my steps:
1) Apply histogram equalization to get histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.
2) Apply SVD approximations on the histo_equalized_image. Before that, I applied the SVD decomposition ([L D R] = svd(histo_equalized_image)); the singular values are then used to make the derived image J = L*power(D, i)*R', where i varies between 1 and 2.
3) Finally, the derived image is combined with the original image: C = (MyGrayImage + (a*J))/(1 + a), where a varies from 0 to 1.
4) But all the steps above are not able to perform well under varying conditions. So finally, the wavelet transform should be used to handle those variations (we use only the LL image block). Low frequency component contains the useful information, also, unimportant information gets lost in this component. The (LL) component is insensitive to illumination changes and expression variations.
I wrote MATLAB code for this, and I would like to know whether the code is correct (and if not, how to correct it). Furthermore, I am interested in whether these steps can be optimized. Can we improve this method? If yes, how? Please, I need help.
Let's see my MATLAB code now:
%Read the RGB image
image=imread('img.jpg');
%convert it to grayscale
image_gray=rgb2gray(image);
%convert it to double
image_double=im2double(image_gray);
%Apply histogram equalization
histo_equalized_image=histeq(image_double);
%Apply the svd decomposition
[U S V] = svd(histo_equalized_image);
%calculate the derived image
P=U * power(S, 5/4) * V';
%Linearly combine both images
J=(single(histo_equalized_image) + (0.25 * P)) / (1 + 0.25);
%Apply DWT
[c,s]=wavedec2(J,2,'haar');
a1=appcoef2(c,s,'haar',1); % I need only the LL block.
You need to define what you mean by "USEFUL" or "important" information, and only then work out the steps.
Histogram equalization is a global transformation, which gives different results on different images. You can run an experiment: apply histeq to an image that benefits from it. Then make two copies of the original image, draw a black square (30% of the image area) on one and a white square on the other. Then apply histeq to each and compare the results.
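A minimal sketch of that experiment (cameraman.tif and the square placement are my assumptions):
I = im2double(imread('cameraman.tif'));   % any grayscale test image
sq = round(sqrt(0.30 * numel(I)));        % side of a square covering ~30% of the image area
I1 = I;  I1(1:sq, 1:sq) = 0;              % copy with a black square
I2 = I;  I2(1:sq, 1:sq) = 1;              % copy with a white square
imshow([histeq(I1), histeq(I2)]);         % the same scene equalizes very differently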
"Low frequency component contains the useful information, also, unimportant information gets lost in this component."
Really? Edges and shapes, which are (at least for me) quite important, are in the high frequencies. Again, we need a definition of "useful" information.
I cannot see the theoretical background for why and how your approach would work. Could you explain a bit why you chose this method?
P.S. I'm not sure if these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.

Matlab image centroid simulation

I was given this task; I am a noob and need some pointers to get started with centroid calculation in MATLAB:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise, and plot the intensities. The position of the centroid changes due to the noise, and I need to bring it back to its original position by: clipping to get rid of the noise; noise reduction by clipping or smoothing with a sliding average (a low-pass filter averaging 3-5 samples); calculating the means; or using a convolution filter kernel, i.e. matrix operations on the 2-D image.
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked: give you pointers, and you'll have to read the related documentation:
a) to produce a 2-d Gaussian use meshgrid or ndgrid
b) to add noise to the image look into rand, randn or randi, depending on what exactly you need.
c) to plot the intensities use imagesc
d) to find the centroid there are several ways; search SO further and you'll find many discussions. You can also check the MathWorks File Exchange for different implementations. A combined sketch follows.
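A combined sketch of pointers (a)-(d); the grid extent, sigma, noise level and clipping threshold are all assumptions to play with:
[X, Y] = meshgrid(-10:0.5:10);                 % (a) 2-D grid
G = exp(-(X.^2 + Y.^2) / (2 * 2^2));           % 2-D Gaussian, sigma = 2
Gn = G + 0.05 * randn(size(G));                % (b) additive Gaussian noise
imagesc(Gn); axis image; colorbar;             % (c) plot the intensities
Gc = max(Gn - 0.1, 0);                         % clip low-level noise (threshold is trial and error)
xc = sum(Gc(:) .* X(:)) / sum(Gc(:));          % (d) intensity-weighted centroid
yc = sum(Gc(:) .* Y(:)) / sum(Gc(:));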

Image deblurring using MATLAB

I have two images, one is degraded and one is part of the original image. I need to enhance the first image by using the second one, and I need to do this in the frequency domain. I cut the same area from the degraded image, took its FFT, and tried to calculate the transfer function, but when I applied that function to the image the result was terrible.
So I tried h=fspecial('motion',9,45); to be my transfer function and then reconstructed the image with the code given below.
im = imread('home_degraded.png');
im = double(rgb2gray(im));   % grayscale, double precision for the FFT
h = fspecial('motion',9,45); % assumed motion-blur PSF
H = zeros(519,311);          % pad the PSF out to the image size
H(1:7,1:7) = h;
Hf = fft2(H);                % transfer function of the blur
d = 0.02;
Hf(abs(Hf) < d) = 1;         % guard against division by near-zero frequencies
I = ifft2(fft2(im)./Hf);     % inverse filtering
imshow(mat2gray(abs(I)))
I have two questions now:
How can I generate a transfer function by using the small rectangles (I mean by not using h=fspecial('motion',9,45);)?
What methods can I use to remove noise from an enhanced image?
I can recommend you a few ways to do that:
Arithmetic mean filter:
f = imfilter(g, fspecial('average', [m n]))
Geometric mean filter
f = exp(imfilter(log(g), ones(m, n), 'replicate')) .^ (1/(m*n))
Harmonic mean filter
f = (m*n) ./ imfilter(1 ./ (g + eps), ones(m, n), 'replicate');
where m and n are the size of the mask (for instance, you can set m = 3, n = 3).
Basically what you want to do has two steps (at least) to it:
Estimate the PSF (blur kernel) by using the patch of the image with the squares in it.
Use the estimated kernel to do deconvolution to your blurry image
If you want to "guess" the PSF for step 1, that's fine but it's better to calculate it.
For step 2, you MUST first use edgetaper, which will suppress the ringing effects in your image, which you call noise.
Then you do the non-blind deconvolution (step 2) using the function deconvlucy with this syntax:
J = deconvlucy(I,PSF)
This deconvolution procedure adds some noise, especially if your PSF is not 100% accurate, but you can make the result smoother by allowing more iterations (at the cost of detail; no free lunch).
For the first step, if you don't care about the fact that you have the "sharp" square, you can just use blind deconvolution deconvblind and get some estimate for the PSF.
If you want to do it correctly and use the sharp patch then you can use it as your data term target in any optimization scheme involving the estimation of the PSF.
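A minimal sketch of that two-step pipeline; the fspecial kernel stands in for your estimated PSF, and 20 iterations is an arbitrary starting point:
im = im2double(rgb2gray(imread('home_degraded.png')));
PSF = fspecial('motion', 9, 45);   % placeholder; substitute the PSF you estimate from the sharp patch
im_t = edgetaper(im, PSF);         % taper the borders first to suppress ringing
J = deconvlucy(im_t, PSF, 20);     % non-blind Lucy-Richardson deconvolution
imshow(J)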