increasing the resolution of a grayscale image - matlab

I have a grayscale image, and need to increase its resolution. How can this be done in MATLAB? Would it mainly be done by multiplying the dimensions of the image for instance?

You need to perform interpolation. There are many ways to do this. Use imresize (e.g. imgOut=imresize(img,scale,method);), or if you do not have the Image Processing Toolbox, consider the following code:
function imres = resizeim(I, outsize, interpalg)
% Resize the 2-D image I to outsize = [rows cols] using interp2.
if nargin < 3 || isempty(interpalg)
    interpalg = 'cubic';
end
rows = outsize(1);
cols = outsize(2);
vscale = size(I,1) / rows;    % input/output row ratio
hscale = size(I,2) / cols;    % input/output column ratio
imgClass = class(I);
% Query points are offset so the scale change is exact (see note below).
imres = interp2(double(I), (1:cols)*hscale + 0.5*(1 - hscale), ...
                (1:rows)'*vscale + 0.5*(1 - vscale), ...
                interpalg);
imres = cast(imres, imgClass);
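For example (a usage sketch), doubling both dimensions of a grayscale image:
big = resizeim(img, 2*size(img));   % size(img) is [rows cols] for a 2-D image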
Note: This is a rough start. You may need to perform pre-filtering or other transformations. Also, this example only supports 2-D (grayscale) images; for RGB, process each color plane separately, e.g. in a loop. Again, this is just an example.
Aside from edge handling, this gives the same results as imresize with anti-aliasing turned off (i.e. imresize(...,'Antialiasing',false)).
Regarding edge handling, see the interp2 documentation for the extrapval parameter. The code gets ugly, but you can either clamp the min/max elements of the interpolation points (the interp2 inputs) so they map exactly to the edges, or use NaN for extrapval and post-process imres to replace each NaN with its nearest neighbor, etc. Note that simply interpolating at points such as linspace(1,size(I,1),rows) will not give the expected scale change.
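For instance, the first option (clamping) might look like this - a sketch based on the function above, where the clamping lines are my addition:
xq = (1:cols)*hscale + 0.5*(1 - hscale);
yq = (1:rows)'*vscale + 0.5*(1 - vscale);
xq = min(max(xq, 1), size(I,2));    % clamp column coordinates to the image
yq = min(max(yq, 1), size(I,1));    % clamp row coordinates to the image
imres = cast(interp2(double(I), xq, yq, interpalg), class(I));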

You can also perform sinc interpolation by Fourier transforming the image, zero-padding it, inverse Fourier transforming it, and taking the absolute value.
im_rz = abs(ifft2(padarray(fft2(im),[row_pad, col_pad])))
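In practice you usually also want to center the spectrum with fftshift before padding and rescale by the area ratio; a minimal sketch for 2x upsampling (these details, the test image, and the assumption of even dimensions are my additions to the one-liner above):
im = im2double(imread('cameraman.tif'));   % any even-sized grayscale image
[r, c] = size(im);
F  = fftshift(fft2(im));                   % move DC to the center
Fp = padarray(F, [r/2, c/2]);              % zero-pad the high frequencies
im2x = abs(ifft2(ifftshift(Fp))) * 4;      % *4 = area ratio restores brightness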

You can "hallucinate" high resolution details. See for example Glasner et al "Single image Super resolution" ICCV 2009.
An implementation of that algorithm can be found here

Related

Matlab watershed algorithm - control separation width

I am interested in separating features on an image using the watershed algorithm. Using the MATLAB tutorial, I tried to write a small proof-of-principle script that I can build on in my image analysis.
Im = imread('../../Pictures/testrec.png');
bw = rgb2gray(Im);
figure
imshow(bw,'InitialMagnification','fit'), title('bw')
%Compute the distance transform of the complement of the binary image.
D = bwdist(~bw);
figure
imshow(D,[],'InitialMagnification','fit')
title('Distance transform of ~bw')
%Complement the distance transform, and force pixels that don't belong to the objects to be at Inf.
D = -D;
D(~bw) = Inf;
%Compute the watershed transform and display the resulting label matrix as an RGB image.
L = watershed(D);
L(~bw) = 0;
rgb = label2rgb(L,'jet',[.5 .5 .5]);
figure
imshow(rgb,'InitialMagnification','fit')
title('Watershed transform of D')
It appears that the feature separation is somewhat random, as can be seen from the elongated feature in the middle. However, there do not seem to be any parameters for the watershed algorithm that could be used to optimize its performance. Can you suggest how such a parameter could be introduced, or a better algorithm for processing the data?
Bonus question: I am interested in first segmenting my image using bwconncomp, then selectively applying the watershed algorithm to only some of the regions. Assume I know which of the cc.PixelIdxList regions I want to apply the algorithm to - how do I get a new PixelIdxList with the separated components?
Watershed transformation cannot separate convex shapes; there is no way to change that. A convex shape always results in a single object, and blobs that are very close to convex will always give poor watershed results.
The only reason you get that "somewhat random" result instead of a single basin is that a few pixels sit slightly off the perimeter.
Watershed results can be improved by pre- and post-processing, but that is very specific to the problem at hand.
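One standard pre-processing step that does introduce a tunable parameter (my suggestion, not part of the answer above) is the h-minima transform, which merges shallow basins before the watershed:
D = bwdist(~bw);        % bw as in the question's code
D = -D;
D = imhmin(D, 2);       % h = 2: suppress minima shallower than 2; tune per image
D(~bw) = Inf;
L = watershed(D);
L(~bw) = 0;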

Matlab Image Processing - increasing contrast using histogram

I have to import a grayscale image into MATLAB, convert the pixels it is composed of from unsigned 8-bit integers into doubles, plot a histogram, and finally improve the image quality using the transformation:
v'(x,y) = a*v(x,y) + b
where v(x,y) is the value of the pixel at point x,y and v'(x,y) is the new value of the pixel.
My primary issue is the choice of the two constants, a and b, for the transformation. My understanding is that an equalized histogram is desirable for a good image. Other code found on the Internet deals either with the built-in MATLAB histeq function or with calculating probability densities; I've found no reference anywhere to choosing the constants for the transformation given above.
I'm wondering if anyone has any tips or ideas on how to go about choosing these constants. I think the rest of my code does what it's supposed to:
image = imread('image.png');
image_of_doubles = double(image);
for i=1:1024
for j=1:806
pixel = image_of_doubles(i,j);
pixel = 0.95*pixel + 5;
image_of_doubles(i,j) = pixel;
end
end
[n_elements,centers] = hist(image_of_doubles(:),20);
bar(centers,n_elements);
xlim([0 255]);
Edit: I also played a bit with different values for the constants. Changing a seems to stretch the histogram; however, this only works for a values between 0.8 and 1.2 (and it doesn't stretch it enough - it equalizes the histogram only over the range 150-290). If you apply an a of, say, 0.5, the histogram is split into two blobs, with a lot of pixels at about 4 or 5 different intensities; again, not equalized in the least.
The operation that you are interested in is known as linear contrast stretching. Basically, you want to multiply each of the intensities by some gain and then shift the intensities by some number so you can manipulate the dynamic range of the histogram. This is not the same as histeq or using the probability density function of the image (a precursor to histogram equalization) to enhance the image. Histogram equalization seeks to flatten the image histogram so that the intensities are more or less equiprobable in being encountered in the image. If you'd like a more authoritative explanation on the topic, please see my answer on how histogram equalization works:
Explanation of the Histogram Equalization function in MATLAB
In any case, choosing the values of a and b is highly image dependent. However, one thing I can suggest if you want this to be adaptive is to use min-max normalization. Basically, you take your histogram and linearly map the intensities so that the lowest input intensity gets mapped to 0, and the highest input intensity gets mapped to the highest value of the associated data type for the image. For example, if your image was uint8, this means that the highest value gets mapped to 255.
Performing min/max normalization is very easy. It is simply the following transformation:
out(x,y) = max_type*(in(x,y) - min(in)) / (max(in) - min(in));
in and out are the input and output images. min(in) and max(in) are the overall minimum and maximum of the input image. max_type is the maximum value associated with the input image type.
For each location (x,y) of the input image, you substitute an input image intensity and run through the above equation to get your output. You can convince yourself that substituting min(in) and max(in) for in(x,y) will give you 0 and max_type respectively, and everything else is linearly scaled in between.
Now, with some algebraic manipulation, you can get this to be of the form out(x,y) = a*in(x,y) + b as mentioned in your problem statement. Concretely:
out(x,y) = max_type*(in(x,y) - min(in)) / (max(in) - min(in))
out(x,y) = max_type*in(x,y)/(max(in) - min(in)) - max_type*min(in)/(max(in) - min(in)) // distribute max_type across the subtraction
out(x,y) = (max_type/(max(in) - min(in)))*in(x,y) - (max_type*min(in))/(max(in) - min(in)) // group the gain on in(x,y)
out(x,y) = a*in(x,y) + b
a is simply max_type/(max(in) - min(in)), and b is -(max_type*min(in))/(max(in) - min(in)).
You would compute these values of a and b and run them through your code. BTW, if I may suggest something, please consider vectorizing your code. You can very easily use the equation and operate on the entire image at once, rather than looping over each value.
Simply put, your code would now look like this:
image = imread('image.png');
image_of_doubles = 0.95*double(image) + 5; %// New
[n_elements,centers] = hist(image_of_doubles(:),20);
bar(centers,n_elements);
... isn't it much simpler?
Now, modify your code so that the new constants a and b are calculated in the way we discussed:
image = imread('image.png');
%// New - get maximum value for image type
max_type = double(intmax(class(image)));
%// Calculate a and b
min_val = double(min(image(:)));
max_val = double(max(image(:)));
a = max_type / (max_val - min_val);
b = -(max_type*min_val) / (max_val - min_val);
%// Compute transformation
image_of_doubles = a*double(image) + b;
%// Plot the histogram - before and after
figure;
subplot(2,1,1);
[n_elements1,centers1] = hist(double(image(:)),20);
bar(centers1,n_elements1);
%// Change
xlim([0 max_type]);
subplot(2,1,2);
[n_elements2,centers2] = hist(image_of_doubles(:),20);
bar(centers2,n_elements2);
%// Change
xlim([0 max_type]);
%// New - show both images
figure; subplot(1,2,1);
imshow(image);
subplot(1,2,2);
imshow(image_of_doubles,[]);
It's very important that you cast the result of intmax to double, because intmax returns a value of the image's integer class, and arithmetic in that class saturates. I've also taken the liberty of changing your code so that we can display the histogram before and after the transformation, as well as what the image looks like before and after the transform.
As an example, let's use the pout.tif image that ships with the Image Processing Toolbox. It looks like this:
You can definitely see that this image calls for some contrast enhancement: its dynamic range of intensities is quite limited, giving it a washed-out appearance.
Using this as the image, this is what the histogram looks like before and after.
We can certainly see the histogram being stretched. Now this is what the images look like:
You can definitely see more detail, even though the result is darker. This tells you that a simple linear scaling isn't always enough; you may want to use histogram equalization, or a gamma / power-law transformation.
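As a rough sketch of what a power-law transform could look like here (g is a free parameter, not a value from this answer; image and max_type are as defined above):
g = 0.5;                                  % g < 1 brightens dark regions
img_norm = double(image) / max_type;      % normalize to [0,1]
img_gamma = max_type * img_norm .^ g;     % apply gamma, rescale
figure; imshow(img_gamma, []);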
Hopefully this will be enough to get you started though. Good luck!

Image Enhancement using combination between SVD and Wavelet Transform

My objective is to handle illumination and expression variations in an image, so I tried to implement MATLAB code that works with only the important ("USEFUL") information within the image. To do that, all unimportant information must be deleted from the image.
Reference: this paper
Let's see my steps:
1) Apply histogram equalization to get histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.
2) Apply SVD approximations to the histo_equalized_image. Before that, I compute the decomposition [L D R] = svd(histo_equalized_image); the singular values are then used to form the derived image J = L*power(D, i)*R', where i varies between 1 and 2.
3) Finally, the derived image is combined with the original image: C = (MyGrayImage + a*J)/(1 + a), where a varies from 0 to 1.
4) The steps above alone do not perform well under varying conditions, so finally a wavelet transform is used to handle those variations (we use only the LL block). The low-frequency component retains the useful information, while unimportant information is discarded; the LL component is insensitive to illumination changes and expression variations.
I wrote MATLAB code for this, and I would like to know whether it is correct (and if not, how to correct it). Furthermore, I am interested to know whether these steps can be optimized: can this method be improved, and if so, how? Any help is appreciated.
Here is my MATLAB code:
%Read the RGB image
image=imread('img.jpg');
%convert it to grayscale
image_gray=rgb2gray(image);
%convert it to double
image_double=im2double(image_gray);
%Apply histogram equalization
histo_equalized_image=histeq(image_double);
%Apply the svd decomposition
[U S V] = svd(histo_equalized_image);
%calculate the derived image
P=U * power(S, 5/4) * V';
%Linearly combine both images
J=(single(histo_equalized_image) + (0.25 * P)) / (1 + 0.25);
%Apply DWT
[c,s]=wavedec2(J,2,'haar');
a1=appcoef2(c,s,'haar',1); % I need only the LL block.
You need to define what you mean by "USEFUL" or "important" information, and only then design the processing steps.
Histogram equalization is a global transformation, which gives different results on different images. You can run an experiment: apply histeq to an image that benefits from it; then make two copies of the original image, draw a black square (30% of the image area) on one and a white square on the other, apply histeq to each, and compare the results.
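A quick sketch of that experiment (the test image and square size are my own choices):
I = imread('pout.tif');                  % any image that benefits from histeq
I1 = I; I1(1:150, 1:150) = 0;            % copy with a black square (~30% of area)
I2 = I; I2(1:150, 1:150) = 255;          % copy with a white square
figure;
subplot(1,3,1), imshow(histeq(I)), title('original')
subplot(1,3,2), imshow(histeq(I1)), title('black square')
subplot(1,3,3), imshow(histeq(I2)), title('white square')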
The low-frequency component retains the useful information, while unimportant information is discarded.
Really? Edges and shapes - which are (at least for me) quite important - are in the high frequencies. Again, we need a definition of "useful" information.
I cannot see the theoretical background for why and how your approach would work. Could you explain a little why you chose this method?
P.S. I'm not sure whether these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.

Matlab - Watershed to extract lines - lost information

I have a vein image, shown below. I use the watershed algorithm to extract the skeleton of the vein.
My code (K is the original image):
level = graythresh(K);
BW = im2bw(K,level);
D = bwdist(~BW);
DL = watershed(D);
bgm = DL == 0;
imshow(bgm);
The result is:
As you can see, a lot of information is lost. Can anybody help me out? Thanks.
It looks like the lighting is somewhat uneven. This can be corrected using certain morphological operations. The basic idea is to compute an image that represents just the uneven lighting and subtract it, or to divide by it (which also enhances contrast). Because we want to find just the lighting, it is important to use a large enough structuring element, so that the operation examines more global properties rather than local ones.
%# Load image and convert to [0,1].
A = im2double(imread('http://i.stack.imgur.com/TQp1i.png'));
%# Any large (relative to objects) structuring element will do.
%# Try sizes up to about half of the image size.
se = strel('square',32);
%# Removes uneven lighting and enhances contrast.
B = imdivide(A,imclose(A,se));
%# Otsu's method works well now.
C = B > graythresh(B);
D = bwdist(~C);
DL = watershed(D);
imshow(DL==0);
Here are C (left), plus DL==0 (center) and its overlay on the original image:
You are losing information because when you apply im2bw, you are converting your uint8 image, where the pixel brightness takes a value from intmin('uint8') == 0 to intmax('uint8') == 255, into a binary image (where only logical values are used). This is what causes the loss of information you observed.
If you display the image BW, you will see that all the elements of K with a value greater than the threshold level become ones, while those below the threshold become zeros.
Yes, you'll likely need to lower your threshold (lower than what Otsu's method is giving you). And if the edge map is noisy when you lower the threshold, apply a 2-D Gaussian smoothing filter before thresholding. This will move the edges slightly but will clean up noise too, so it's a tradeoff.
A 2-D Gaussian can be applied with something like the following (note that gausswin returns a 1-D window, so take an outer product to build the 2-D kernel):
w = gausswin(N, Alpha);                     % you'll have to play with N and Alpha
w2 = (w*w') / sum(w)^2;                     % 2-D kernel, normalized to sum to 1
K = imfilter(K, w2, 'same', 'symmetric');   % something like these options
before you apply the rest of your algorithm.

Normalization of intensity, matlab

I have real-world 3D points which I want to project onto a plane. Most of the intensity values [0-1] fall in the lower region (near zero).
Please see the 'before' image attached below.
I tried to normalize the values:
Col_ = Intensity;                                    % before: max(Col_) = 0.46, min(Col_) = 0.06
Col = (Col_ - min(Col_))/(max(Col_) - min(Col_));    % after:  max(Col) = 1,    min(Col) = 0
But most values still fall in the lower region (near zero); please see the second figure, after normalization. The result is still mostly black. Any suggestions? How can I stretch my intensity information?
It looks like you have already normalized as much as you can with linear scaling. If you want to get more contrast, you will have to give up preserving the original scaling and use a non-linear equalization.
For example: http://en.wikipedia.org/wiki/Histogram_equalization
If you have the image processing toolbox, matlab will do it for you:
http://www.mathworks.com/help/toolbox/images/ref/histeq.html
It looks like you have very few values outside the first bin. If you don't need to preserve the uniqueness of the intensities, you could just scale by a larger amount and clip the few values that exceed 1.
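For example (a sketch; the gain is arbitrary):
k = 5;                          % gain; pick so most of the data spreads over [0,1]
Col_clipped = min(k * Col, 1);  % the few values above 1 saturate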
When I normalize intensities I do something like this:
Col = Col - min(Col(:));
Col = Col/max(Col(:));
This will normalize your data points to the range [0,1].
Now, since you have many small values, you might be able to make out small changes better through log scaling.
Col_scaled = log(1+Col);
Linear scaling with such data rarely works for me. Using the log function is akin to tweaking gamma for visualization purposes.
I think the only thing you can do here is reduce the range.
After normalization do the following:
t = 0.1;
Col(Col > t) = t;
This will simply truncate the range of the data, which may be sufficient for what you are doing. Then you can re-normalize again if you wish.
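The re-normalization is then just a division (a sketch, continuing the snippet above):
Col = Col / t;   % min(Col) is already 0, so this restores the [0,1] range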