I'm currently working with MATLAB on some image processing. I've been set a task to recreate the convolution function used for applying filters. I managed to get that code working and everything seemed fine.
The next part of the task was the following:
Write your own m-function for unsharp masking of a given image to produce a new output image.
Your function should apply the following steps:
Apply smoothing to produce a blurred version of the original image,
Subtract the blurred image from the original image to produce an edge image,
Add the edge image to the original image to produce a sharpened image.
Again, I've got code mocked up to do this, but I run into a problem. The convolution crops my image down by one pixel, so when I go to carry out the subtraction for the unsharpening, the images are not the same size and the subtraction cannot take place.
To overcome this I want to create a blank matrix in the convolution function that is the same size as the input image; the new image then goes on top of this matrix, so in effect the new image has a one-pixel border around it to bring it back to its original size. When I try to implement this, all I get as output is the blank matrix I just created. Why is this happening, and would you be able to help me fix it?
My code is as follows.
Convolution
function [ imgout ] = convolution( img, filter )
%UNTITLED Summary of this function goes here
%   Detailed explanation goes here
[height, width] = size(img); % height, width: number of image rows, etc.
[filter_height, filter_width] = size(filter);
for height_bound = 1:height - filter_height + 1 % Loop over output elements
    for width_bound = 1:width - filter_width + 1
        imgout = zeros(height_bound, width_bound); % Makes an empty matrix the correct size of the image.
        sum = 0;
        for fh = 1:filter_height % Loop over mask elements
            for fw = 1:filter_width
                sum = sum + img(height_bound - fh + filter_height, width_bound - fw + filter_width) * filter(fh, fw);
            end
        end
        imgout(height_bound, width_bound) = sum; % Store the result
    end
end
imshow(imgout)
end
Unsharpen
function sharpen_image = img_sharpen(img)
    blur_image = medfilt2(img);
    convolution(img, filter);
    edge_image = img - blur_image;
    sharpen_image = img + edge_image;
end
Yes. Concatenation, e.g.:
A = [1 2 3; 4 5 6]; % Matrix
B = [7; 8]; % Column vector
C = [A B]; % Concatenate
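As for why only the blank matrix comes back: in the posted convolution, imgout = zeros(...) sits inside the loops, so the output matrix is re-created (and re-sized by the loop counters) on every iteration. Here is a minimal sketch of the fix, assuming the border-padding goal described in the question (convolution_padded is a hypothetical name, and the sketch assumes an odd-sized filter so the border is centred):

function imgout = convolution_padded(img, filter)
    [height, width] = size(img);
    [filter_height, filter_width] = size(filter);
    imgout = zeros(height, width);    % allocate ONCE, at the full image size
    row_off = floor(filter_height/2); % centre the result so the unfilled
    col_off = floor(filter_width/2);  % border surrounds the image evenly
    for h = 1:height - filter_height + 1
        for w = 1:width - filter_width + 1
            s = 0;
            for fh = 1:filter_height
                for fw = 1:filter_width
                    s = s + img(h - fh + filter_height, w - fw + filter_width) * filter(fh, fw);
                end
            end
            imgout(h + row_off, w + col_off) = s; % write into the interior
        end
    end
end

With the sizes matching, the unsharp-masking steps then line up; a sketch, assuming a 3x3 averaging filter for the blur (cameraman.tif ships with the Image Processing Toolbox):

img = double(imread('cameraman.tif'));
blurred = convolution_padded(img, ones(3)/9); % step 1: smooth
edge_image = img - blurred;                   % step 2: subtract blur
sharpened = img + edge_image;                 % step 3: add edges back
imshow(uint8(sharpened))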
I'm trying to compute a vertical histogram of a binary image. I don't want to use MATLAB's built-in functions. How can I do it?
I have tried this code, but I don't know whether it's correct or not:
function H = histogram_binary(image)
    [m,n] = size(image);
    H = zeros(1,256);
    for i = 1:m
        for j = 1:n
            H(1,image(i,j)) = H(1,image(i,j))+1;
        end
    end
The image is:
The result:
Why can't I see the value of black pixels in the histogram?
% Read the binary image...
img = imread('66He7.png');
% Count total, white and black pixels...
img_vec = img(:);
total = numel(img_vec);
white = sum(img_vec);
black = total - white;
% Plot the result in the form of a histogram...
bar([black white]);
set(gca(),'XTickLabel',{'Black' 'White'});
set(gca(),'YLim',[0 total]);
Output:
As for your code: it does not count black pixels, because they have a value of 0 while your indexing starts at 1. Rewrite it as follows:
function H = histogram_binary(img)
    img_vec = img(:);
    H = zeros(1,256);
    for i = 0:255
        H(i+1) = sum(img_vec == i);
    end
end
But keep in mind that counting all the byte occurrences in a binary image (which can only contain 0 or 1 values) is rather pointless and will make your histogram hard to read. On a side note, avoid using image as a variable name, since it shadows an existing function.
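For reference, the same per-value count can also be done without an explicit loop; a one-line sketch using histc, assuming a uint8 input image:

H = histc(double(img(:)), 0:255); % one bin per grey level, 0 through 255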
As mentioned by @beaker in the comments above, a vertical histogram in such cases generally refers to a vertical projection. Here is a way to do this:
I = imread('YcP1o.png'); % Read input image
I1 = rgb2gray(I);        % convert image to grayscale
I2 = im2bw(I1);          % convert grayscale to binary
I3 = ~I2;                % invert the binary image
I4 = sum(I3,1);          % project the inverted binary image vertically
I5 = (I4>1);             % binarize the projection (1 where a column has foreground)
plot(1:size(I,2), I5); ylim([0 2])
You can further check for 0->1 transitions to count the number of characters using sum(diff(I5)>0), which gives 13 as the answer in this case.
Currently, I am using the code below to segment an image into a grid of cellSizeX pixels times cellSizeY pixels:
grid = zeros(cellSizeX, cellSizeY, ColorCount, cellTotalX, cellTotalY);
icount = 1; % cell counters
jcount = 1;
for i = 1:cellSizeX:(HorRowCount)
    for j = 1:cellSizeY:(VertColumnCount)
        try
            grid(:,:,:,icount, jcount) = img(i:i+cellSizeX-1, j:j+cellSizeY-1, :);
        catch
            % ignore cells that would run past the image border
        end
        jcount = jcount + 1;
    end
    icount = icount + 1;
    jcount = 1;
end
While this code runs and gives satisfactory results, a few things nag me:
Via some testing with tic and toc, comparing index orders such as grid(:,:,:,icount,jcount) and grid(icount,jcount,:,:,:), I found that grid(:,:,:,icount,jcount) is fastest. But can anything else be improved here?
The code only works if cellSizeX and cellSizeY divide the image dimensions evenly. Requesting cellSizeX and cellSizeY of 9 x 9 on an image of size 40 x 40 results in MATLAB complaining about exceeding the matrix dimensions. Any suggestion regarding this? I do not want to simply fill in blank areas for those cells; the cells will be used further in Vlfeat SIFT.
How about converting the image into a cell array, with each cell of size cellSizeX x cellSizeY x ColorCount, and then stacking all these cells into a single array grid?
ca = mat2cell(img, cellSizeY * ones(1, cellTotalY), ...
              cellSizeX * ones(1, cellTotalX), ...
              ColorCount);
grid = reshape(cat(4, ca{:}), ...
               cellSizeX, cellSizeY, ColorCount, cellTotalX, cellTotalY);
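Note that for mat2cell to work, the cell counts have to match the image size exactly; a hypothetical usage, assuming the cell sizes divide the image dimensions evenly:

[rows, cols, ColorCount] = size(img);
cellTotalY = rows / cellSizeY; % must come out as an integer
cellTotalX = cols / cellSizeX; % must come out as an integer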
It is customary in the image processing community to pad an image with values that depend on the image values at the boundary; look at the function padarray for more information. You may pad your input image so that its padded size is divisible by cellSizeX and cellSizeY (the padding does not have to be identical along both axes).
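A minimal sketch of that padding idea, assuming the variables from the question ('replicate' repeats the boundary values instead of padding with zeros; 'post' pads only the bottom and right edges):

padY = mod(cellSizeY - mod(size(img,1), cellSizeY), cellSizeY);
padX = mod(cellSizeX - mod(size(img,2), cellSizeX), cellSizeX);
imgPadded = padarray(img, [padY padX], 'replicate', 'post');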
I am using vlfeat in MATLAB, and I am following this question here.
Below are my simple testing images:
Left Image:
Right Image:
I did a simple test with these two simple images (the right image is just a rotated version of the left), and I got the expected result:
It works, but I have one more requirement, which is to match the SIFT points of the two images and show them, like this:
I understand that vl_ubcmatch returns two arrays of matched indices, so it is not a problem to work out which point in one image corresponds to which point in the other. However, I am currently stuck on the MATLAB plotting side. I found this. But that only works if the subplot stays as it is; when you add an image into the subplot, the size changes and the normalization fails.
Here is my code (im and im2 are the images; f, d and f2, d2 are the frames and descriptors from the vl_sift function for the two images, respectively):
[matches score] = vl_ubcmatch(d,d2,threshold); % threshold originally is 1.5
if (mode >= 2) % verbose 2
    subplot(211);
    imshow(uint8(im));
    hold on;
    plot(f(1,matches(1,:)), f(2,matches(1,:)), 'b*');
    subplot(212);
    imshow(uint8(im2));
    hold on;
    plot(f2(1,matches(2,:)), f2(2,matches(2,:)), 'g*');
end
if (mode >= 3) % verbose 3
    [xa1 ya1] = ds2nfu(f(1,matches(1,:)), f(2,matches(1,:)));
    [xa2 ya2] = ds2nfu(f2(1,matches(2,:)), f2(2,matches(2,:)));
    for k = 1:numel(matches(1,:))
        xxa1 = xa1(1, k);
        yya1 = ya1(1, k);
        xxa2 = xa2(1, k);
        yya2 = ya2(1, k);
        annotation('line', [xxa1 xxa2], [yya1 yya2], 'color', 'r');
    end
end
The code above yields this:
I think subplot isn't a good way to go for something like this. Is there a better method in MATLAB? If possible, I want something like an empty panel on which I can draw my images, draw lines freely, and zoom freely, just like drawing 2D games in OpenGL style.
From zplesivcak's suggestion, yes, it is possible, and not that problematic after all. Here is the code:
% After we have applied vl_sift to the two images, we will get frames f, f2
% and descriptors d, d2 of the images. After that, we can apply
% vl_ubcmatch to perform feature matching:
[matches, score] = vl_ubcmatch(d, d2, threshold); % threshold originally is 1.5

% Check the sizes and take the larger height and width into account
% (note that size(im,1) is the number of rows, i.e. the height).
maxHeight = max(size(im,1), size(im2,1));
maxWidth = max(size(im,2), size(im2,2));

% Create new canvases with the common height and width.
newim = uint8(zeros(maxHeight, maxWidth, 3)); % 3 because the images are RGB
newim2 = uint8(zeros(maxHeight, maxWidth, 3));

% Transfer both images onto the new canvases respectively.
newim(1:size(im,1), 1:size(im,2), 1:3) = im;
newim2(1:size(im2,1), 1:size(im2,2), 1:3) = im2;

% With the same dimensions, we can now show both images side by side.
% Parts of the canvases that are not used will be black.
imshow([newim newim2]);
hold on;

X = zeros(2,1);
Y = zeros(2,1);
% Draw a line from each matched point in one image to the respective
% matched point in the other image.
for k = 1:numel(matches(1,:))
    X(1) = f(1, matches(1, k));
    Y(1) = f(2, matches(1, k));
    X(2) = f2(1, matches(2, k)) + maxWidth; % shift x into the right-hand image
    Y(2) = f2(2, matches(2, k));
    line(X,Y);
end
Here is the test case:
By modifying the canvas width and height of one of the images from the question, we can see that the algorithm above takes care of that and displays the images accordingly, with unused areas left black. Furthermore, we can see that the algorithm matches the features of the two images correctly.
EDIT:
Alternatively, as suggested by Maurits, check out the Lowe SIFT MATLAB wrappers for a cleaner and better implementation.
If you already have the MATLAB Computer Vision Toolbox installed, you can simply use:
M1 = [f(1, matches(1, :)); f(2, matches(1, :)); ones(1, length(matches))];
M2 = [f2(1, matches(2, :)); f2(2, matches(2, :)); ones(1, length(matches))];
showMatchedFeatures(im, im2, M1(1:2, :)', M2(1:2, :)', 'montage', 'PlotOptions', {'ro','g+','b-'});
How do I do this in MATLAB?
Zero-pad the face image with a five-pixel thick rim around the borders of the image. Show the resulting image.
It must be done with manual code in a script.
Save this function as create_padded_image.m:
function padded_image = create_padded_image(img, padding)
    if nargin < 2
        % if no padding is passed, define a default
        padding = 5;
    end
    if nargin < 1
        % let's create an image if none is given
        img = rand(5, 4)
    end
    % what are the image dimensions?
    image_size = size(img);
    % allocate a zero array for the new padded image
    padded_image = zeros(2*padding + image_size(1), 2*padding + image_size(2))
    % write the image into the center of the padded image
    padded_image(padding+1:padding+image_size(1), padding+1:padding+image_size(2)) = img;
end
Then call it like this:
% read in the image - assuming that your image is grayscale
img = imread(filename);
padded_image = create_padded_image(img)
This sounds like homework, so I will just give you a hint:
In MATLAB it is very easy to put the content of one matrix into another at precisely the correct place. Check out the help for matrix indexing and you should be able to solve it.
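In case the hint is too terse, here is a toy sketch of the kind of indexing meant (magic(3) just stands in for the image):

A = magic(3);                        % stand-in for the image
P = zeros(size(A) + 10);             % room for a 5-pixel rim on every side
P(6:5+size(A,1), 6:5+size(A,2)) = A; % drop A into the centre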
I realize you want to code this yourself, but for reference, you can use the PADARRAY function. Example:
I = imread('coins.png');
II = padarray(I,[5 5],0,'both');
imshow(II)
Note that this also works for multidimensional matrices (RGB images, for example).
I posted another question about the Roberts operator, but I decided to post a new one since my code has changed significantly since that time.
My code runs, but it does not generate the correct image; instead, the image just becomes slightly brighter.
I have not found a mistake in the algorithm, but I know this is not the correct output. If I compare this program's output to edge(<image matrix>,'roberts',<threshold>);, or to images on Wikipedia, it looks nothing like the effect of the Roberts operator shown there.
code:
function [] = Robertize(filename)
    Img = imread(filename);
    NewImg = Img;
    SI = size(Img);
    I_W = SI(2)
    I_H = SI(1)
    Robertsx = [1,0;0,-1];
    Robertsy = [0,-1;1,0];
    M_W = 2; % do not need the + 1, I assume the for loop means while <less than or equal to>
    % x and y are reversed...
    for y = 1:I_H
        for x = 1:I_W
            S = 0;
            for M_Y = 1:M_W
                for M_X = 1:M_W
                    if (x + M_X - 1 < 1) || (x + M_X - 1 > I_W)
                        S = 0;
                        %disp('out of range, x');
                        continue
                    end
                    if (y + M_Y - 1 < 1) || (y + M_Y - 1 > I_H)
                        S = 0;
                        %disp('out of range, y');
                        continue
                    end
                    S = S + Img(y + M_Y - 1, x + M_X - 1) * Robertsx(M_Y,M_X);
                    S = S + Img(y + M_Y - 1, x + M_X - 1) * Robertsy(M_Y,M_X);
                    % It is y + M_Y - 1 because you multiply Robertsx(1,1) * Img(y,x).
                end
            end
            NewImg(y,x) = S;
        end
    end
    imwrite(NewImg, 'Roberts.bmp');
end
I think you may be misinterpreting how the Roberts Cross operator works. Use this page as a guide. Notice that it states that you convolve the original image separately with the X and Y operator. Then, you may calculate the final gradient (i.e. "total edge content") value by taking the square root of the sum of squares of the two (x and y) gradient values for a particular pixel. You're presently summing the x and y values into a single image, which will not give the correct results.
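In code, the combination described above would look something like this (a sketch, assuming Gx and Gy are the results of convolving the image with the x and y kernels separately):

G = sqrt(double(Gx).^2 + double(Gy).^2); % gradient magnitude per pixel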
EDIT
I'll try to explain a bit better. The problem with summing instead of squaring and taking the square root is that you can end up with negative values. Negative values are natural with this operator, depending on the edge orientation. That may be why you think the image 'lightens': when you display the image in MATLAB, negative values go to black, zero values go to grey, and positive values go to white. Here's the image I get when I run your code (with a few changes, mostly setting NewImg to zeros(size(Img)) so it's a double type instead of uint8, since uint8 types don't allow negative values):
You have to be very careful when trying to save files as well. Instead of calling imwrite, call imshow(NewImg,[]). That will automatically rescale the values in the double-valued image to show them correctly, with the most negative number being equal to black and most positive equal to white. Thus, in areas with little edge content (like the sky), we would expect grey and that's what we get!
I ran your code and got the effect you described. See how everything looks lighter:
Figure 1 - Original on the left, original Roberts transformation on the right
The image on my system was actually saturating: it was uint8, the operations were pushing values past 255 or below 0 (on the negative side), and everything became lighter.
By changing the imread line to convert the image to double, as in
Img = double(rgb2gray(imread(filename)));
(note that my image was in color, so I did an RGB conversion too; if yours is already grayscale, you might use
Img = double(imread(filename));
instead),
I got the improved image:
Original on left, corrected code on right.
Note that I could also produce this result using 2-D convolution rather than your loop:
Robertsx = [1,0;0,-1];
Robertsy = [0,-1;1,0];
dataR = conv2(Img, Robertsx) + conv2(Img, Robertsy);
figure(2);
imagesc(dataR);
colormap gray
axis image
For the following result:
Here is an example implementation. You could easily replace CONV2/IMFILTER with your own 2D convolution/correlation function:
%# convolve image with Roberts kernels
I = im2double(imread('lena512_gray.jpg')); %# double image, range [0,1]
hx = [+1 0;0 -1]; hy = [0 +1;-1 0];
%#Gx = conv2(I,hx);
%#Gy = conv2(I,hy);
Gx = imfilter(I,hx,'conv','same','replicate');
Gy = imfilter(I,hy,'conv','same','replicate');
%# gradient approximation
G = sqrt(Gx.^2+Gy.^2);
figure, imshow(G), colormap(gray), title('Gradient magnitude [0,1]')
%# direction of the gradient
Gdir = atan2(Gy,Gx);
figure, imshow(Gdir,[]), title('Gradient direction [-\pi,\pi]')
colormap(hot), colorbar %, caxis([-pi pi])
%# quiver plot
ySteps = 1:8:size(I,1);
xSteps = 1:8:size(I,2);
[X,Y] = meshgrid(xSteps,ySteps);
figure, imshow(G,[]), hold on
quiver(X, Y, Gx(ySteps,xSteps), Gy(ySteps,xSteps), 3)
axis image, hold off
%# binarize gradient, and compare against MATLAB EDGE function
BW = im2bw(G.^2, 6*mean(G(:).^2));
figure
subplot(121), imshow(BW)
subplot(122), imshow(edge(I,'roberts')) %# performs additional thinning step