How to get an image of the same size after processing - MATLAB

I am estimating the ridge orientation of a fingerprint image by dividing it into blocks of 41*41. The image is of size 240*320. Here is my code; the problem is that I am getting an output image of a different size than the input image.
% MATLAB code for orientation
im = imread('D:\project\116_2_5.jpg');
im = double(im);
[m,n] = size(im);
% normalise the image
nor = im - mean(im(:));
im = nor/std(nor(:));
w = 41;
% x and y gradient components, using a 3x3 Sobel mask
[delx,dely] = gradient(im);
% ridge orientation
for i = 21:w:240-41
    for j = 21:w:320-41
        A = delx(i-20:i+20, j-20:j+20);
        B = dely(i-20:i+20, j-20:j+20);
        Gxy = sum(sum(A.*B));
        Gxx = sum(sum(A.*A));
        Gyy = sum(sum(B.*B));
        diff = Gxx - Gyy;
        theta(i-20:i+20, j-20:j+20) = (pi/2) + 0.5*atan2(2*Gxy, diff);
    end
end
But in this process I am losing the pixels at the boundaries (I chose the loop bounds that way to avoid the "index exceeds" error), so the size of theta is m = 240-41 = 199 and n = 320-41 = 279. Thus my input image is 240*320 but my output image is 199*279. How can I get an output image the same size as the input image?
One more thing: I am not supposed to use the blockproc function. Thanks in advance.

You can use padarray to add zeros onto your matrices so that each dimension becomes a multiple of the block size (note that 240 needs 6 rows of padding, not 7, since 246 = 6*41):
A1 = padarray(A,[6 8],'post'); % 240+6 = 6*41 = 246, 320+8 = 8*41 = 328
B1 = padarray(B,[6 8],'post');
then generate Gxx, Gyy, and Gxy with A1 and B1 (here A and B stand for the full gradient images delx and dely, padded before the loop).
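In case it helps, here is a minimal loop-only sketch of that padding idea (no blockproc, since you are not supposed to use it; variable names follow your code, and the final crop restores the 240*320 size):
w = 41;
padM = ceil(m/w)*w - m;             % 246 - 240 = 6
padN = ceil(n/w)*w - n;             % 328 - 320 = 8
dx = padarray(delx, [padM padN], 'post');
dy = padarray(dely, [padM padN], 'post');
theta = zeros(m + padM, n + padN);  % preallocate the padded result
for i = 1:w:m+padM
    for j = 1:w:n+padN
        A = dx(i:i+w-1, j:j+w-1);   % one 41x41 block of each gradient
        B = dy(i:i+w-1, j:j+w-1);
        Gxy = sum(sum(A.*B));
        Gxx = sum(sum(A.*A));
        Gyy = sum(sum(B.*B));
        theta(i:i+w-1, j:j+w-1) = pi/2 + 0.5*atan2(2*Gxy, Gxx - Gyy);
    end
end
theta = theta(1:m, 1:n);            % crop back to the input size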
Method 2:
Alternatively, I tried to simplify your code a little by removing the loops, for your reference:
% Ridge orientation
Gxy = delx .* dely;
Gxx = delx .* delx;
Gyy = dely .* dely;
% blockproc passes a block struct; the block's pixels are in the .data field
fun = @(bs) sum(bs.data(:)) * ones(size(bs.data));
theta_Gxy  = blockproc(Gxy,     [41 41], fun, 'PadPartialBlocks', true);
theta_diff = blockproc(Gxx-Gyy, [41 41], fun, 'PadPartialBlocks', true);
theta0 = pi/2 + 0.5 * atan2(2 * theta_Gxy, theta_diff);
theta = theta0(1:240, 1:320); % crop the padded result back to the input size
You may check blockproc for more details.

Related

How to split an image using MATLAB Mat2Cell

I have an image with 4476x9058 pixels. I'm trying to use mat2cell to split it into subimages with 100x300 pixels each. However, I'm getting the error:
Input arguments, D1 through D2, must sum to
each dimension of the input matrix size,
[4476 9058].
The code is shown below:
image = rand(4476,9058);
blockSizeRow = 100;
blockSizeCol = 300;
[nrows, ncols] = size(image);
nBlocksRow = floor(nrows / blockSizeRow);
nBlocksCol = floor(ncols / blockSizeCol);
rowDist = [blockSizeRow * ones(1, nBlocksRow), mod(nrows, nBlocksRow)];
colDist = [blockSizeCol * ones(1, nBlocksCol), mod(ncols, nBlocksCol)];
blockImages = mat2cell(image, rowDist, colDist,1);
Change mod(nrows, nBlocksRow) to mod(nrows, blockSizeRow), and mod(ncols, nBlocksCol) to mod(ncols, blockSizeCol).
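Applied to your code, a minimal corrected sketch looks like this (I also drop the remainder entry when a dimension divides evenly, so the distributions always sum exactly to the matrix size):
rowDist = [blockSizeRow * ones(1, nBlocksRow), mod(nrows, blockSizeRow)];
colDist = [blockSizeCol * ones(1, nBlocksCol), mod(ncols, blockSizeCol)];
rowDist = rowDist(rowDist > 0);  % drop a trailing zero remainder
colDist = colDist(colDist > 0);
blockImages = mat2cell(image, rowDist, colDist);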

Homomorphic filtering for image restoration of a color image

Can anyone please help me with applying a homomorphic filter to a color image in MATLAB?
I know homomorphic filtering for grayscale images, but it gets harder for color images.
I = imread('E:\degraded images\village.jpg');
imshow(I)
%I am using a colored image
I = im2double(I);
I = log(1 + I);
M = 2*size(I,1) + 1;
N = 2*size(I,2) + 1;
sigma = 10;
[X, Y] = meshgrid(1:N,1:M);
centerX = ceil(N/2);
centerY = ceil(M/2);
gaussianNumerator = (X - centerX).^2 + (Y - centerY).^2;
H = exp(-gaussianNumerator./(2*sigma.^2));
H = 1 - H;
imshow(H,'InitialMagnification',25)
H = fftshift(H);
If = fft2(I, M, N);
Iout = real(ifft2(repmat( H, [1, 1, 3 ] ) .* If));
Iout = Iout(1:size(I,1),1:size(I,2));
Ihmf = exp(Iout) - 1;
imshowpair(I, Ihmf, 'montage');
The last imshowpair is not working for the double datatype. If I convert the image to grayscale, then there is the further problem of converting the grayscale result back into a color image.
You are processing the truecolor image as three independent channels, but then selecting only the first (red) channel for the exponential and imshowpair.
Replace this line:
Iout = Iout(1:size(I,1),1:size(I,2));
with
Iout = Iout(1:size(I,1),1:size(I,2),:);
to keep all three color channels.
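For intuition, here is a tiny illustration (with a hypothetical array) of why the third subscript matters:
A = rand(4,4,3);
size(A(1:2,1:2))    % 2x2   -- values from the first page only
size(A(1:2,1:2,:))  % 2x2x3 -- the same window across all three pages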
Update based on the error message in the comments:
It appears imshowpair is not working because it is not available in your version of MATLAB (R2010a); it was added to the Image Processing Toolbox in R2012a. Use this line, as suggested by @rayryeng, instead:
imshow(cat(2,I,Ihmf));
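If the side-by-side result still displays poorly because the double values fall outside [0,1], one small workaround (my suggestion, not part of the original answer) is to rescale each image for display only:
imshow(cat(2, mat2gray(I), mat2gray(Ihmf)));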

Filtering using Gabor filter

I have implemented a Gabor filter but don't know how to convolve it with the input image so as to get the desired result. My input image is of size 240*320 and I am dividing it into 17*17 blocks, so there are 14*18 = 252 blocks in total. I am also not sure what the size of the Gabor filter should be, and whether I need to convolve a separate filter with each block (14*18 = 252 filters) or a single filter with the whole image.
Here is my code:
% compute Gabor filter
sigma_x = 4;
sigma_y = 4;
w  = 17;          % size of block
wg = 5;           % size of filter
ww = floor(w/2);  % half the block size (8)
[m1,n1] = size(im);
Gabor = zeros(m1,n1);
for i = 9:w:240-w
    for j = 9:w:320-w
        pim = im(i-ww:i+ww, j-ww:j+ww); % block of input image
        f = F(i,j);     % each block contains a single value of frequency...
        O = theta(i,j); % ...and of theta
        [x,y] = meshgrid(-floor(wg/2):floor(wg/2));
        x_phi =  x.*cos(O) + y.*sin(O);
        y_phi = -x.*sin(O) + y.*cos(O);
        x_val = ((x_phi).^2)./(sigma_x.^2);
        y_val = ((y_phi).^2)./(sigma_y.^2);
        h = exp(-0.5*(x_val + y_val)).*cos((2*pi*f).*(x_phi));
        Gabor(i-ww:i+ww, j-ww:j+ww) = conv2(pim, h, 'same');
    end
end
The output image is of a fingerprint and should not consist of white dots on the black fingerprint lines.

How to create 64 Gabor features at each scale and orientation in the spatial and frequency domain

Normally a Gabor filter, as its name suggests, is used to filter an image and extract everything that is oriented in the same direction as the filter.
In this question you can see code that is more efficient than the code in this link.
Assume 16 scales of filters at 4 orientations, so we get 64 Gabor filters.
scales=[7:2:37], 7x7 to 37x37 in steps of two pixels, so we have 7x7, 9x9, 11x11, 13x13, 15x15, 17x17, 19x19, 21x21, 23x23, 25x25, 27x27, 29x29, 31x31, 33x33, 35x35 and 37x37.
directions=[0, pi/4, pi/2, 3pi/4].
The equation of the Gabor filter in the spatial domain (matching the code below) is:
E(x,y) = exp(-(x'^2 + G^2*y'^2) / (2*sigma^2)) * cos(2*pi*x'/lambda)
with rotated coordinates x' = x*cos(theta) - y*sin(theta) and y' = x*sin(theta) + y*cos(theta).
The equation of the Gabor filter in the frequency domain (matching the code in the last section) is:
E(u,v) = 1/(2*pi*sigma_u*sigma_v) * [ exp(-1/2*((u'-u0)^2/sigma_u^2 + (v'-v0)^2/sigma_v^2)) + exp(-1/2*((u'+u0)^2/sigma_u^2 + (v'+v0)^2/sigma_v^2)) ]
with u' and v' the correspondingly rotated frequency coordinates.
In the spatial domain:
function [fSiz,filters,c1OL,numSimpleFilters] = init_gabor(rot, RF_siz)
image=imread('xxx.jpg');
image_gray=rgb2gray(image);
image_gray=imresize(image_gray, [100 100]);
image_double=double(image_gray);
rot = [0 45 90 135]; % we have four orientations
RF_siz = [7:2:37]; %we get 16 scales (7x7 to 37x37 in steps of two pixels)
minFS = 7; % the minimum receptive field
maxFS = 37; % the maximum receptive field
sigma = 0.0036*RF_siz.^2 + 0.35*RF_siz + 0.18; %define the equation of effective width
lambda = sigma/0.8; % the equation for the wavelength (lambda)
G = 0.3; % spatial aspect ratio: 0.23 < gamma < 0.92
numFilterSizes = length(RF_siz); % we get 16
numSimpleFilters = length(rot); % we get 4
numFilters = numFilterSizes*numSimpleFilters; % we get 16x4 = 64 filters
fSiz = zeros(numFilters,1); % It is a vector of size numFilters where each cell contains the size of the filter (7,7,7,7,9,9,9,9,11,11,11,11,......,37,37,37,37)
filters = zeros(max(RF_siz)^2,numFilters); % Matrix of Gabor filters of size %max_fSiz x num_filters, where max_fSiz is the length of the largest filter and num_filters the total number of filters. Column j of filters matrix contains a n_jxn_j filter (reshaped as a column vector and padded with zeros).
for k = 1:numFilterSizes
    for r = 1:numSimpleFilters
        theta = rot(r)*pi/180; % so we get 0, pi/4, pi/2, 3pi/4
        filtSize = RF_siz(k);
        center = ceil(filtSize/2);
        filtSizeL = center-1;
        filtSizeR = filtSize-filtSizeL-1;
        sigmaq = sigma(k)^2;
        for i = -filtSizeL:filtSizeR
            for j = -filtSizeL:filtSizeR
                if ( sqrt(i^2+j^2) > filtSize/2 )
                    E = 0;
                else
                    x = i*cos(theta) - j*sin(theta);
                    y = i*sin(theta) + j*cos(theta);
                    E = exp(-(x^2+G^2*y^2)/(2*sigmaq))*cos(2*pi*x/lambda(k));
                end
                f(j+center,i+center) = E;
            end
        end
        f = f - mean(mean(f));
        f = f ./ sqrt(sum(sum(f.^2)));
        p = numSimpleFilters*(k-1) + r;
        filters(1:filtSize^2,p) = reshape(f,filtSize^2,1);
        fSiz(p) = filtSize;
    end
end
% Rebuild all filters (of all sizes)
nFilts = length(fSiz);
for i = 1:nFilts
    sqfilter{i} = reshape(filters(1:(fSiz(i)^2),i), fSiz(i), fSiz(i));
    % If you convolve the image with this Gabor using conv2, keep the flip
    % below; if you use imfilter instead of conv2, do not flip.
    sqfilter{i} = sqfilter{i}(end:-1:1,end:-1:1); % flip in order to use conv2 instead of imfilter (bug_fix 6/28/2007)
    convv = imfilter(image_double, sqfilter{i}, 'same', 'conv');
    figure;
    imagesc(convv);
    colormap(gray);
end
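As a hedged follow-up sketch, the filter responses can be aggregated into a 64-dimensional feature vector, one energy per filter (this aggregation step is my assumption, not part of the original code):
feat = zeros(nFilts,1);
for i = 1:nFilts
    resp = imfilter(image_double, sqfilter{i}, 'same', 'conv');
    feat(i) = sum(resp(:).^2); % response energy of filter i
end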
phi = ij*pi/4; % ij = 0, 1, 2, 3
theta = 3;
sigma = 0.65*theta;
filterSize = 7; % 7:2:37
G = zeros(filterSize);
for i = (0:filterSize-1)/filterSize
    for j = (0:filterSize-1)/filterSize
        xprime = j*cos(phi);
        yprime = i*sin(phi);
        K = exp(2*pi*theta*sqrt(-1)*(xprime + yprime));
        G(round((i+1)*filterSize),round((j+1)*filterSize)) = ...
            exp(-(i^2+j^2)/(sigma^2))*K;
    end
end
As of the R2015b release of the Image Processing Toolbox, you can create a Gabor filter bank using the gabor function and apply it to an image using imgaborfilt.
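A minimal sketch of that built-in API (the wavelengths and orientations here are illustrative, not the 64-filter bank above):
I = imread('cameraman.tif');       % built-in sample image (grayscale)
gb = gabor([7 14], [0 45 90 135]); % bank: 2 wavelengths x 4 orientations
[mag, phase] = imgaborfilt(I, gb); % one magnitude/phase page per filter
figure;
for p = 1:numel(gb)
    subplot(2,4,p);
    imshow(mag(:,:,p), []);
    title(sprintf('lambda=%g, theta=%g', gb(p).Wavelength, gb(p).Orientation));
end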
In the frequency domain:
sigma_u = 1/(2*pi*sigmaq); % note the parentheses: 1/(2*pi*sigmaq), not (1/2)*pi*sigmaq
sigma_v = 1/(2*pi*sigmaq);
u0 = 2*pi*cos(theta)*lambda(k);
v0 = 2*pi*sin(theta)*lambda(k);
for u = -filtSizeL:filtSizeR
    for v = -filtSizeL:filtSizeR
        if ( sqrt(u^2+v^2) > filtSize/2 )
            E = 0;
        else
            v_theta = u*cos(theta) - v*sin(theta);
            u_theta = u*sin(theta) + v*cos(theta);
            % grouping fixed so that -1/2 scales both quadratic terms,
            % matching the frequency-domain equation above
            E = 1/(2*pi*sigma_u*sigma_v) * ...
                ( exp(-1/2*(((u_theta-u0)^2/sigma_u^2) + ((v_theta-v0)^2/sigma_v^2))) + ...
                  exp(-1/2*(((u_theta+u0)^2/sigma_u^2) + ((v_theta+v0)^2/sigma_v^2))) );
        end
        f(v+center,u+center) = E;
    end
end
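To make the frequency-domain route concrete, here is a self-contained sketch (the image, center frequency, and widths are illustrative assumptions): build the two-lobe Gabor directly on a centered, normalized frequency grid and apply it by pointwise multiplication of the spectra.
I = double(imread('cameraman.tif'));
[M,N] = size(I);
theta = pi/4;                   % filter orientation
f0 = 0.1;                       % center frequency (cycles/pixel)
sigma_u = 0.05; sigma_v = 0.05; % frequency-domain widths
u0 = f0*cos(theta); v0 = f0*sin(theta);
[u,v] = meshgrid(((0:N-1)-floor(N/2))/N, ((0:M-1)-floor(M/2))/M);
H = exp(-((u-u0).^2/(2*sigma_u^2) + (v-v0).^2/(2*sigma_v^2))) + ...
    exp(-((u+u0).^2/(2*sigma_u^2) + (v+v0).^2/(2*sigma_v^2)));
out = real(ifft2(ifftshift(H) .* fft2(I))); % multiply spectra, invert
figure; imshow(out, []);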

Issues with imgIdx in DescriptorMatcher mexopencv

My idea is simple here. I am using mexopencv and trying to see whether any object present in my current frame matches any image stored in my database. I am using the OpenCV DescriptorMatcher function to train my images.
Here is a snippet I wish to build on top of. It does one-to-one image matching using mexopencv, and can also be extended to an image stream.
function hello
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
train = [];
for i=1:3
    train(i).img = [];
    train(i).points = [];
    train(i).features = [];
end
train(1).img = imread('D:\test\1.jpg');
train(2).img = imread('D:\test\2.png');
train(3).img = imread('D:\test\3.jpg');
for i=1:3
    frameImage = train(i).img;
    framePoints = detector.detect(frameImage);
    frameFeatures = extractor.compute(frameImage, framePoints);
    train(i).points = framePoints;
    train(i).features = frameFeatures;
end
for i = 1:3
    boxfeatures = train(i).features;
    matcher.add(boxfeatures);
end
matcher.train();
camera = cv.VideoCapture;
pause(3); % sometimes necessary
window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
setappdata(window,'flag',false);
while(true)
    sceneImage = camera.read;
    sceneImage = rgb2gray(sceneImage);
    scenePoints = detector.detect(sceneImage);
    sceneFeatures = extractor.compute(sceneImage,scenePoints);
    m = matcher.match(sceneFeatures);
    %{
    %Comments in
    img_no = m.imgIdx;
    img_no = img_no(1);
    %I am planning to do this based on the fact that
    %on a perfect match imgIdx (a 1xN vector) will be filled
    %with the index of the training
    %example 1, 2 or 3
    objPoints = train(img_no+1).points;
    boxImage = train(img_no+1).img;
    ptsScene = cat(1,scenePoints([m.queryIdx]+1).pt);
    ptsScene = num2cell(ptsScene,2);
    ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
    ptsObj = num2cell(ptsObj,2);
    %This is where the problem starts; assuming the
    %above is correct, MATLAB yells this at me:
    %index exceeds matrix dimensions.
    [H,inliers] = cv.findHomography(ptsScene,ptsObj,'Method','Ransac');
    m = m(inliers);
    imgMatches = cv.drawMatches(sceneImage,scenePoints,boxImage,boxPoints,m,...
        'NotDrawSinglePoints',true);
    imshow(imgMatches);
    %Comment out
    %}
    flag = getappdata(window,'flag');
    if isempty(flag) || flag, break; end
    pause(0.0001);
end
Now the issue here is that imgIdx is a 1xN vector, and it contains the indices of different training images, which is expected. Only on a perfect match is imgIdx filled entirely with the matched image's index. So how do I use this vector to pick the right image index? Also, in these two lines I get the error of the index exceeding the matrix dimensions:
ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);
This is unsurprising, since while debugging I saw clearly that m.trainIdx is larger than objPoints, i.e. I am accessing points which I should not, hence the index exceeds the matrix dimensions.
There is scant documentation on the use of imgIdx, so I need help from anybody who has knowledge of this subject.
These are the images I used.
Image1
Image2
Image3
1st update after @Amro's response:
With the ratio of min distance to distance at 3.6, I get the following response.
With the ratio of min distance to distance at 1.6, I get the following response.
I think it is easier to explain with code, so here it goes :)
%% init
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
urls = {
'http://i.imgur.com/8Pz4M9q.jpg?1'
'http://i.imgur.com/1aZj0MI.png?1'
'http://i.imgur.com/pYepuzd.jpg?1'
};
N = numel(urls);
train = struct('img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));
%% training
for i=1:N
    % read image
    train(i).img = imread(urls{i});
    if ~ismatrix(train(i).img)
        train(i).img = rgb2gray(train(i).img);
    end
    % extract keypoints and compute features
    train(i).pts = detector.detect(train(i).img);
    train(i).feat = extractor.compute(train(i).img, train(i).pts);
    % add to training set to match against
    matcher.add(train(i).feat);
end
% build index
matcher.train();
%% testing
% lets create a distorted query image from one of the training images
% (rotation+shear transformations)
t = -pi/3; % -60 degrees angle
tform = [cos(t) -sin(t) 0; 0.5*sin(t) cos(t) 0; 0 0 1];
img = imwarp(train(3).img, affine2d(tform)); % try all three images here!
% detect fetures in query image
pts = detector.detect(img);
feat = extractor.compute(img, pts);
% match against training images
m = matcher.match(feat);
% keep only good matches
%hist([m.distance])
m = m([m.distance] < 3.6*min([m.distance]));
% sort by distances, and keep at most the first/best 200 matches
[~,ord] = sort([m.distance]);
m = m(ord);
m = m(1:min(200,numel(m)));
% naive classification (majority vote)
tabulate([m.imgIdx]) % how many matches each training image received
idx = mode([m.imgIdx]);
% matches with keypoints belonging to chosen training image
mm = m([m.imgIdx] == idx);
% estimate homography (used to locate object in query image)
ptsQuery = num2cell(cat(1, pts([mm.queryIdx]+1).pt), 2);
ptsTrain = num2cell(cat(1, train(idx+1).pts([mm.trainIdx]+1).pt), 2);
[H,inliers] = cv.findHomography(ptsTrain, ptsQuery, 'Method','Ransac');
% show final matches
imgMatches = cv.drawMatches(img, pts, ...
train(idx+1).img, train(idx+1).pts, ...
mm(logical(inliers)), 'NotDrawSinglePoints',true);
% apply the homography to the corner points of the training image
[h,w] = size(train(idx+1).img);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
% show where the training object is located in the query image
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
The result:
Note that since you did not post any testing images (in your code you are taking input from the webcam), I created one by distorting one of the training images and using it as a query image. I am using functions from certain MATLAB toolboxes (imwarp and such), but those are non-essential to the demo and you could replace them with equivalent OpenCV ones...
I must say that this approach is not the most robust one. Consider using other techniques such as the bag-of-words model, which OpenCV already implements.