Extract Freeman chain code of an object - MATLAB

I am trying this code to generate a Freeman chain code based on the code in https://www.crisluengo.net/archives/324, but it uses DIPimage. Does someone have an idea how to bypass the dip_array function?
Code:
clc;
clear all;
Image = rgb2gray(imread('https://upload-icon.s3.us-east-2.amazonaws.com/uploads/icons/png/1606078271536061993-512.png'));
BW = imbinarize(Image);
BW = imfill(BW,'holes');
BW = bwareaopen(BW, 100);
BW = padarray(BW,60,60,'both');
BW = imcomplement(BW);
imshow(BW)
[B,L] = bwboundaries(BW,'noholes');
%%%%%%%https://www.crisluengo.net/archives/324%%%%%
directions = [ 1, 0
1,-1
0,-1
-1,-1
-1, 0
-1, 1
0, 1
1, 1];
indx = find(dip_array(img),1)-1;
sz = imsize(img);
start = [floor(indx/sz(2)),0];
start(2) = indx-(start(1)*sz(2));
cc = []; % The chain code
coord = start; % Coordinates of the current pixel
dir = 1; % The starting direction
while 1
    newcoord = coord + directions(dir+1,:);
    if all(newcoord>=0) && all(newcoord<sz) ...
            && img(newcoord(1),newcoord(2))
        cc = [cc,dir];
        coord = newcoord;
        dir = mod(dir+2,8);
    else
        dir = mod(dir-1,8);
    end
    if all(coord==start) && dir==1 % back to starting situation
        break;
    end
end

I don't want to translate the whole code, I don't have time right now, but I can give a few pointers:
dip_array(img) extracts the MATLAB array with the pixel values that is inside the dip_image object img. If you use BW as input image here, you can simply remove the call to dip_array: indx = find(BW,1)-1.
imsize(img) returns the sizes of the image img. The MATLAB function size is equivalent (in this particular case).
Dimensions for the dip_image object are different from those for MATLAB arrays: they are indexed as img(x,y), whereas MATLAB arrays are indexed as BW(y,x).
Indices for dip_image objects start at 0, not at 1 as MATLAB arrays do.
These last two points change how you'd compute start. I think it'd be something like this:
indx = find(BW,1);
sz = size(BW);
start = [1,floor((indx-1)/sz(1))+1];
start(1) = indx-((start(2)-1)*sz(1));
But it's easier to use ind2sub (not sure why I did the explicit calculation in the blog post):
indx = find(BW,1);
sz = size(BW);
[start(1),start(2)] = ind2sub(sz,indx);
You also probably want to swap the two columns of directions for the same reason, and change all(newcoord>=0) && all(newcoord<sz) into all(newcoord>0) && all(newcoord<=sz).
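Putting these pointers together, here is a minimal sketch of the whole tracer adapted to a plain MATLAB logical array (1-based (row,col) indexing, no DIPimage). It assumes the BW from the question contains a single 8-connected object and that starting from the first pixel returned by find (column-major scan) is acceptable; treat it as a starting point rather than a drop-in solution:
% Minimal sketch: Freeman chain code tracer on a plain MATLAB logical array.
% Directions are (row,col) offsets; dir 0 = east, numbered counter-clockwise.
directions = [ 0, 1      % 0: east
              -1, 1      % 1: north-east
              -1, 0      % 2: north
              -1,-1      % 3: north-west
               0,-1      % 4: west
               1,-1      % 5: south-west
               1, 0      % 6: south
               1, 1];    % 7: south-east
sz = size(BW);
[r,c] = ind2sub(sz, find(BW,1));   % first foreground pixel (column-major scan)
start = [r, c];
cc = [];            % the chain code
coord = start;      % coordinates of the current pixel
dir = 1;            % the starting direction
while true
    newcoord = coord + directions(dir+1,:);
    if all(newcoord >= 1) && all(newcoord <= sz) ...
            && BW(newcoord(1), newcoord(2))
        cc = [cc, dir];
        coord = newcoord;
        dir = mod(dir+2, 8);
    else
        dir = mod(dir-1, 8);
    end
    if all(coord == start) && dir == 1   % back to the starting situation
        break;
    end
end
Afterwards cc holds the Freeman chain code, with 0 meaning east and the codes increasing counter-clockwise.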

How do I find local threshold for coefficients in image compression using DWT in MATLAB

I'm trying to write an image compression script in MATLAB using a multi-level 3D DWT (color image). Along the way, I want to apply thresholding to the coefficient matrices, using both global and local thresholds.
I'd like to use the formula below to calculate my local threshold: T = sigma * sqrt(2 * log2(N)),
where sigma is the variance and N is the number of elements.
Global thresholding works fine, but my problem is that the calculated local threshold is (most often!) greater than the maximum band coefficient, so no thresholding is applied.
Everything else works fine and I get a result too, but I suspect the local threshold is miscalculated. Also, the resulting image is larger than the original!
I'd appreciate any help on the correct way to calculate the local threshold, or whether there's a built-in MATLAB function for it.
Here's an example output:
Here's my code:
clear;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% COMPRESSION %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% read base image
% dwt 3/5-L on base images
% quantize coeffs (local/global)
% count zero value-ed coeffs
% calculate mse/psnr
% save and show result
% read images
base = imread('circ.jpg');
fam = 'haar'; % wavelet family
lvl = 3; % wavelet depth
% set to 1 to apply global thr
thr_type = 0;
% global threshold value
gthr = 180;
% convert base to grayscale
%base = rgb2gray(base);
% apply dwt on base image
dc = wavedec3(base, lvl, fam);
% extract coeffs
ll_base = dc.dec{1};
lh_base = dc.dec{2};
hl_base = dc.dec{3};
hh_base = dc.dec{4};
ll_var = var(ll_base, 0);
lh_var = var(lh_base, 0);
hl_var = var(hl_base, 0);
hh_var = var(hh_base, 0);
% count number of elements
ll_n = numel(ll_base);
lh_n = numel(lh_base);
hl_n = numel(hl_base);
hh_n = numel(hh_base);
% find local threshold
ll_t = ll_var * (sqrt(2 * log2(ll_n)));
lh_t = lh_var * (sqrt(2 * log2(lh_n)));
hl_t = hl_var * (sqrt(2 * log2(hl_n)));
hh_t = hh_var * (sqrt(2 * log2(hh_n)));
% global
if thr_type == 1
ll_t = gthr; lh_t = gthr; hl_t = gthr; hh_t = gthr;
end
% count zero values in bands
ll_size = size(ll_base);
lh_size = size(lh_base);
hl_size = size(hl_base);
hh_size = size(hh_base);
% count zero values in new band matrices
ll_zeros = sum(ll_base==0,'all');
lh_zeros = sum(lh_base==0,'all');
hl_zeros = sum(hl_base==0,'all');
hh_zeros = sum(hh_base==0,'all');
% initiate new matrices
ll_new = zeros(ll_size);
lh_new = zeros(lh_size);
hl_new = zeros(hl_size);
hh_new = zeros(hh_size);
% apply thresholding on bands
% if new value < thr => 0
% otherwise, keep the previous value
for id=1:ll_size(1)
    for idx=1:ll_size(2)
        if ll_base(id,idx) < ll_t
            ll_new(id,idx) = 0;
        else
            ll_new(id,idx) = ll_base(id,idx);
        end
    end
end
for id=1:lh_size(1)
    for idx=1:lh_size(2)
        if lh_base(id,idx) < lh_t
            lh_new(id,idx) = 0;
        else
            lh_new(id,idx) = lh_base(id,idx);
        end
    end
end
for id=1:hl_size(1)
    for idx=1:hl_size(2)
        if hl_base(id,idx) < hl_t
            hl_new(id,idx) = 0;
        else
            hl_new(id,idx) = hl_base(id,idx);
        end
    end
end
for id=1:hh_size(1)
    for idx=1:hh_size(2)
        if hh_base(id,idx) < hh_t
            hh_new(id,idx) = 0;
        else
            hh_new(id,idx) = hh_base(id,idx);
        end
    end
end
% count zeros of the new matrices
ll_new_size = size(ll_new);
lh_new_size = size(lh_new);
hl_new_size = size(hl_new);
hh_new_size = size(hh_new);
% count number of zeros among new values
ll_new_zeros = sum(ll_new==0,'all');
lh_new_zeros = sum(lh_new==0,'all');
hl_new_zeros = sum(hl_new==0,'all');
hh_new_zeros = sum(hh_new==0,'all');
% set new band matrices
dc.dec{1} = ll_new;
dc.dec{2} = lh_new;
dc.dec{3} = hl_new;
dc.dec{4} = hh_new;
% count how many coeff. were thresholded
ll_zeros_diff = ll_new_zeros - ll_zeros;
lh_zeros_diff = lh_new_zeros - lh_zeros;
hl_zeros_diff = hl_new_zeros - hl_zeros;
hh_zeros_diff = hh_new_zeros - hh_zeros;
% show coeff. matrices vs. thresholded version
figure
colormap(gray);
subplot(2,4,1); imagesc(ll_base); title('LL');
subplot(2,4,2); imagesc(lh_base); title('LH');
subplot(2,4,3); imagesc(hl_base); title('HL');
subplot(2,4,4); imagesc(hh_base); title('HH');
subplot(2,4,5); imagesc(ll_new); title({'LL thr';ll_zeros_diff});
subplot(2,4,6); imagesc(lh_new); title({'LH thr';lh_zeros_diff});
subplot(2,4,7); imagesc(hl_new); title({'HL thr';hl_zeros_diff});
subplot(2,4,8); imagesc(hh_new); title({'HH thr';hh_zeros_diff});
% idwt to reconstruct compressed image
cmp = waverec3(dc);
cmp = uint8(cmp);
% calculate mse/psnr
D = abs(cmp - base) .^2;
mse = sum(D(:))/numel(base);
psnr = 10*log10(255*255/mse);
% show images and mse/psnr
figure
subplot(1,2,1);
imshow(base); title("Original"); axis square;
subplot(1,2,2);
imshow(cmp); colormap(gray); axis square;
msg = strcat("MSE: ", num2str(mse), " | PSNR: ", num2str(psnr));
title({"Compressed";msg});
% save image locally
imwrite(cmp, 'compressed.png');
I solved the problem.
The sigma in the local threshold formula is not the variance, it's the standard deviation. I applied these steps:
used std2() (rather than stdfilt()) to find the standard deviation of my coeff. matrices (thanks to @Rotem for pointing this out)
used numel() to count the number of elements in the coeff. matrices
This is a summary of the process; it's the same for the other bands (LH, HL, HH):
[c, s] = wavedec2(image, wname, level); % apply dwt
ll = appcoef2(c, s, wname); % find LL
ll_std = std2(ll); % find standard deviation
ll_n = numel(ll); % find number of coeffs in LL
ll_t = ll_std * (sqrt(2 * log2(ll_n))); % the local threshold formula
ll_new = ll .* double(ll > ll_t); % thresholding
replace the LL values in c (in a for loop)
reconstruct by applying the IDWT using waverec2 (a complete sketch follows)
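For reference, a minimal 2-D sketch of the corrected workflow described above. The image, wavelet name and level are placeholder choices, and writing the thresholded LL band back by direct indexing relies on wavedec2 storing the approximation coefficients at the start of c:
% Minimal sketch of the corrected 2-D workflow (grayscale image assumed)
img   = im2double(imread('cameraman.tif'));   % placeholder image
wname = 'haar';
lvl   = 1;
[c, s] = wavedec2(img, lvl, wname);           % apply dwt
ll     = appcoef2(c, s, wname, lvl);          % find LL
ll_std = std2(ll);                            % standard deviation (not variance)
ll_n   = numel(ll);                           % number of coeffs in LL
ll_t   = ll_std * sqrt(2 * log2(ll_n));       % local threshold formula
ll_new = ll .* double(ll > ll_t);             % thresholding as in the summary (abs(ll) is often used instead)
c(1:numel(ll_new)) = ll_new(:);               % replace the LL values in c
rec = waverec2(c, s, wname);                  % reconstruct with the IDWT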
This is a sample output:

MATLAB to Scilab conversion: mfile2sci error "File contains no instruction"

I am very new to Scilab, but so far I have not been able to find an answer (either here or via Google) to my question. I'm sure it's a simple solution, but I'm at a loss. I have a lot of MATLAB scripts I wrote in grad school, but now that I'm out of school, I no longer have access to MATLAB (and can't justify the cost). Scilab looked like the best open alternative. I'm trying to convert my .m files to Scilab-compatible versions using mfile2sci, but when running the mfile2sci GUI, I get the error/message shown below. Attached is the original code from the M-file, in case it's relevant.
I searched Stack Overflow and companion sites, Google, and the Scilab documentation.
The M-file code follows (it's a super basic MATLAB script as part of an old homework question -- I chose it as it's the shortest, most straightforward M-file I had):
Mmax = 15;
N = 20;
T = 2000;
%define upper limit for sparsity of signal
smax = 15;
mNE = zeros(smax,Mmax);
mESR= zeros(smax,Mmax);
for M = 1:Mmax
aNormErr = zeros(smax,1);
aSz = zeros(smax,1);
ESR = zeros(smax,1);
for s=1:smax % for-loop to loop script smax times
normErr = zeros(1,T);
vESR = zeros(1,T);
sz = zeros(1,T);
for t=1:T %for-loop to carry out 2000 trials per s-value
esr = 0;
A = randn(M,N); % generate random MxN matrix
[M,N] = size(A);
An = zeros(M,N); % initialize normalized matrix
for h = 1:size(A,2) % normalize columns of matrix A
V = A(:,h)/norm(A(:,h));
An(:,h) = V;
end
A = An; % replace A with its column-normalized counterpart
c = randperm(N,s); % create random support vector with s entries
x = zeros(N,1); % initialize vector x
for i = 1:size(c,2)
val = (10-1)*rand + 1;% generate interval [1,10]
neg = mod(randi(10),2); % include [-10,-1]
if neg~=0
val = -1*val;
end
x(c(i)) = val; %replace c(i)th value of x with the nonzero value
end
y = A*x; % generate measurement vector (y)
R = y;
S = []; % initialize array to store selected columns of A
indx = []; % vector to store indices of selected columns
coeff = zeros(1,s); % vector to store coefficients of approx.
stop = 10; % init. stop condition
in = 0; % index variable
esr = 0;
xhat = zeros(N,1); % intialize estimated x signal
while (stop>0.5 && size(S,2)<smax)
%MAX = abs(A(:,1)'*R);
maxV = zeros(1,N);
for i = 1:size(A,2)
maxV(i) = abs(A(:,i)'*R);
end
in = find(maxV == max(maxV));
indx = [indx in];
S = [S A(:,in)];
coeff = [coeff R'*S(:,size(S,2))]; % update coefficient vector
for w=1:size(S,2)
r = y - ((R'*S(:,w))*S(:,w)); % update residuals
if norm(r)<norm(R)
index = w;
end
R = r;
stop = norm(R); % update stop condition
end
for j=1:size(S,2) % place coefficients into xhat at correct indices
xhat(indx(j))=coeff(j);
end
nE = norm(x-xhat)/norm(x); % calculate normalized error for this estimate
%esr = 0;
indx = sort(indx);
c = sort(c);
if isequal(indx,c)
esr = esr+1;
end
end
vESR(t) = esr;
sz(t) = size(S,2);
normErr(t) = nE;
end
%avsz = sum(sz)/T;
aSz(s) = sum(sz)/T;
%aESR = sum(vESR)/T;
ESR(s) = sum(vESR)/T;
%avnormErr = sum(normErr)/T; % produce average normalized error for these run
aNormErr(s) = sum(normErr)/T; % add new avnormErr to vector of all av norm errors
end
% just put this here to view the vector
mNE(:,M) = aNormErr;
mESR(:,M) = ESR;
end % closes for M = 1:Mmax
mNE  % reshape(mNE,[],Mmax)
mESR % reshape(mESR,[],Mmax)
figure
dimx = [1 Mmax];
dimy = [1 smax];
imagesc(dimx,dimy,mESR)
colormap gray
strESR = sprintf('Average ESR, N=%d',N);
title(strESR);
xlabel('M');
ylabel('s');
strNE = sprintf('Average Normed Error, N=%d',N);
figure
imagesc(dimx,dimy,mNE)
colormap gray
title(strNE)
xlabel('M');
ylabel('s');
The command used (and results) follow:
--> mfile2sci
ans =
[]
****** Beginning of mfile2sci() session ******
File to convert: C:/Users/User/Downloads/WTF_new.m
Result file path: C:/Users/User/DOWNLO~1/
Recursive mode: OFF
Only double values used in M-file: NO
Verbose mode: 3
Generate formatted code: NO
M-file reading...
M-file reading: Done
Syntax modification...
Syntax modification: Done
File contains no instruction, no translation made...
****** End of mfile2sci() session ******
To convert the foo.m file one has to enter
mfile2sci <path>/foo.m
where <path> stands for the path of the directory where foo.m is. The result is written in <path>/foo.sci.
Remove the ```` at the beginning of each line and the conversion will proceed normally. However, don't expect to obtain a working .sci file, as the m2sci converter is (to me) still an experimental tool!

Issues with imgIdx in DescriptorMatcher mexopencv

My idea is simple here. I am using mexopencv and trying to see whether there is any object present in my current frame that matches any image stored in my database. I am using the OpenCV DescriptorMatcher function to train my images.
Here is a snippet I am hoping to build on top of; it does one-to-one image matching using mexopencv and can also be extended to an image stream.
function hello
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
train = [];
for i=1:3
train(i).img = [];
train(i).points = [];
train(i).features = [];
end;
train(1).img = imread('D:\test\1.jpg');
train(2).img = imread('D:\test\2.png');
train(3).img = imread('D:\test\3.jpg');
for i=1:3
frameImage = train(i).img;
framePoints = detector.detect(frameImage);
frameFeatures = extractor.compute(frameImage , framePoints);
train(i).points = framePoints;
train(i).features = frameFeatures;
end;
for i = 1:3
boxfeatures = train(i).features;
matcher.add(boxfeatures);
end;
matcher.train();
camera = cv.VideoCapture;
pause(3);%Sometimes necessary
window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
setappdata(window,'flag',false);
while(true)
sceneImage = camera.read;
sceneImage = rgb2gray(sceneImage);
scenePoints = detector.detect(sceneImage);
sceneFeatures = extractor.compute(sceneImage,scenePoints);
m = matcher.match(sceneFeatures);
%{
%Comments in
img_no = m.imgIdx;
img_no = img_no(1);
%I am planning to do this based on the fact that
%on a perfect match imgIdx a 1xN will be filled
%with the index of the training
%example 1,2 or 3
objPoints = train(img_no+1).points;
boxImage = train(img_no+1).img;
ptsScene = cat(1,scenePoints([m.queryIdx]+1).pt);
ptsScene = num2cell(ptsScene,2);
ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);
%This is where the problem starts here, assuming the
%above is correct , Matlab yells this at me
%index exceeds matrix dimensions.
[H,inliers] = cv.findHomography(ptsScene,ptsObj,'Method','Ransac');
m = m(inliers);
imgMatches = cv.drawMatches(sceneImage,scenePoints,boxImage,boxPoints,m,...
'NotDrawSinglePoints',true);
imshow(imgMatches);
%Comment out
%}
flag = getappdata(window,'flag');
if isempty(flag) || flag, break; end
pause(0.0001);
end
Now the issue here is that imgIdx is a 1xN matrix, and it contains the indices of the different training images, which is obvious. Only on a perfect match is the matrix imgIdx completely filled with the index of the matched training image. So, how do I use this matrix to pick the right image index? Also,
in these two lines, I get the error of the index exceeding the matrix dimensions:
ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);
This is obvious, since while debugging I saw clearly that the indices in m.trainIdx are greater than the size of objPoints, i.e. I am accessing points which I should not, hence the index exceeds the matrix dimensions.
There is scant documentation on the use of imgIdx, so anybody who has knowledge of this subject, I need help.
These are the images I used.
Image1
Image2
Image3
1st update after @Amro's response:
With the ratio of min distance to distance at 3.6 , I get the following response.
With the ratio of min distance to distance at 1.6 , I get the following response.
I think it is easier to explain with code, so here it goes :)
%% init
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
urls = {
'http://i.imgur.com/8Pz4M9q.jpg?1'
'http://i.imgur.com/1aZj0MI.png?1'
'http://i.imgur.com/pYepuzd.jpg?1'
};
N = numel(urls);
train = struct('img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));
%% training
for i=1:N
% read image
train(i).img = imread(urls{i});
if ~ismatrix(train(i).img)
train(i).img = rgb2gray(train(i).img);
end
% extract keypoints and compute features
train(i).pts = detector.detect(train(i).img);
train(i).feat = extractor.compute(train(i).img, train(i).pts);
% add to training set to match against
matcher.add(train(i).feat);
end
% build index
matcher.train();
%% testing
% lets create a distorted query image from one of the training images
% (rotation+shear transformations)
t = -pi/3; % -60 degrees angle
tform = [cos(t) -sin(t) 0; 0.5*sin(t) cos(t) 0; 0 0 1];
img = imwarp(train(3).img, affine2d(tform)); % try all three images here!
% detect fetures in query image
pts = detector.detect(img);
feat = extractor.compute(img, pts);
% match against training images
m = matcher.match(feat);
% keep only good matches
%hist([m.distance])
m = m([m.distance] < 3.6*min([m.distance]));
% sort by distances, and keep at most the first/best 200 matches
[~,ord] = sort([m.distance]);
m = m(ord);
m = m(1:min(200,numel(m)));
% naive classification (majority vote)
tabulate([m.imgIdx]) % how many matches each training image received
idx = mode([m.imgIdx]);
% matches with keypoints belonging to chosen training image
mm = m([m.imgIdx] == idx);
% estimate homography (used to locate object in query image)
ptsQuery = num2cell(cat(1, pts([mm.queryIdx]+1).pt), 2);
ptsTrain = num2cell(cat(1, train(idx+1).pts([mm.trainIdx]+1).pt), 2);
[H,inliers] = cv.findHomography(ptsTrain, ptsQuery, 'Method','Ransac');
% show final matches
imgMatches = cv.drawMatches(img, pts, ...
train(idx+1).img, train(idx+1).pts, ...
mm(logical(inliers)), 'NotDrawSinglePoints',true);
% apply the homography to the corner points of the training image
[h,w] = size(train(idx+1).img);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
% show where the training object is located in the query image
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
The result:
Note that since you did not post any testing images (in your code you are taking input from the webcam), I created one by distorting one of the training images and using it as a query image. I am using functions from certain MATLAB toolboxes (imwarp and such), but those are non-essential to the demo and you could replace them with equivalent OpenCV ones...
I must say that this approach is not the most robust one. Consider using other techniques such as the bag-of-words model, which OpenCV already implements.

Matlab figure keeps the history of the previous images

I am working on rotating an image manually in MATLAB. Each time I run my code with a different image, the previously rotated images are still shown in the figure. I couldn't figure it out. Any help would be appreciated.
The code is here:
[screenshot]
im1 = imread('gradient.jpg');
[h, w, p] = size(im1);
theta = pi/12;
hh = round( h*cos(theta) + w*abs(sin(theta))); %Round to nearest integer
ww = round( w*cos(theta) + h*abs(sin(theta))); %Round to nearest integer
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
T = [w/2; h/2];
RT = [inv(R) T; 0 0 1];
for z = 1:p
    for x = 1:ww
        for y = 1:hh
            % Using matrix multiplication
            i = zeros(3,1);
            i = RT*[x-ww/2; y-hh/2; 1];
            %% Nearest Neighbour
            i = round(i);
            if i(1)>0 && i(2)>0 && i(1)<=w && i(2)<=h
                im2(y,x,z) = im1(i(2),i(1),z);
            end
        end
    end
end
x=1:ww;
y=1:hh;
[X, Y] = meshgrid(x,y); % Generate X and Y arrays for 3-D plots
orig_pos = [X(:)' ; Y(:)' ; ones(1,numel(X))]; % Number of elements in array or subscripted array expression
orig_pos_2 = [X(:)'-(ww/2) ; Y(:)'-(hh/2) ; ones(1,numel(X))];
new_pos = round(RT*orig_pos_2); % Round to nearest neighbour
% Check if new positions fall from map:
valid_pos = new_pos(1,:)>=1 & new_pos(1,:)<=w & new_pos(2,:)>=1 & new_pos(2,:)<=h;
orig_pos = orig_pos(:,valid_pos);
new_pos = new_pos(:,valid_pos);
siz = size(im1);
siz2 = size(im2);
% Expand the 2D indices to include the third dimension.
ind_orig_pos = sub2ind(siz2,orig_pos(2*ones(p,1),:),orig_pos(ones(p,1),:), (1:p)'*ones(1,length(orig_pos)));
ind_new_pos = sub2ind(siz, new_pos(2*ones(p,1),:), new_pos(ones(p,1),:), (1:p)'*ones(1,length(new_pos)));
im2(ind_orig_pos) = im1(ind_new_pos);
imshow(im2);
There is a problem with the initialization of im2, or rather, the lack of it. im2 is created in the section shown below:
if i(1)>0 && i(2)>0 && i(1)<=w && i(2)<=h
im2(y,x,z) = im1(i(2),i(1),z);
end
If im2 exists before this code is run and its width or height is larger than that of the image you are generating, the new image will only overwrite the top-left corner of your existing im2. Try initializing im2 by adding
im2 = zeros(hh, ww, p);
before
for z = 1:p
for x = 1:ww
for y = 1:hh
...
As a bonus it might make your code a little faster since Matlab won't have to resize im2 as it grows in the loop.
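A small additional suggestion (mine, not part of the answer above): zeros(hh, ww, p) makes im2 a double array, and imshow expects double images to be in [0,1], so an 8-bit im1 copied into it will display mostly as saturated white. Preallocating with the same class as im1 avoids that surprise:
im2 = zeros(hh, ww, p, class(im1));   % same class as the input image
% or, equivalently:
im2 = zeros(hh, ww, p, 'like', im1);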

Batch Processing Image Files in MATLAB

I'm a beginner in MATLAB and image processing.
I encountered a problem while trying to use batch processing and hope someone will be able to enlighten me. Thanks.
Following the example from MATLAB, I did these:
p = which('Picture1.tif');
filelist = dir([fileparts(p) filesep 'Picture*.tif']);
fileNames = {filelist.name}'
I = imread(fileNames{1});
imshow(I)
Because I wanted to select the region of interest,
BW = roipoly(I);
BW1 = not(BW);
N = roifill(I,BW1);
After I selected the ROI, I created a function in the editor:
function Segout = DetectLines(N)
[junk threshold] = edge(N, 'sobel');
fudgeFactor = .5;
BWs = edge(N, 'sobel', threshold*fudgeFactor);
se90 = strel('line', 3, 90);
se0 = strel('line', 3, 0);
BWsdil = imdilate(BWs, [se90 se0]);
BWdfill = imfill(BWsdil, 'holes');
BWnobord = imclearborder(BWdfill, 4);
seD = strel('diamond', 1);
BWfinal = imerode(BWnobord, seD);
BWfinal = imerode(BWfinal, seD);
BWoutline = bwperim(BWfinal);
Segout = N;
Segout(BWoutline) = 255;
end
Back to the command window, I typed;
Segout = DetectLines(N);
figure, imshow(Segout)
The figure that came out was what I expected.
The problem comes when I try to loop over images; I'm not sure if I've done it correctly.
Following the example, I created another function in the editor;
function SegoutSequence = BatchProcessFiles(fileNames, fcn)
N = imread(fileNames{1});
[mrows, ncols] = size(N);
nImages = length(fileNames);
SegoutSequence = zeros(mrows, ncols, nImages, class(N));
parfor (k = 1:nImages)
N = imread(fileNames{k});
SegoutSequence(:,:,k) = fcn(N);
end
end
At the command window, I typed:
SegoutSequence = BatchProcessFiles(fileNames, @DetectLines);
implay(SegoutSequence)
However, the result was not what I wanted it to be. It was not the ROI that I wanted. Can anyone help me with this? Thank you very much.
Picture1:
Picture1 after selecting ROI:
Looking at your code, you only selected the ROI once, for the image you tested individually.
However, when you call the BatchProcessFiles function, you don't select a region of interest, and the DetectLines function is applied to the raw images... Therefore you need to pass the mask you created with roipoly to your BatchProcessFiles function to do the same on all the images.
As a side note, you might get better results if you try the Hough transform to detect the lines (a minimal sketch of that idea follows below).
Also, your code breaks if the images are not grayscale (you can add a call to rgb2gray() to be on the safe side).
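A minimal sketch of that Hough-based alternative, using Image Processing Toolbox functions on a grayscale image N; all parameter values below are illustrative, not tuned:
BW = edge(N, 'canny');                 % binary edge map
[H, theta, rho] = hough(BW);           % Hough transform
peaks = houghpeaks(H, 5, 'Threshold', 0.3*max(H(:)));
lines = houghlines(BW, theta, rho, peaks, 'FillGap', 5, 'MinLength', 20);
imshow(N); hold on
for k = 1:numel(lines)                 % overlay the detected line segments
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
end
hold off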
Sample solution:
MATLAB
I = imread(fileNames{1});
BW = not( roipoly(I) );
SegoutSequence = BatchProcessFiles(fileNames, BW, @DetectLines);
implay(SegoutSequence)
BatchProcessFiles.m
function SegoutSequence = BatchProcessFiles(fileNames, Mask, fcn)
% ...
parfor (k = 1:nImages)
N = imread(fileNames{k});
if ndims(N)==3, N=rgb2gray(N); end
if ~isempty(Mask), N = roifill(N, Mask); end
SegoutSequence(:,:,k) = fcn(N);
end
end
or call it as: BatchProcessFiles(fileNames, [], @DetectLines) if you don't need to apply the mask.
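For completeness, here is a sketch of the full function, with the elided part filled in from the question's own BatchProcessFiles (preallocation) and the mask handling added as suggested above; treat it as a starting point:
function SegoutSequence = BatchProcessFiles(fileNames, Mask, fcn)
% Batch-apply fcn to each file, filling outside the ROI mask first (if given)
firstImg = imread(fileNames{1});
if ndims(firstImg) == 3, firstImg = rgb2gray(firstImg); end
[mrows, ncols] = size(firstImg);
nImages = length(fileNames);
SegoutSequence = zeros(mrows, ncols, nImages, class(firstImg));
parfor k = 1:nImages
    N = imread(fileNames{k});
    if ndims(N) == 3, N = rgb2gray(N); end
    if ~isempty(Mask), N = roifill(N, Mask); end
    SegoutSequence(:,:,k) = fcn(N);
end
end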