Eliminate all vertical and diagonal lines in MATLAB

Here is the input image 5.png:
Here is my code:
clear all; close all; clc;
%Input Image
A = imread('C:\Users\efu\Desktop\5.png');
% figure, imshow(A);
C=medfilt2(A,[3 5]);
% figure,imshow(C);
D=imfill(C);
% figure,imshow(D);
%Image obtained using MATLAB function 'edge'
E=edge(D,'canny',[0.01 .02],3);
figure, imshow(E); title('Image obtained using MATLAB function');
image=E;
img=im2bw(image);
% imshow(img)
se = strel('line',3,0);
zz = imerode(img,se);
figure, imshow(zz);
Output after canny edge detection:
After Eroding:
Here the problem is that after eroding, all horizontal edges are broken, which I don't want. I want to extract all horizontal lines without breaking them, and at the same time remove all vertical and diagonal lines.
Could someone please modify the code?

It's a hack, but it works. A short note: I would advise against using clear all. It has some negative effects, such as clearing references to your MEX functions; I'm not really sure what that means, but there seems to be no real use for it unless you want to clear global variables.
Basically what I did here was threefold. I added a Gaussian filter to give a bit more tolerance, increased the length of the strel line used for erosion, and searched across angles to give a bit of angle tolerance. In the end you have a bit more than you started with on the horizontal part, but it does clean the image up a lot better. If you wanted, you could add up the zz matrices and threshold the result to get a new binary image that could be a bit closer to your original (see the sketch after the code below). Really enjoyed the question, by the way; it made me look forward to image processing in the fall.
clear; close all;
%Input Image
A = imread('5.png');
% figure, imshow(A);
C=medfilt2(A,[3 5]);
% figure,imshow(C);
D=imfill(C);
% figure,imshow(D);
%Image obtained using MATLAB function 'edge'
E=edge(D,'canny',[0.01 .02],4);
figure, imshow(E); title('Image obtained using MATLAB function');
image=E;
img=double(image);
img = imgaussfilt(img,.5);
% imshow(img)
zz_out = zeros(size(img));
% se = strel('line',3,-90);
% zz = imerode(img,se);
% se2 = strel('line',3,0);
% zz2 = imerode(img,se2);
for ii = -5:.1:5
    se = strel('line',20,ii);
    zz = imerode(img,se);
    zz_out = or(zz,zz_out);
end
% zz_out = img-zz;
figure, imshow(zz_out);
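As a rough sketch of the add-up-and-threshold idea mentioned above (the count threshold is a guess and would need tuning):
acc = zeros(size(img));
for ii = -5:.1:5
    se  = strel('line',20,ii);
    acc = acc + double(imerode(img,se) > 0);  % count the angles at which each pixel survives erosion
end
zz_sum = acc >= 5;                            % threshold the accumulated count (value is a guess)
figure, imshow(zz_sum); title('Accumulated erosions, thresholded');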

Related

Plot digitization in MATLAB using ginput

I'm trying to digitize this image using MATLAB:
I have the following script:
%// Get data from plot
clear all; close all;
%// Input
fname = 'Fig15a.PNG';
xvec = [1e3:1:1e8];
yvec = [1e-4:1:1e-1];
xt = [1e3 1e4 1e5 1e6 1e7 1e8];
yt = [1e-4 1e-3 1e-2 1e-1];
%// Read and plot the image
im = imread(fname);
figure(1), clf
im = im(end:-1:1,:,:);
image(xvec,yvec,im)
axis xy;
grid on;
%// Set ticks
set(gca,'xtick',xt,'ytick',yt); %// Match tick marks
%// Collect data
[x,y] = ginput; %// Click on points, and then hit ENTER to finish
%// Plot collected data
hold on; plot(x,y,'r-o'); hold off;
%// Then save data as:
save Fig15a.mat x y
The script works fine.
Is there a way I can change the x and y axes to a log scale?
I have tried adding the following code in different places without luck:
%// Set Log scale on x and y axes
set(gca,'XScale','log','YScale','log');
Below is a proof of concept that should get you on the right track. I have replaced some things in your original code with what I consider "good practices".
function q36470836
%% // Definitions:
FIG_NUM = 36470836;
%% // Inputs:
fname = 'http://i.stack.imgur.com/2as4t.png';
xt = logspace(3,8,6);
yt = logspace(-4,-1,4);
%% // Init
figure(FIG_NUM); clf
% Read and plot the image
im = imread(fname);
hIMG = imshow(im); axis image;
%// Set ticks
hDigitizer = axes('Color','none',...
    'XLim',[xt(1) xt(end)],'YLim',[yt(1) yt(end)],...
    'XScale','log','YScale','log',...
    'Position',hIMG.Parent.Position .* [1 1 696/785 (609-64+1)/609]);
uistack(hDigitizer,'top'); %// May be required in some cases
grid on; hold on; grid minor;
%// Collect data:
[x,y] = ginput; %// Click on points, and then hit ENTER to finish
%// Plot collected data:
scatter(x,y,'o','MarkerEdgeColor','r');
%// Save data:
save Fig15a.mat x y
Here's an example of what it looks like:
A few notes:
xt, yt may be created in a cleaner fashion using logspace.
It is difficult (possibly impossible) to align the digitization grid with the image correctly, which would inevitably result in errors in your data. Though this can be helped in the following scenarios (for which you will require a vector graphics editor, such as the freeware Inkscape):
If, by any chance, you got this image from a PDF file, where it appears as a vector image (you can test this by zooming in as much as you like without the chart becoming pixelated; this seems to be your case from the way the .png looks), you would be better off saving it as a vector image and then you have two options:
Exporting the image to a bitmap with a greatly increased resolution and then attempting the digitization procedure again.
Saving the vector image as .svg then opening the file using your favorite text editor and getting the exact coordinates of the points.
If the source image is a bitmap (as opposed to a vector graphic), you can "trace the bitmap", thus converting it to a vector graphic, and then proceed as in the previous point.
This solution doesn't (currently) support resizing of the figure.
The magic numbers appearing in the Position setting are scaling factors explained in the image below (and also size(im) is [609 785 3]). These can technically be found using "primitive image processing" but in this case I just hard-coded them explicitly.
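As a rough idea of what that "primitive image processing" could look like (this is just an assumption of mine, not part of the original answer): the plot-box extent can be estimated by looking for long dark rows and columns in the screenshot, which is roughly what the hard-coded 696/785 and (609-64+1)/609 factors encode.
im = imread('http://i.stack.imgur.com/2as4t.png');
if size(im,3) == 3, g = rgb2gray(im); else g = im; end
dark = g < 128;                               % assume the axis/frame lines are dark
rowHits = sum(dark,2);                        % dark pixels per row
colHits = sum(dark,1);                        % dark pixels per column
boxRows = find(rowHits > 0.5*size(dark,2));   % rows crossed by long horizontal lines
boxCols = find(colHits > 0.5*size(dark,1));   % columns crossed by long vertical lines
fprintf('plot box rows %d..%d, columns %d..%d\n', ...
    min(boxRows), max(boxRows), min(boxCols), max(boxCols));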
You can plot in double logarithmic scale with
loglog(x,y);
See help loglog or the documentation for additional information.
For a single logarithmic axis, use
semilogx(x,y);
or
semilogy(x,y);
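For instance, a minimal illustration with arbitrary made-up data:
x = logspace(0,3,50);                         % arbitrary sample data
y = x.^-2;
figure;
subplot(1,3,1); loglog(x,y);   title('loglog');
subplot(1,3,2); semilogx(x,y); title('semilogx');
subplot(1,3,3); semilogy(x,y); title('semilogy');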

Lowpass filter not working

I have a noisy image that I am trying to clean using a lowpass filter (code below, modified from here). The image I get as a result is essentially identical to the one I gave as an input.
I'm not an expert, but my conclusion would be that the input image is so noisy that no patterns are found. Do you agree? Do you have any suggestion on how to interpret the result?
Result from the code:
Input image:
Code:
clear; close all;
frame = 20;
size_y = 512; % This is actually size_x
size_x = 256; % This is actually size_y
roi=5;thresh=100000;
AA = imread('image.png');
A = zeros(size_x, size_y);
A = AA(1:size_x, 1:size_y);
A(isnan(A)) = 0 ;
B = fftshift(fft2(A));
fabs = abs(B);
figure; imshow(B);
local_extr = ordfilt2(fabs, roi^2, ones(roi)); % find local maximum within a roi-by-roi (here 5x5) range
result = (fabs == local_extr) & (fabs > thresh);
[r, c] = find(result);
for i=1:length(r)
    if (r(i)-128)^2+(c(i)-128)^2>thresh % periodic noise locates in the position outside the 20-pixel-radius circle
        B(r(i)-2:r(i)+2,c(i)-2:c(i)+2)=0; % zero the frequency components
    end
end
Inew=ifft2(fftshift(B));
figure;
subplot(2,1,1); imagesc(A), colormap(gray); title('Original image');
subplot(2,1,2);imagesc(real(Inew)),colormap(gray); title('Filtered image');
For filtering this kind of signal, you can try a median filter. It might be more appropriate than a mean or Gaussian filter: the median filter is very effective on "salt and pepper" noise, whereas the mean filter just blurs the noise.
As the signal seems very noisy, you need to experiment to find a good kernel size for the filter. You can also try increasing the contrast of the image (after filtering) in order to better see the differences between gray levels.
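A minimal sketch of that suggestion (the kernel size and the use of imadjust are assumptions to be tuned, not values from the answer):
A = imread('image.png');                      % the noisy input from the question
if size(A,3) == 3, A = rgb2gray(A); end
med = medfilt2(A,[5 5]);                      % median filter; try different kernel sizes
adj = imadjust(med);                          % stretch the contrast after filtering
figure;
subplot(1,2,1); imshow(A,[]);   title('Original');
subplot(1,2,2); imshow(adj,[]); title('Median filtered + contrast stretch');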

MATLAB 3D sparse reconstruction issues. Somehow I can't get the final scatter plot of the points to work

I'm trying to reconstruct the shape of a sail. I'm using the 3D sparse reconstruction method. I'm using two cameras with which I took two pictures. I managed to do the calibration of such cameras too. In the pictures it is possible to see the checkerboard and the code I wrote detects it properly.
Now, since my pictures are black and white and the quality of the cameras is quite low, I cannot use the detectFeatures method properly. Problems arise when I try to use matchFeatures. To overcome this problem I decided to use the cpselect command instead. By doing so I can manually click on the features, and the matching between points from the two views now seems to be correct. However, when I carry on with the code and try to reconstruct the 3D plot, I get points all over the place; the result looks deformed. The plot clearly does not represent the sail and I don't know why.
The code follows.
Thank you in advance
% % Load precomputed camera parameters
load IP_CalibrationCarlos.mat %Calibration feature
%
I1 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan1.1.jpg');
I2 = imread('/Users/riccardocamin/Documents/MATLAB/Frames/Scan2.1.jpg');
%
[I1, newOrigin1] = undistortImage(I1, cameraParameters, 'OutputView', 'full');
[I2, newOrigin2] = undistortImage(I2, cameraParameters, 'OutputView', 'full');
%
I1 = imcrop(I1, [80 10 1040 1300]); %Necessary so images have same size
I2 = imcrop(I2, [0 10 1067 1300]);
%
squareSize = 82; % checkerboard square size in millimeters
%
[imagePoints, boardSize, pairsUsed] = detectCheckerboardPoints(rgb2gray(I1), rgb2gray(I2));
[refPoints1, boardSize] = detectCheckerboardPoints(rgb2gray(I1));
[refPoints2, boardSize] = detectCheckerboardPoints(rgb2gray(I2));
%
% % Translate detected points back into the original image coordinates
refPoints1 = bsxfun(@plus, refPoints1, newOrigin1);
refPoints2 = bsxfun(@plus, refPoints2, newOrigin2);
%
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
%
[R1, t1] = extrinsics(refPoints1, worldPoints, cameraParameters); %R = r t = translation
[R2, t2] = extrinsics(refPoints2, worldPoints, cameraParameters);
%
% % Calculate camera matrices using the |cameraMatrix| function.
cameraMatrix1 = cameraMatrix(cameraParameters, R1, t1);
cameraMatrix2 = cameraMatrix(cameraParameters, R2, t2);
%
cpselect(I1, I2); % Save them as 'matchedPoints1'and 'matchedPoints2'
%
indexPairs = matchFeatures(matchedPoints1, matchedPoints2);
% Visualize correspondences
figure;
showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
title('Matched Features');
%
[points3D] = triangulate(matchedPoints1, matchedPoints2, ...
cameraMatrix1, cameraMatrix2);
%
x = -points3D(:,1);
y = -points3D(:,2);
z = -points3D(:,3);
figure
scatter3(x,y,z, 25);
xlabel('X');
ylabel('Y');
zlabel('Z');
The first problem you have is that you are cropping the images. Once you do that, all your coordinates are off. You do not need to do that here, because the images do not need to be the same size.
The second question is how precisely did you select the matching points? From the picture you have posted, it seems that your matches can be off by a few pixels, which can result in a large reconstruction error. Can you try finding the centroids of the spots on the sail using regionprops?
Also, are the two cameras stationary relative to each other? If so, then you may be better off calibrating the stereo pair, and doing the dense reconstruction as in this example. In that case, the two images would have to be of the same size.
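A hedged sketch of the regionprops idea (this assumes the spots are darker than the sail, and the thresholding and area cutoff are guesses; on older releases, im2bw can stand in for imbinarize):
I1g = rgb2gray(I1);
bw  = imbinarize(imcomplement(I1g));          % dark spots become white blobs
bw  = bwareaopen(bw, 20);                     % drop tiny specks (area cutoff is a guess)
stats = regionprops(bw, 'Centroid');
pts1  = vertcat(stats.Centroid);              % N-by-2 [x y] spot centers
figure; imshow(I1); hold on;
plot(pts1(:,1), pts1(:,2), 'r+'); title('Detected spot centroids');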

MATLAB: Compare data from imfindcircles to data from bwconncomp

WARNING - Complete n00b here.
I'm working on a project which needs to find holes. I found a method that is fairly accurate (bwconncomp) but I get some extra data that I don't need (a.k.a holes that aren't holes). Now the holes are circular so I was just going to do a check with imfindcircles.
So what I need to do is take the center coordinate and radius info from imfindcircles to filter out the non-circular holes in bwconncomp. How should I tackle that?
%Find Circles
[centersDark, radiiDark] = imfindcircles(im,[10 75],'ObjectPolarity','dark');
%Find holes (I think)
cc = bwconncomp(BW);
%Put box around holes (prints to figure for debugging... kinda)
rp = regionprops(cc,'BoundingBox');
So just to clarify: I need to figure out how to weed out the extra info in cc using the data received from imfindcircles (those variables being centersDark and radiiDark).
Here's a sample image:
You can also improve your segmentation stage:
close all;
% read input
im = imread('holes.bmp');
grayImg = rgb2gray(im);
% create an image where holes are removed
med = medfilt2(grayImg,[100,100],'symmetric');
figure();
imshow(med);
title('median');
% subtract image with holes from image without holes
% so the whole process is less dependent on ilumination of foreground
diff = double(med) - double(grayImg); % assumption: subtract the grayscale input (with holes) from the hole-free median image
figure();
imshow(uint8(diff+127));
title('difference');
% apply threshold
bw = diff > 50;
figure();
imshow(bw);
title('threshold');
% remove small objects
SE = strel('disk',10);
opened = imopen(bw,SE);
figure();
imshow(opened);
title('imopen');
% get bounding boxes for each hole
cc = bwconncomp(opened);
rp = regionprops(cc,'BoundingBox');
% draw rectangles
figure();
imshow(im);
hold on;
title('result');
for i = 1:length(rp)
    rectangle('Position',rp(i).BoundingBox,'EdgeColor','red')
end
Result:
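If you still want to cross-check the bwconncomp regions against the imfindcircles output, as asked, one possible sketch (untested, using the variables defined above) is to keep only the components whose centroid falls inside a detected circle:
stats = regionprops(cc, 'Centroid');          % cc from the code above
keep  = false(cc.NumObjects, 1);
for k = 1:cc.NumObjects
    c = stats(k).Centroid;                    % [x y] of this component
    d = hypot(centersDark(:,1) - c(1), centersDark(:,2) - c(2));
    keep(k) = any(d <= radiiDark);            % centroid lies inside some detected circle
end
filtered = false(size(opened));
filtered(vertcat(cc.PixelIdxList{keep})) = true;
figure; imshow(filtered); title('Components whose centroid matches a detected circle');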

How to plot a 2D FFT in Matlab?

I am using fft2 to compute the Fourier Transform of a grayscale image in MATLAB.
What is the common way to plot the magnitude of the result?
Assuming that I is your input image and F is its Fourier transform (i.e., F = fft2(I)), you can use this code:
F = fftshift(F); % Center FFT
F = abs(F); % Get the magnitude
F = log(F+1); % Use log, for perceptual scaling, and +1 since log(0) is undefined
F = mat2gray(F); % Use mat2gray to scale the image between 0 and 1
imshow(F,[]); % Display the result
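For example, putting it together with a grayscale demo image that ships with the Image Processing Toolbox (any grayscale image works):
I = im2double(imread('cameraman.tif'));       % shipped grayscale demo image
F = fft2(I);
F = fftshift(F);                              % center FFT
F = log(abs(F)+1);                            % log-scaled magnitude
imshow(mat2gray(F));                          % display, scaled to [0,1]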
Here is an example from my HOW TO Matlab page:
close all; clear all;
img = imread('lena.tif','tif');
imagesc(img)
img = fftshift(img(:,:,2));
F = fft2(img);
figure;
imagesc(100*log(1+abs(fftshift(F)))); colormap(gray);
title('magnitude spectrum');
figure;
imagesc(angle(F)); colormap(gray);
title('phase spectrum');
This gives the magnitude spectrum and phase spectrum of the image. I used a color image, but you can easily adjust it to use a grayscale image as well.
P.S. I just noticed that as of MATLAB 2012a the above image is no longer included, so just replace the first line above with, say,
img = imread('ngc6543a.jpg');
and it will work. I used an older version of Matlab to make the above example and just copied it here.
On the scaling factor
When we plot the 2D Fourier transform magnitude, we need to scale the pixel values using a log transform to expand the range of the dark pixels into the bright region so we can better see the transform. We use a constant c in the equation
s = c*log(1 + r)
There is no known way (that I know of) to predetermine this scale; you just need to try different values until you get one you like. I used 100 in the example above.
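As a small illustration of the effect of c (a sketch using a shipped demo image and a fixed uint8 display range; the particular c values are arbitrary):
img = rgb2gray(imread('ngc6543a.jpg'));       % shipped demo image
F   = fftshift(fft2(double(img)));
cs  = [10 50 100];                            % arbitrary scale factors to compare
figure;
for k = 1:numel(cs)
    subplot(1,3,k);
    imshow(uint8(cs(k)*log(1+abs(F))));       % fixed 0..255 display range
    title(sprintf('c = %d', cs(k)));
end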