Detection of homogeneous areas in terms of connectivity in an image - MATLAB

I am looking for some measurements that would distinguish between these two binary images (text and noise).
The Hough transform of the frequency domain doesn't tell me much (either on the skeleton or on the original shape), as can be seen below!
In the spatial domain, I have tried to measure whether a given pixel participates in a line or curve, or in a random shape, and then to measure the percentage of all pixels participating and not participating in normal shapes (lines and curves) to distinguish between these images, but I didn't succeed in the implementation.
What do you think?
I use MATLAB for testing.
Thanks in advance

Looking at the skeleton images, one can notice that the noise image has lots of branches in it compared to the text image, and this looks like one of the features that could be exploited. The experiment code shown below seeks to verify the same, using the OP's images -
Experiment Code
%%// Experiment to research what features might help us
%%// differentiate between noise and text images
%%// Read in the given images
img1 = imread('noise.png');
img2 = imread('text.png');
%%// Since the given images had the features as black and rest as white,
%%// we must invert them
img1 = ~im2bw(img1);
img2 = ~im2bw(img2);
%%// Remove the smaller blobs from both of the images which basically
%%// denote the actual noise in them
img1 = rmnoise(img1,60);
img2 = rmnoise(img2,60);
%// Get the skeleton images
img1 = bwmorph(img1,'skel',Inf);
img2 = bwmorph(img2,'skel',Inf);
%%// Find branch points for each blob in both images
[L1, num1] = bwlabel(img1);
[L2, num2] = bwlabel(img2);
img1_bpts_count = zeros(1,num1); %%// Preallocate before the loop
for k = 1:num1
img1_bpts_count(k) = nnz(bwmorph(L1==k,'branchpoints'));
end
img2_bpts_count = zeros(1,num2); %%// Preallocate before the loop
for k = 1:num2
img2_bpts_count(k) = nnz(bwmorph(L2==k,'branchpoints'));
end
%%// Get the standard deviation of branch points count
img1_branchpts_std = std(img1_bpts_count)
img2_branchpts_std = std(img2_bpts_count)
Note: The above code uses a function rmnoise, shown below, that is built based on the problem discussed at this link:
function NewImg = rmnoise(Img,threshold)
[L,num] = bwlabel( Img );
counts = sum(bsxfun(@eq,L(:),1:num));
B1 = bsxfun(@eq,L,permute(find(counts>threshold),[1 3 2]));
NewImg = sum(B1,3)>0;
return;
Output
img1_branchpts_std =
73.6230
img2_branchpts_std =
12.8417
One can see the big difference between the standard deviations of the two input images, suggesting this feature could be used.
Runs on some other samples
To make our theory a bit more concrete, let's use a pure text image, gradually add noise, and see whether the standard deviation of branch points (naming it check_value) suggests anything about them.
(I) Pure text image
check_value = 1.7461
(II) Some added noise image
check_value = 30.1453
(III) Some more added noise image
check_value = 54.6446
Conclusion: As can be seen, this parameter provides quite a good indicator to decide on the nature of images.
Finalized Code
A script could be written to test whether another input image is a text or a noise image, like this -
%%// Parameters
%%// 1. Decide this based on the typical image size and count of pixels
%%// in the biggest noise blob
rmnoise_threshold = 60;
%%// 2. Decide this based on the typical image size and how convoluted the
%%// noisy images are
branchpts_count_threshold = 50;
%%// Actual processing
%%// We are assuming input images as binary images with features as true
%%// and false in rest of the region
img1 = im2bw(imread(FILE));
img1 = rmnoise(img1,rmnoise_threshold);
img1 = bwmorph(img1,'skel',Inf);
[L1, num1] = bwlabel(img1);
img1_bpts_count = zeros(1,num1); %%// Preallocate before the loop
for k = 1:num1
img1_bpts_count(k) = nnz(bwmorph(L1==k,'branchpoints'));
end
if std(img1_bpts_count) > branchpts_count_threshold
disp('This is a noise image');
else
disp('This is a text image');
end

And now, what would you suggest if we tried to use the original shape instead of the skeleton (to avoid the loss of information)?
I have tried to measure, for a given pixel, the elongation of the strokes (instead of straight branches) that pass through that pixel, by counting the number of transitions from white to black in clockwise order.
I am thinking of using a circle of some radius, centred on the pixel under consideration, storing the pixels located on the edge of the circle in an ordered (clockwise) list, and then computing the number of transitions (black to white) from this list.
By increasing the radius of the circle, we could trace the shape of elongated strokes and determine their orientation.
Here is a schema illustrating this.
The pixels that have a number of transitions equal to 0 or greater than 2 (the red ones) would be classified as noise, and those that have 1 or 2 transitions classified as normal.
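In code, the transition count I have in mind might look something like this (just a sketch; the sampling step along the circle and the handling of border pixels are details I have not fixed yet):
% Sketch: count background-to-stroke transitions on a circle of radius r
% centred at pixel (y0,x0). bw is the binary image with strokes as true.
function n = circle_transitions(bw, y0, x0, r)
theta = linspace(0, 2*pi, ceil(2*pi*r) + 1); % samples along the circle
theta(end) = [];                             % drop the duplicate of angle 0
ys = round(y0 + r*sin(theta));               % y grows downwards, so this runs clockwise
xs = round(x0 + r*cos(theta));
valid = ys >= 1 & ys <= size(bw,1) & xs >= 1 & xs <= size(bw,2);
vals = false(size(theta));                   % samples outside the image count as background
vals(valid) = bw(sub2ind(size(bw), ys(valid), xs(valid)));
% count false-to-true transitions, wrapping around the circular list
n = nnz(~vals & [vals(2:end), vals(1)]);
end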
What do you think of this approach?

Related

Edge linking in Matlab

I am trying to link the edges in images like the image below:
I've tried using dilation/erosion operations but the result isn't good. Is there any other way to link the edges?
Here is the original image:
The result you have might very well be good enough to apply the Hough transform to, which will identify your eight most important lines across the image.
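If you try that, a minimal sketch using the Image Processing Toolbox Hough functions could look like this (assuming bw holds your binary edge image; not a tested drop-in for your data):
% Detect the eight strongest lines in the binary edge image bw
[H, theta, rho] = hough(bw);
peaks = houghpeaks(H, 8);                  % pick the eight strongest peaks
lines = houghlines(bw, theta, rho, peaks); % line segments for those peaks
imshow(bw), hold on
for k = 1:numel(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'g');
end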
I don't know if all your images are similar, but in the example you show it is easy to separate the gray lines from the green background. For example, the following code (using DIPimage, but easy to implement with other tools too) will distinguish the relatively bright gray from anything that is dark or colorful:
img = readim('https://i.stack.imgur.com/vmBiF.jpg');
img = colorspace(img,'hsv');
img = (0.5-img{2})*img{3}; % img{2} is the saturation channel, img{3} is the value (intensity) channel
img = clip(img); % set negative values to 0
Next, a Laplace of Gaussian filter (which is a line detector), some thresholding just above zero, and a selection of only the larger objects results in the detected lines:
img = -laplace(img,5); % LoG with sigma=5
img = img > 0.05; % 0.05 is just above 0
img = areaopening(img,1000); % remove objects smaller than 1000 pixels
Needless to say, this is a lot cheaper computationally than running a U-Net.
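For readers without DIPimage, a rough Image Processing Toolbox equivalent of the two snippets above might be the following (a sketch; the 0.05 threshold may need retuning since the filters are normalized differently):
% Rough IPT equivalent of the DIPimage pipeline above
img = im2double(imread('https://i.stack.imgur.com/vmBiF.jpg'));
hsv = rgb2hsv(img);
g = max((0.5 - hsv(:,:,2)) .* hsv(:,:,3), 0); % bright, unsaturated pixels; negatives clipped to 0
h = fspecial('log', 31, 5);                   % Laplace of Gaussian kernel, sigma = 5
bw = -imfilter(g, h, 'replicate') > 0.05;     % threshold just above zero
bw = bwareaopen(bw, 1000);                    % remove objects smaller than 1000 pixels
imshow(bw)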

Template matching

I am trying to detect elevators on floor plans in MATLAB. The code I have now is not detecting elevators, it is instead just pointing at the edges of the image. I am expecting to detect all the elevators on a floor plan. Elevators are represented by a square or rectangle with an x inside, similar to the template image. I have attached the template, image and a result screenshot.
Template image:
Image:
Results:
Code:
template= rgb2gray(imread('ele7.png'));
image = rgb2gray(imread('floorplan.jpg'));
%imshowpair(image,template,'montage')
c = normxcorr2(template,image);% perform cross-correlation
figure, surf(c), shading flat
[ypeak, xpeak] = find(c==max(c(:)));%peak of correlation
%Compute translation from max location in correlation matrix, =padding
yoffSet = ypeak-size(template,1);
xoffSet = xpeak-size(template,2);
%Display matched area
figure
hAx = axes;
imshow(image,'Parent', hAx);
imrect(hAx, [xoffSet+1, yoffSet+1, size(template,2), size(template,1)]);
To check if everything runs smoothly, you should plot the correlation:
figure, surf(c)
As mentioned by @cris-luengo, it's easy to fail with the sizes of the images and so on. However, I've seen that you followed the tutorial at https://es.mathworks.com/help/images/ref/normxcorr2.html . Both of your images are already black-and-white (2-colour) images, whereas normxcorr2 works well with RGB images (with textures, objects, etc.). Thus, I think normxcorr2 is not the correct approach to use here.
An approach I would consider is to look for branch points. Using MATLAB's help and bwmorph:
BW = imread('circles.png');
imshow(BW);
BW1 = bwmorph(BW,'skel',Inf);
You first skeletonize the image; then you can use any of the functions displayed in bwmorph's help (https://es.mathworks.com/help/images/ref/bwmorph.html). In this case, I'd search for branch points, i.e. crosslinks. It is as simple as:
BW2 = bwmorph(BW1,'branchpoints');
branchPointsPixels = find(BW2 == 1);
The indices of the branch point pixels will be where it finds an X. However, it can be any rotated X (or +, ...), so you'll find more points than you desire, and you will need to filter the points to get what you want, as in the sketch below.
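As a starting point for that filtering, here is a hedged sketch that keeps only branch points with four or more emanating skeleton branches, which is what an X-crossing looks like locally (the file name and the inversion step assume dark features on a light background, as in the floor plan):
bw = ~im2bw(imread('floorplan.jpg')); % assuming dark features on a light background
sk = bwmorph(bw, 'skel', Inf);
bp = bwmorph(sk, 'branchpoints');
[r, c] = find(bp);
keep = false(size(r));
for k = 1:numel(r)
% count skeleton pixels in the 3 x 3 neighbourhood, minus the centre pixel
nb = sk(max(r(k)-1,1):min(r(k)+1,end), max(c(k)-1,1):min(c(k)+1,end));
keep(k) = nnz(nb) - 1 >= 4; % 4+ branches suggests an X (or +) crossing
end
imshow(sk), hold on
plot(c(keep), r(keep), 'ro'); % surviving crossing candidates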

How to select the largest contour in MATLAB

In my progress work, I have to detect a parasite. I have found the parasite using HSV and later made it into a grey image. Now I have done edge detection too. I need some code which tells MATLAB to find the largest contour (parasite) and make the rest of the area as black pixels.
You can select the "largest" contour by filling in the holes that each contour surrounds, figuring out which shape gives you the largest area, then using the locations of the largest area and copying that over to a final image. As Benoit_11 suggested, use regionprops - specifically the Area and PixelList flags. Something like this:
im = imclearborder(im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg')));
im_fill = imfill(im, 'holes');
s = regionprops(im_fill, 'Area', 'PixelList');
[~,ind] = max([s.Area]);
pix = sub2ind(size(im), s(ind).PixelList(:,2), s(ind).PixelList(:,1));
out = zeros(size(im));
out(pix) = im(pix);
imshow(out);
The first line of code reads in your image from StackOverflow directly. The image is also an RGB image for some reason, so I convert it into binary through im2bw. There is also a white border that surrounds the image. You most likely had this image open in a figure and saved it from the figure. I got rid of this border by using imclearborder.
Next, we need to fill in the areas that the contours surround, so use imfill with the holes flag. Then, use regionprops to analyze the different filled objects in the image - specifically the Area and which pixels belong to each object in the filled image. Once we obtain these attributes, we find the filled contour that gives the biggest area, access the corresponding regionprops element, extract the pixel locations that belong to that object, then use these to copy the pixels over to an output image and display the results.
We get:
Alternatively, you can use the Perimeter flag (as Benoit_11 suggested) and simply find the maximum perimeter, which will correspond to the largest contour. This should still give you what you want. As such, simply replace the Area flag with Perimeter in the third and fourth lines of code and you should still get the same results.
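Concretely, with that swap those two lines would become:
s = regionprops(im_fill, 'Perimeter', 'PixelList');
[~,ind] = max([s.Perimeter]);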
Since my answer was pretty much all written out, I'll give it to you anyway, but the idea is similar to @rayryeng's answer.
Basically I use the Perimeter and PixelIdxList flags during the call to regionprops and therefore get the linear indices of the pixels forming the largest contour, once the image border has been removed using imclearborder.
Here is the code:
clc
clear
BW = imclearborder(im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg')));
S= regionprops(BW, 'Perimeter','PixelIdxList');
[~,idx] = max([S.Perimeter]);
Indices = S(idx).PixelIdxList;
NewIm = false(size(BW));
NewIm(Indices) = 1;
imshow(NewIm)
And the output:
As you see there are many ways to achieve the same result haha.
This could be one approach -
%// Read in image as binary
im = im2bw(imread('http://i.stack.imgur.com/a5Yi7.jpg'));
im = im(40:320,90:375); %// clear out the whitish border you have
figure, imshow(im), title('Original image')
%// Fill all contours to get us filled blobs and then select the biggest one
outer_blob = imfill(im,'holes');
figure, imshow(outer_blob), title('Filled Blobs')
%// Select the biggest blob that will correspond to the biggest contour
outer_blob = biggest_blob(outer_blob);
%// Get the biggest contour from the biggest filled blob
out = outer_blob & im;
figure, imshow(out), title('Final output: Biggest Contour')
The function biggest_blob, which is based on bsxfun, is an alternative to what the other answers posted here perform with regionprops. From my experience, I have found this bsxfun-based technique to be faster than regionprops. Here are a few benchmarks comparing these two techniques for runtime performance in one of my previous answers.
Associated function -
function out = biggest_blob(BW)
%// Find and labels blobs in the binary image BW
[L, num] = bwlabel(BW, 8);
%// Count of pixels in each blob, basically should give area of each blob
counts = sum(bsxfun(@eq,L(:),1:num));
%// Get the label (ind) corresponding to the blob with the maximum area,
%// which would be the biggest blob
[~,ind] = max(counts);
%// Get only the logical mask of the biggest blob by comparing all labels
%// to the label(ind) of the biggest blob
out = (L==ind);
return;
Debug images -

How do I denoise a simple grayscale image

Here is the original image with better visibility: we can see a lot of noise around the main skeleton, the circle thing, which I want to remove without affecting the main skeleton. I'm not sure if it is called noise.
I'm doing this to deblur an image, and this image is the motion blur kernel, which identifies the camera motion when the camera captures an image.
PS: this image is the kernel for one case, and what I need here is a general method. Thank you for your help.
There is a paper in CVPR 2014 named "Separable Kernel for Image Deblurring" which talks about this. I want to extract the main skeleton of the image to make the kernel more robust. Sorry for the explanation, as my English is not good.
And here is the true grayscale image:
I want it to be like this:
How can I do it using MATLAB?
Here are some other kernel images:
As @rayryeng explained well, median filtering is the best option to clean noise in the image, which I realized when I studied image restoration. However, in your case, what you need to do does not seem to me to be cleaning noise in the image; more likely, you want to eliminate the sparks in the image.
Simply, I applied a single threshold to your noisy image to eliminate the sparks.
Try this:
desIm = imread('http://i.stack.imgur.com/jyYUx.png'); % // Your expected (desired) image
nIm = imread('http://i.stack.imgur.com/pXO0p.png'); % // Your original image
nImgray = rgb2gray(nIm);
T = graythresh(nImgray)*255; % // Threshold value
S = size(nImgray);
R = zeros(S) + 5; % // Your expected image is bluish, so I try to come close to it
G = zeros(S) + 3; % // Your expected image is bluish, so I try to come close to it
B = zeros(S) + 20; % // Your expected image is bluish, so I try to come close to it
logInd = nImgray > T; % // Logical index of pixels, excluding the spark components
R(logInd) = nImgray(logInd); % // Get original pixels without sparks
G(logInd) = nImgray(logInd); % // Get original pixels without sparks
B(logInd) = nImgray(logInd); % // Get original pixels without sparks
rgbImage = cat(3, R, G, B); % // Concatenating Red Green Blue channels
figure,
subplot(1, 3, 1)
imshow(nIm); title('Original Image');
subplot(1, 3, 2)
imshow(desIm); title('Desired Image');
subplot(1, 3, 3)
imshow(uint8(rgbImage)); title('Restoration Result');
What I got is:
The only thing I can see that is different between the two images is that there is some quantization noise / error around the perimeter of the object. This resembles salt and pepper noise and the best way to remove that noise is to use median filtering. The median filter basically analyzes local overlapping pixel neighbourhoods in your image, sorts the intensities and chooses the median value as the output for each pixel neighbourhood. Salt and pepper noise corrupts image pixels by randomly selecting pixels and setting their intensities to either black (pepper) or white (salt). By employing the median filter, sorting the intensities puts these noisy pixels at the lower and higher ends and by choosing the median, you would get the best intensity that could have possibly been there.
To do median filtering in MATLAB, use the medfilt2 function. This is assuming you have the Image Processing Toolbox installed. If you don't, then what I am proposing won't work. Assuming that you do have it, you would call it in the following way:
out = medfilt2(im, [M N]);
im would be the image loaded in imread and M and N are the rows and columns of the size of the pixel neighbourhood you want to analyze. By choosing a 7 x 7 pixel neighbourhood (i.e. M = N = 7), and reading your image directly from StackOverflow, this is the result I get:
Compare this image with your original one:
If you also look at your desired output, this more or less mimics what you want.
Also, the code I used was the following... only three lines!
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
out = medfilt2(im, [7 7]);
imshow(out);
The first line I had to convert your image into grayscale because the original image was in fact RGB. I had to use rgb2gray to do that. The second line performs median filtering on your image with a 7 x 7 neighbourhood and the final line shows the image in a separate window with imshow.
Want to implement median filtering yourself?
If you want to get an idea of how to actually write a median filtering algorithm yourself, check out my recent post here. A question poser asked to implement the filtering mechanism without using medfilt2, and I provided an answer.
Matlab Median Filter Code
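For reference, a minimal sketch of what such a manual filter boils down to (not the linked answer's exact code; im is assumed to be the grayscale image from above, with the same 7 x 7 window):
% Manual 7 x 7 median filter (sketch)
half = 3; % half-width of the 7 x 7 window
padded = padarray(im, [half half], 'replicate'); % pad so border pixels get full windows
out = zeros(size(im), 'like', im);
for i = 1:size(im,1)
for j = 1:size(im,2)
win = padded(i:i+2*half, j:j+2*half); % 7 x 7 neighbourhood around (i,j)
out(i,j) = median(win(:));            % middle of the sorted intensities
end
end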
Hope this helps.
Good luck!

Removing a smaller circle using imopen/imclose operation

I have a binary image with two circles of different radii and need to remove either of the circles using either an imopen or imclose operation. I have used the structuring element having the diameter greater than the smaller circle but less than the bigger circle but it doesn't seem to work for either operation.
My image:
My code:
c=imread('image.png');
se2=strel('disk',25);
c=imopen(c,se2);
figure, imshow(c,[]), title('new image');
I can see three things that will cause this algorithm not to run with the intended results:
Your image has the circles as black, but binary morphological operations assume that the objects are white. As such, what you are actually doing is performing the operations on the background instead. Therefore, you need to invert the image before you perform morphological operations on the image.
The disk structuring element is too small to remove either of the circles. You need to bump up your radius so that it's larger. I did some tests, and to remove the smaller circle, I had to bump the radius up to 45.
In addition, the disk structuring element that you specified is not a perfect circle; in fact, it would be a slightly diamond-like shape. If you want a perfect circle, you need to pass an additional parameter of 0 to strel with the disk option, on top of the radius (45). Once you do this, you invert the image back to where it was, as the objects were black and not white.
Therefore, your code should look like:
%// Modified to read from StackOverflow
c = im2bw(imread('http://i.stack.imgur.com/7Hd0y.png'));
se = strel('disk', 45, 0); %// Change radius to 45 + add additional parameter of 0
c2 = imopen(~c, se); %// Invert the image before opening
c2 = ~c2; %// Invert back because original image was this way
imshow(c2); title('New Image');
This is the image I get:
bwlabel + bsxfun based solution
If you care about performance, let me suggest a method based on bwlabel and the very powerful vectorization tool bsxfun, which might be better than using image morphological operations. Now, your problem is basically a biggest-blob-finding problem, and here's the code -
c = im2bw(imread('http://i.stack.imgur.com/7Hd0y.png'));
[L,num] = bwlabel( ~c );
counts = sum(bsxfun(@eq,L(:),1:num));
[~,ind] = max(counts);
c = ~(L==ind);
Benchmark
Benchmarking code against the imopen-based method suggested here -
c = im2bw(imread('http://i.stack.imgur.com/7Hd0y.png'));
disp('----------------------------------- With IMOPEN')
tic
se = strel('disk', 45, 0); %// Change radius to 45 + add additional parameter of 0
c2 = imopen(~c, se); %// Invert the image before opening
c2 = ~c2; %// Invert back because original image was this way
toc
disp('----------------------------------- With BWLABEL + BSXFUN')
tic
[L,num] = bwlabel( ~c );
counts = sum(bsxfun(@eq,L(:),1:num));
[~,ind] = max(counts);
out = ~(L==ind);
toc
Results -
----------------------------------- With IMOPEN
Elapsed time is 0.195290 seconds.
----------------------------------- With BWLABEL + BSXFUN
Elapsed time is 0.020950 seconds.
Bonus analysis
Another reason why you might want to do away with morphological operations is that imopen changes your blobs around the edges. You can visualize this if you compute the absolute difference between the input and output images. So, if you do figure,imshow(imabsdiff(c,c2)), you would get this -
As you can see, the bigger blob has changed around its edges. In a perfectly preserving case, only the smaller blob would be there.