Boundary around an object - matlab

I want to make a boundary around an object that is within a grayscale image.
The approach I want to take is to subtract the region inside the boundary from the background.
This was the code I used:
s=imread('C:\Users\Deepinder\Desktop\dd.jpg');
t=im2bw(s);
se=strel('disk',2);
f1=imerode(t,se);
CC=f1-t;
imshow(CC)
However, I am getting a completely black image as a result. What am I doing wrong?

I've come up with one solution. I don't believe it is as accurate as you'd like, but it's something to start with.
The image you are dealing with is actually quite complicated. This is the image that you have shown me:
What you would like to do is extract only the pixels that concern the face while making the background pixels black.
What we originally wanted to do was convert the image to black and white, do an erosion, and then subtract the eroded result from the original. This would work ONLY if the object had an overall brighter intensity than the background.
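For reference, here's a minimal sketch of that idea. Note the subtraction order: the eroded mask has to be removed from the original mask, not the other way around. Doing f1 - t as in the question leaves only zeros and negative values, which is why the display comes out completely black.
t = im2bw(imread('dd.jpg'));   % binarize: object white, background black (ideally)
se = strel('disk', 2);
f1 = imerode(t, se);           % shrink the white object slightly
boundary = t & ~f1;            % pixels removed by the erosion form the boundary
imshow(boundary)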
If you tried a straight-up im2bw(), it would first convert a colour image to grayscale, then threshold it at the default level of 0.5 (about 128 for uint8 images). Anything brighter than the threshold becomes white (1), while anything darker becomes black (0). This is the image we would get, followed by the accompanying code:
im = imread('dd.jpg');
imshow(im2bw(im));
As you can see, the background is also classified as white, and so our first approach won't work. What I did was decompose the image into separate channels (red, green and blue). This is what I get and here's the code to show it:
titles = {'Red', 'Green', 'Blue'};
for i = 1 : 3
    subplot(1,3,i);
    imshow(im(:,:,i));
    title(titles{i});
end
If you take a look at the red and green channels, the red channel is very noisy, so we'll leave that alone. In the green channel the background is very similar to the face, so we'll leave that one out too. The best one to use is the blue channel, as there is relatively good separation between the face's intensity profile and the background.
By using impixelinfo and moving around the blue channel image, I saw that the intensity profile of the face varied roughly between 120 and 200. As such, let's create a binary mask that captures that range:
im = imread('dd.jpg');
faceGray = im(:,:,3);                       % blue channel
faceBW = faceGray > 120 & faceGray < 200;   % keep intensities in the face range
This is the output I get:
We're almost there. I want to get rid of the outline of the hair. What I did was apply an opening filter with a disk structuring element of radius 2. This should thin out any small objects while keeping the bigger, predominant objects.
se = strel('disk', 2);
faceOpened = imopen(faceBW, se);
This is what I get:
Now let's go ahead and fill in the holes. imfill won't work because it only fills in regions that are closed. As such, I'm going to cheat a bit and do a closing filter with a larger size disk. I chose a radius of 30 this time.
se2 = strel('disk', 30);
faceClosed = imclose(faceOpened, se2);
This is what I get:
Now the final step is to use this mask and mask out all of the background pixels:
mask = repmat(faceClosed, [1 1 3]);   % replicate the mask over all three channels
out = zeros(size(im), 'uint8');       % start with an all-black image
out(mask) = im(mask);                 % copy over only the masked (face) pixels
... and this is what I get:
This is by no means perfect, but it gives you something to work with. Also, the intensity profile around the neck is very similar to the face, so doing it my way can't avoid extracting the neck as well. What I would suggest is cropping the image so that you mostly see the face, then trying some morphology or even edge detection in a similar fashion to what I did (a rough sketch follows below). Hope this helps!
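As a starting point for that suggestion, something along these lines could work. The crop rectangle below is a made-up placeholder; calling imcrop with no rectangle lets you draw one interactively instead.
faceCrop = imcrop(im, [80 40 160 200]);     % [x y width height] - placeholder values
edgeMap  = edge(faceCrop(:,:,3), 'canny');  % edge detection on the blue channel
imshow(edgeMap)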

I would recommend converting your image to HSV space with the rgb2hsv function. Then you can use the first (hue) layer to make a mask.
im = imread(filename);
im2 = rgb2hsv(im);
mask = im2(:,:,1) > 0.4 & im2(:,:,1) < 0.6;   % hue range to mask out
im3 = im;
im3(repmat(mask, [1 1 3])) = 255;             % set the masked pixels to white
imshow(im3)
You can play more with this mask to fill a few gaps (see the sketch below). I don't have the toolbox to test it thoroughly.
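For example, a morphological close followed by a hole fill is one way to tidy the mask. This is only a sketch and the disk radius is a guess:
mask2 = imclose(mask, strel('disk', 5));   % bridge small gaps in the mask
mask2 = imfill(mask2, 'holes');            % fill fully enclosed holes
im4 = im;
im4(repmat(mask2, [1 1 3])) = 255;         % white out the masked region, as above
imshow(im4)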

Related

How to create an inverse gray scale?

I have an image with dark blue spots on a black background. I want to convert this to inverse grayscale. By inverse, I mean I want the black background to become white.
When I convert it to grayscale, everything looks black, and it becomes very hard to differentiate the spots.
Is there a way to do an inverse gray scale where the black background takes the lighter shades?
Or, another preferable option is to represent the blue as white and the black as black.
I am using img = rgb2gray(img); in MATLAB for now.
From the MathWorks site:
IM2 = imcomplement(IM)
Is there a way to do an inverse gray scale where the black background takes the lighter shades?
Based on your image description I created an image sample.png:
img1 = imread('sample.png'); % Read rgb image from graphics file.
imshow(img1); % Display image.
Then, I used the imcomplement function to obtain the complement of the original image (as suggested in this answer).
img2 = imcomplement(img1); % Complement image.
imshow(img2); % Display image.
This is the result:
Or, another preferable option is to represent the blue as white and the black as black.
In this case, the simplest option is to work with the blue channel. Now, depending on your needs, there are two approaches you can use:
Approach 1: Convert the blue channel to a binary image (B&W)
This comment suggests using the logical operation img(:,:,3) > 0, which will return a binary array of the blue channel, where every non-zero valued pixel will be mapped to 1 (white), and the rest of pixels will have a value of 0 (black).
While this approach is simple and valid, binary images have the big disadvantage of losing intensity information. This can alter the perceptual properties of your image. Have a look at the code:
img3 = img1(:, :, 3) > 0; % Convert blue channel to binary image.
imshow(img3); % Display image.
This is the result:
Notice that the round shaped spots in the original image have become octagon shaped in the binary image, due to the loss of intensity information.
Approach 2: Convert the blue channel to grayscale image
A better approach is to use a grayscale image, because the intensity information is preserved.
The imshow function offers the imshow(I,[low high]) syntax, which adjusts the display range of the grayscale image through the DisplayRange parameter.
One very cool feature of this syntax is that we can let imshow do the work for us.
From the documentation:
If you specify an empty matrix ([]), imshow uses [min(I(:)) max(I(:))]. In other words, use the minimum value in I as black, and the maximum value as white.
Have a look at the code:
img4 = img1(:, :, 3); % Extract blue channel.
imshow(img4, []); % Display image.
This is the result:
Notice that the round shape of the spots is preserved exactly as in the original image.

remove background or mark filled areas from imfill

I'm doing some image segmentation in MATLAB on grayscale images taken from a drone with a thermo-sensitive camera. The idea is that you should be able to feed in a video, after which it analyzes every frame and outputs a new video in which each person is marked and clustered, and a total count per frame is given. So far, to remove the background, I first apply imtophat and then a threshold; on top of this I do some analysis to distinguish the people from, e.g., fences, houses, etc. However, this threshold is far too static, so once there is a shift in outdoor temperature, or the surface changes from, e.g., grass to tarmac, I either get too many things in the picture or I remove some of the people. What I am ultimately looking for is a way to get rid of the background, so that what I have left is buildings, cars, people, etc.
This is the ultimate goal and a solution to this would be highly appreciated.
What I tried to do was to first use the following code on the first picture (where pic1 is the original picture):
%Make it double
pic2 = double(pic1);
%Remove some noise
pic2 = wiener2(pic2);
%Make the pedestrians larger
pic2 = imdilate(pic2,strel('disk',5));
%In case of shadows take these to some minimum
pic3 = pic2.*(pic2>mean(mean(pic2))) + mean(mean(pic2))*(pic2<mean(mean(pic2)));
%Remove some of the background
pic4 = imtophat(pic3,strel('disk',10));
%Smooth before taking gradients (this variable was undefined in the original post; assumed to be a Gaussian-filtered copy of pic4)
gaussian = imfilter(pic4, fspecial('gaussian',[5 5],2), 'replicate');
%Make the edges stand out.
hy = fspecial('sobel');
hx = hy';
Iy = imfilter(gaussian, hy, 'replicate');
Ix = imfilter(gaussian, hx, 'replicate');
gradmag = sqrt(Ix.^2 + Iy.^2);
%Threshold the edges
BW = gradmag>100;
%Close the circles
BW2 = imclose(BW,strel('disk',5));
Now I have a binary image of the edges of the objects in the picture, and I want to fill in the pedestrians so that I have an initial guess of where they are and what they look like. So I apply imfill.
[BW3] = imfill(BW2);
Then what I want are the coordinates of all the pixels that MATLAB has turned white for me. How do I get that? I have tried [BW3,locations] = imfill(BW2), but this does not work (as I want it to).
For testing you can use the attached picture. Also, if you are trying to solve the ultimate problem at the top, I have no problem getting the house, the cars and the pedestrians out; the house and the cars I can sort out perfectly fine if they appear whole.
To get the pixels that imfill changed for you, compare the before and after images and use find to get the coordinates of the points whose values have changed.
diffimg = (BW2 ~= BW3);
[y, x] = find(diffimg);   % row (y) and column (x) indices of the changed pixels
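Putting it together as a usage sketch (with the 'holes' option so that imfill runs non-interactively), you can then overlay the filled-in pixels on the edge image:
BW3 = imfill(BW2, 'holes');       % fill enclosed regions without interaction
diffimg = (BW2 ~= BW3);
[y, x] = find(diffimg);           % coordinates of the filled pixels
imshow(BW2); hold on;
plot(x, y, 'r.');                 % mark the filled pixels in red
hold off;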

Unite endpoints of edge with line

I'm trying to make an object recognition program using a k-NN classifier. I've got a bunch of images for the training part of the classifier and a bunch of images to recognize. Those images are in grayscale and there's an object (only its edge) per image. I need to calculate their center of mass so I use
img=im2bw(img)
and then regionprops(img,'centroid').
The problem is that some of those edges aren't closed, so regionprops doesn't work on them. I tried eroding the image (the edge is black on a white background), but the endpoints of those edges are too far apart from each other. I tried using the bwmorph function as well, but still can't make it work.
Any ideas?
EDIT
I'm adding some images in case anyone wants to try:
Use a morphological operation
A closing operation will get your structures filled in.
1. As a first step, prepare your image data
im = imread('your image.jpg');
% Get first channel as gray scale information
im = im(:,:,1);
% Threshold it for simplicity; you may work on grayscale too.
im1 = logical(im > 128);
2. Use a simple block shaped structuring element
The structuring element is defined by:
se = ones(3,3);   % block-shaped element ('se' rather than 'strel', which would shadow the built-in strel function)
You may use disk shaped elements or whatever gives the best result to you.
3. Apply the structuring element a couple of times
Apply the structuring element a couple of times with an erosion operator to the thresholded image to close your figure:
for i = 1:20
    im1 = imerode(im1, se);
end
4. Dilate the image to get back to original shape
Next step is to dilate the image to get back to your original outer shape:
for i = 1:20
    im1 = imdilate(im1, se);
end
Final result
The final result should be suitable for getting a sufficiently precise center of gravity.
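From there, a follow-up to get the centroid that the question asked about might look like the sketch below, assuming the dark shape is now closed:
obj = imfill(~im1, 'holes');            % invert so the dark shape is foreground, then fill it
stats = regionprops(obj, 'Centroid');
disp(stats(1).Centroid)                 % [x y] centre of mass of the first region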

Creating intensity band across image border using matlab

I have this image (8 bit, pseudo-colored, gray-scale):
And I want to create an intensity band of a specific width around its border.
I tried erosion and other mathematical operations, including filtering, to achieve the desired band, but the actual image intensity changes as soon as I use erosion to cut away part of the border.
My code so far looks like:
clear all
clc
x = imread('8-BIT COPY OF EGFP001.tif');
imshow(x);
y = imerode(x, strel('disk',2));
y1 = imerode(y, strel('disk',7));
z = y - y1;
figure
z(z < 30) = 0;
imshow(z)
The main problem I am encountering is that this somewhat changes the intensity of the original image, as follows:
So my question is: how do I create such a band across the image border without changing any other attribute of the original image?
Going with what beaker was talking about and what you would like done, I would personally convert your image into binary, where false represents the background and true represents the foreground. When you're done, erode this image using a structuring element that preserves the roundness of the contours of your objects (a disk, as in your example).
The output of this would be the interior of the large object in the image. You can then use this mask to set those locations in the image to black, which preserves the outer band. As such, try something like this:
%// Read in image (directly from StackOverflow) and pseudo-colour the image
[im,map] = imread('http://i.stack.imgur.com/OxFwB.png');
out = ind2rgb(im, map);
%// Threshold the grayscale version
im_b = im > 10;
%// Create structuring element that removes border
se = strel('disk',7);
%// Erode thresholded image to get final mask
erode_b = imerode(im_b, se);
%// Duplicate mask in 3D
mask_3D = cat(3, erode_b, erode_b, erode_b);
%// Find indices that are true and black out result
final = out;
final(mask_3D) = 0;
figure;
imshow(final);
Let's go through the code slowly. The first two lines take your PNG image, which contains a grayscale image and a colour map, and read both into MATLAB. Next, we use ind2rgb to convert the image into its pseudo-coloured version. Once we do this, we threshold the grayscale image so that we capture all of the object pixels. I threshold at a value of 10 to avoid some quantization noise that is visible in the image. This binary image is what we operate on to determine which pixels we want to set to 0 to keep only the outer border.
Next, we declare a structuring element that is a disk of radius 7, then erode the mask. Once that's done, I duplicate this mask in 3D so that it has the same number of channels as the pseudo-coloured image, then use the mask locations to set the values internal to the object to 0. The result is the original image with only the outer contours of all of the objects remaining.
The result I get is:

How to remove the thick border produced when applying ADAPTIVETHRESH - Wellner's adaptive thresholding

With reference to the images linked below:
Image1:
Image2:
Image 2 is obtained after applying adapthisteq followed by Wellner's adaptive threshold.
Can somebody help me remove that thick border, please? When processing the image, the coordinates of the image border are also being extracted. I have tried imclearborder, but the veins touching the border get removed as well.
Also, I have the impression that the vein patterns in image 2 have increased in size compared to image 1.
Thank You.
The images you provided aren't the same size, but the code below shows the general idea:
Code:
hand = imread('hand.png');     % this is the hand
hand = hand(1:235,1:309);
thresh = imread('thresh.png'); % this is the "veined" image with the large border
thresh = thresh(:,:,1);
thresh(hand < 100) = 255;      % white out the pixels where the hand image is dark
figure, imshow(thresh)
Output:
Basically, just do a simple threshold on the fist. Select these points through logical indexing, then set those indices in the "veined" picture to white (1 if the image is logical, 255 if it is uint8).
Also, the slight black-bordered region on the right will go away if the images you provide are the same size and aligned. I'd also recommend using imdilate together with imerode to get rid of the small bits.
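As a rough sketch of that clean-up step (the structuring element size is a guess): dilating and then eroding with the same element is a morphological close, which removes small dark specks from the mostly white result.
se = strel('disk', 2);
cleaned = imerode(imdilate(thresh, se), se);   % dilate then erode = close
figure, imshow(cleaned)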