MATLAB: fill shapes with white

As you can see, I have shapes with white boundaries. I want to fill the shapes with white.
The input is:
I would like to get this output:
Can anybody please help me with this code? It doesn't change the black ellipses to white.
Thanks a lot :]]
I = imread('untitled4.bmp');
Ibw = im2bw(I);
CC = bwconncomp(Ibw); % Ibw is my binary image
stats = regionprops(CC, 'pixellist');
% pass over all the stats
for i = 1:length(stats)
    size = length(stats(i).PixelList);
    % check only the relevant stats (the black ellipses)
    if size > 150 && size < 600
        % fill the black pixels with white
        x = round(mean(stats(i).PixelList(:,2)));
        y = round(mean(stats(i).PixelList(:,1)));
        Ibw = imfill(Ibw, [x, y]);
    end
end
imshow(Ibw);

Your code can be improved and simplified as follows. First, negating Ibw and using BWCONNCOMP to find 4-connected components will give you indices for each black region. Second, sorting the connected regions by the number of pixels in them and choosing all but the largest two will give you indices for all the smaller circular regions. Finally, the linear indices of these smaller regions can be collected and used to fill in the regions with white. Here's the code (quite a bit shorter and not requiring any loops):
I = imread('untitled4.bmp');
Ibw = im2bw(I);
CC = bwconncomp(~Ibw, 4);  % 4-connected components of the black regions
[~, sortIndex] = sort(cellfun('prodofsize', CC.PixelIdxList));  % sort regions by pixel count
Ifilled = Ibw;
Ifilled(vertcat(CC.PixelIdxList{sortIndex(1:end-2)})) = true;   % fill all but the two largest
imshow(Ifilled);
And here's the resulting image:

If your images are all black & white, and you have the Image Processing Toolbox, then imfill looks like what you need:
http://www.mathworks.co.uk/help/toolbox/images/ref/imfill.html
Something like:
imfill(image, [startRow, startCol])
where [startRow, startCol] is a pixel in the area that you want to fill (note that imfill takes pixel locations as [row, column], not [x, y]).
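If the shapes are fully enclosed by their white boundaries, the 'holes' syntax of imfill may be even simpler; here's a minimal sketch, assuming the same input file as in the question:
I = imread('untitled4.bmp');
Ibw = im2bw(I);
Ifilled = imfill(Ibw, 'holes');  % fills any black region not connected to the image border
imshow(Ifilled);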

Related

How to remove white annotations from image?

I am trying to remove the white annotations in this image (the numbers and arrows), as well as the black grid, with MATLAB:
I tried computing, for each pixel, the mode of its neighbors, but this process is very slow and I get poor results.
How can I obtain an image like this one?
Thank you for your time.
The general name for such a task is inpainting. If you search for that you will find better methods than what I'm showing here. This is no more than a proof of concept. I'm using DIPimage 3 (because I'm an author and it's easy for me to use).
First we need to create a mask for the regions that we want to remove (inpaint). It is easy to find pixels where all three channels have a high value (white) or a low value (black):
img = readim('https://i.stack.imgur.com/16r9N.png');
% Find a mask for the areas to remove
whitemask = min(img,'tensor') > 50;
blackmask = max(img,'tensor') < 30;
mask = whitemask | blackmask;
This mask doesn't capture all of the black grid; if we increase the threshold, we will also remove the dark region of sea off the coast of Spain. It also captures the white outlines of the coasts. We can do a little better with some additional filtering:
% Find a mask for the areas to remove
whitemask = min(img,'tensor') > 50;
whitemask = whitemask - pathopening(whitemask,50);
blackmask = max(img,'tensor');
blackmask2 = blackmask < 80;
blackmask2 = blackmask2 - areaopening(blackmask2,6);
blackmask = blackmask < 30 | blackmask2;
mask = whitemask | blackmask;
This produces the following mask:
Still far from perfect, but a good start for our proof of concept.
One simple inpainting method uses normalized convolution: take the inverse of the mask we made, convolve the image multiplied by this inverse mask, and convolve the inverse mask separately. The ratio of these two results is a smoothed image that doesn't take the masked pixels into account. Finally, we replace the pixels in the original image under the mask with the values from this normalized convolution:
% Solution 1: normalized convolution
smooth = gaussf(img * ~mask, 2) / gaussf(~mask, 2);
img(mask) = smooth(mask);
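For readers without DIPimage, roughly the same normalized convolution can be sketched in plain MATLAB. This is only a sketch, assuming the Image Processing Toolbox, implicit expansion (R2016b or later), img as a double H-by-W-by-3 array, and mask as a 2-D logical array:
g = fspecial('gaussian', 13, 2);                    % Gaussian kernel, sigma = 2
w = imfilter(double(~mask), g);                     % smoothed weights
smooth = imfilter(img .* ~mask, g) ./ max(w, eps);  % weighted average, guarding against division by zero
fillIdx = repmat(mask, 1, 1, 3);
img(fillIdx) = smooth(fillIdx);                     % replace masked pixels with the smoothed values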
An alternative solution applies a closing on the image multiplied by the mask (note that this multiplication makes the pixels we don't want completely black; the closing will spread the surrounding colors over the black areas):
% Solution 2: morphology
smooth = iterate('closing',img * ~mask, 13);
img(mask) = smooth(mask);

Finding area in image with maximum variation of pixels

I am struggling to find an algorithm that extracts the region of an image with the maximum variation in pixels. I got the following image after preprocessing.
I did the following pre-processing steps:
x = imread('test2.jpg');
gray_x = rgb2gray(x);
I = medfilt2(gray_x, [3 3]);
gray_x = I;
%%
canny_x = edge(gray_x, 'canny', 0.3);
figure, imshow(canny_x);
%%
s = strel('disk', 3);
si = imdilate(canny_x, s);
% figure 5
figure; imshow(si); title('Dilation');
se = imerode(canny_x, s);
% figure 6
figure; imshow(se); title('Erosion');
I = imsubtract(si, se);
% figure 7
figure; imshow(I);
Basically, what I am struggling to build is a weapon detection system using image processing. I want to localize possible weapon areas so that I can feed them to my classifier to identify whether each one is a weapon or not. Any suggestions? Thank you.
A possible solution could be:
1. Find corner points in the image (Harris corner points, etc.)
2. Set the value of all the corner points to white while the rest of the image is black
3. Take a rectangular window and traverse it over the whole image
4. Sum all the white pixels in that rectangular window
5. Select the region whose sum is the maximum over all regions
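A minimal sketch of these steps, assuming the Image Processing Toolbox (corner for the Harris detector) and a hypothetical 50 x 50 window:
gray_x = rgb2gray(imread('test2.jpg'));          % same input as above
pts = round(corner(gray_x, 'Harris'));           % Harris corner points, one [x y] per row
cornerMap = false(size(gray_x));
cornerMap(sub2ind(size(cornerMap), pts(:,2), pts(:,1))) = true;  % corners white, rest black
win = ones(50);                                  % hypothetical 50 x 50 window
density = conv2(double(cornerMap), win, 'same'); % white-pixel count per window position
[~, idx] = max(density(:));                      % window with the maximum sum
[rowC, colC] = ind2sub(size(density), idx);      % center of the densest region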

Finding dark purple pixels in an image

I am doing research for my higher studies in automation. I have done the automation part of the microscope, but I need help in MATLAB. An example of what I would like to segment is shown here:
I need to extract the dark purple pixels from this image and display only those in a figure. It is almost like colour-based segmentation, but I only want to take the dark purple pixels from the whole image.
What would I do in this case?
Here's something to get you started. Let's go with the theme of colour segmentation, where you only want to extract pixels that are of a deep purple. Before we get started, I would like to point you to the HSV colour space. The HSV colour space is ideal for representing colours in a way that is intuitive to humans. We tend to describe colours by their dominant colour, followed by attributes such as how washed out or how pure the colour is, and how bright or dark the colour is. The dominant colour is represented by the Hue, how washed out or pure the colour appears is represented by the Saturation, and the brightness of the colour is represented by the Value; hence Hue-Saturation-Value, or the HSV colour space.
We can transform an RGB image into HSV with rgb2hsv. This returns a 3D matrix with the hue, saturation and value as 2D slices, much like an RGB image where each slice represents the red, green and blue channels. Let's see what each component looks like once we transform the image into HSV:
im = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
hsv = rgb2hsv(im2double(im));
figure;
for idx = 1 : 3
subplot(1,3,idx);
imshow(hsv(:,:,idx));
end
The first line of code reads in an image from a URL. I'm going to use the one that Hoki referred you to, as it's the simplest one to deal with. For self-containment, this is what the original image looks like:
Once we do this, we convert the image into the HSV colour space. It is important that you convert the image to double precision and normalize each component to [0,1]; that is performed by im2double. Next, we spawn a new figure and place each component in a single row over three columns. The first column represents the hue, the next column the saturation, and the last column the value. This is the figure that we see:
With the first figure, it looks like the dominant colour is purple, whether it's a light shade or a dark shade of the colour, so the hue won't help us here. If you look at a HSV colour wheel:
(source: hobbitsandhobos.com)
Normalize the wheel so that it falls between [0,1] instead of 0 to 360 degrees. The hue is actually represented in degrees due to the nature of the colour space, but MATLAB normalizes this to [0,1]; for example, a hue of 270 degrees maps to 270/360 = 0.75. You can see that purple falls within a hue of [0.6,0.8], which corresponds to the first figure I showed you displaying the hue for our image. If you examine the pixels around the image, they fluctuate within this range. Therefore, the hue won't help us much here.
What will certainly help us are the saturation and value components. If you take a look, the deep purple pixels have a higher saturation than the rest of the background, which makes sense because the deep purple has a much more pure version of purple than the rest of the background. For the value, you can see that the brightness of the dark purple is darker than the background.
We can use these two points as an exploit to segment out the purple colour in the image. The easiest thing to do would be to threshold the saturation and value planes so that any values that are within a certain range you keep while those that are outside you throw away. Therefore, you can do something like this:
sThresh = hsv(:,:,2) > 0.6 & hsv(:,:,2) < 0.9;
vThresh = hsv(:,:,3) > 0.4 & hsv(:,:,3) < 0.65;
I used impixelinfo and I hovered my mouse over the saturation and value components to examine what the values were for the deep purple regions. It looks like those pixels that are deep purple have a saturation value between 0.6 and 0.9, while the value component has values between 0.4 and 0.65. The above code will create two binary masks where true means that the pixel satisfies our criteria while false means it doesn't. Because I want to combine both things together and not leave any stone unturned, let's logical OR the masks together for the final result:
figure;
result = sThresh | vThresh;
imshow(result);
We will also show the result too. This is what we get:
As you can see, this does a pretty good job, but we have remnants of the red arrow that we don't want in the final result. To do a bit of cleanup, we can use morphology, specifically an opening filter with a small window so that we don't affect the pixels that we want too much. We can use imopen to perform the opening operation for us. A morphological opening removes isolated pixel regions in an image. You use what is called a structuring element to look at local neighbourhoods of the image. For the basics, any pixel regions smaller than the shape contained within the structuring element get removed. Because we want to preserve the shape of the other objects, we can try a 5 x 5 disk structuring element to clean these pixels up:
figure;
se = strel('disk', 2, 0);
final = imopen(result, se);
imshow(final);
This is what we get:
Not bad! There are some holes that we need to patch up, so let's fill in those holes with imfill:
figure;
final_noholes = imfill(final, 'holes');
imshow(final_noholes);
This is what we get:
OK! So we have our mask. The last thing we need to do is present the image so that you only show the deep purple colours from the original image, and nothing else. That can easily be achieved with bsxfun:
figure;
out = bsxfun(@times, im, uint8(final_noholes));
imshow(out);
The above operation takes your mask and multiplies every pixel in your image by it. One small thing I'd like to point out is that the mask we found in the previous step needs to be cast to uint8, because bsxfun requires that both inputs to the operation be of the same type. bsxfun replicates this mask in 3D so that the unwanted RGB pixels are masked out and only the ones you are looking for are kept.
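As a side note, on newer MATLAB releases (R2016b and later), implicit expansion lets you achieve the same masking without bsxfun:
out = im .* uint8(final_noholes);  % implicit expansion replicates the mask across the colour channels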
This is what we finally get:
As you can see, it isn't perfect, but it's certainly enough to get you started. Those thresholds are what are important, but with some very simple thresholding, I extracted most of the purple pixels out.
To make it easier for you, here's the code that I wrote above that can easily be copied and pasted into MATLAB for you to run:
clear all; close all; clc;
im = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
hsv = rgb2hsv(im2double(im));
figure;
for idx = 1 : 3
subplot(1,3,idx);
imshow(hsv(:,:,idx));
end
sThresh = hsv(:,:,2) > 0.6 & hsv(:,:,2) < 0.9;
vThresh = hsv(:,:,3) > 0.4 & hsv(:,:,3) < 0.65;
figure;
result = sThresh | vThresh;
imshow(result);
figure;
se = strel('disk', 2, 0);
final = imopen(result, se);
imshow(final);
figure;
final_noholes = imfill(final, 'holes');
imshow(final_noholes);
figure;
out = bsxfun(@times, im, uint8(final_noholes));
imshow(out);
Good luck!
Try this:
function main
    clc, clear
    A = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
    subplot(1,2,1)
    imshow(A)
    RGB = [230 210 200]; % color you want
    e = 40;              % color shift
    B = pix_in(A,RGB,e);
    B = B + 255.*uint8(~B); % choosing white background
    subplot(1,2,2)
    imshow(B)
end
function B = pix_in(A,RGB,e)
    % select specific pixels in image
    % A   - color image (3D matrix, uint8)
    % RGB - [R G B] - color to select
    % e   - color shift/deviation
    A = double(A);            % for same-class operations (RGB is double)
    [m, n, ~] = size(A);
    RGB = reshape(RGB,1,1,3);
    RGB = repmat(RGB,m,n,1);  % creating 3D matrix
    b = abs(A-RGB) < e;       % logical 3D
    b = sum(b,3) == 3;        % true if the pixel's [R,G,B] is in range
    B = A.*repmat(b,1,1,3);   % keep only the pixels in range
    B = uint8(B);
end

Changing image aspect ratio of interpolated RGB image. Square to rectangular

I have some code which takes a fisheye image and converts it to a rectangular image in each RGB channel. I am having trouble with the fact that the output image is square instead of rectangular (this means that the image is distorted, compressed horizontally). I have tried changing the output matrix to a more suitable format, without success. Besides this, I have also discovered that for the code to work the input image must be square, like 500x500. Any idea how to solve this issue? This is the code:
The code is inspired by Prakash Manandhar "Polar To/From Rectangular Transform of Images" file exchange on mathworks.
EDIT. Code now works.
function imP = FISHCOLOR2(imR)
    rMin = 0.1;
    rMax = 1;
    [Mr, Nr, Dr] = size(imR); % size of rectangular image
    xRc = (Mr+1)/2;           % co-ordinates of the center of the image
    yRc = (Nr+1)/2;
    sx = (Mr-1)/2;            % scale factors
    sy = (Nr-1)/2;
    reduced_dim = min(size(imR,1), size(imR,2));
    imR = imresize(imR, [reduced_dim reduced_dim]);
    M = size(imR,1); N = size(imR,2);
    dr = (rMax - rMin)/(M-1);
    dth = 2*pi/N;
    r = rMin:dr:rMin+(M-1)*dr;
    th = (0:dth:(N-1)*dth)';
    [r, th] = meshgrid(r, th);
    x = r.*cos(th);
    y = r.*sin(th);
    xR = x*sx + xRc;
    yR = y*sy + yRc;
    for k = 1:Dr % colors
        imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
    end
    imP = imresize(imP, [size(imP,1), size(imP,2)/3]);
    imP = imrotate(imP, 270);
SOLVED
Input image <- Image link
Output image <- Image link
PART A
To remove the requirement of a square input image, you may resize the input image into a square one with this -
%%// Resize the input image to make it square
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
A few points I would like to raise about this image resizing to make a square image. This was a quick-and-dirty approach, and it distorts the image for a non-square input, which you may not want if the image is not too "squarish". In many of those non-square images, you will find blackish borders along the boundaries of the image. If you can remove those with some sort of image processing algorithm, or just manual photoshopping, that would be ideal. After that, even if the image is not square, imresize could be considered a safe option.
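As an alternative to the distorting resize (this is not from the original answer, just a possibility), you could zero-pad the shorter dimension to make the image square, which preserves the aspect ratio at the cost of adding black borders:
d = max(size(imR,1), size(imR,2));
imR = padarray(imR, [d-size(imR,1), d-size(imR,2)], 0, 'post'); % pad with black to a square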
PART B
Now, after doing the main processing of flattening out the fisheye image, at the end of your code it seemed like the image has to be rotated 90 degrees clockwise or counter-clockwise, depending on whether the fisheye image has objects projected inwardly or outwardly, respectively.
%%// Rotating image
imP = imrotate(imP,-90); %%// When projected inwardly
imP = imrotate(imP,90);  %%// When projected outwardly
Note that the flattened image must have a height equal to half the size of the input square image, that is, the radius of the image. Thus, the final output image must have size(imP,2)/2 rows. Since you are flattening out a fisheye image, I assumed that the width of the flattened image must be 2*PI times its height. So, I tried this -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)]);
But the results looked too flattened out. So the next logical experimental value looked like PI times the height, i.e. -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)/2]);
Results in this case looked good.
I'm not very experienced in the finer points of image processing in MATLAB, but depending on the exact operation of the imP fill mechanism, you may get what you're looking for by doing the following. Change:
M = size(imR, 1);
N = size(imR, 2);
To:
verticalScaleFactor = 0.5;
M = size(imR, 1) * verticalScaleFactor;
N = size(imR, 2);
If my hunch is right, you should be able to tune that scale factor to get the image just right. It may, however, break your code. Let me know if it doesn't work, and edit your post to flesh out exactly what each section of code does. Then we should be able to give it another shot. Good luck!
This is the code which works.
function imP = FISHCOLOR2(imR)
    rMin = 0.1;
    rMax = 1;
    [Mr, Nr, Dr] = size(imR); % size of rectangular image
    xRc = (Mr+1)/2;           % co-ordinates of the center of the image
    yRc = (Nr+1)/2;
    sx = (Mr-1)/2;            % scale factors
    sy = (Nr-1)/2;
    reduced_dim = min(size(imR,1), size(imR,2));
    imR = imresize(imR, [reduced_dim reduced_dim]);
    M = size(imR,1); N = size(imR,2);
    dr = (rMax - rMin)/(M-1);
    dth = 2*pi/N;
    r = rMin:dr:rMin+(M-1)*dr;
    th = (0:dth:(N-1)*dth)';
    [r, th] = meshgrid(r, th);
    x = r.*cos(th);
    y = r.*sin(th);
    xR = x*sx + xRc;
    yR = y*sy + yRc;
    for k = 1:Dr % colors
        imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
    end
    imP = imresize(imP, [size(imP,1), size(imP,2)/3]);
    imP = imrotate(imP, 270);

Stretching an ellipse in an image to form a circle

I want to stretch an elliptical object in an image until it forms a circle. My program currently inputs an image with an elliptical object (e.g. a coin at an angle), thresholds and binarizes it, isolates the region of interest using edge detection/bwboundaries(), and performs regionprops() to calculate the major/minor axis lengths.
Essentially, I want to use the 'MajorAxisLength' as the diameter and stretch the object along the minor axis to form a circle. Any suggestions on how I should approach this would be greatly appreciated. I have appended some code for your perusal (unfortunately I don't have enough reputation to upload an image; the binarized image looks like a white ellipse on a black background).
EDIT: I'd also like to apply this technique to the gray-scale version of the image, to examine what the stretch looks like.
code snippet:
rgbImage = imread(fullFileName);
redChannel = rgbImage(:, :, 1);
binaryImage = redChannel < 90;
labeledImage = bwlabel(binaryImage);
area_measurements = regionprops(labeledImage,'Area');
allAreas = [area_measurements.Area];
biggestBlobIndex = find(allAreas == max(allAreas));
keeperBlobsImage = ismember(labeledImage, biggestBlobIndex);
measurements = regionprops(keeperBlobsImage,'Area','MajorAxisLength','MinorAxisLength')
You know the diameter of the circle, and you know the center is where the major and minor axes intersect. Thus, just compute the radius r from the diameter, and for every pixel in your image, check whether that pixel's Euclidean distance from the circle's center is less than r. If so, color the pixel white. Otherwise, leave it alone.
[M, N] = size(redChannel);
new_image = zeros(M, N);
for ii = 1:M
    for jj = 1:N
        % center_x, center_y and radius come from the regionprops measurements
        if sqrt((jj-center_x)^2 + (ii-center_y)^2) <= radius
            new_image(ii,jj) = 1.0;
        end
    end
end
This can probably be optimized by using the meshgrid function combined with logical indexing to avoid the loops.
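For example, here's a vectorized sketch of the same fill, assuming center_x, center_y and radius are defined as before:
[jj, ii] = meshgrid(1:N, 1:M);  % column and row coordinate grids
new_image = double((jj - center_x).^2 + (ii - center_y).^2 <= radius^2);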
I finally managed to figure out the transform required thanks to a lot of help on the matlab forums. I thought I'd post it here, in case anyone else needed it.
stats = regionprops(keeperBlobsImage, 'MajorAxisLength','MinorAxisLength','Centroid','Orientation');
alpha = pi/180 * stats(1).Orientation;                  % ellipse orientation in radians
Q = [cos(alpha), -sin(alpha); sin(alpha), cos(alpha)];  % rotation matrix
x0 = stats(1).Centroid.';
a = stats(1).MajorAxisLength;
b = stats(1).MinorAxisLength;
S = diag([1, a/b]);      % stretch the minor axis to match the major axis
C = Q*S*Q';              % apply the stretch along the ellipse's own axes
d = (eye(2) - C)*x0;     % translation that keeps the centroid fixed
tform = maketform('affine', [C d; 0 0 1]');
Im2 = imtransform(redChannel, tform);
subplot(2, 3, 5);
imshow(Im2);
subplot(2, 3, 5);
imshow(Im2);
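Regarding the edit about the gray-scale version: the same tform should apply directly to the gray-scale image, for example:
ImGray = imtransform(rgb2gray(rgbImage), tform);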