How to choose correct value range of green colour from histogram charts - matlab

I am trying to segment just the leaves from a plant image. I have spent two weeks searching for how to extract the correct range of values for green from the histograms of the hue, saturation and value channels of the HSV colour space. I used the following code:
% convert to HSV and display each channel
hsv=rgb2hsv(INSUM);
H1=hsv(:,:,1);
figure, imshow(H1);
H2=hsv(:,:,2);
figure, imshow(H2);
H3=hsv(:,:,3);
figure, imshow(H3);
% discard zero-valued pixels so they don't dominate the histograms
H1(H1(:)==0)=[];
H2(H2(:)==0)=[];
H3(H3(:)==0)=[];
figure,imhist(H1);
figure,imhist(H2);
figure,imhist(H3);
% threshold ranges found by trial and error for a single image
limitUpperH = 0.3; limitLowerH = 0.19;
limitUpperS = 1; limitLowerS = 0.95;
limitUpperV = 0.6; limitLowerV = 0.02;
LOGICH = (hsv(:,:,1)<limitUpperH) & (hsv(:,:,1)>limitLowerH);
LOGICS = (hsv(:,:,2)<limitUpperS) & (hsv(:,:,2)>limitLowerS);
LOGICV = (hsv(:,:,3)<limitUpperV) & (hsv(:,:,3)>limitLowerV);
LOGIC = LOGICH & LOGICS & LOGICV;
figure, imshow(LOGIC);
I chose the limitUpper and limitLower values for H, S and V by trial and error. They give a good result for that one image, but when I apply the same values to other images the segmentation fails.
I have read a lot about HSV and image histograms and seen many examples, but none of them explains how to extract the green range of H, S and V from the histogram charts. Can anyone help me?
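For reference, here is a minimal sketch of one possible direction: start from a generic green hue band (roughly [65, 170]/360, the range used in the green-detection answer further down) and let each image's own hue histogram narrow it. The 64-bin histogram, the 5% prominence cutoff and the variable INSUM are assumptions for illustration.
% Sketch: derive a per-image green hue band from the hue histogram,
% starting from a generic green band of roughly [65,170]/360.
hsv = rgb2hsv(INSUM);                   % INSUM is the input RGB image (assumed)
H = hsv(:,:,1);
V = hsv(:,:,3);
hVals = H(V > 0.05);                    % ignore near-black pixels in the histogram
[counts, centers] = imhist(hVals, 64);  % 64-bin hue histogram
inGreen = centers >= 65/360 & centers <= 170/360;          % generic green band
strong = inGreen & counts > 0.05 * max(counts(inGreen));   % keep well-populated bins only
limitLowerH = min(centers(strong));
limitUpperH = max(centers(strong));
LOGICH = H >= limitLowerH & H <= limitUpperH;              % per-image hue mask
figure, imshow(LOGICH);
The point is only that the hue band is anchored to a generic green range rather than hand-tuned to a single image; the saturation and value limits can stay loose.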

Related

How to "disturb" an image with noise, but keep the orginal undisturbed pixel intact, and write to file in imagesc

For a simple explanation of the title, imagine you're using Photoshop layers, where the noise is on the top layer.
% load a test image
I = rgb2gray(imread('peppers.png'));
% recreate image
cmap = colormap(); % grab current colormap
ncolors = size(cmap,1);
% do what imagesc does
Iind = double(I) - double(min(I(:)));
Iind = Iind / max(Iind(:));
% quantize image
Iind = round(Iind * ncolors + 0.5);
Iind(Iind > ncolors) = ncolors;
Iind(Iind < 1) = 1;
% convert to RGB from indexed image using cmap as palette
Irgb = ind2rgb(Iind,cmap);
imwrite(Irgb, 'filename.bmp');
This code scales the image to the colormap and writes it to a file. However, if I add artificial noise at a certain pixel location after loading the image:
I(50,50) = rand(1);
This produces a visually different result, and the image with the noise looks a little washed out. The top image is the original and the bottom one is the image with noise.
EDIT: The image below is a crop of the top-left corner. Left is the original and right is the one with noise. On the right you can see the color is washed out a little, and if you look closely you can see the noise pixel (dark blue, at position (50,50)).
Any idea how to still maintain the original after noise was added? Thanks in advance!
The issue is that you're adding noise (between 0 and 1) to the original image, which has min = 8 and max = 255. This reduces the new minimum to either 0 or 1 (the input image is of type uint8, so MATLAB rounds the random number on assignment), which stretches out the colourmap slightly. There are two options to avoid this.
Add random noise in the range of the original image
I(50,50) = randi([min(I(:)), max(I(:))]);
Add random noise after scaling the original image
Irgb(50,50) = rand(1);
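For completeness, here is a minimal sketch of the first option applied end to end: the noise is drawn from the image's own intensity range before the scaling step, so the minimum and maximum (and therefore the mapping onto the colormap) stay the same. The bounds are cast to double for randi, and the output file name is made up for illustration.
% option 1: draw the noise from the image's own range so the scaling is unchanged
I = rgb2gray(imread('peppers.png'));
lo = double(min(I(:)));
hi = double(max(I(:)));
I(50,50) = randi([lo hi]);           % noise pixel stays inside [min, max]
% same imagesc-style scaling as above
cmap = colormap();
ncolors = size(cmap,1);
Iind = double(I) - double(min(I(:)));
Iind = Iind / max(Iind(:));
Iind = round(Iind * ncolors + 0.5);
Iind(Iind > ncolors) = ncolors;
Iind(Iind < 1) = 1;
Irgb = ind2rgb(Iind, cmap);
imwrite(Irgb, 'filename_noisy.bmp'); % hypothetical output file name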

Why can't I colour my segmented region from the original image

I have the following code:
close all;
star = imread('/Users/name/Desktop/folder/pics/OnTheBeach.png');
blrtype = fspecial('average',[3 3]);
blurred = imfilter(star, blrtype);
[rows,cols,planes] = size(star);
R = star(:,:,1); G = star(:,:,2); B = star(:,:,3);
starS = zeros(rows,cols);
ind = find(R > 190 & R < 240 & G > 100 & G < 170 & B > 20 & B < 160);
starS(ind) = 1;
K = imfill(starS,'holes');
stats = regionprops(logical(K), 'Area', 'Solidity');
ind = ([stats.Area] > 250 & [stats.Solidity] > 0.1);
L = bwlabel(K);
result = ismember(L,find(ind));
Up to this point I load an image, blur to filter out some noise, do colour segmentation to find the specific objects which fall in that range, then create a binary image that has value 1 for the object's colour, and 0 for all other stuff. Finally I do region filtering to remove any clutter that was left in the image so I'm only left with the objects I'm looking for.
Now I want to recolour the original image based on the segmentation mask to change the colour of the starfish. I want to create red, green and blue channels, assign values to them and then lay the mask over the image (to have red starfish, for example).
red = star;
red(starS) = starS(:,:,255);
green = star;
green(starS) = starS(:,:,0);
blue = star;
blue(starS) = star(:,:,0);
out = cat(3, red, green, blue);
imshow(out);
This gives me an error: Index exceeds matrix dimensions.
Error in Project4 (line 28)
red(starS) = starS(:,:,255);
What is wrong with my current approach?
Your code is a bit confusing... I don't understand whether the mask you want to use is starS or result, since both look like 2D logical masks. In your second code snippet you used starS, but the mask you posted in your question is result.
Anyway, no matter what your desired mask is, all you have to do is to use the imoverlay function. Here is a small example based on your code:
out = imoverlay(star,result,[1 0 0]);
imshow(out);
and here is the output:
If the opaque mask of imoverlay suggested by Tommaso is not what you're after, you can modify the RGB values of the input to cast a hue over the selected pixels without saturating them. It is only slightly more involved.
I = find(result);
gives you an index of the pixels in the 2D image. However, star is 3D. Those indices will point at the same pixels, but only in the first 2D slice. That is, if I points at pixel (x,y), it is equivalently pointing to pixel (x,y,1). That is the red component of the pixel. To index (x,y,2) and (x,y,3), the green and blue components, you need to increment I by numel(result) and 2*numel(result). That is, star(I) accesses the red component of the selected pixels, star(I+numel(result)) accesses the green component, and star(I+2*numel(result)) accesses the blue component.
Now that we can access these values, how do we modify their color?
This is what imoverlay does:
I = find(result);
out = star;
out(I) = 255; % red channel
I = I + numel(result);
out(I) = 0; % green channel
I = I + numel(result);
out(I) = 0; % blue channel
Instead, you can increase the brightness of the red proportionally, and decrease the green and blue. This will change the hue, increase saturation, and preserve the changes in intensity within the stars. I suggest the gamma function, because it will not cause strong saturation artefacts:
I = find(result);
out = double(star)/255;
out(I) = out(I).^0.5; % red channel
I = I + numel(result);
out(I) = out(I).^1.5; % green channel
I = I + numel(result);
out(I) = out(I).^1.5; % blue channel
imshow(out)
By increasing the 1.5 and decreasing the 0.5 you can make the effect stronger.
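If you want to reuse this recolouring elsewhere, the per-channel exponents can be pulled out into a small helper. This is only a sketch built on the answer above; the name tintRegion and its arguments are made up for illustration.
% Sketch: apply a per-channel gamma only inside a logical mask.
% tintRegion is a hypothetical helper, not a built-in function.
function out = tintRegion(rgbImage, mask, gammas)
% rgbImage : uint8 RGB image
% mask     : 2D logical mask, same size as one image plane
% gammas   : 1x3 vector, e.g. [0.5 1.5 1.5] to push the masked pixels towards red
out = double(rgbImage) / 255;
I = find(mask);
for c = 1:3
    idx = I + (c-1) * numel(mask);  % linear indices of channel c for the masked pixels
    out(idx) = out(idx) .^ gammas(c);
end
end
Calling imshow(tintRegion(star, result, [0.5 1.5 1.5])) reproduces the example above.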

Matlab - How to detect green color on image?

I'm working on a project where I basically have to detect the trees in an image and delete the other information. I used HSV for segmentation and the function regionprops to detect each element. It works fine, but in some cases that have house roofs, the roofs aren't deleted because their hue is similar to the trees'. So far, this is the result:
To remove the roofs, I thought it might be possible to measure the green colour in each detected region. If a region isn't at least 70% green (for example), that region is deleted. How can I do that? How can I detect only the green colour in the image?
Solution Explanation
Evaluating the level of green in a patch is an interesting idea. I suggest the following approach:
1. Convert your patches from RGB to the HSV colour space. In the HSV colour space it is easier to evaluate the hue (i.e. the colour) of each pixel by examining the first channel.
2. Find the range of the green colour in the hue channel. In our case it is about [65/360, 170/360], as can be seen here:
3. For each patch, count how many pixels have a hue value inside the green range, and divide by the size of the connected component.
Code Example
The following function evaluates the "level of greenness" in a patch:
function [ res ] = evaluateLevelOfGreen( rgbPatch )
%EVALUATELEVELOFGREEN Returns the fraction of non-black pixels in rgbPatch
%whose hue falls within the green range.
%determines the green threshold in the hue channel
GREEN_RANGE = [65,170]/360;
INTENSITY_T = 0.1;
%converts to HSV color space
hsv = rgb2hsv(rgbPatch);
%generate a region of interest (only areas which aren't black)
relevanceMask = rgb2gray(rgbPatch)>0;
%finds pixels within the specified range in the H and V channels
greenAreasMask = hsv(:,:,1)>GREEN_RANGE(1) & hsv(:,:,1) < GREEN_RANGE(2) & hsv(:,:,3) > INTENSITY_T;
%returns the ratio of green pixels within the relevance mask
res = sum(greenAreasMask(:)) / sum(relevanceMask(:));
end
Results
When used on green patches:
greenPatch1 = imread('g1.PNG');
evaluateLevelOfGreen(greenPatch1)
greenPatch2 = imread('g2.PNG');
evaluateLevelOfGreen(greenPatch2)
greenPatch3 = imread('g3.PNG');
evaluateLevelOfGreen(greenPatch3)
Results:
ans = 0.8230
ans = 0.8340
ans = 0.6030
When used on non-green patches:
nonGreenPatch1 = imread('ng1.PNG');
evaluateLevelOfGreen(nonGreenPatch1)
result:
ans = 0.0197
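To tie this back to the original question (discarding any detected region that is not at least 70% green), here is a minimal sketch of how the function could be applied per connected component. The variables rgbImage and treeMask and the 0.7 threshold are assumptions; the mask stands for whatever binary result your existing HSV segmentation produces.
% Sketch: drop connected components whose patch is less than 70% green.
% rgbImage (uint8 RGB) and treeMask (logical) come from the existing segmentation.
GREEN_FRACTION_T = 0.7;
CC = bwconncomp(treeMask);
keep = false(1, CC.NumObjects);
for k = 1:CC.NumObjects
    % build an RGB patch that contains only this component (black elsewhere)
    componentMask = false(size(treeMask));
    componentMask(CC.PixelIdxList{k}) = true;
    patch = rgbImage .* uint8(repmat(componentMask, [1 1 3]));
    keep(k) = evaluateLevelOfGreen(patch) >= GREEN_FRACTION_T;
end
% rebuild the mask from the components that passed the greenness test
cleanMask = false(size(treeMask));
cleanMask(vertcat(CC.PixelIdxList{keep})) = true;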

Finding dark purple pixels in an image

I am doing research for my higher studies in automation. I have done the automation part of the microscope, but I need help with MATLAB. An example of what I would like to segment is shown here:
I need to extract the dark purple pixels from this image and display only those in a figure. It is almost like colour-based segmentation, but I only want to take the dark purple pixels from the whole image.
What would I do in this case?
Here's something to get you started. Let's go with the theme of colour segmentation where you only want to extract pixels that are of a deep purple. I would like to point you to the HSV colour space before we get started. The HSV colour space is ideal for representing colours in a way that is most intuitive to humans. We tend to describe colours by their dominant colour, followed by attributes such as how washed out or how pure the colour is, and how bright or dark the colour is. The dominant colour is represented by the Hue, the appearance of how washed out or how pure the colour is is represented by the Saturation and the intensity of the colour is represented by the Value, and hence Hue-Saturation-Value, or the HSV colour space.
We can transform an RGB image into HSV with rgb2hsv. This will return a 3D matrix that has the hue, saturation and value as 2D slices, much like an RGB image where each slice represents the red, green or blue channel. Let's see what each component looks like once we transform the image into HSV:
im = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
hsv = rgb2hsv(im2double(im));
figure;
for idx = 1 : 3
subplot(1,3,idx);
imshow(hsv(:,:,idx));
end
The first line of code reads in an image from a URL. I'm going to use the one that Hoki referred you to, as it's the simplest one to deal with. For self-containment, this is what the original image looks like:
Once we do this, we convert the image into the HSV colour space. It is important that you convert the image to double precision and you normalize each component to [0,1], and that is performed by im2double. Next, we spawn a new figure, and place each component in a single row over three columns. The first column represents the hue, next column the saturation and finally the last column being the value. This is the figure that we see:
With the first figure, it looks like the dominant colour is purple, whether it's a light shade or a dark shade of the colour, so the hue won't help us here. If you look at a HSV colour wheel:
(source: hobbitsandhobos.com)
Normalize the wheel so that it falls between [0,1] instead of 0 to 360 degrees. The hue is actually represented as degrees due to the nature of the colour space, but MATLAB normalizes this to [0,1]. You can see that purple falls within a hue of [0.6,0.8], which corresponds to the first figure I showed you that displays the hue for our image. If you examine the pixels around the image, they fluctuate between this range. Therefore, the hue won't help us much here.
What will certainly help us are the saturation and value components. If you take a look, the deep purple pixels have a higher saturation than the rest of the background, which makes sense because the deep purple has a much more pure version of purple than the rest of the background. For the value, you can see that the brightness of the dark purple is darker than the background.
We can use these two points as an exploit to segment out the purple colour in the image. The easiest thing to do would be to threshold the saturation and value planes so that any values that are within a certain range you keep while those that are outside you throw away. Therefore, you can do something like this:
sThresh = hsv(:,:,2) > 0.6 & hsv(:,:,2) < 0.9;
vThresh = hsv(:,:,3) > 0.4 & hsv(:,:,3) < 0.65;
I used impixelinfo and I hovered my mouse over the saturation and value components to examine what the values were for the deep purple regions. It looks like those pixels that are deep purple have a saturation value between 0.6 and 0.9, while the value component has values between 0.4 and 0.65. The above code will create two binary masks where true means that the pixel satisfies our criteria while false means it doesn't. Because I want to combine both things together and not leave any stone unturned, let's logical OR the masks together for the final result:
figure;
result = sThresh | vThresh;
imshow(result);
We will also show the result too. This is what we get:
As you can see, this does a pretty good job, but we have remnants of the red arrow that we don't want in the final result. To do a bit of cleanup, we can use morphology - specifically an opening filter with a small window so that we don't affect the pixels that we want as much. We can use imopen to perform the opening operation for us. A morphological opening removes isolated pixel regions that appear around your image. You use what is called a structuring element to look at local neighbourhoods of your image. For the basics, any pixel regions that are smaller than the shape contained within the structuring element get removed. Because we want to preserve the shape of the other objects, we can try using a 5 x 5 disk structuring element to clean these pixels up:
figure;
se = strel('disk', 2, 0);
final = imopen(result, se);
imshow(final);
This is what we get:
Not bad! There are some holes that we need to patch up, so let's fill in those holes with imfill:
figure;
final_noholes = imfill(final, 'holes');
imshow(final_noholes);
This is what we get:
OK! So we have our mask. The last thing we need to do is present the image so that you only show the deep purple colours from the original image, and nothing else. That can easily be achieved with bsxfun:
figure;
out = bsxfun(@times, im, uint8(final_noholes));
imshow(out);
The above operation takes your mask and multiplies every pixel in your image by it. One small thing I'd like to point out is that the mask we found in the previous step needs to be cast to uint8, because bsxfun requires that the inputs to the multiplication (or whatever operation you perform) be of the same type. bsxfun effectively replicates this mask in 3D so that you mask out the unwanted RGB pixels and only keep the ones you are looking for.
This is what we finally get:
As you can see, it isn't perfect, but it's certainly enough to get you started. Those thresholds are what are important, but with some very simple thresholding, I extracted most of the purple pixels out.
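As a side note: on MATLAB releases that support implicit expansion (R2016b and later, if memory serves), the same masking can be written without bsxfun as a plain element-wise multiplication; the 2D uint8 mask expands across the three colour channels automatically.
% equivalent masking with implicit expansion (newer MATLAB releases)
out = im .* uint8(final_noholes);
imshow(out);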
To make it easier for you, here's the code that I wrote above that can easily be copied and pasted into MATLAB for you to run:
clear all; close all; clc;
im = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
hsv = rgb2hsv(im2double(im));
figure;
for idx = 1 : 3
subplot(1,3,idx);
imshow(hsv(:,:,idx));
end
sThresh = hsv(:,:,2) > 0.6 & hsv(:,:,2) < 0.9;
vThresh = hsv(:,:,3) > 0.4 & hsv(:,:,3) < 0.65;
figure;
result = sThresh | vThresh;
imshow(result);
figure;
se = strel('disk', 2, 0);
final = imopen(result, se);
imshow(final);
figure;
final_noholes = imfill(final, 'holes');
imshow(final_noholes);
figure;
out = bsxfun(@times, im, uint8(final_noholes));
imshow(out);
Good luck!
Try this:
function main
clc,clear
A = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
subplot(1,2,1)
imshow(A)
RGB = [230 210 200]; % color you want
e = 40; % color shift
B = pix_in(A,RGB,e);
B = B + 255.*uint8(~B); % choosing white background
subplot(1,2,2)
imshow(B)
end
function B = pix_in(A,RGB,e)
% select specific pixels in image
% A - color image (3D matrix uint8)
% RGB - [R G B] - color to select
% e - color shift/deviation
A = double(A); % for same class operations (RGB - double)
[m, n, ~] = size(A);
RGB = reshape(RGB,1,1,3);
RGB = repmat(RGB,m,n,1); % creating 3D matrix
b = abs(A-RGB) < e; % logical 3D
b = sum(b,3) == 3; % true where all of R, G and B are within range
B = A.*repmat(b,1,1,3); % keep only the pixels that are in range
B = uint8(B);
end

Matlab fill shapes by white

As you see, I have shapes and their white boundaries. I want to fill the shapes in white color.
The input is:
I would like to get this output:
Can anybody please help me with this code? It doesn't change the black ellipses to white.
Thanks a lot :]]
I = imread('untitled4.bmp');
Ibw = im2bw(I);
CC = bwconncomp(Ibw); %Ibw is my binary image
stats = regionprops(CC,'pixellist');
% pass all over the stats
for i=1:length(stats),
size = length(stats(i).PixelList);
% check only the relevant stats (the black ellipses)
if size >150 && size < 600
% fill the black pixel by white
x = round(mean(stats(i).PixelList(:,2)));
y = round(mean(stats(i).PixelList(:,1)));
Ibw = imfill(Ibw, [x, y]);
end;
end;
imshow(Ibw);
Your code can be improved and simplified as follows. First, negating Ibw and using BWCONNCOMP to find 4-connected components will give you indices for each black region. Second, sorting the connected regions by the number of pixels in them and choosing all but the largest two will give you indices for all the smaller circular regions. Finally, the linear indices of these smaller regions can be collected and used to fill in the regions with white. Here's the code (quite a bit shorter and not requiring any loops):
I = imread('untitled4.bmp');
Ibw = im2bw(I);
CC = bwconncomp(~Ibw, 4);
[~, sortIndex] = sort(cellfun('prodofsize', CC.PixelIdxList));
Ifilled = Ibw;
Ifilled(vertcat(CC.PixelIdxList{sortIndex(1:end-2)})) = true;
imshow(Ifilled);
And here's the resulting image:
If your images are all black and white, and you have the Image Processing Toolbox, then this looks like what you need:
http://www.mathworks.co.uk/help/toolbox/images/ref/imfill.html
Something like:
imfill(image, [startX, startY])
where [startX, startY] is a pixel (given as row and column) in the area that you want to fill.
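A minimal self-contained sketch of that call (the synthetic image and the seed point are made up for illustration; imfill takes a binary image and [row, column] seed locations):
% Sketch: fill the interior of a white-bordered shape from a seed point.
BW = false(100,100);
BW(30:70, 30:70) = true;      % white square
BW(35:65, 35:65) = false;     % black interior that should become white
filled = imfill(BW, [50 50]); % seed at row 50, column 50, inside the hole
imshow(filled);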