Colour Segmentation by L*a*b* - MATLAB

I'm using the code from the MATLAB website, "Color-Based Segmentation Using the L*a*b* Color Space":
http://www.mathworks.com/help/images/examples/color-based-segmentation-using-the-l-a-b-color-space.html
I'm trying to select some areas myself with roipoly(fabric) instead of using "load region_coordinates", but I get stuck: how do I save the coordinates of the polygon I just drew? I'm following the advice from lennon310 in Solution II at the bottom of this page:
A few questions about color segmentation with L*a*b*
I'm not sure when to save region_coordinates and when to call size(region_coordinates,1).
I made the following changes (Step 1):
1) Removed "load region_coordinates"
2) Added "region_coordinates = roipoly(fabric);"
Here's the code:
%% Step 1
fabric = imread(file);
figure(1); %Create figure window. "If h is not the handle and is not the Number property value of an existing figure, but is an integer, then figure(h) creates a figure object and assigns its Number property the value h."
imshow(fabric)
title('fabric')
%load regioncoordinates; % 6 matrices(?) labelled val(:,:,1-6), 5x2 (row x column)
region_coordinates = roipoly(fabric);
nColors = 6;
sample_regions = false([size(fabric,1) size(fabric,2) nColors]); %Initialize a height-by-width-by-nColors logical array of false values (one mask per colour)
%size(fabric,1) is the number of rows, size(fabric,2) the number of columns
for count = 1:nColors
sample_regions(:,:,count) = roipoly(fabric,region_coordinates(:,1,count),region_coordinates(:,2,count));
end
figure, imshow(sample_regions(:,:,2)),title('sample region for red');
%%Step 2
% Convert your fabric RGB image into an L*a*b* image using rgb2lab .
lab_fabric = rgb2lab(fabric);
%Calculate the mean a* and b* value for each area that you extracted with roipoly. These values serve as your color markers in a*b* space.
a = lab_fabric(:,:,2);
b = lab_fabric(:,:,3);
color_markers = zeros([nColors, 2]); %Initialize a 6x2 array of zeros for colour storage: 6 rows for colours, 2 columns for the a* and b* values
for count = 1:nColors
color_markers(count,1) = mean2(a(sample_regions(:,:,count))); %mean a* value for this sample region (the colour marker)
color_markers(count,2) = mean2(b(sample_regions(:,:,count)));
end
%For example, the average color of the red sample region in a*b* space is
fprintf('[%0.3f,%0.3f] \n',color_markers(2,1),color_markers(2,2));
%% Step 3: Classify Each Pixel Using the Nearest Neighbor Rule
%
color_labels = 0:nColors-1;
% Initialize matrices to be used in the nearest neighbor classification.
a = double(a);
b = double(b);
distance = zeros([size(a), nColors]);
%Perform classification using Euclidean distance.
for count = 1:nColors
distance(:,:,count) = ( (a - color_markers(count,1)).^2 + (b - color_markers(count,2)).^2 ).^0.5;
end
[~, label] = min(distance,[],3);
label = color_labels(label);
clear distance;
%% Step 4: Display Results of Nearest Neighbor Classification
%
% The label matrix contains a color label for each pixel in the fabric image.
% Use the label matrix to separate objects in the original fabric image by color.
rgb_label = repmat(label,[1 1 3]);
segmented_images = zeros([size(fabric), nColors],'uint8');
for count = 1:nColors
color = fabric;
color(rgb_label ~= color_labels(count)) = 0;
segmented_images(:,:,:,count) = color;
end
%figure, imshow(segmented_images(:,:,:,1)), title('Background of Fabric');
%Looks different somehow.
figure, imshow(segmented_images(:,:,:,2)), title('red objects');
figure, imshow(segmented_images(:,:,:,3)), title('green objects');
figure, imshow(segmented_images(:,:,:,4)), title('purple objects');
figure, imshow(segmented_images(:,:,:,5)), title('magenta objects');
figure, imshow(segmented_images(:,:,:,6)), title('yellow objects');

You can retrieve the coordinates of the polygon using output arguments in the call to roipoly. You then get a binary mask of the polygon, as well as the vertex coordinates if you want them.
A simple example demonstrating this:
clear
clc
close all
A = imread('cameraman.tif');
figure;
imshow(A)
%// The vertices of the polygon are stored in xi and yi;
%// polyMask is a binary image where pixels == 1 are white (inside the polygon).
[polyMask, xi, yi] = roipoly(A);
The result looks like this:
And if you wish to see the vertices with the binary mask:
%// display polymask
imshow(polyMask)
hold on
%// Highlight vertices in red
scatter(xi,yi,60,'r')
hold off
Which gives the following:
So to summarize:
1) The polygon's vertices are stored in xi and yi.
2) You can display the binary mask of the polygon using imshow(polyMask).
3) If you need the coordinates of the white pixels, you can use something like this:
[row_white,col_white] = find(polyMask == 1);
You're then good to go. Hope that helps!
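Tying this back to Step 1 of your code: since roipoly returns the binary mask directly, you don't actually need to store vertex coordinates at all. A minimal sketch (assuming the fabric image and nColors from your question):
%// Sketch, not from the original answer: draw one interactive polygon per colour
nColors = 6;
sample_regions = false([size(fabric,1) size(fabric,2) nColors]);
for count = 1:nColors
    sample_regions(:,:,count) = roipoly(fabric); %// roipoly returns the mask itself
end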


How to draw a line between two coordinates of an image permanently in Matlab? [duplicate]

This question already has answers here:
MATLAB: Drawing a line over a black and white image
(5 answers)
I have a set of points that I want to connect sequentially. Suppose the points are (A1,A2,A3,...A9); I want to connect A1 to A2, A2 to A3 and so on and finally connect A9 to A1.
All I need is to know a function that would help me connect A1 to A2, I could do the rest using for loops.
I know connecting two points is a question that has been asked here several times before but I couldn't find the answer I required. Several of the solutions suggest using "plot" and "line" but these functions overlay the results on the image and don't actually make any changes to the original image.
I did try them out and managed to save the resulting figure using the "saveas" and "print" functions, but the image doesn't get saved in the proper format and there are a lot of problems with the parameters of these functions. Besides, I don't really want to save the image; it's just unnecessary overhead I was willing to accept if I could get the desired image with the lines.
I've also tried "imline" to draw lines but it seems to be interactive.
This particular question reflects my problem perfectly but when I ran the code snippets given as solutions, all of them gave a set of dots in the resulting image.
I tried the above-mentioned code snippets from the link with this image that I found here.
All three code snippets in the link above produced a dotted line.
For example, I ran the first code like this:
I = imread('berries_copy.png');
grayImage=rgb2gray(I);
img =false(size(grayImage,1), size(grayImage,2));
I wrote the above piece of code just to get a black image for the following operations:
x = [500 470]; % x coordinates
y = [300 310]; % y coordinates
nPoints = max(abs(diff(x)), abs(diff(y)))+1; % Number of points in line
rIndex = round(linspace(y(1), y(2), nPoints)); % Row indices
cIndex = round(linspace(x(1), x(2), nPoints)); % Column indices
index = sub2ind(size(img), rIndex, cIndex); % Linear indices
img(index) = 255; % Set the line points to white
imshow(img); % Display the image
This is the resulting image for the above code as well as the other two, which as you can see, is just a few points on a black background which isn't the desired output.
I changed the code to use the "plot" function instead, and got this output, which is what I want. Is there any way I can change the dotted output into a solid line?
Or, if anyone could suggest a simple function or method that would draw a line from A1 to A2 and actually change the input image, I'd be grateful. (I really hope this is just me being a novice rather than MATLAB not having a simple function to draw a line in an image.)
P.S. I don't have the Computer Vision toolbox and if possible, I'd like to find a solution that doesn't involve it.
Your first problem is that you are creating a blank image the same size as the image you load with this line:
img =false(size(grayImage,1), size(grayImage,2));
When you add the line to it, you get a black image with a white line on it, as expected.
Your second problem is that you are trying to apply a solution for grayscale intensity images to an RGB (Truecolor) image, which requires you to modify the data at the given indices for all three color planes (red, green, and blue). Here's how you can modify the grayscale solution from my other answer:
img = imread('berries_copy.png'); % Load image
[R, C, D] = size(img); % Get dimension sizes, D should be 3
x = [500 470]; % x coordinates
y = [300 310]; % y coordinates
nPoints = max(abs(diff(x)), abs(diff(y)))+1; % Number of points in line
rIndex = round(linspace(y(1), y(2), nPoints)); % Row indices
cIndex = round(linspace(x(1), x(2), nPoints)); % Column indices
index = sub2ind([R C], rIndex, cIndex); % Linear indices
img(index) = 255; % Modify red plane
img(index+R*C) = 255; % Modify green plane
img(index+2*R*C) = 255; % Modify blue plane
imshow(img); % Display image
imwrite(img, 'berries_line.png'); % Save image, if desired
And the resulting image (note the white line above the berry in the bottom right corner):
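As an aside (not part of the original answer), an equivalent way to hit all three colour planes is the three-subscript form of sub2ind, if you prefer explicit plane indices over the +R*C offsets:
% Sketch: same effect as the three plane assignments above
for plane = 1:D
    img(sub2ind([R C D], rIndex, cIndex, plane*ones(1, nPoints))) = 255;
end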
You can use the Bresenham algorithm. Of course it has already been implemented, and you can find it here: Bresenham optimized for Matlab. This algorithm selects the pixels that best approximate a line.
A simple example, using your variable names, could be:
I = rgb2gray(imread('peppers.png'));
A1 = [1 1];
A2 = [40 40];
[x y] = bresenham(A1(1),A1(2),A2(1),A2(2));
ind = sub2ind(size(I),x,y);
I(ind) = 255;
imshow(I)
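If you'd rather not depend on the File Exchange download, a minimal stand-in with the same call pattern could look like the sketch below (a lightly tested hypothetical helper, saved as bresenham_line.m, not the File Exchange implementation):
function [x, y] = bresenham_line(x1, y1, x2, y2)
% Sketch of Bresenham-style rasterization: integer pixels from (x1,y1) to (x2,y2)
x1 = round(x1); y1 = round(y1); x2 = round(x2); y2 = round(y2);
dx = abs(x2 - x1); dy = abs(y2 - y1);
sx = sign(x2 - x1); if sx == 0, sx = 1; end
sy = sign(y2 - y1); if sy == 0, sy = 1; end
err = dx - dy;
cx = x1; cy = y1;
x = x1; y = y1;
while ~(cx == x2 && cy == y2)
    e2 = 2*err;
    if e2 > -dy, err = err - dy; cx = cx + sx; end
    if e2 <  dx, err = err + dx; cy = cy + sy; end
    x(end+1,1) = cx; % grow the output vectors (fine for short lines)
    y(end+1,1) = cy;
end
end
It is then called the same way as in the snippet above: [x y] = bresenham_line(A1(1),A1(2),A2(1),A2(2));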
You can use imshow to display an image, then use plot to draw the lines, and finally save the figure. Check the code below:
I = imread('peppers.png') ;
imshow(I)
hold on
[m,n,p] = size(I) ;
%% Get random points A1, A2, ..., A9
N = 9 ;
x = (n-1)*rand(1,N)+1 ;
y = (m-1)*rand(1,N)+1 ;
P = [x; y]; % coordinates / points
c = mean(P,2); % mean/ central point
d = P-c ; % vectors connecting the central point and the given points
th = atan2(d(2,:),d(1,:)); % angle above x axis
[th, idx] = sort(th); % sorting the angles
P = P(:,idx); % sorting the given points
P = [P P(:,1)]; % add the first at the end to close the polygon
plot( P(1,:), P(2,:), 'k');
saveas(gcf,'image.png')
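If you also need the plotted lines burned back into an image matrix, rather than only a saved figure, one option (a sketch, not part of the answer above) is to grab the rendered axes with getframe:
F = getframe(gca);          % capture the rendered axes, including the plotted polygon
imgWithLines = F.cdata;     % RGB matrix of what is shown on screen
imwrite(imgWithLines, 'image_with_lines.png');
Note that the captured matrix has the on-screen resolution of the axes, not necessarily the size of the original image.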

Speeding up Octave / Matlab plot

Gnoivce and Hartmut helped a lot with this code but it takes a while to run.
The CData property of the bar command doesn't seem to be implemented in the Octave 4.0-4.2.1 versions I'm using. The workaround for this was to plot all the bars individually and set an individual color for each bar. People helped me out and got me this far, but it takes 5 minutes for the plot to show. Does anyone know a way of speeding this up?
The following code runs:
marbles.jpg image file used below:
clear all,clf reset,tic,clc
rgbImage = imread('/tmp/marbles.jpg');
hsvImage = rgb2hsv(rgbImage); % Convert the image to HSV space
hPlane = 360.*hsvImage(:, :, 1); % Get the hue plane scaled from 0 to 360
binEdges = 0:360; %# Edges of histogram bins
N = histc(hPlane(:),binEdges); %# Bin the pixel hues from above
C = colormap(hsv(360)); %# create an HSV color map with 360 points
stepsize = 1; % stepsize 1 runs for a while...
for n=binEdges(2:stepsize:end) %# Plot the histogram, one bar each
if (n==1), hold on, end
h=bar(n,N(n));
set(h,'FaceColor',C(n,:)); %# set the bar color individually
end
axis([0 360 0 max(N)]); %# Change the axes limits
set(gca,'Color','k'); %# Change the axes background color
set(gcf,'Pos',[50 400 560 200]); %# Change the figure size
xlabel('HSV hue (in degrees)'); %# Add an x label
ylabel('Bin counts'); %# Add a y label
fprintf('\nfinally Done-elapsed time -%4.4fsec- or -%4.4fmins- or -%4.4fhours-\n',toc,toc/60,toc/3600);
Plot created after 5 mins:
See the original question for more detail.
I'm guessing the loop is the bottleneck in your code that is taking so long? You could remove the loop and create the plot with one call to bar, then call set to modify the hggroup object and its child patch object:
h = bar(binEdges(1:end-1), N(1:end-1), 'histc'); % hggroup object
set(h, 'FaceColor', 'flat', 'EdgeColor', 'none');
hPatch = get(h, 'Children'); % patch object
set(hPatch, 'CData', 1:360, 'CDataMapping', 'direct');
Repeating your code with this fix renders right away for me in Octave 4.0.3:
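For context, here is a sketch of how that fix could slot into the original script (assuming rgbImage has already been loaded as in the question; it only mirrors the snippet above):
hsvImage = rgb2hsv(rgbImage);            % convert the image to HSV space
hPlane = 360 .* hsvImage(:, :, 1);       % hue scaled from 0 to 360
binEdges = 0:360;
N = histc(hPlane(:), binEdges);          % bin the pixel hues
colormap(hsv(360));                      % colormap used by the 'direct' CData mapping
h = bar(binEdges(1:end-1), N(1:end-1), 'histc');
set(h, 'FaceColor', 'flat', 'EdgeColor', 'none');
hPatch = get(h, 'Children');             % patch object holding the bars
set(hPatch, 'CData', 1:360, 'CDataMapping', 'direct');
axis([0 360 0 max(N)]);
xlabel('HSV hue (in degrees)');
ylabel('Bin counts');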
As I suggested in a comment, I would use image (takes 0.12s on my system for your image).
EDIT: added more comments, fixed a little bug, and allowed bins to be created with a stepsize > 1
img_fn = "17S9PUK.jpg";
if (! exist (img_fn, "file"))
disp ("downloading image from imgur.com...");
fflush (stdout);
urlwrite ("http://i.imgur.com/17S9PUK.jpg", "17S9PUK.jpg");
endif
rgbImage = imread (img_fn);
## for debugging so the matrixes fit on screen
if (0)
pkg load image
rgbImage = imresize (rgbImage, [6 8]);
endif
hsvImage = rgb2hsv(rgbImage);
hPlane = 360 .* hsvImage(:, :, 1);
## create bins; I've chosen a step of 2 to "smooth" the result
binEdges = 1:2:360;
N = histc (hPlane(:), binEdges)';
cm = permute (hsv (numel (binEdges)), [3 1 2]);
## Create an image with x = hue
img = repmat (cm, max(N), 1);
## Create binary mask which is used to black "img" dependent on N
sp = sparse (N(N > 0), (1:360)(N > 0), true, max(N), numel (binEdges));
mask = full (cumsum (flipud (sp)));
## extend mask in depth to suppress RGB
mask = repmat (mask, [1 1 3]);
## use inverted mask to "black out" pixels < N
img(logical (1 - flipud (mask))) = 0;
## show image
image (binEdges, 1:max(N), img)
set (gca, "ydir", "normal");
xlabel('HSV hue (in degrees)');
ylabel('Bin counts');
## print it for stackoverflow
print ("out.png")
Same as above but with bin width 1 (Elapsed time is 0.167423 seconds.)
binEdges = 1:360;

Find the real time co-ordinates of the four points marked in red in the image

To be exact I need the four end points of the road in the image below.
I used find to get [x y], but it does not provide a satisfying result in real time.
I'm assuming the images are already annotated. In this case we just find the marked points and extract coordinates (if you need to find the red points dynamically through code, this won't work at all)
The first thing you have to do is find a good feature to use for segmentation. See my SO answer here what-should-i-use-hsv-hsb-or-rgb-and-why for code and details. That produces the following image:
We can see that saturation (and a few others) are good candidate color spaces. So now you must transfer your image to the new color space and threshold it to find your points.
The points are obtained using MATLAB's regionprops, looking specifically for the centroid. At that point you are done.
Here is the complete code and the results:
im = imread('http://i.stack.imgur.com/eajRb.jpg');
HUE = 1;
SATURATION = 2;
BRIGHTNESS = 3;
%see https://stackoverflow.com/questions/30022377/what-should-i-use-hsv-hsb-or-rgb-and-why/30036455#30036455
ViewColoredSpaces(im)
%convert image to hsv
him = rgb2hsv(im);
%threshold, all rows, all columns,
my_threshold = 0.8; %determined empirically
thresh_sat = him(:,:,SATURATION) > my_threshold;
%remove small blobs using a 3 pixel disk
se = strel('disk',3);
cleaned_sat = imopen(thresh_sat, se);% imopen = imdilate(imerode(im,se),se)
%find the centroids of the remaining blobs
s = regionprops(cleaned_sat, 'centroid');
centroids = cat(1, s.Centroid);
%plot the results
figure();
subplot(2,2,1) ;imshow(thresh_sat) ;title('Thresholded saturation channel')
subplot(2,2,2) ;imshow(cleaned_sat);title('After morphological opening')
subplot(2,2,3:4);imshow(im) ;title('Annotated img')
hold on
for (curr_centroid = 1:1:size(centroids,1))
%prints coordinate
x = round(centroids(curr_centroid,1));
y = round(centroids(curr_centroid,2));
text(x,y,sprintf('[%d,%d]',x,y),'Color','y');
end
%plots centroids
scatter(centroids(:,1),centroids(:,2),[],'y')
hold off
%prints out centroids
centroids
centroids =
7.4593 143.0000
383.0000 87.9911
435.3106 355.9255
494.6491 91.1491
Some sample code would make it much easier to tailor a specific solution to your problem.
One solution to this general problem is using impoint.
Something like
h = figure();
ax = gca;
% ... drawing your image
points = {};
points = [points; {impoint(ax,initialX,initialY)}];
% ... generate more points
indx = 1 % or whatever point you care about
[currentX,currentY] = getPosition(points{indx});
should do the trick.
Edit: First argument of impoint is an axis object, not a figure object.
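If you later need the current positions of all the points at once, a short follow-up sketch (assuming the points cell array built as above):
% Sketch: collect every impoint position into an N-by-2 matrix of [x y] rows
allPositions = zeros(numel(points), 2);
for indx = 1:numel(points)
    allPositions(indx, :) = getPosition(points{indx});
end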

Wrong number of objects being returned

I have this MATLAB code to count the number of objects in the image. There are two objects in the image I am choosing (a car and a cyclist). However, the program is returning the wrong output, saying there are 0 objects. Can someone find the error in the code? Thanks.
The logic behind the code is:
1. Two input images are given, one without objects and one with objects.
2. Convert the input images from RGB to Gray scale.
3. Compare the two images and find the difference.
4. Convert the image obtained to binary.
5. In the image, keep only the blobs whose area is greater than 4000.
6. Display the count and density.
clc;
MV = imread('car.png'); %To read image
MV1 = imread('backgnd.png');
A = double(rgb2gray(MV)); %convert to gray
B= double(rgb2gray(MV1)); %convert 2nd image to gray
[height, width] = size(A); %image size?
h1 = figure(1);
%Foreground Detection
thresh=11;
fr_diff = abs(A-B);
for j = 1:width
for k = 1:height
if (fr_diff(k,j)>thresh)
fg(k,j) = A(k,j);
else
fg(k,j) = 0;
end
end
end
subplot(2,2,1) , imagesc(MV), title ({'Original Frame'});
subplot(2,2,2) , imshow(mat2gray(A)), title ('converted Frame');
subplot(2,2,3) , imshow(mat2gray(B)), title ('BACKGND Frame ');
sd=imadjust(fg); % adjust the image intensity values to the color map
level=graythresh(sd);
m=imnoise(sd,'gaussian',0,0.025); % apply Gaussian noise
k=wiener2(m,[5,5]); %filtering using a Wiener filter
bw=im2bw(k,level);
bw2=imfill(bw,'holes');
bw3 = bwareaopen(bw2,5000);
labeled = bwlabel(bw3,8);
cc=bwconncomp(bw3);
Densityoftraffic = cc.NumObjects/(size(bw3,1)*size(bw3,2));
blobMeasurements = regionprops(labeled,'all');
numberofcars = size(blobMeasurements, 1);
subplot(2,2,4) , imagesc(labeled), title ({'Foreground'});
hold off;
disp(numberofcars); % display number of cars
disp(Densityoftraffic); %display traffic density
An empty image (of a road) with no objects (vehicles) in it
An image of the same road but with 2 objects (car and cyclist) in it
Try this; it should help you in an optimized manner:
clc
clear all
close all
im1 = imread('image1.png');
im2 = imread('image2.png');
gray1 = double(rgb2gray(im1));
gray2 = double(rgb2gray(im2));
absDif = mat2gray(abs(gray1 - gray2));
figure,imshow(absDif,[])
absDfbw = im2bw(absDif,0.9*graythresh(absDif));
figure,imshow(absDfbw,[])
absDfbw = bwareaopen(absDfbw,25);
absDfbw = imclose(absDfbw,strel('disk',5));
figure,imshow(absDfbw,[])
Results are:
Thank You
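If you also want the object count the question asks for, a small follow-up (not part of the answer above) can run bwconncomp on the cleaned mask:
cc = bwconncomp(absDfbw);    % connected components in the cleaned difference mask
disp(cc.NumObjects);         % number of detected objects (the car and the cyclist)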

Accurately detect color regions in an image using K-means clustering

I'm using K-means clustering in color-based image segmentation. I have a 2D image which has 3 colors, black, white, and green. Here is the image,
I want K-means to produce 3 clusters, one represents the green color region, the second one represents the white region, and the last one represents the black region.
Here is the code I used,
%Clustering color regions in an image.
%Step 1: read the image using imread, and show it using imshow.
img = (imread('img.jpg'));
figure, imshow(img), title('X axis rock cut'); %figure is for creating a figure window.
text(size(img,2),size(img,1)+15,...
'Unconventional shale x axis cut', ...
'FontSize',7,'HorizontalAlignment','right');
%Step 2: Convert Image from RGB Color Space to L*a*b* Color Space
conversionform = makecform('srgb2lab'); %the form of the conversion is defined as from rgb to l a b
lab_img = applycform(img,conversionform); %converting the rgb image to l a b image using the conversion form defined above.
%Step 3: Classify the Colors in 'a*b*' Space Using K-Means Clustering
ab = double(lab_img(:,:,2:3));
nrows = size(ab,1);
ncols = size(ab,2);
ab = reshape(ab,nrows*ncols,2);
nColors = 3;
% repeat the clustering 3 times to avoid local minima
[cluster_idx, cluster_center] = kmeans(ab,nColors,'distance','sqEuclidean', ...
'Replicates',3);
%Step 4: Label Every Pixel in the Image Using the Results from KMEANS
%For every object in your input, kmeans returns an index corresponding to a cluster. The cluster_center output from kmeans will be used later in the example. Label every pixel in the image with its cluster_index.
pixel_labels = reshape(cluster_idx,nrows,ncols);
figure, imshow(pixel_labels,[]), title('image labeled by cluster index');
segmented_images = cell(1,3);
rgb_label = repmat(pixel_labels,[1 1 3]);
for k = 1:nColors
color = img;
color(rgb_label ~= k) = 0;
segmented_images{k} = color;
end
figure, imshow(segmented_images{1}), title('objects in cluster 1');
figure, imshow(segmented_images{2}), title('objects in cluster 2');
figure, imshow(segmented_images{3}), title('objects in cluster 3');
But I'm not getting the results as required. I get one cluster with green regions, one cluster with green region boundaries, and one with gray, black, and white colors. Here are the resulting clusters.
The aim of doing this is that after getting the correct clustering results, I want to count the number of pixels in every region using the concept of connected components.
So, my aim is to know how many pixels there are in every color region. I tried another, simpler way by getting the matrix of the 2D image and trying to figure out the number of pixels for every color. However, I found more than 3 RGB colors in the matrix, maybe because pixels of the same color have slightly different levels. That's why I turned to image segmentation.
Can anyone please tell me how to fix the code above in order to get the required results?
I would also appreciate it if you give me hints on how to do this in an easier way, if there is any.
EDIT: Here is code I wrote to iterate over every pixel in the image. Please note that I use 4 colors (red, yellow, blue, and white) instead of green, white, and black, but the idea is the same. rgb2name is a function that returns the color name for a given RGB color.
im= imread ('img.jpg');
[a b c] = size (im);
%disp ([a b]);
yellow=0;
blue=0;
white=0;
red=0;
for i=1:a
for j=1:b
x= impixel(im, i, j)/255 ;
color= rgb2name (x);
if (~isempty (strfind (color, 'yellow')))
yellow= yellow+1;
elseif (~isempty (strfind(color, 'red')))
red= red+1;
elseif (~isempty (strfind (color, 'blue')))
blue= blue+1;
elseif (~isempty (strfind (color, 'white')))
white= white+1;
else
%disp ('warning'); break;
end
disp (color);
disp (i);
end
end
disp (yellow)
disp (red)
disp (blue)
disp (white)
Thank You.
I thought this problem was very interesting, so I apologize ahead of time if the answer is a little overboard. In short, k-means is generally the right strategy for problems where you want to segment an image into a discrete color space. But your example image, which contains primarily only three colors, each well separated in color space, is easily segmented using only a histogram. See below for segmentation using thresholds.
You can easily get the pixel counts by summing each matrix. e.g., bCount = sum(blackPixels(:))
filename = '379NJ.png';
x = imread(filename);
x = double(x); % cast to floating point
x = x/max(max(max(x))); % normalize
% take histogram of green dimension
g = x(:, :, 2);
c = hist(g(:), 2^8);
% smooth the hist count
c = [zeros(1, 10), c, zeros(1, 10)];
N = 4;
for i = N+1:length(c) - N;
d(i - N) = mean(c(i -N:i));
end
d = circshift(d, [1, N/2]);
% as seen in histogram, the three colors fall nicely into 3 peaks
figure, plot(c, '.-');
[~, clusterCenters] = findpeaks(d, 'MinPeakHeight', 1e3);
% set the threshold halfway between peaks
boundaries = [floor((clusterCenters(2) - clusterCenters(1))/2), ...
clusterCenters(2) + floor((clusterCenters(3) - clusterCenters(2))/2)];
thresh1 = boundaries(1)*ones(size(g))/255;
thresh2 = boundaries(2)*ones(size(g))/255;
% categorize based on threshold
blackPixels = g < thresh1;
greenPixels = g >= thresh1 & g < thresh2;
whitePixels = g >= thresh2;
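And, as mentioned above, the per-region pixel counts then come straight from summing the logical masks:
bCount = sum(blackPixels(:));   % black pixels
gCount = sum(greenPixels(:));   % green pixels
wCount = sum(whitePixels(:));   % white pixels
fprintf('black: %d, green: %d, white: %d\n', bCount, gCount, wCount);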
This is my approach to counting the number of pixels in every region. It assumes that (as discussed in the comments):
the values (RGB) and the number (K) of colors are known a priori
compression artifacts and anti-aliasing generated additional colors, which must be assigned to the nearest neighbor among the K known colors.
Since you know the colors a priori, you don't need k-means. It could actually lead to bad results, as in your question. The approach of @crowdedComputeeer takes care of this aspect.
You can compute nearest neighbor with pdist2 directly on the pixel values. There's no need to use the really slow function that looks for the color name.
Here is the code. You can change the number and values of the colors simply by modifying the variable colors. This will compute the number of pixels for each color and output the masks.
img = (imread('path_to_image'));
colors = [ 0 0 0; % black
0 1 0; % green
1 1 1]; % white
% % You can change the colors
% colors = [ 1 0 0; % red
% 1 1 0; % yellow
% 0 0 1; % blue
% 1 1 1]; % white
% Find nearest neighbour color
list = double(reshape(img, [], 3)) / 255;
[~, IDX] = pdist2(colors, list, 'euclidean', 'Smallest', 1);
% IDX contains the indices to the nearest element
N = zeros(size(colors, 1), 1);
for i = 1 : size(colors, 1)
% Count the number of pixels for each color
N(i) = sum( IDX == i );
end
% This will display the number of pixels for each color
disp(N);
% Eventually build the masks
indices = reshape(IDX, [size(img,1), size(img,2)]);
figure();
szc = size(colors,1);
for i = 1 : szc
subplot(1,szc,i);
imagesc(indices == i);
end
Resulting counts:
97554 % black
16894 % green
31852 % white
Resulting masks:
Maybe this project could help; please give it a try.