Select second largest area in image - matlab

I use bwareaopen to remove small objects. Is there a function to remove the big objects? I'm trying to adapt bwareaopen, but I haven't been successful so far. Thanks.
For reference, see the MATLAB help for bwareaopen.

I found an easy way to tackle this problem described here:
"To keep only objects between, say, 30 pixels and 50 pixels in area, you can use the BWAREAOPEN command, like this:"
LB = 30;   % lower area bound (pixels)
UB = 50;   % upper area bound (pixels)
% bwareaopen(I,LB) keeps objects with at least LB pixels; XOR-ing with the
% result for UB leaves only the objects whose area lies in [LB, UB).
Iout = xor(bwareaopen(I,LB), bwareaopen(I,UB));

Another way, if you don't want to use bwareaopen, is to use regionprops, specifically with the Area and PixelIdxList attributes. Area gives the total area of each shape, while PixelIdxList gives the column-major linear indices of the pixels that belong to each shape. Use Area to filter out the elements outside the area range you want, then use the PixelIdxList of the remaining elements to create a new output image, setting those locations to true:
% Specify lower and upper bounds
LB = 30;
UB = 50;
% Run regionprops
s = regionprops(I, 'Area', 'PixelIdxList');
% Get all of the areas for each shape
areas = [s.Area];
% Remove elements from output of regionprops
% that are not within the range
s = s(areas >= LB & areas <= UB);
% Get the column-major locations of the shapes
% that have passed the check
idx = {s.PixelIdxList};
idx = cat(1, idx{:});
% Create an output image with the passed shapes
Iout = false(size(I));
Iout(idx) = true;
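The question title asks specifically for the second largest area; the same regionprops idea covers that if you sort the areas. A minimal sketch, assuming I is a binary image containing at least two objects:
s = regionprops(I, 'Area', 'PixelIdxList');
[~, order] = sort([s.Area], 'descend'); % rank objects by area, largest first
Iout = false(size(I));
Iout(s(order(2)).PixelIdxList) = true; % keep only the second largest object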

How can I traverse through pixels?

Suppose I have an image, and I have marked some pixels of it, which gives me a pixel mask.
How can I traverse through only those pixels that are in that mask?
Given a binary mask, mask, where you want to iterate over all the true pixels in mask, you have at least two options that are both better than the double for loop example.
1) Logical indexing.
I(mask) = 255;
2) Use find.
linearIdx = find(mask);
I(linearIdx) = 255;
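If you need the row and column of each masked pixel, for example to pass them to another function, find can also return subscripts; a minimal sketch:
[rows, cols] = find(mask); % row/column subscripts of every true pixel in mask
for k = 1:numel(rows)
    I(rows(k), cols(k)) = 255; % visit one masked pixel at a time
end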
The original question:
How can I save only those pixels which I am interested in?
...
Question: Now, in Step #2, I want to save those pixels in a data structure (or whatever) d so that I can apply another function f2(I, d, p, q, r) which does something to the image on the basis of those pixels d.
Create a binary mask
Try using a logical mask of the image to keep track of the pixels of interest.
I'll make up a random image for example here:
randImg = rand(64,64,3);
imgMask = false(size(randImg(:,:,1)));
imgMask(:,1:4:end) = true; % take every fourth column; this would be your d
% Show what we are talking about
maskImg = zeros(size(randImg));
imgMaskForRGB = repmat(imgMask,1,1,3);
maskImg(imgMaskForRGB) = randImg(imgMaskForRGB);
figure('name','Psychadelic');
subplot(2,1,1);
imagesc(randImg);
title('Random image');
subplot(2,1,2);
imagesc(maskImg);
title('Masked pixels of interest');
The resulting figure (not reproduced here) shows the random image in the top subplot and the masked pixels of interest in the bottom subplot.
It will be up to you to determine how to store and use the image mask (d in your case), as I am not sure how your functions are written. Hopefully this example gives you an idea of how it can be done.
EDIT
You added a second question since I posted:
But, now the problem is, how am I going to traverse through those pixels in K?
Vectorization
To set all the pixels of interest to white:
randImg(imgMaskForRGB) = 1; % white for a double image in [0,1]; use 255 for uint8 images
In my example, I accessed all of the pixels of interest at the same time with my mask in a vectorized fashion.
I translated my 2D mask into a 3D mask, in order to grab the RGB values of each pixel. That was this code:
maskImg = zeros(size(randImg));
imgMaskForRGB = repmat(imgMask,1,1,3);
Then to access all of these pixels in the image of interest, I used this call:
randImg(imgMaskForRGB)
These are your pixels of interest. If you want to halve these values, you could do something like this:
randImg(imgMaskForRGB) = randImg(imgMaskForRGB)/2;
Loops
If you really want to traverse one pixel at a time, you can always use a double for loop:
for r = 1:size(randImg,1)
    for c = 1:size(randImg,2)
        if imgMask(r,c) % traverse all the pixels
            curPixel = randImg(r,c,:); % grab the ones that are flagged
        end
    end
end
Okay, I have solved this using the answer of @informaton:
I = imread('gray_bear.png');
J = rgb2gray(imread('marked_bear.png'));
mask = I - J;
for r = 1:size(I,1)
    for c = 1:size(I,2)
        if mask(r,c)
            I(r,c) = 255;
        end
    end
end
imshow(I);
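For reference, the double loop can be replaced by the logical indexing shown earlier; a minimal sketch, reusing I and the mask computed above:
I(mask ~= 0) = 255; % any nonzero difference means the pixel was marked
imshow(I);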

Object detection based on CNN in matlab [duplicate]

I'm trying to perform object detection with RCNN on my own dataset, following the tutorial on the MATLAB webpage. Based on the training-data table shown there, I'm supposed to put image paths in the first column and the bounding box of each object in the following columns. But in each of my images there is more than one object of each kind; for example, there are 20 vehicles in one image. How should I deal with that? Should I create a separate row for each instance of vehicle in an image?
The example found on the website finds the pixel neighbourhood with the largest score and draws a bounding box around that region in the image. When you have multiple objects, that complicates things. There are two approaches you can use to facilitate finding multiple objects:
1) Find all bounding boxes with scores that surpass some global threshold.
2) Find the bounding box with the largest score, then keep the bounding boxes whose scores surpass some percentage of that maximum. The percentage is arbitrary, but from what I have seen in practice, people tend to choose between 80% and 95% of the largest score found in the image. This will of course give you false positives if you submit a query image containing objects the classifier was not trained to detect, so you will have to implement some post-processing logic on your end.
An alternative approach would be to choose some value k and display the top k bounding boxes associated with the k highest scores. This of course requires that you know the value of k beforehand, and like the second approach it always assumes that an object has been found in the image.
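A minimal sketch of that top-k alternative, assuming the bboxes, score and label outputs of the detect call shown further below:
k = 5; % number of detections to keep
[~, order] = sort(score, 'descend'); % rank all detections by score
keep = order(1:min(k, numel(score))); % guard against fewer than k detections
topBoxes = bboxes(keep, :);
topScores = score(keep);
topLabels = label(keep);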
In addition to the above logic, the approach that you state, creating a separate row for each instance of a vehicle in the image, is correct. This means that if you have multiple instances of an object in a single image, you introduce one row per instance while keeping the image filename the same. Therefore, if you had for example 20 vehicles in one image, you would create 20 rows in your table where the filename is the same for each row and each row carries a single bounding box specification for one distinct object in that image.
Once you have done this, and assuming that you have already trained the R-CNN detector and want to use it, the original code to detect objects from the website is the following:
% Read test image
testImage = imread('stopSignTest.jpg');
% Detect stop signs
[bboxes, score, label] = detect(rcnn, testImage, 'MiniBatchSize', 128)
% Display the detection results
[score, idx] = max(score);
bbox = bboxes(idx, :);
annotation = sprintf('%s: (Confidence = %f)', label(idx), score);
outputImage = insertObjectAnnotation(testImage, 'rectangle', bbox, annotation);
figure
imshow(outputImage)
This only works for one object which has the highest score. If you wanted to do this for multiple objects, you would use the score that is output from the detect method and find those locations that either accommodate situation 1 or situation 2.
If you had situation 1, you would modify it to look like the following.
% Read test image
testImage = imread('stopSignTest.jpg');
% Detect stop signs
[bboxes, score, label] = detect(rcnn, testImage, 'MiniBatchSize', 128)
% New - Find those bounding boxes that surpassed a threshold
T = 0.7; % Define threshold here
idx = score >= T;
% Retrieve those scores that surpassed the threshold
s = score(idx);
% Do the same for the labels as well
lbl = label(idx);
bbox = bboxes(idx, :); % This logic doesn't change
% New - Loop through each box and print out its confidence on the image
outputImage = testImage; % Make a copy of the test image to write to
for ii = 1 : size(bbox, 1)
    annotation = sprintf('%s: (Confidence = %f)', lbl(ii), s(ii)); % Change
    outputImage = insertObjectAnnotation(outputImage, 'rectangle', bbox(ii,:), annotation); % New - Choose the right box
end
figure
imshow(outputImage)
Note that I've kept the original bounding boxes, labels and scores in their original variables, while storing the subset that surpassed the threshold in separate variables, in case you want to cross-reference between the two. If you wanted to accommodate situation 2, the code remains the same as for situation 1 with the exception of how the threshold is defined.
The code from:
% New - Find those bounding boxes that surpassed a threshold
T = 0.7; % Define threshold here
idx = score >= T;
% [score, idx] = max(score);
... would now change to:
% New - Find those bounding boxes that surpassed a threshold
perc = 0.85; % keep boxes scoring at least 85% of the maximum score
T = perc * max(score); % Define threshold here
idx = score >= T;
The end result will be multiple bounding boxes of the detected objects in the image - one annotation per detected object.
I think you actually have to put all of the coordinates for that image as a single entry in your training data table. See this MATLAB tutorial for details. If you load the training data into MATLAB locally and check the vehicleDataset variable, you will see exactly this layout.
To summarize, in your training data table make sure you have one unique entry for each image, and put all of that image's bounding boxes for a given category into a single matrix, where each row is in the format [x, y, width, height].
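A minimal sketch of that layout (hypothetical file names and made-up coordinates), with one row per image and all of that image's boxes stacked as rows of an M-by-4 [x, y, width, height] matrix:
imageFilename = {'image1.jpg'; 'image2.jpg'};
vehicle = cell(2,1);
vehicle{1} = [100 80 50 40; 200 120 60 45]; % two vehicles in image1.jpg
vehicle{2} = [50 60 40 30];                 % one vehicle in image2.jpg
trainingData = table(imageFilename, vehicle);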

Matlab Shape Labelling

I have a set of shapes in an image that I would like to label according to their area. I have used bwboundaries to find them and regionprops to determine their area. I would like to label them differently depending on whether their area is above or below a threshold I have determined.
I've thought about using insertObjectAnnotation, but I'm not sure how to add a condition based on their area into the function.
Assuming TH to be the threshold area and BW to be the binary image, and if you are okay with labeling the shapes as 'L' (large) or 'S' (small) with MATLAB figure text at their centers (centroids, to be exact) based on the thresholding, see if this satisfies your needs -
stats = regionprops(BW,'Area');
stats2 = regionprops(BW,'Centroid');
figure, imshow(BW)
for k = 1:numel(stats)
    xy = stats2(k).Centroid;
    if stats(k).Area > TH
        text(xy(1),xy(2),'L') %// Large Shape
    else
        text(xy(1),xy(2),'S') %// Small Shape
    end
end
You could use CC = bwconncomp(BW,conn), where conn is the desired pixel connectivity (omit it to use the default of 8 for 2-D images).
To get the number of pixels of every connected compontent you can use:
numPixels = cellfun(@numel, CC.PixelIdxList);
In CC.PixelIdxList you have one entry per connected component found, holding the linear indices of the pixels that belong to that component. I guess to label your areas you could do something like:
Image = zeros(size(BW)); % initialise the label image
for ind = 1:CC.NumObjects
    Image(CC.PixelIdxList{ind}) = ind;
end
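Combining this with the area threshold from the question, here is a minimal sketch (assuming BW is the binary image and TH the threshold area) that labels shapes below the threshold as 1 and shapes above it as 2:
CC = bwconncomp(BW);
numPixels = cellfun(@numel, CC.PixelIdxList);
labelImg = zeros(size(BW));
for ind = 1:CC.NumObjects
    if numPixels(ind) > TH
        labelImg(CC.PixelIdxList{ind}) = 2; % large shapes
    else
        labelImg(CC.PixelIdxList{ind}) = 1; % small shapes
    end
end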

MATLAB: image corner coordinates & referencing to cell arrays

I am having some problems comparing the elements in different cell arrays.
The context of this problem is that I am using the bwboundaries function in MATLAB to trace the outline of an image. The image is of a structural cross section and I am trying to find if there is continuity throughout the section (i.e. there is only one outline produced by the bwboundaries command).
Having done this and found where there is more than one section traced (i.e. it is not continuous), I have used the cornermetric command to find the corners of each section.
The code I have is:
%% Define the structural section as a binary matrix (Image is an I-section with the web broken)
bw(20:40,50:150) = 1;
bw(160:180,50:150) = 1;
bw(20:60,95:105) = 1;
bw(140:180,95:105) = 1;
Trace = bw;
[B] = bwboundaries(Trace,'noholes'); % Traces the outer boundary of each section
L = length(B); % Finds number of boundaries
if L > 1
    disp('Multiple boundaries') % States whether more than one boundary found
end
%% Obtain perimeter coordinates
for k = 1:length(B) % For all the boundaries
    perim = B{k}; % Obtains perimeter coordinates (as a 2D matrix) from the cell array
end
%% Find the corner positions
C = cornermetric(bw);
Areacorners = find(C == max(max(C))) % Finds the linear indices of the corner pixels
[rowindexcorners,colindexcorners] = ind2sub(size(Newgeometry),Areacorners)
% Convert corner coordinate indices into subscripts, to give x & y coordinates (i.e. the same format as B gives)
%% Put these corner coordinates into a cell array
Cornerscellarray = cell(length(rowindexcorners),1); % Initialises the cell array
for i = 1:numel(rowindexcorners)
    Cornerscellarray(i) = {[rowindexcorners(i) colindexcorners(i)]};
    % Assigns the corner indices into the cell array
    % This is done so the cell arrays can be compared
end
for k = 1:length(B) % For all the boundaries found
    perim = B{k}; % Obtains coordinates for each perimeter
    Z = perim; % Initialise the matrix containing the perimeter corners
    Sectioncellmatrix = cell(length(rowindexcorners),1);
    for i = 1:length(perim)
        Sectioncellmatrix(i) = {[perim(i,1) perim(i,2)]};
    end
    for i = 1:length(perim)
        if Sectioncellmatrix(i) ~= Cornerscellarray
            Sectioncellmatrix(i) = [];
            % Gets rid of the elements that are not corners, but keeps them associated with the relevant section
        end
    end
end
This creates an error in the last for loop. Is there a way I can check whether each cell of the array (containing an x and y coordinate) is equal to any pair of coordinates in Cornerscellarray? I know it is possible with matrices to compare whether a certain element matches any of the elements in another matrix. I want to be able to do the same here, but for the pair of coordinates within the cell array.
The reason I don't just use the Cornerscellarray cell array itself is that it lists all the corner coordinates and does not associate them with a specific traced boundary.
Many-to-many comparisons cannot be done with the equal sign. You need to use ismember instead.
%# catenate all corners in one big corner array
Cornerscellarray = cat(1,Cornerscellarray{:});
%# loop through each section cell and remove all that is not corners
for i = 1:length(perim)
    %# check for corners
    cornerIdx = ismember(Sectioncellmatrix{i},Cornerscellarray,'rows');
    %# only keep good entries
    Sectioncellmatrix{i} = Sectioncellmatrix{i}(cornerIdx,:);
end
Also, this code really looks like it could be optimized a bit. For example, you could use bwlabel to label your features, and then read the label at each corner coordinate to associate the corners with the features.
Like so:
bw(20:40,50:150) = 1;
bw(160:180,50:150) = 1;
bw(20:60,95:105) = 1;
bw(140:180,95:105) = 1;
%# get corners
cornerProbability = cornermetric(bw);
cornerIdx = find(cornerProbability==max(cornerProbability(:)));
%# Label the image. bwlabel puts 1 for the first feature, 2 for the second, etc.
%# Since concave corners are placed just outside the feature, grow the features
%# a little before labeling
bw2 = imdilate(bw,ones(3));
labeledImage = bwlabel(bw2);
%# read the feature number associated with the corner
cornerLabels = labeledImage(cornerIdx);
%# find all corners that are associated with feature 1
corners_1 = cornerIdx(cornerLabels==1);
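Extending that last step, a small sketch (reusing labeledImage, cornerIdx and cornerLabels from the snippet above) that collects the corners of every feature and converts the corners of feature 1 back to row/column subscripts:
numFeatures = max(labeledImage(:));
cornersPerFeature = cell(numFeatures,1);
for k = 1:numFeatures
    cornersPerFeature{k} = cornerIdx(cornerLabels == k); % linear indices of feature k's corners
end
[rows1, cols1] = ind2sub(size(bw), cornersPerFeature{1}); % subscripts for feature 1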