Applying adaptive thresholding on Canny edge detection - image-segmentation

I want to remove the blurred background of images in my project dataset, and I already have a pretty nice solution for it here using Canny edge detection. Now I want to make the two fixed threshold values that Canny requires adaptive. I would appreciate any help with this.
import glob
import cv2
import numpy as np
from PIL import Image

imageNames = glob.glob(r"C:\Users\Bikir\Pictures\rTest\*.jpg")
count = 0
for i in imageNames:
    img = Image.open(i)
    img = np.array(img)
    # grayscale (PIL loads images in RGB order, so convert from RGB, not BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # canny - I want these two values (0 and 150) to be adaptive in this case
    canned = cv2.Canny(gray, 0, 150)
    # dilate to close holes in lines
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(canned, kernel, iterations=1)
    # find contours
    # OpenCV 3.x returns three values here; in 2.x and 4.x findContours
    # returns two, so drop the first underscore
    _, contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # find the biggest contour
    biggest_cntr = None
    biggest_area = 0
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > biggest_area:
            biggest_area = area
            biggest_cntr = contour
    # draw contours
    crop_mask = np.zeros_like(mask)
    cv2.drawContours(crop_mask, [biggest_cntr], -1, 255, -1)
    # opening + median blur to smooth jaggies
    crop_mask = cv2.erode(crop_mask, kernel, iterations=5)
    crop_mask = cv2.dilate(crop_mask, kernel, iterations=5)
    crop_mask = cv2.medianBlur(crop_mask, 21)
    # crop image
    crop = np.zeros_like(img)
    crop[crop_mask == 255] = img[crop_mask == 255]
    img = Image.fromarray(crop)
    img.save(r"C:\Users\Bikir\Pictures\removed\\" + str(count) + ".jpg")
    count += 1
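One common way to make those two values adaptive (a general technique, not something from the snippet above) is to derive both thresholds from the median intensity of each image, the so-called auto-Canny trick; sigma is a tunable width parameter:

import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    # Derive the Canny thresholds from the median brightness of the image.
    # sigma = 0.33 is a common starting point, not a constant that is
    # guaranteed to suit every dataset.
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)

With that in place, canned = cv2.Canny(gray, 0, 150) becomes canned = auto_canny(gray). An alternative is Otsu's method: high, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) returns the Otsu threshold as high, and high / 2 is a reasonable lower bound for Canny.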

Related

How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images:
- The image of the background
- The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV, like cv2.BackgroundSubtractorMOG() and cv2.BackgroundSubtractorMOG2(), only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some light-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background, even with a less light-sensitive color channel. This is likely due to the camera's auto white balance: you can see that some of the background colors change when you step into view.
The next step I took was thresholding this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob, and I filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works for any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensure good contrast of the person against the wall then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np

# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig)
    img = np.multiply(img, new)
    img = img.astype(np.uint8)
    return img

# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg
    rh = fg - bg
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh)
    return low

# load image
bg = cv2.imread("back.jpg")
fg = cv2.imread("person.jpg")
fg_original = fg.copy()

# blur
bg = cv2.blur(bg, (5, 5))
fg = cv2.blur(fg, (5, 5))

# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB)
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB)
bl, ba, bb = cv2.split(bg_lab)
fl, fa, fb = cv2.split(fg_lab)

# subtract
d_b = diff(bb, fb)
d_a = diff(ba, fa)

# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255)
d_a = rescale(d_a, np.max(d_a), 255)

# combine
combined = np.maximum(d_b, d_a)

# threshold
# check your threshold range; this will work for this image,
# but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255)

# opening and closing
kernel = np.ones((3, 3), np.uint8)

# closing
thresh = cv2.dilate(thresh, kernel, iterations=2)
thresh = cv2.erode(thresh, kernel, iterations=2)

# opening
thresh = cv2.erode(thresh, kernel, iterations=2)
thresh = cv2.dilate(thresh, kernel, iterations=3)

# contours (OpenCV 3.x; in 2.x and 4.x drop the first underscore)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# filter contours by size
big_cntrs = []
marked = fg_original.copy()
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 15000:
        print(area)
        big_cntrs.append(contour)
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3)

# create a mask of the contoured image
mask = np.zeros_like(fb)
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1)

# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations=1)

# crop out: extract the object and place it into the output image
out = np.zeros_like(fg_original)
out[mask == 255] = fg_original[mask == 255]

# show
cv2.imshow("combined", combined)
cv2.imshow("thresh", thresh)
cv2.imshow("marked", marked)
# cv2.imshow("masked", mask)
cv2.imshow("out", out)
cv2.waitKey(0)
Since it is very easy to find datasets containing many human bodies, I suggest implementing a neural-network segmentation technique to extract the human body cleanly. Please check this link for a similar example.
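For readers who want to try that route, here is a minimal sketch using torchvision's pretrained DeepLabV3 (the model, file name, and class index are my choices for illustration, not something from either answer). Class 15 is 'person' in the Pascal VOC label set the model was trained on:

import numpy as np
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Load a pretrained segmentation model (torchvision >= 0.13;
# older versions use pretrained=True instead of weights=).
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

img = Image.open("person.jpg").convert("RGB")  # hypothetical file name
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = preprocess(img).unsqueeze(0)

with torch.no_grad():
    out = model(x)["out"][0]  # (21, H, W) per-class scores
labels = out.argmax(0).numpy()
person_mask = (labels == 15).astype(np.uint8) * 255  # 15 = 'person' in VOC

# Apply the mask the same way as the contour mask above.
rgb = np.array(img)
cut = np.zeros_like(rgb)
cut[person_mask == 255] = rgb[person_mask == 255]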

Best way to isolate rectangular object

I have the following image and I would like to segment the rectangular object in the middle. I implemented the following code to segment it, but I cannot isolate the object. What functions or approaches can I take to isolate the rectangular object in the image?
im = imread('image.jpg');
% convert image to grayscale and HSV
imHSV = rgb2hsv(im);
imGray = rgb2gray(im);
imSat = imHSV(:,:,2);
imHue = imHSV(:,:,1);
imVal = imHSV(:,:,3);
background = imopen(im,strel('disk',15));
I2 = im - background;
% detect edge using sobel algorithm
[~, threshold] = edge(imGray, 'sobel');
fudgeFactor = .5;
imEdge = edge(imGray,'sobel', threshold * fudgeFactor);
%figure, imshow(imEdge);
% split image into colour channels
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
% convert image to binary image (using thresholding)
imBlobs = and((imSat < 0.6),(imHue < 0.6));
imBlobs = and(imBlobs, ((redIM + greenIM + blueIM) > 150));
imBlobs = imfill(~imBlobs,4);
imBlobs = bwareaopen(imBlobs,50);
figure,imshow(imBlobs);
In this example, you can leverage the fact that the rectangle contains blue in all of its corners in order to build a good initial mask:
1. Use thresholding to locate the blue pixels in the image and create an initial mask.
2. Given this initial mask, find its corners using min and max operations.
3. Connect the corners with lines in order to obtain a rectangle.
4. Fill the rectangle using imfill.
Code example:
% convert image to binary image (using thresholding)
redIM = im(:,:,1);
greenIM = im(:,:,2);
blueIM = im(:,:,3);
mask = blueIM > redIM*2 & blueIM > greenIM*2;
%noise cleaning
mask = imopen(mask,strel('disk',3));
%find the corners of the rectangle
[Y, X] = ind2sub(size(mask),find(mask));
minYCoords = find(Y==min(Y));
maxYCoords = find(Y==max(Y));
minXCoords = find(X==min(X));
maxXCoords = find(X==max(X));
%top corners
topRightInd = find(X(minYCoords)==max(X(minYCoords)),1,'last');
topLeftInd = find(Y(minXCoords)==min(Y(minXCoords)),1,'last');
p1 = [Y(minYCoords(topRightInd)) X((minYCoords(topRightInd)))];
p2 = [Y(minXCoords(topLeftInd)) X((minXCoords(topLeftInd)))];
%bottom corners
bottomRightInd = find(Y(maxXCoords)==max(Y(maxXCoords)),1,'last');
bottomLeftInd = find(X(minYCoords)==min(X(minYCoords)),1,'last');
p3 = [Y(maxXCoords(bottomRightInd)) X((maxXCoords(bottomRightInd)))];
p4 = [Y(maxYCoords(bottomLeftInd)) X((maxYCoords(bottomLeftInd)))];
%connect between the corners with lines
l1Inds = drawline(p1,p2,size(mask));
l2Inds = drawline(p3,p4,size(mask));
maskOut = mask;
maskOut([l1Inds,l2Inds]) = 1;
%fill the rectangle which was created
midP = ceil((p1+p2+p3+p4)./4);
maskOut = imfill(maskOut,midP);
%present the final result
figure,imshow(maskOut);
Final Result:
Intermediate results (1: after thresholding, 2: after adding the lines):
* The drawline function is taken from the drawline webpage.

Find a nearly circular band of bright pixels in this image

This is the problem I have: I have an image as shown below. I want to detect the circular region which I have marked with a red line for display here (that particular bright ring).
This is what I am doing for now (MATLAB):
binaryImage = imdilate(binaryImage,strel('disk',5));
binaryImage = imfill(binaryImage, 'holes'); % Fill holes.
binaryImage = bwareaopen(binaryImage, 20000); % Remove small blobs.
binaryImage = imerode(binaryImage,strel('disk',300));
out = binaryImage;
img_display = immultiply(binaryImage,rgb2gray(J1));
figure, imshow(img_display);
The output seems to be cut off on part of the object (for a different input image, not the one displayed above). I want the output to be symmetric (it's not always a perfect circle when the object is rotated).
I want to strictly avoid im2bw since as soon as I binarize, I lose a lot of information about the shape.
This is what I was thinking of:
I can detect the outermost, almost circular contour of the image (shown in yellow). From this, I can find the centroid and maybe fit a circle with 50% of that radius (to locate the region shown in red). But this won't be exactly symmetric, since the object is slightly tilted. How can I tackle this issue?
I have attached another image where the object is slightly tilted here
I'd try messing around with the 'log' (Laplacian of Gaussian) filter. The region you want is essentially where the 2nd-order derivative is low (i.e. where the slope is decreasing), and you can detect those regions by applying a LoG filter and keeping the negative values. Here's a very basic outline of what you can do; tweak it to your needs.
img = im2double(rgb2gray(imread('wheel.png')));
img = imresize(img, 0.25, 'bicubic');
filt_img = imfilter(img, fspecial('log',31,5));
bin_img = filt_img < 0;
subplot(2,2,1);
imshow(filt_img,[]);
% Get regionprops
rp = regionprops(bin_img,'EulerNumber','Eccentricity','Area','PixelIdxList','PixelList');
rp = rp([rp.EulerNumber] == 0 & [rp.Eccentricity] < 0.5 & [rp.Area] > 2000);
bin_img(:) = false;
bin_img(vertcat(rp.PixelIdxList)) = true;
subplot(2,2,2);
imshow(bin_img,[]);
bin_img(:) = false;
bin_img(rp(1).PixelIdxList) = true;
bin_img = imfill(bin_img,'holes');
img_new = img;
img_new(~bin_img) = 0;
subplot(2,2,3);
imshow(img_new,[]);
bin_img(:) = false;
bin_img(rp(2).PixelIdxList) = true;
bin_img = imfill(bin_img,'holes');
img_new = img;
img_new(~bin_img) = 0;
subplot(2,2,4);
imshow(img_new,[]);
Output:

Stretching an ellipse in an image to form a circle

I want to stretch an elliptical object in an image until it forms a circle. My program currently inputs an image with an elliptical object (eg. coin at an angle), thresholds and binarizes it, isolates the region of interest using edge-detect/bwboundaries(), and performs regionprops() to calculate major/minor axis lengths.
Essentially, I want to use the 'MajorAxisLength' as the diameter and stretch the object along the minor axis to form a circle. Any suggestions on how I should approach this would be greatly appreciated. I have appended some code for your perusal (unfortunately I don't have enough reputation to upload an image; the binarized image looks like a white ellipse on a black background).
EDIT: I'd also like to apply this technique to the gray-scale version of the image, to examine what the stretch looks like.
code snippet:
rgbImage = imread(fullFileName);
redChannel = rgbImage(:, :, 1);
binaryImage = redChannel < 90;
labeledImage = bwlabel(binaryImage);
area_measurements = regionprops(labeledImage,'Area');
allAreas = [area_measurements.Area];
biggestBlobIndex = find(allAreas == max(allAreas));
keeperBlobsImage = ismember(labeledImage, biggestBlobIndex);
measurements = regionprops(keeperBlobsImage,'Area','MajorAxisLength','MinorAxisLength')
You know the diameter of the circle, and you know the center is the location where the major and minor axes intersect. Thus, just compute the radius r from the diameter, and for every pixel in your image, check whether that pixel's Euclidean distance from the circle's center is at most r. If so, color the pixel white; otherwise, leave it alone.
[M,N] = size(redChannel);
new_image = zeros(M,N);
for ii = 1:M
    for jj = 1:N
        if( sqrt((jj-center_x)^2 + (ii-center_y)^2) <= radius )
            new_image(ii,jj) = 1.0;
        end
    end
end
This can probably be optimized by using the meshgrid function combined with logical indexing to avoid the loops; a sketch of that idea follows.
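For anyone working the same problem in Python instead of MATLAB, the vectorized version of the same test looks like this in NumPy (a sketch; the image size, center, and radius are placeholder values standing in for your own measurements):

import numpy as np

# Placeholder values; in practice these come from your image size and
# the regionprops measurements, as in the MATLAB snippet above.
M, N = 480, 640
center_x, center_y, radius = 320.0, 240.0, 100.0

# Build coordinate grids once, then test every pixel at the same time
# with a logical mask instead of a double loop.
jj, ii = np.meshgrid(np.arange(N), np.arange(M))
inside = (jj - center_x) ** 2 + (ii - center_y) ** 2 <= radius ** 2
new_image = inside.astype(float)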
I finally managed to figure out the required transform, thanks to a lot of help on the MATLAB forums. I thought I'd post it here in case anyone else needs it.
stats = regionprops(keeperBlobsImage, 'MajorAxisLength','MinorAxisLength','Centroid','Orientation');
% rotation aligning the ellipse's axes with the image axes
alpha = pi/180 * stats(1).Orientation;
Q = [cos(alpha), -sin(alpha); sin(alpha), cos(alpha)];
x0 = stats(1).Centroid.';
a = stats(1).MajorAxisLength;
b = stats(1).MinorAxisLength;
% scale one axis by a/b so both axes end up with the major length
S = diag([1, a/b]);
% rotate into the ellipse's frame, stretch, rotate back
C = Q*S*Q';
% translation that keeps the centroid fixed under x -> C*x + d
d = (eye(2) - C)*x0;
tform = maketform('affine', [C d; 0 0 1]');
Im2 = imtransform(redChannel, tform);
subplot(2, 3, 5);
imshow(Im2);

How to provide region of interest (ROI) for edge detection and corner detection in Matlab?

I have a movie file, in which I am interested in recording the movement of a point; center of a circular feature to be specific. I am trying to perform this using edge detection and corner detection techniques in Matlab.
To perform this, how do I specify a region of interest in the video? Is subplot a good idea?
I was trying to do this using binary masks, as below:
hVideoSrc = vision.VideoFileReader('video.avi','ImageColorSpace', 'Intensity');
hEdge = vision.EdgeDetector('Method', 'Prewitt','ThresholdSource', 'Property','Threshold', 15/256, 'EdgeThinning', true);
hAB = vision.AlphaBlender('Operation', 'Highlight selected pixels');
WindowSize = [190 150];
hVideoOrig = vision.VideoPlayer('Name', 'Original');
hVideoOrig.Position = [10 hVideoOrig.Position(2) WindowSize];
hVideoEdges = vision.VideoPlayer('Name', 'Edges');
hVideoEdges.Position = [210 hVideoOrig.Position(2) WindowSize];
hVideoOverlay = vision.VideoPlayer('Name', 'Overlay');
hVideoOverlay.Position = [410 hVideoOrig.Position(2) WindowSize];
c = [123 123 170 170];
r = [160 210 210 160];
m = 480; % height of pout image
n = 720; % width of pout image
BW = ~poly2mask(c,r,m,n);
while ~isDone(hVideoSrc)
    dummy_frame = step(hVideoSrc) > 0.5; % Read input video
    frame = dummy_frame - BW;
    edges = step(hEdge, frame);
    composite = step(hAB, frame, edges); % AlphaBlender
    step(hVideoOrig, frame);             % Display original
    step(hVideoEdges, edges);            % Display edges
    step(hVideoOverlay, composite);      % Display edges overlaid
end
release(hVideoSrc);
but it turns out that the mask applied to the frame is only good for the original video. The edge detection algorithm still detects the edges that are masked by the binary mask. How can I mask out the other features permanently and then perform edge detection?
Is this what you mean? Multiply by the (non-inverted) mask instead of subtracting the inverted one, so that everything outside the polygon is zeroed out before edge detection:
BW = poly2mask(c,r,m,n);
frame = dummy_frame .* BW;
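If you are doing the same thing in Python/OpenCV rather than MATLAB, the equivalent ROI masking might look like this (a sketch; the frame here is a placeholder standing in for a frame from your video reader):

import cv2
import numpy as np

# Placeholder frame; in practice this comes from your video reader.
frame = np.zeros((480, 720), dtype=np.float32)

# The same polygon as above: poly2mask takes column (x) and row (y)
# coordinates, while OpenCV expects (x, y) point pairs.
c = [123, 123, 170, 170]
r = [160, 210, 210, 160]
pts = np.array(list(zip(c, r)), dtype=np.int32)

# Build the ROI mask and zero out everything outside it.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [pts], 255)
roi = frame.copy()
roi[mask == 0] = 0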