Smooth and fit edge of binary images - MATLAB

I am working on research into the swimming of fish using video analysis, so I need to treat the images (obtained from video frames) carefully, with emphasis on the tail.
The images are high-resolution, and the software I am customizing works with binary images, because math operations are easy to apply to them.
To obtain these binary images I use two methods:
1) Convert the image to grayscale, invert the colors, and finally binarize with a threshold. This gives me images like the one below, with almost no noise, but they sometimes lose a bit of area and are not very accurate around the tail (and I now need more accuracy to determine the amplitude of the tail movements).
image 1
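In code, method 1 is roughly this (the frame file name and the threshold value here are just placeholders):
s = imread('frame.bmp'); % one video frame (placeholder name)
g = rgb2gray(s); % convert to grayscale
g = imcomplement(g); % invert the colors
bw = im2bw(g, 0.5); % binarize with a tuned threshold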
2) I use the code below to crop the border, which lets me increase the threshold. This gives me a good image of the edge, but I don't know how to join these points and smooth the image, or how to fit binary images; the fitting app of MATLAB R2012b doesn't give me a good graph, and I don't have access to the MATLAB toolboxes.
s4 = imread('arecorte.bmp');
A = [90 90 1110 550]; % crop rectangle [xmin ymin width height]
s5 = imcrop(s4, A);
E = edge(s5, 'canny', 0.59); % Canny edge map of the cropped region
image 2
My questions are:
How can I fit the binary image, or join the points and smooth them, without disturbing the tail?
Or how can I use the edge from image 2 to increase the accuracy of image 1?
I will upload an image in the comments to illustrate method 2), because I can't post more links. Please keep in mind that I am working with iterations, so I can't process frame by frame by hand.
Note: I am only asking because I have hit a dead end and don't have the resources to pay someone to do this. Up to this point I was able to write the code myself, but I can't solve this final problem alone.

I think you should use connected component labeling, discard the small labels, and then extract each label's boundary to get the pixels of each part.
The code:
clear all
% Read image
I = imread('fish.jpg');
% You can skip this step if you already have a binary image
Ibw = rgb2gray(I);
Ibw(Ibw < 100) = 0;
% Find size of image
[row,col] = size(Ibw);
% Find connected components
CC = bwconncomp(Ibw,8);
% Find the area of the components
stats = regionprops(CC,'Area','PixelIdxList');
areas = [stats.Area];
% Sort the areas
[~,index] = sort(areas,'descend');
% Take the two largest components' ids and create a filtered image
IbwFiltered = zeros(row,col);
IbwFiltered(stats(index(1)).PixelIdxList) = 1;
IbwFiltered(stats(index(2)).PixelIdxList) = 1;
imshow(IbwFiltered);
% Find the pixels of the border of the main component and tail
boundaries = bwboundaries(IbwFiltered);
yCoordMainFishBody = boundaries{1}(:,1);
xCoordMainFishBody = boundaries{1}(:,2);
linearCoordMainFishBody = sub2ind([row,col],yCoordMainFishBody,xCoordMainFishBody);
yCoordTailFishBody = boundaries{2}(:,1);
xCoordTailFishBody = boundaries{2}(:,2);
linearCoordTailFishBody = sub2ind([row,col],yCoordTailFishBody,xCoordTailFishBody);
% For visualization, color the boundaries (use values in [0,1] for a double RGB image)
IFinal = zeros(row,col,3);
IFinalChannel = zeros(row,col);
IFinal(:,:,1) = IFinalChannel;
IFinalChannel(linearCoordMainFishBody) = 1;
IFinal(:,:,2) = IFinalChannel;
IFinalChannel = zeros(row,col);
IFinalChannel(linearCoordTailFishBody) = 0.5;
IFinal(:,:,3) = IFinalChannel;
imshow(IFinal);
The final image:
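To also address the smoothing part of the question: here is a minimal sketch (an extra suggestion, not part of the code above) that smooths the extracted boundary with a circular moving average, which needs no toolbox; the window length w is a value to tune:
% Smooth the closed boundary with a moving average; pad circularly to avoid end effects
w = 9; % smoothing window length (odd, tune to taste)
k = ones(w,1)/w;
yPad = [yCoordMainFishBody(end-w+1:end); yCoordMainFishBody; yCoordMainFishBody(1:w)];
xPad = [xCoordMainFishBody(end-w+1:end); xCoordMainFishBody; xCoordMainFishBody(1:w)];
ySmooth = conv(yPad, k, 'same'); ySmooth = ySmooth(w+1:end-w);
xSmooth = conv(xPad, k, 'same'); xSmooth = xSmooth(w+1:end-w);
hold on; plot(xSmooth, ySmooth, 'r'); hold off; % overlay the smoothed outline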


How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images;
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV like cv2.BackgroundSubtractorMOG2() only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some light-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background, even with a less light-sensitive color channel. This is likely due to the auto white balance on the camera; you can see that some of the background colors change when you step into view.
The next step I took was thresholding off of this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob, and I filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works for any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensure good contrast of the person against the wall then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np

# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig);
    img = np.multiply(img, new);
    img = img.astype(np.uint8);
    return img;

# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg;
    rh = fg - bg;
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh);
    return low;

# load image
bg = cv2.imread("back.jpg");
fg = cv2.imread("person.jpg");
fg_original = fg.copy();

# blur
bg = cv2.blur(bg,(5,5));
fg = cv2.blur(fg,(5,5));

# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB);
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB);
bl, ba, bb = cv2.split(bg_lab);
fl, fa, fb = cv2.split(fg_lab);

# subtract
d_b = diff(bb, fb);
d_a = diff(ba, fa);

# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255);
d_a = rescale(d_a, np.max(d_a), 255);

# combine
combined = np.maximum(d_b, d_a);

# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255);

# opening and closing
kernel = np.ones((3,3), np.uint8);

# closing
thresh = cv2.dilate(thresh, kernel, iterations = 2);
thresh = cv2.erode(thresh, kernel, iterations = 2);

# opening
thresh = cv2.erode(thresh, kernel, iterations = 2);
thresh = cv2.dilate(thresh, kernel, iterations = 3);

# contours (OpenCV 3.x returns three values; in 4.x drop the first output)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);

# filter contours by size
big_cntrs = [];
marked = fg_original.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > 15000:
        print(area);
        big_cntrs.append(contour);
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3);

# create a mask of the contoured image
mask = np.zeros_like(fb);
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1);

# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations = 1);

# crop out
out = np.zeros_like(fg_original) # Extract out the object and place into output image
out[mask == 255] = fg_original[mask == 255];

# show
cv2.imshow("combined", combined);
cv2.imshow("thresh", thresh);
cv2.imshow("marked", marked);
# cv2.imshow("masked", mask);
cv2.imshow("out", out);
cv2.waitKey(0);
Since it is very easy to find datasets containing many human bodies, I suggest you implement neural-network segmentation techniques to extract the human body cleanly. Please check this link to see a similar example.

How to remove white annotations from image?

I am trying to remove the white annotations in this image (the numbers and arrows), as well as the black grid, with MATLAB:
I tried computing, for each pixel, the mode of its neighbors, but this process is very slow and gives poor results.
How can I obtain an image like this one?
Thank you for your time.
The general name for such a task is inpainting. If you search for that you will find better methods than what I'm showing here. This is no more than a proof of concept. I'm using DIPimage 3 (because I'm an author and it's easy for me to use).
First we need to create a mask for the regions that we want to remove (inpaint). It is easy to find pixels where all three channels have a high value (white) or a low value (black):
img = readim('https://i.stack.imgur.com/16r9N.png');
% Find a mask for the areas to remove
whitemask = min(img,'tensor') > 50;
blackmask = max(img,'tensor') < 30;
mask = whitemask | blackmask;
This mask doesn't capture all of the black grid; if we increase the threshold, we will also remove the dark region of sea off the coast of Spain. It also captures the white outline of the coasts. We can do a little better than this with some additional filtering:
% Find a mask for the areas to remove
whitemask = min(img,'tensor') > 50;
whitemask = whitemask - pathopening(whitemask,50);
blackmask = max(img,'tensor');
blackmask2 = blackmask < 80;
blackmask2 = blackmask2 - areaopening(blackmask2,6);
blackmask = blackmask < 30 | blackmask2;
mask = whitemask | blackmask;
This produces the following mask:
Still far from perfect, but a good start for our proof of concept.
One simple inpainting method uses normalized convolution: using the inverse of the mask we made, convolve the image multiplied by the mask, and convolve the mask separately. The ratio of these two results is a smoothed image that doesn't take the masked pixels into account. Finally, we replace the pixels in the original image under the mask with the values from this normalized convolution:
% Solution 1: normalized convolution
smooth = gaussf(img * ~mask, 2) / gaussf(~mask, 2);
img(mask) = smooth(mask);
An alternative solution applies a closing on the image multiplied by the mask (note that this multiplication makes the pixels we don't want completely black; the closing will spread the surrounding colors over the black areas):
% Solution 2: morphology
smooth = iterate('closing',img * ~mask, 13);
img(mask) = smooth(mask);
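As a side note, if you have the standard Image Processing Toolbox rather than DIPimage, regionfill (R2015a and later) does a similar smoothing-based fill. A rough sketch, where the thresholds are guesses that would need the same refinement as above, and regionfill is applied per channel:
img = imread('https://i.stack.imgur.com/16r9N.png');
whitemask = min(img,[],3) > 200; % bright in all channels (guessed threshold)
blackmask = max(img,[],3) < 30; % dark in all channels (guessed threshold)
mask = whitemask | blackmask;
out = img;
for k = 1:3 % regionfill works on grayscale, so loop over channels
    out(:,:,k) = regionfill(img(:,:,k), mask);
end
imshow(out);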

Finding area in image with maximum variation of pixels

I am struggling to find an algorithm to extract the region of an image that has the maximum variation in pixels. I got the following image after preprocessing.
I did the following preprocessing steps:
x = imread('test2.jpg');
gray_x = rgb2gray(x);
% Median filter to suppress noise
I = medfilt2(gray_x,[3 3]);
gray_x = I;
%%
canny_x = edge(gray_x,'canny',0.3);
figure, imshow(canny_x);
%%
s = strel('disk',3);
si = imdilate(canny_x,s);
figure; imshow(si); title('Dilation');
se = imerode(canny_x,s);
figure; imshow(se); title('Erosion');
% Morphological gradient: dilation minus erosion
I = imsubtract(si,se);
figure; imshow(I);
Basically, what I am trying to build is a weapon detection system using image processing. I want to localize areas that could be weapons so that I can feed them to my classifier to identify whether each is a weapon or not. Any suggestions? Thank you.
A possible solution could be (a sketch follows the list):
1) Find corner points in the image (Harris corner points, etc.)
2) Set the value of all corner points to white while the rest of the image is black
3) Take a rectangular window and traverse it over the whole image
4) Sum all the white pixels in that rectangular window
5) Select the region whose sum is the maximum over all regions
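A minimal sketch of these steps (assuming the Image Processing Toolbox; the window size is a placeholder to tune):
gray_x = rgb2gray(imread('test2.jpg'));
pts = round(corner(gray_x, 'Harris')); % N-by-2 [x y] corner points
cornerMap = zeros(size(gray_x));
cornerMap(sub2ind(size(cornerMap), pts(:,2), pts(:,1))) = 1; % corners -> white
win = ones(50, 80); % rectangular window, height x width (guess)
density = conv2(cornerMap, win, 'same'); % white-pixel sum for every window position
[~, idx] = max(density(:));
[rowC, colC] = ind2sub(size(density), idx); % center of the densest region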

Changing image aspect ratio of interpolated RGB image. Square to rectangular

I have some code which takes a fisheye image and converts it to a rectangular image in each RGB channel. I am having trouble with the fact that the output image is square instead of rectangular (this means that the image is distorted, compressed horizontally). I have tried changing the output matrix to a more suitable format, without success. Besides this, I have also discovered that for the code to work, the input image must be square, like 500x500. Any idea how to solve this issue? This is the code:
The code is inspired by Prakash Manandhar, "Polar To/From Rectangular Transform of Images", on the MathWorks File Exchange.
EDIT: The code now works.
function imP = FISHCOLOR2(imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of the input fisheye image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
for k=1:Dr % colors
imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
end
imP = imresize(imP,[size(imP,1), size(imP,2)/3]);
imP = imrotate(imP,270);
SOLVED
Input image <- Image link
Output image <- Image link
PART A
To remove the requirement of a square input image, you may resize the input image into a square one with this -
%%// Resize the input image to make it square
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
A few points I would like to raise here, though, about this image resizing to make a square image. It is a quick-and-dirty approach and distorts the image for a non-square input, which you may not want if the image is not too "squarish". In many of those non-squarish images you will find blackish borders across the boundaries of the image. If you can remove those with some image processing algorithm, or just manual photoshopping, that would be ideal. After that, even if the image is not square, imresize could be considered a safe option.
PART B
Now, after doing the main processing of flattening out the fisheye image, at the end of your code the image has to be rotated 90 degrees clockwise or counter-clockwise, depending on whether the fisheye image has objects projected inwardly or outwardly, respectively.
%%// Rotating image
imP = imrotate(imP,-90); %%// When projected inwardly
imP = imrotate(imP,90); %%// When projected outwardly
Note that the flattened image must have a height equal to half the size of the input square image, that is, the radius of the image. Thus, the final output image must have size(imP,2)/2 rows. Since you are flattening out a fisheye image, I assumed that the width of the flattened image must be 2*PI times its height. So I tried this -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)]);
But the results looked too flattened out. So the next logical experimental value looked like PI times the height, i.e. -
imP = imresize(imP,[size(imP,2)/2 pi*size(imP,2)/2]);
Results in this case looked good.
I'm not very experienced in the finer points of image processing in MATLAB, but depending on the exact operation of the imP fill mechanism, you may get what you're looking for by doing the following. Change:
M = size(imR, 1);
N = size(imR, 2);
To:
verticalScaleFactor = 0.5;
M = size(imR, 1) * verticalScaleFactor;
N = size(imR, 2);
If my hunch is right, you should be able to tune that scale factor to get the image just right. It may, however, break your code. Let me know if it doesn't work, and edit your post to flesh out exactly what each section of code does. Then we should be able to give it another shot. Good luck!
This is the code which works.
function imP = FISHCOLOR2(imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of the input fisheye image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
reduced_dim = min(size(imR,1),size(imR,2));
imR = imresize(imR,[reduced_dim reduced_dim]);
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
for k=1:Dr % colors
imP(:,:,k) = interp2(imR(:,:,k), xR, yR); % add k channel
end
imP = imresize(imP,[size(imP,1), size(imP,2)/3]);
imP = imrotate(imP,270);

Matlab fill shapes by white

As you can see, I have shapes with white boundaries. I want to fill the shapes with white.
The input is:
I would like to get this output:
Can anybody please help me with this code? It doesn't change the black ellipses to white.
Thanks a lot :]]
I = imread('untitled4.bmp');
Ibw = im2bw(I);
CC = bwconncomp(Ibw); % Ibw is my binary image
stats = regionprops(CC,'pixellist');
% pass over all the stats
for i = 1:length(stats)
    len = length(stats(i).PixelList); % renamed from 'size', which shadows the built-in
    % check only the relevant stats (the black ellipses)
    if len > 150 && len < 600
        % fill the black pixels with white
        x = round(mean(stats(i).PixelList(:,2)));
        y = round(mean(stats(i).PixelList(:,1)));
        Ibw = imfill(Ibw, [x, y]);
    end
end
imshow(Ibw);
Your code can be improved and simplified as follows. First, negating Ibw and using BWCONNCOMP to find 4-connected components will give you indices for each black region. Second, sorting the connected regions by the number of pixels in them and choosing all but the largest two will give you indices for all the smaller circular regions. Finally, the linear indices of these smaller regions can be collected and used to fill in the regions with white. Here's the code (quite a bit shorter and not requiring any loops):
I = imread('untitled4.bmp');
Ibw = im2bw(I);
CC = bwconncomp(~Ibw, 4);
[~, sortIndex] = sort(cellfun('prodofsize', CC.PixelIdxList));
Ifilled = Ibw;
Ifilled(vertcat(CC.PixelIdxList{sortIndex(1:end-2)})) = true;
imshow(Ifilled);
And here's the resulting image:
If your images are all black and white, and you have the Image Processing Toolbox, then this looks like what you need:
http://www.mathworks.co.uk/help/toolbox/images/ref/imfill.html
Something like:
imfill(image, [startX, startY])
where startX, startY is a pixel in the area that you want to fill.
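And if every enclosed region should end up white, imfill can also fill all holes at once, with no seed points (using the Ibw binary image from the question):
Ibw = imfill(Ibw, 'holes'); % fill every fully enclosed region
imshow(Ibw);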