Black on black motion segmentation in MATLAB

I have a video that I am using background subtraction and motion segmentation on. The floor in the video is black, so when I get the silhouette, the feet and parts of the legs are cut off. Is there a way around this? This is what it looks like.
This is the background image.
This is a piece of my code....
clear all
close all
clc
% Read the video in the video object Mov
MM = mmreader('kassie_test_video.wmv');
% Read in all video frames.
Mov = read(MM);
% Get the number of frames.
FrameNum = MM.NumberOfFrames;
% load 'object_data.mat'
BackgroundImage = (Mov(:,:,:,98)); % background image
% set the sampling rate as well as the threshold for binary image.
downSamplingRate = MM.FrameRate;
%%
index = 1;
clear IM
clear Images
sf=10;ef=sf+30;
for ii = sf:ef
    % Extract the next frame
    Im = im2double(Mov(:,:,:,ii));
    % Background subtraction
    Ib = rgb2gray(abs(im2double(Mov(:,:,:,ii)) - im2double(BackgroundImage)));
    % Conversion to a binary image
    Thresh = graythresh(Ib);
    Ib = im2bw(Ib, Thresh);
    se = strel('square',1);
    Ib = imerode(Ib,se);     % erode the image
    Ib = medfilt2(Ib);       % median filtering
    Ib = imfill(Ib,'holes'); % fill the holes in the image
    imshow(Ib,[])
end

There is a limit to what can be achieved in computer vision using only pixel processing, without incorporating higher-level semantic information. It appears that the only thing telling you the legs are missing is your high-level knowledge of what a body should look like. The real question here is: is there any real information in the pixels? If it just so happens that the legs are exactly the same color as the background, there's not much you can do unless you incorporate high-level semantic information.
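One quick way to check empirically whether any signal survives where the legs are is to look at the strongest per-channel difference with the contrast stretched, instead of thresholding right away. This is a minimal sketch reusing Mov and BackgroundImage from the question's code; the frame index is arbitrary:
ii = 20;  % any frame of interest
D = abs(im2double(Mov(:,:,:,ii)) - im2double(BackgroundImage));
Dmax = max(D, [], 3);      % strongest per-channel difference (rgb2gray would average it away)
figure; imshow(Dmax, []);  % contrast-stretched: are the legs visible at all?
If the legs don't show up even here, the pixels simply don't carry the information.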

Related

Edge linking in Matlab

I am trying to link the edges in images like the image below:
I've tried using dilation/erosion operations but the result isn't good. Is there any other way to link the edges?
Here is the original image:
The result you have might very well be good enough to apply the Hough transform to, which will identify your eight most important lines across the image.
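For reference, a minimal sketch of that step with the Image Processing Toolbox's Hough functions, assuming BW is your binary edge image and that eight lines is the right count for your scene:
% BW is assumed to be your binary edge image
[H, theta, rho] = hough(BW);           % accumulate votes in (rho, theta) space
P = houghpeaks(H, 8);                  % pick the eight strongest peaks
lines = houghlines(BW, theta, rho, P); % line segments with endpoints
Each element of lines has point1/point2 endpoints you can overlay on the image.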
I don't know if all your images are similar, but in the example you show it is easy to separate the gray lines from the green background. For example, the following code (using DIPimage, but easy to implement with other tools too) will distinguish the relatively bright gray from anything that is dark or colorful:
img = readim('https://i.stack.imgur.com/vmBiF.jpg');
img = colorspace(img,'hsv');
img = (0.5-img{2})*img{3}; % img{2} is the saturation channel, img{3} is the value (intensity) channel
img = clip(img); % set negative values to 0
Next, a Laplace of Gaussian filter (which is a line detector), some thresholding just above zero, and a selection of only the larger objects results in the detected lines:
img = -laplace(img,5); % LoG with sigma=5
img = img > 0.05; % 0.05 is just above 0
img = areaopening(img,1000); % remove objects smaller than 1000 pixels
Needless to say, this is a lot cheaper computationally than running a U-Net.
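If you don't have DIPimage, a rough equivalent with the Image Processing Toolbox might look like this (a sketch; the LoG kernel size and both thresholds are assumptions to tune):
rgb = im2double(imread('https://i.stack.imgur.com/vmBiF.jpg'));
hsv = rgb2hsv(rgb);
img = max(0, (0.5 - hsv(:,:,2)) .* hsv(:,:,3)); % bright gray, not colorful; clip negatives to 0
h = fspecial('log', 31, 5);                     % LoG kernel, sigma = 5
resp = -imfilter(img, h, 'replicate');          % negate so bright lines give a positive response
bw = resp > 0.05;                               % threshold just above zero
bw = bwareaopen(bw, 1000);                      % remove objects smaller than 1000 pixels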

How to get the boundary of the bubble in the water and output the coordinate using MATLAB?

Hi everyone. I am trying to get the boundary dimensions of a bubble in water using MATLAB. The code and result are shown below.
clear;
clc;
i1=imread('1.jpg');
i2=imread('14.jpg');
% i1=rgb2gray(i1);
% i2=rgb2gray(i2);
[m,n]=size(i1);
im1=double(i1);
im2=double(i2);
i3=zeros(size(i1));
threshold=29;
for i=1:m
    for j=1:n
        if abs(im2(i,j)-im1(i,j)) > threshold
            i3(i,j)=1;
        else
            i3(i,j)=0;
        end
    end
end
se = strel('square', 5);
filteredForeground = imopen(i3, se);
figure; imshow(filteredForeground); title('Clean Foreground');
BW1 = edge(filteredForeground,'sobel');
subplot(2,2,1);imshow(i1);title('BackGround');
subplot(2,2,2);imshow(i2);title('Current Frame');
subplot(2,2,3);imshow(filteredForeground);title('Clean Foreground');
subplot(2,2,4);imshow(BW1);title('Edge');
As the figure shows, the result is not very satisfactory. Can anyone give me some advice to improve my result? And how can I output the boundary coordinates to a file and get the real dimensions of the bubble? Thank you very much!
First, note that your background removal is almost useless.
If we plot diffI = i2-i1; imshow(diffI,[]); colorbar, we can see that the difference is almost as large as the image itself. You need to understand that what is visually similar to you is not necessarily similar numerically, and this is a great example of it.
Therefore you don't have what you think you have: the background is still there after your thresholding. Then, note that the object you want to segment is not simply whiter; it is definitely as dark as the background in some areas. This means that simple segmentation by value thresholding will not work. You need better segmentation techniques.
I happen to have a copy of this level set algorithm on my MATLAB path: the "Distance Regularized Level Set Evolution" (DRLSE).
When I run its demo_1 code with your image, I get the following (nice gif!):
Full code of the demo:
% This Matlab code demonstrates an edge-based active contour model as an application of
% the Distance Regularized Level Set Evolution (DRLSE) formulation in the following paper:
%
% C. Li, C. Xu, C. Gui, M. D. Fox, "Distance Regularized Level Set Evolution and Its Application to Image Segmentation",
% IEEE Trans. Image Processing, vol. 19 (12), pp. 3243-3254, 2010.
%
% Author: Chunming Li, all rights reserved
% E-mail: lchunming@gmail.com
% li_chunming@hotmail.com
% URL: http://www.imagecomputing.org/~cmli//
clear all;
close all;
Img=imread('https://i.stack.imgur.com/Wt9be.jpg');
Img=double(Img(:,:,1));
%% parameter setting
timestep=1; % time step
mu=0.2/timestep; % coefficient of the distance regularization term R(phi)
iter_inner=5;
iter_outer=300;
lambda=5; % coefficient of the weighted length term L(phi)
alfa=-3; % coefficient of the weighted area term A(phi)
epsilon=1.5; % parameter that specifies the width of the Dirac delta function
sigma=.8; % scale parameter in Gaussian kernel
G=fspecial('gaussian',15,sigma); % Gaussian kernel
Img_smooth=conv2(Img,G,'same'); % smooth image by Gaussian convolution
[Ix,Iy]=gradient(Img_smooth);
f=Ix.^2+Iy.^2;
g=1./(1+f); % edge indicator function.
% initialize LSF as binary step function
c0=2;
initialLSF = c0*ones(size(Img));
% generate the initial region R0 as a rectangle (a second one is commented out below)
initialLSF(size(Img,1)/2-5:size(Img,1)/2+5,size(Img,2)/2-5:size(Img,2)/2+5)=-c0;
% initialLSF(25:35,40:50)=-c0;
phi=initialLSF;
potential=2;
if potential == 1
    potentialFunction = 'single-well'; % use single-well potential p1(s)=0.5*(s-1)^2, which is good for region-based models
elseif potential == 2
    potentialFunction = 'double-well'; % use double-well potential in Eq. (16), which is good for both edge- and region-based models
else
    potentialFunction = 'double-well'; % default choice of potential function
end
% start level set evolution
for n=1:iter_outer
    phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);
    if mod(n,2)==0
        figure(2);
        imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on; contour(phi, [0,0], 'r');
        drawnow
    end
end
% refine the zero level contour by further level set evolution with alfa=0
alfa=0;
iter_refine = 10;
phi = drlse_edge(phi, g, lambda, mu, alfa, epsilon, timestep, iter_inner, potentialFunction);
finalLSF=phi;
figure(2);
imagesc(Img,[0, 255]); axis off; axis equal; colormap(gray); hold on; contour(phi, [0,0], 'r');
str=['Final zero level contour, ', num2str(iter_outer*iter_inner+iter_refine), ' iterations'];
title(str);
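To answer the coordinate-output part of the question: once the evolution has converged, the zero level set of phi is the bubble boundary, and you can extract it as explicit coordinates. This is a sketch using MATLAB's standard contour-matrix format:
C = contourc(double(phi), [0, 0]);   % zero level set as a contour matrix
n = C(2,1);                          % number of vertices in the first contour
xy = C(:, 2:n+1)';                   % n-by-2 list of [x y] boundary points
dlmwrite('bubble_boundary.txt', xy)  % write the coordinates to a text file
The pixel coordinates still have to be scaled by your image calibration to get real dimensions.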
Ander pointed out in his answer that the background image doesn't match the background of the bubble image. My very best advice to you is not to try to fix this in code, but to fix your experimental setup. If you fix this in software, you'll get a complicated program with lots of "magic numbers" that nobody will be able to maintain after you graduate and leave. Anybody wanting to continue your work will have a hard time adjusting the program to match some new experimental conditions. Fixing the setup will lead to an experiment that is much easier to reproduce and to build on.
So what is wrong with the background picture? First of all, make sure the illumination hasn't changed since you took it. Let's assume you took the pictures in succession, and the change in background illumination is due to shadows of the bubble on the background.
In your previous question about this topic you got some so-so advice about your experimental setup. This picture is from that question:
This looks really great, you have a transparent tank, and a big white surface behind it. I recommend that you take out the reticulated sheet from behind it, and put all your lights on the white background. The goal is to get back-illuminated bubbles. The bubbles will cast a shadow, but it will be towards the camera, not the background -- they will darken the image, making detection really simple. But you need to make sure there is no direct light falling on the bubbles, since the reflection of that light towards the camera will cause highlights (as you see in your picture) that could be brighter than the background, or at least will reduce contrast.
If you keep some distance between the tank and the white background, then when focusing the camera on the bubbles that background will be out of focus and blurred, meaning that it will be fairly uniform. The less detail in the background, the easier the detection of bubbles is.
If you need the markings from the reticulated sheet, then I recommend you use a transparent sheet for that, on which you can draw lines with a permanent marker.
Sorry, this was not at all a programming answer... :)
So here is what this could look like. An example image with bubbles that we've used in Delft for many decades as an exercise:
I actually don't know what it is from, but they seem to be small bubbles in liquid. Some are out of focus, but you won't have this problem. Segmentation is quite simple (this uses MATLAB with DIPimage):
img = readim('bubbles.tif');
background = closing(img,25); % estimate of background
out = threshold(background - img);
out = fillholes(out);
traces = traceobjects(out);
If you have a background image (which of course you'll have), then you don't need to estimate it. What the code then does is simply threshold the difference between the background and the image (since the bubbles are darker, I subtract the image from the background instead of the other way around), and a very simple post-processing to fill up the holes in the objects. Depending on what your images look like, you might need a bit more preprocessing or postprocessing... Think about noise removal in the input image!
The last line traces the object boundaries, yielding a polygon for each bubble (this last command is only in DIPimage 3.0, which isn't officially released yet, but you can compile it yourself if you're adventurous). Alternatively, use the bwboundaries function from the Image Processing Toolbox:
traces = bwboundaries(dip_array(out));
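If you need the coordinates in a file, bwboundaries already gives you one polygon per object as an n-by-2 [row, column] array, so writing them out is a one-liner per bubble (a sketch; the file naming is an assumption):
B = bwboundaries(dip_array(out));  % one cell per connected object
for k = 1:numel(B)
    dlmwrite(sprintf('bubble_%02d.txt', k), B{k}); % one [row col] vertex per line
end
To convert to real dimensions, multiply the pixel coordinates by your spatial calibration (e.g., derived from the markings on the reticulated sheet).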

What's the common way of segmenting text from an image using MATLAB?

I have searched this field and found some papers that present new methods for extracting text from images, but I have a grayscale image consisting of a simple background and some text, so I need a method that everyone commonly uses.
Please provide details on how this can be done.
Here is an article about text segmentation: the article.
And here is an easy way to segment your image into two classes.
I = imread('...'); % Your board image
ThreshConstant = 1; % Try to vary this constant.
bw = im2bw(I , ThreshConstant * graythresh(I)); % Black-white image
SegmentedImg = I.*repmat(uint8(bw), [1 1 3]);
Just do imshow(bw); and you will normally get a well-segmented two-color image.
If the threshold is too strong, try varying ThreshConstant between about 0.5 and 1.5.
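If you're unsure which constant to use, a quick sweep makes it easy to pick one by eye. A small sketch (the filename is hypothetical; note that im2bw needs a level in [0,1], hence the clamp):
I = imread('board.jpg');  % hypothetical filename
for c = 0.5:0.25:1.5
    level = min(1, c * graythresh(I));  % clamp to the valid range
    figure; imshow(im2bw(I, level));
    title(sprintf('ThreshConstant = %.2f', c));
end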

Real time face detection in MATLAB

I'm trying to make a real-time face detector using MATLAB. I found a sample code on the MathWorks page, but it uses a sample video. The problem I'm having is that this code can only track the one face it chooses, even with several faces in the opening frame. I need it to track several faces at once. Is that possible with a change to this code that is not drastic?
I found the following code on MathWorks' web page:
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Read a video frame and run the detector.
videoFileReader = vision.VideoFileReader('visionface.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
% Draw the returned bounding box around the detected face.
videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
figure, imshow(videoOut), title('Detected face');
% Get the skin tone information by extracting the Hue from the video frame
% converted to the HSV color space.
[hueChannel,~,~] = rgb2hsv(videoFrame);
% Display the Hue Channel data and draw the bounding box around the face.
figure, imshow(hueChannel), title('Hue channel data');
rectangle('Position',bbox(1,:),'LineWidth',2,'EdgeColor',[1 1 0])
% Detect the nose within the face region. The nose provides a more accurate
% measure of the skin tone because it does not contain any background
% pixels.
noseDetector = vision.CascadeObjectDetector('Nose');
faceImage = imcrop(videoFrame,bbox(1,:));
noseBBox = step(noseDetector,faceImage);
% The nose bounding box is defined relative to the cropped face image.
% Adjust the nose bounding box so that it is relative to the original video
% frame.
noseBBox(1,1:2) = noseBBox(1,1:2) + bbox(1,1:2);
% Create a tracker object.
tracker = vision.HistogramBasedTracker;
% Initialize the tracker histogram using the Hue channel pixels from the
% nose.
initializeObject(tracker, hueChannel, noseBBox(1,:));
% Create a video player object for displaying video frames.
videoInfo = info(videoFileReader);
videoPlayer = vision.VideoPlayer('Position',[300 300 videoInfo.VideoSize+30]);
% Track the face over successive video frames until the video is finished.
while ~isDone(videoFileReader)
    % Extract the next video frame
    videoFrame = step(videoFileReader);
    % RGB -> HSV
    [hueChannel,~,~] = rgb2hsv(videoFrame);
    % Track using the Hue channel data
    bbox = step(tracker, hueChannel);
    % Insert a bounding box around the object being tracked
    videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
    % Display the annotated video frame using the video player object
    step(videoPlayer, videoOut);
end
% Release resources
release(videoFileReader);
release(videoPlayer);
Thanks in advance!
That example is designed to track only a single face. For tracking multiple objects, please take a look at this example, which uses vision.KalmanFilter objects for tracking. You can replace the detection part in that example with code to detect faces.
Alternatively, take a look at this example that uses the KLT algorithm (vision.PointTracker) to track points. You can modify that to track multiple faces too, but that is considerably more work. You would have to do a lot of bookkeeping to keep track of which points belong to which face.
Edit:
Here is an example of how to use vision.PointTracker to track multiple faces.
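If you don't strictly need tracking, the simplest change that handles any number of faces is to run the cascade detector on every frame; it is slower than tracking, but the detector naturally returns one bounding box per face. A minimal sketch built from the same objects as the sample above:
% Detect (rather than track) every face in each frame.
faceDetector = vision.CascadeObjectDetector();
videoFileReader = vision.VideoFileReader('visionface.avi');
videoPlayer = vision.VideoPlayer();
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bboxes = step(faceDetector, videoFrame);  % one row per detected face
    videoFrame = insertObjectAnnotation(videoFrame,'rectangle',bboxes,'Face');
    step(videoPlayer, videoFrame);
end
release(videoFileReader);
release(videoPlayer);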

MATLAB Image Processing - Find Edge and Area of Image

As a preface: this is my first question - I've tried my best to make it as clear as possible, but I apologise if it doesn't meet the required standards.
As part of a summer project, I am taking time-lapse images of an internal melt figure growing inside a crystal of ice. For each of these images I would like to measure the perimeter of, and area enclosed by the figure formed. Linked below is an example of one of my images:
The method that I'm trying to use is the following:
1. Load the image, crop it, and convert to grayscale
2. Process to reduce noise
3. Find the edge/perimeter
4. Attempt to join the edges
5. Fill the perimeter with white
6. Measure the area and perimeter using regionprops
This is the code that I am using:
clear; close all;
% load image and convert to grayscale
tyrgb = imread('TyndallTest.jpg');
ty = rgb2gray(tyrgb);
figure; imshow(ty)
% apply a Wiener filter to remove noise.
% N is a measure of the window size for detecting coherent features
N=20;
tywf = wiener2(ty,[N,N]);
tywf = tywf(N:end-N,N:end-N);
% rescale the image adaptively to enhance contrast without enhancing noise
tywfb = adapthisteq(tywf);
% apply a canny edge detection
tyedb = edge(tywfb,'canny');
%join edges
diskEnt1 = strel('disk',8); % radius of 8
tyjoin1 = imclose(tyedb,diskEnt1);
figure; imshow(tyjoin1)
It is at this stage that I am struggling. The edges do not quite join, no matter how much I play around with the morphological structuring element. Perhaps there is a better way to complete the edges? Linked is an example of the figure this code outputs:
The reason that I am trying to join the edges is so that I can fill the perimeter with white pixels and then use regionprops to output the area. I have tried using the imfill command, but cannot seem to fill the outline as there are a large number of dark regions to be filled within the perimeter.
Is there a better way to get the area of one of these melt figures that is more appropriate in this case?
As background research: I can make this method work for a simple image consisting of a black circle on a white background using the code below. However, I don't know how to edit it to handle more complex images with edges that are less well defined.
clear all
close all
clc
%% Read in RGB image from directory
RGB1 = imread('1.jpg') ;
%% Convert RGB image to grayscale image
I1 = rgb2gray(RGB1) ;
%% Transform Image
%CROP
IC1 = imcrop(I1,[74 43 278 285]);
%BINARY IMAGE
BW1 = im2bw(IC1); %Convert to binary image so the boundary can be traced
%FIND PERIMETER
BWP1 = bwperim(BW1);
%Traces perimeters of objects & colours them white (1).
%Sets all other pixels to black (0)
%Doing the same job as an edge detection algorithm?
%FILL PERIMETER WITH WHITE IN ORDER TO MEASURE AREA AND PERIMETER
BWF1 = imfill(BWP1); %This opens figure and allows you to select the areas to fill with white.
%MEASURE PERIMETER
D1 = regionprops(BWF1, 'area', 'perimeter');
%Returns an array containing the properties area and perimeter.
%D1(1) returns the perimeter of the box and an area value identical to that
%perimeter? The box must be bounded by a perimeter.
%D1(2) returns the perimeter and area of the section filled in BWF1
%% Display Area and Perimeter data
D1(2)
I think you might have room to improve the edge detection itself, in addition to the morphological transformations; for instance, the following produced what appeared to me a relatively satisfactory perimeter.
tyedb = edge(tywfb,'sobel',0.012);
%join edges
diskEnt1 = strel('disk',7); % radius of 7
tyjoin1 = imclose(tyedb,diskEnt1);
In addition I used bwfill interactively to fill in most of the interior. It should be possible to fill the interior programmatically, but I did not pursue this.
% interactively fill internal regions
[ny,nx] = size(tyjoin1);
figure; imshow(tyjoin1)
tyjoin2 = tyjoin1;
titl = sprintf('click on a region to fill\nclick outside window to stop...');
while 1
    pts = ginput(1);
    tyjoin2 = bwfill(tyjoin2,pts(1,1),pts(1,2),8);
    imshow(tyjoin2)
    title(titl)
    if (pts(1,1)<1 | pts(1,1)>nx | pts(1,2)<1 | pts(1,2)>ny), break, end
end
This was the result I obtained
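If you'd rather avoid the interactive step, a programmatic fill may work once the perimeter is fully closed. A sketch; the assumption that the imclose above leaves no gaps is exactly the fragile part:
tyjoin2 = imfill(tyjoin1, 'holes');                % fill every enclosed region at once
stats = regionprops(tyjoin2, 'Area', 'Perimeter')  % then measure as planned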
The "fractal" properties of the perimeter may be of importance to you however. Perhaps you want to retain the folds in your shape.
You might want to consider Active Contours. This will give you a continous boundary of the object rather than patchy edges.
Below are links to:
A book: http://www.amazon.co.uk/Active-Contours-Application-Techniques-Statistics/dp/1447115570/ref=sr_1_fkmr2_1?ie=UTF8&qid=1377248739&sr=8-1-fkmr2&keywords=Active+shape+models+Andrew+Blake%2C+Michael+Isard
A demo: http://users.ecs.soton.ac.uk/msn/book/new_demo/Snakes/
Some MATLAB code on the File Exchange: http://www.mathworks.co.uk/matlabcentral/fileexchange/28149-snake-active-contour
A description of how to implement it: http://www.cb.uu.se/~cris/blog/index.php/archives/217
Using the implementation on the File Exchange, you can get something like this:
%% Load the image
% You could use the segmented image obtained previously
% and then apply the snake on that (although I use the original image).
% This will probably make the snake work better, since the edges
% in your image are not that well defined.
% Make sure the original and the segmented image
% have the same size. They don't at the moment
I = imread('33kew0g.jpg');
% Convert the image to double data type
I = im2double(I);
% Show the image and select some points with the mouse (at least 4)
% figure, imshow(I); [y,x] = getpts;
% I have pre-selected the coordinates already
x = [ 525.8445 473.3837 413.4284 318.9989 212.5783 140.6320 62.6902 32.7125 55.1957 98.6633 164.6141 217.0749 317.5000 428.4172 494.3680 527.3434 561.8177 545.3300];
y = [ 435.9251 510.8691 570.8244 561.8311 570.8244 554.3367 476.3949 390.9586 311.5179 190.1085 113.6655 91.1823 98.6767 106.1711 142.1443 218.5872 296.5291 375.9698];
% Make an array with the selected coordinates
P=[x(:) y(:)];
%% Start Snake Process
% You probably have to fiddle with the parameters
% a bit more than I have
Options=struct;
Options.Verbose=true;
Options.Iterations=1000;
Options.Delta = 0.02;
Options.Alpha = 0.5;
Options.Beta = 0.2;
figure(1);
[O,J]=Snake2D(I,P,Options);
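Once the snake has converged, the contour O is an ordered polygon, so the quantities the question ultimately asks for fall out directly. A sketch; it assumes Snake2D returns the vertices as an n-by-2 array, as documented on the File Exchange page:
A = polyarea(O(:,1), O(:,2));  % enclosed area in pixels^2
dO = diff(O([1:end 1], :));    % edge vectors of the closed polygon
P = sum(sqrt(sum(dO.^2, 2)));  % perimeter in pixels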
If the end result is an area/diameter estimate, then why not try to find maximal and minimal shapes that fit inside and around the outline, and then use those shapes' areas to bound the actual area? For instance, compute a minimal circle around the edge set and a maximal circle inside the edges; you can then use these two to estimate the diameter and area of the actual shape.
The advantage is that the bounding shapes can be fit in a way that minimizes the error from unbounded edges, while optimizing size upward for the inner shape and downward for the outer shape.
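A sketch of that inner/outer circle idea (assuming bw is the filled binary mask; note the outer circle here is centered on the centroid, which is simpler than, and an over-estimate of, the true minimal enclosing circle):
D = bwdist(~bw);                          % distance from each pixel to the background
rIn = max(D(:));                          % radius of the largest inscribed circle
s = regionprops(bw, 'Centroid');
[yy, xx] = find(bw);
c = s(1).Centroid;
rOut = max(hypot(xx - c(1), yy - c(2)));  % radius of an enclosing circle
areaLow = pi*rIn^2; areaHigh = pi*rOut^2; % lower/upper bounds on the true area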