Resize Frame for Optical Flow - matlab

I have a problem with optical flow: if the frame size is manipulated in any way, it gives me an error. There are two options: either change the resolution of the video at the beginning, or somehow change the frame size in a way that optical flow will still work. In further development I want to add cascade object detectors for the nose, mouth and eyes, so I need a solution that works for individual regions without having to set up optical flow separately for each region, especially since a bounding box does not have a fixed size and it displaces itself slightly from frame to frame. Here is my code so far; the error is that it exceeds matrix dimensions.
faceDetector = vision.CascadeObjectDetector();
vidObj = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
converter = vision.ImageDataTypeConverter;
opticalFlow = vision.OpticalFlow('ReferenceFrameDelay', 1);
opticalFlow.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom','CustomBorderColor', 255);
vidPlayer = vision.VideoPlayer('Name','Motion Vector');
while ~isDone(vidObj)
    frame = step(vidObj);
    fraRes = imresize(frame, 0.5);
    fbbox = step(faceDetector, fraRes);
    I = imcrop(fraRes, fbbox);
    im = step(converter, I);
    of = step(opticalFlow, im);
    lines = videooptflowlines(of, 20);
    if ~isempty(lines)
        out = step(shapeInserter, im, lines);
        step(vidPlayer, out);
    end
end
release(vidPlayer);
release(vidObj);
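One thing worth noting (my observation, not part of the original post): imcrop expects a single [xmin ymin width height] rectangle, while the cascade detector returns one row per detection, and the crop size changes from frame to frame, which is a likely reason the optical flow step complains about matrix dimensions. A minimal guard might look like this:
if ~isempty(fbbox)
    I = imcrop(fraRes, fbbox(1, :));  % keep only the first detected face
    % ... rest of the loop body as above
end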

UPDATE: I went and edited the function that creates the optical flow lines, and this sorts out some of the size issues; however, it is necessary to input this manually for each object (so if there is any other way, let me know). I think the best solution would be to set a fixed size on the CascadeObjectDetector. Does anyone know how to do this, or have any other idea?
faceDetector = vision.CascadeObjectDetector(); %I need fixed size for this
faceDetector.MinSize = [150 150];
vidRead = vision.VideoFileReader('MEXTest.mp4','ImageColorSpace','Intensity','VideoOutputDataType','uint8');
convert = vision.ImageDataTypeConverter;
optFlo = vision.OpticalFlow('ReferenceFrameDelay', 1);
optFlo.OutputValue = 'Horizontal and vertical components in complex form';
shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom', 'CustomBorderColor', 255);
while ~isDone(vidRead)
    frame = step(vidRead);
    fraRes = imresize(frame, 0.3);
    fraSin = im2single(fraRes);
    bbox = step(faceDetector, fraSin);
    I = imcrop(fraSin, bbox);
    im = step(convert, I);
    release(optFlo);
    of = step(optFlo, im);
    lines = optfloo(of, 50); % use videooptflowlines instead of optfloo
    out = step(shapeInserter, im, lines);
    imshow(out);
end
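Since MinSize only puts a lower bound on the detections, one possible workaround (a sketch of mine, not something from the original post) is to resize every cropped face region to a fixed size before it reaches the optical flow object, so that the flow computation and the line drawing always see the same dimensions:
faceSize = [150 150];                    % assumed fixed working size
bbox = step(faceDetector, fraSin);
if ~isempty(bbox)
    I  = imresize(imcrop(fraSin, bbox(1,:)), faceSize);  % constant-size face patch
    of = step(optFlo, I);
    lines = videooptflowlines(of, 50);
    if ~isempty(lines)
        out = step(shapeInserter, I, lines);
        imshow(out);
    end
end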

Related

Most efficient way to track multiple small objects in MATLAB?

I am relatively new to image processing and have never attempted to do anything with images in MATLAB, so forgive me if I am making some very rookie errors.
I am attempting to make a program that will track ants in a video. The video is taken from a stationary camera and records the ants from a bird's-eye perspective. However, I am having issues producing reliable tracks of the ants. Initially, I used the ForegroundDetector function, but there were multiple issues:
1.) Stationary ants were not detected
2.) There was too much overlap between objects (high levels of occlusion)
A friend of mine recommended using a larger gap between the compared frames: instead of subtracting frame 1 from frame 2, subtract frame 1 from frame 30 (1 second apart), as this makes the ants that do not move much more likely to appear in the subtracted image.
Below is the code I have so far. It is a bit of a shot-in-the-dark attempt to solve the problem, as I am running out of ideas:
i = 1;
k = 1;
n = 1;
Video = {};
SubtractedVideo = {};
FilteredVideo = {};
videoFReader = vision.VideoFileReader('001.mp4',...
'ImageColorSpace', 'Intensity', 'VideoOutputDataType', 'uint8');
videoPlayer = vision.VideoPlayer;
blobby = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
'AreaOutputPort', true, 'CentroidOutputPort', true, ...
'MinimumBlobArea', 1);
shapeInserter = vision.ShapeInserter('BorderColor','White');
while ~isDone(videoFReader) %Read all frames of video
    frame = step(videoFReader);
    Video{1, i} = frame;
    i = i+1;
end
%Perform subtraction
for j = 1:1:numel(Video)-60
    CurrentFrame = Video{1,j};
    FutureFrame = Video{1,j+60};
    SubtractedImage = imsubtract(CurrentFrame, FutureFrame);
    SubtractedVideo{1,j} = SubtractedImage;
    ImFiltered = imgaussfilt(SubtractedImage, 2);
    BWIm = im2bw(ImFiltered, 0.25);
    FilteredVideo{1,j} = BWIm;
end
for a = n:numel(FilteredVideo)
    frame = Video{1, n};
    bbox = step(blobby, FilteredVideo{1, k});
    out = step(shapeInserter, frame, bbox);
    step(videoPlayer, out);
    k = k+1;
end
Currently, when I run the code, I get the following error on the line out = step(shapeInserter, frame, bbox):
'The Points input must be a 4-by-N matrix. Each column specifies a different rectangle and is of the form [row column height width].'
My questions are:
1.) Is this the best way to try and solve the problem I'm having? Is there potentially an easier solution?
2.) What does this error mean? How do I solve the issue?
I appreciate any help anyone can give, thank you!
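Two observations that may help (mine, not from the original post). First, n is never incremented in the last loop, so the same frame is drawn over and over; the loop counter (or k) is probably what was meant there. Second, recent releases of vision.BlobAnalysis document the bounding-box output as an M-by-4 matrix of [x y width height] rows, while the error message asks for 4-by-N columns of the form [row column height width], so a conversion along these lines might be needed before calling the shape inserter (an untested sketch based purely on the error text):
bbox = step(blobby, FilteredVideo{1, k});
if ~isempty(bbox)
    % reorder [x y width height] rows into [row column height width] columns
    pts = double([bbox(:,2), bbox(:,1), bbox(:,4), bbox(:,3)]).';
    out = step(shapeInserter, frame, pts);
    step(videoPlayer, out);
end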

Find a nearly circular band of bright pixels in this image

This is the problem I have: I have an image as shown below. I want to detect the circular region which I have marked with a red line for display here (that particular bright ring).
Initially, this is what I do for now: (MATLAB)
binaryImage = imdilate(binaryImage,strel('disk',5));
binaryImage = imfill(binaryImage, 'holes'); % Fill holes.
binaryImage = bwareaopen(binaryImage, 20000); % Remove small blobs.
binaryImage = imerode(binaryImage,strel('disk',300));
out = binaryImage;
img_display = immultiply(binaryImage,rgb2gray(J1));
figure, imshow(img_display);
The output seems to be cut off on one part of the object (for a different input image, not the one displayed above). I want the output to be symmetric (it is not always a perfect circle when it is rotated).
I want to strictly avoid im2bw since as soon as I binarize, I lose a lot of information about the shape.
This is what I was thinking of:
I can detect the outermost, almost circular contour of the image (shown in yellow). From this, I can find the centroid and maybe fit a circle with about 50% of that radius (to locate the region shown in red). But this won't be exactly symmetric, since the object is slightly tilted. How can I tackle this issue?
I have attached another image, where the object is slightly tilted, here.
I'd try messing around with the 'log' (Laplacian of Gaussian) filter. The region you want is essentially where the 2nd-order derivative is negative (i.e. where the slope is decreasing), and you can detect these regions by applying a LoG filter and keeping the negative values. Here's a very basic outline of what you can do; tweak it to your needs.
img = im2double(rgb2gray(imread('wheel.png')));
img = imresize(img, 0.25, 'bicubic');
% LoG filter; negative response marks the bright, ridge-like regions
filt_img = imfilter(img, fspecial('log',31,5));
bin_img = filt_img < 0;
subplot(2,2,1);
imshow(filt_img,[]);
% Keep only ring-like blobs: exactly one hole, roughly circular, large enough
rp = regionprops(bin_img,'EulerNumber','Eccentricity','Area','PixelIdxList','PixelList');
rp = rp([rp.EulerNumber] == 0 & [rp.Eccentricity] < 0.5 & [rp.Area] > 2000);
bin_img(:) = false;
bin_img(vertcat(rp.PixelIdxList)) = true;
subplot(2,2,2);
imshow(bin_img,[]);
% Mask the original image with the first surviving region
bin_img(:) = false;
bin_img(rp(1).PixelIdxList) = true;
bin_img = imfill(bin_img,'holes');
img_new = img;
img_new(~bin_img) = 0;
subplot(2,2,3);
imshow(img_new,[]);
% ... and with the second surviving region
bin_img(:) = false;
bin_img(rp(2).PixelIdxList) = true;
bin_img = imfill(bin_img,'holes');
img_new = img;
img_new(~bin_img) = 0;
subplot(2,2,4);
imshow(img_new,[]);
Output:

Detecting frames for which a face appears in a video

I need to detect the number of frames in which a face appears in a video. I looked into the sample code using the CAMShift algorithm provided on the MathWorks site (http://www.mathworks.in/help/vision/examples/face-detection-and-tracking-using-camshift.html). Is there a way of knowing whether a face has appeared in a particular frame?
I'm new to MATLAB. I'm assuming the step function will return a false value if no face is detected (the condition fails, similar to C). Is there a possible solution? I think using MinSize is also a possible solution.
I am not concerned about the computational burden, although a faster approach would be appreciated. My current code is given below:
clc;
clear all;
videoFileReader = vision.VideoFileReader('Teapot.mp4', 'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity');
video = VideoReader('Teapot.mp4');
numOfFrames = video.NumberOfFrames;
faceDetector = vision.CascadeObjectDetector();
opFolder = fullfile(cd, 'Face Detected Frames');
frameCount = 0;
shotCount = 0;
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);
    frameCount = frameCount + 1;
    for i = 1:size(bbox,1)
        shotCount = shotCount + 1;
        rectangle('Position', bbox(i,:), 'LineWidth', 2, 'EdgeColor', [1 1 0]);
        videoOut = insertObjectAnnotation(videoFrame, 'rectangle', bbox, 'Face');
        progIndication = sprintf('Face has been detected in frame %d of %d frames', shotCount, numOfFrames);
        figure, imshow(videoOut), title(progIndication);
    end
end
release(videoFileReader);
You can use the vision.CascadeObjectDetector object to detect faces in any particular frame. If it does not detect any faces, its step method will return an empty array. The problem is that the face detection algorithm is not perfect; sometimes it detects false positives, i.e. it finds faces where there are none. You can try to mitigate that by setting the MinSize and MaxSize properties, assuming you know what size faces you expect to find.
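As a rough sketch of how that empty-array check can be used to count the frames containing a face (my illustration, reusing the file name from the question; the MinSize value is an arbitrary placeholder):
videoFileReader = vision.VideoFileReader('Teapot.mp4', 'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity');
faceDetector = vision.CascadeObjectDetector('MinSize', [50 50]);  % placeholder size
frameCount = 0;
faceFrameCount = 0;
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);   % empty when no face is found
    frameCount = frameCount + 1;
    if ~isempty(bbox)
        faceFrameCount = faceFrameCount + 1; % at least one face in this frame
    end
end
release(videoFileReader);
fprintf('Faces detected in %d of %d frames.\n', faceFrameCount, frameCount);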

rotation image - different degrees

I am trying to do my own algorithm of rotating an image without using imrotate.
clear all
img1 = imread('image1.jpg');imshow(img1);
[m,n,p]=size(img1);
thet = pi/6;
m1=round(m*1.5);
n1=round(n*1.5);
rotatedImg = zeros(m1,n1);
for i = 1:m
    for j = 1:n
        t = uint16((i)*cos(thet) - (j)*sin(thet) + m + 100);
        s = uint16((i)*sin(thet) + (j)*cos(thet));
        if i>0 && j>0 && i<=m && j<=n
            try
                rotatedImg(t,s,1) = img1(i,j,1);
            catch
                a = 1;
            end
        end
    end
end
figure;
imshow(rotatedImg);
However, for some reason, at certain angles parts of the image get clipped, so the whole image does not fit in the window. I can't seem to work out how to do it properly. It seems like I need to make the window bigger for each angle so that the image won't be clipped.
Also, my image turns out to be full of black spots, which I assume means I need to do some sort of interpolation. How do I go about that?
The image I'm using is http://i.stack.imgur.com/kDdx5.jpg, and I am rotating it by these angles: pi/6, pi/2, (pi/6)*4.
Well, there are two issues:
The rotation is always around the origin. That's the reason you need to adjust the offset (100) for each angle. A better solution is to rotate around the image center.
You are not doing any kind of interpolation. While that is not the cause per se, it can happen that you don't hit every pixel in the destination image because of rounding errors. It's better to iterate through the destination image and grab the correct pixel from the source.
Here is my solution:
clear all
img1 = imread('ngc6543a.jpg');
imshow(img1);
[m,n,p]=size(img1);
thet = pi/6;
m1=round(m*1.5);
n1=round(n*1.5);
rotatedImg = zeros(m1,n1, 3, 'uint8');
tic
for i = 1:m1
    for j = 1:n1
        p = [i; j] - [m1/2; n1/2];
        source = [cos(thet), sin(thet); -sin(thet), cos(thet)] * p;
        source = source + [m/2; n/2];
        t = int16(source(1));
        s = int16(source(2));
        if t>0 && s>0 && t<=m && s<=n
            rotatedImg(i,j,:) = img1(t,s,:);
        end
    end
end
toc
figure;
imshow(rotatedImg);
And while that looks okay, I'd definitely recommend bilinear interpolation.
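For reference, here is a minimal sketch of what that bilinear sampling could look like inside the loop above, reusing its variable names and replacing the int16 rounding (my illustration, not part of the original answer):
% source is the back-projected [row; col] position, still floating point here
r = source(1);  c = source(2);
r0 = floor(r);  c0 = floor(c);
dr = r - r0;    dc = c - c0;
if r0 >= 1 && c0 >= 1 && r0+1 <= m && c0+1 <= n
    nb     = double(img1(r0:r0+1, c0:c0+1, :));         % 2x2 neighbourhood
    top    = (1-dc)*nb(1,1,:) + dc*nb(1,2,:);           % blend along columns
    bottom = (1-dc)*nb(2,1,:) + dc*nb(2,2,:);
    rotatedImg(i,j,:) = uint8((1-dr)*top + dr*bottom);  % blend along rows
end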
I see, so your images are not square. In that case it is not sufficient to calculate the new dimensions as old*1.5; you have to be more exact (or more generous).
Here is the final solution, which should work for all angles and arbitrary images. MATLAB is a bit fiddly because the indexing is (y,x), but otherwise the code should be fine.
clear all
img1 = imread('kDdx5.jpg');
imshow(img1);
[orgHeight,orgWidth,p]=size(img1);
thet = pi/7;
matrix = [cos(thet), -sin(thet); sin(thet), cos(thet)];
p1 = abs(matrix * [orgWidth/2; orgHeight/2]);
p2 = abs(matrix * [orgWidth/2; -orgHeight/2]);
corner = [max(p1(1), p2(1)); max(p1(2), p2(2))];
newWidth = ceil(2*corner(1));
newHeight = ceil(2*corner(2));
rotatedImg = zeros(newHeight, newWidth, 3, 'uint8');
tic
for i = 1:newWidth
    for j = 1:newHeight
        p = [i; j] - [newWidth/2; newHeight/2];
        source = matrix * p;
        source = source + [orgWidth/2; orgHeight/2];
        t = int16(source(1));
        s = int16(source(2));
        if t>0 && s>0 && s<=orgHeight && t<=orgWidth
            rotatedImg(j,i,:) = img1(s,t,:);
        end
    end
end
toc
figure;
imshow(rotatedImg);

Stretching an ellipse in an image to form a circle

I want to stretch an elliptical object in an image until it forms a circle. My program currently reads in an image with an elliptical object (e.g. a coin at an angle), thresholds and binarizes it, isolates the region of interest using edge detection/bwboundaries(), and runs regionprops() to calculate the major/minor axis lengths.
Essentially, I want to use the 'MajorAxisLength' as the diameter and stretch the object along the minor axis to form a circle. Any suggestions on how I should approach this would be greatly appreciated. I have appended some code for your perusal (unfortunately I don't have enough reputation to upload an image; the binarized image looks like a white ellipse on a black background).
EDIT: I'd also like to apply this technique to the gray-scale version of the image, to examine what the stretch looks like.
code snippet:
rgbImage = imread(fullFileName);
redChannel = rgbImage(:, :, 1);
binaryImage = redChannel < 90;
labeledImage = bwlabel(binaryImage);
area_measurements = regionprops(labeledImage,'Area');
allAreas = [area_measurements.Area];
biggestBlobIndex = find(allAreas == max(allAreas));
keeperBlobsImage = ismember(labeledImage, biggestBlobIndex);
measurements = regionprops(keeperBlobsImage,'Area','MajorAxisLength','MinorAxisLength')
You know the diameter of the circle, and you know the center is the location where the major and minor axes intersect. Thus, just compute the radius r from the diameter, and for every pixel in your image, check whether that pixel's Euclidean distance from the circle's center is less than r. If so, color the pixel white; otherwise, leave it alone.
[M,N] = size(redChannel);
new_image = zeros(M,N);
for ii = 1:M
    for jj = 1:N
        if( sqrt((jj-center_x)^2 + (ii-center_y)^2) <= radius )
            new_image(ii,jj) = 1.0;
        end
    end
end
This can probably be optimized by using the meshgrid function combined with logical indexing to avoid the loops.
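For example, a vectorized equivalent of the loop above might look like this (a sketch; center_x, center_y and radius are assumed to be defined as in the snippet above):
[X, Y] = meshgrid(1:N, 1:M);  % X is the column (x) coordinate, Y the row (y) coordinate
new_image = double(sqrt((X - center_x).^2 + (Y - center_y).^2) <= radius);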
I finally managed to figure out the transform required, thanks to a lot of help on the MATLAB forums. I thought I'd post it here in case anyone else needs it.
stats = regionprops(keeperBlobsImage, 'MajorAxisLength','MinorAxisLength','Centroid','Orientation');
alpha = pi/180 * stats(1).Orientation;                  % ellipse orientation in radians
Q = [cos(alpha), -sin(alpha); sin(alpha), cos(alpha)];  % rotation onto the ellipse axes
x0 = stats(1).Centroid.';                               % ellipse center
a = stats(1).MajorAxisLength;
b = stats(1).MinorAxisLength;
S = diag([1, a/b]);        % stretch the minor axis up to the major-axis length
C = Q*S*Q';                % apply the stretch along the ellipse's own axes
d = (eye(2) - C)*x0;       % translation that keeps the centroid fixed
tform = maketform('affine', [C d; 0 0 1]');
Im2 = imtransform(redChannel, tform);
subplot(2, 3, 5);
imshow(Im2);