Car Tracking using Optical Flow. Why aren't the vectors plotting properly? - matlab

I am new to optical flow and computer vision in general, and I started working with a simple demo example from MATLAB.
The objective is to read a video and plot the motion vectors on the screen. I am using the following code:
%% initialization
close all
clear all
% Create reader
reader = vision.VideoFileReader;
reader.Filename = 'viptraffic.avi';
% Create viewer
viewer = vision.DeployableVideoPlayer;
%%viewer.FrameRate = 10;
%Create Optical Flow
optical = vision.OpticalFlow; % how pixels are moving from one frame to the next
optical.OutputValue = 'Horizontal and vertical components in complex form'; % will allow us to draw a vector
% on the video so that we see how the pixels are moving from one frame to the next
% We pass the horizontal and vertical components to the shape inserter below
% Display vector fields
shapes = vision.ShapeInserter;
shapes.Shape = 'Lines';
shapes.BorderColor = 'white';
R = 1:4:120;%%downsample the optical flow field
C = 1:4:160;%%downsample the optical flow field
[Cv, Rv] = meshgrid (C, R); %%% display a grid on the image and take every fourth value
Rv = Rv(:)';
Cv = Cv(:)';
%% Execution
reset(reader)
%Set up for stream
while ~isDone(reader)
I = step(reader);
of = step(optical,rgb2gray(I));
size(of)
ofd = of(R,C);
size(ofd)
H = imag(ofd)*20;
V = real(ofd)*20;
%Draw lines on top of image
lines = [Rv;Cv; Rv+H(:)'; Cv+V(:)']; %% start point and end point (start + movement)
% lines = [Cv;Rv;Cv;Rv];
Ishp = step(shapes,I,lines);
step(viewer,Ishp);
end
release(viewer);
I do not know why the vector lines are not plotting properly.
Can anyone help me?
Thanks
PS: here is the result:

Try using
lines = [Rv(:); Cv(:); Rv(:)+H(:); Cv(:)+V(:)];
instead of
lines = [Rv;Cv; Rv+H(:)'; Cv+V(:)'];
Better yet, if you have a recent version of MATLAB, try using the insertShape function instead of vision.ShapeInserter.
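For example, a minimal sketch (assuming the real part of the flow is the horizontal/column component and the imaginary part is the vertical/row component, and reusing the H, V, Rv, Cv variables from the question; insertShape expects one [x1 y1 x2 y2] row per line):
Lpos = [Cv(:), Rv(:), Cv(:)+V(:), Rv(:)+H(:)]; % column = x, row = y
Ishp = insertShape(I, 'Line', Lpos, 'Color', 'white');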
Edit:
If you have a recent version of the Computer Vision System Toolbox, try the new optical flow functions: opticalFlowHS, opticalFlowLK, opticalFlowLKDoG, and opticalFlowFarneback.
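If you do go that route, here is a minimal sketch of the newer workflow (assuming R2014b or later; the noise threshold, decimation, and scale factors are arbitrary choices here):
vidReader = VideoReader('viptraffic.avi');
opticFlow = opticalFlowLK('NoiseThreshold', 0.009);
while hasFrame(vidReader)
    frameRGB  = readFrame(vidReader);
    frameGray = rgb2gray(frameRGB);
    flow = estimateFlow(opticFlow, frameGray);  % flow relative to the previous frame
    imshow(frameRGB)
    hold on
    plot(flow, 'DecimationFactor', [5 5], 'ScaleFactor', 10) % draw the motion vectors
    hold off
    drawnow
end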

Searching Across a Line in a Matrix in Octave

The attached image has a line with a break in it.
My code finds the line using a Hough transform, resulting in r=32 and theta=2.3213. The Hough transform isn't perfect: the angle (especially with a more complex image) is always off by a little bit, and in this case, because of the edge detection, the line is offset. I want to read values across the line to find the breaks in it. In order to do this, I will need to be able to sample values on either side of the line to find where the maximum density of the line is.
Further explanation (if you want it):
If you look closely at the image you can see areas where the line crosses a pixel pretty much dead on resulting in a value of nearly 1/white. Other areas have two pixels side by side with values of about .5/gray. I need to find a solution that takes into account the anti-aliasing of the line, and allows me to extract the breaks in it.
%Program Preparation
clear ; close all; clc % clear workspace, close figures, clear command window
pkg load image % load the image analysis package
pkg load optim
%Import Image
I_original = imread("C:/Users/3015799/Desktop/I.jpg");
%Process Image to make analysis quicker and more effective
I = mat2gray(I_original); %convert to black and white
I = edge(I, 'sobel');
%Perform Hough Transform
angles = pi*[-10:189]/180;
hough = houghtf(I,"line",angles);
%Detect hot spots in hough transform
detect = hough>.5*max(hough(:));
%Shrink hotspots to geometric center, and index
detect = bwmorph(detect,'shrink',inf);
[ii, jj] = find(detect);
r = ii - (size(hough,1)-1)/2;
theta = angles(jj);
%Cull duplicates. i.e outside of 0-180 degrees
dup = theta<-1e-6 | theta>=pi-1e-6;
r(dup) = [];
theta(dup) = [];
%Compute line parameters (using Octave's implicit singleton expansion)
r = r(:)'
theta = theta(:)'
x = repmat([1;1133],1,length(r)); % 2xN matrix, N==length(r)
y = (r - x.*cos(theta))./sin(theta); % solve line equation for y
%The above goes wrong when theta==0, fix that:
horizontal = theta < 1e-6;
x(:,horizontal) = r(horizontal);
y(:,horizontal) = [1;1133]; % these lines span the full range in y
%Plot
figure
imshow(I)
hold on
plot(y,x,'r-','linewidth',2)
If you are only interested in the length of the gap, this would be very easy:
clear all
pkg load image
img_fn = "input.jpg";
if (! exist (img_fn, "file"))
urlwrite ("https://i.stack.imgur.com/5UnpO.jpg", img_fn);
endif
Io = imread(img_fn);
I = im2bw (Io);
r = max(I);
c = max(I');
ri = find (diff(r));
ci = find (diff(c));
## both should have 4 elements (one break)
assert (numel (ri) == 4);
assert (numel (ci) == 4);
## the gap is in the middle
dx = diff(ri(2:3))
dy = diff(ci(2:3))
# the length is now easy
l = hypot (dy, dx)
gives
dx = 5
dy = 5
l = 7.0711
without any Hough transform. Of course, you also have to check the corner cases for horizontal and vertical lines, but this should give you an idea.
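If you also need to sample intensities along the detected line itself (as described in the question), here is a minimal sketch of one way to do it with interp2. It assumes r(1)/theta(1) from the Hough step in the question and uses the grayscale image rather than the edge image; the sample range, offsets, and the 0.2 threshold are arbitrary choices:
Ig = mat2gray(I_original);
t  = theta(1);                   % angle of the detected line
n  = [cos(t); sin(t)];           % unit normal (x = row, y = column coordinates, as in the question)
d  = [-sin(t); cos(t)];          % unit direction along the line
p0 = r(1)*n;                     % a point on the line
s  = linspace(-600, 600, 1201);  % arc-length samples along the line
xs = p0(1) + s*d(1);             % row coordinates of the samples
ys = p0(2) + s*d(2);             % column coordinates of the samples
% Average a few offsets on either side of the line to tolerate the slight
% offset and anti-aliasing mentioned in the question.
offsets = -1:0.5:1;
prof = zeros(size(s));
for k = 1:numel(offsets)
  prof = prof + interp2(Ig, ys + offsets(k)*n(2), xs + offsets(k)*n(1), 'linear', 0);
end
prof = prof / numel(offsets);
gap = s(prof < 0.2);             % arc-length positions where the line appears broken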

Segment out those objects that have holes in them

I have a binary image that has circles and squares in it.
imA = imread('blocks1.png');
A = im2bw(imA);
figure,imshow(A);title('Input Image - Blocks');
imBinInv = ~A;
figure(2); imshow(imBinInv); title('Inverted Binarized Original Image');
Some circles and squares have small holes in them; based on this, I have to generate an image that contains only those circles and squares that have holes/missing points in them. How can I code that?
PURPOSE: Later on, using regionprops in MATLAB, I will determine how many of those objects are circles and how many are squares.
You should use the Euler characteristic. It's a topological invariant which, in the 2D case, describes the number of holes in an object. You can calculate it using regionprops too:
STATS = regionprops(L, 'EulerNumber');
Any single object with no holes will have an Euler characteristic of 1, any single object with 1 hole will have an Euler characteristic of 0, two holes -> -1 etc. So you can segment out all the objects with EC < 1. It's pretty fast to calculate too.
imA = imread('blocks1.png');
A = logical(imA);
L = bwlabel(A); %just for visualizing, you can call regionprops straight on A
STATS = regionprops(L, 'EulerNumber');
holeIndices = find( [STATS.EulerNumber] < 1 );
holeL = false(size(A));
for i = holeIndices
holeL( L == i ) = true;
end
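As a small shortcut (a sketch, not part of the original answer), the loop can likely be replaced with a single ismember call on the label matrix:
holeL = ismember(L, holeIndices);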
Output holeL:
There might be a faster way, but this should work:
Afilled = imfill(A,'holes'); % fill holes
L = bwlabel(Afilled); % label each connected component
holes = Afilled - A; % get only holes
componentLabels = unique(nonzeros(L.*holes)); % get labels of components which have at least one hole
A = A.*L; % label original image
A(~ismember(A,componentLabels)) = 0; % delete all components which have no hole
A(A~=0)=1; % turn back from labels to binary - since you are later continuing with regionprops you maybe don't need this step.

Matlab: Rectify image with reference corner points

I want to rectify an image with perspective distortion. I have the corner points, and I also have an algorithm that performs what I need, but it executes really slowly. It uses the 'imtransform' and 'maketform' functions, for which MATLAB now has faster replacements. So I tried to replace them, but I couldn't get it right. Any help will be appreciated.
Here is the Images to make this question clearer:
Input Image with known Coordinates(x,y):
and Desired Output:
This process takes about 2 seconds; I need to replace it with the newer MATLAB functions, but I couldn't get it working.
The old algorithm was:
% X has the clockwise X coordinates, Y has the clockwise Y coordinates
A=zeros(8,8);
A(1,:)=[X(1),Y(1),1,0,0,0,-1*X(1)*x(1),-1*Y(1)*x(1)];
A(2,:)=[0,0,0,X(1),Y(1),1,-1*X(1)*y(1),-1*Y(1)*y(1)];
A(3,:)=[X(2),Y(2),1,0,0,0,-1*X(2)*x(2),-1*Y(2)*x(2)];
A(4,:)=[0,0,0,X(2),Y(2),1,-1*X(2)*y(2),-1*Y(2)*y(2)];
A(5,:)=[X(3),Y(3),1,0,0,0,-1*X(3)*x(3),-1*Y(3)*x(3)];
A(6,:)=[0,0,0,X(3),Y(3),1,-1*X(3)*y(3),-1*Y(3)*y(3)];
A(7,:)=[X(4),Y(4),1,0,0,0,-1*X(4)*x(4),-1*Y(4)*x(4)];
A(8,:)=[0,0,0,X(4),Y(4),1,-1*X(4)*y(4),-1*Y(4)*y(4)];
v=[x(1);y(1);x(2);y(2);x(3);y(3);x(4);y(4)];
u=A\v;
% our transfer function
U=reshape([u;1],3,3)';
w=U*[X';Y';ones(1,4)];
w=w./(ones(3,1)*w(3,:));
T=maketform('projective',U');
% apply the transform and rectify the image
P2=imtransform(I,T,'XData',[1 n],'YData',[1 m]);
If it helps, here is how I generated the "A" matrix and the U matrix:
Out Link
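In short, each pair of corresponding points contributes two rows to A; under the standard projective (homography) model with unknowns $u_1,\dots,u_8$:
$$x = \frac{u_1 X + u_2 Y + u_3}{u_7 X + u_8 Y + 1}, \qquad y = \frac{u_4 X + u_5 Y + u_6}{u_7 X + u_8 Y + 1}$$
Multiplying out gives the two linear equations encoded in rows A(1,:) and A(2,:):
$$u_1 X + u_2 Y + u_3 - u_7 X x - u_8 Y x = x, \qquad u_4 X + u_5 Y + u_6 - u_7 X y - u_8 Y y = y$$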
Using the built-in MATLAB functions (fitgeotrans, imref2d, and imwarp), the following code runs in 0.06 seconds on my laptop:
% read the image
im = imread('paper.jpg');
tic
% set the moving points := the original image control points
x = [1380;2183;1282;422];
y = [727;1166;2351;1678];
movingPoints = [x,y];
% set the fixed points := the desired image control points
xfix = [1;1000;1000;1];
yfix = [1;1;1000;1000];
fixedPoints = [xfix,yfix];
% generate geometric transform
tform = fitgeotrans(movingPoints,fixedPoints,'projective');
% generate reference object (full desired image size)
R = imref2d([1000 1000]);
% warp image
outputImage = imwarp(im,tform,'OutputView',R);
toc
% show image
imshow(outputImage);
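On newer MATLAB releases (R2022b or later, if I remember correctly), the premultiply-convention replacements fitgeotform2d and projtform2d can be used the same way; a minimal sketch with the same control points:
tform2 = fitgeotform2d(movingPoints, fixedPoints, 'projective'); % returns a projtform2d
outputImage = imwarp(im, tform2, 'OutputView', imref2d([1000 1000]));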

How to implement an integral image in sliding window detection?

I am doing a project to detect people in a crowd using HOG-LBP. I want to make it a real-time application. I've read in some references that an integral image/histogram can speed up sliding window detection. I want to ask: how do I incorporate the integral image into my sliding window detection?
Here is the code for the integral image in MATLAB:
A = (cumsum(cumsum(double(img)),2));
and here is my sliding window detection code:
im = strcat ('C:\Documents\Crowd_PETS09\S1\L1\Time_13-57\View_001\frame_0150.jpg');
im = imread (im);
figure (1), imshow(im);
win_size= [32, 32];
[lastRightCol lastRightRow d] = size(im);
counter = 1;
%% Scan the image using sliding window object detection
% this for loop scans the entire image and extracts features for each sliding window
% Loop on scales (based on size of the window)
for s=1
disp(strcat('s is',num2str(s)));
X=win_size(1)*s;
Y=win_size(2)*s;
for y = 1:X/4:lastRightCol-Y
for x = 1:Y/4:lastRightRow-X
%get four points for boxes
p1 = [x,y];
p2 = [x+(X-1), y+(Y-1)];
po = [p1; p2] ;
% cropped image based on the four points
crop_px = [po(1,1) po(2,1)];
crop_py = [po(1,2) po(2,2)];
topLeftRow = ceil(min(crop_px));
topLeftCol = ceil(min(crop_py));
bottomRightRow = ceil(max(crop_px));
bottomRightCol = ceil(max(crop_py));
cropedImage = im(topLeftCol:bottomRightCol,topLeftRow:bottomRightRow,:);
%Get the feature vector from croped image
HOGfeatureVector{counter}= getHOG(double(cropedImage));
LBPfeatureVector{counter}= getLBP(cropedImage);
LBPfeatureVector{counter}= LBPfeatureVector{counter}';
boxPoint{counter} = [x,y,X,Y];
counter = counter+1;
x = x+2;
end
end
end
Where should I put the integral image code?
I would really appreciate it if someone could help me figure it out.
Thank you.
The integral image is best suited to Haar-like features. Using it for HOG or LBP would be tricky. I would suggest first getting your algorithm working, and then thinking about optimizing it.
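To illustrate the idea (a minimal sketch, not part of the original answer): once the integral image from your question is padded with a leading row and column of zeros, the sum over any rectangular window takes just four lookups, which is what makes Haar-like box features cheap:
ii = padarray(cumsum(cumsum(double(img)), 2), [1 1], 0, 'pre'); % zero-padded integral image
% sum of img(r1:r2, c1:c2) via four corner lookups
S = ii(r2+1, c2+1) - ii(r1, c2+1) - ii(r2+1, c1) + ii(r1, c1);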
By the way, the Computer Vision System Toolbox includes the extractHOGFeatures function, which would be helpful. Here's an example of training a HOG-SVM classifier to recognize hand-written digits. Also there is a vision.PeopleDetector object, which uses a HOG-SVM classifier to detect people. You could either use it directly for your project, or use it to evaluate performance of your own algorithm.
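For instance, a minimal sketch of the detector in use (the file name is just a placeholder):
I = imread('frame_0150.jpg');                       % placeholder frame
peopleDetector = vision.PeopleDetector;             % HOG-SVM based detector
[bboxes, scores] = step(peopleDetector, I);         % detect people
annotated = insertObjectAnnotation(I, 'rectangle', bboxes, scores);
figure, imshow(annotated), title('Detected people');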

What does the videooptflowlines() function do in MATLAB?

What is the purpose of the function videooptflowlines()? Also, does the object hof contain information about the previous frame in order to calculate the optical flow?
hvfr = vision.VideoFileReader('viptraffic.avi', ...
'ImageColorSpace', 'Intensity', ...
'VideoOutputDataType', 'uint8');
hidtc = vision.ImageDataTypeConverter;
hof = vision.OpticalFlow('ReferenceFrameDelay', 1);
hof.OutputValue = 'Horizontal and vertical components in complex form';
hsi = vision.ShapeInserter('Shape','Lines','BorderColor','Custom', 'CustomBorderColor', 255);
hvp = vision.VideoPlayer('Name', 'Motion Vector');
while ~isDone(hvfr)
frame = step(hvfr);
im = step(hidtc, frame); % convert the image to 'single' precision
of = step(hof, im); % compute optical flow for the video
lines = videooptflowlines(of, 20); % generate coordinate points
if ~isempty(lines)
out = step(hsi, im, lines); % draw lines to indicate flow
step(hvp, out); % view in video player
end
end
release(hvp);
release(hvfr);
The function videooptflowlines is a helper function used by the demos (visiondemos) in the Computer Vision System Toolbox. You can see the code for this function by typing edit videooptflowlines in the MATLAB command window. A comment in the code states that, as its name indicates, the function is used in a help example for vision.OpticalFlow.
Essentially, the function does the basic math to create vector lines that indicate the optical flow direction. There are several parameters in the code that will probably depend on the resolution of the image used. If you're creating your own code that uses this function, you should probably create a copy of it and edit the new version to suit your needs.
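A rough sketch of what such a helper conceptually does (a hypothetical re-implementation, not the shipped code; it assumes the real part of the complex flow is the horizontal component and the imaginary part is the vertical one):
function lines = myOptFlowLines(of, scale)      % hypothetical helper
  [nRows, nCols] = size(of);
  [X, Y] = meshgrid(1:10:nCols, 1:10:nRows);    % sample every 10th pixel
  U = real(of(1:10:nRows, 1:10:nCols)) * scale; % horizontal flow component
  V = imag(of(1:10:nRows, 1:10:nCols)) * scale; % vertical flow component
  lines = [X(:), Y(:), X(:)+U(:), Y(:)+V(:)];   % one [x1 y1 x2 y2] row per vector
end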
The answer to your second question is "yes". A vision.OpticalFlow object does contain the information about the previous frame.