Motion History Image in MATLAB

I am working on a project on action recognition using motion history images (MHI) in MATLAB. I am new to this field. I did background subtraction using the frame-differencing method to get images that contain only the moving person. Now I want to compute the MHI. I found the following code for MHI, but I do not understand what fg{1} is or how to use it. Any help will be appreciated. Thank you.
vid = VideoReader('PS7A1P1T1.avi');
n = vid.NumberOfFrames;
fg = cell(1, n);
for i = 1:n
    frame = read(vid, i);
    frame = rgb2gray(frame);
    fg{i} = frame;
end
%---------------------------------------------------------------
% Background subtraction using the frame-differencing method
thresh = 25;
bg = fg{1}; % read in 1st frame as background frame
% ----------------------- set frame size variables -----------------------
fr_size = size(bg);
width = fr_size(2);
height = fr_size(1);
% --------------------- process frames -----------------------------------
for i = 2:n
    fr = fg{i}; % read in frame
    fr_diff = abs(double(fr) - double(bg)); % cast operands as double to avoid negative overflow
    for j = 1:width % if fr_diff > thresh, the pixel is in the foreground
        for k = 1:height
            if fr_diff(k,j) > thresh
                fg{i}(k,j) = fr(k,j);
            else
                fg{i}(k,j) = 0;
            end
        end
    end
    bg = fr;
    imshow(fg{i})
end
out = MHI(fg);
%----------------------------------------
function MHI = MHI(fg)
% Initialize the output, MHI a.k.a. H(x,y,t,T)
MHI = fg;
% Define MHI parameter T
T = 15; % # of frames being considered; maximal value of MHI
% Load the first frame
frame1 = fg{1};
% Get dimensions of the frames
[y_max, x_max] = size(frame1);
% Compute H(x,y,1,T) (the first MHI)
MHI{1} = fg{1} .* T;
% Start global loop for each frame
for frameIndex = 2:length(fg)
    % Load current frame from image cell
    frame = fg{frameIndex};
    % Begin looping through each point
    for y = 1:y_max
        for x = 1:x_max
            if frame(y,x) == 255
                MHI{frameIndex}(y,x) = T;
            else
                if MHI{frameIndex-1}(y,x) > 1
                    MHI{frameIndex}(y,x) = MHI{frameIndex-1}(y,x) - 1;
                else
                    MHI{frameIndex}(y,x) = 0;
                end
            end
        end
    end
end
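As an aside (my addition, not part of the found code), the per-pixel nested loops in MHI can be replaced by logical indexing; here is a minimal equivalent sketch, assuming the same binary (0/255) foreground frames and the T defined in the function. The same pattern also vectorizes the thresholding loop in the script above.
for frameIndex = 2:length(fg)
    prev = double(MHI{frameIndex-1});
    cur = zeros(size(prev));
    moving = fg{frameIndex} == 255;           % pixels active in this frame
    cur(moving) = T;                          % reset active pixels to T
    cur(~moving) = max(prev(~moving) - 1, 0); % decay inactive pixels, floor at 0
    MHI{frameIndex} = cur;
end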

fg{1} is most likely the first frame of a grayscale video. Given your comments, you are using the VideoReader class to read in frames. As such, read in each frame individually, convert it to grayscale, then place it in a cell of a cell array. When you're done, call the function.
Here's the code modified from your comments to suit this task:
vid = VideoReader('PS7A1P2T1.avi');
n = vid.NumberOfFrames;
fg = cell(1, n);
for i = 1:n
    frame = read(vid, i);
    frame = rgb2gray(frame);
    fg{i} = frame;
end
You can then call the MHI function:
out = MHI(fg);
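If you want to look at the result, each cell of out holds the motion history at that frame, with values from 0 (no recent motion) to T (motion in the current frame); a quick sketch, assuming T = 15 as hard-coded in the function:
% Display the motion history at the final frame: recent motion is
% brightest, older motion fades toward black.
T = 15;
figure, imshow(out{end}, [0 T])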

Related

How to process image before applying bwlabel?

I = imread('Sub1.png');
figure, imshow(I);
I = imcomplement(I);
I = double(I)/255;
I = adapthisteq(I,'clipLimit',0.0003,'Distribution','exponential');
k = 12;
beta = 2;
maxIter = 100;
for i=1:length(beta)
    [seg,prob,mu,sigma,it(i)] = ICM(I, k, beta(i), maxIter, 5);
    pr(i) = prob(end);
    hold on;
end
figure, imshow(seg,[]);
and the ICM function is defined as
function [segmented_image,prob,mu,sigma,iter] = ICM(image, k, beta, max_iterations, neigh)
[width, height, bands] = size(image);
image = imstack2vectors(image);
segmented_image = init(image,k,1);
clear c;
iter = 0;
seg_old = segmented_image;
while (iter < max_iterations)
    [mu, sigma] = stats(image, segmented_image, k);
    E1 = energy1(image,mu,sigma,k);
    E2 = energy2(segmented_image, beta, width, height, k);
    E = E1 + E2;
    [p2,~] = min(E2,[],2);
    [p1,~] = min(E1,[],2);
    [p,segmented_image] = min(E,[],2);
    prob(iter+1) = sum(p);
    % find mismatch with previous step
    [c,~] = find(seg_old~=segmented_image);
    mismatch = (numel(c)/numel(segmented_image))*100;
    if mismatch < 0.1
        iter
        break;
    end
    iter = iter + 1;
    seg_old = segmented_image;
end
segmented_image = reshape(segmented_image,[width height]);
end
The output of my algorithm is a label matrix (seg) of size 305-by-305. When I use
imshow(seg,[]);
I am able to display the image; it shows the different components with varying gray values. But bwlabel returns only one component. I want to display the connected components separately. I think bwlabel thresholds the image to a single binary region. unique(seg) returns the values 1 to 10, since the number of classes used in k-means is 10. I used
[label, n] = bwlabel(seg);
RGB = label2rgb(label);
figure, imshow(RGB);
I need all the ellipse-like structures which are in between the two squares close to the middle of the image. I don't know the number of classes present in it.
Input image:
Ground truth:
My output:
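(Aside: bwlabel expects a binary image and treats every nonzero pixel as foreground, which is why a dense label matrix with values 1 to 10 comes back as a single connected component. A quick check on the question's seg:)
% bwlabel binarizes numeric input, so a label matrix that covers the
% whole image collapses into one connected component:
[L, num] = bwlabel(seg > 0); % num comes back as 1 here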
If you want to split the label image into separate connected components, you need to loop over the classes, extract a mask for each class, label it with bwlabel, and accumulate the offset label images into one output label image.
u = unique(seg(:));
out = zeros(size(seg));
num_objs = 0;
for k = 1:numel(u)
    mask = seg==u(k);
    [L,N] = bwlabel(mask);
    L(mask) = L(mask) + num_objs;
    out = out + L;
    num_objs = num_objs + N;
end
mp = jet(num_objs);
figure, imshow(out, mp)
Something like this is produced:
I have tried to do everything from scratch; I hope it is of some help.
My processing chain first extracts contours, with parameters tuned on a trial-and-error basis, I confess. The last "image" is given at the bottom; with it, you can easily select the connected components and do, for example, a reconstruction by markers using the imreconstruct operator (a sketch of that step follows after the code).
clear all; close all;
I = imread('C:\Users\jean-marie.becker\Desktop\imagesJPG10\spinalchord.jpg');
figure, imshow(I);
J = I(:,:,1); % select a single channel (the red one; the channels are similar in this jpg image)
J = double(J < 50); % threshold; I haven't inverted the image
figure, imshow(J);
se = strel('disk',5);
J = J - imopen(J,se);
figure, imshow(J);
J = imopen(J,ones(1,15)); % favors long horizontal strokes
figure, imshow(J);
K = imdilate(J,ones(20,1),'same');
% connects vertically not-too-distant horizontal "segments"
figure, imshow(K);
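A minimal sketch of the reconstruction-by-markers step mentioned above (my addition, under the assumption that the dilated strokes K are used to mark which components of the thresholded image to keep):
% Keep the full components of the thresholded image that are touched by K.
% imreconstruct requires the marker to lie under the mask, so clip it.
mask = I(:,:,1) < 50;                   % original thresholded image
marker = logical(min(K, double(mask))); % marker restricted to the mask
R = imreconstruct(marker, mask);
figure, imshow(R);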

Problems with initializing CamShift algorithm

I am using MATLAB to implement a tracker based on CamShift. I need to implement it myself, hence I am not using the built-in method.
I let the user choose the object to track, and after the first iteration the convergence makes it start tracking a different object. Pictures speak louder than words, so:
I choose my face in the first frame:
It converges to my shoulder and starts tracking it:
Now the tracking is overall OK throughout the video, so I think the issue is in the initialization phase.
Some code. Getting the position from the user:
function [pos] = getObjectPosition(F)
f = figure;
imshow(F);
pos = getPosition(imrect(gca));
close(f);
end
Main Loop:
firstFrame = readFrame(input);
pos = getObjectPosition(firstFrame);
while hasFrame(input)
    % init next frame
    nextFrame = readFrame(input);
    currFrameHSV = rgb2hsv(nextFrame);
    frameHue = double(currFrameHSV(:,:,1));
    % init while loop vars
    prevX = -1;
    prevY = -1;
    i = 0;
    while (i < numOfIterations) && ~(prevX == pos(1) && prevY == pos(2))
        % update the current iteration number
        i = i + 1;
        % get search window location and dimensions
        envRect = round(getEnvRect(pos,pos(4)/yRatio,pos(3)/xRatio,height,width));
        [X,Y] = meshgrid(envRect(1):envRect(1) + envRect(3), ...
                         envRect(2):envRect(2) + envRect(4));
        % get the sub image
        I = imcrop(frameHue,envRect);
        M00 = sum(sum(I));
        M10 = sum(sum(X.*I));
        M01 = sum(sum(Y.*I));
        % get center-of-mass location
        Xc = round(M10/M00);
        Yc = round(M01/M00);
        % update object position
        prevX = pos(1);
        prevY = pos(2);
        pos(1) = floor(Xc - pos(3)/2);
        pos(2) = floor(Yc - pos(4)/2);
    end
    % add the object's bounding box to the frame to write
    % show the image
end
The getEnvRect function:
function [ envRect ] = getEnvRect( feature,hShift,wShift,rows,cols )
hEnvSize = feature(4) + 2*hShift;
wEnvSize = feature(3) + 2*wShift;
xPatch = feature(1);
xEnv = max(1, xPatch - wShift);
yPatch = feature(2);
yEnv = max(1, yPatch - hShift);
width = min(cols, xEnv + wEnvSize) - xEnv;
height = min(rows, yEnv + hEnvSize) - yEnv;
envRect = [xEnv, yEnv, width, height];
end
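For what it's worth, CamShift normally weights pixels by the back-projection of a hue histogram taken from the initial selection, rather than by the raw hue values used in the moment computation above; with raw hue, the centroid is pulled toward whatever nearby region happens to have large hue values, which can produce exactly this kind of drift. A minimal sketch of building such a weight image (hueBins and the other names here are illustrative, not from the original code):
% Back-projection: each pixel is weighted by the likelihood of its hue
% under a histogram model of the initially selected region.
hueBins = 16;
binEdges = linspace(0, 1, hueBins+1);
roi = imcrop(frameHue, pos);                         % initial user selection
roiHist = histcounts(roi(:), binEdges, 'Normalization', 'probability');
binIdx = discretize(frameHue(:), binEdges);          % hue bin of every pixel
backProj = reshape(roiHist(binIdx), size(frameHue)); % per-pixel weight image
% The moments M00, M10, M01 would then be computed on backProj
% instead of on frameHue.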

Executing nested functions with a timer object

I am trying to write a program which acquires an image from a webcam, averages over several of the vertical pixel rows, displays the result, and then applies a Gaussian fit to locate the peak position with a high degree of accuracy. All of this is to be done in real time. I used a timer object to run the function (called datain) which triggers the camera and acquires the data. To iterate and update the data, the function includes a while loop. My code worked perfectly until I added a command inside the while loop that calls another function (called GaussianFit) which performs the fit. When this function is added, I receive an error saying the callback function in the timer does not have enough inputs:
"Error while evaluating TimerFcn for timer 'timer-1' Not enough input arguments."
This seems odd, since the data acquisition function provides all the inputs needed by the fitting function. My code is as follows:
function WaveAQ()
global webcamin
%%%%%%%%%%%%%%%%%%%%%%%
% Create video object %
%%%%%%%%%%%%%%%%%%%%%%%
dID = 2; vForm = 'MJPG_1280x720';
% Search for existing video objects to avoid creating multiple handles
vObj = imaqfind('DeviceID', dID, 'VideoFormat', vForm);
if isempty(vObj)
    webcamin = videoinput('winvideo',dID,vForm);
else
    webcamin = vObj{1};
end
% Configure # of datasets to be averaged per measurement
set(webcamin,'FramesPerTrigger',1);
% Repeat until stopped
set(webcamin,'TriggerRepeat',Inf);
% Set to grayscale
set(webcamin,'ReturnedColorSpace','grayscale');
% Allow webcam to be triggered by timer
triggerconfig(webcamin,'manual')
% % View the properties for the selected video source object.
% src_obj = getselectedsource(webcamin);
% get(src_obj)
% Prompt for dark reading
j = 1;
j = input('Would you like to take a dark reading? (y/n): ','s');
switch j
    case 'y'
        j = 1;
    case 'n'
        j = 2;
end
if j == 1
    prompt = input('Block input and enter any number to continue: ');
end
% We now define a nested function which will acquire data from the camera,
% to be executed by the timer object
fig1 = figure(1);
    function datain(~,event,input)
        persistent image
        h = 1; % initialize loop
        bg = 0;
        avgroi = 0;
        % Calibration and fitting constants
        tol = 1;
        calib_slope = .003126;
        calib_offset = 781.8;
        spect = (1:1280)*calib_slope + calib_offset;
        while h == 1
            % Check if figure is open
            h = ishandle(fig1);
            trigger(input);
            % First run finds dark reading
            if j == 1
                bg = getdata(input,1,'uint8');
                j = 2;
            else
                image = getdata(input,1,'uint8');
                image = image - bg; % correct for background
                npxy = length(image(:,1));
                npxx = length(image(1,:));
                % Region of interest for spectral data
                dy_roi = 200;
                y_roi = npxy/2;
                % Region of interest for beam position
                dx_roi = npxx/2;
                x_roi = npxx/2;
                avgroi = sum(image((y_roi-dy_roi):(y_roi+dy_roi),:))/(2*dy_roi);
                avgroi = avgroi/256; % normalize units
                avg_y = (sum(image(:,(x_roi-dx_roi+1):(x_roi+dx_roi))).'/(2*dx_roi)).';
                [cx,sx,Ipeak] = GaussianFit(avgroi,tol);
                % cx = max(avgroi);
                % sx = 2;
                % Ipeak = max(avgroi);
                subplot(2,1,1)
                plot(spect,avgroi)
                title('Spectrogram')
                xlabel('Wavelength (nm)')
                ylabel('Intensity (arb.)')
                legend(['\lambda_P_e_a_k = ',num2str(cx),'nm'])
                subplot(2,1,2)
                plot(1:length(avg_y),avg_y/256)
                title('Beam Profile in Y')
                xlabel('Pixel #')
                ylabel('Intensity (arb.)')
            end
        end
    end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Define timer object to execute daq %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Set callback properties
timerfcn = @datain;
% Image acquisition rate
Raq = 5;
timer_exec = timer('TimerFcn',{timerfcn,webcamin},'Period',1/Raq,'BusyMode','drop'); % maybe change BusyMode to 'queue' if a regular acquisition interval is needed
% timer_exec2 = timer('TimerFcn',@GaussianFit,'Period',1/Raq,'BusyMode','drop');
% Execute acquisition
start(webcamin);
preview(webcamin);
start(timer_exec);
% start(timer_exec2);
% Clean up system
stop(timer_exec);
% stop(timer_exec2);
delete(timer_exec);
stop(webcamin);
delete(webcamin);
clear functions;
end
Can anyone tell me why the timer needs more inputs when GaussianFit is called inside the loop?
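For reference, when a TimerFcn is specified as a cell array, MATLAB always calls the handle with the timer object and an event struct first, followed by the extra cell entries; a minimal standalone sketch of that calling convention (toy names, not the original program):
% The callback receives (timerObj, event) before any extra arguments,
% so {cb, 42} results in calls of the form cb(t, event, 42).
cb = @(tObj, event, extra) fprintf('tick, extra = %d\n', extra);
t = timer('TimerFcn', {cb, 42}, 'Period', 0.5, ...
          'ExecutionMode', 'fixedRate', 'TasksToExecute', 3);
start(t); wait(t); delete(t);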

Issues with imgIdx in DescriptorMatcher mexopencv

My idea is simple here. I am using mexopencv and trying to see whether there is any object present in my current frame that matches any image stored in my database. I am using the OpenCV DescriptorMatcher function to train my images.
Here is a snippet I am wishing to build on top of. It does one-to-one image matching using mexopencv and can also be extended for an image stream.
function hello
    detector = cv.FeatureDetector('ORB');
    extractor = cv.DescriptorExtractor('ORB');
    matcher = cv.DescriptorMatcher('BruteForce-Hamming');
    train = [];
    for i=1:3
        train(i).img = [];
        train(i).points = [];
        train(i).features = [];
    end
    train(1).img = imread('D:\test\1.jpg');
    train(2).img = imread('D:\test\2.png');
    train(3).img = imread('D:\test\3.jpg');
    for i=1:3
        frameImage = train(i).img;
        framePoints = detector.detect(frameImage);
        frameFeatures = extractor.compute(frameImage, framePoints);
        train(i).points = framePoints;
        train(i).features = frameFeatures;
    end
    for i=1:3
        boxfeatures = train(i).features;
        matcher.add(boxfeatures);
    end
    matcher.train();
    camera = cv.VideoCapture;
    pause(3); % sometimes necessary
    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);
    while true
        sceneImage = camera.read;
        sceneImage = rgb2gray(sceneImage);
        scenePoints = detector.detect(sceneImage);
        sceneFeatures = extractor.compute(sceneImage,scenePoints);
        m = matcher.match(sceneFeatures);
        %{
        % Comments in
        img_no = m.imgIdx;
        img_no = img_no(1);
        % I am planning to do this based on the fact that
        % on a perfect match imgIdx, a 1xN, will be filled
        % with the index of the training
        % example 1, 2 or 3
        objPoints = train(img_no+1).points;
        boxImage = train(img_no+1).img;
        ptsScene = cat(1,scenePoints([m.queryIdx]+1).pt);
        ptsScene = num2cell(ptsScene,2);
        ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
        ptsObj = num2cell(ptsObj,2);
        % This is where the problem starts: assuming the
        % above is correct, MATLAB yells this at me:
        % "Index exceeds matrix dimensions."
        [H,inliers] = cv.findHomography(ptsScene,ptsObj,'Method','Ransac');
        m = m(inliers);
        imgMatches = cv.drawMatches(sceneImage,scenePoints,boxImage,boxPoints,m,...
            'NotDrawSinglePoints',true);
        imshow(imgMatches);
        % Comment out
        %}
        flag = getappdata(window,'flag');
        if isempty(flag) || flag, break; end
        pause(0.0001);
    end
Now the issue here is that imgIdx is a 1xN matrix, and it contains the indices of different training images, which is obvious. Only on a perfect match is imgIdx completely filled with the index of one matched image. So, how do I use this matrix to pick the right image index? Also, in these two lines I get the error of the index exceeding the matrix dimensions:
ptsObj = cat(1,objPoints([m.trainIdx]+1).pt);
ptsObj = num2cell(ptsObj,2);
This is expected, since while debugging I saw clearly that the size of m.trainIdx is greater than objPoints, i.e. I am accessing points which I should not; hence the index exceeds the matrix dimensions.
There is scant documentation on the use of imgIdx, so anybody who has knowledge on this subject, I need help.
These are the images I used.
Image1
Image2
Image3
1st update after @Amro's response:
With the ratio of min distance to distance at 3.6 , I get the following response.
With the ratio of min distance to distance at 1.6 , I get the following response.
I think it is easier to explain with code, so here it goes :)
%% init
detector = cv.FeatureDetector('ORB');
extractor = cv.DescriptorExtractor('ORB');
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
urls = {
'http://i.imgur.com/8Pz4M9q.jpg?1'
'http://i.imgur.com/1aZj0MI.png?1'
'http://i.imgur.com/pYepuzd.jpg?1'
};
N = numel(urls);
train = struct('img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));
%% training
for i=1:N
    % read image
    train(i).img = imread(urls{i});
    if ~ismatrix(train(i).img)
        train(i).img = rgb2gray(train(i).img);
    end
    % extract keypoints and compute features
    train(i).pts = detector.detect(train(i).img);
    train(i).feat = extractor.compute(train(i).img, train(i).pts);
    % add to training set to match against
    matcher.add(train(i).feat);
end
% build index
matcher.train();
%% testing
% lets create a distorted query image from one of the training images
% (rotation+shear transformations)
t = -pi/3; % -60 degrees angle
tform = [cos(t) -sin(t) 0; 0.5*sin(t) cos(t) 0; 0 0 1];
img = imwarp(train(3).img, affine2d(tform)); % try all three images here!
% detect fetures in query image
pts = detector.detect(img);
feat = extractor.compute(img, pts);
% match against training images
m = matcher.match(feat);
% keep only good matches
%hist([m.distance])
m = m([m.distance] < 3.6*min([m.distance]));
% sort by distances, and keep at most the first/best 200 matches
[~,ord] = sort([m.distance]);
m = m(ord);
m = m(1:min(200,numel(m)));
% naive classification (majority vote)
tabulate([m.imgIdx]) % how many matches each training image received
idx = mode([m.imgIdx]);
% matches with keypoints belonging to chosen training image
mm = m([m.imgIdx] == idx);
% estimate homography (used to locate object in query image)
ptsQuery = num2cell(cat(1, pts([mm.queryIdx]+1).pt), 2);
ptsTrain = num2cell(cat(1, train(idx+1).pts([mm.trainIdx]+1).pt), 2);
[H,inliers] = cv.findHomography(ptsTrain, ptsQuery, 'Method','Ransac');
% show final matches
imgMatches = cv.drawMatches(img, pts, ...
train(idx+1).img, train(idx+1).pts, ...
mm(logical(inliers)), 'NotDrawSinglePoints',true);
% apply the homography to the corner points of the training image
[h,w] = size(train(idx+1).img);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
% show where the training object is located in the query image
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
The result:
Note that since you did not post any testing images (in your code you are taking input from the webcam), I created one by distorting one of the training images and using it as a query image. I am using functions from certain MATLAB toolboxes (imwarp and such), but those are non-essential to the demo and you could replace them with equivalent OpenCV ones...
I must say that this approach is not the most robust one. Consider using other techniques such as the bag-of-words model, which OpenCV already implements.
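As a side note, the fixed 3.6*min(...) cutoff above could be swapped for Lowe's ratio test; a sketch, assuming the matcher exposes knnMatch as in mexopencv (the 0.75 threshold is the conventional choice, not something from the original answer):
% Lowe's ratio test: keep a match only when its best distance is clearly
% smaller than the second-best distance for the same query descriptor.
m2 = matcher.knnMatch(feat, 2);   % two nearest training matches per descriptor
good = cellfun(@(c) numel(c) == 2 && c(1).distance < 0.75*c(2).distance, m2);
best = cellfun(@(c) c(1), m2(good), 'UniformOutput', false);
m = [best{:}];                    % struct array of retained matches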

How to place image A onto image B, ignoring all 0 pixels of image A

I want to place a modelImage (RGB image) on baseImage (RGB image) where the center of modelImage will be placed at pPoint.
I already wrote the function, and it works. However, there are some 0 pixels in modelImage, and I do not want to place those onto baseImage. Can you help me modify my function?
% Put the modelImage onto baseImage at pPoint
function newImage = imgTranslate(modelImage, baseImage, pPoint)
[nRow, nCol, noDim] = size(modelImage);
pPointX = pPoint(1);
pPointY = pPoint(2);
startColumn = pPointY - nCol/2;
startRow = pPointX - nRow/2;
startColumn = round(startColumn);
startRow = round(startRow);
endColumn = startColumn + nCol;
endRow = startRow + nRow;
%% Place modelImage onto baseImage, BUT I want to ignore 0 pixels of modelImage
baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:) = modelImage;
newImage = baseImage;
Using FOR and IF works, but it slows the program:
%%
for i = startRow:(endRow-1)
    x = i - startRow + 1;
    for j = startColumn:(endColumn-1)
        y = j - startColumn + 1;
        if modelImage(x,y,:) ~= 0
            baseImage(i,j,:) = modelImage(x,y,:);
        end
    end
end
Is there any way to avoid using FOR and IF?
There may be a way to simplify this, but something like (untested):
% Indicator map of non-zero pixels (any channel non-zero)
nzs = any(modelImage,3);
nzs = repmat(nzs, [1 1 3]);
% Crop the destination region
tmpImage = baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:);
% Copy the non-zero pixels over
tmpImage(nzs) = modelImage(nzs);
% Place into the output image as before
baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:) = tmpImage;
An alternative indicator that works per channel rather than per pixel (each non-zero channel value is copied, even if the pixel's other channels are zero):
% Indicator map of non-zero elements, per channel
zs = modelImage == 0;
nzs = ~zs;
% Crop the destination region
tmpImage = baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:);
% Copy the non-zero elements over
tmpImage(nzs) = modelImage(nzs);
% Place into the output image as before
baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:) = tmpImage;
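Putting it together, a sketch of the modified function with the vectorized replacement (same variable names as the question; the any-channel variant is shown, so a pixel is copied if any of its channels is non-zero):
function newImage = imgTranslate(modelImage, baseImage, pPoint)
% Place modelImage centered at pPoint, skipping all-zero pixels.
[nRow, nCol, ~] = size(modelImage);
startRow = round(pPoint(1) - nRow/2);
startColumn = round(pPoint(2) - nCol/2);
rows = startRow:(startRow + nRow - 1);
cols = startColumn:(startColumn + nCol - 1);
nzs = repmat(any(modelImage,3), [1 1 3]); % pixels with any non-zero channel
tmpImage = baseImage(rows, cols, :);      % crop the destination region
tmpImage(nzs) = modelImage(nzs);          % overwrite only non-zero pixels
baseImage(rows, cols, :) = tmpImage;
newImage = baseImage;
end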