I have an inverted pendulum video which is 33 seconds long. The objective is to plot a red point at the center of the moving rectangular part of the pendulum, and to plot a line along the black stick, calculating its angle for every frame.
I process the video frame by frame. I have tried the approach from Object Detection In A Cluttered Scene Using Point Feature Matching. It would help if I had access to the matched points' indices; then I could easily calculate the angle.
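For reference, matchFeatures does return those index pairs; a minimal sketch, assuming part/grayFrame and their SURF points exist as in my code below:

[featPart,  validPart]  = extractFeatures(part, partPoints);
[featFrame, validFrame] = extractFeatures(grayFrame, grayFramePoints);
indexPairs   = matchFeatures(featPart, featFrame);  % [idxInPart, idxInFrame]
matchedPart  = validPart(indexPairs(:,1));   % matched points in the template
matchedFrame = validFrame(indexPairs(:,2));  % corresponding points in the frame
% The stick's angle could then be estimated from the matched locations.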
I have also thought that I could take the moving rectangular part's region and search for similar regions in subsequent frames, but this approach seems too local.
I do not know which techniques to apply.
clear all;
clc;

hVideoFileReader = vision.VideoFileReader;
hVideoPlayer = vision.VideoPlayer;
hVideoFileReader.Filename = 'inverted-pendulum.avi';
hVideoFileReader.VideoOutputDataType = 'single';

isFirstFrame = true; % flag must be initialized before the loop
while ~isDone(hVideoFileReader)
    frame = step(hVideoFileReader);  % read each frame only once;
    grayFrame = rgb2gray(frame);     % a second step() call would skip a frame
    if isFirstFrame
        part = grayFrame(202:266, 202:282); % moving part's region
        isFirstFrame = false;
        subplot(1,2,1);
        imshow(part);
    end
    partPoints = detectSURFFeatures(part);
    grayFramePoints = detectSURFFeatures(grayFrame);
    hold on;
    subplot(1,2,1), plot(partPoints.selectStrongest(10));
    subplot(1,2,2), imshow(grayFrame);
    subplot(1,2,2), plot(grayFramePoints.selectStrongest(20));
    frame2 = pointPendulumCenter(frame);
    frame3 = plotLineAlongStick(frame2);
    step(hVideoPlayer, frame3);
    hold off;
end
release(hVideoFileReader);
release(hVideoPlayer);
%% Function to find the moving part's center point and plot a red dot on it.
function f = pointPendulumCenter(frame)
end

%% Function to plot a red line along the stick after calculating its angle.
function f = plotLineAlongStick(frame)
end
It would make the problem much easier if your camera did not move. If you take your video with a stationary camera (e.g. mounted on a tripod) then you can use vision.ForegroundDetector to segment out moving objects from the static background.
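A minimal sketch of that idea, assuming a stationary camera (the training-frame count, blob-area threshold, and morphological cleanup below are placeholder choices, not tested values):

reader   = vision.VideoFileReader('inverted-pendulum.avi');
detector = vision.ForegroundDetector('NumTrainingFrames', 50);
blobber  = vision.BlobAnalysis('AreaOutputPort', false, ...
                               'BoundingBoxOutputPort', false, ...
                               'MinimumBlobArea', 200);
while ~isDone(reader)
    frame = step(reader);
    mask  = step(detector, frame);          % logical foreground mask
    mask  = imopen(mask, strel('disk', 3)); % suppress speckle noise (assumed cleanup)
    centroids = step(blobber, mask);        % centroids of the moving blobs
    % The largest blob's centroid approximates the cart's red point;
    % fitting a line to the stick's mask pixels then gives its angle.
end
release(reader);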
I am struggling with an algorithm to extract the region of an image that has the maximum change in pixels. I got the following image after preprocessing.
The pre-processing steps I did were:
x = imread('test2.jpg');
gray_x = rgb2gray(x);
gray_x = medfilt2(gray_x, [3 3]); % 3x3 median filter to suppress noise
%%
canny_x = edge(gray_x, 'canny', 0.3);
figure, imshow(canny_x);
%%
s = strel('disk', 3);
si = imdilate(canny_x, s);
figure; imshow(si); title('Dilation');
se = imerode(canny_x, s);
figure; imshow(se); title('Erosion');
I = imsubtract(si, se); % morphological gradient of the edge map
figure; imshow(I);
Basically, what I am struggling with is building a weapon detection system using image processing. I want to localize possible weapon areas so that I can feed them to my classifier, which identifies whether each one is a weapon or not. Any suggestions? Thank you.
A possible solution could be:

1. Find corner points in the image (Harris corner points, etc.).
2. Set the value of all corner points to white while the rest of the image is black.
3. Slide a rectangular window over the whole image.
4. At each position, sum all the white pixels inside the window.
5. Select the region whose sum is the maximum over all regions.
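A minimal MATLAB sketch of these steps (the input file is taken from the question; the 64x64 window size is an arbitrary choice):

img = rgb2gray(imread('test2.jpg'));   % input image from the question
corners = detectHarrisFeatures(img);   % Harris corner points
mask = zeros(size(img));               % corners white, rest black
loc  = round(corners.Location);        % [x y] pixel coordinates
mask(sub2ind(size(mask), loc(:,2), loc(:,1))) = 1;
win = ones(64, 64);                    % rectangular window (size is a guess)
density = conv2(mask, win, 'same');    % white-pixel sum under the window at each shift
[~, best] = max(density(:));           % position with the maximum sum
[r, c] = ind2sub(size(density), best); % center of the selected region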
The background of this problem is my attempt to combine output from a ray tracer with MATLAB's 3D plotting. When doing ray tracing, there is no need to apply a perspective transformation to the rendered image, as you can see in the image below: the intersections of the rays with the viewport automatically account for the perspective scaling.
Suppose I have created a ray-traced image (so I am given my camera position, focal length, viewport dimensions, etc.). How do I create exactly the same view in MATLAB's 3D plotting environment?
Here is an example:
clear
close all
evec = [0 200 300]; % Camera position
recw = 200; % cm width of box
recl = 200; % cm length of box
h = 150; % cm height of box
% Create the front face rectangle
front = zeros(3,5);
front(:,1) = [-recw/2; 0; -recl/2];
front(:,2) = [recw/2; 0; -recl/2];
front(:,3) = [recw/2; h; -recl/2];
front(:,4) = [-recw/2; h; -recl/2];
front(:,5) = front(:,1);
% Back face rectangle
back = zeros(3,5);
back(:,1) = [-recw/2; 0; recl/2];
back(:,2) = [recw/2; 0; recl/2];
back(:,3) = [recw/2; h; recl/2];
back(:,4) = [-recw/2; h; recl/2];
back(:,5) = back(:,1);
% Plot the world view
figure(1);
patch(front(1,:), front(2,:), front(3,:), 'r'); hold all
patch(back(1,:), back(2,:), back(3,:), 'b');
plot3(evec(1), evec(2), evec(3), 'bo');
xlabel('x'); ylabel('y'); zlabel('z');
title('world view'); view([-30 40]);
% Plot the camera view
figure(2);
patch(front(1,:), front(2,:), front(3,:), 'r'); hold all
patch(back(1,:), back(2,:), back(3,:), 'b');
xlabel('x'); ylabel('y'); zlabel('z');
title('Camera view');
campos(evec);
camup([0 1 0]); % Up vector is y+
camproj('perspective');
camtarget([evec(1), evec(2), 0]);
Now you see the world view
and the camera view
I know how to adjust the camera position, the camera view angle, and orientation to match the output from my ray tracer. However, I do not know how to adjust Matlab's built-in perspective command
camproj('perspective')
for different distortions.
Note: the documentation includes the viewmtx command, which outputs a transformation matrix corresponding to a perspective distortion of a certain angle. This is not quite what I want: I want to work in 3D and through MATLAB's OpenGL viewer. In essence, I want a command like
camproj('perspective', distortionamount)
so I can match the amount of distortion in MATLAB's viewer with the distortion from the ray tracer. If you use the viewmtx command to create the 2D projections yourself, you can no longer use patch or surf and keep colours and faces intact.
The MATLAB perspective projection works just like your ray tracer. You don't need any transformation matrices to use it: perspective distortion is determined entirely by the camera position and the direction of projection.
In the terminology of the ray tracer diagram above, if CameraPosition matches your ray tracer's pinhole coordinates and the vector between CameraPosition and CameraTarget is perpendicular to your ray tracer's viewport, the perspective distortion will also match. The rest is just scaling and alignment.
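For completeness, a hedged sketch of pinning down the remaining degree of freedom with camva (the viewport height and focal length are assumed ray tracer values, not taken from the question):

viewportH = 100;  % cm, ray tracer viewport height (assumption)
focalLen  = 120;  % cm, pinhole-to-viewport distance (assumption)
fov = 2*atand(viewportH/(2*focalLen)); % vertical field of view in degrees

figure;
patch(front(1,:), front(2,:), front(3,:), 'r'); hold all
patch(back(1,:), back(2,:), back(3,:), 'b');
campos(evec);                   % pinhole position
camtarget([evec(1) evec(2) 0]); % view direction perpendicular to the viewport
camup([0 1 0]);
camproj('perspective');
camva(fov);                     % the view angle fixes the 'distortion amount'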
I am currently trying to figure out how to detect a face that is 5 m away from the camera, with its facial features clear enough for the user to see. The code I am working on is shown below.
faceDetector = vision.CascadeObjectDetector();

% Get the input device using the Image Acquisition Toolbox;
% resolution = 640x480 to improve performance
obj = imaq.VideoDevice('winvideo', 1, 'YUY2_640x480', 'ROI', [1 1 640 480]);
set(obj, 'ReturnedColorSpace', 'rgb');
figure('menubar','none','tag','webcam');

% Create the shape inserter once, outside the loop
boxInserter = vision.ShapeInserter('BorderColor','Custom',...
    'CustomBorderColor',[255 255 0]);

while true
    frame = step(obj);
    bbox = step(faceDetector, frame);
    videoOut = step(boxInserter, frame, bbox);
    imshow(videoOut, 'border', 'tight');

    f = findobj('tag','webcam');
    if isempty(f)
        [hueChannel,~,~] = rgb2hsv(frame);

        % Display the hue channel data and draw the bounding box around the face.
        figure, imshow(hueChannel), title('Hue channel data');
        rectangle('Position',bbox,'EdgeColor','r','LineWidth',1)
        hold off

        noseDetector = vision.CascadeObjectDetector('Nose');
        faceImage = imcrop(frame, bbox);
        imshow(faceImage)
        noseBBox = step(noseDetector, faceImage);
        % Offset the nose box back into full-frame coordinates
        noseBBox(1,1:2) = noseBBox(1,1:2) + bbox(1,1:2);

        videoInfo = info(obj);
        ROI = get(obj, 'ROI');
        VideoSize = [ROI(3) ROI(4)];
        videoPlayer = vision.VideoPlayer('Position',[300 300 VideoSize+30]);

        tracker = vision.HistogramBasedTracker;
        initializeObject(tracker, hueChannel, bbox);

        while true
            % Extract the next video frame
            frame = step(obj);
            % RGB -> HSV
            [hueChannel,~,~] = rgb2hsv(frame);
            % Track using the hue channel data
            bbox = step(tracker, hueChannel);
            % Insert a bounding box around the object being tracked
            videoOut = step(boxInserter, frame, bbox);
            % Display the annotated video frame using the video player object
            step(videoPlayer, videoOut);
            pause(0.2)
        end

        % Release resources
        release(obj);
        release(videoPlayer);
        close(gcf)
        break
    end
    pause(0.05)
end

release(obj)
delete(obj) % remove the video object from memory
I am trying to work out from this code the distance it can cover when tracking a face, but I couldn't figure out which part handles that. Thanks!
Not sure what your question is, but try this example. It uses the KLT algorithm, which, IMHO, is more robust for face tracking than CAMShift. It also uses the webcam interface in base MATLAB, which is very easy to use.
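A minimal sketch of that KLT approach with vision.PointTracker (the camera index, frame count, and tracker parameters are assumptions, and a face must be visible in the first frame):

cam = webcam(1);                          % base-MATLAB webcam interface
faceDetector = vision.CascadeObjectDetector();
frame = snapshot(cam);
bbox  = step(faceDetector, frame);        % detect the face once
points = detectMinEigenFeatures(rgb2gray(frame), 'ROI', bbox(1,:));
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, points.Location, frame);
videoPlayer = vision.VideoPlayer;
for k = 1:200                             % track a fixed number of frames
    frame = snapshot(cam);
    [pts, valid] = step(tracker, frame);
    out = insertMarker(frame, pts(valid,:), '+'); % mark the tracked points
    step(videoPlayer, out);
end
clear cam;
release(videoPlayer);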
I am trying to perform human body animation using translation and orientation data I am given. I have a set of rigid body segments made using patch, all centered at (0,0,0) to represent the human body, and translated accordingly. I have set up a hierarchy for the segments and apply a transformation matrix to each rigid body segment. The limb segments begin to offset one another, which causes problems. For example, the rigid body of the arm moves as if it has no fixed point of origin, even though it follows the proper motion: it moves as if rotating about the patch's center of gravity, whereas it should move with one end fixed while the other end follows the translation data. Can someone tell me what I am doing wrong? The layout of my code is:
% Body segment lengths
xlength = somevalue;
ylength = somevalue;
zlength = somevalue;
% Translation data
Xdata
Ydata
Zdata
% Orientation data
Yaw   % rotation about the z-axis
Pitch % rotation about the x-axis
Roll  % rotation about the y-axis
Vertices = [xlength*ones(8,1),ylength*ones(8,1),zlength*ones(8,1)]...
.*[-0.5,-0.5,-0.5;
0.5,-0.5,-0.5;
-0.5,0.5,-0.5;
-0.5,-0.5,0.5;
0.5,0.5,-0.5;
-0.5,0.5,0.5;
0.5,-0.5,0.5;
0.5,0.5,0.5];
% Create patches
for i = 1:6
% create faces for patches
end
% create axes
ax = axes(...)
% draw patches
bodysegmentPatch = patch(patchxdata,patchydata,patchzdata)
% create hierarchy using hgtransform
pelvis = hgtransform('Parent',ax);
trunk = hgtransform('Parent',pelvis);
head = hgtransform('Parent',trunk);
leftupperarm = hgtransform('Parent',trunk);
leftforearm = hgtransform('Parent',leftupperarm);
rightupperarm = hgtransform('Parent',trunk);
rightforearm = hgtransform('Parent',rightupperarm);
leftthigh = hgtransform('Parent',pelvis);
leftcalf = hgtransform('Parent',leftthigh);
rightthigh = hgtransform('Parent',pelvis);
rightcalf = hgtransform('Parent',rightthigh);
% set patches to hierarchy
set(pelvisPatch,'Parent',pelvis)
% Animation loop
for i = 1:n
    % Translation of body segment
    bodysegmentT = makehgtform('translate',[x(i) y(i) z(i)]);
    % Rotation of body segment
    bodysegmentR = makehgtform('yrotate',Roll(i),'xrotate',Pitch(i),'zrotate',Yaw(i));
    % Set the transform matrices
    set(pelvis,'Matrix',pelvisR);
    set(trunk,'Matrix',trunkR*pelvisR);
    set(leftupperarm,'Matrix',leftupperarmT*leftupperarmR*trunkR*pelvisR);
    drawnow
end
I'm not sure what your problem is exactly, but without looking at your code too closely, my guess is that it is one of two common mistakes in these skeleton transformations:

1. Your matrix transformations are in the wrong order. Remember that A = A*B is not the same thing as A = B*A; when you build this kind of transformation stack, the order is critical.

2. The objects are rotating around the wrong point. Rotations are applied about the origin, so if you want something to rotate around the center of the object, you have to translate the object to the origin, rotate it, and translate it back to its original location.

Don't give up! These transformations can be tricky, and often the result may look totally chaotic while the code is actually very close to being correct.
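As an illustration of point 2, a minimal sketch of rotating a limb about its joint rather than its centroid (the geometry and the angle are made up for the example):

% Draw a unit-length 'limb' with its joint at the origin, extending down -y,
% so a rotation about the origin is a rotation about the joint.
ax = axes; axis equal; view(3);
seg = hgtransform('Parent', ax);
patch('Parent', seg, 'XData', [-0.05 0.05 0.05 -0.05], ...
      'YData', [0 0 -1 -1], 'FaceColor', 'r');
theta = pi/6;                           % example joint angle
R = makehgtform('zrotate', theta);      % rotate about the joint first...
T = makehgtform('translate', [0 -1 0]); % ...then move to the parent's end
set(seg, 'Matrix', T*R);                % order matters: T*R, not R*T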
Below is an arbitrary hand-drawn intensity profile of a line in an image:
The task is to draw the line. The profile can be approximated by an arc of a circle or ellipse.
I am doing this for camera calibration. Since I do not have the actual industrial camera, I am trying to simulate the correction needed for calibration.
The question can be rephrased as: I want pixel values that follow a plot similar to the one above. I want to do this programmatically (preferably using OpenCV) rather than entering the values manually, because the line contains thousands of pixels.
An algorithm or pseudocode will suffice. Also, please note that I do not have an actual intensity profile; otherwise I would simply have read those values.
When would you encounter such a situation?
Suppose you take a picture of a completely white object placed on a table, with the camera directly above it pointing vertically downward. The light arriving at the center of the picture will be stronger in intensity than the light reflected at the edges. If you measure pixel values across any line in the image, you will find an intensity curve like the one shown above. Since I don't have the camera for the time being, I want to emulate this situation. How can I achieve this?
This is not exactly image processing, rather image generation... but anyway.
Since you want an arc, we still need three points on that arc; let's take the first, middle, and last point (the key characteristics, in my opinion):
N = 100; % number of pixels
x1 = 1;
x2 = floor(N/2);
x3 = N;
y1 = 242;
y2 = 255;
y3 = 242;
and now draw a circular arc that contains these points.
This problem has already been discussed for MATLAB here: http://www.mathworks.nl/matlabcentral/newsreader/view_thread/297070
x21 = x2-x1; y21 = y2-y1;
x31 = x3-x1; y31 = y3-y1;
h21 = x21^2+y21^2; h31 = x31^2+y31^2;
d = 2*(x21*y31-x31*y21);
a = x1+(h21*y31-h31*y21)/d; % circle center x
b = y1-(h21*x31-h31*x21)/d; % circle center y
r = sqrt(h21*h31*((x3-x2)^2+(y3-y2)^2))/abs(d); % circle radius
If you assume the middle value is always the largest (so it is the upper part of the circle you have to plot), you can draw it with:
x = x1:x3;
y = b + sqrt(r^2 - (x-a).^2);
plot(x,y);
You can adjust the visible window with
xlim([1 N]);
ylim([200 260]);
which gives me the following result:
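If you then need the arc as actual pixel intensities rather than a plot, the samples convert directly (assuming 8-bit values; the 50-row strip is just for visual inspection):

pixelRow = uint8(round(y));      % intensity profile as an 8-bit pixel row
strip = repmat(pixelRow, 50, 1); % replicate into a small test image
figure; imshow(strip);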