How to detect a bending person using MATLAB

I'm facing a problem using the Computer Vision System Toolbox (MATLAB)
vision.PeopleDetector System object to detect a person who is bending. Since this tool only detects upright people, it fails when the posture is not upright.
I did try regionprops, which worked with a segmented silhouette of the bending figure, but since I'm using a Gaussian mixture model to segment, those results are poor as well.
Does anyone have a good suggestion for detecting a bending person? Thank you very much.

Just to clarify, are you working with a video? Is your camera stationary? In that case, you should be able to use vision.ForegroundDetector to detect anything that moves, and then use regionprops to select the blobs of the right size. If regionprops does not work for you, you may want to try using morphology (imclose and imopen) to close small gaps and to filter out noise.
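For intuition, the open/close idea can be sketched outside MATLAB as well. Below is a rough plain-Python illustration of binary morphology with a fixed 3x3 square structuring element (imopen and imclose in MATLAB are far more general); opening removes specks smaller than the element, closing fills small gaps:

```python
def erode(mask):
    """Binary erosion, 3x3 square: a pixel survives only if its whole
    3x3 neighbourhood is set (border pixels are cleared)."""
    h, w = len(mask), len(mask[0])
    return [[1 if 0 < y < h - 1 and 0 < x < w - 1
             and all(mask[y + j][x + i] for j in (-1, 0, 1) for i in (-1, 0, 1))
             else 0
             for x in range(w)] for y in range(h)]

def dilate(mask):
    """Binary dilation, 3x3 square: a pixel is set if any neighbour is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[y + j][x + i]
                      for j in (-1, 0, 1) for i in (-1, 0, 1)
                      if 0 <= y + j < h and 0 <= x + i < w)
             else 0
             for x in range(w)] for y in range(h)]

def imopen(mask):   # erode then dilate: removes specks smaller than the element
    return dilate(erode(mask))

def imclose(mask):  # dilate then erode: fills gaps smaller than the element
    return erode(dilate(mask))
```

Applied to the foreground mask, opening kills isolated noise pixels while leaving solid silhouette blobs intact, which is exactly what you want before handing the mask to regionprops.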
Also, if you are working with a video, you can use vision.KalmanFilter to track the people. Then you would not necessarily have to detect each person in every frame: if a person bends down, you may still be able to recover the track when they straighten back up.
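The benefit of tracking through missed detections can be sketched with a minimal 1-D constant-velocity Kalman filter in plain Python (vision.KalmanFilter and configureKalmanFilter package the same predict/correct cycle; the noise variances q and r here are illustrative assumptions):

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Track 1-D position from noisy measurements; None marks a missed
    detection. Assumes the first measurement is present; q and r are
    illustrative process/measurement noise variances."""
    x, v = float(measurements[0]), 0.0   # state: position and velocity
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    track = []
    for z in measurements:
        # Predict with a constant-velocity model (x' = x + v*dt).
        x = x + dt * v
        p = [[p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1],
              p[1][1] + q]]
        if z is not None:                # Correct only when a detection exists.
            s = p[0][0] + r              # innovation variance
            k0, k1 = p[0][0] / s, p[1][0] / s
            innov = z - x
            x, v = x + k0 * innov, v + k1 * innov
            p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                 [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        track.append(x)
    return track
```

Feed it positions with a gap in the middle (the person bending down, say) and the track coasts through the gap at the last estimated velocity, then snaps back on when detections resume.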
Another possibility is to try the upper body detection with vision.CascadeObjectDetector. If you rotate the image 90 degrees, you should be able to detect the upper body of a bending person.
Yet another possibility is to train your own "bending person detector" using the trainCascadeObjectDetector function.

Related

How to find contours of soccer player in dynamic background

My project is to design a system that analyzes soccer videos. In one part of this project I need to detect the contours of the players and everyone else on the field. For all players who are not occluded by the advertisement billboards, I have used the color of the field (green) to detect contours and extract the players. But I have a problem when players or the referee are occluded by the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). As you know, in this situation finding the contours is more difficult because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from the player motion.
Use frame differencing to detect moving objects. The good news is that it will give you the contours of the people. The bad news is that it will give you many spurious contours as well.
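Frame differencing is the simplest of the three options. As a sketch in plain Python (grayscale frames held as 2-D lists of 0-255 values; the threshold is an illustrative assumption), a pixel is marked as moving when its intensity changes by more than a threshold between consecutive frames:

```python
def frame_difference(prev, curr, threshold=25):
    """Return a binary mask: 1 where a pixel changed by more than
    `threshold` between the two frames, 0 elsewhere."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]
```

The set pixels trace the outlines of anything that moved, players included, which is where the spurious contours from camera motion and the LED billboards come from too.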
Optical flow should work for such problems, as it captures motion information. Foreground-extraction techniques using HMMs, GMMs, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (with a static background). The magnitude and orientation of the optical flow seem to be an effective cue, and I have read papers on segmentation using optical flow. I hope this helps.

How to move a game object according to the movement of a real-world object seen by the webcam in Unity?

I want to develop a tangram game in Unity based on augmented reality. I want to build tangram figures using real tangrams in front of a webcam, matching the tangram figure on the screen. For that I want to place the game object with respect to the real tangram in the camera frame, and also update its position and angle accordingly. Please suggest a way to achieve this. Thanks in advance!
With difficulty.
If you want to do this without some sort of custom-built hardware controller on the real tangram, you will need some quite intricate image-processing techniques. The following are some vague steps and pointers to achieve what you want. If there is a better option I cannot think of it, but this is very conceptual and by no means guaranteed to work: just how I would attempt the task if I really had to.
Use a Laplacian operator on the image to calculate the edges
Use this, along with the average colour information of the pixels to the left/right and above/below each "edge" pixel (within a certain tolerance), to detect the individual shapes, their corners, and their relative positions, starting from the centre of the image.
Calculate the relative size of each shape and approximate its rotation using basic trigonometry.
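The first step can at least be made concrete. Here is a 3x3 Laplacian applied to a grayscale image held as a 2-D list, in plain Python; a real implementation would use an image library, this just shows the arithmetic that flags the tangram pieces' edges:

```python
# Standard 3x3 Laplacian kernel: responds strongly where intensity
# changes abruptly (edges), and is zero on flat regions.
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def laplacian(img):
    """Apply the 3x3 Laplacian; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(LAPLACIAN[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out
```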
However, I can't help but feel this is an incredibly large amount of work for such a concept, and calculating this for every pixel could be so computationally intensive that it is truly not worth your time. Furthermore, it depends a lot on the quality of the camera used, and parallax errors would probably be nightmarish to resolve. Unless you are truly committed to this idea, I would either search for some pre-existing asset that does this for you or not undertake the project.

How to use accelerometer, CMMotion data to locate a point in 3D space?

I am creating an application in which an iPhone will be attached (a separate cover is made for it) to a golf club (racket). I want to get an array of points that describe the path of the racket's movement.
For example, I start collecting data when the racket is on the ground. The user then prepares for the shot: he takes the racket back and then hits the shot by swinging the racket forward. I want to capture all these points in 3D and plot them on screen (as a 2D projection). I have seen many similar questions and the accelerometer and CMMotion framework documents, but could not find a way to do this.
I hope I have explained the question properly. Can you suggest a formula, or a way to process the data, to achieve this?
Thanks in advance.
You cannot track these movements in 3D space, but you can track the orientation of the racket, and that should work well.
I have implemented a sensor fusion algorithm for the Shimmer platform, and it is not a trivial task. I would use Core Motion and would not try to create my own sensor fusion algorithm.
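To give a flavour of what a sensor fusion algorithm does (Core Motion's implementation is far more sophisticated), here is a minimal one-axis complementary filter in plain Python: the gyro rate is integrated for short-term accuracy, while the accelerometer's gravity-derived angle slowly corrects the drift. The gain alpha and sample interval dt are illustrative assumptions.

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rates (rad/s) with accelerometer-derived
    angles (rad) into one drift-corrected angle estimate per sample."""
    angle = accel_angles[0]          # initialise from the accelerometer
    history = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Mostly trust the integrated gyro; nudge toward the accel angle.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        history.append(angle)
    return history
```

Note this only handles one axis and ignores exactly the hard parts (yaw without a reference, linear acceleration corrupting the gravity estimate), which is why using Core Motion's fused attitude is the better route.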
Hope this helps, good luck!
I tried the sensor fusion algorithm developed by Madgwick, but on my device the output is similar to Core Motion's attitude output.
I don't have the possibility to test the attitude output of other iPhones, but in my case the problem is the yaw angle: even when the iPhone is fixed on the table, the yaw angle tends to be unstable, probably due to the distinct chip placement of the z-axis gyro.

How to deform an image?

Hi friends,
I want to make a simple gaming application in which the user hits a car and the car breaks at that point, i.e. the image gets slightly deformed where the user hit it. I know this would be possible using lots of images that are swapped in when the user hits the car, but I don't want to use so many images.
Is there any solution for this? How can I deform the image? Sorry for my English. Here is a link to a Flash game that does exactly what I want:
http://www.playgecogames.com/file.php?f=657&a=popup
Please respond soon.
Thanks
You don't say if this is in 2D or 3D, or what techniques you're going to use.
If you're implementing the game using OpenGL, it's fairly straightforward. The object can be made up of a regular mesh, with the image as a texture mapped to the mesh. When the user hits the object, you just deform the mesh.
A simple method would be to take a vector in the direction of the hit, displace the nearest vertex by an amount proportional to the force of the strike, and then fan out to deform the rest of the mesh by decreasing amounts. By deforming the mesh, the image texture will be rendered with all the dents or deformations you like.
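That displacement idea can be sketched in a few lines of plain Python (the function name and the linear falloff shape are illustrative choices, not any engine's API): each 2-D vertex is pushed along the hit direction, with full force at the impact point and nothing beyond a chosen radius.

```python
import math

def deform_mesh(vertices, hit_point, hit_dir, force, radius):
    """Displace (x, y) vertices along hit_dir, decreasing with distance
    from hit_point; vertices beyond `radius` are untouched."""
    hx, hy = hit_point
    dx, dy = hit_dir
    out = []
    for x, y in vertices:
        dist = math.hypot(x - hx, y - hy)
        # Linear falloff: full force at the impact, zero at `radius`.
        w = max(0.0, 1.0 - dist / radius) * force
        out.append((x + w * dx, y + w * dy))
    return out
```

In a real engine you would write the displaced positions back into the vertex buffer and let the texture mapping produce the dent.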
If you want to do this without OpenGL and just straight images, you could use image resampling to simulate the effect. You have your original pristine image, which is 'filtered' to produce the resulting image. At first there are no deformations, so you copy the original image verbatim. Each time the user hits the object, you add a deformation using a filter or transform within a local region of interest. This function resamples the source image in a distorted manner, making it look like the object is damaged.
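As a sketch of that resampling idea in plain Python (grayscale 2-D lists, nearest-neighbour sampling, and a made-up "pull toward the impact" distortion just for illustration):

```python
import math

def dent(src, cx, cy, radius, strength=0.9):
    """Resample `src` so pixels inside `radius` of (cx, cy) are drawn
    from a point pulled toward the impact, simulating a local dent."""
    h, w = len(src), len(src[0])
    out = [row[:] for row in src]          # start as a verbatim copy
    for y in range(h):
        for x in range(w):
            d = math.hypot(x - cx, y - cy)
            if 0 < d < radius:
                # The closer to the impact, the stronger the pull.
                pull = strength * (1 - d / radius)
                sx = int(round(cx + (x - cx) * (1 - pull)))
                sy = int(round(cy + (y - cy) * (1 - pull)))
                out[y][x] = src[sy][sx]
    return out
```

Everything outside the region of interest is copied untouched, so repeated hits just stack more local distortions onto the pristine source.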
If you look up some good books on game development, you'll find a great range of approaches to object collisions, deformations and so on.
If you know a bit about image-processing techniques, here is the documentation for accessing the pixels of an image:
Apple Reference
You also have libraries for this, such as this one:
simple-iphone-image-processing
But for what you want to do, this might not be the easiest way. What I would suggest is dividing the car into several images depending on which areas can be impacted. Then you just swap in the image corresponding to the damaged zone each time the car is hit.
I think you should use the cocos2d effects (http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide%3aeffects) plus multiple images, because there are many parts that drop off after the player kicks the car. For example, when the user kicks the side mirror, you should swap the car image for one without the side mirror.
The person who made that Flash game used around 4 images to display the car. If you want the game to be in 2D, the easiest way is to draw the car and cut it into about 4 pieces: left side, right side (a duplicate of the left side), hood, and roof.
If you want to "really" deform the car, you'll have to use a 3D engine like OpenGL ES.
I'd really suggest doing it in 2D :)
I suggest having a look at the cocos2d game engine. You can modify images with effects, which are applied using a virtual grid. Have a look at the effects page in their programming guide.

MATLAB Eye Recognition

Is it possible to detect an eye in video in MATLAB? I am trying to detect the eye and make some predictions based on its movement, but I am not sure how to do that. Can someone help me with how to start? Thanks in advance.
You could take a look at this set of functions on The MathWorks File Exchange: Fast Eyetracking by Peter Aldrian.
Quoting from the description of the post (to give a little more detail here):
This project handles the question of how to extract fixed feature points from a given face in a real-time environment. It is based on the idea that a face is found by the Viola-Jones algorithm for face detection and processed to track pupil movement relative to the face, without using infrared light.
My MATLAB is incredibly rusty, but this site appears to have some scripts to help track eye movement.
Eye detection is certainly possible in MATLAB, and you can come up with that. But there is a difference between recognition and detection that you need to consider carefully: detection is checking whether an object is present in the image, whereas recognition is determining what the different objects in the image are.
I hope this increases someone's knowledge.