I'm working on a robot project. It needs to measure subtle movements in the XY directions while driving in the Z direction.
So I was thinking of using a camera with MATLAB and a blinking LED attached to a wall. That way, using image subtraction, I can identify the LED and, with a weight matrix, locate the center of the light.
Then, at regular intervals, I can log how many pixels the center moved in the left-right or up-down directions and check the accuracy of the motion.
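A minimal sketch of what I have in mind (shown with Python/OpenCV rather than MATLAB; the noise threshold is just a placeholder I would have to tune):

```python
import cv2
import numpy as np

def led_center(frame_on, frame_off, threshold=40):
    """Locate the blinking LED by subtracting an 'LED off' frame from an
    'LED on' frame, then computing an intensity-weighted centroid."""
    on = cv2.cvtColor(frame_on, cv2.COLOR_BGR2GRAY).astype(np.float32)
    off = cv2.cvtColor(frame_off, cv2.COLOR_BGR2GRAY).astype(np.float32)
    diff = cv2.absdiff(on, off)
    diff[diff < threshold] = 0          # suppress noise outside the LED blob

    total = diff.sum()
    if total == 0:
        return None                     # LED not found in this frame pair
    ys, xs = np.indices(diff.shape)
    cx = (xs * diff).sum() / total      # weighted centroid, sub-pixel
    cy = (ys * diff).sum() / total
    return cx, cy
```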
But when attempting this sensing solution I ran into some challenges I couldn't overcome:
a light source like an LED or laser has soft edges, so the center is not accurate
the camera is not calibrated (and I'm not sure how to calibrate it)
Is there another simple solution to this problem?
Note: the measured motion only needs to be proportional (relative values are fine, not absolute distances).
You might be able to improve the accuracy of the LED's location by applying some kind of sub-pixel peak interpolation.
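For example, a parabolic fit around the brightest pixel of the difference image gives a sub-pixel estimate. A minimal sketch (Python/NumPy rather than MATLAB; it assumes the LED is the brightest blob in the image you pass in):

```python
import numpy as np

def parabolic_peak(img):
    """Return (x, y) of the intensity peak with sub-pixel parabolic interpolation."""
    img = np.asarray(img, dtype=np.float64)       # avoid uint8 overflow
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)

    def refine(m1, c, p1):
        # offset of the parabola vertex fitted through three samples
        denom = m1 - 2 * c + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dx = refine(img[y0, x0 - 1], img[y0, x0], img[y0, x0 + 1]) if 0 < x0 < img.shape[1] - 1 else 0.0
    dy = refine(img[y0 - 1, x0], img[y0, x0], img[y0 + 1, x0]) if 0 < y0 < img.shape[0] - 1 else 0.0
    return x0 + dx, y0 + dy
```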
For the calibration: MATLAB offers a Camera Calibrator app; maybe that helps you.
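If you would rather script the calibration yourself, the OpenCV equivalent of that checkerboard workflow looks roughly like this (the pattern size and file names are placeholders):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the printed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):              # several views of the checkerboard
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
# K is the camera matrix, dist holds the lens distortion coefficients
```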
I have an application for visualizing a scan of a room, and I am using an Augmented Image to align my points to the real world. I am not using plane detection in my application, so it is optional for me.
However, I have some questions regarding tracking accuracy, because the accuracy of my alignment currently depends solely on how accurately I can detect the center position and corners of the augmented image.
Does using plane detection increase the accuracy of detecting an image's position in an Augmented Image application?
Does it also affect the accuracy of tracking and ARCore's environmental understanding? Users can move around the room and inspect the scan. I tested my application with and without plane detection, and it appears that with plane detection my alignment changes over time because of ARCore's environmental understanding, and there is a shift in the anchors. This does not happen as much without plane detection.
Thanks in advance for any help!
My project is the design of a system that analyzes soccer videos. In one part of this project I need to detect the contours of the players and everybody else on the playing field. For all players who are not occluded by the advertisement billboards, I have used the color of the playing field (green) to detect contours and extract the players. But I have a problem when players or the referee overlap the advertisement billboards. Suppose the advertisements on the billboards are dynamic (LED billboards). As you know, in this situation finding the contours is more difficult because there is no static background color or texture. You can see two examples of this condition in the following images.
NOTE: in order to find the position of the occlusion, I use the region between the field line and the advertisement billboards, because this region has the color of the field (green). This region is shown by a red rectangle in the following image.
I expect the result to be similar to the following image.
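For reference, here is roughly what I do for the non-occluded case (a minimal sketch in Python/OpenCV; the HSV green range and the area threshold are placeholder values, and it assumes OpenCV 4.x):

```python
import cv2
import numpy as np

def player_contours(frame):
    """Extract candidate player contours where the background is the green field."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    field = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))    # green field mask
    players = cv2.bitwise_not(field)                          # non-green = candidates
    players = cv2.morphologyEx(players, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(players, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 200]  # drop small blobs
```

This obviously breaks as soon as the background behind a player is a billboard instead of grass, which is exactly the situation I am asking about.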
Could anyone suggest an algorithm to detect these contours?
You can try several things.
Use the vision.PeopleDetector object to detect people on the field. You can also track the detected people using vision.KalmanFilter, as in the Tracking Pedestrians from a Moving Car example.
Use the vision.OpticalFlow object in the Computer Vision System Toolbox to compute optical flow. You can then analyze the resulting flow field to separate camera motion from player motion.
Use frame differencing to detect moving objects. The good news is that it will give you the contours of the people. The bad news is that it will give you many spurious contours as well.
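As a sketch of the frame-differencing idea (shown here with OpenCV in Python rather than the toolbox calls above; the threshold is a placeholder, and it assumes OpenCV 4.x):

```python
import cv2

def moving_contours(prev_frame, frame, thresh=25):
    """Return contours of regions that changed between two consecutive frames."""
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g0)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)   # close small gaps in the mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # expect spurious contours from camera motion and the animated LED boards
    return contours
```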
Optical flow would work for such problems, as it captures motion information. Foreground extraction techniques using HMM, GMM, or non-parametric models may also solve the problem; I have used them for motion analysis in surveillance videos to detect anomalies (the background was static). The magnitude and orientation of the optical flow seem to be an effective feature; I have read papers on segmentation using optical flow. I hope this helps you.
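To illustrate the magnitude/orientation idea, a minimal sketch using OpenCV's dense Farneback flow (the "2 pixels above the median" rule is just a placeholder for separating player motion from global camera motion):

```python
import cv2
import numpy as np

def flow_mag_ang(prev_gray, gray):
    """Dense optical flow between two grayscale frames, plus a crude motion mask."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # pixels whose flow magnitude clearly exceeds the dominant (camera) motion
    # are candidates for players; tune this rule for your footage
    moving = mag > (np.median(mag) + 2.0)
    return mag, ang, moving
```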
I want to develop a tangram game in Unity with the concept of augmented reality. I want to make tangram figures using real tangrams in front of a webcam, according to the tangram figure shown on the screen. For that, I want to place the game object with respect to the real tangram in the camera frame. I also want to change the position and angle accordingly. Please suggest a way to achieve this. Thanks in advance!
With difficulty.
If you want to do this without some sort of custom-built hardware controller on the real tangram, you will need some quite intricate image processing techniques. The following are some vague steps and pointers to achieve what you want. If there is a better option I cannot think of it, but this is very conceptual and by no means guaranteed to work - just how I would attempt the task if I really had to.
Use a Laplacian operator on the image to calculate the edges
Use this, along with the average colour information of the pixels to the left/right and above/below each "edge" pixel (within a certain tolerance), to detect the individual shapes, corners, and relative positions, starting from the centre of the image.
Calculate the relative sizes of each shape and approximate the rotation using basic trigonometry.
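A very rough sketch of the first two steps (Python/OpenCV rather than Unity C#; every threshold here is a guess that would need tuning, and it assumes OpenCV 4.x):

```python
import cv2

def tangram_pieces(frame):
    """Find polygonal piece candidates via Laplacian edges and contour approximation."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    edges = cv2.convertScaleAbs(edges)
    _, mask = cv2.threshold(edges, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    pieces = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if 3 <= len(approx) <= 4 and cv2.contourArea(approx) > 500:
            pieces.append(approx)      # triangle / square / parallelogram candidates
    return pieces
```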
However, I can't help but feel that this is an incredibly large amount of work for such a concept, and calculating this for every pixel could be so intensive that it is truly not worth your time. Furthermore, it depends a lot on the quality of the camera used, and parallax errors would probably be nightmarish to resolve. Unless you are truly committed to this idea, I would either search for some pre-existing asset that does this for you or not undertake the project.
I am using Xbox-Unity and am trying to make a Kinect game. I need to be able to know when a player's foot is in the air and when it comes back down to the ground. I thought that this would be as simple as tracking the joint positions, but the foot's Y changes based on its proximity to the Kinect camera (taking the foot joint position from the Kinect). If I lifted my left foot up far away from the camera, its Y would be high (let's say 10). If it were to land close to the camera, the Y would be low (let's say -20). What I had hoped was that I could just say 0 is the floor and have an easy time knowing when a foot was in the air and when it was on the ground. Does anybody have any ideas on how I can correctly tell when a foot is grounded? (Everything I can think of so far has at least one exception that would break the gameplay.)
Edit: I used a point-to-plane equation, but no matter what I do, the distance to the floor is still different based on my proximity to the camera.
One possibility would be to compare it to the other foot: if one is higher than the other, chances are they're standing on the other foot. If you're looking to detect jumps, you should be able to find a sudden change in the Y position of both feet.
There's also the Floor Clipping Plane, but that involves some more complicated math from what I've seen. Check out the Kinect programming guide, which is super old but I think should still be relevant here. The section "Floor Determination" is what you're after.
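To make that concrete: the floor clip plane the Kinect reports is a plane (A, B, C, D), and the math boils down to a point-to-plane distance, which stays the same however close the player stands to the camera. A minimal sketch of that arithmetic (plain Python rather than Unity C#, and assuming the joint coordinates are in the same camera space as the plane):

```python
import math

def distance_to_floor(joint_xyz, floor_plane):
    """Signed distance from a joint to the Kinect floor clip plane.

    joint_xyz:   (x, y, z) of the foot joint in camera space
    floor_plane: (A, B, C, D) from the Kinect floor clip plane
    """
    x, y, z = joint_xyz
    a, b, c, d = floor_plane
    norm = math.sqrt(a * a + b * b + c * c) or 1.0   # normalize, in case it isn't already
    return (a * x + b * y + c * z + d) / norm

# e.g. treat the foot as grounded when the distance drops below a small tuned epsilon:
# grounded = distance_to_floor(left_foot, floor_plane) < 0.05   # metres, placeholder
```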
I have been doing a bit of research, but I cannot seem to find a way to determine small distances (centimeters and meters) using the sensors in Android or iOS devices.
Bluetooth appears too inaccurate and requires more than one device, GPS only works over larger variations in distance, and small variations in rotation seem to make using the accelerometer nearly impossible.
Is there a method that I am unaware of that would allow me to do such a thing? I am familiar with calculus, so using integrals to determine distance based on changes in time and velocity/acceleration is not a problem for me; I just do not know how to determine those quantities.
Thank you.
There's no sensor in these devices which is able to give you the desired accuracy without exterior help.
If your use case allows for a bit of external setup, here are some ideas:
You could use the camera and computer vision to calculate device movement. You could, for example, use ARToolkit to measure the distance to a visual tag fixed to a wall. In close distances you can get pretty high accuracy (mm) using this technique.
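To give a feel for the numbers: ARToolkit computes a full pose estimate, but the underlying relationship is just the pinhole model, distance ≈ focal_length_px · real_size / size_in_pixels. A tiny illustration (the values are made up):

```python
def distance_to_tag(focal_length_px, tag_width_m, tag_width_px):
    """Rough pinhole-model distance to a fiducial tag of known physical width."""
    return focal_length_px * tag_width_m / tag_width_px

# a 0.10 m tag that appears 200 px wide through a lens with a 1000 px focal length
# is roughly 0.5 m away
print(distance_to_tag(1000.0, 0.10, 200.0))   # -> 0.5
```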
Another idea would be to measure the distance to a solid object, like a wall, by emitting a short audio signal using the speaker and measure the time until the echo arrives at the microphone. This would be more of a research project, though.
You CAN use the accelerometer to measure the distance travelled (if ONLY absolute displacement is involved). Have the user hold the device flat and walk from point A to point B. The user presses a "Start" button in your app as he starts from A and presses an "End" button as he reaches B. Calculate the double integral of AccelX and AccelY separately over time between the two button presses. These will be distX and distY respectively. The total displacement will be sqrt(distX^2 + distY^2).
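A sketch of that double integration with NumPy (it assumes gravity has already been removed from the readings and that you have timestamps for each sample; in practice, accelerometer noise makes the estimate drift, so expect limited accuracy):

```python
import numpy as np

def _cumtrapz(t, y):
    """Cumulative trapezoidal integral of y over timestamps t, starting at 0."""
    steps = np.diff(t) * (y[1:] + y[:-1]) / 2.0
    return np.concatenate(([0.0], np.cumsum(steps)))

def displacement(t, accel_x, accel_y):
    """t: timestamps in seconds; accel_x/accel_y: linear acceleration in m/s^2."""
    t = np.asarray(t, dtype=float)
    vel_x = _cumtrapz(t, np.asarray(accel_x, dtype=float))   # first integral -> velocity
    vel_y = _cumtrapz(t, np.asarray(accel_y, dtype=float))
    dist_x = _cumtrapz(t, vel_x)[-1]                         # second integral -> distance
    dist_y = _cumtrapz(t, vel_y)[-1]
    return np.hypot(dist_x, dist_y)                          # total displacement A -> B
```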
Good luck!
Regards
CVS#2600Hertz
Just as a thought experiment, you should be able to do this using a combination of the accelerometer and the compass on each device.
However, whether the accuracy of these sensors is enough for what you want to do...well I think you'd just have to try it.