I found a really cool site on interfacing an Arduino with an optical mouse to read out x-y coordinates from it. I've done it, and it's working nicely.
Then I was thinking, 'Why not plot all this as a graph?' and I came across Processing.
I am aware that Processing has an example named 'MouseSignal'.
This example is the EXACT thing that I want to write with Processing. The only change is that I want to use the x-y coordinates from the mouse attached to the Arduino and have Processing generate a 'real-time' graph of the coordinates.
Thanks!
Change the spot in the code where it says:
xvals[width-1] = mouseX;  // newest x sample goes into the last slot of the history
yvals[width-1] = mouseY;  // same for the y history
Replace mouseX and mouseY with the values coming from the Arduino. You may need to scale these values to fit within the axes.
I am working on a simple SLAM simulation for a project. Here's the problem:
For the simulation, I will be using a mobile robot moving in a room. The robot has laser distance sensors, so it can detect the distances from itself to the walls within a certain angular range, as shown in the first figure:
The MATLAB code I've implemented for the simulation simply calculates the angle from each wall point to the robot's pose and returns all the points whose angle falls inside a given range, for example [-60°, +60°].
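In MATLAB, that angle filter might look like this (a minimal sketch with made-up variable names; pose is [x y theta] and walls is an N-by-2 list of wall points):

pose  = [2 3 pi/4];          % example robot pose [x y heading]
walls = 10 * rand(100, 2);   % example wall points, one per row
fov   = 60 * pi/180;         % half-angle of the sensor cone

% Angle from the robot to every wall point, relative to the robot's
% heading, wrapped to [-pi, pi] so the cone test works near +/-180 deg.
ang     = atan2(walls(:,2) - pose(2), walls(:,1) - pose(1)) - pose(3);
ang     = atan2(sin(ang), cos(ang));
visible = walls(abs(ang) <= fov, :);   % points inside [-60, +60] degrees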
For more complex room configurations it can't be used, though, since walls that shouldn't be detected (walls from other rooms) will be detected as well, as seen in the second figure:
I need a better way of implementing this detection in the simulation so I can use it for rooms of any shape, like this one, producing results like this:
I have a CT scan of a heart and I am designing a device that rests on top of it, so getting the right lengths for certain attributes is important. The CT scan was segmented in MeshLab, and my advisor gave me code that uses PLY_IO to read the PLY file exported from MeshLab. From this, I have a map of the surface: surf(Map.X, Map.Y, Map.Z) displays the 3D model.

Now, what I would ideally want is to be able to select points graphically via the figure window and have MATLAB either tell me what the points are or allow me to draw a geodesic line and determine its length. Question: Does anyone have any idea of how I could do this in a simple way?
Ultimately, just drawing on the figure might be OK too, if I can get it in the right orientation. Ideally, though, I would select the start and end points and then MATLAB would graphically show a geodesic on the surface whose length I can later find. I'm willing to do some programming for this, but hopefully there's something out there you might already know about.
One way to interactively extract points on a surface is to use datacursormode. Here's a simple example of how to get two points:
surf(peaks);                     % example surface to pick points on
dcm_obj = datacursormode(gcf);   % data cursor mode object for this figure
set(dcm_obj, 'DisplayStyle', 'datatip', ...
    'SnapToDataVertex', 'off', 'Enable', 'on')
disp('Select first point then press any key')
pause
c_info{1} = getCursorInfo(dcm_obj);   % struct with a .Position field
disp('Select second point then press any key')
pause
c_info{2} = getCursorInfo(dcm_obj);
Note that if you (or the user) change modes (e.g. by clicking the rotate button) in order to select the point, you will have to switch back to data cursor mode to move the data cursor again.
You should now have c_info{1}.Position and c_info{2}.Position, which are two points on the surface. Calculating the geodesic is another matter; have a look on the File Exchange and see if there's anything around already that will do the job for the type of data you have.
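As a quick sanity check while you hunt for a geodesic implementation, the straight-line distance between the two selected points gives a lower bound on the geodesic length (a minimal sketch, assuming the two picks above succeeded):

p1 = c_info{1}.Position;        % [x y z] of the first selected point
p2 = c_info{2}.Position;
straight_dist = norm(p2 - p1);  % Euclidean distance; the surface geodesic can only be longer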
I have an application where I have a stepper motor connected to a turntable. The stepper motor's driver is controlled serially via MATLAB. That is, I can control the angle through which the stepper motor turns the turntable, and the speed at which this happens, by sending data to the stepper motor over a serial port in MATLAB. Once the stepper motor turns by the required angle, it returns its current angle back on another serial port.
I am trying to build a GUI where the user can input the speed and the angle to which the turntable must be turned. I want to be able to illustrate the rotation of the turntable on the GUI itself, using the data about the current angle that the stepper motor returns to me.
Can anyone suggest a good way to create and animate the turntable's rotation using MATLAB's GUIDE? The animated figure must be able to display the current angle along with its initial reference angle. What I have in mind looks like the figure below.
Just plot everything with lines and update the lines accordingly. Don't be afraid to use some math to calculate the angles and line end points, and to plot the circles. This is pretty easy to do; a minimal sketch follows.
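Something like this (a sketch with made-up angles; in a GUIDE callback you would keep the needle handle in the handles struct and update it whenever a new angle arrives on the serial port):

theta = linspace(0, 2*pi, 100);
figure; hold on; axis equal off
plot(cos(theta), sin(theta), 'k')                  % turntable outline
plot([0 1], [0 0], 'k--')                          % fixed reference angle at 0
needle = plot([0 1], [0 0], 'r', 'LineWidth', 2);  % current-angle indicator
label  = text(0, -1.2, 'angle: 0\circ');

for a = 0:5:180   % stand-in for angles read back from the stepper motor
    set(needle, 'XData', [0 cosd(a)], 'YData', [0 sind(a)])
    set(label, 'String', sprintf('angle: %d\\circ', a))
    drawnow
    pause(0.05)
end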
I am currently trying to reconstruct the 3D trajectory of a falling object, like a ball or a rock, out of a sequence of images taken from an iPhone video.
Where should I start looking? I know I have to calibrate the camera (I think I'll use the MATLAB calibration toolbox by Jean-Yves Bouguet) and then find the vanishing point from the same sequence, but then I'm really stuck.
Read this: http://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/773-GG/lectA-773.htm
It explains 3D reconstruction using two cameras. For a simple summary, look at the figure from that site:
You only know pr/pl, the image points. By tracing a line from their respective focal points Or/Ol you get two lines (Pr/Pl) that both contain the point P. Because you know the two cameras' origins and orientations, you can construct 3D equations for these lines. Their intersection is thus the 3D point; voila, it's that simple.
But when you discard one camera (say, the left one), you only know the line Pr for sure. What's missing is depth. Luckily, you know the radius of your ball; this extra information can supply the missing depth. See the next figure (don't mind my paint skills):
Now you know the depth, using the intercept theorem: the true ball and its image form similar triangles through the camera center, so depth = focal_length * true_radius / imaged_radius.
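As a sketch with made-up numbers (the focal length in pixels comes out of your calibration):

f_px   = 1400;                   % focal length in pixels (example value from calibration)
R_true = 0.11;                   % true ball radius in meters (known beforehand)
r_img  = 25;                     % imaged ball radius in pixels (measured in the frame)
depth  = f_px * R_true / r_img;  % distance from the camera to the ball, ~6.16 m here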
I see one last issue: the shape of the ball changes when projected under an angle (i.e. not perpendicular to your capture plane). However, you do know the angle, so compensation is possible, but I leave that up to you :p
Edit: reply to @ripkars' comment (the comment box was too small)
1) ok
2) Aha, the correspondence problem :D It's typically solved by correlation analysis or by matching features (mostly matching followed by tracking in a video). Other methods exist too.
I haven't used the image/vision toolbox myself, but there should definitely be some things to help you on the way.
3) = calibration of your cameras. Normally you only need to do this once, when installing the cameras (and again any time you change their relative pose).
4) Yes, just put the Longuet-Higgins equations to work, i.e. solve
P = C1 + mu1*R1*K1^(-1)*p1
P = C2 + mu2*R2*K2^(-1)*p2
with:
P = the 3D point to find
C1, C2 = the camera centers (vectors)
R1, R2 = rotation matrices expressing the orientation of each camera in the world frame
K1, K2 = the calibration matrices of the cameras (containing the internal parameters of each camera, not to be confused with the external parameters contained in R and C)
p1, p2 = the image points
mu1, mu2 = parameters expressing the position of P along the projection line from the camera center C to P (R*K^(-1)*p is a vector pointing from C towards P)
These are 6 scalar equations in 5 unknowns: mu1, mu2, and the three coordinates of P. With noisy image points the two rays will not intersect exactly, so solve the system in a least-squares sense; a sketch follows.
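In MATLAB, that system stacks into one 6-by-5 linear least-squares problem (a minimal sketch with made-up calibration values; the real C, R, K come from your calibration):

% Hypothetical two-camera setup: identical cameras, second one 1 m to the right.
C1 = [0; 0; 0];      R1 = eye(3);   K1 = [800 0 320; 0 800 240; 0 0 1];
C2 = [1; 0; 0];      R2 = eye(3);   K2 = K1;
p1 = [340; 260; 1];  p2 = [300; 260; 1];   % homogeneous image points

d1 = R1 * (K1 \ p1);   % direction of the ray from C1 through p1
d2 = R2 * (K2 \ p2);   % direction of the ray from C2 through p2

% P = C1 + mu1*d1 and P = C2 + mu2*d2, rewritten as A*[P; mu1; mu2] = b:
A = [eye(3), -d1, zeros(3,1);
     eye(3), zeros(3,1), -d2];
b = [C1; C2];
x = A \ b;             % least-squares solution of the 6x5 system
P = x(1:3)             % the triangulated 3D point, here [0.5; 0.5; 20]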
Edit: reply to @ripkars' comment (the comment box was too small once again)
The only computer vision library that pops up in my mind is OpenCV (http://opencv.willowgarage.com/wiki). But that's a C library, not MATLAB... I guess Google is your friend ;)
About the calibration: yes, if those two images contain enough information to match some features. If you change the relative pose of the cameras, you'll have to recalibrate, of course.
The choice of the world frame is arbitrary; it only becomes important when you want to analyze the retrieved 3D data afterwards. For example, you could align one of the world planes with the plane of motion, which gives a simpler motion equation if you want to fit one.
This world frame is just a reference frame, changeable with a change-of-reference-frame transformation (a translation and/or rotation).
Unless you have a stereo camera, you will never be able to know the position for sure, even with a calibrated camera, because you cannot tell whether the ball is small and close or large and far away.
There are other single-camera methods, based on a series of images with different focus, but I doubt that you can control the camera of your cell phone in that way.
Edit (1):
As @GuntherStruyf correctly points out, you can know the position if one of your inputs is the size of the ball.
I'm designing an iPhone app that will require precise calibration against the iPhone's accelerometer and gyro data. I will have to detect specific movements that I would eventually like to use to trigger code (think shake-to-shuffle, or undo).
Is there a good way of doing this already, or something you can come up with? Perhaps some way to generate a time/value graph of the movement data as it is being captured?
Movement data being captured: see the AccelerometerGraph sample app, which shows the data in real time: http://developer.apple.com/library/ios/#samplecode/AccelerometerGraph/Introduction/Intro.html
The data is pretty noisy; the gyro and accelerometer aren't good enough right now to track where the phone is in local 3D space, for example. The rotation, however, is very solid, and the orientation of the device can be tracked quite accurately. You may get the best results by building gestures out of rotation data instead of movement along an axis. Alternatively, basic directional gestures like shakes along an axis will work, as Jacob Jennings said.
A good starting point for accelerometer gesture recognition is this tutorial by Kevin Bomberry at AblePear:
http://blog.ablepear.com/2010/02/iphone-sdk-shake-rattle-roll.html
He sets a blanket threshold for the absolute value of the acceleration on any axis. I would generate an 'event' for the axis that had the highest acceleration when the threshold is broken (Z POSITIVE, X NEGATIVE, etc.) and push these onto an 'event history' queue. At the end of each didAccelerate call, evaluate the queue for patterns that match a gesture; for example:
X POSITIVE, X NEGATIVE, X POSITIVE, X NEGATIVE might be considered a 'shake' along that axis. This should provide a couple of different gesture commands; a sketch of the idea follows.
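The logic itself is platform-independent; here is a minimal sketch of the threshold-plus-event-queue idea (written in MATLAB like the other examples in this thread, with made-up samples and threshold):

names     = {'X', 'Y', 'Z'};
threshold = 1.8;                                             % made-up threshold, in g
acc       = [0 0 1; 2.1 0 1; -2.3 0 1; 1.9 0 1; -2.0 0 1];   % fake accelerometer samples
events    = {};                                              % the 'event history' queue
for i = 1:size(acc, 1)
    [peak, ax] = max(abs(acc(i,:)));                 % axis with the highest acceleration
    if peak > threshold
        if acc(i,ax) > 0, sgn = 'POS'; else, sgn = 'NEG'; end
        events{end+1} = [names{ax} '_' sgn];         % push an event, e.g. 'X_POS'
    end
end
% Alternating X_POS / X_NEG four times in a row counts as a shake along X.
if numel(events) >= 4 && isequal(events(end-3:end), {'X_POS','X_NEG','X_POS','X_NEG'})
    disp('shake gesture detected')
end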
See the following for a simple queue category addition to NSMutableArray:
How do I make and use a Queue in Objective-C?