I have recently started learning to code in MATLAB, i.e. programming simple experiments for cognitive psychology investigations. I wanted to ask whether someone knows how to define where to draw a dot on the screen, and how to define the fixation time before stimulus onset. I know that the code for defining a dot position is the following:
dotXpos = [?] * screenXpixels;
dotYpos = [?] * screenYpixels;
However, I don't know which coordinates define the exact middle of the screen.
Thank you in advance!
In Psychtoolbox, most of the fundamental drawing routines are provided through the Screen function. To draw a dot, you can use the DrawDots subcommand:
Screen('DrawDots', windowPtr, xy [,size] [,color] [,center] [,dot_type]);
Here, xy should contain the center positions of all the dots; for your single dot it is [dotXpos, dotYpos].
The center position of the screen is:
dotXpos = 0.5 * screenXpixels;
dotYpos = 0.5 * screenYpixels;
To implement a timed delay before the stimulus appears, you can use WaitSecs.
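Putting the two together, here is a minimal sketch of a trial, assuming a window has already been opened (so that windowPtr, screenXpixels, and screenYpixels exist); the dot size, color, and the 0.5 s fixation time are placeholder values:

% dot at the exact middle of the screen
dotXpos = 0.5 * screenXpixels;
dotYpos = 0.5 * screenYpixels;

% draw the fixation dot and show it
Screen('DrawDots', windowPtr, [dotXpos; dotYpos], 10, [255 255 255], [], 2);
Screen('Flip', windowPtr);

% fixation period before stimulus onset
WaitSecs(0.5);

% ... draw your stimulus here, then flip again
Screen('Flip', windowPtr);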
Please check out:
https://web.archive.org/web/20160515043421/http://docs.psychtoolbox.org/DrawDots
https://web.archive.org/web/20160419072932/http://docs.psychtoolbox.org/WaitSecs
I have two directions and I am trying to calculate the angle on a specific axis. The object from which the directions are derived is not static, so I'm struggling to get my head around the maths to work out what I need to do.
FYI, I have the start and end points that I used to calculate the directions, if they are needed.
Here's a diagram to show what I am looking for:
The above image is a top-down view in Unity and shows the angle I want.
The problem can be seen in the image: the directions are not at the same height, so I can't use the Vector3.Angle function, as it won't give me the correct angle value.
In essence, I want to know how much I would have to rotate the red line to the left (top view) so that it lines up with the blue line (top view).
The reason I need this is that I am trying to find a way of getting the side-to-side angles of fingers from my Leap Motion sensor.
This is a generic version of my other question:
Leap Motion - Angle of proximal bone to metacarpal (side to side movement)
That question provides more specific information about the problem, as well as more specific screenshots, if you need them.
UPDATE:
After re-reading my question I can see it wasn't particularly clear, so I will hopefully make it clearer here. I am trying to calculate the angle of a finger from the Leap Motion tracking data, specifically the angle of the finger relative to the metacarpal bone (the bone in the back of the hand). An easy way to demonstrate what I mean is to move your index finger side to side (i.e. towards your thumb and then away from your thumb).
I have put two diagrams below to hopefully illustrate this.
The blue line follows the metacarpal bone, which your finger lines up with in a resting position. What I want to calculate is the angle between the blue and red lines (marked with a green line). I am unable to use Vector3.Angle, as this value also takes into account the bending of the finger. I need some way of 'flattening' the finger direction out, essentially ignoring the bending and just looking at the side-to-side angle. The second diagram will hopefully show what I mean.
In this diagram:
The blue line represents the actual direction of the finger (taken from the proximal bone: knuckle to first joint).
The green line represents the metacarpal bone direction (the direction to compare to).
The red line represents what I would like to 'convert' the blue line to, whilst keeping its side-to-side angle (as seen in the first hand diagram).
It is also worth mentioning that I can't just always look at the x and z axes, as the hand will be moving and rotating.
I hope this helps clear things up, and I truly appreciate the help received thus far.
If I understand your problem correctly, you need to project your two vectors onto a plane. The vectors might not be in that plane currently (your "bent finger" problem) and thus you need to "project" them onto the plane (think of a tree casting a shadow onto the ground; the tree is the vector and the shadow is the projection onto the "plane" of the ground).
Luckily, Unity3D provides a method for projection (though the math is not that hard): Vector3.ProjectOnPlane, https://docs.unity3d.com/ScriptReference/Vector3.ProjectOnPlane.html
Vector3 a = ...;           // first direction, e.g. the finger (proximal bone) direction
Vector3 b = ...;           // second direction, e.g. the metacarpal direction
Vector3 planeNormal = ...; // normal of the plane to project onto, e.g. the palm normal
Vector3 projectionA = Vector3.ProjectOnPlane(a, planeNormal);
Vector3 projectionB = Vector3.ProjectOnPlane(b, planeNormal);
float angle = Vector3.Angle(projectionA, projectionB); // side-to-side angle, in degrees
What is unclear in your problem description is which plane you need to project onto. The horizontal plane? If so, planeNormal is simply the vertical. But if it is in reference to some other transform, you will need to define that first.
I was trying to animate different moves in MATLAB. This post, Matlab for loop animations, helped me a lot, but I would like to change moves after some time. Thus, after defining the trajectories, I animated them. Could you please have a look?
I wanted to keep the speed of the dot fixed, so I solved it with differential equations, which are defined in other files. I have also defined the times tf, tf1, and so on. I used exactly the same approach as suggested at the above link, with hPoint.
tf = 4*pi/15;    % time at which 4*pi is completed (speed = 15)
tf1 = 2 + tf;
tf2 = pi/15 + tf1;

[t,X] = ode45(@dif, [0 tf], [0 -15 -15]);     % first segment
p1 = [X(:,2) X(:,3)];

[t,X2] = ode45(@dif2, [tf tf1], [-15 -15]);   % second segment
p1a = [X2(:,1) X2(:,2)];

[t,X3] = ode45(@dif, [tf1 tf2], [0 -15 +15]); % third segment (the upper circle)
p1b = [X3(:,2) X3(:,3)];

% concatenate the three trajectory segments
D = [p1(:,1)  p1(:,2)
     p1a(:,1) p1a(:,2)
     p1b(:,1) p1b(:,2)];

hPoint = line('XData',D(1,1), 'YData',D(1,2), 'EraseMode',ERASEMODE, ...
    'Color','r', 'Marker','o', 'MarkerSize',50, 'LineWidth',1);
However, when I am trying to animate it, the dot stops for a moment and then continues, especially for the vector p1b, which is the third part (the upper circle). Any ideas about this behavior? Is there any way to make it smooth and animate at a constant speed? Thank you in advance!
I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or the entire image to align everything horizontally before I do my other processing. Ideally this would be done in MATLAB / ImageJ / ImageMagick. I'm currently trying to work out a method using Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the Image Processing Toolbox, you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
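A minimal MATLAB sketch of that approach (hedged: it assumes a grayscale image img in which the bar is the largest dark blob on a white background; the variable names are illustrative):

% threshold; invert so the bar is the foreground
bw = ~im2bw(img, graythresh(img));
% orientation (degrees from horizontal) of the largest blob, assumed to be the bar
stats = regionprops(bw, 'Orientation', 'Area');
[~, idx] = max([stats.Area]);
% rotate back to horizontal
aligned = imrotate(img, -stats(idx).Orientation, 'bilinear', 'crop');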
The problem you are solving is known as image registration or image alignment.
-The first thing you need to do is to threshold the image, so you end up with a black and white image. This will simplify the process.
-Then you need to calculate the mass center of the images and translate them to match each other's centers.
-Then you need to rotate the images to match each other. This could be done using the principal axis measure. The principal axes are the two axes that explain most of the variance in the population, which basically gives you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars in the same direction (see the sketch below).
-After the principal axis transformation you can try rotating the pictures a little bit more in each direction to try to optimise the rotation.
All the way through your translation and rotation you need a measure showing how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough. Otherwise you can use measures like mutual information.
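A hedged MATLAB sketch of the principal-axis step (it assumes a binary image bw with the bar as foreground; image-coordinate sign conventions may need checking against your data):

% mass center of the foreground pixels
[y, x] = find(bw);
cx = mean(x);
cy = mean(y);
% central second moments
mu20 = mean((x - cx).^2);
mu02 = mean((y - cy).^2);
mu11 = mean((x - cx) .* (y - cy));
% orientation of the principal axis, in radians from the x-axis
theta = 0.5 * atan2(2*mu11, mu20 - mu02);
% rotate so the principal axis becomes horizontal
aligned = imrotate(bw, rad2deg(theta), 'crop');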
You can also look at Procrustes analysis; MATLAB provides a procrustes function in the Statistics Toolbox.
You might want to look into the SIFT transform.
You should take as your reference image the rectangle that represents a worst-case guess for your bar, and determine the rotation matrix for that.
See http://www.vlfeat.org/overview/sift.html
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I have misread your question. That is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
threshold the image (seems easy, since your background is always white)
get a long horizontal line as a structuring element and dilate the image with it
rotate the structuring element and keep dilating the image, measuring the size of the dilation
the angle that maximizes it is the rotation angle you'll need to fix your image
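A hedged MATLAB sketch of this idea, using morphological opening with a rotated line element rather than plain dilation (opening keeps the bar only when the line is aligned with it, so the area of the result peaks at the bar's angle; the names and the element length are illustrative):

% binary image with the bar as foreground (white background assumed)
bw = ~im2bw(img, graythresh(img));
angles = -45:45;                        % candidate orientations, in degrees
score = zeros(size(angles));
for k = 1:numel(angles)
    se = strel('line', 51, angles(k));  % long line structuring element
    opened = imopen(bw, se);            % survives only where the bar matches the line
    score(k) = sum(opened(:));          % how much of the bar survived
end
[~, best] = max(score);
rotationAngle = angles(best);           % estimated bar angle to correct for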
There are several approaches to this problem, as suggested by other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform, which is good at detecting line orientations. Combining this with morphological processing and an image rotation after detecting the angle, you can create a system that corrects for angular variations. The basic steps would be:
Use morphological operations to reduce the bar to a single line-like blob.
Apply the Hough transform to this image.
Find the maximum in the transform output and use it to compute the orientation angle.
Use the angle to fix the original image.
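A hedged MATLAB sketch of steps 2-4 (assuming bw is the binary, line-like image from step 1; note that the Hough theta is the angle of the line's normal, so 90 degrees is added to get the line itself):

[H, theta, rho] = hough(bw);
peak = houghpeaks(H, 1);            % strongest line in the image
lineAngle = theta(peak(2)) + 90;    % from the normal's angle to the line's angle
if lineAngle > 90                   % pick the equivalent rotation closest to zero
    lineAngle = lineAngle - 180;
end
aligned = imrotate(bw, lineAngle, 'crop');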
A full example of this method comes with the Computer Vision System Toolbox. See
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
You can try a Givens or Householder transform; I prefer Givens.
It requires an angle, using cos(angle) and sin(angle) to build the Givens rotation matrix.
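For example, a minimal MATLAB sketch of a 2-D Givens rotation matrix (the angle and point are placeholders):

angle = deg2rad(30);              % example angle
G = [cos(angle) -sin(angle);
     sin(angle)  cos(angle)];     % rotates 2-D points counterclockwise
p = [1; 0];                       % example point
rotated = G * p;                  % the rotated point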
Suggest a method/algorithm to track the center point of the feature;
the feature is part of a video. As the video plays, the feature keeps moving around, but it never goes out of the rectangle of the size shown in the figure.
I wish to track the center point over the duration of the video.
*The red point is not part of the image; I have overlaid it to show the center point I wish to track.
A very simple way:
create an image with the pattern to recognize
do cross-correlation along X and Y with your frames
select the peaks of the X and Y correlation signals to identify the position (as sketched below)
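A hedged MATLAB sketch of this using normalized 2-D cross-correlation (it assumes a grayscale template of the feature and a grayscale frame; the file names are placeholders):

template = rgb2gray(imread('pattern.png'));   % the pattern to recognize
frame = rgb2gray(imread('frame.png'));        % one video frame
c = normxcorr2(template, frame);              % correlation surface
[~, idx] = max(c(:));                         % its global peak
[peakY, peakX] = ind2sub(size(c), idx);
% the peak marks the template's bottom-right corner; shift to its center
centerX = peakX - size(template, 2)/2;
centerY = peakY - size(template, 1)/2;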
There must be a lot of material around; start here: http://en.wikipedia.org/wiki/Video_tracking
Try using vision.PointTracker in the Computer Vision System Toolbox.
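A hedged sketch of how that tracker is typically set up (it assumes you have the first frame and the feature's initial center point [x0 y0]; the variable names are illustrative):

% initialize a KLT point tracker on the feature's center
tracker = vision.PointTracker();
initialize(tracker, [x0 y0], firstFrame);
% then, for each subsequent frame:
[points, validity] = step(tracker, nextFrame);  % updated position and a validity flag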
I am currently trying to reconstruct a 3D trajectory of a falling object like a ball or a rock out of a sequence of images taken from an iPhone video.
Where should I start looking? I know I have to calibrate the camera (I think I'll use the MATLAB Camera Calibration Toolbox by Jean-Yves Bouguet) and then find the vanishing point from the same sequence, but then I'm really stuck.
Read this: http://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/773-GG/lectA-773.htm
It explains 3D reconstruction using two cameras. Now, for a simple summary, look at the figure from that site:
You only know pr/pl, the image points. By tracing a line from their respective focal points Or/Ol you get two lines (Pr/Pl) that both contain the point P. Because you know the two cameras' origins and orientations, you can construct 3D equations for these lines. Their intersection is thus the 3D point; voila, it's that simple.
But when you discard one camera (let's say the left one), you only know the line Pr for sure. What's missing is depth. Luckily, you know the radius of your ball; this extra information can give you the missing depth information. See the next figure (don't mind my paint skills):
Now you know the depth, using the intercept theorem.
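In symbols (a hedged sketch: with f the focal length in pixels, R the ball's real radius, and r its radius in the image), the intercept theorem gives the depth as Z ≈ f * R / r.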
I see one last issue: the shape of the ball changes when projected at an angle (i.e. not perpendicular to your capture plane). However, you do know the angle, so compensation is possible, but I leave that up to you :p
edit: reply to @ripkars' comment (comment box was too small)
1) ok
2) aha, the correspondence problem :D Typically solved by correlation analysis or matching features (mostly matching followed by tracking in a video). (other methods exist too)
I haven't used the image/vision toolbox myself, but there should definitely be some things to help you on the way.
3) = calibration of your cameras. Normally you should only do this once, when installing the cameras (and again every time you change their relative pose).
4) yes, just put the Longuet-Higgins equation to work, i.e. solve:
P = C1 + mu1*R1*K1^(-1)*p1
P = C2 + mu2*R2*K2^(-1)*p2
with
P = 3D point to find
C = camera center (vector)
R = rotation matrix expressing the orientation of the camera in the world frame.
K = calibration matrix of the camera (containing internal parameters of the camera, not to be confused with the external parameters contained by R and C)
p1 and p2 = the image points
mu = parameter expressing the position of P on the projection line from camera center C to P (if I'm correct, R*K^(-1)*p expresses a line direction/vector pointing from C to P)
These are 6 equations containing 5 unknowns: mu1, mu2, and the three components of P.
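As a hedged MATLAB sketch, this system can be solved in least squares by stacking the two line equations (using the names defined above, with p1 and p2 as homogeneous pixel coordinates [u; v; 1]):

% ray directions from each camera center through the image points
d1 = R1 * (K1 \ p1);
d2 = R2 * (K2 \ p2);
% P - mu1*d1 = C1 and P - mu2*d2 = C2, stacked as A*x = b with x = [P; mu1; mu2]
A = [eye(3), -d1, zeros(3,1);
     eye(3), zeros(3,1), -d2];
b = [C1; C2];               % camera centers as 3x1 column vectors
x = A \ b;                  % least-squares solution of the 6-by-5 system
P = x(1:3);                 % the reconstructed 3D point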
edit: reply to @ripkars' comment (comment box too small once again)
The only computer vision library that pops up in my mind is OpenCV (http://opencv.willowgarage.com/wiki). But that's a C library, not MATLAB... I guess Google is your friend ;)
About the calibration: yes, if those two images contain enough information to match some features. If you change the relative pose of the cameras, you'll have to recalibrate of course.
The choice of the world frame is arbitrary; it only becomes important when you want to analyze the retrieved 3D data afterwards: for example, you could align one of the world planes with the plane of motion, which gives a simplified motion equation if you want to fit one.
This world frame is just a reference frame, changeable with a 'change of reference frame' transformation (a translation and/or rotation).
Unless you have a stereo camera, you will never be able to know the position for sure, even with a calibrated camera, because you don't know whether the ball is small and close or large and far away.
There are other methods with a single camera, based on a series of images with different focus. But I doubt that you can control the camera of your cell phone in that way.
Edit(1):
As @GuntherStruyf correctly points out, you can know the position if one of your inputs is the size of the ball.