How to calculate the iOS accelerometer horizontal plane - iPhone

I am working on an iOS AR project. I have finished the camera, GPS with altitude, and compass heading, but I cannot get the correct gravity vector to draw the horizontal plane. Please help me with my problem:
- Calculate the horizontal plane and draw it on the camera view (taking altitude into account would be even better).
- A sample project would also help me a lot.
Thank you very much.

Record some filtered accelerometer samples while not moving the device.
Compute the average of all those samples to get the down vector.
Vectors perpendicular to it make up the horizontal planes. Taking the dot product of the down vector with a vector along the Z axis (0, 0, 1) lets you work out the angle of the screen relative to the horizon (see the accelerometer axes).
I haven't tried this, but that would be my approach... hope it helps somehow.
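For what it's worth, here is an untested sketch of that approach in Swift with CoreMotion. It uses CMDeviceMotion's already-filtered gravity as the "filtered accelerometer samples"; the sample count and update rate are arbitrary choices.

    import CoreMotion
    import Foundation
    import simd

    let motionManager = CMMotionManager()   // keep a strong reference in a real app
    var samples: [SIMD3<Double>] = []

    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let g = motion?.gravity else { return }

        // Record filtered gravity samples while the device is held still.
        samples.append(SIMD3(g.x, g.y, g.z))
        guard samples.count == 120 else { return }   // ~2 s of data
        motionManager.stopDeviceMotionUpdates()

        // Average the samples to get the down vector.
        let down = simd_normalize(samples.reduce(.zero, +) / Double(samples.count))

        // Dot product with the device Z axis (0, 0, 1) gives the angle
        // between the screen normal and "down"; 90 degrees means the
        // screen is perpendicular to the horizon.
        let angle = acos(simd_dot(down, SIMD3(0, 0, 1))) * 180 / .pi
        print("angle between screen normal and down: \(angle) degrees")
    }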

Related

Rotate an image around the x or y axis in MATLAB

If a camera is looking at, say, 6 evenly spaced dots in the real world, they would look like the image below when the camera views them straight on, with no rotation around the x, y, or z axis.
The z axis is perpendicular to the image sensor, so rotation around the z axis is simple: it's just a tilt of the image. If I were to rotate the camera (or the objects being looked at) around the x axis (taking the x axis as up-down), the rows and columns would no longer be parallel and would project off to a vanishing point, like this.
What I would like to do is take a 2-dimensional image of, say, dots, and apply different rotations around the x, y, and z axes independently. I've experimented with reading my image into MATLAB and multiplying by a rotation matrix, or even a full camera matrix, but I can't figure out how to take a 2D image, simulate rotating it around the x axis, and save the result back to an image, so that my original grid of dots looks like the bottom image, with lines going off to a vanishing point. I've seen some examples using imwarp, but I didn't see how to set the angle of rotation. I'm working on camera calibration, so I really want to be able to specify an angle of rotation around each axis.
Thanks for any help.
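The underlying math is small enough to sketch (shown here in Swift with simd for concreteness; the focal length, principal point, and angle are made-up values). For a camera rotating about its own center, the map from original pixels to rotated-view pixels is the homography H = K * R * K^-1; in MATLAB you could build the same 3x3 matrix and pass it to imwarp through a projective2d transform.

    import Foundation
    import simd

    // Pinhole intrinsics of the synthetic camera (made-up values).
    let f = 800.0                       // focal length in pixels
    let cx = 320.0, cy = 240.0          // principal point

    // simd matrices are column-major: each SIMD3 below is one column.
    let K = simd_double3x3(columns: (SIMD3(f, 0, 0),
                                     SIMD3(0, f, 0),
                                     SIMD3(cx, cy, 1)))

    // Rotation around the x axis by theta.
    let theta = 10.0 * .pi / 180
    let Rx = simd_double3x3(columns: (SIMD3(1, 0, 0),
                                      SIMD3(0, cos(theta), sin(theta)),
                                      SIMD3(0, -sin(theta), cos(theta))))

    // Homography from original pixels to rotated-view pixels:
    // valid for a pure rotation about the camera center.
    let H = K * Rx * K.inverse

    // Warp a single dot at pixel (u, v).
    func warp(_ u: Double, _ v: Double) -> SIMD2<Double> {
        let p = H * SIMD3(u, v, 1)
        return SIMD2(p.x / p.z, p.y / p.z)   // perspective divide
    }

    // Dots on different rows shift by different amounts -> vanishing point.
    print(warp(320, 100), warp(320, 380))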

How to draw a smile using Vision and its face landmark observations?

I am trying to build a smile detector over real-time video (front camera), drawing a UIBezierPath in screen coordinates by detecting face landmarks with VNDetectFaceLandmarksRequest and landmarks.outerLips, and calculating the Y offset between the upper points, ideally without using CoreML. But I only seem able to get the normalized points for the landmark, and these points have their own coordinate system; I'm not sure how to convert each point to the screen coordinate system.
This answer from @Rickster seems to be in the right direction, but I'm not able to fully grasp the next steps:
How to convert normalized points retrieved from VNFaceLandmarkRegion2D
Current output:
Desired result:
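For what it's worth, here is an untested sketch of the conversion in Swift. VNFaceLandmarkRegion2D's pointsInImage(imageSize:) resolves both normalizations (landmark points are normalized to the face box, and the face box to the image); the flip to UIKit's top-left origin and the assumption that the video exactly fills the view are the parts you would adapt (e.g. by converting through AVCaptureVideoPreviewLayer instead).

    import UIKit
    import Vision

    // Build a UIBezierPath over the outer lips in view coordinates.
    // Assumes the camera image exactly fills `viewSize` (no letterboxing).
    func outerLipsPath(for face: VNFaceObservation,
                       imageSize: CGSize,
                       viewSize: CGSize) -> UIBezierPath? {
        guard let lips = face.landmarks?.outerLips else { return nil }

        // pointsInImage(imageSize:) undoes both normalizations at once.
        let imagePoints = lips.pointsInImage(imageSize: imageSize)

        let path = UIBezierPath()
        for (i, p) in imagePoints.enumerated() {
            // Vision's origin is bottom-left; UIKit's is top-left, so flip y.
            let viewPoint = CGPoint(x: p.x / imageSize.width * viewSize.width,
                                    y: (1 - p.y / imageSize.height) * viewSize.height)
            if i == 0 { path.move(to: viewPoint) } else { path.addLine(to: viewPoint) }
        }
        path.close()
        return path
    }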

Device motion in a constrained environment

I am trying to solve a seemingly easy problem related to device motion but can't figure out how to solve it. I have a situation where an iPhone will move in a circle in the x-y plane. I need to find the angle between the iPhone's x and y axes and the line connecting the iPhone to the center of rotation. The iPhone may be in portrait mode, in landscape mode, or at any angle in between relative to that line. See the attached picture that explains the scenario.
The yaw change for a given rotation is the same regardless of this angle, so that doesn't really help. I am hoping there is some relationship I can calculate for every small rotation and then fit over the entire motion, but I can't figure that out yet.
I appreciate any help or pointers.
(I am writing in pseudo-code since I don't know the API you are using, sorry.)
Here is how to get the axis and angle of the rotation.
Get the rotation matrices R1 and R2 at the beginning and end of your rotation directly from the API (see CMAttitude and CMRotationMatrix). Then, determine the angle and axis of the rotation R that brings R1 to align with R2. You get R as follows:
R = R1 * transpose(R2)
The angle of rotation R is
angle = acos((trace(R)-1)/2)
and its axis is
axis = { R[3][2]-R[2][3], R[1][3]-R[3][1], R[2][1]-R[1][2] }
For further details, please check Rotation matrix to axis angle and also Axis-angle.
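In Swift with CoreMotion this could look roughly like the following (untested; note that simd matrices are column-major while CMRotationMatrix's m11...m33 fields are named row-first):

    import CoreMotion
    import Foundation
    import simd

    // Convert a CMRotationMatrix (fields named m<row><col>) to simd,
    // which stores matrices column by column.
    func matrix(_ m: CMRotationMatrix) -> simd_double3x3 {
        simd_double3x3(columns: (SIMD3(m.m11, m.m21, m.m31),
                                 SIMD3(m.m12, m.m22, m.m32),
                                 SIMD3(m.m13, m.m23, m.m33)))
    }

    // r1, r2: attitude.rotationMatrix at the start and end of the motion.
    func axisAngle(from r1: CMRotationMatrix, to r2: CMRotationMatrix)
            -> (axis: SIMD3<Double>, angle: Double) {
        let R = matrix(r1) * matrix(r2).transpose       // R = R1 * transpose(R2)

        let angle = acos((R[0][0] + R[1][1] + R[2][2] - 1) / 2)  // trace formula

        // Same differences as in the formula above; simd indexing is
        // R[column][row], so R[1][2] is the element called R[3][2] there.
        let axis = SIMD3(R[1][2] - R[2][1],
                         R[2][0] - R[0][2],
                         R[0][1] - R[1][0])
        return (simd_normalize(axis), angle)
    }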
I am not sure how to get the angle you are interested in. Nevertheless, I hope that the above helps.
Please don't use roll, pitch, and yaw for anything other than display, and don't try to integrate them; nothing good will come of it.
Anyway, behind the rotation matrices there is integration. In other words, somebody has already done the integration for you.

Eye position mapping to screen pixels

I am currently working on a project called "eye-controlled cursor" using MATLAB.
I have a few stages before I extract the center of the iris (which can be considered the pupil location): face detection -> eye detection -> iris detection. Finally, I obtain the center of the iris, as shown in the figure.
Now I am trying to map this (X, Y) position to my computer screen pixels (1366 x 768). Most of the journal papers I have found require a reference point such as the lips, the nose, or an eye corner, but I am only able to extract the center of the iris by thresholding. How can I map this (X, Y) position to my screen pixels?
Well, you either have to fix the head in a certain position (which isn't very practical) or you will have to adapt to the face position. Depending on your image, you will have to choose points that are always present and easy to detect. If you just have one point (like the nose), you can only adjust for the x/y shift of the head. If you have more points (like the four corners of the eyes, the nose, maybe the corners of the mouth), you can also extract the three rotational values of the head and therefore calculate the direction of sight much better. For a first approach, I guess just the two inner corners of the eyes (they are "easy" to detect) will do.
I would also recommend using a calibration sequence: present the user with a sequence of four red points in the corners of the screen and have them look at each one. You can then record the pupil positions and interpolate between them.
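A minimal sketch of that calibration idea, shown in Swift for concreteness (all names here are hypothetical, and it assumes the pupil-to-screen mapping is roughly linear between the four corner readings):

    struct Point { var x: Double; var y: Double }

    // Map a pupil position to screen pixels by interpolating between the
    // four pupil readings recorded while the user looked at the corners.
    func screenPoint(pupil: Point,
                     topLeft: Point, topRight: Point,
                     bottomLeft: Point, bottomRight: Point,
                     screenWidth: Double = 1366,
                     screenHeight: Double = 768) -> Point {
        // Treat the calibration quadrilateral as an axis-aligned rectangle
        // by averaging opposite edges (a crude but serviceable first pass).
        let left   = (topLeft.x + bottomLeft.x) / 2
        let right  = (topRight.x + bottomRight.x) / 2
        let top    = (topLeft.y + topRight.y) / 2
        let bottom = (bottomLeft.y + bottomRight.y) / 2

        // Normalized position inside the calibrated range, clamped to [0, 1].
        let u = max(0, min(1, (pupil.x - left) / (right - left)))
        let v = max(0, min(1, (pupil.y - top) / (bottom - top)))

        return Point(x: u * screenWidth, y: v * screenHeight)
    }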

Using the iPhone accelerometer in a car

I want to use the iPhone's accelerometer to detect motion while driving. I'm a bit confused about what the accelerometer actually measures, especially when driving through a curve.
As you can see in the picture, a car driving through a curve involves two forces. One is the centripetal force and the other is the velocity. Imagine the iPhone is placed on the dashboard with the +y axis pointing to the front, the +x axis to the right, and the +z axis to the top.
My question now is: what acceleration will be measured when the car drives through this curve? Will the g-force be measured on the -x axis, or will it appear on the +y axis?
Thanks for helping!
UPDATE!
For those interested: as one of the answers suggested, it measures both. The accelerometer is affected by the centrifugal force and by changes in speed, resulting in an acceleration vector that is a combination of the two.
I think it will measure both. But don't forget that the sensor will measure gravity as well, so even when your car is not moving you will still get accelerometer readings. A nice talk on sensors in smartphones: http://www.youtube.com/watch?v=C7JQ7Rpwn2k&feature=results_main&playnext=1&list=PL29AD66D8C4372129 (it's about Android, but the same types of sensors are used in the iPhone).
The accelerometer measures the acceleration of the resultant force applied to it (velocity is not a force, by the way). In this case the force is F = g + w + c, i.e., the vector sum of gravity, the centrifugal force (the reaction to the steering centripetal force; it points away from the center of the turn), and the car's acceleration force (a force changing the absolute value of the instantaneous velocity; it points along the velocity vector). Provided the Z axis of the accelerometer always points along the gravity vector (which is rarely the case in an actual car), the values of the g, w, and c accelerations can be read from the Z, X, and Y coordinates respectively.
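On iOS, CMDeviceMotion already performs this separation for you: gravity holds g, and userAcceleration holds the rest (w + c above), both in device coordinates. A rough sketch, assuming the dashboard placement from the question (+y forward, +x right):

    import CoreMotion

    let manager = CMMotionManager()   // keep a strong reference in a real app
    manager.deviceMotionUpdateInterval = 1.0 / 50.0
    manager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let m = motion else { return }
        // gravity is g; userAcceleration is everything else (w + c),
        // both expressed in the device's own axes, in units of g.
        let lateral = m.userAcceleration.x   // centrifugal component (w)
        let forward = m.userAcceleration.y   // speeding up / braking (c)
        print("lateral: \(lateral) g, forward: \(forward) g, gravity z: \(m.gravity.z)")
    }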
Unless you are in free fall, the g-force (gravity) is always measured. If I understand your setup correctly, the g-force will appear on the z axis, the axis that is vertical in the Earth frame of reference. I cannot tell whether it will be +z or -z; it is partly a matter of convention, so you will have to check for yourself.
UPDATE: If the car is also going up/downhill then you have to take the rotation into account. In other words, there are two frames of reference: the iPhone's frame of reference and the Earth frame of reference. If you would like to deal with this situation, then please ask a new question.
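If you do want to work in the Earth frame while the device frame pitches up and down with the road, one approach is to rotate userAcceleration through the attitude's rotation matrix. An untested sketch (depending on the convention you may need the transpose, so verify the direction on a device):

    import CoreMotion
    import simd

    // Express device-frame user acceleration in the reference (Earth) frame,
    // so uphill/downhill pitch no longer mixes gravity into the driving axes.
    func earthFrameAcceleration(_ m: CMDeviceMotion) -> SIMD3<Double> {
        let r = m.attitude.rotationMatrix   // fields named m<row><col>
        let R = simd_double3x3(columns: (SIMD3(r.m11, r.m21, r.m31),
                                         SIMD3(r.m12, r.m22, r.m32),
                                         SIMD3(r.m13, r.m23, r.m33)))
        let a = m.userAcceleration
        return R * SIMD3(a.x, a.y, a.z)     // acceleration in the Earth frame
    }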