I am using a Raspberry Pi camera, and the problem at hand is how to find the best position for it in order to fully see an object.
The object looks like this:
The question is how to find the perfect position given that the camera is placed in the centre of the above image. Ideally the camera will capture only the object, as the idea is to get the camera as close as possible.
Take a picture with your camera, save it as a JPG, then open it in a viewer that allows you to inspect the EXIF header. If you are lucky you should see the focal length (in mm) and the sensor size. If the latter is missing, you can probably work it out from the sensor's spec sheet (see here to start). From the two quantities you can work out the angles of the field of view: HorizFOV = 2 * atan(0.5 * sensor_width / focal_length), VertFOV = 2 * atan(0.5 * sensor_height / focal_length). From these angles you can derive an approximate distance from your subject that will keep it fully in view.
Note that these are only approximations. Nonlinear lens distortion will produce a slightly larger effective FOV, especially near the corners.
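To make the arithmetic concrete, here is a minimal sketch in plain C (valid Objective-C as well). The focal length and sensor dimensions are example values for the Raspberry Pi camera v1; substitute whatever your EXIF header or spec sheet says. The object size is made up:

#include <math.h>
#include <stdio.h>

int main(void) {
    // Example values only -- replace with the focal length and sensor
    // dimensions from your EXIF header or the sensor's spec sheet.
    double focal_length  = 3.60;   // mm (e.g. Raspberry Pi camera v1)
    double sensor_width  = 3.76;   // mm
    double sensor_height = 2.74;   // mm

    // Full field-of-view angles: twice the half-angle.
    double horiz_fov = 2.0 * atan(0.5 * sensor_width  / focal_length);
    double vert_fov  = 2.0 * atan(0.5 * sensor_height / focal_length);

    // Distance needed so an object of a given size fits fully in view:
    // the object's half-size must fit within the half-angle.
    double object_width  = 100.0;  // mm, made-up example object
    double object_height = 60.0;   // mm
    double dist_w = (0.5 * object_width)  / tan(0.5 * horiz_fov);
    double dist_h = (0.5 * object_height) / tan(0.5 * vert_fov);
    double distance = fmax(dist_w, dist_h);  // the farther of the two wins

    printf("FOV: %.1f x %.1f deg, min distance: %.1f mm\n",
           horiz_fov * 180.0 / M_PI, vert_fov * 180.0 / M_PI, distance);
    return 0;
}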
My application has a recording session. When the user starts a session I begin collecting data from the device's CMMotionManager object and store it in Core Data to process and present later. The data I'm collecting includes GPS, accelerometer, and gyro data, sampled at 10 Hz.
Currently I'm struggling to calculate the lean angle of the device from the motion data. It is possible to work out which side of the device faces the ground using the gravity data, but I want to calculate the left or right lean angle between the user and the ground regardless of travel direction.
This problem requires some linear algebra to solve. For example, at some point I must calculate the equation of a 3D line on a calculated plane. I have been working on this for a day and it keeps getting more complex, and I'm not good at math at all. Some math examples related to the problem would be appreciated too.
It depends on what you want to do with the collected data and on which way the user will carry the recording iPhone in her/his pocket. The reason is that Euler angles are neither a safe nor a unique way to express a rotation. Consider a situation where the user puts the phone upright into his jeans' back pocket and then turns left by 90°. Because CMAttitude is related to a device lying flat on the table, you have two subsequent rotations for (pitch=x, roll=y, yaw=z) according to this picture:
pitch +90° for getting the phone upright => (90, 0, 0)
roll +90° for turning left => (90, 90, 0)
But you can get the same position by:
yaw +90° for turning the phone left => (0, 0, 90)
pitch -90° for making the phone upright => (-90, 0, 90)
You see two different representations, (90, 90, 0) and (-90, 0, 90), for getting to the same rotation, and there are more of them. So you press the Start button, do some fancy rotations to put the phone into the pocket, and you are in trouble because you can't rely on Euler angles when doing more complex motions (see gimbal lock for more headaches on this ;-)
Now the good news: you are right, linear algebra will do the job. What you can do is force your users to put the phone always in the same position, e.g. fixed upright in the right back pocket, and calculate the angle(s) relative to the ground by building the dot product of the gravity vector g = (x, y, z) from CMDeviceMotion and the position vector p, which is the -Y axis (0, -1, 0) in the upright position:
g • p = x*0 + y*(-1) + z*0 = -y = ||g|| * 1 * cos(alpha)
=> alpha = arccos(-y / ||g||) as the total angle. Note that CMDeviceMotion reports gravity in units of g, so ||g|| ≈ 1; if you work with raw accelerometer values in m/s² instead, ||g|| is the gravitational acceleration, about 9.81.
To get the left-right lean angle and the forward-back angle we use the tangent:
alphaLR = arctan (x/y)
alphaFB = arctan (z/y)
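As a minimal sketch in Objective-C, assuming the phone is fixed upright as described and `motionManager` is a CMMotionManager you have already started with device motion updates:

#import <CoreMotion/CoreMotion.h>

// Assumes the phone sits upright in the pocket, so the reference
// position vector is the -Y axis as described above.
CMDeviceMotion *motion = motionManager.deviceMotion;
CMAcceleration g = motion.gravity;     // in units of g, so ||g|| ≈ 1

double alpha   = acos(-g.y);           // total angle: arccos(-y / ||g||)
double alphaLR = atan2(g.x, -g.y);     // left-right lean; atan2 is a sign-safe
double alphaFB = atan2(g.z, -g.y);     // forward-back lean; arctan(x/y) variant

The atan2 form avoids the division blowing up when y approaches 0; the signs depend on which pocket and orientation you standardise on.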
[UPDATE:]
If you can't rely on having the phone at a predefined position like (0, -1, 0) in the equations above, you can only calculate the total angle but not the specific ones alphaLR and alphaFB. The reason is that you only have one axis of the new coordinate system where you need two of them. The new Y axis y' will then be defined as the average gravity vector, but you don't know your new X axis because every vector perpendicular to y' would be valid.
So you have to provide further information, like letting the users walk a longer distance in one direction without deviating and using GPS and magnetometer data to get the second axis z'. Sounds pretty error prone in practice.
The total angle is no problem as we can replace (0, -1, 0) with the average gravity vector (pX, pY, pZ):
g • p = x*pX + y*pY + z*pZ = ||g|| * ||p|| * cos(alpha) = ||g||^2 * cos(alpha)
alpha = arccos((x*pX + y*pY + z*pZ) / ||g||^2)
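A sketch of that calibrated variant, where `averageGravity` is an illustrative vector you captured and averaged during a calibration step:

// p = average gravity vector recorded while the phone was in its
// neutral pocket position (calibration step; the name is illustrative).
CMAcceleration p = averageGravity;
CMAcceleration g = motionManager.deviceMotion.gravity;

double dot   = g.x * p.x + g.y * p.y + g.z * p.z;
double normG = sqrt(g.x * g.x + g.y * g.y + g.z * g.z);  // ≈ 1 in g units
double normP = sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
double alpha = acos(dot / (normG * normP));              // total lean angle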
Two more things to bear in mind:
Different persons wear different trousers with different pockets. So the gravity vector will differ even for the same person wearing other clothes, and you might need some kind of normalisation
CMMotionManager does not work in the background, i.e. the user must not push the standby button
If I understand your question, I think you are interested in getting the attitude of your device. You can do this using the attitude property of the CMDeviceMotion object that you get from the deviceMotion property of the CMMotionManager object.
There are two different angles in the CMAttitude class that you might be interested in: roll and pitch. If you imagine your device as an airplane with the propeller at the top (where the headphone jack is), pitch is the angle the plane/device would make with the ground if the plane were in a climb or dive. Meanwhile, roll is the angle that the "wings" would make with the ground if the plane were banking or in mid barrel roll.
(BTW, there is a third angle called yaw that I think is not relevant for your question.)
The angles will be given in radians, but it's easy enough to convert them to degrees if that's what you want (by multiplying by 180 and then dividing by pi).
Assuming I understand what you want, the good news is that you may not need to understand any linear algebra to capture and use these angles. (If I'm missing something, please clarify and I'd be happy to help further.)
UPDATE (based on comments):
The attitude values in the CMAttitude object are relative to the ground (i.e., the default reference frame has the Z-axis vertical, pointing in the opposite direction to gravity), so you don't have to worry about cancelling out gravity. For example, if you lay your device flat on a table top and then roll it up onto its side, the roll property of the CMAttitude object will change from 0 to plus or minus 90 degrees (±π/2 radians), depending on which side you roll it onto. Likewise, if you start it lying flat and then gradually stand it up on its end, the same will happen to the pitch property.
While you can use the pitch, roll, and yaw angles directly if you want, you can also set a different reference frame (e.g., a different direction for "up"). To do this, just capture the attitude in that orientation during a "calibration" step and then use CMAttitude's multiplyByInverseOfAttitude: method to transform your attitude data to the new reference frame.
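For example (a sketch; `motionManager` is assumed to already be delivering device-motion updates):

// Calibration step: remember the attitude of the chosen "up" orientation.
CMAttitude *referenceAttitude = [motionManager.deviceMotion.attitude copy];

// For each later sample: express the current attitude in that reference frame.
CMAttitude *attitude = [motionManager.deviceMotion.attitude copy];
[attitude multiplyByInverseOfAttitude:referenceAttitude];

double pitchDeg = attitude.pitch * 180.0 / M_PI;   // now relative to calibration
double rollDeg  = attitude.roll  * 180.0 / M_PI;
double yawDeg   = attitude.yaw   * 180.0 / M_PI;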
Even though your question only mentioned capturing the "lean angle" (with the ground), you will probably want to capture at least two of the three attitude angles (e.g., pitch and either roll or yaw, depending on what the user is doing), potentially all three, if the device is going to be in a person's pocket. (The device could rotate in the pocket in various ways if the pocket is baggy, for example.) For the most part, though, I think you will be able to rely on just two of the three (unless you see radical shifts in yaw throughout the course of a recording session). For example, in my jeans pocket the phone is usually nearly vertical. Thus, for me, pitch would vary a whole bunch as I walk, sit or run; roll would vary whenever I change the direction I'm facing; yaw, meanwhile, would not vary much at all (unless I do cartwheels, which I can't!). So yaw can probably be ignored for me.
To summarize the main point: to use these attitude angles, you don't need to do any linear algebra, nor worry about gravity (although you may want to use this for other purposes, of course).
UPDATE 2 (based on Kay's new post):
Kay just replied and showed how to use gravity and linear algebra to make sure your angles are unique. (And, btw, I think you should give the bounty to that post, fwiw.)
Depending on what you want to do, you may want to use this math. You would want to use the linear algebra and gravity if you need a standardized way of "talking about" and/or comparing attitudes over the course of your recording session. If you just want to visualize them, you can probably still get away with not using the increased complexity. (For example, visualizing (pitch=90, roll=0, yaw=0) should be the same as visualizing (pitch=0, roll=90, yaw=90).) In my approach above, while you could have multiple ways of referring to the "same" attitude, none of them is actually wrong, per se. They will still give you the angles relative to the ground.
But the fact that the gyroscope can switch from one valid description of an attitude to another means that what I wrote above about getting away with only 2 of the 3 components needs to be corrected: because of this, you will need to capture all three components, no matter what. Sorry.
I'm currently working on a mapping app for iPhone. I've created some custom maps of various sizes, but I've run into an issue:
I would like to implement the ability for users' locations to be checked automatically, but since I'm not using a MapView this is much more difficult (see below).
Given the different coordinate systems, I would like to receive a geolocation (green dot) and translate it into a pixel location on a custom map.
I've got the geolocations of the 4 corners, but the rect is askew. I've calculated the angle of rotation, but I'm just generally confused.
Note: the maps aren't big enough for the spherical nature of the Earth to come into the calculation.
Any help is appreciated!
To convert a geolocation to a point you first need to understand the mapping; assuming you are using the Mercator projection:
x = R * long
y = R * ln((1 + sin(lat)) / cos(lat))
where lat and long are in radians and R is the radius of the Earth. Over the full longitude range x spans -R*PI to R*PI,
so to get it within view.frame.size you may have to divide by a scale factor.
For the difference between two points:
x2 - x1 = R * (long2 - long1)
y2 - y1 = R * ( ln((1 + sin(lat2)) / cos(lat2)) - ln((1 + sin(lat1)) / cos(lat1)) )
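As a sketch (the helper names and the two-corner setup are illustrative, and this assumes the map image is axis-aligned; for your askew rect you would additionally rotate the result by the angle you calculated):

#import <CoreGraphics/CoreGraphics.h>
#include <math.h>

// Mercator projection of (lat, long), both in radians. R cancels out
// when scaling to pixels, so it is omitted (R = 1).
static void mercator(double lat, double lon, double *x, double *y) {
    *x = lon;
    *y = log((1.0 + sin(lat)) / cos(lat));
}

// Map a geolocation to a pixel using the geolocations of the image's
// top-left and bottom-right corners (all angles in radians).
static CGPoint pixelForLocation(double lat, double lon,
                                double tlLat, double tlLon,
                                double brLat, double brLon,
                                CGFloat width, CGFloat height) {
    double x, y, x0, y0, x1, y1;
    mercator(lat,   lon,   &x,  &y);
    mercator(tlLat, tlLon, &x0, &y0);
    mercator(brLat, brLon, &x1, &y1);
    return CGPointMake(width  * (x - x0) / (x1 - x0),
                       height * (y - y0) / (y1 - y0));  // pixel y grows downward
}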
I am trying to build an app for the iPhone 4 which enables the user to "point" at a hardcoded destination so that a dot appears where the destination is located.
First, I use the compass to make a horizontal compass (this will cover the left/right rotation):
// Heading
nowHeading = heading.trueHeading;
// Shift image (horizontal compass)
float shift = bearing - nowHeading;
destinationImage.center = CGPointMake(shift+160, destinationImage.center.y);
I shift the dot 160 points because the screen is 320 points wide. My question is now: how can I expand this code to handle up and down? Meaning that if I point the phone down at the table, the dot won't show; I have to point (like taking a picture) at the destination in order for it to be drawn on the screen. I've already implemented the accelerometer, but I don't know how to unite these components to solve my problem.
The shift per degree of bearing should depend on the field of view of the camera. For the iPhone 4 the horizontal angular view is 47.5°, so 320 points / 47.5° ≈ 6.7 points per degree; use that to scale the horizontal shift. You also have to add an adaptive filter to the accelerometer data; you can get one from Apple's AccelerometerGraph sample project.
You have the rotation in one axis (bearing); you should get the rotation on the other two from the accelerometers. The atan2 of two axes gives you the rotation about the third. Go to UIAcceleration and imagine an axis physically piercing the device if that helps, and do double xAngle = atan2(acceleration.y, acceleration.z); Once you have the up-down rotation you can repeat what you did for the horizontal with the vertical field of view, e.g. 60° for the iPhone.
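Putting the two axes together, a rough sketch using the question's variables (`bearing`, `nowHeading`, `destinationImage`) and an `acceleration` sample that has already been low-pass filtered; the 90° offset and the signs depend on how the device is held, so they will need tuning:

// Points per degree from the fields of view mentioned above.
const float kHorizFOV = 47.5f;                 // iPhone 4 horizontal
const float kVertFOV  = 60.0f;                 // approximate vertical
float ppdH = 320.0f / kHorizFOV;               // ≈ 6.7 points per degree
float ppdV = 480.0f / kVertFOV;

// Horizontal: destination bearing vs. compass heading.
float hShift = (bearing - nowHeading) * ppdH;

// Vertical: tilt about the x axis from the filtered accelerometer;
// atan2 of two axes gives the rotation about the third.
double xAngle  = atan2(acceleration.y, acceleration.z);   // radians
double tiltDeg = xAngle * 180.0 / M_PI + 90.0;            // 0 ≈ pointing at horizon
float vShift   = (float)(tiltDeg * ppdV);                 // offset from screen centre

destinationImage.center = CGPointMake(160.0f + hShift, 240.0f + vShift);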
That is going to be one rough implementation :) but achieving smooth movement is difficult. One thing you can do is use the gyros to get a faster response and correct their signal periodically with the accelerometers. See this talk for the troubles ahead: Sensor Fusion on Android Devices. Here is a website dedicated to the Kalman filter. If you dare with quaternions I recommend "Visualizing Quaternions" by Andrew J. Hanson.
It sounds like you are trying to do a style of Augmented Reality. If that is the case, there are several libraries and sample code suggested here:
Augmented Reality
Is there a way to obtain a relative rotation from core motion?
What I need is: how much it rotated in one axis and which direction (+ sign = anti-clockwise, - = clockwise, according to the right-hand rule).
I have found the property rotationRate, but I am not sure how I would extract the angle from it, as it gives me radians per second.
I have tried all kinds of things over the last days but nothing gives me stable values. I tried taking timed samples of the Core Motion data using an NSTimer and calculating the difference between two samples, so I would know how much the device rotated since the last sample, but from time to time it gives me crazy numbers like 13600 degrees even when the iPhone is resting on the table.
Any thoughts on how this can be accomplished?
thanks
There is indeed. You can get what you're looking for by drilling down into the properties of CMMotionManager, through CMDeviceMotion and finally to CMAttitude. The attitude of the device is defined as:
the orientation of a body relative to a given frame of reference.
In the case of DeviceMotion's CMAttitude, that frame of reference is established by the framework when starting device motion updates. From that point in time on, the attitude of the device is reported relative to that reference frame (not relative to the previous frame).
The CMAttitude class provides some handy built-in functionality to convert a CMAttitude to a form that is actually useful for something, like Euler angles, a rotation matrix, or a quaternion. You sound like you're looking for the Euler angle representation (pitch, yaw, roll).
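For example, a minimal sketch that reports how much the device rotated since a previous sample, using the attitude rather than integrating rotationRate yourself:

#import <CoreMotion/CoreMotion.h>

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
[motionManager startDeviceMotionUpdates];

// ... once updates are flowing, keep the last attitude around:
CMAttitude *previous = [motionManager.deviceMotion.attitude copy];

// On the next sample: turn the current attitude into a delta.
CMAttitude *current = [motionManager.deviceMotion.attitude copy];
[current multiplyByInverseOfAttitude:previous];

// Euler-angle view of the delta, in radians; the sign follows the
// right-hand rule (positive = anti-clockwise about the axis).
double deltaPitch = current.pitch;
double deltaRoll  = current.roll;
double deltaYaw   = current.yaw;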
The answer provided above isn't quite accurate, though it's probably sufficient to answer this question. Core Motion tries to determine the device's absolute attitude at all times, meaning that the definition of the axes can vary depending on the device's orientation. For example, if the device is face-up, then pitch up/down is a rotation about the y-axis, but if the device is in landscape orientation, then pitch is a rotation about the z-axis (perpendicular to the plane of the screen). This is somewhat helpful if your application will only be used in one orientation, or you want a delta like the question asked for, but makes it excessively complicated if you want to know absolute orientation.