How accurate is the altitude measurement from a mobile phone's GPS? I've gathered that the lat/long can vary by hundreds of meters but is that same level of uncertainty present in the altitude values?
In particular I'm working with Windows Phone 7 but I'm sure that this question applies to other mobile devices. I expect that there are only a few GPS chip manufacturers and the same chip would be used by different phones.
This question deals with how it is calculated but it doesn't mention anything about accuracy or reliability.
I don't know specifically about the iPhone, but elevation is often much less accurate than X,Y information from a GPS. Here are some sources of information about this.
It requires fairly complicated math to understand fully, but no: the altitude on ANY GPS is not as accurate as the lat/long position.
http://weather.gladstonefamily.net/gps_elevation.html
http://gpsinformation.net/main/altitude.htm
http://www.sawmillcreek.org/showthread.php?83752-Do-civilian-GPS-unts-do-accurate-altitude
Quote from the third link: "The altitude error is much greater because it is a satellite based system. If you think about it, the best satellite positions for a perfect read are going to be evenly distributed in an imaginary sphere surrounding you. Unfortunately, since you are standing on the earth, that rules out half the sphere because you need line-of-sight to the satellite. As a practical matter, it even rules out a constellation with satellites close to the horizon. So, generally speaking, your fixes will be overhead--which means that the cumulative error is mainly in the vertical plane. So, I think the offhand estimate is vertical error = 1.5x horizontal error."
Vertical error is typically quoted as about 1.5 × the horizontal error, so a 10 m horizontal fix implies roughly 15 m of vertical uncertainty. You must also allow for the local deviation between the geodetic model and the actual planetary surface: the oblate-spheroid model is only an approximation, even when local correction tables are in use.
Since trilateration hands back a point in space, the same inaccuracies would apply to the Z axis as they do to X and Y.
In other words, it's no more and no less accurate than the LAT/LONG fix.
Related
When locating a cell phone by triangulation, we need to find the distance between the phone and a tower using the signal strength measured on the phone. Is there an equation that calculates the distance between tower and phone from the signal strength? If so, what is that equation?
Please help
Theoretically, you should refer to an EM textbook or the Wikipedia article (https://en.wikipedia.org/wiki/Signal_strength_in_telecommunications). The relationship depends on your frequency band (GSM/3G/4G/5G/etc.) and on the built environment: ground with lots of tall concrete buildings blocks the signal much more aggressively than a rural area with a grass plain.
Practically, you should do some physical measurements yourself, because how your signal strength is computed (log scale, linear scale, SNR, etc.) affects many things. Also take note of the near-field effect: when the phone is very close to the base station, the behaviour of the signal strength variation can be very different.
No, it is not that simple. At the very least you need to know the transmission power of the base station.
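For a rough theoretical starting point, the log-distance path loss model is the usual textbook formula. As noted above, it needs the base station's transmit power, plus an environment-dependent exponent; every number in this C sketch is an illustrative assumption, not a usable calibration:

    #include <math.h>
    #include <stdio.h>

    /* Log-distance path loss model (a common textbook approximation).
     * All parameter values must be measured for your band and environment. */
    double estimate_distance_m(double rssi_dbm,     /* measured signal strength */
                               double tx_power_dbm, /* base-station TX power (must be known) */
                               double pl_ref_db,    /* path loss at reference distance d0 */
                               double d0_m,         /* reference distance, e.g. 1 m */
                               double n)            /* path loss exponent: ~2 free space,
                                                       ~3-4 urban */
    {
        /* rssi = tx_power - pl_ref - 10*n*log10(d/d0)  =>  solve for d */
        double exponent = (tx_power_dbm - pl_ref_db - rssi_dbm) / (10.0 * n);
        return d0_m * pow(10.0, exponent);
    }

    int main(void)
    {
        /* Hypothetical numbers: 43 dBm TX, 40 dB loss at 1 m, exponent 3.5 */
        printf("%.0f m\n", estimate_distance_m(-95.0, 43.0, 40.0, 1.0, 3.5));
        return 0;
    }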
I'm a beginner, take it easy on me.
How would you measure the area of a floor using ARCore/Unity? I figure you somehow measure the area of the plane visualiser, or measure the area of each individual triangle, but I have no idea how to attack it.
The closest thing I can find is measuring distance...
You can get a (somewhat imprecise) estimate by multiplying the plane's extents:
float areaEstimate = plane.getExtentX() * plane.getExtentZ();
This will most likely be too high, because it assumes the plane is rectangular, and it will suffer from measurement errors in both directions. But it might be good enough depending on your use case.
A slightly more precise alternative would be to get the plane's polygon
FloatBuffer polygon = plane.getPolygon();
and then compute the polygon's area (see the sketch below).
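For the area step, the polygon comes back as a FloatBuffer of (x, z) vertex pairs in the plane's local frame (ARCore works in meters). Assuming you've copied those pairs into a plain array, the standard shoelace formula gives the area; a C sketch:

    #include <math.h>

    /* Shoelace formula for the area of a simple polygon.
     * verts holds n (x, z) pairs, e.g. copied out of the FloatBuffer
     * returned by plane.getPolygon(); with meter units the result is
     * in square meters. */
    float polygon_area(const float *verts, int n)
    {
        float sum = 0.0f;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;          /* next vertex, wrapping around */
            float xi = verts[2 * i], zi = verts[2 * i + 1];
            float xj = verts[2 * j], zj = verts[2 * j + 1];
            sum += xi * zj - xj * zi;     /* signed cross-product term */
        }
        return fabsf(sum) * 0.5f;
    }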
When showing the extrinsic parameters of calibration (the 3D model including the camera position and the position of the calibration checkerboards), the toolbox does not include units for the axes. It seemed logical to assume that they are in mm, but the z values displayed can not possibly be correct if they are indeed in mm. I'm assuming that there is some transformation going on, perhaps having to do with optical coordinates and units, but I can't figure it out from the documentation. Has anyone solved this problem?
If you specified the side length of your squares in mm during calibration, then the z-distances shown are in mm.
I know next to nothing about Matlab's tracking utilities (not entirely true, but I avoid Matlab wherever I can, which is almost always possible), so here's some general info.
The pixel dimensions of the sensor have nothing to do with the size of a pixel on screen, or in model space. For all practical purposes a camera produces a picture that has no meaningful units, and a tracking process is unaware of the scale of the scene (the perspective projection takes care of that). You can re-insert a scale by taking two tracked points and measuring the distance between them. The solver-space distance is pretty much arbitrary; but if you know the real distance between those points, you can get a conversion factor by computing:
real distance / solver-space distance
There's really no way of knowing this distance from the camera's settings, because the camera cannot differentiate between scenes at different scales: a perfect 1:100 replica is no different for the solver than the real deal. So you must always relate the solve to something you can measure separately, for each measuring session. The camera always produces something that's relative in nature.
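For example, if two tracked points come out 3.2 units apart in solver space and you measured 1.6 m between the same two features in the real scene, the conversion factor is 1.6 / 3.2 = 0.5 m per solver unit; multiplying every solver-space coordinate by 0.5 puts the reconstruction in metric units.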
I am writing an iPhone/iPad app. I need to compute the acceleration and deceleration in the direction of travel of a vehicle traveling in close to a straight horizontal line with erratic acceleration and deceleration. I have the sequence of readings from the three orthogonal accelerometer axes (X, Y, Z), but the orientation of the iPhone/iPad is arbitrary and the accelerometer readings include both vehicle motion and the effect of gravity. The result should be a sequence of single acceleration values which are positive or negative depending on whether the vehicle is decelerating or accelerating. The positive and negative directions are arbitrary so long as acceleration has the opposite sign to deceleration. Gravity should be factored out of the result. Some amount of variable smoothing of the result would be useful.
The solution should be as simple as possible and must be computationally efficient. The answer should be some kind of pseudo-code algorithm, C code or a sequence of equations which could easily be converted to C code. An iPhone specific solution in Objective C would be fine too.
Thanks
You will need some trigonometry for this. For example, to get the magnitude you need
magn = sqrt(x*x + y*y + z*z);
To get the angles you need the arctangent; the C function atan2 is better:
xyangle = atan2(y, x);
xymagn = sqrt(x*x + y*y);
vertangle = atan2(z, xymagn);
Now, how you assign negative and positive magnitude is arbitrary. You could, for example, interpret π/2 < xyangle < 3π/2 as negative (since atan2 returns values in (−π, π], that region is |xyangle| > π/2). That amounts to taking the sign of x for the sign of magn, but it would be equally valid to take the sign from y.
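Putting those pieces together, a minimal C sketch of the signed-magnitude idea (the sign-from-x convention is arbitrary, as noted above):

    #include <math.h>

    /* Magnitude of the acceleration vector, signed by the x component
     * (equivalent to treating |xyangle| > pi/2 as negative). */
    double signed_magnitude(double x, double y, double z)
    {
        double magn = sqrt(x * x + y * y + z * z);
        return (x < 0.0) ? -magn : magn;
    }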
It is really tough to separate gravity and motion. It's easier if you can analyze the data together with a gyroscope and compass signal.
The gyroscope measures the rate of angular rotation. Its integral is theoretically the angular orientation (plus an unknown constant), but the integral is subject to drift, so it is useless on its own. The accelerometer measures gravity plus linear acceleration; the gravity component encodes the device's orientation relative to "down". With some moderately complex math, you can isolate orientation, gravity and linear acceleration from the two sensors' values (a minimal sketch of the usual blending trick follows below). Adding the compass fixes the XY plane (where Z is gravity) to an absolute coordinate frame.
See this great presentation.
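To make that "moderately complex math" concrete: one common simplified approach is a complementary filter, which trusts the integrated gyro rate over short intervals and lets the accelerometer's gravity estimate pull the result back over long intervals. A one-axis C sketch; the blend weight ALPHA is an assumed value you would tune:

    #include <math.h>

    #define ALPHA 0.98  /* gyro weight per step; an assumed value to tune */

    /* One-axis complementary filter.
     *   angle     - previous orientation estimate (radians)
     *   gyro_rate - angular rate from the gyroscope (radians/second)
     *   accel_y/z - accelerometer components in the rotation plane
     *   dt        - time step (seconds)
     * Returns the updated orientation estimate. */
    double complementary_filter(double angle, double gyro_rate,
                                double accel_y, double accel_z, double dt)
    {
        double accel_angle = atan2(accel_y, accel_z); /* tilt from gravity */
        return ALPHA * (angle + gyro_rate * dt)       /* short-term: gyro  */
             + (1.0 - ALPHA) * accel_angle;           /* long-term: accel  */
    }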
Use userAcceleration.
You don't have to figure out how to remove gravity from the accelerometer readings or how to take the orientation into account: it is already implemented in the Core Motion framework (userAcceleration is a property of CMDeviceMotion).
Track the mean value of acceleration. That will give you a reference for "down". Then subtract the mean from individual readings.
You'll need to play around with the sensitivity of the mean calculation, since, e.g., making a long slow turn on a freeway will cause the mean to slowly drift outwards.
If you wanted to compensate for this, you could use GPS tracking to compute a coarse-grained global acceleration to calibrate the accelerometer. In fact, you might find that differentiating the GPS velocity reading gives a good enough absolute acceleration all by itself (I haven't tried, so I can't say).
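A minimal C sketch of the mean-tracking idea, using an exponential moving average per axis (the smoothing constant K is an assumption: smaller values drift less during long turns but adapt more slowly):

    /* Per-axis exponential moving average as a crude gravity estimate.
     * Subtracting it from each raw reading leaves an approximate
     * linear-acceleration component. */
    #define K 0.01  /* assumed smoothing constant; tune for your data */

    typedef struct { double mean_x, mean_y, mean_z; } GravityEstimate;

    void remove_gravity(GravityEstimate *g,
                        double x, double y, double z,
                        double *lin_x, double *lin_y, double *lin_z)
    {
        g->mean_x += K * (x - g->mean_x);   /* slowly track "down" */
        g->mean_y += K * (y - g->mean_y);
        g->mean_z += K * (z - g->mean_z);
        *lin_x = x - g->mean_x;             /* residual = motion component */
        *lin_y = y - g->mean_y;
        *lin_z = z - g->mean_z;
    }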
I have an accelerometer and magnetometer each producing raw X, Y and Z readouts. From this I need to determine the magnetic heading of an object.
I'm not that great at trig, but I've put together a formula that does respond pretty well to the rotation of the device, yet also responds to movement that one would not think is relevant, such as angling the device in a way that has no impact on the direction it is pointed, e.g. laying it flat and "rolling" it.
I think the formula I have for calculating the magnetic heading is fine, but I think my pitch and roll radians for input are wrong.
So I guess the core of my question (unless someone actually has a formula around that does this), is how do you calculate angles, in radians, using an accelerometer for pitch and roll.
Then secondly, any info on the heading formula itself would be great.
Depending on the accuracy your application requires, you may need to solve several problems:
Are the accelerometer axes calibrated? I've seen MEMs accelerometers that had axes that were not mutually perpendicular, and had significantly different response characteristics for each axis (typically X and Y would match, and Z would differ). You will need to synthesize ideal XYZ axes from whatever physical reading your device provides. (Google 'accelerometer calibration'.)
Are the magnetometer axes calibrated? Similar problem as above, except much harder to check: It is very difficult to generate uniform calibrated magnetic fields. If you use the ambient geomagnetic field, you will need to carefully control the local magnetism of your work environment and your tools. (Google 'magnetometer calibration'.)
After the accelerometer and magnetometer have been individually calibrated, their axes need to be calibrated relative to each other. Since both of these devices are typically soldered to a PCB, there is almost guaranteed to be significant misalignment. In many cases, the board layout and device parameters may not even permit the XYZ axes to correspond with each other! This may be the hardest part to do from a lab perspective, so I'd recommend you do a direct comparison using other hardware that has both kinds of sensors and is already calibrated (such as an iPhone or Android phone - but verify the device before use). Normally, this is accomplished by adjusting the prior two calibration matrices until they generate vectors that are correctly aligned relative to each other.
Only after you are generating mutually calibrated magnetic and accelerometer vectors can you apply the solutions suggested by the other respondents.
I've only described the static solution, where both the magnetometer and accelerometer are motionless relative to the local gravitational and magnetic fields. If you need to generate responses in real-time while the system is rapidly moving, you will need to account for the time behavior of each sensor. There are two basic ways to do this: 1) Apply time-domain filters to each sensor so that their outputs share a common time domain (generally adding some delay). 2) Use predictive modeling to modify the sensor outputs in real-time (less delay, but more noise).
I've seen Kalman filters used for such applications, but applying them in a vector domain may require using quaternions instead of Euler matrices. Quaternions are easier to use computationally (fewer operations needed compared to matrices), but I found them to be much more difficult to comprehend and get right.
Or, you may choose a completely different path, and use statistics and data fitting to do all the above work in one giant step. Consider the problem as follows: Given 6 input values (XYZ each from uncalibrated magnetometer and accelerometer) and a reference to the device (assuming it is hand-held, and there is an arrow painted on the case), output a single angle representing the magnetic bearing toward which the arrow on the case is pointing, and the elevation of the arrow relative to the gravity vector (tilt of the case).
Using a calibrated reference device (as mentioned above), pair it with the device to be calibrated and take several hundred data points, with the device at different orientations. Then use a powerful math package such as Matlab, MathCAD, R or SciPy to set up and solve the nonlinear equations to create the transformation matrices.
I would point you to Euler angles and roll, pitch, yaw.
You're not thinking in enough dimensions. This would be the answer in only 2 dimensions, and it works great if you can find a way to ensure "Z" always aligns with gravity.
int heading = 180 - atan2(mag_datX, mag_datY) / 0.0174532925; // 0/359=N, 90=E, 180=S, 270=W (0.0174532925 = pi/180)
(If you're reading directly from the device, beware that it probably returns X, Z, Y - not X, Y, Z!)
However, this is not a 2D compass problem. Imagine you take the needle out of the compass and balance it so that gravity plays no part in keeping it "level": you'll find that "north" points a bit up or down depending on where on Earth you are (or, at the poles, directly up or down!).
So you need to compute the THREE-DIMENSIONAL vector from all three values, which is a matrix operation; a sketch follows below.
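For illustration, here is one common formulation of that tilt-compensated heading computation in C. Axis and sign conventions vary between devices, so treat the signs as assumptions and verify against a known reference before trusting the output:

    #include <math.h>

    /* Tilt-compensated compass heading (one common formulation).
     * ax, ay, az - accelerometer reading (any consistent units)
     * mx, my, mz - magnetometer reading (any consistent units)
     * Returns heading in degrees, 0..360. */
    double tilt_compensated_heading(double ax, double ay, double az,
                                    double mx, double my, double mz)
    {
        /* Pitch and roll from the gravity vector */
        double roll  = atan2(ay, az);
        double pitch = atan2(-ax, ay * sin(roll) + az * cos(roll));

        /* Rotate the magnetic vector back into the horizontal plane */
        double xh = mx * cos(pitch)
                  + my * sin(pitch) * sin(roll)
                  + mz * sin(pitch) * cos(roll);
        double yh = my * cos(roll) - mz * sin(roll);

        double heading = atan2(-yh, xh) * 180.0 / M_PI;
        return (heading < 0.0) ? heading + 360.0 : heading;
    }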