iPhone GPS Performance

I'm writing an iPhone App that relies on getting the device location. Management have tasked me with producing data on how well the GPS performs in cities (tall buildings) and within buildings.
Do any developers have data on the reliability of GPS and the fallback to cell-tower/Wi-Fi triangulation?
Management-friendly info would be best, but raw data that I can translate would also be fine.

To determine an object's location, the GPS receiver must receive a radio signal from at least four satellites (three, if the receiver can assume its altitude). GPS accuracy is affected by a number of factors, including satellite positions, noise in the radio signal, atmospheric conditions, and natural barriers to the signal. Noise can create an error of 1 to 10 meters and results from static or interference from something near the receiver or something on the same frequency. Clouds and other atmospheric phenomena, and objects such as mountains or buildings between the satellite and the receiver, can also produce error, sometimes up to 30 meters.
From here:
Multipath and masking effects of urban canyons degrade the accuracy of GPS ranging and increase geometric dilution of precision in receivers that operate in dense urban areas. In the case of GPS applications designed for vehicles, the effects of these phenomena on accuracy can be reduced, thanks to the velocity of the user, which contributes to averaging multipath, and thanks to the use of map matching. But pedestrians do not benefit from the same circumstances, and GPS-based positioning for pedestrians in dense urban areas suffers from inadequate accuracy and integrity. Tests performed in downtown urban areas over a variety of mass-market terminals with integrated GPS receivers show 95 percent circular error probable (CEP) performances between 50 and 100 meters.
Wiki article: Global Positioning System

Reducing external magnetic field effects using gyroscope

Over the past year I have used many different methods of combining accelerometers, gyros, and magnetometers to get accurate readings of heading angles.
I have also started looking into using a Kalman filter to further improve these readings.
Yet I have still to find a method of removing external magnetic field influences using the other sensors. For example:
if my heading angle is accurate and an external magnetic field suddenly approaches, my heading angle will be influenced, but according to my gyro and accelerometer I haven't moved.
Are there any algorithms or calculations anyone can think of to override the magnetometer in a way that can determine whether you have moved or not?
Any help would be much appreciated!
One simple solution is to use the gyro/accelerometer as you mentioned, and combine that with delayed filtering, where you wait for a couple of seconds before providing an estimate of the attitude.
Compute the short-term attitude from gyro/accel only (start with any arbitrary heading) using gyro integration with accel measurements, and then compute the short-term attitude from the magnetometer/accel only using, say, TRIAD. Compute the error between these two quantities and decide on a threshold. If you exceed the threshold, it means there is a magnetic disturbance, and you can thus stop using the magnetometer in your attitude solution. If they are within the threshold, you can continue using the magnetometer.
If you can think of more metrics for deciding whether you are in a magnetic disturbance (such as the magnetometer norm rising to a ridiculous number), you can add them to an HMM, which will combine the metrics and give you an estimate of whether you are in a disturbance or not.
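A minimal sketch of that threshold test, assuming you already have a gyro-propagated heading and a magnetometer-derived heading in radians (the 10-degree threshold and the field-norm bounds are illustrative, not tuned values):

```swift
/// Illustrative magnetic-disturbance gate: compare the gyro-propagated heading
/// against the magnetometer-derived heading, and sanity-check the field norm.
/// All thresholds here are placeholders, not tuned values.
func magnetometerIsTrustworthy(gyroHeading: Double,       // rad, from gyro integration
                               magHeading: Double,        // rad, from e.g. TRIAD
                               fieldNorm: Double) -> Bool // microtesla
{
    // Smallest signed angle between the two heading estimates.
    var diff = (magHeading - gyroHeading).truncatingRemainder(dividingBy: 2 * .pi)
    if diff > .pi { diff -= 2 * .pi }
    if diff < -.pi { diff += 2 * .pi }

    let headingAgrees = abs(diff) < 10 * Double.pi / 180  // within 10 degrees
    let normPlausible = (25.0...65.0).contains(fieldNorm) // Earth's field is ~25-65 µT
    return headingAgrees && normPlausible
}

// Only feed the magnetometer into the attitude solution while this returns true.
```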

Is the iPhone accelerometer calibrated? Gravity measurement changes depending on orientation

I'm doing some tests with the iPhone 4S accelerometer. If I take the raw data on the Z-axis (phone resting flat on a desk), I get an acceleration of 9.65-9.70 m/s² (after converting from g by multiplying by 9.8261).
But if I have the phone resting on its edge, the accelerometer value on the X-axis is quite different, approx. 9.80-9.85 m/s² (after the same g conversion).
My question is: if gravity is the same, why this difference? Is it not calibrated?
On the other hand, I checked the modulus (the magnitude of the acceleration vector) in both situations, and the difference is the same.
Thanks.
I don't know what kind of answer you expect, but you should be more precise when you're talking about calibration.
Of course, the g-sensors are calibrated and as always: every calibration comes with an error. In your case the error is under 1%.
So if you want an answer:
Yes, the iPhone accelerometer is calibrated and has an error under 1% in your case. If you collect measurements from other (hundreds of) users, you could calculate the mean error of the device (I guess it's about 1% though).
The problem is that it's not possible to determine gravity 100% exactly when all of the sensors (gyro and compass as well) exhibit an intrinsic error. The lack of a precise external reference system leads to this error. The accelerometer and gyroscope are corrected mutually, and if there is a slight drift, it affects the direction in which the sensor fusion algorithm (a Kalman filter or similar) calculates gravity should lie.
While the gyroscope is very fast at detecting changes in direction, it is prone to drift. Accelerometers are slower to react but provide a way to detect gravity. Magnetometers are slower still but can contribute to stabilising the overall result. Combine Gyroscope and Accelerometer Data shows some graphs of the raw and the processed sensor data.
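As an illustration of this kind of mutual correction, here is a minimal complementary filter for pitch/roll: fast gyro integration pulled slowly toward the accelerometer's gravity direction. This is a sketch, not Apple's fusion algorithm; the 0.98/0.02 blend is illustrative and the axis signs are simplified.

```swift
import CoreMotion

/// Minimal complementary filter: integrate the gyro for fast response and pull
/// the result slowly toward the accelerometer's gravity direction to cancel drift.
final class TiltEstimator {
    private let motion = CMMotionManager()
    private(set) var pitch = 0.0, roll = 0.0    // radians

    func start() {
        motion.startAccelerometerUpdates()
        motion.startGyroUpdates()
        Timer.scheduledTimer(withTimeInterval: 0.01, repeats: true) { [weak self] _ in
            guard let self = self,
                  let a = self.motion.accelerometerData?.acceleration,
                  let w = self.motion.gyroData?.rotationRate else { return }
            let dt = 0.01
            // Tilt implied by gravity alone: noisy, but it does not drift.
            let accelPitch = atan2(a.y, sqrt(a.x * a.x + a.z * a.z))
            let accelRoll  = atan2(-a.x, sqrt(a.y * a.y + a.z * a.z))
            // Mostly trust the gyro integral; nudge toward the accel estimate.
            self.pitch = 0.98 * (self.pitch + w.x * dt) + 0.02 * accelPitch
            self.roll  = 0.98 * (self.roll  + w.y * dt) + 0.02 * accelRoll
        }
    }
}
```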
I continued working with accelerometers, and the results are not bad. Regarding iPhone accelerometer calibration: STMicroelectronics calibrates its own sensors, and the iPhone factory later assembles the accelerometer onto the circuit board. The soldering affects the accelerometer's accuracy (thermal effects), so the accelerometer probably requires a new calibration. For consumer requirements the accuracy is already good, but if you have high-accuracy requirements, you need to calibrate it again.
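If you do need to recalibrate, a simple per-axis approach is the classic two-position (±1 g) measurement. A sketch follows; it is illustrative, not Apple's procedure, and the face-down reading is invented for the example:

```swift
/// Two-position (±1 g) calibration for a single axis: with the axis pointing
/// straight up it should read +1 g, pointing straight down -1 g.
struct AxisCalibration {
    let bias: Double    // zero-g offset, in g
    let scale: Double   // sensitivity correction factor

    init(readingUp: Double, readingDown: Double) {
        bias = (readingUp + readingDown) / 2    // midpoint should be 0 g
        scale = 2 / (readingUp - readingDown)   // span should be exactly 2 g
    }

    func corrected(_ raw: Double) -> Double {
        (raw - bias) * scale
    }
}

// Example: ~0.985 g face-up (roughly the question's Z reading); the face-down
// value is invented for illustration.
let z = AxisCalibration(readingUp: 0.985, readingDown: -0.992)
print(z.corrected(0.985) * 9.81)    // ≈ 9.81 m/s² after correction
```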

Movement pattern recognition (iPhone accelerometer)

I have to find the best approach for recognizing physical movements (with an iPhone in a pocket) like walking, stopping, turning left/right, and sitting.
I was thinking of just heuristically finding the data corresponding to each action, then checking the incoming values against this data (with a threshold) to see what's happening.
That's a very rough approach, of course, so maybe I should use something like Support Vector Machine methods, but that seems too complicated for the amount of time I have to develop this.
Which approach would you suggest here?
Walking: do an FFT on the gravity-direction signal. Measure its frequency response for walking at different speeds and then set a simple threshold.
Stopping: if the average power, i.e. the total energy in the signal over the last few seconds, drops below a certain threshold, you can say the user has stopped.
Turning left/right: use the gravity vector and the gyroscope's rotation rate vector to determine whether the user is rotating clockwise or counterclockwise.
Sitting: this will be very hard to determine, but if you're lucky the SVM will find the right pattern.
Each of the above can be given a weighting, and then you will have to find a good way to obtain training data to train your SVM. Maybe stream the signals from the phone to a web server and simultaneously record the user's motions by hand.
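For example, here is a minimal version of the "stopping" detector above: a sliding-window variance test on the acceleration magnitude. The window length and threshold are illustrative, not tuned values.

```swift
/// Sliding-window "stopped" detector: if the variance of the acceleration
/// magnitude over the last couple of seconds is near zero, the user is not moving.
struct StopDetector {
    private var window: [Double] = []
    private let capacity = 200              // ~2 s of samples at 100 Hz

    mutating func add(magnitude: Double) -> Bool {
        window.append(magnitude)
        if window.count > capacity { window.removeFirst() }
        guard window.count == capacity else { return false }
        let mean = window.reduce(0, +) / Double(window.count)
        let variance = window.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(window.count)
        return variance < 0.0005            // in g², placeholder threshold
    }
}

// Usage: feed it sqrt(x² + y² + z²) from each accelerometer sample;
// a `true` return means "user appears to have stopped".
```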
Your best starting point is Apple's sample code: CoreMotionTeapot.
Alternatively, you could analyze the GPS signal. This will give you a very good way to determine the user's larger-scale motion, like walking/moving or changing heading.
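If you go that route, the Core Location side is straightforward. A hedged sketch using today's Core Location API; the 0.5 m/s speed threshold is arbitrary:

```swift
import CoreLocation

/// Sketch: classify larger-scale motion from Core Location updates.
/// A negative speed value means "invalid reading".
final class GPSMotionClassifier: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let loc = locations.last, loc.speed >= 0 else { return }
        if loc.speed > 0.5 {
            print("moving at \(loc.speed) m/s, heading \(loc.course)°")
        } else {
            print("stationary")
        }
    }
}
```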

Accelerometer to relative position

Before I reinvent the wheel I wanted to see if anyone can share code or tips for the following:
In order to get the relative position of the iPhone, one needs to (see the sketch after this list):
Set the accelerometer read rate
Noise filter the accelerometer response
Convert it to a vector
Low pass filter the vector to find gravity
Subtract gravity from the raw reading to find the user-caused acceleration
Filter the user-caused acceleration to get the frequencies you are interested in (probably bandpass, depending on the application)
Integrate to find relative speed
Integrate to find position
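For concreteness, a hypothetical sketch of this pipeline using CMMotionManager, with a simple exponential low-pass for gravity and the extra noise/band-pass stages omitted. The filter constant and rate are illustrative, and unaided double integration drifts badly within seconds:

```swift
import CoreMotion

/// Sketch of the pipeline above; not a production inertial navigator.
final class RelativePositionEstimator {
    private let motion = CMMotionManager()
    private var gravity = SIMD3<Double>(repeating: 0)   // low-pass state
    private var velocity = SIMD3<Double>(repeating: 0)
    private(set) var position = SIMD3<Double>(repeating: 0)

    func start() {
        motion.accelerometerUpdateInterval = 0.01        // step 1: 100Hz read rate
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }
            let dt = 0.01
            let raw = SIMD3(a.x, a.y, a.z) * 9.81        // step 3: vector, g to m/s²
            // Steps 4-5: exponential low-pass tracks gravity; subtract it.
            let alpha = 0.02                             // ~0.3Hz cutoff at 100Hz (illustrative)
            self.gravity = (1 - alpha) * self.gravity + alpha * raw
            let user = raw - self.gravity
            // Steps 7-8: integrate acceleration to velocity, velocity to position.
            self.velocity += user * dt
            self.position += self.velocity * dt
        }
    }
}
```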
So what I'm hoping is that people have already written some or all of the above and can provide tips, or better yet code.
A few questions I haven't found the answer to:
What is the frequency response of the iPhone accelerometer? What hardware filters exist between the accelerometer and the analog-to-digital converter?
What is the fastest rate at which the accelerometer delegate can be called without duplicating reading values?
Differences in the above for the various phones?
Any good tips for designing the filters, such as cutoff frequency for separating gravity and user motion?
Any code or tips for the integration steps? Any reason to integrate in the Cartesian coordinate system rather than as a vector, or vice versa?
Any other experiences, tips, or information that one should know prior to implementing this?
As I find information out, I'll be collecting it in this answer.
Hardware
The 3GS uses an ST LIS331DL 3-axis ±2g/±8g digital accelerometer.
The iPhone 4 and iPad use an ST LIS331DLH 3-axis ±2g/±4g/±8g digital accelerometer.
They are both capable of being read at 100Hz and 400Hz, although on the iPhone 3GS (under iOS 4.1) the accelerometer delegate is not called more frequently than 100Hz even if setUpdateInterval is set for faster updates. I do not know if the API permits faster updates on the iPhone 4; Apple's documentation merely states that the maximum is determined by the iPhone's hardware. (TBD)
The A/D converter is on the same silicon as the MEM sensor, which is good for noise immunity.
The DL version is 8 bits (3GS) while the DLH version is 12 bits (iPhone 4). The maximum bias (offset) in the DL version is twice that of the DLH version (0.04g vs 0.02g).
The data sheet for the DLH reports acceleration noise density, but that value is not reported on the DL datasheet. Noise density is reasonably low at 218 μg/√Hz for the DLH.
Both sensors give either 100Hz sampling or 400Hz sampling speeds, with no custom rate. The sensor discards values if the iPhone doesn't read the output register at the set sampling rate.
The "typical" full scale value for the DL sensor is ±2.3g, but ST only guarantees that it's at least ±2g.
Temperature effects on the sensor are present and measurable, but not very significant.
TBD:
Is the hardware filter turned on, and what are the filtering characteristics?
How noisy is the power supply to the accelerometer? (Anybody just happen to have the iPhone schematic laying around?)
The accelerometer uses an internal clock to provide timing for the sample rate and A/D conversion. The datasheet does not indicate the accuracy, precision, or temperature sensitivity of this clock. For accurate time analysis the iPhone must use an interrupt to sense when a sample is done and record the time in the interrupt. (whether this is done or not is unknown, but it's the only way to get accurate timing information)
API
Requesting sampling rates lower than 100Hz results in getting selected samples while discarding the rest. If a sampling rate that is not a factor of 100Hz is requested in software, the time intervals between real sensor readings cannot be even; for example, a 30Hz request served from the 100Hz stream yields alternating gaps of 30 and 40 ms. Apple does not guarantee even sampling rates even if a factor of 100 is used.
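A toy simulation of that decimation, assuming the API simply forwards the next available 100Hz sample. That selection policy is an assumption, not documented behavior:

```swift
// Toy model: request 30Hz from a 100Hz stream by forwarding the next available
// hardware sample. The delivery intervals come out uneven (30/30/40 ms pattern).
let hardwarePeriod = 0.010          // 100Hz
let requestedPeriod = 1.0 / 30.0    // ~33.3 ms
var delivered: [Double] = []
var nextDue = 0.0
for i in 0..<20 {
    let t = Double(i) * hardwarePeriod
    if t + 1e-9 >= nextDue {
        delivered.append(t)
        nextDue += requestedPeriod
    }
}
let gapsMs = zip(delivered.dropFirst(), delivered).map { ($0.0 - $0.1) * 1000 }
print(gapsMs)                       // [40.0, 30.0, 30.0, 40.0, 30.0]
```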
It appears that the API provides no software filtering.
The API does scale the raw accelerometer value into a double representing Gs. The scaling factor used is unknown, as is whether it differs per phone (i.e., per-unit calibration) and whether calibration occurs on an ongoing basis to account for sensor drift. Online reports seem to suggest that the iPhone does re-calibrate itself on occasion when it's lying flat on a surface.
Results from simple testing suggest that the API sets the sensor to ±2g for the 3GS, which is generally fine for handheld movements.
TBD:
Does Apple calibrate each unit so that the UIAccelerometer reports 1G as 1G? Apple's documentation specifically warns against using the device for sensitive measurement applications.
Does the reported NSTimeInterval represent when the values were read from the accelerometer, or when the accelerometer interrupt indicated that new values were ready?
I'm dealing with the same problem. The only difference from your approach is that I don't want to rely on the low-pass filter to find gravity. (TBH I don't see how I can reliably tell the gravity vector from the accelerometer readings.)
I'm trying it with the gyros right now.

Core Location and speed measurements

Does anyone know if Core Location in the iPhone OS uses anything but simple vector math to calculate speed? I've read that the GPS system can provide speed measurements that can be accurate even when position is not (I believe by using the Doppler shifts of the signals).
I've tried and failed to verify whether the iPhone does this. The question is basically: does this value carry independent information, or is it just a convenience function using (filtered?) location data?
I suppose my question is if anyone have tried this in reality, or knows beyond what is in the documentation.
The Core Location documentation describes the speed reading thus:
This value reflects the instantaneous speed of the device in the direction of its current heading.
While not absolutely definitive, this strongly suggests that the reading is direct, rather than an interpolation of positions, which cannot be described as "instantaneous" by any reasonable definition.
The GPS system in itself is not able to provide speed measurements. The only way this can practically be done is by comparing discrete position measurements and the time between them. It is then just a matter of applying simple math to get the speed and direction traveled. More samples can be used to get a more accurate measurement.
It is not feasible for simple GPS receivers to measure speed directly, e.g. by use of the Doppler shift. This is due to the fact that each satellite is itself traveling at very high speed around the globe: each satellite orbits the globe twice every day, at a speed of almost 14,000 km/hour. Since the direction of the satellite relative to the GPS unit varies depending on where it is in the sky, the measured Doppler shift would be huge compared to the Doppler shift resulting from the movement of the GPS receiver itself.
I'm not saying, however, that this couldn't be done with very sophisticated hardware and algorithms, but the cost/benefit would probably not be worth even considering.
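To put rough numbers on that comparison (purely illustrative arithmetic at the L1 carrier frequency):

```swift
// Back-of-the-envelope Doppler magnitudes at the GPS L1 carrier (1575.42 MHz).
// The radial (line-of-sight) component of a GPS satellite's velocity, as seen
// from the ground, peaks at roughly 0.9 km/s, well below its ~3.9 km/s orbital speed.
let c = 299_792_458.0            // speed of light, m/s
let f1 = 1_575.42e6              // L1 carrier frequency, Hz
let satRadialSpeed = 900.0       // worst-case line-of-sight satellite speed, m/s
let userSpeed = 30.0             // a car at ~108 km/h, m/s

print(f1 * satRadialSpeed / c)   // ≈ 4.7 kHz from satellite motion alone
print(f1 * userSpeed / c)        // ≈ 158 Hz from the receiver's own motion
```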