Core Location and speed measurements - iPhone

Does anyone know if Core Location in the iPhone OS uses anything but simple vector math to calculate speed? I've read that the GPS system can provide speed measurements that can be accurate even when position is not (I believe by using the Doppler shifts of the signals).
I've tried and failed to find out whether the iPhone does this. The question is basically: does this value carry extra information, or is it just a convenience function computed from (filtered?) location data?
I suppose my question is whether anyone has tried this in practice, or knows beyond what is in the documentation.

The Core Location documentation describes the speed reading thus:
This value reflects the instantaneous speed of the device in the direction of its current heading.
While not absolutely definitive, this strongly suggests that the reading is direct, rather than an interpolation of positions, which cannot be described as "instantaneous" by any reasonable definition.

The GPS system in itself is not able to provide speed measurements. The only way this can practically be done is by comparing discrete position measurements and the time between them. It is then just a matter of applying simple math to get the speed and direction of travel. More samples can be used to get a more accurate measurement.
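For illustration, here is a minimal sketch of that position-differencing approach, assuming two timestamped latitude/longitude fixes. The haversine helper and the sample coordinates are my own example, not anything Core Location exposes:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon fixes, in metres."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_from_fixes(fix_a, fix_b):
    """Each fix is (timestamp_s, lat_deg, lon_deg); returns speed in m/s."""
    (t1, lat1, lon1), (t2, lat2, lon2) = fix_a, fix_b
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("fixes must be ordered in time")
    return haversine_m(lat1, lon1, lat2, lon2) / dt

# Two fixes one second apart, roughly 28 m of ground track -> ~28 m/s (~100 km/h)
print(speed_from_fixes((0.0, 59.32930, 18.0686), (1.0, 59.32955, 18.0686)))
```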
It is not feasible for simple GPS receivers to measure the speed directly, e.g. by use of the Doppler shift. This is because each satellite is itself traveling at very high speed around the globe. Each satellite orbits the globe twice a day, resulting in a speed of almost 14000 km/h. Since the direction of the satellite relative to the GPS unit varies depending on where it is in the sky, the Doppler shift caused by the satellite's own motion would be huge compared to the Doppler shift resulting from the movement of the GPS receiver itself.
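For a rough sense of the magnitudes this answer is comparing, here is a back-of-the-envelope sketch only, assuming the GPS L1 carrier at 1575.42 MHz and using the full orbital speed as a crude upper bound for the satellite's line-of-sight component (the actual radial component seen from the ground is smaller):

```python
C = 299_792_458.0       # speed of light, m/s
F_L1 = 1_575.42e6       # GPS L1 carrier frequency, Hz

def doppler_hz(radial_speed_mps):
    """First-order Doppler shift for a given line-of-sight speed."""
    return radial_speed_mps / C * F_L1

sat_orbital = 14000 / 3.6      # ~14000 km/h orbital speed -> m/s (upper bound)
car = 100 / 3.6                # a receiver moving at 100 km/h -> m/s

print(f"satellite (full orbital speed, upper bound): {doppler_hz(sat_orbital):,.0f} Hz")
print(f"receiver at 100 km/h:                        {doppler_hz(car):,.0f} Hz")
```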
I'm not saying, however, that this couldn't be done with very sophisticated hardware and algorithms, but the cost/benefit would probably not make it worth considering.

Related

How do I determine processor speed required for optical flow?

I'd like to use an optical flow system to get velocities from the surrounding environment. I've read papers about how optical flow works, but they don't cover details about the optical sensors.
My question is: How do I determine how much computational power is required to perform optical flow analysis?
I'd like to use a low-power system (like microcontrollers), but I don't know what kind of camera I could use with such a system. I mean, could it be color or does it need to be B/W? Rolling shutter or global shutter? What frame rate, and how many pixels?
I'd like to specify the system myself but, without knowing how those camera attributes impact the processing load, I'm not sure where to start.
As Chuck already said in the comments, you first need to start with something. Optical-flow calculation really depends on what you are using it for and what you are trying to achieve. For real-time applications you might want to consider using faster processors (this is always true, though).
Continuing with my answer:
Optical-flow calculation performance depends on a few main things:
The optical-flow method you choose (dense or sparse); you can read more about it here and here. Of course, you should take into account not only that sparse is faster than dense, but also that sparse might be less accurate in some cases. Again, this depends on what you're trying to achieve.
In addition, you will see that there are different optical-flow algorithms, and some might be faster than others. There are many algorithms, such as Lucas-Kanade, Horn-Schunck, TVL1, Farneback, etc.
Most optical-flow methods in libraries such as OpenCV give you the ability to change some parameters in order to play with the trade-off between accuracy and performance. See this, and also check OpenCV methods such as this and this, for example - note the different arguments (a minimal comparison sketch follows this list).
The resolution of your image. A smaller image usually means a faster calculation.
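To make those trade-offs concrete, here is a rough sketch comparing a sparse and a dense OpenCV method on a pair of downscaled grayscale frames. The file name, resolution and parameter values are arbitrary example choices, and the timings will of course differ on your hardware:

```python
# Minimal sparse-vs-dense optical-flow comparison with the OpenCV Python bindings.
import time
import cv2

cap = cv2.VideoCapture("example.mp4")          # hypothetical input clip
ok, prev = cap.read()
ok2, curr = cap.read()
assert ok and ok2, "could not read two frames"

# Smaller frames -> faster calculation (see the resolution point above).
size = (320, 240)
prev_gray = cv2.cvtColor(cv2.resize(prev, size), cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(cv2.resize(curr, size), cv2.COLOR_BGR2GRAY)

# Sparse: track a few hundred good corners with pyramidal Lucas-Kanade.
t0 = time.perf_counter()
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                                winSize=(21, 21), maxLevel=3)
t_sparse = time.perf_counter() - t0

# Dense: one flow vector per pixel with Farneback.
t0 = time.perf_counter()
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
t_dense = time.perf_counter() - t0

print(f"sparse LK: {t_sparse * 1000:.1f} ms for {len(pts)} points")
print(f"dense Farneback: {t_dense * 1000:.1f} ms for {flow.shape[1]}x{flow.shape[0]} px")
```

On most hardware the sparse call will be considerably cheaper per frame, which is exactly the trade-off described above.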
A few things you might also want to consider:
If you are using a processor that has multiple cores, make sure you are using all of them in the optical-flow calculation. Some libraries may already do this for you, but in some cases you will need to do it yourself. Take a look at my question and answer in this post; it might give you some ideas and help you get started with such a case.
If you want more accurate optical-flow results you must use a global shutter camera. Rolling-shutter cameras, such as most webcams, will give you extra error you don't want.
You don't need a color image; if you have a grayscale camera it will be even better. If not, you will need to convert to grayscale (not B/W), for faster performance as well.
Some libraries, such as OpenCV, have an option (in some cases) to run these algorithms on a GPU. If using a GPU is an option, you might want to consider this as well.
From my own experience, the main thing that gave me a performance boost was changing my resolution from 640x480 to 320x240 and even 160x120. In my case it didn't really hurt the accuracy.
I used an Odroid U3 mini-PC with the OpenCV PyrLK algorithm and input frames at 320x240 resolution. After applying what's described here (splitting the image into 4 for parallel calculation) it worked pretty well (real-time).
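For what it's worth, here is a rough sketch of that split-into-4 idea. It is my own simplified version (using dense Farneback flow rather than the PyrLK setup from the linked post), with arbitrary sizes and thread counts, and it ignores motion across the quadrant borders:

```python
# Split a grayscale frame into 4 quadrants and run dense Farneback flow on each
# quadrant in parallel threads (OpenCV releases the GIL inside its C++ code, so
# Python threads can overlap here).
from concurrent.futures import ThreadPoolExecutor
import cv2
import numpy as np

def quadrants(img):
    h, w = img.shape[:2]
    return [img[:h//2, :w//2], img[:h//2, w//2:], img[h//2:, :w//2], img[h//2:, w//2:]]

def flow_quadrant(pair):
    prev_q, curr_q = pair
    return cv2.calcOpticalFlowFarneback(prev_q, curr_q, None, 0.5, 3, 15, 3, 5, 1.2, 0)

def parallel_flow(prev_gray, curr_gray):
    pairs = list(zip(quadrants(prev_gray), quadrants(curr_gray)))
    with ThreadPoolExecutor(max_workers=4) as pool:
        flows = list(pool.map(flow_quadrant, pairs))
    top = np.hstack([flows[0], flows[1]])
    bottom = np.hstack([flows[2], flows[3]])
    return np.vstack([top, bottom])   # note: flow across quadrant borders is lost

# Example with synthetic 320x240 frames:
prev_gray = np.random.randint(0, 255, (240, 320), np.uint8)
curr_gray = np.roll(prev_gray, 2, axis=1)          # shift by 2 px to fake motion
print(parallel_flow(prev_gray, curr_gray).shape)   # (240, 320, 2)
```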
The answer given by Sarid has some strong points, and many of them are shared by researchers around the world. My opinions are shared by anyone who has actually worked with these topics in a real-world setting... by real world, I mean implementing optical flow on drones, on mobile phones and on IP cameras that are not sitting in a protected office, and where other systems (such as humans) need to interact and be co-dependent.
First of all, depending on your problem, you may want to invest time in looking for ready-made solutions. Optical-flow sensors are readily available, cheap and robust (but usually not strong on accuracy). These are the kind of sensors you find in optical mice. They are low power and easily interfaced with microcontrollers. Some have staggering sample rates of thousands of fps. They commonly have low spatial resolution, however, and (to emphasize) high robustness but low accuracy.
If instead you are looking for the kind of optical flow that can be used for shape from motion, pedestrian detection or video encoding, for example, then you are probably better off looking for something more advanced, and that's where Sarid's answer becomes relevant.
Since your question has been migrated from the robotics stack exchange, I am going to assume you are interested in applications close to machine control and human-machine interaction. In that case, the most important aspects are the ones usually most ignored by people working in the field of optical-flow estimation, namely:
Latency. If you have a human interfacing at the front-end, the common term is "glass-to-glass latency". This is completely different from the fps of your system, which is connected to throughput. If you find yourself in a discussion with someone who does not understand the difference between latency and fps, they are not the expert you are interested in. For example, almost all researchers in computer vision who do GPU implementations of optical flow add massive latency by allowing for frame delays and inefficient memory handling (inefficient from the perspective of latency, but efficient in terms of throughput and hardware utilization). Consider the problem of controlling a drone, say making it self-stabilizing: it is better to receive a bad optical-flow estimate 10 ms earlier than a good one with 10 ms of extra delay, especially if the optical system does not give you any upper bound on the delay at any given time.
Algorithm stability. This is completely different from accuracy. Accuracy is what 99% of all research in optical flow has been obsessing about for the last 30 years. Stability is not at all something evaluated in the Middlebury benchmark, for example. Stability deals with whether small changes in your data are guaranteed to produce only small changes in the estimated optical flow. While some good work has been done in the community (on robust statistics, most interestingly), in the end the final evaluation of any algorithm disregards stability. Consider the optical mouse as a good example. The first generations of optical mice had higher accuracy (the average error from the true motion was smaller) but lower stability (especially when you ran the mouse over "bad" textures, or with rotational motions). Later generations of optical mice have worse accuracy but focus on stability, as that is the most important thing. You don't experience the mouse cursor jumping around as much as you did in the early days of these devices... but if you move the mouse on your mat, left and right repeatedly, you will see the cursor slowly drifting (i.e. low accuracy).
Heat. Any device that estimates high-accuracy optical flow will require lots of computation. When it comes to computations per watt, GPUs are not that good. On drones you may be able to get away with this, because it is a setting where you have active cooling as a by-product of the propulsion system. In the real world, you most often cannot assume active cooling or an unlimited power supply.
To conclude, it's a fascinating area, and I hope you have a great experience coding solutions.

Reducing external magnetic field effects using gyroscope

Over the past year I have used many different methods of combining accelerometers, gyros and magnetometers to get accurate readings of head angles.
I have also started looking into using a Kalman filter to further improve these readings.
Yet I have still to find a method of removing external magnetic field influences using the other sensors. For example:
If my heading angle was accurate and suddenly an external magnetic field approaches, my heading angle will be influenced, but as far as my gyro and accelerometer are concerned I haven't moved.
Are there any algorithms or calculations anyone can think of to override the magnetometer in a way that can determine whether you have moved or not?
Any help would be much appreciated!
One simple solution is to use the gyro/accelerometer as you mentioned, and combine that with delayed filtering, where you wait a couple of seconds before providing an estimate of the attitude.
Compute the short-term attitude from the gyro/accel only (starting with any arbitrary heading) using gyro integration with accel measurements, and then compute the short-term attitude from the magnetometer/accel only using, say, TRIAD. Compute the error between these two quantities and decide on a threshold. If you exceed the threshold, it means there is a magnetic disturbance, and you can stop using the magnetometer in your attitude solution. If they are within the threshold, you can continue using the magnetometer.
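A minimal sketch of that consistency check, assuming you already have a yaw estimate from gyro integration and a heading from the magnetometer/accel; the thresholds and names below are invented for illustration:

```python
import math

YAW_DIFF_THRESH_DEG = 15.0          # example threshold: disagreement that flags a disturbance
MAG_NORM_RANGE_UT = (20.0, 70.0)    # rough band for Earth's field strength, microtesla

def angle_diff_deg(a, b):
    """Smallest signed difference between two headings, in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def magnetometer_trusted(yaw_gyro_deg, yaw_mag_deg, mag_xyz_ut):
    """Return True if the magnetometer looks undisturbed and may be fused."""
    norm = math.sqrt(sum(c * c for c in mag_xyz_ut))
    if not (MAG_NORM_RANGE_UT[0] <= norm <= MAG_NORM_RANGE_UT[1]):
        return False                 # field strength is implausible for Earth's field
    return abs(angle_diff_deg(yaw_gyro_deg, yaw_mag_deg)) < YAW_DIFF_THRESH_DEG

# Example: gyro says 90 deg, magnetometer says 130 deg with an inflated norm -> reject
print(magnetometer_trusted(90.0, 130.0, (60.0, 55.0, 40.0)))   # False
```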
If you can think of more metrics for deciding whether you are in a magnetic disturbance or not (such as the magnetometer norm rising to a ridiculous number), you can add those metrics to an HMM, which will combine them and give you an estimate of whether you are in a disturbance or not.

quadrotor accelerometer unstable

I am currently working on my quadrotor project. I am using an ADXL335 accelerometer and an L3G4200D gyroscope interfaced with an ATmega128. When I check readings from the accelerometer without running the motors, the values are accurate and stable. But when I start the motors, the values start to fluctuate, and the more I increase the speed the more they fluctuate. I tried a Kalman filter; the results seem the same, just less fluctuation, but still not enough for stable flight. My gyroscope readings also show too much drift. Is this supposed to happen, or am I doing something wrong?
It is quite difficult to help you, as "fluctuations" can be caused by several things.
I just checked the datasheet of the ADXL335 accelerometer. Have you added the bandwidth-limiting capacitors on the outputs (Cx, Cy and Cz)? If not, they might help you reduce the fluctuations.
Another thing that might cause your fluctuations is interference from the motor cables into the signal cables.
If you haven't used screened/shielded cables for the accelerometer, change them and make sure you try to reduce whatever interference you might find.
You might find some hints on how to do good EMC design here.
From your statement, I would assume that the motors are causing the interference. The way I see it, it could be caused in one of two ways:
The PCB was custom-designed, houses both the power electronics and the sensitive measurement units, and not enough care was taken to isolate the sensitive parts from the interference generated on the board.
The magnetic field from the motors is causing the fluctuations. This could be because the IMU is too close to the motors, improperly isolated or improperly positioned. Try to avoid installing the IMU in the same plane as the motors. We currently have our IMU placed about 20 cm above the center of gravity of our drone. I cannot confirm that it caused the accelerometer to fluctuate, but it did have an enormous influence on the compass when it was placed only 10 cm above the center of gravity.

Is the iPhone accelerometer calibrated? Gravity measurement changes depending on orientation

I'm doing some tests with the iPhone 4S accelerometer. If I take the raw data on the Z-axis (phone resting on a desktop) I get an acceleration of 9.65-9.70 m/s² (after g conversion by 9.8261).
But if I have the phone resting on its edge, the accelerometer value on the X-axis is quite different, approx. 9.80-9.85 m/s² (after the same g conversion).
My question is: if gravity is the same, why this difference? Is it not calibrated?
On the other hand, I checked the magnitude (norm) in both situations and the difference is the same.
Thanks.
I don't know what kind of answer you expect, but you should be more precise when talking about calibration.
Of course the g-sensors are calibrated, and as always, every calibration comes with an error. In your case the error is under 1%.
So if you want an answer:
Yes, the iPhone accelerometer is calibrated and has an error under 1% in your case. If you collected measurements from (hundreds of) other users, you could calculate the mean error of the device (I guess it's about 1%, though).
The problem is that it's not possible to determine gravity 100% exactly when all of the sensors (gyro and compass as well) have an intrinsic error. The lack of a precise external reference system leads to this error. The accelerometer and gyroscope are corrected mutually, and if there is a slight drift it affects the direction in which the sensor fusion algorithm (Kalman filter or others) calculates gravity should be.
While the gyroscope is very fast at detecting changes in direction, it tends to drift. Accelerometers are slower to react but provide a way to detect gravity. Magnetometers are even slower but can contribute to stabilising the overall result. Combine Gyroscope and Accelerometer Data shows some graphs of the raw and the processed sensor data.
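To make that gyro/accelerometer trade-off concrete, here is a minimal complementary-filter sketch for a single tilt axis. This is only an illustration of the general idea, not Apple's actual sensor fusion; the 0.98 weight and the sample data are arbitrary example values:

```python
import math

ALPHA = 0.98   # example weight: trust the gyro short-term, the accelerometer long-term

def complementary_pitch(prev_pitch_rad, gyro_rate_rad_s, accel_xyz, dt):
    """Fuse one gyro rate and one accelerometer sample into a pitch estimate."""
    ax, ay, az = accel_xyz
    # Accelerometer gives an absolute but noisy pitch from the gravity direction.
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Gyro gives a smooth but drifting pitch by integrating the rotation rate.
    pitch_gyro = prev_pitch_rad + gyro_rate_rad_s * dt
    return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_acc

# Example: device at rest, gravity along -Z (accel reads ~(0, 0, -1) g) -> pitch stays ~0
pitch = 0.0
for _ in range(100):
    pitch = complementary_pitch(pitch, gyro_rate_rad_s=0.0,
                                accel_xyz=(0.0, 0.0, -1.0), dt=0.01)
print(math.degrees(pitch))
```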
I continued working with accelerometers, and the results are not bad. Regarding iPhone accelerometer calibration, I can say that STMicroelectronics calibrates its own sensor; later, the iPhone factory assembles the accelerometer onto the circuit board. The soldering affects the accelerometer's accuracy (thermal effects), so the accelerometer probably requires a new calibration. For consumer requirements the accuracy is already good, but if you have high requirements, you need a new calibration.

accelerometer - Movement pattern recognition (iphone)

I have to find the best approach for tackling the problem of recognizing physical movements - with an iPhone in a pocket - like walking, stopping, turning left/right, and sitting.
I was thinking of just heuristically finding the data corresponding to each action, then checking the incoming values against this data (with a threshold) to see what's happening.
That's a very rough approach, of course, so maybe I should use something like Support Vector Machine methods, but this seems too complicated for the amount of time I have to develop this.
Which approach would you suggest here?
Walking: do an FFT on the gravity-direction signal. Measure its frequency response for walking at different speeds and then set a simple threshold (a rough sketch of this follows below).
Stopping: if the average power, i.e. the total energy in the signal over the last few seconds, drops below a certain threshold, then you can say the user has stopped.
Turning left/right: use the gravity vector and the gyroscope's rotation-rate vector to determine whether the user is rotating clockwise or counterclockwise.
Sitting: this will be very hard to determine, but if you're lucky the SVM will find the right pattern.
Each of the above can be given a weighting, and then you will have to find a good way to obtain training data to train your SVM. Maybe stream the signals from the phone to a web server and simultaneously record the user's motions by hand.
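As a rough illustration of the walking and stopping heuristics above (the sample rate, the 0.5-3 Hz step band and the thresholds are invented example values, not measured ones):

```python
import numpy as np

FS = 50.0                 # assumed accelerometer sample rate, Hz
STEP_BAND = (0.5, 3.0)    # assumed range of step frequencies, Hz
WALK_THRESH = 0.05        # example spectral-power threshold
STOP_THRESH = 0.01        # example signal-energy threshold

def is_walking(gravity_axis_accel):
    """FFT the gravity-direction acceleration and look for power in the step band."""
    x = np.asarray(gravity_axis_accel) - np.mean(gravity_axis_accel)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    band = (freqs >= STEP_BAND[0]) & (freqs <= STEP_BAND[1])
    return spectrum[band].sum() > WALK_THRESH

def has_stopped(gravity_axis_accel):
    """Average power over the window drops below a threshold -> user has stopped."""
    x = np.asarray(gravity_axis_accel) - np.mean(gravity_axis_accel)
    return np.mean(x ** 2) < STOP_THRESH

# Example: 4 seconds of a synthetic 2 Hz "step" signal vs. near-silence.
t = np.arange(0, 4, 1.0 / FS)
steps = 0.3 * np.sin(2 * np.pi * 2.0 * t)
quiet = 0.001 * np.random.randn(len(t))
print(is_walking(steps), has_stopped(steps))   # True False
print(is_walking(quiet), has_stopped(quiet))   # False True
```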
Your best starting point is Apple's sample code: CoreMotionTeapot.
Alternatively you could analyze the GPS signal. This will give you a very good way to determine the user's larger-scale motion, like walking/moving or changing heading, etc.