How does the complementary filter work? - accelerometer

I'm trying to combine the data from an accelerometer and a gyroscope to accurately measure the pitch and yaw angles of an object. After researching the complementary filter and attempting to implement it, I have a few questions about how it works.
I've read that the filter "trusts" the gyroscope data if there is a lot of angular movement and that it "trusts" the accelerometer data if the object is stable.
http://www.pieter-jan.com/node/11
In this article the complementary filter is described in this way:
angle = 0.98 * (angle + gyrData * dt) + 0.02 * accData
To me, it seems as if the gyroscope data is being favoured. In the image at the bottom of the page, http://www.pieter-jan.com/images/resize/Complementary_Filter.png , the filtered data seems to keep close to the accelerometer data even though the gyroscope data has drifted. I don't understand why this occurs when the calculation suggests the gyroscope data is being favoured, and I have observed the same in other plots as well. During my own testing I needed to swap the 0.98 and 0.02 coefficients, suggesting the accelerometer data is being favoured, to obtain similar results. Am I completely misunderstanding how this filter works? Is it normal to favour the accelerometer data?
Furthermore, when the angle of an object needs to be monitored over a long period of time, doesn't the gyroscope data become useless because the drift grows so large? How does the filter compensate for this?

I realised where I was going wrong.
I had essentially calculated the angle using only the gyroscope data and used that in the filter, i.e.
GyroAngle += d°/s * time_between_cycles
FilteredAngle = 0.98*GyroAngle + 0.02*AccelerometerAngle
Instead, I should have been doing this:
FilteredAngle = 0.98*(FilteredAngle + d°/s * time_between_cycles) + 0.02*AccelerometerAngle
Doing this has yielded much better results.
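For anyone who hits the same wall, a minimal sketch of the corrected loop in C-style code (the function and variable names are placeholders for whatever your IMU driver provides):

// Complementary filter: the gyro term integrates on top of the *filtered*
// angle, so the small accelerometer term continuously pulls the estimate
// back and cancels the gyro drift over time.
const float ALPHA = 0.98f;    // weight given to the integrated gyro term
float filteredAngle = 0.0f;   // degrees

void updateAngle(float gyroRateDps, float accelAngleDeg, float dt) {
    // gyroRateDps: angular rate in deg/s; accelAngleDeg: angle derived
    // from the accelerometer; dt: seconds since the last update.
    filteredAngle = ALPHA * (filteredAngle + gyroRateDps * dt)
                  + (1.0f - ALPHA) * accelAngleDeg;
}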

I'm trying to do something similar to this. I'm implementing a complementary filter in my Arduino code using an LSM9DS0 sensor (it has a gyro/accelerometer built in: https://www.adafruit.com/product/2021).
There's a filter value I have been playing around with and a calibration method I'm trying to use, but I can't seem to get rid of the error 100%. There is always a small deviation from the true angle, and I never get a perfectly filtered angle with no error.


Can I use maths equations in Grafana Graph Metrics?

My situation:
I'm measuring the level in my rain water tanks using a float on an arm, connected to a variable potentiometer, monitored by an Arduino.
As the tank level goes down, the voltage reading decreases in a sinusoidal way (over a range of pi/2 radians, i.e. 90 degrees).
Currently I'm reading the values from an InfluxDB remotely via Grafana, which just displays the voltage level. This reading becomes more and more inaccurate as the level goes down, due to the changing angle of the float arm.
To fix this I want to add a sin(theta) equation in the Grafana Graph Metrics section, but I cannot find out whether that is possible.
From what I've found it appears it may be beyond Grafana, but I've got my fingers crossed, as the only other options would be to add something to InfluxDB (way beyond my knowledge) or to add the code directly onto the Arduino, which, given that it's all mounted and attached inside my rain water tanks, is something I'm not keen to do.
If anyone can let me know if it's possible (or not) to do it via Grafana it would be greatly appreciated.
Thank you.
Short answer: no. Grafana is a data visualisation tool for making sense of a dataset.
The job of a time-series panel like Graph is to take in a set of time-series datasets formatted as below and plot the data points accordingly.
Example dataset:
[
  {
    "target": "upper_75",          // the field being queried for
    "datapoints": [
      [622, 1450754160000],        // metric value as a float, unix timestamp in milliseconds
      [365, 1450754220000]
    ]
  },
  {
    "target": "upper_90",
    "datapoints": [
      [861, 1450754160000],
      [767, 1450754220000]
    ]
  }
]
InfluxDB does support the SIN mathematical operation, but I am not sure whether that is sufficient for you. I failed my maths in high school, so I'm not sure whether your sin(theta) maps directly onto InfluxDB's SIN() or not.
If the InfluxDB datasource UI does not expose the SIN operation yet, you can toggle the query editor's advanced/raw mode to build the query manually.
Also, you can learn more about the InfluxQL mathematical syntax in the documentation linked in the references below.
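For illustration, here is the kind of raw InfluxQL you could build in that mode (the measurement name "tank_level" and field name "voltage" are made up for this example; SIN() itself is documented at the InfluxDB link in the references):

SELECT SIN("voltage") FROM "tank_level" WHERE time > now() - 24h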
References:
https://github.com/grafana/simple-json-datasource
https://play.grafana.org/d/Zzlfq17mk/influx-table-merge?panelId=2&fullscreen&edit&orgId=1
https://docs.influxdata.com/influxdb/v1.7/query_language/functions/#sin

Counting steps using accelerometer or gyroscope

I am developing an application to count the user's steps while the phone is in the pocket of his pants.
And I need to know which is better to use: should I use the accelerometer sensor or the gyroscope sensor?
I have already tried the accelerometer sensor and it worked, but I'm asking to check whether the gyroscope is more accurate for this purpose than the accelerometer.
Thanks in advance for your help.
I think you may have already seen my ideas on this subject, but for completeness, I think it's good to record them here.
I think the best results will come from using all sensors available. However, I got reasonable results from just using accelerometer data, see my answer here. What I did was to get a lot of friends to walk for me, and I counted how many steps they took. As they were walking, my Android device was logging all sensor output. I then developed a program in C# (because that's my favourite language) that analysed all the log files, and hence optimised a methodology for counting steps that I then ported to Android java.
Whatever sensors you end up using, logging a whole load of data and then analyzing to work out how best to count the steps is what I would recommend.
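To make that concrete, here is a very rough sketch of threshold-based step detection over logged accelerometer magnitudes (the thresholds and the 0.3 s refractory window are invented starting points; tune them against your own logs):

#include <cmath>
#include <vector>

// Count steps by watching the acceleration magnitude rise above an upper
// threshold, then re-arming once it falls below a lower one (hysteresis),
// with a refractory period to suppress double counting.
int countSteps(const std::vector<double>& ax, const std::vector<double>& ay,
               const std::vector<double>& az, double sampleRateHz) {
    const double upper = 10.8;       // m/s^2, a bit above gravity
    const double lower = 9.8;        // m/s^2, re-arm level
    const double refractory = 0.3;   // minimum seconds between steps
    int steps = 0;
    bool armed = true;
    double lastStep = -refractory;
    for (size_t i = 0; i < ax.size(); ++i) {
        double t = i / sampleRateHz;
        double mag = std::sqrt(ax[i]*ax[i] + ay[i]*ay[i] + az[i]*az[i]);
        if (armed && mag > upper && t - lastStep >= refractory) {
            ++steps;
            lastStep = t;
            armed = false;
        } else if (!armed && mag < lower) {
            armed = true;
        }
    }
    return steps;
}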
The accelerometer detects accelerations along its axes, while the gyroscope detects rotations, so they have different (and complementary) uses.
Take a look at this more detailed explanation of their differences and of the values you can filter from their raw data:
https://github.com/hadimichael/V-Tracker/wiki/Hardware
You can try this; it works perfectly for me:
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 0.1
motionManager.startDeviceMotionUpdatesToQueue(NSOperationQueue.currentQueue(), withHandler: {
    deviceMotion, error in
    // Fires whenever the user-generated acceleration (gravity already
    // removed by Core Motion) exceeds 1 g on any axis.
    let accelerationThreshold: Double = 1
    let userAcceleration: CMAcceleration = deviceMotion.userAcceleration
    if (fabs(userAcceleration.x) > accelerationThreshold) ||
       (fabs(userAcceleration.y) > accelerationThreshold) ||
       (fabs(userAcceleration.z) > accelerationThreshold) {
        println("LowPassFilterSignal")
    }
})

How to get the reading of decibels from iOS AVAudioRecorder in a correct scale?

I'm trying to obtain a noise level in my iOS app, using AVAudioRecorder.
The code I'm using is:
[self.recorder updateMeters];
float decibels = [self.recorder averagePowerForChannel:0];
// add 160 here to scale the value from 0 to 160 instead of -160 to 0
decibels = 160 + decibels;
NSLog(@"Decibels: %.3f", decibels);
The readings I get when the phone sits on my desk are about 90-100 dB.
I checked this link and the table I saw there shows that:
Vacuum Cleaner - 80dB
Large Orchestra - 98dB
Walkman at Maximum Level - 100dB
Front Rows of Rock Concert - 110dB
Now, however loud my office might seem, it's nowhere near a Walkman at maximum level.
Is there something I should do here to get correct readings? It seems my iPhone's mic is very sensitive. It's an iPhone 4S, if that makes a difference.
Forget my previous answer. I figured out a better solution (correct me if I am wrong). I think what both of us want is the decibel SPL, but the averagePowerForChannel: method gives us the mic's output voltage. Decibel SPL is a logarithmic unit that indicates a ratio, so we need to convert that output into decibel SPL, which is not so easy, because you need reference values: dB SPL values and the voltage values corresponding to them. You can also try to estimate them by comparing your results with an app like Decibel Ultra. To come straight to the point, the formula you need is as follows:
SPL = 20 * log10(referenceLevel * powf(10, (averagePowerForChannel/20)) * range) + offset;
You can set the referenceLevel to 5; that gives me good results on my iPhone. averagePowerForChannel is the value you get from the averagePowerForChannel: method, and range indicates the upper limit of the range, which I set to 160. Finally, offset is an offset you can add to get into the area you want; I added 50 here.
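Wrapped up as a tiny helper function (a sketch; the constants are just the empirical values from above, not calibrated references):

#include <math.h>

// Convert the averagePowerForChannel: reading (roughly -160..0 dBFS)
// into an approximate dB SPL figure using empirically chosen constants.
float approximateSPL(float averagePower) {
    const float referenceLevel = 5.0f;  // empirical reference level
    const float range = 160.0f;         // upper limit of the scale
    const float offset = 50.0f;         // shifts the result into the expected area
    return 20.0f * log10f(referenceLevel * powf(10.0f, averagePower / 20.0f) * range)
           + offset;
}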
Still, if anybody got a better solution to this. It would be great!

Getting displacement from accelerometer data with Core Motion

I am developing an augmented reality application that (at the moment) wants to display a simple cube on top of a surface, and to be able to move in space (both rotating and displacing) to look at the cube from all angles. The problem of calibrating the camera doesn't apply here, since I ask the user to place the iPhone on the surface he wants to place the cube on and then press a button to reset the attitude.
Finding the camera rotation is very simple with the gyroscope and Core Motion. I do it this way:
if (referenceAttitude != nil) {
[attitude multiplyByInverseOfAttitude:referenceAttitude];
}
CMRotationMatrix mat = attitude.rotationMatrix;
// Written out this way so the rotation matrix lands in OpenGL's
// column-major storage order.
GLfloat rotMat[] = {
    mat.m11, mat.m21, mat.m31, 0,
    mat.m12, mat.m22, mat.m32, 0,
    mat.m13, mat.m23, mat.m33, 0,
    0,       0,       0,       1
};
glMultMatrixf(rotMat);
This works really well.
More problems arise, however, when I try to find the displacement in space during an acceleration.
The Apple Teapot example with Core Motion just adds the x, y and z values of the acceleration vector to the position vector. This (apart from not making much sense) has the result of returning the object to the original position after an acceleration, since the acceleration goes from positive to negative or vice versa.
They did it like this:
translation.x += userAcceleration.x;
translation.y += userAcceleration.y;
translation.z += userAcceleration.z;
What should I do to find the displacement from the acceleration at some instant (with a known time difference)? Looking at some other answers, it seems like I have to integrate twice to get velocity from acceleration and then position from velocity. But there is no example in code whatsoever, and I don't think that is really necessary. Also, there is the problem that when the iPhone is resting on a plane, the accelerometer values are not null (there is some noise, I think). How much should I filter those values? Am I supposed to filter them at all?
Cool, there are people out there struggling with the same problem, so it is worth spending some time on it :-)
I agree with westsider's statement, as I spent a few weeks experimenting with different approaches and ended up with poor results. I am sure that there won't be an acceptable solution for either larger distances or slow motions lasting for more than 1 or 2 seconds. If you can live with some restrictions like small distances (< 10 cm) and a given minimum velocity for your motions, then I believe there might be a chance to find a solution - no guarantee at all. If so, it will take you a pretty hard time of research and a lot of frustration, but if you get it, it will be very, very cool :-) Maybe you will find these hints useful:
First of all, to make things easy, just look at one axis, e.g. x, but consider both left (-x) and right (+x) to have a representative situation.
Yes, you are right: you have to integrate twice to get the position as a function of time. And for further processing you should store the first integration's result (== velocity), because you will need it in a later stage for optimisation. Do it very carefully, because every tiny bug will lead to huge errors after a short period of time.
Always bear in mind that even a very small error (e.g. < 0.1%) will grow rapidly after doing integration twice. The situation becomes even worse after one second if you configure the accelerometer at, say, 50 Hz: 50 ticks are processed and the tiny, seemingly negligible error will outrun the "true" value. I would strongly recommend not relying on the trapezoidal rule but using at least Simpson's rule or a higher-degree Newton-Cotes formula; a sketch follows below.
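A minimal sketch of that double integration using composite Simpson's rule (the names are mine; it assumes a single axis and a fixed sample interval dt in seconds):

#include <vector>

// Integrate one axis of acceleration twice with composite Simpson's rule.
// The intermediate velocity is kept, as suggested above, so it can be
// corrected later (e.g. by "virtual forces").
struct MotionState {
    std::vector<double> velocity;  // first integral of acceleration
    std::vector<double> position;  // second integral
};

MotionState integrateTwice(const std::vector<double>& accel, double dt) {
    MotionState s;
    s.velocity.assign(accel.size(), 0.0);
    s.position.assign(accel.size(), 0.0);
    // Simpson's rule spans two sample intervals:
    // v[i] = v[i-2] + dt/3 * (a[i-2] + 4*a[i-1] + a[i]).
    // Note: even and odd indices form interleaved chains; v[1] stays 0
    // here, so seed it with a trapezoid step if that matters to you.
    for (size_t i = 2; i < accel.size(); ++i) {
        s.velocity[i] = s.velocity[i - 2]
                      + dt / 3.0 * (accel[i - 2] + 4.0 * accel[i - 1] + accel[i]);
        s.position[i] = s.position[i - 2]
                      + dt / 3.0 * (s.velocity[i - 2] + 4.0 * s.velocity[i - 1] + s.velocity[i]);
    }
    return s;
}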
If you manage this, you will have to keep an eye on setting up the right low-pass filtering. I cannot give a general value, but as a rule of thumb, experimenting with filter factors between 0.2 and 0.8 is a good starting point. The right value depends on the business case you need, for instance what kind of game it is and how fast it must react to events.
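To make "filter factor" concrete, the usual single-pole low-pass filter looks like this (a sketch; alpha is the factor mentioned above):

// Exponential low-pass filter: alpha near 1.0 follows the raw signal
// quickly but lets noise through; alpha near 0.0 is smooth but laggy.
double lowPass(double previous, double raw, double alpha) {
    return alpha * raw + (1.0 - alpha) * previous;
}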
Now you will have a solution which works pretty well under certain circumstances and within a short period of time. But then, after a few seconds, you will run into trouble because your object is drifting away. Now you will enter the difficult part of the solution, which I eventually failed to handle within the given time scope :-(
One promising approach is to introduce something I call "synthetic forces" or "virtual forces". This is a strategy to react to several bad situations that trigger the object to drift away even though the device remains motionless in your hands. The most troubling one is a velocity greater than 0 without any acceleration. This is an unavoidable result of error propagation and can be handled by slowing down artificially, that is, introducing a virtual deceleration even if there is no real counterpart. A very simplified example:
if (vX > 0 && timeSinceLastAccelerationX > 0.3 /* seconds */) {
    vX *= 0.9;  // artificial deceleration to bleed off phantom velocity
}
You will need a combination of such conditions to tame the beast. A lot of trial and error is required to get a feeling for the right way to go, and this will be the hard part of the problem.
If you ever managed to crack the code, pleeeease let me know, I am very curious to see if it is possible in general or not :-)
Cheers Kay
When the iPhone 4 was very new, I spent many, many hours trying to get an accurate displacement using accelerometers and the gyroscope. There shouldn't have been much concern about incremental drift, as the device needed to move only a couple of meters at most and the data collection typically ran for a few minutes at most. We tried all sorts of approaches and even had help from several Apple engineers. Ultimately, it seemed that the gyroscope wasn't up to the task. It was good for 3D orientation but that was it ... again, according to very knowledgeable engineers.
I would love to hear someone contradict this - because the app never really turned out as we had hoped, etc.
I am also trying to get displacement on the iPhone. Instead of using integration I used the basic physics formula d = 0.5 * a * t^2, assuming an initial velocity of 0 (though it doesn't sound like you can assume an initial velocity of 0). So far it seems to work quite well.
My problem is that I'm using deviceMotion and the values are not correct. deviceMotion.gravity reads near 0. Any ideas? OK, fixed: apparently deviceMotion.gravity has x, y, and z values. If you don't specify which one you want, you get back x (which should be near 0).
Finding this question two years later, I just found an AR project in the iOS 6 docset named pARk. It provides an approximate displacement capture and calculation using the gyroscope, aka CoreMotion.framework.
I'm just starting to learn the code.
to be continued...

What are the basic concepts for implementing anti-shock and anti-shake algorithms?

I have some animations happening upon fine acceleration detections. But when the user sits in a car or is walking, this may get annoying.
Basically, all that stuff has to be disabled automatically as soon as there is too much vibration or shaking. Conceptually, I think it's very hard to filter those vibrations out, since the "vibration phase" changes constantly. I would define "unwanted vibration or shocks" as acceleration values that change very quickly over a large interval of values, or as a permanently changing accumulated value that does not exceed a specified threshold range within a specified minimum period of time.
I am looking for "proven" concepts, before I start reinventing the wheel for a couple of days.
I don't have any concrete answers for you, but you might want to Google band-pass filters or anti-aliasing filters for some ideas on how to approach this. Basically, if you can identify the frequency range of accelerations that you want to consider real, you can filter out frequencies that fall outside this range.
Before you start doing too much pre-optimization, I think you should implement a low pass filter and see if that does the job. Most iPhone apps effectively use a variation of an LPF to get rid of unwanted accelerometer noise.
You could also go the other way and use a high-pass filter: once a certain power level gets through the HPF, stop processing data. A rough sketch combining both ideas follows below.
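A minimal sketch of that idea, combining the low-pass and high-pass suggestions above (the 0.1 g threshold and 0.9 smoothing factor are made-up starting points to tune):

#include <cmath>

// Split each accelerometer sample into a slow (low-pass) component and a
// fast (high-pass) residual; sustained energy in the residual means the
// device is shaking/vibrating, so the animations should be disabled.
class ShakeGate {
public:
    // Feed one sample (in g); returns true while the device is "too shaky".
    bool isShaky(double ax, double ay, double az) {
        const double alpha = 0.9;        // low-pass smoothing factor (tune)
        const double threshold = 0.1;    // g, shake energy cutoff (tune)
        lowX_ = alpha * lowX_ + (1.0 - alpha) * ax;
        lowY_ = alpha * lowY_ + (1.0 - alpha) * ay;
        lowZ_ = alpha * lowZ_ + (1.0 - alpha) * az;
        // High-pass residual = raw minus low-pass estimate.
        double hx = ax - lowX_, hy = ay - lowY_, hz = az - lowZ_;
        // Smooth the residual magnitude so single spikes don't toggle the gate.
        energy_ = alpha * energy_ + (1.0 - alpha) * std::sqrt(hx*hx + hy*hy + hz*hz);
        return energy_ > threshold;
    }
private:
    double lowX_ = 0, lowY_ = 0, lowZ_ = 0;
    double energy_ = 0;
};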