I'm using an IMU (3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer), and I want to extract the linear acceleration from the accelerometer data. I know about sensor fusion and the idea of using the gyroscope data to estimate orientation, obtain the gravity vector, and then remove its effect from the corresponding axes.
Am I on the right path, and could you help me with this?
After that I'll integrate the acceleration twice to get the position, as in the following:
// dt = sample interval in seconds between accelerometer callbacks
CurrentAcceleration[0] = e.Accelerometer[0];
CurrentAcceleration[1] = e.Accelerometer[1];
CurrentAcceleration[2] = e.Accelerometer[2];
// we still need the linear acceleration here instead of the raw readings !!

// trapezoidal integration: acceleration -> velocity
CurrentVelocity[0] += (CurrentAcceleration[0] + PreviousAcceleration[0]) / 2 * dt;
CurrentVelocity[1] += (CurrentAcceleration[1] + PreviousAcceleration[1]) / 2 * dt;
CurrentVelocity[2] += (CurrentAcceleration[2] + PreviousAcceleration[2]) / 2 * dt;

// trapezoidal integration: velocity -> position
Position[0] += (CurrentVelocity[0] + PreviousVelocity[0]) / 2 * dt;
Position[1] += (CurrentVelocity[1] + PreviousVelocity[1]) / 2 * dt;
Position[2] += (CurrentVelocity[2] + PreviousVelocity[2]) / 2 * dt;

PreviousAcceleration[0] = CurrentAcceleration[0];
PreviousAcceleration[1] = CurrentAcceleration[1];
PreviousAcceleration[2] = CurrentAcceleration[2];
PreviousVelocity[0] = CurrentVelocity[0];
PreviousVelocity[1] = CurrentVelocity[1];
PreviousVelocity[2] = CurrentVelocity[2];
This won't work. You cannot get an accurate position or even an accurate velocity this way. At the link above you will find tips on what you actually could do instead.
By the way, this question pops up surprisingly often.
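If you still want to experiment, here is a minimal sketch of the gravity-removal step the question asks about, written in Python purely for illustration. It assumes your fusion filter gives you a rotation matrix R_world_from_body each sample; the gravity sign convention and the state dictionary are assumptions of the sketch, not part of the original code. Even with this in place, noise and bias will make the integrated position drift within seconds.
import numpy as np

GRAVITY_W = np.array([0.0, 0.0, 9.81])  # gravity in the world frame (sign convention depends on your IMU)

def dead_reckoning_step(accel_body, R_world_from_body, dt, state):
    """One trapezoidal integration step; state holds 'prev_accel', 'velocity' and 'position'."""
    # rotate the body-frame reading into the world frame and remove gravity
    accel_world = R_world_from_body @ np.asarray(accel_body) - GRAVITY_W

    # trapezoid rule, as in the snippet in the question (with the sample interval dt)
    new_velocity = state["velocity"] + 0.5 * (accel_world + state["prev_accel"]) * dt
    new_position = state["position"] + 0.5 * (new_velocity + state["velocity"]) * dt

    state.update(prev_accel=accel_world, velocity=new_velocity, position=new_position)
    return state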
I need to move some objects, let's say 50, in a space (i.e. a grid of [-5,5]) and make sure that if the grid is divided into 100 portions, most of the portions (90% or more) are visited at least once by some object.
Constraints:
Objects should move in random directions in the grid, changing their velocities frequently (changing speed and direction in each iteration).
I was thinking of bouncing balls (but ones that move in random directions even when not hit by anything, not the way a real ball moves). If we could release them into the space at different positions with different forces, and each time they hit each other (or come within a certain distance of each other) they moved off in different directions with different speeds, that could give a result close to 90% of the portions of the grid being hit.
I also need to make sure the objects do not leave the grid (I could set lower and upper bounds and move them back inside whenever they try to leave).
My code so far is different from the idea I have described above:
ux = 1;
uy = 15;
g = 9.81;
t = 0;
x(1) = 0;
y(1) = 0;
tf = 2.0 * uy / g;   % time of flight back to the ground
dt = tf / 20;        % time increment - taking 20 steps
while t < tf
    t = t + dt;
    if ((uy - 0.5 * g * t) * t >= 0)
        x(end + 1) = ux * t;
        y(end + 1) = (uy - 0.5 * g * t) * t;
    end
end
plot(x, y)
This code makes the ball follow Newton's laws (a projectile trajectory), which is not what I want.
Bottom line: I just need to visit many portions of the grid in a short time, which is why I want the objects to move chaotically and randomly in the space (each time I run the code I need a different result, so the paths must be random). To get a better result I could make the objects bounce off in different directions whenever they hit each other or visit the same portion; that would probably give me a better result.
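To make the idea concrete, here is a rough sketch of the kind of motion I mean, in Python purely for illustration (my actual code above is MATLAB). Collisions between objects are left out, and all names and parameter values (n_objects, n_steps, the speed range) are just placeholders.
import numpy as np

rng = np.random.default_rng()           # fresh seed each run -> a different path every time

n_objects, n_steps = 50, 500
lb, ub, n_cells = -5.0, 5.0, 10         # grid limits and 10x10 = 100 portions

pos = rng.uniform(lb, ub, size=(n_objects, 2))
visited = np.zeros((n_cells, n_cells), dtype=bool)

for _ in range(n_steps):
    # new random speed and direction for every object at every iteration
    speed = rng.uniform(0.1, 1.0, size=n_objects)
    angle = rng.uniform(0.0, 2 * np.pi, size=n_objects)
    pos += np.column_stack((speed * np.cos(angle), speed * np.sin(angle)))

    # reflect objects that tried to leave the grid back inside
    pos = np.where(pos > ub, 2 * ub - pos, pos)
    pos = np.where(pos < lb, 2 * lb - pos, pos)

    # mark the portions currently occupied as visited
    cells = np.clip(((pos - lb) / (ub - lb) * n_cells).astype(int), 0, n_cells - 1)
    visited[cells[:, 0], cells[:, 1]] = True

print("coverage:", visited.mean() * 100, "% of portions visited")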
I have taken from a data set the x, y and z values of an activity (e.g. walking, running) detected by an accelerometer. Since the collected data also contains gravity, I removed it with the following filter in Matlab:
fc = 0.3;   % cutoff frequency (Hz) of the gravity (low-pass) component
fs = 50;    % sampling frequency (Hz)
x = ...;
y = ...;
z = ...;
[but,att] = butter(6,fc/(fs/2));   % 6th-order low-pass Butterworth filter
gx = filter(but,att,x);            % low-frequency (gravity) estimate per axis
gy = filter(but,att,y);
gz = filter(but,att,z);
new_x = x-gx;                      % subtract gravity, keeping the linear acceleration
new_y = y-gy;
new_z = z-gz;
A = magnitude(new_x,new_y,new_z);
plot(A)
Then I calculated the magnitude and plotted it on a graph.
However, every graph, even after removing gravity, starts with a magnitude of 1 g (9.8 m/s^2). Why? Shouldn't it start at 0, since I removed gravity?
You need to wait for the filter output to ramp up. Include some additional data at the beginning of the file that you don't graph, for this purpose.
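A minimal sketch of that idea, in Python with SciPy purely for illustration (the question's code is MATLAB): estimate gravity with the same kind of low-pass Butterworth filter, then drop the first few seconds of output so the filter's start-up transient never reaches the plot. The five-second settle time is just an assumption to tune.
import numpy as np
from scipy.signal import butter, lfilter

fs = 50.0          # sampling frequency (Hz), as in the question
fc = 0.3           # cutoff frequency (Hz) for the gravity estimate
b, a = butter(6, fc / (fs / 2))   # 6th-order low-pass Butterworth

def linear_acceleration(axis_data, settle_seconds=5.0):
    """Remove the low-frequency (gravity) component and drop the filter's start-up transient."""
    gravity = lfilter(b, a, axis_data)   # slowly varying gravity estimate
    linear = axis_data - gravity         # what is left is the linear acceleration
    skip = int(settle_seconds * fs)      # samples to discard while the filter settles
    return linear[skip:]

# usage sketch (x, y, z are 1-D numpy arrays of raw accelerometer samples):
# ax, ay, az = linear_acceleration(x), linear_acceleration(y), linear_acceleration(z)
# magnitude = np.sqrt(ax**2 + ay**2 + az**2)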
How accurate do your calculations need to be? With walking and running the angle of the accelerometer can change, so the orientation of the gravity vector can change throughout the gait cycle. How much of a change in orientation you can expect to see depends on the sensor location and the particular motion you are trying to capture.
I am interested in building a hexagonal torus using a mesh of points.
I think I can start with a 2-D polygon and then iterate 360 times (1 degree resolution) to build a complete solid.
Is this the best way to do this? What I'm really after is building wing profiles with variable cross-section geometry along the span.
You can do this your way with polyhedron(). Add an appropriate number of points per profile, in a defined order, to a vector "points", define the faces by the indices of those points in a second vector "faces", and pass both vectors as parameters to polyhedron() (see the documentation). You can control the quality of the surface by the number of points per profile and by the distance between the profiles (sections of the torus).
Here is an example:
// parameters:
r1 = 20;  // radius of torus
r2 = 4;   // radius of polygon / thickness of torus
s = 360;  // sections per 360 deg
p = 6;    // points on polygon
a = 30;   // angle of the first point on the polygon

// points on the cross-section:
// angle = 360*i/p + startangle, x = r2*cos(angle), y = 0, z = r2*sin(angle)
function cs_point(i) = [r1 + r2*cos(360*i/p + a), 0, r2*sin(360*i/p + a)];

// returns, for an index into the points vector, the section number and the number of the point on that section
function point_index(i) = [floor(i/p), i - p*floor(i/p)];

// returns the point's x-, y-, z-coordinates by rotating the corresponding cross-section point around the z-axis
function iterate_cs(i) = [cs[point_index(i)[1]][0]*cos(360*floor(i/p)/s), cs[point_index(i)[1]][0]*sin(360*floor(i/p)/s), cs[point_index(i)[1]][2]];

// for every point, find the neighbour points to build faces ( + p: point on the next cross-section), points ordered clockwise
// to connect points on the last section to the corresponding points on the first section
function item_add1(i) = i >= (s - 1)*p ? -(s)*p : 0;

// to connect the last point on a section to the first points on the same and the next section
function item_add2(i) = i - p*floor(i/p) >= p-1 ? -p : 0;

// build faces
function find_neighbours1(i) = [i, i + 1 + item_add2(i), i + 1 + item_add2(i) + p + item_add1(i)];
function find_neighbours2(i) = [i, i + 1 + item_add2(i) + p + item_add1(i), i + p + item_add1(i)];

cs = [for (i = [0:p-1]) cs_point(i)];
points = [for (i = [0:s*p - 1]) iterate_cs(i)];
faces1 = [for (i = [0:s*p - 1]) find_neighbours1(i)];
faces2 = [for (i = [0:s*p - 1]) find_neighbours2(i)];
faces = concat(faces1, faces2);
polyhedron(points = points, faces = faces);
Here is the result:
Since OpenSCAD 2015.03, faces can have more than 3 points if all points of a face lie on the same plane, so in this case the faces could also be built in one step.
Are you building something like NACA airfoils? https://en.wikipedia.org/wiki/NACA_airfoil
There are a few OpenSCAD designs for those floating around, see e.g. https://www.thingiverse.com/thing:898554
I am currently using the following code to count the number of steps a user takes in my indoor navigation application. As long as I hold the phone around chest level with the screen facing upwards, it counts my steps pretty well. But common actions like tapping the screen or panning through the map also register as steps. This is very frustrating, as the tracking of my movement within the floor plan becomes highly inaccurate. Does anyone have any idea how I can improve the accuracy of tracking in this case? Any comments will be much appreciated! To get a better idea of what I'm trying to do, you can check out a similar Android application at http://www.youtube.com/watch?v=wMgIa44mJXY. Thanks!
-(void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    float xx = acceleration.x;
    float yy = acceleration.y;
    float zz = acceleration.z;

    // cosine of the angle between the current and previous acceleration vectors
    float dot = (px * xx) + (py * yy) + (pz * zz);
    float a = ABS(sqrt(px * px + py * py + pz * pz));
    float b = ABS(sqrt(xx * xx + yy * yy + zz * zz));
    dot /= (a * b);

    // a sufficiently large change in the direction of the acceleration is counted as a step
    if (dot <= 0.9989) {
        if (!isSleeping) {
            isSleeping = YES;
            [self performSelector:@selector(wakeUp) withObject:nil afterDelay:0.3];
            numSteps += 1;
        }
    }
    px = xx; py = yy; pz = zz;
}
The data from the accelerometer is basically a non-uniformly sampled (in time) three-dimensional vector signal. The best way to figure out how to count steps is to write an app that records and stores the samples over a certain period of time, then export the data to a mathematical application like Wolfram's Mathematica for analysis and visualization. Remember that the sampling is non-uniform; you may or may not want to resample it into a uniformly sampled digital signal.
Then you can try different signal processing algorithms to see what works best.
It's possible that, once you know the basic shape of a step in the accelerometer data, you can recognize steps by simple convolution.
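As a rough illustration of that convolution idea (not a drop-in replacement for the Objective-C above), here is a short Python sketch. It assumes you have already resampled the acceleration magnitude uniformly and recorded a single-step template; the threshold and minimum step gap are placeholder values to tune.
import numpy as np

def count_steps(magnitude, step_template, fs, min_gap_s=0.3, threshold=None):
    """Count steps by correlating the signal with a recorded single-step template."""
    sig = magnitude - np.mean(magnitude)          # remove the DC (gravity) offset
    tpl = step_template - np.mean(step_template)

    score = np.correlate(sig, tpl, mode="same")   # peaks where the signal resembles a step
    if threshold is None:
        threshold = 4 * np.std(score)             # crude automatic threshold (assumption)

    min_gap = int(min_gap_s * fs)                 # refuse two detections closer than this
    steps, last = 0, -min_gap
    for i in range(1, len(score) - 1):
        is_peak = score[i] >= score[i - 1] and score[i] >= score[i + 1]
        if is_peak and score[i] > threshold and i - last >= min_gap:
            steps += 1
            last = i
    return steps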
I'm trying to use the geomagnetic and accelerometer data to rotate the camera in OpenGL ES 1. I found some Android code and adapted it for the iPhone; it actually works more or less, but there are some mistakes that I'm not able to find. The code is below, along with the OpenGL ES 1 call: glLoadMatrixf((GLfloat*)matrix);
- (void)GetAccelerometerMatrix:(GLfloat *)matrix headingX:(float)hx headingY:(float)hy headingZ:(float)hz
{
    // simple low-pass filter on the magnetometer readings
    _geomagnetic[0] = hx * (FILTERINGFACTOR-0.05) + _geomagnetic[0] * (1.0 - FILTERINGFACTOR-0.5) + _geomagnetic[3] * (0.55);
    _geomagnetic[1] = hy * (FILTERINGFACTOR-0.05) + _geomagnetic[1] * (1.0 - FILTERINGFACTOR-0.5) + _geomagnetic[4] * (0.55);
    _geomagnetic[2] = hz * (FILTERINGFACTOR-0.05) + _geomagnetic[2] * (1.0 - FILTERINGFACTOR-0.5) + _geomagnetic[5] * (0.55);
    _geomagnetic[3] = _geomagnetic[0];
    _geomagnetic[4] = _geomagnetic[1];
    _geomagnetic[5] = _geomagnetic[2];

    // Clear matrix to be used to rotate from the current referential to one based on the gravity vector
    bzero(matrix, 16 * sizeof(GLfloat));   // matrix is a 4x4 GLfloat array; sizeof(matrix) here would only be the pointer size

    // MAGNETIC
    float Ex = -_geomagnetic[1];
    float Ey = _geomagnetic[0];
    float Ez = _geomagnetic[2];

    // ACCELEROMETER
    float Ax = -_accelerometer[0];
    float Ay = _accelerometer[1];
    float Az = _accelerometer[2];

    // H = E x A, perpendicular to both the magnetic field and gravity
    float Hx = Ey*Az - Ez*Ay;
    float Hy = Ez*Ax - Ex*Az;
    float Hz = Ex*Ay - Ey*Ax;
    float normH = (float)sqrt(Hx*Hx + Hy*Hy + Hz*Hz);
    float invH = 1.0f / normH;
    Hx *= invH;
    Hy *= invH;
    Hz *= invH;

    // normalize the accelerometer (gravity) vector
    float invA = 1.0f / (float)sqrt(Ax*Ax + Ay*Ay + Az*Az);
    Ax *= invA;
    Ay *= invA;
    Az *= invA;

    // M = A x H, completing the orthonormal basis
    float Mx = Ay*Hz - Az*Hy;
    float My = Az*Hx - Ax*Hz;
    float Mz = Ax*Hy - Ay*Hx;

    // if (mOut.f != null) {
    matrix[0] = Hx; matrix[1] = Hy; matrix[2] = Hz; matrix[3] = 0;
    matrix[4] = Mx; matrix[5] = My; matrix[6] = Mz; matrix[7] = 0;
    matrix[8] = Ax; matrix[9] = Ay; matrix[10] = Az; matrix[11] = 0;
    matrix[12] = 0; matrix[13] = 0; matrix[14] = 0; matrix[15] = 1;
}
Thank you very much for the help.
Edit: The iPhone is permanently in landscape orientation, and I know that something is wrong because the object drawn in OpenGL ES appears twice.
Have you looked at Apple's GLGravity sample code? It does something very similar to what you want here, by manipulating the model view matrix in response to changes in the accelerometer input.
I'm unable to find any problems with the code posted, and would suggest the problem is elsewhere. If it helps, my analysis of the code posted is that:
The first six lines, dealing with _geomagnetic[0]–[5], implement a very simple low-pass filter, which assumes you call the method at regular intervals. So you end up with a version of the magnetometer vector that hopefully has the high-frequency jitter removed.
The bzero zeroes the result, ready for accumulation.
The lines down to the declaration and assignment to Hz take the magnetometer and accelerometer vectors and perform the cross product. So H(x, y, z) is now a vector at right angles to both the accelerometer (which is presumed to be 'down') and the magnetometer (which will be forward + some up). Call that the side vector.
The invH and invA stuff, down to the multiplication of Az by invA, ensures that the side and accelerometer/down vectors are of unit length.
M(x, y, z) is then created as the cross product of the side and down vectors (i.e., a vector at right angles to both of those). So it gives the front vector.
Finally, the three vectors are used to populate the matrix, taking advantage of the fact that the inverse of an orthonormal 3x3 matrix is its transpose (though that's sort of hidden by the way things are laid out — pay attention to the array indices). You actually set everything in the matrix directly, so the bzero wasn't necessary in pure outcome terms.
glLoadMatrixf is then the correct thing to use because that's how you multiply by an arbitrary column-major matrix in OpenGL ES 1.x.
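For reference, the construction described above can be written compactly with NumPy (an illustration of the math, not drop-in iPhone code); accel and mag are assumed to be the already-filtered accelerometer and magnetometer vectors:
import numpy as np

def rotation_matrix_from_accel_mag(accel, mag):
    """Orthonormal basis from gravity and magnetic-field vectors, as described above."""
    A = np.asarray(accel, dtype=float)   # accelerometer: presumed 'down'
    E = np.asarray(mag, dtype=float)     # magnetometer: forward + some up

    H = np.cross(E, A)                   # 'side' vector, at right angles to both
    H = H / np.linalg.norm(H)
    A = A / np.linalg.norm(A)
    M = np.cross(A, H)                   # 'front' vector, completing the basis

    # H, M and A fill the flat GL array at indices 0-2, 4-6 and 8-10 respectively
    R = np.identity(4)
    R[0, :3] = H
    R[1, :3] = M
    R[2, :3] = A
    return R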