I've tried several ways of measuring the steps a user makes with an iPhone by reading the accelerometer, but none have been very accurate. The most accurate implementation I've used is the following:
float xx = acceleration.x;
float yy = acceleration.y;
float zz = acceleration.z;

// Cosine of the angle between the previous and current acceleration vectors
float dot = (mOldAccX * xx) + (mOldAccY * yy) + (mOldAccZ * zz);
float a = sqrt(mOldAccX * mOldAccX + mOldAccY * mOldAccY + mOldAccZ * mOldAccZ);
float b = sqrt(xx * xx + yy * yy + zz * zz);
dot /= (a * b);

if (dot <= 0.994 && dot > 0.90) // bounce
{
    // Count a step on every other bounce sample (toggle)
    if (!isChange)
    {
        isChange = YES;
        mNumberOfSteps += 1;
    } else {
        isChange = NO;
    }
}

mOldAccX = xx;
mOldAccY = yy;
mOldAccZ = zz;
}
However, this only catches 80% of the user's steps. How can I improve the accuracy of my pedometer?
Here is a more precise way to detect each step. In my case it is accurate to within plus or minus 1 step for every 25 steps, so I hope this is helpful to you. :)
if (dot <= 0.90) {
    if (!isSleeping) {
        isSleeping = YES;
        // Sleep for 0.3 s so the same step's bounce isn't counted twice
        [self performSelector:@selector(wakeUp) withObject:nil afterDelay:0.3];
        numSteps += 1;
        self.stepsCount.text = [NSString stringWithFormat:@"%d", numSteps];
    }
}

- (void)wakeUp {
    isSleeping = NO;
}
OK, I'm assuming this code is within the addAcceleration method...
-(void)addAcceleration:(UIAcceleration*)accel
You could increase your sampling rate to get finer-grained detection. For example, if you are currently taking 30 samples per second, you could increase it to 40, 50, or 60. Then decide whether a run of samples that fall within your bounce range should be counted as a single step. It sounds like you are not counting some steps because you are missing some of the bounces.
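For illustration, a minimal sketch of raising the rate with the UIAccelerometer API (the 60 Hz figure is just an example, not a recommendation):

[[UIAccelerometer sharedAccelerometer] setUpdateInterval:(1.0 / 60.0)]; // deliver ~60 samples per second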
Also, what is the purpose of toggling isChange? Shouldn't you use a counter that resets after x samples? If you are within your bounce...
if (dot <= 0.994 && dot > 0.90) // bounce
you would have to hit this sweet spot twice, but the way you have set it up, those may not be two consecutive samples; it may be the 1st sample and the 5th, or the 2nd sample and the 11th. That is where you are losing step counts.
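For illustration, a minimal sketch of the counter idea (the variable names and the 5-sample reset window are assumptions, not your code):

if (dot <= 0.994 && dot > 0.90) { // bounce
    bounceCount++;
    samplesSinceBounce = 0;
} else if (++samplesSinceBounce > 5) {
    bounceCount = 0; // the second bounce never arrived in time; start over
}
if (bounceCount >= 2) { // two bounces close together count as one step
    mNumberOfSteps += 1;
    bounceCount = 0;
}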
Keep in mind that not everyone takes steps of the same size, so the dot-product calculation should be tuned to the user's height and stride length. Adjust the bounce thresholds accordingly, and try to make the program learn about its passenger.
I am currently using the following code to count the number of steps a user takes in my indoor navigation application. As I am holding the phone around my chest level with the screen facing upwards, it counts the number of steps I take pretty well. But common actions like a tap on the screen or panning through the map register step counts as well. This is very frustrating as the tracking of my movement within the floor plan will become highly inaccurate. Does anyone have any idea how I can improve the accuracy of tracking in this case? Any comments will be much appreciated! To have a better idea of what I'm trying to do, you guys can check out a similar Android application at http://www.youtube.com/watch?v=wMgIa44mJXY. Thanks!
-(void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    float xx = acceleration.x;
    float yy = acceleration.y;
    float zz = acceleration.z;

    // Cosine of the angle between the previous and current acceleration vectors
    float dot = (px * xx) + (py * yy) + (pz * zz);
    float a = sqrt(px * px + py * py + pz * pz);
    float b = sqrt(xx * xx + yy * yy + zz * zz);
    dot /= (a * b);

    if (dot <= 0.9989) {
        if (!isSleeping) {
            isSleeping = YES;
            // Sleep for 0.3 s so one step can't be counted twice
            [self performSelector:@selector(wakeUp) withObject:nil afterDelay:0.3];
            numSteps += 1;
        }
    }

    px = xx; py = yy; pz = zz;
}
The data from the accelerometer is basically a one-dimensional (time) non-uniform sampling of a three-dimensional vector signal. The best way to figure out how to count steps is to write an app that records and stores the samples over a certain period of time, then export the data to a mathematical application like Wolfram's Mathematica for analysis and visualization. Remember that the sampling is non-uniform; you may or may not want to transform it into a uniformly sampled digital signal.
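As a minimal sketch of the recording step (the file name, the log file-handle property, and the lack of buffering are all assumptions), you could append each sample to a CSV file and pull it off the device later:

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    if (!self.log) {
        NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"samples.csv"];
        [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
        self.log = [NSFileHandle fileHandleForWritingAtPath:path];
    }
    // One row per sample; keep the timestamp, since the sampling is non-uniform
    NSString *row = [NSString stringWithFormat:@"%f,%f,%f,%f\n",
                     acceleration.timestamp, acceleration.x, acceleration.y, acceleration.z];
    [self.log writeData:[row dataUsingEncoding:NSUTF8StringEncoding]];
}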
Then you can try different signal processing algorithms to see what works best.
It's possible that, once you know the basic shape of a step in accelerometer data, you can recognize them by simple convolution.
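For example, a sketch of that convolution (assuming you have resampled the magnitude of the acceleration vector into a uniform array, and that stepShape holds the averaged shape of a single step extracted from your recordings; both are assumptions):

float scoreAt(const float *samples, int n, const float *stepShape, int m, int t) {
    // Correlate the step template with the signal starting at sample t;
    // steps show up as peaks in this score as t advances.
    float score = 0.0f;
    for (int k = 0; k < m && t + k < n; k++)
        score += samples[t + k] * stepShape[k];
    return score;
}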
I basically want to take two images taken from the camera on the iPhone or iPad 2 and compare them to each other to see if they are pretty much the same. Obviously, due to lighting etc., the images will never be EXACTLY the same, so I would like to check for around 90% similarity.
All the other questions like this that I saw on here were either not for iOS or were for locating objects in images. I just want to see if two images are similar.
Thank you.
As a quick, simple algorithm, I'd suggest iterating through about 1% of the pixels in each image and either comparing them directly against each other or keeping a running average and then comparing the two average color values at the end.
You can look at this answer for an idea of how to determine the color of a pixel at a given position in an image. You may want to optimize it somewhat to better suit your use-case (repeatedly querying the same image), but it should provide a good starting point.
Then you can use an algorithm roughly like:
float numDifferences = 0.0f;
float totalCompares = width * height / 100.0f;

// Sample every 10th pixel in each direction, i.e. about 1% of the pixels
for (int yCoord = 0; yCoord < height; yCoord += 10) {
    for (int xCoord = 0; xCoord < width; xCoord += 10) {
        int img1RGB[3], img2RGB[3];
        [image1 getRGBForX:xCoord andY:yCoord into:img1RGB];
        [image2 getRGBForX:xCoord andY:yCoord into:img2RGB];
        if (abs(img1RGB[0] - img2RGB[0]) > 25 ||
            abs(img1RGB[1] - img2RGB[1]) > 25 ||
            abs(img1RGB[2] - img2RGB[2]) > 25) {
            // one or more color components differs by 10% or more
            numDifferences++;
        }
    }
}

if (numDifferences / totalCompares <= 0.1f) {
    // no more than 10% of the sampled pixels differ; treat the images as ~90% identical
}
else {
    // more than 10% of the sampled pixels differ
}
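The getRGBForX:andY:into: helper above is hypothetical; a minimal sketch of one way it could look as a UIImage category method, reusing the 1x1 bitmap-context trick from the linked answer (the byte layout assumed here matches an alpha-skip-first context):

- (void)getRGBForX:(int)x andY:(int)y into:(int *)rgb {
    uint8_t pixel[4] = {0};
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space, kCGImageAlphaNoneSkipFirst);
    // Shift the image so that pixel (x, y) lands on the context's single pixel
    CGContextDrawImage(ctx, CGRectMake(-x, -y, self.size.width, self.size.height), self.CGImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    rgb[0] = pixel[1]; // red   (pixel[0] is the skipped alpha byte)
    rgb[1] = pixel[2]; // green
    rgb[2] = pixel[3]; // blue
}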
Based on aroth's idea, this is my full implementation. It checks if some random pixels are the same. For what I needed it works flawlessly.
- (bool)isTheImage:(UIImage *)image1 apparentlyEqualToImage:(UIImage *)image2 accordingToRandomPixelsPer1:(float)pixelsPer1
{
    if (!CGSizeEqualToSize(image1.size, image2.size))
    {
        return false;
    }

    int pixelsWidth = (int)CGImageGetWidth(image1.CGImage);
    int pixelsHeight = (int)CGImageGetHeight(image1.CGImage);
    int pixelsToCompare = pixelsWidth * pixelsHeight * pixelsPer1;

    // Two 1x1 bitmap contexts; each CGContextDrawImage below renders exactly
    // one pixel of the source image into them. The color space is created once
    // and released at the end (creating it inline would leak it).
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint32_t pixel1 = 0;
    CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    uint32_t pixel2 = 0;
    CGContextRef context2 = CGBitmapContextCreate(&pixel2, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);

    bool isEqual = true;
    for (int i = 0; i < pixelsToCompare; i++)
    {
        int pixelX = arc4random() % pixelsWidth;
        int pixelY = arc4random() % pixelsHeight;

        // Offset the draw rect so the pixel at (pixelX, pixelY) lands in the 1x1 context
        CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image1.CGImage);
        CGContextDrawImage(context2, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image2.CGImage);

        if (pixel1 != pixel2)
        {
            isEqual = false;
            break;
        }
    }

    CGContextRelease(context1);
    CGContextRelease(context2);
    CGColorSpaceRelease(colorSpace);
    return isEqual;
}
Usage:
[self isTheImage:image1 apparentlyEqualToImage:image2
accordingToRandomPixelsPer1:0.001]; // Use a value between 0.0001 and 0.005
According to my performance tests, 0.005 (0.5% of the pixels) is the maximum value you should use; if you need more precision, just compare the whole images using this. 0.001 seems to be a safe, well-performing value. For large images (between 0.5 and 2 megapixels), I'm using 0.0001 (0.01%); it works great, is incredibly fast, and has never made a mistake.
But of course the mistake ratio will depend on the type of images you are using. I'm comparing UIWebView screenshots, and 0.0001 performs well, but you can probably use much less if you are comparing real photographs (in fact, you could even compare just one random pixel). If you are dealing with very similar computer-designed images, you definitely need more precision.
Note: I'm always comparing ARGB images without taking into account the alpha channel. Maybe you'll need to adapt it if that's not exactly your case.
I'm trying to use the geomagnetic and accelerometer readings to rotate the camera in OpenGL ES 1. I found some Android code and adapted it for the iPhone. It works more or less, but there are some mistakes, and I'm not able to find them. I've included the code below, along with the OpenGL ES 1 call: glLoadMatrixf((GLfloat*)matrix);
- (void)GetAccelerometerMatrix:(GLfloat *)matrix headingX:(float)hx headingY:(float)hy headingZ:(float)hz
{
    // Low-pass filter the magnetometer input (FILTERINGFACTOR is defined elsewhere)
    _geomagnetic[0] = hx * (FILTERINGFACTOR - 0.05) + _geomagnetic[0] * (1.0 - FILTERINGFACTOR - 0.5) + _geomagnetic[3] * 0.55;
    _geomagnetic[1] = hy * (FILTERINGFACTOR - 0.05) + _geomagnetic[1] * (1.0 - FILTERINGFACTOR - 0.5) + _geomagnetic[4] * 0.55;
    _geomagnetic[2] = hz * (FILTERINGFACTOR - 0.05) + _geomagnetic[2] * (1.0 - FILTERINGFACTOR - 0.5) + _geomagnetic[5] * 0.55;
    _geomagnetic[3] = _geomagnetic[0];
    _geomagnetic[4] = _geomagnetic[1];
    _geomagnetic[5] = _geomagnetic[2];

    // Clear the matrix to be used to rotate from the current referential to one
    // based on the gravity vector. (matrix is a pointer here, so sizeof(matrix)
    // would only zero the first 4 or 8 bytes; zero all 16 entries instead.)
    bzero(matrix, 16 * sizeof(GLfloat));

    // MAGNETIC
    float Ex = -_geomagnetic[1];
    float Ey = _geomagnetic[0];
    float Ez = _geomagnetic[2];

    // ACCELEROMETER
    float Ax = -_accelerometer[0];
    float Ay = _accelerometer[1];
    float Az = _accelerometer[2];

    // H = E x A: a vector perpendicular to both the magnetic and gravity vectors
    float Hx = Ey * Az - Ez * Ay;
    float Hy = Ez * Ax - Ex * Az;
    float Hz = Ex * Ay - Ey * Ax;

    // Normalize H
    float normH = (float)sqrt(Hx * Hx + Hy * Hy + Hz * Hz);
    float invH = 1.0f / normH;
    Hx *= invH;
    Hy *= invH;
    Hz *= invH;

    // Normalize A
    float invA = 1.0f / (float)sqrt(Ax * Ax + Ay * Ay + Az * Az);
    Ax *= invA;
    Ay *= invA;
    Az *= invA;

    // M = A x H: completes the orthonormal basis
    float Mx = Ay * Hz - Az * Hy;
    float My = Az * Hx - Ax * Hz;
    float Mz = Ax * Hy - Ay * Hx;

    matrix[0] = Hx;  matrix[1] = Hy;  matrix[2] = Hz;  matrix[3] = 0;
    matrix[4] = Mx;  matrix[5] = My;  matrix[6] = Mz;  matrix[7] = 0;
    matrix[8] = Ax;  matrix[9] = Ay;  matrix[10] = Az; matrix[11] = 0;
    matrix[12] = 0;  matrix[13] = 0;  matrix[14] = 0;  matrix[15] = 1;
}
Thank you very much for the help.
Edit: The iPhone is permanently in landscape orientation, and I know something is wrong because the object drawn in OpenGL ES appears twice.
Have you looked at Apple's GLGravity sample code? It does something very similar to what you want here, by manipulating the model view matrix in response to changes in the accelerometer input.
I'm unable to find any problems with the code posted, and would suggest the problem is elsewhere. If it helps, my analysis of the code posted is as follows:
The first six lines, dealing with _geomagnetic[0] through _geomagnetic[5], implement a very simple low-pass filter, which assumes you call the method at regular intervals. So you end up with a version of the magnetometer vector that hopefully has the high-frequency jitter removed.
The bzero zeroes the result, ready for accumulation.
The lines down to the declaration and assignment to Hz take the magnetometer and accelerometer vectors and perform the cross product. So H(x, y, z) is now a vector at right angles to both the accelerometer (which is presumed to be 'down') and the magnetometer (which will be forward + some up). Call that the side vector.
The invH and invA calculations, down to the multiplication of Az by invA, ensure that the side and accelerometer/down vectors are of unit length.
M(x, y, z) is then created as the cross product of the side and down vectors (i.e., a vector at right angles to both of those). So it gives the front vector.
Finally, the three vectors are used to populate the matrix, taking advantage of the fact that the inverse of an orthonormal 3x3 matrix is its transpose (though that's sort of hidden by the way things are laid out — pay attention to the array indices). You actually set everything in the matrix directly, so the bzero wasn't necessary in pure outcome terms.
glLoadMatrixf is then the correct thing to use because that's how you multiply by an arbitrary column-major matrix in OpenGL ES 1.x.
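For reference, a minimal sketch of how the result would typically be fed to OpenGL ES 1.x (here heading.x/y/z stand in for however you obtain the magnetometer readings, e.g. from a CLHeading):

GLfloat matrix[16];
[self GetAccelerometerMatrix:matrix headingX:heading.x headingY:heading.y headingZ:heading.z];
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(matrix);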
I've been racking my brains over this problem for two days, and I've tried different things, but none of them work. I'm building an app which is a kind of quiz. There are three subjects which contain questions, and I would like to use three sliders to define the percentage of questions they want on each subject.
e.g.: slider one = History
slider two = Maths
slider three = Grammar
If I choose to have more History, I slide the History slider up, and the other sliders should decrease so that the three sliders still total 100%...
Any idea for an algorithm? And what happens when one slider reaches zero?
Maths has never been my scene.
Any Help would be very much appreciated.
Thanks in advance.
Mike
Though Snowangelic's answer is good, I think it makes more sense to constrain the ratio between the unchanged values, as follows.
Let s1, s2, s3 be the current values of the sliders, so that s1 + s2 + s3 = 100. You want to solve for n1, n2, n3, the new values of the sliders, so that n1 + n2 + n3 = 100. Assume s1 is changed to n1 by the user. This adds the following constraint:
n2/n3 = s2/s3
So the solution to the problem, with n1 + n2 + n3 = 100, is
n2 = (100 - n1)/(s3/s2 + 1), or 0 if s2 = 0, and
n3 = (100 - n1)/(s2/s3 + 1), or 0 if s3 = 0
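A minimal sketch of that rule in code (the function and variable names are mine; values run from 0 to 100):

// Given the changed slider's new value n1 and the other sliders' current
// values s2 and s3, compute their new values n2 and n3 so the total stays
// 100 and the s2:s3 ratio is preserved.
void redistribute(float n1, float s2, float s3, float *n2, float *n3) {
    float rest = 100.0f - n1;
    if (s2 + s3 <= 0.0f) {
        *n2 = *n3 = rest / 2.0f;      // both others at zero: split evenly
    } else {
        *n2 = rest * s2 / (s2 + s3);  // equivalent to (100-n1)/(s3/s2 + 1), safe when s2 = 0
        *n3 = rest * s3 / (s2 + s3);
    }
}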
Start all three sliders at 33.333...%
When the user moves a slider up by, say, 10%, move the two other sliders down by 5% each. But if one of the two sliders reaches 0, move the other one by the full ten percent. So it gives something like this:
The user moved a slider by x (may be positive or negative):
for the first slider:
    if slider - x/2 > 0 and slider - x/2 < 100
        move this slider by -x/2
    else
        move the other slider by -x/2
for the second slider:
    if slider - x/2 > 0 and slider - x/2 < 100
        move this slider by -x/2
    else
        move the other slider by -x/2
end
Another possibility would be to consider that the sum of the available resources is 100, and that the resources are separated into n buckets (in your case 3). When the user moves a slider, they fix the number of resources in the corresponding bucket, so you either take resources from the other buckets or put resources into them.
You have something like:
state 1: the modified bucket, and the new number of resources in that bucket
modification = new number of resources in the bucket - number of resources in state 1

for (int i = 0; modification > 0; i++) {
    i = i % numberOfBuckets;
    if (bucket i is not the modified bucket) {
        if (number of resources in bucket i > 0) {
            remove one resource from bucket i;
            modification--;
        }
    }
}
That is assuming the modification is positive (new number in the modified bucket is higher than before). This small algorithm would work with any number of buckets (sliders in your case).
The following algorithm should be reviewed and of course optimized. It is only something that I have put together and I've not tested it.
Initialize each slider with a minimum and maximum value, and set the initial value as desired, respecting that x + y + z = 1:
[self.slider1 setMinimumValue:0.0];
[self.slider1 setMaximumValue:1.0];
[self.slider1 setValue:0.20];
[self.slider2 setMinimumValue:0.0];
[self.slider2 setMaximumValue:1.0];
[self.slider2 setValue:0.30];
[self.slider3 setMinimumValue:0.0];
[self.slider3 setMaximumValue:1.0];
[self.slider3 setValue:0.50];
Point all three sliders at the same selector:
[self.slider1 addTarget:self action:@selector(valueChanged:) forControlEvents:UIControlEventValueChanged];
[self.slider2 addTarget:self action:@selector(valueChanged:) forControlEvents:UIControlEventValueChanged];
[self.slider3 addTarget:self action:@selector(valueChanged:) forControlEvents:UIControlEventValueChanged];
The selector should do something like this:
- (void)valueChanged:(UISlider *)slider {
    // sliderX is the slider the user moved; the other two must compensate
    UISlider *sliderX = nil;
    UISlider *sliderY = nil;
    UISlider *sliderZ = nil;
    if (slider == self.slider1) {
        sliderX = self.slider1;
        sliderY = self.slider2;
        sliderZ = self.slider3;
    } else if (slider == self.slider2) {
        sliderX = self.slider2;
        sliderY = self.slider1;
        sliderZ = self.slider3;
    } else {
        sliderX = self.slider3;
        sliderY = self.slider1;
        sliderZ = self.slider2;
    }

    float x = sliderX.value;
    float y = sliderY.value;
    float z = sliderZ.value;

    // x + y + z = 1 held before the change, so the old x can be recovered
    float oldX = 1 - y - z;
    float difference = x - oldX;

    // Split the change evenly between the two other sliders
    float newY = y - difference / 2;
    float newZ = z - difference / 2;

    // If one slider would go below zero, the other absorbs the remainder
    if (newY < 0) {
        newZ += newY;
        newY = 0;
    }
    if (newZ < 0) {
        newY += newZ;
        newZ = 0;
    }

    [sliderY setValue:newY animated:YES];
    [sliderZ setValue:newZ animated:YES];
}
If there is something wrong with this code, please let me know, and I can fix it!
I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is X axis labels (i.e. the frequency labels).
In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code:
if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
{
    if (fftBufferManager->HasNewAudioData())
    {
        if (fftBufferManager->ComputeFFT(l_fftData))
            [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
        else
            hasNewFFTData = NO;
    }

    if (hasNewFFTData)
    {
        int y, maxY;
        maxY = drawBufferLen;
        for (y = 0; y < maxY; y++)
        {
            CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
            CGFloat fftIdx = yFract * ((CGFloat)fftLength);

            double fftIdx_i, fftIdx_f;
            fftIdx_f = modf(fftIdx, &fftIdx_i);

            SInt8 fft_l, fft_r;
            CGFloat fft_l_fl, fft_r_fl;
            CGFloat interpVal;

            fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
            fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
            fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
            fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
            interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
            interpVal = CLAMP(0., interpVal, 1.);
            drawBuffers[0][y] = (interpVal * 120);
        }
        cycleOscilloscopeLines();
    }
}
From my understanding, this part of the code is what decides which magnitude to draw for each frequency in the UI. My question is: how can I determine what frequency each iteration (or y value) represents inside the for loop?
For example, if I want to know what the magnitude is for 6kHz, I'm thinking of adding a line similar to the following:
if (yValueRepresentskHz(y, 6))
    NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));
Please note that although they chose to use the variable name y, from what I understand, it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of the drawBuffers[0][y] represents the y-axis.
I believe that the frequency of each bin is given by
yFract * hwSampleRate * .5
I'm fairly certain that you need the .5 because yFract is a fraction of the total fftLength, and the last bin of the FFT corresponds to half of the sampling rate. Thus, you could do something like
NSLog(@"The magnitude for %f Hz is %f.", (yFract * hwSampleRate * .5), (interpVal * 120));
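For instance, the yValueRepresentskHz check imagined in the question could be sketched like this (maxY and hwSampleRate as used in the drawing loop above; the half-bin tolerance is my assumption):

static BOOL yValueRepresentskHz(int y, int maxY, double hwSampleRate, double kHz) {
    double yFract = (double)y / (double)(maxY - 1);
    double binFreq = yFract * hwSampleRate * 0.5;        // this bin's frequency in Hz
    double binWidth = (hwSampleRate * 0.5) / (maxY - 1); // Hz between adjacent drawn bins
    return fabs(binFreq - kHz * 1000.0) < binWidth / 2;  // true only for the closest bin
}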
Hopefully that helps to point you in the right direction at least.