I'm trying to use the accelerometer to move a UIImageView. It works well, but my problem is that with my code
self.character.center = CGPointMake(160+acceleration.x*175, 230-acceleration.y*175);
my picture moves even on a stable surface because of the precision of the acceleration.x value. So I decided to use a workaround: multiply it by a value, cast it to an int, then divide it and cast it back to a float (i.e. I just remove some digits after the decimal point)
self.character.center = CGPointMake(160+(float)((int)((acceleration.x*100000))/100000)*175, 230-(float)((int)((acceleration.y*100000))/100000)*175);
But with this code, my little picture doesn't move anymore.
So my question is: do you know why it doesn't work anymore?
Is there a proper way to remove digits after the decimal point in a float?
Thanks a lot
Fred.
Instead of trying to remove decimals after the comma, you would be better off using a low-pass filter. A low-pass filter only passes changes in your acceleration that happen below a certain cutoff frequency, so it keeps steady changes to the acceleration but removes high-frequency fluctuations and jitter.
Wikipedia has a good explanation of how a simple RC low-pass filter works and shows a possible implementation. Apple shows a similar implementation in the AccelerometerGraph sample code.
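As a rough illustration, here is a minimal sketch of such a filter applied in the accelerometer callback (the smoothing constant kFilteringFactor and the filteredX/filteredY instance variables are assumptions you would tune for your app):

// Simple exponential (RC-style) low-pass filter for the accelerometer input.
// kFilteringFactor is an assumed smoothing constant in (0, 1); smaller = smoother.
static const double kFilteringFactor = 0.1;

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // filteredX / filteredY are instance variables holding the filtered state.
    filteredX = acceleration.x * kFilteringFactor + filteredX * (1.0 - kFilteringFactor);
    filteredY = acceleration.y * kFilteringFactor + filteredY * (1.0 - kFilteringFactor);

    // Drive the view from the smoothed values instead of the raw samples.
    self.character.center = CGPointMake(160 + filteredX * 175, 230 - filteredY * 175);
}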
Please suggest a starting point for finding the distance displaced by an iPhone. The accuracy required in the current system is in centimeters, and the displacement can be in 3D.
What I have already done:
1. Tried using sound to calculate the distance between 2 iPhones, but I need the distance calculation with one iPhone only, i.e. I need displacement.
2. Tried CMMotionManager and its accelerometer data, but the values received are not usable.
I think I need a good filter to get useful data out of that junk. I have already used a Kalman filter and gone through these links:
iphone accelerometer speed and distance,
How to calculate distance using accelerometer using iphone sdk?,
How do I measure the distance traveled by an iPhone using the accelerometer?,
The basic calculus behind this problem is double integration of the acceleration: v(t) = ∫ a(t) dt, then s(t) = ∫ v(t) dt.
I have also tried the DCT-II algorithm and multidimensional DCTs to filter the data.
I don't know what I missed or where I should go from here; it is hard to believe that no one has used the accelerometer for such accuracy, because there are so many practical examples of it being used with greater accuracy.
Please provide some pointers that suggest a way out of the current situation.
You can't achieve cm accuracy. The reason is, surprisingly, the orientation error.
The above link contains some tips on what you can do if you need displacement.
An even better alternative is to use orientation in your application, if you can.
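For reference, here is a minimal sketch of the naive dead reckoning the question describes (the Vec3 type, the integrateSample() helper, and the fixed sampling interval dt are assumptions); this is exactly the computation whose error grows without bound once any bias or orientation error creeps in:

// Naive dead reckoning: integrate acceleration twice to get displacement.
// Any constant bias b in the acceleration turns into a position error of
// roughly 0.5 * b * t^2, which is why centimeter accuracy is out of reach.
typedef struct { double x, y, z; } Vec3;

static Vec3 velocity = {0, 0, 0};
static Vec3 position = {0, 0, 0};

// 'accel' is a user-acceleration sample in m/s^2 (Core Motion reports G's,
// so multiply by 9.81 first); 'dt' is the sampling interval in seconds.
static void integrateSample(Vec3 accel, double dt)
{
    velocity.x += accel.x * dt;    // v = v + a * dt
    velocity.y += accel.y * dt;
    velocity.z += accel.z * dt;

    position.x += velocity.x * dt; // s = s + v * dt
    position.y += velocity.y * dt;
    position.z += velocity.z * dt;
}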
I'm trying to write some code that helps me in my biology work.
The concept is to analyze a video file of contracting cells in a tissue
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per min.
Strength of beat
Regularity of beating
So I wrote MATLAB code that loops through a video, compares each frame against the one that follows it, checks whether there were any changes between frames, and plots those changes as a curve.
Example of my code's results
Core of the current code I wrote:
% (vidObj, framerate, totalframes, the binary reference frame 'vid',
%  and the arrays 'amp' and 'time' are initialized before this loop.)
for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);                     % convert to gray
    level = graythresh(ref);                      % calculate threshold
    compared = im2bw(compared, level);            % convert to binary
    differ = sum(sum(imabsdiff(vid, compared)));  % sum of the differences between the 2 frames
    if (differ ~= 0) && (any(amp == differ) == 0) % 0 = no change happened, so don't record that
        amp(end+1) = differ;       % save the difference to the amp array
        time(end+1) = i/framerate; % save the time in seconds to a separate array so both can be filtered later
        vid = compared;            % save the current frame as the reference to compare the next frame against
    end
end
figure, plot(time, amp);  % time on the x-axis, frame difference on the y-axis
=====================
So that's my code, but is there a way I can improve it so I can get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise that affects my results a lot, and I think all my amp data is actually bogus!
Also, I can really only extract the beating rate out of this by counting peaks, but how can I improve my code to be able to get all the required data out of it?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info, please let me know.
You say you are trying to write a "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt is saying, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want to do. I don't know what graythresh does exactly, but it's possible it's lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences in noise would be evened out but differences in large features, caused by motion, would still be there.
You could try edge-detecting your images before taking the difference.
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each image, rather than just telling you whether the images are different or not. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need to do a detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter, to define that rigorously you probably would need to know the actual motion going on.)
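For the smoothing idea, a minimal MATLAB sketch that could drop into the existing loop (the kernel size, sigma, and the refFrame/curFrame variable names are assumptions):

% Smooth both frames before differencing so pixel noise largely cancels out
% while large, motion-related changes survive.
h = fspecial('gaussian', [5 5], 1.0);                 % small Gaussian kernel (assumed size/sigma)
refSmooth = imfilter(rgb2gray(refFrame), h, 'replicate');
curSmooth = imfilter(rgb2gray(curFrame), h, 'replicate');
differ = sum(sum(imabsdiff(refSmooth, curSmooth)));   % difference of the smoothed grayscale frames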
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant for cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation of this is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding, and edge detection in one function call: the Canny edge detector. Then you can dilate the edges to get greater overlap between frames. Little overlap probably means a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add together all the dilated edge images belonging to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure of contraction for a large cluster of cells.
I don't have access to MATLAB and the Image Processing Toolbox right now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
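Equally untested, but roughly what those hints point at (the structuring-element radius and the refEdges/beatArea accumulator variables are assumptions and would need to be initialized before the loop):

% Edge-detect each frame (Canny combines smoothing, gradient thresholding and
% edge tracking), dilate the edges, and compare frames by how much they overlap.
curGray  = rgb2gray(read(vidObj, i));
curEdges = imdilate(edge(curGray, 'canny'), strel('disk', 3));

overlap  = sum(sum(curEdges & refEdges));   % little overlap = big movement between frames
refEdges = curEdges;                        % the current frame becomes the new reference

% Second pass: add together the dilated edge images belonging to one beat to
% estimate the area traced out by the cells during that contraction.
beatArea = imadd(beatArea, uint16(curEdges));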
Similar to this question:
CMDeviceMotion userAcceleration drift
I'm using CMDeviceMotion.userAcceleration in the iOS 5 SDK to plot its x, y, and z components over time. Like the above post, I see that the z acceleration component always shows small positive values (0.005 - 0.015) while the x and y components are centered around zero (-0.005 - 0.005) when my iPhone 4S is sitting on a flat surface.
This small bias keeps adding up in the estimated velocity (which I compute by integrating the acceleration data) even when my phone is not moving at all. Is there any known way to remove this bias from the accelerometer data? I cannot simply subtract the bias from the z component, because the bias appears to spread over x, y, and z along the gravity axis when the device is in some arbitrary orientation.
I know that the data in CMDeviceMotion.userAcceleration has already factored out gravity using the gyro data, but I wonder: is there any effective way to remove this residual bias?
First, you need some external reference that does not drift, such as GPS. Then you have to perform sensor fusion (a Kalman filter comes to mind). Otherwise you cannot remove the bias, and the integration error will grow indefinitely.
UPDATE: You cannot get relative displacement just by integrating the acceleration; see my answer to Android accelerometer accuracy (Inertial navigation). However, I give some examples there of what you actually can do.
If you check my answer, you will see that it is the gyro white noise that makes the integration hopeless.
Old question, but I wanted to share some insight. Part of the bias in the accelerometers actually does not come from any inaccuracy in the sensors, but from an oversight in the calculations that Apple does. The calculations assume that gravity is always 1 g (which is by definition 9.80665 m/s²). Any leftover must then be user acceleration.
However, gravity varies slightly all over the world. If the gravity in your area is not exactly 9.80665 m/s², then there will be a small bias in the user acceleration, which is detectable with a low-pass filter. Such a bias can be removed with the following calculation:
- (void)handleDeviceMotion:(CMDeviceMotion *)m atTime:(NSDate *)time
{
    // calculate user acceleration in the direction of gravity
    double verticalAcceleration = m.gravity.x * m.userAcceleration.x +
                                  m.gravity.y * m.userAcceleration.y +
                                  m.gravity.z * m.userAcceleration.z;

    // update the bias in the low-pass filter (bias is an object variable)
    double delta = verticalAcceleration - bias;
    if (ABS(delta) < 0.1) bias += 0.01 * delta;

    // remove the bias from the user acceleration
    CMAcceleration acceleration;
    acceleration.x = m.userAcceleration.x - bias * m.gravity.x;
    acceleration.y = m.userAcceleration.y - bias * m.gravity.y;
    acceleration.z = m.userAcceleration.z - bias * m.gravity.z;

    // do something with acceleration
}
Mind you, even with that bias removed, there is still a lot of noise, and there could also be a manufacturing bias different for each accelerometer chip. Therefore, you will still have a hard time deriving velocity and certainly position from this.
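For completeness, a hedged sketch of how a handler like that might be driven from CMMotionManager (the update rate is an assumption, and since the handler above takes an explicit NSDate, one is passed in; keep a strong reference to the manager, e.g. in an ivar, in real code):

CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;   // 60 Hz, an assumed rate

[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                   withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (motion != nil) {
        [self handleDeviceMotion:motion atTime:[NSDate date]];
    }
}];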
Thanks, Ali, for updating your answer and the other references. They certainly helped my understanding of this issue (and I was surprised to see how many people are interested in it). I may sound a bit stubborn, but I still think I haven't found the answer to my original question anywhere. Let's forget about integration for now.

With more experiments I see small constant biases on the x and y axes as well when I average the user acceleration data over time. I was just wondering if there's any way to remove these biases from the "user" acceleration data that I get from iOS 5's CMDeviceMotion. If they were caused by the white noise of the gyroscope in the process of filtering out gravity, I would expect to see random noise in the user acceleration data, not those biases. My impression so far is that the biases are caused by the limited accuracy of both the accelerometer and the gyroscope, and that there's nothing we can do about that, although I'm not 100% sure.

I was trying to put this impression in a comment (not in the answer section), but SO didn't allow it because it was too long. I was also wondering how many people would back up my impression by voting, so I decided to put it in the answer section. Sorry if I was rambling a bit.
I'm trying to convert an image to a sound such that you can see the image if you view the spectrogram of that sound, kind of like what Aphex Twin did in Windowlicker.
So far I have written an iPhone app that takes a photograph and converts it to grayscale. I then use this grayscale value as a magnitude, which I'd like to plug back through an inverse FFT.
The problem I have, though, is how to go from the magnitude to the real and imaginary parts.
mag = sqrtf( (imag * imag) + (real * real));
Obviously I can't solve for two unknowns. Furthermore, I can't tell whether those real and imaginary parts are negative or not.
So I'm at a bit of a loss. It must be possible. Can anyone point me in the direction of some useful information?
A spectrogram contains no phase information, so you can just set the imaginary parts to 0 and set the real parts equal to the magnitude. Remember that you need to maintain complex conjugate symmetry if you want to end up with a purely real time domain signal after you have applied the inverse FFT.
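A minimal sketch of that idea in C, using a deliberately naive inverse DFT just to make the conjugate-symmetric layout explicit (in the app you would feed the same real/imaginary arrays to a proper inverse FFT routine instead; the function and parameter names are assumptions):

#include <math.h>
#include <stdlib.h>

// Build an n-point spectrum from n/2+1 magnitudes (one image column, say)
// with zero phase and complex conjugate symmetry, then inverse-transform it.
// The result is purely real because X[n-k] = conj(X[k]).
void column_to_samples(const float *mag, float *samples, int n)
{
    float *re = calloc(n, sizeof(float));
    float *im = calloc(n, sizeof(float));

    for (int k = 0; k <= n / 2; k++) {
        re[k] = mag[k];            // real part = magnitude, imaginary part = 0
        if (k > 0 && k < n / 2) {
            re[n - k] = mag[k];    // conjugate symmetry: X[n-k] = conj(X[k])
            im[n - k] = -im[k];    // zero here, written out for clarity
        }
    }

    // Naive inverse DFT, keeping only the real part of the result.
    for (int t = 0; t < n; t++) {
        double sum = 0.0;
        for (int k = 0; k < n; k++) {
            double phase = 2.0 * M_PI * k * t / n;
            sum += re[k] * cos(phase) - im[k] * sin(phase);
        }
        samples[t] = (float)(sum / n);
    }

    free(re);
    free(im);
}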
The math wonks are right about regenerating from greyscale, but why limit yourself thus? Have you considered keeping a portion of the phase information in the color channels?
Specifically, why not process the LEFT channel into BLUE, the RIGHT channel into RED, and for the GREEN color element, run the transform again on (LEFT-RIGHT), so that you have three spectra.
In one version of "Surround Sound", L-R encodes the rear channel - there is good stuff there.
When regenerating your sound, assign the "real" values to the corresponding channels.
Try the following (formulas - but this editor insists on calling them code..)
LEFT.real=+BLUE
RIGHT.real=+RED
LEFT.imag=+GREEN
RIGHT.imag=-GREEN
Experiment with variations on this while listening through some sort of surround-sound setup to see which provides the most pleasing results. Make sure not to drive the thing into clipping: since phase changes occur, regenerating a complex saturated signal is likely to create clipping.
I want to create an application that can detect the number of spins when the user rotates an iPhone. Currently, I am using the compass API to get the angle and have tried many ways to detect a spin. Below is the list of solutions I've tried:
1/ Create two angle traps (segments of the full circle) and detect whether the angle we get from the compass has passed them or not.
2/ Sum all the angular distances between compass updates (in the heading-update callback), then divide the accumulated angle by 360 to get the number of spins (see the sketch after this list).
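As a rough sketch of approach 2, assuming the heading arrives via CLLocationManager's didUpdateHeading: callback (lastHeading and totalDegrees are assumed instance variables initialized elsewhere):

// Accumulate the signed angular distance between successive compass readings
// and derive the spin count from the running total.
- (void)locationManager:(CLLocationManager *)manager didUpdateHeading:(CLHeading *)newHeading
{
    double heading = newHeading.magneticHeading;        // 0..360 degrees
    double delta = heading - lastHeading;

    // Wrap the delta into (-180, 180] so crossing north doesn't count as ~360 degrees.
    if (delta > 180.0)  delta -= 360.0;
    if (delta < -180.0) delta += 360.0;

    totalDegrees += delta;
    lastHeading = heading;

    NSInteger spins = (NSInteger)(fabs(totalDegrees) / 360.0);
    NSLog(@"spins so far: %ld", (long)spins);
}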
The problem is: when the phone is rotated too fast, the compass cannot keep up with the speed of the phone, and it only returns the angle at the latest update time (not continuously, as in the real rotation).
We also tried to use the accelerometer to detect spin. However, this approach cannot work when you rotate the phone on a flat plane.
If you have any solution or experience on this issue, please help me.
Thanks so much.
The iPhone 4 contains a MEMS gyroscope, so that's the most direct route.
As you've noticed, the magnetometer has a sluggish response. This can be mitigated by using an anticipatory algorithm that uses the known sluggishness to make an educated guess about what the current direction really is.
First, you need to determine the actual performance of the sensor. To do this, you need to rotate it at a precise rate at each of several rotational speeds, and record the compass behavior. The rotational platform should have a way to read the instantaneous position.
At slower speeds, you will see a varying degree of fixed lag. As the speed increases, the lag will grow until it approaches 180 degrees, at which point the compass will suddenly flip. At higher speeds, all you will see is flipping, though it may appear to not flip when the flips repeat at the same value. At some of these higher speeds, the compass may appear to rotate backwards, opposite to the direction of rotation.
Getting a rotational table can be a hassle, and ensuring it doesn't affect the local magnetic field (making the compass useless) is a challenge. The ideal table will be made of aluminum, and if you need to use a steel table (most common), you will need to mount the phone on a non-magnetic platform to get it as far away from the steel as possible.
A local machine shop will be a good place to start: CNC machines are easily capable of doing what is needed.
Once you get the compass performance data, you will need to build a model of the observed readings vs. the actual orientation and rotational rate. Invert the model and apply it to the readings to obtain a guess of the actual readings.
A simple implementation would be to keep a history of the readings and a list of the differences between sequential readings. Since we know there is compass lag, whenever a difference value is non-zero we know the current value has some degree of inaccuracy due to lag.
The next step is to create a list of 'corrected' readings, where the known lag of the prior actual values is used to generate an updated value, which is added to the last value in the 'corrected' list and stored as the newest value.
When the cumulative correction (the difference between the latest values in the actual and corrected lists) exceeds 360 degrees, we basically don't know where the compass is pointing anymore. Hopefully that point won't be reached, since most rotational motion is generally of fairly short duration.
However, since your goal is only to count rotations, you will be off by less than a full rotation until the accumulated error reaches a substantially higher value. I'm not sure what this value will be, since it depends on both the actual compass lag and the actual rate of rotation. But if you care only about a small number of rotations (5 or so), you should be able to obtain usable results.
You could use the velocity of the acceleration to determine how fast the phone is spinning and use that to fill in the blanks until the phone has stopped, at which point you could query the compass again.
If you're using an iPhone 4, the problem has been solved and you can use Core Motion to get rotational data.
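A hedged sketch of what that could look like, integrating Core Motion's rotation rate around the z axis (the update interval, axis choice, and accumulation variable are assumptions; it works when the phone spins flat on a table, and real code should compute dt from motion.timestamp instead of assuming a fixed interval):

// Count full rotations by integrating the gyro's rotation rate (rad/s) around z.
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 0.01;   // 100 Hz, an assumed rate

__block double accumulatedAngle = 0.0;             // radians turned so far

[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                   withHandler:^(CMDeviceMotion *motion, NSError *error) {
    if (motion == nil) return;
    // deviceMotion's rotationRate already has the gyro bias largely removed.
    accumulatedAngle += motion.rotationRate.z * motionManager.deviceMotionUpdateInterval;
    NSInteger spins = (NSInteger)(fabs(accumulatedAngle) / (2.0 * M_PI));
    NSLog(@"full spins: %ld", (long)spins);
}];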
For earlier devices, I think an interesting approach would be to try to detect wobbling as the device rotates, using UIAccelerometer on a very fine reporting interval. You might be able to get some reasonable patterns detected from the motion at right angles to the plane of rotation.