Horizontal and vertical shake count using the accelerometer on iPhone/iPad

I want to count the number of shakes, horizontal and vertical. I have referred to UIAcceleration and also to the Motion Events documentation, but couldn't come up with a good approach.
Any kind of help is highly appreciated: code, references, or anything else. I just want to count the number of shakes the user makes by shaking the iPhone, whether vertically or horizontally, holding the device the normal way (home button at the bottom).

Try DiceShaker. You'll need the code from "Isolating Instantaneous Motion from Acceleration Data" (Listing 4-6 of the Motion Events documentation, also called the high-pass filter computation) to detect the acceleration the user provides.
EDIT: The accelerometer's readings always include the gravity component, because the accelerometer works with a bunch of springs that determine the force component (in the direction of each spring's length) from the increase/decrease in the spring's length. So you just remove the constant gravity component (the force that's ALWAYS present) to detect the change provided by the user (hence the name high-pass). Luckily, we don't need to derive this ourselves, because Apple has done the hard work and given the equations in their documentation!
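In case it helps, here's a minimal Swift sketch of that high-pass idea using Core Motion. The 1.5 g threshold, the 0.3 s debounce, the filter constant, and the axis mapping (x = horizontal, y = vertical, holding the phone in portrait with the home button at the bottom) are my own assumptions, not values from Apple's listing:

```swift
import Foundation
import CoreMotion

final class ShakeCounter {
    private let motion = CMMotionManager()
    private var gravity = (x: 0.0, y: 0.0)       // low-pass state (the Listing 4-6 idea)
    private var lastShake = Date.distantPast
    private(set) var horizontalShakes = 0
    private(set) var verticalShakes = 0

    func start() {
        let alpha = 0.1                           // filter constant (assumption)
        motion.accelerometerUpdateInterval = 1.0 / 60.0
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }

            // Low-pass isolates gravity; subtracting it high-passes the user's motion.
            self.gravity.x = alpha * a.x + (1 - alpha) * self.gravity.x
            self.gravity.y = alpha * a.y + (1 - alpha) * self.gravity.y
            let userX = a.x - self.gravity.x
            let userY = a.y - self.gravity.y

            // Count a shake when user acceleration exceeds ~1.5 g, debounced to 0.3 s.
            let threshold = 1.5
            guard Date().timeIntervalSince(self.lastShake) > 0.3 else { return }
            if abs(userX) > threshold {           // x axis: side to side in portrait
                self.horizontalShakes += 1
                self.lastShake = Date()
            } else if abs(userY) > threshold {    // y axis: up and down in portrait
                self.verticalShakes += 1
                self.lastShake = Date()
            }
        }
    }

    func stop() { motion.stopAccelerometerUpdates() }
}
```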

Related

How to calculate the diameter of a touch point on an iPhone, iPad, or Android device?

I understand we get 5 touch points by default on the iPhone, with different numbers of touch points enabled on different SDKs. I have accomplished registering the touch points and getting their distances and the actual number of touch points. I would like to know if there's a way to get the diameter of a particular touch point, for example to compare a thumb touch with an index-finger touch. Any ideas?
I think Apple makes it fairly clear that they don't intend to give third-party developers access to low-level multi-touch information. From Apple's documentation on Event Handling in iOS:
A finger on the screen affords a much different level of precision than a mouse pointer. When a user touches the screen, the area of contact is actually elliptical and tends to be offset below the point where the user thinks he or she touched. This “contact patch” also varies in size and shape based on which finger is touching the screen, the size of the finger, the pressure of the finger on the screen, the orientation of the finger, and other factors. The underlying Multi-Touch system analyzes all of this information for you and computes a single touch point.
I can’t speak to Android, but the public APIs in the iOS SDK don’t give you any information about a touch other than its position. These guys found a private API (i.e. one that’ll get you rejected from the App Store if you use it) for getting the diameter of a touch on the screen, but they haven’t provided any further information or released the library.
It's likely possible on Android (I have 4.3, but most likely on other versions too): in Developer options you can activate an overlay that displays touch properties, called Pointer location (in the Input category). It shows coordinates, their delta (difference), and two properties you might be interested in: Prs, that is pressure, and Size. It seems you could find thresholds that would distinguish a thumb from an index finger in most cases. That's just what I would infer from the values I have seen in that display.
Proof that this is possible within the allowed API is the Yet Another MultiTouch Test Android app listed on Google Play, which for me shows the pressure of each touch in its own interface (so it must be available in the standard API).

Detecting particular objects in an image, i.e. image segmentation, with OpenCV

I have to select any particular object visible in my image on the iPhone.
Basically, my project is to segment image objects based on my touch.
The method I am following is to first detect contours in the image and then select a particular contour based on the finger touch.
Is there any other method that would be more robust, given that I have to run it on video frames?
I am using OpenCV and the iPhone for the project.
Please help if there is any other idea that has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation, and scale.
Also check out FAST, which is a corner-detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.

Measuring distance with iPhone camera

How would one implement a way to measure distances in real time (with the video camera?) on the iPhone, like this app that uses a card of known size as a reference to estimate distance?
Are there any other ways to measure distances? Or how would one go about doing this using the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, I can't say it seems very user friendly.
So you either need a reference object of known size, or you need to deduce the size from the image. One idea I just had that might help is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some things).
Here's what I think.
When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images: one with the flash on, one with the flash off. Then you can analyze the lighting differences between the images and the flash reflection to determine the scale of the scene. This will only work for close, not-too-shiny objects, I guess.
But that's about the only other way I could think of to deduce scale from an image without any fixed objects.
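To make the idea slightly more concrete, here's a rough sketch of the inverse-square relationship I'd expect it to rest on. The calibration constant, the assumption of a diffuse (matte) surface, and the function itself are all mine, not a tested method:

```swift
import Foundation

// The extra brightness the flash adds to a diffuse surface falls off roughly as 1/d^2,
// so comparing flash-on and flash-off pixel intensities gives a relative distance.
// calibrationK must be measured once against a known distance.
func estimatedDistanceMeters(flashOnIntensity: Double,
                             flashOffIntensity: Double,
                             calibrationK: Double) -> Double? {
    let delta = flashOnIntensity - flashOffIntensity
    guard delta > 0 else { return nil }       // no measurable flash contribution
    return sqrt(calibrationK / delta)         // d ≈ sqrt(k / ΔI)
}
```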
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen, and the phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
Another solution may be to use two laser pointers.
Basically, you would shine two parallel laser pointers at, say, a wall. The further back you go, the closer together the dots will look in the video, even though they remain the same physical distance apart. From that you can easily derive a formula to measure the distance based on how far apart the dots are in the photo.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
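For what it's worth, the formula that falls out of the pinhole camera model is simple. This sketch assumes parallel beams and a focal length expressed in pixels, which you'd have to calibrate for the specific camera:

```swift
// Pinhole model: dotSeparationPixels ≈ focalLengthPixels * realSeparation / distance.
// Solving for distance, with parallel beams whose real separation never changes:
func distanceToWallMeters(realSeparationMeters: Double,
                          dotSeparationPixels: Double,
                          focalLengthPixels: Double) -> Double {
    return focalLengthPixels * realSeparationMeters / dotSeparationPixels
}

// e.g. pointers mounted 10 cm apart, dots 58 px apart, focal length ~2,800 px (calibrated):
// distanceToWallMeters(realSeparationMeters: 0.10, dotSeparationPixels: 58,
//                      focalLengthPixels: 2_800) ≈ 4.8 m
```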

Measuring a room with an iPhone

I have a need to measure a room (if possible) from within an iPhone application, and I'm looking for some ideas on how I can achieve this. Extreme accuracy is not important, but accuracy down to, say, 1 foot would be good. Some ideas I've had so far are:
Walk around the room and measure using GPS. Unlikely to be anywhere near accurate enough, particularly for iPod touch users
Emit sounds from the speaker and use the microphone to measure how long the echoes take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Anyone have any other ideas?
You could stand in one corner and throw the phone against the far corner. The phone could begin measurement at a certain point of acceleration and end measurement at deceleration
1) Set the iPhone down on the floor at one wall, with its base against the wall.
2) Mark a line on the floor where the top of the iPhone ends.
3) Pick the iPhone up and move its base to the line you just drew.
4) Repeat steps 1->3 until you reach the other wall.
5) Multiply the number of iPhone-lengths it took to reach the other wall by the length of the iPhone to get the final measurement.
=)
I remember seeing programs for realtors that involved holding a reference object up in a picture. The program would identify the reference object and other flat surfaces in the image and calculate dimensions from that. It was intended for measuring the exterior of houses. It could follow connected walls that it could assume were at right angles.
Instead of shipping with a reference object, as those programs did, you might be able to use a few common household objects like a piece of printer paper. Let the user pick from a list of common objects what flat item they are holding up to the wall.
Detecting the edges of walls, and of the reference object, is some tricky pattern recognition, followed by some tricky math to convert the found edges to planes. Still better than throwing your phone at the far wall, though.
Emit sounds from the speaker and use the microphone to measure how long the echoes take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Au contraire, mon frère.
This is the most user friendly, not to mention accurate, way of measuring the dimensions of a room.
PocketMeter measures the distance to one wall with an accuracy of half an inch.
If you use the same formulas to measure distance, but have the person stand near a corner of the room (so that the distances to the walls, floor, and ceiling are all different), you should be able to calculate all three measurements (length, width, and height) with one sonar pulse.
Edited, because of the comment, to add:
In an ideal world, you would get 6 pulses, one from each of the surfaces. However, we don't live in an ideal world. Here are some things you'll have to take into account:
The sound pulse causes the iPhone to vibrate. The iPhone microphone picks up this vibration.
The type of floor (carpet, wood, tile) will affect the time that the sound travels to the floor and back to the device.
The sound reflects off more than one surface (wall) and returns to the iPhone.
If I had to guess, because I've done something similar in the past, you're going to have to emit a multi-frequency tone, made up of a low frequency, a medium frequency, and a high frequency. You'll have to perform a fast Fourier Transform on the sound wave you receive to pick out the frequencies that you transmitted.
Now, I don't want to discourage you. The calculations can be done. However, it's going to take some work. After all PocketMeter has been at it for a while, and they only measure the distance to one wall.
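To make the time-of-flight part concrete, here's a sketch of the core calculation. The naive cross-correlation for finding the echo delay and the 343 m/s speed of sound (roughly right at 20 °C) are my additions, not PocketMeter's actual method; real code would use vDSP and the FFT described above:

```swift
import Foundation

// Find the delay (in samples) at which the recording best matches the emitted pulse,
// using a naive cross-correlation over the sample buffers.
func echoDelaySamples(pulse: [Double], recording: [Double]) -> Int {
    guard recording.count >= pulse.count else { return 0 }
    var bestLag = 0
    var bestScore = -Double.infinity
    for lag in 0...(recording.count - pulse.count) {
        var score = 0.0
        for i in 0..<pulse.count { score += pulse[i] * recording[lag + i] }
        if score > bestScore { bestScore = score; bestLag = lag }
    }
    return bestLag
}

// Round trip: the pulse travels to the wall and back, so halve the path length.
func distanceMeters(delaySamples: Int, sampleRate: Double,
                    speedOfSound: Double = 343.0) -> Double {
    let delaySeconds = Double(delaySamples) / sampleRate
    return speedOfSound * delaySeconds / 2.0
}

// e.g. a 1,024-sample delay at 44.1 kHz ≈ 23.2 ms → ≈ 3.98 m to the wall.
```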
I think an easier way to do this would be to use the Pythagorean theorem. Most rooms have 8- or 10-foot ceilings, and if the user can estimate the height accurately, you can use the camera to do some analysis and crunch the numbers. (You might need some clever way to detect the angle.)
How to do it
I expect 5 points off of your bottom line for this ;)
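A sketch of the Pythagorean step, assuming you already know the ceiling height (relative to where the phone is held) and have some way, such as the sonar ping above, to get the slant distance to the wall/ceiling junction; both inputs are assumptions on my part:

```swift
import Foundation

// Given the slant (straight-line) distance from the phone to the wall/ceiling
// junction and the ceiling height above the phone, Pythagoras gives the
// horizontal distance to the wall.
func horizontalDistanceMeters(slantMeters: Double,
                              ceilingHeightMeters: Double) -> Double? {
    let squared = slantMeters * slantMeters - ceilingHeightMeters * ceilingHeightMeters
    guard squared >= 0 else { return nil }    // slant must be at least the height
    return squared.squareRoot()
}

// e.g. an 8 ft (2.44 m) ceiling height and a 4.0 m slant distance:
// horizontalDistanceMeters(slantMeters: 4.0, ceilingHeightMeters: 2.44) ≈ 3.17 m
```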
Let me see if this helps. Take an object of known length, place it against the wall, and take a picture of the wall with the iPhone, with the object in frame. Now get the ratio of the wall width to the object width from the image. Since you know the width of the object, you can easily calculate the width of the wall. Repeat for each wall and you will have the room's measurements.
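As a sketch of that proportion, assuming the object and the wall lie in the same plane and the photo is taken roughly head-on (both my assumptions):

```swift
// Same-plane proportion: wallWidth / wallPixels == objectWidth / objectPixels.
func wallWidthMeters(objectWidthMeters: Double,
                     objectWidthPixels: Double,
                     wallWidthPixels: Double) -> Double {
    return objectWidthMeters * (wallWidthPixels / objectWidthPixels)
}

// e.g. a 1 m stick spanning 210 px against a wall spanning 1,260 px gives a 6 m wall.
```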
Your users could measure a known distance by pacing it off, and thereby calibrate the length of their pace. Then they could enter the distance of each wall in paces, and the phone would convert it to feet. This would be very convenient, and would probably be accurate to within 10%.
If they may need more accurate readings, then give them the option of entering in a measurement from a tape measure.
This answer is somewhat similar to Jitendra's answer, but the method he suggests will only work where you can fit the whole wall in a single shot.
Get an object of known size and photograph it held against one wall, with the iPhone held against the opposite wall (two people, or Blu-Tack, needed). Then you can calculate the distance between the walls by looking at the size of the object (in pixels) in the photo. You could make the known-size object a printable PDF and put a 2D barcode on it so the iPhone can pick it up automatically.

Is my understanding of the functions of the compass & GPS in AR apps correct?

In an AR app where you annotate objects or buildings in a camera view, I want to understand the role that the different hardware components on the phone (iPhone/Android) play in achieving the AR effect. Please elaborate on the following:
Camera: provides the 2D view of reality.
GPS: provides the longitude, latitude of the device.
Compass: direction with respect to magnetic north.
Accelerometer: (does it have a role?)
Altimeter: (does it have a role?)
An example: if the camera view is showing the New York skyline, how does the information from the hardware listed above help me annotate the view? Assuming I have the longitude & latitude of the Chrysler Building and it is visible in my camera view, how does one accurately calculate where to annotate the name on the 2D picture? I know that given two (longitude, latitude) pairs, you can calculate the distance between the points.
use the camera to get the field of view.
use the compass to determine the direction the device is oriented in. The direction determines the set of objects that fall into the field of view and need to be shown with AR adorners.
use the GPS to determine the distance between your location and each object. The distance is usually reflected in the size of the AR adorner you show for that object or in the number of details you show.
use the accelerometer to determine the horizon of the view (a 3-axis accelerometer sensitive enough to measure the force of gravity). The horizon can be combined with each object's altitude to position the AR adorners properly vertically.
use the altimeter for added precision of vertical positioning.
if you have detailed terrain/building information, you can also use the altimeter to determine the line of sight to the various objects and clip out (fully or partially) the AR adorners for obscured or invisible objects.
if the AR device is moving, use the accelerometer to estimate the speed, and either throttle the number of objects downloaded per view or smartly pre-fetch the objects that will come into view, to keep up with the speed of view changes.
I will leave the details of calculating all this data from the devices as an exercise to you. :-)
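In that spirit, here's a minimal Swift sketch of the central calculation: turn the two (longitude, latitude) pairs into a bearing, subtract the compass heading, and map the remainder onto the screen using the camera's field of view. The field-of-view and screen-width values are placeholders you'd read from the device:

```swift
import Foundation

// Initial bearing from point A to point B, in degrees clockwise from true north.
func bearingDegrees(fromLat: Double, fromLon: Double,
                    toLat: Double, toLon: Double) -> Double {
    let phi1 = fromLat * .pi / 180
    let phi2 = toLat * .pi / 180
    let dLambda = (toLon - fromLon) * .pi / 180
    let y = sin(dLambda) * cos(phi2)
    let x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLambda)
    let theta = atan2(y, x) * 180 / .pi
    return (theta + 360).truncatingRemainder(dividingBy: 360)
}

// Map a target bearing into a horizontal screen coordinate, given the device's
// compass heading and the camera's horizontal field of view. Returns nil when
// the object is outside the camera view.
func screenX(targetBearing: Double, deviceHeading: Double,
             fieldOfViewDegrees: Double, screenWidthPoints: Double) -> Double? {
    var offset = targetBearing - deviceHeading
    if offset > 180 { offset -= 360 }          // wrap into [-180, 180]
    if offset < -180 { offset += 360 }
    guard abs(offset) <= fieldOfViewDegrees / 2 else { return nil }
    return screenWidthPoints / 2 + (offset / fieldOfViewDegrees) * screenWidthPoints
}

// Usage sketch: feed in the device's GPS fix and the Chrysler Building's
// coordinates, then the compass heading, a field of view of ~60°, and the
// screen width in points; the result is where to draw the label horizontally.
```

The same offset-and-scale idea applies vertically, using the accelerometer-derived pitch instead of the compass heading.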