Is my understanding of the functions of compass & GPS in AR apps correct? (iPhone)

In an AR app where you annotate objects or buildings in a camera view, I want to understand the role that the different hardware components on the phone (iPhone/Android) play in achieving the AR effect. Please elaborate on the following:
Camera: provides the 2D view of reality.
GPS: provides the longitude and latitude of the device.
Compass: direction with respect to magnetic north.
Accelerometer: (does it have a role?)
Altimeter: (does it have a role?)
An example: if the camera view is showing the New York skyline, how does the information from the hardware listed above help me annotate the view? Assuming I have the longitude & latitude of the Chrysler Building and it is visible in my camera view, how does one calculate with accuracy where to annotate the name on the 2D picture? I know that given two (longitude, latitude) pairs, you can calculate the distance between the points.
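For reference, the distance part is essentially a one-liner with Core Location. A minimal sketch in Swift, with approximate (not surveyed) coordinates:

    import CoreLocation

    // Great-circle distance between two (latitude, longitude) pairs.
    // Both coordinates below are illustrative approximations.
    let device = CLLocation(latitude: 40.7480, longitude: -73.9862)    // the device's own fix (illustrative)
    let chrysler = CLLocation(latitude: 40.7516, longitude: -73.9755)  // Chrysler Building (approx.)

    let meters = device.distance(from: chrysler)  // distance in meters
    print("Distance: \(Int(meters)) m")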

use the camera to get the field of view.
use the compass to determine the direction the device is pointed in. The direction determines the set of objects that fall into the field of view and need to be reflected with AR adorners.
use the GPS to determine the distance between your location and each object. The distance is usually reflected in the size of the AR adorner you show for that object or in the amount of detail you show.
use the accelerometer to determine the horizon of the view (a 3-axis accelerometer sensitive enough to measure the force of gravity). The horizon can be combined with the object's altitude to position the AR adorners properly vertically.
use the altimeter for added precision of vertical positioning.
if you have detailed terrain/building information, you can also use the altimeter to determine the line of visibility to the various objects and clip out (fully or partially) the AR adorners of obscured or invisible objects.
if the AR device is moving, use the accelerometers to estimate the speed and either throttle the number of objects downloaded per view or do smart pre-fetching of the objects about to come into view, to optimize for the speed of view changes.
I will leave most of the details of calculating all this data from the devices as an exercise to you. :-) (One step is sketched below.)
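Here is a rough Swift sketch of the horizontal-placement step: compute the bearing from the device to the object, subtract the compass heading, and map the remainder onto the camera's horizontal field of view. The 60° field of view and 320-point screen width are assumptions rather than values from any particular device, and the linear angle-to-pixel mapping is only an approximation of a real camera projection (reasonable near the center of a narrow field of view).

    import CoreLocation
    import Foundation

    // Initial bearing from `a` to `b`, in degrees clockwise from north.
    func bearing(from a: CLLocationCoordinate2D, to b: CLLocationCoordinate2D) -> Double {
        let lat1 = a.latitude * .pi / 180
        let lat2 = b.latitude * .pi / 180
        let dLon = (b.longitude - a.longitude) * .pi / 180
        let y = sin(dLon) * cos(lat2)
        let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
        return (atan2(y, x) * 180 / .pi + 360).truncatingRemainder(dividingBy: 360)
    }

    // Horizontal screen position for an object, or nil if it lies outside the view.
    // `heading` is the compass reading; `fov` and `screenWidth` are assumed values.
    func screenX(objectBearing: Double, heading: Double,
                 fov: Double = 60, screenWidth: Double = 320) -> Double? {
        var delta = objectBearing - heading              // signed camera-to-object angle
        if delta > 180 { delta -= 360 }
        if delta <= -180 { delta += 360 }
        guard abs(delta) <= fov / 2 else { return nil }  // not in the field of view
        return (delta / fov + 0.5) * screenWidth         // -fov/2 -> left edge, +fov/2 -> right
    }

Vertical placement works the same way, with the pitch derived from the gravity vector taking the place of the compass heading.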

Related

Measuring object width

I'd like to develop an iPhone app that does the following:
1. Starts the device camera.
2. Places a layer on the screen containing a stretchable frame for the user to fit to a desired object.
3. Measures the object's width & height.
You may look at this app which does practically what I need and more:
http://itunes.apple.com/us/app/easymeasure-measure-your-camera!/id349530105?mt=8
Notice that it doesn't need to be super accurate and can definitely tolerate some error.
Any clue how to do it?
Thanks!
The clue: geometry and trigonometry.
Knowing the camera's field-of-view angles, entering the height of the camera above the ground, and assuming a planar (i.e. flat) ground, you can use basic geometry and trigonometry to work everything out.
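A back-of-the-envelope sketch of that trigonometry, in Swift. The camera height, tilt angle, and 60° horizontal field of view are assumptions you would measure or ask the user for; real lenses and a non-flat floor will add error, which the question says is acceptable.

    import Foundation

    // Flat-ground distance to the point the camera is aimed at, given the camera
    // height above the ground (meters) and its tilt below the horizontal (degrees).
    func groundDistance(cameraHeight h: Double, tiltDown degrees: Double) -> Double {
        return h / tan(degrees * .pi / 180)
    }

    // Real-world width of an object spanning `fractionOfScreen` of the frame width
    // at distance `d`, for a camera with horizontal field of view `hFov` (degrees).
    func objectWidth(distance d: Double, fractionOfScreen: Double, hFov: Double) -> Double {
        let viewWidthAtDistance = 2 * d * tan(hFov / 2 * .pi / 180)
        return viewWidthAtDistance * fractionOfScreen
    }

    // Example: camera held 1.5 m up, tilted 30° down, object filling 40% of a 60° view.
    let d = groundDistance(cameraHeight: 1.5, tiltDown: 30)            // ≈ 2.6 m
    let w = objectWidth(distance: d, fractionOfScreen: 0.4, hFov: 60)  // ≈ 1.2 m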

Horizontal and vertical shake count using the accelerometer on iPhone/iPad

I want to count the number of shakes, horizontally and vertically. I have referred to UIAcceleration
I have also referred to Motion Events
but couldn't come up with a good approach.
Any kind of help is highly appreciated: code, a reference, or anything else.
I just want to count the number of shakes the user makes by shaking the iPhone, where a shake can be vertical or horizontal while holding the iPhone in the normal way (home button at the bottom).
Try DiceShaker. You'll need the code for "Isolating Instantaneous Motion from Acceleration Data" given in Listing 4-6 of the Motion Events documentation (a so-called high-pass filter computation) to detect the acceleration provided by the user.
EDIT: The accelerometer constantly reports the gravity component, because it works with a bunch of springs that measure the force component (along each spring's length) by the increase or decrease in the spring's length. So just remove the constant gravity component (the force that's ALWAYS acting) to detect the changes contributed by the user (hence the name high-pass). Luckily, we don't need to figure out how, because Apple has done the hard work and given the equations in their documentation!
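In that spirit, here is a minimal Swift sketch using CMMotionManager: a low-pass filter estimates the gravity component, subtracting it leaves the user-generated (high-pass) component, and threshold crossings are counted as shakes. The filter factor and threshold are guesses you would tune by experiment, and the class name is mine:

    import CoreMotion

    // Counts horizontal and vertical shakes by high-pass filtering raw
    // accelerometer data, i.e. removing the slowly changing gravity component.
    final class ShakeCounter {
        private let motion = CMMotionManager()
        private var gravity = (x: 0.0, y: 0.0)   // low-pass estimate of gravity
        private var inShake = false
        private(set) var horizontalShakes = 0
        private(set) var verticalShakes = 0

        private let alpha = 0.1        // low-pass filter factor (tune by experiment)
        private let threshold = 0.7    // g-force that counts as a shake (tune too)

        func start() {
            motion.accelerometerUpdateInterval = 1.0 / 60.0
            motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
                guard let self = self, let a = data?.acceleration else { return }
                // Low-pass to isolate gravity, then subtract it (the high-pass step).
                self.gravity.x = a.x * self.alpha + self.gravity.x * (1 - self.alpha)
                self.gravity.y = a.y * self.alpha + self.gravity.y * (1 - self.alpha)
                let userX = a.x - self.gravity.x   // left-right, portrait, home button down
                let userY = a.y - self.gravity.y   // up-down
                let magnitude = max(abs(userX), abs(userY))
                if magnitude > self.threshold, !self.inShake {
                    self.inShake = true            // count each burst only once
                    if abs(userX) > abs(userY) { self.horizontalShakes += 1 }
                    else { self.verticalShakes += 1 }
                } else if magnitude < self.threshold / 2 {
                    self.inShake = false           // re-arm once motion settles
                }
            }
        }

        func stop() { motion.stopAccelerometerUpdates() }
    }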

How to calculate the diameter of the touch point on an iPhone, iPad, and Android device?

Now I understand we can have 5 touch points by default on the iPhone, with varying numbers of touch points enabled on the different SDKs. I have already accomplished registering the touch points, getting the distances, and getting the actual number of touch points. I would like to know if there's a way to get the diameter of a particular touch point, e.g. to compare a thumb touch with an index-finger touch. Any ideas?
I think Apple makes it fairly clear that they don't intend to give third-party developers access to low-level multi-touch information. From Apple's documentation on Event Handling in iOS:
A finger on the screen affords a much different level of precision than a mouse pointer. When a user touches the screen, the area of contact is actually elliptical and tends to be offset below the point where the user thinks he or she touched. This “contact patch” also varies in size and shape based on which finger is touching the screen, the size of the finger, the pressure of the finger on the screen, the orientation of the finger, and other factors. The underlying Multi-Touch system analyzes all of this information for you and computes a single touch point.
I can’t speak to Android, but the public APIs in the iOS SDK don’t give you any information about a touch other than its position. These guys found a private API (i.e. one that’ll get you rejected from the App Store if you use it) for getting the diameter of a touch on the screen, but they haven’t provided any further information or released the library.
It's likely possible on Android (4.3 is what I have, but most likely on others too): you can activate an overlay display of touch properties in the Developer options, called Pointer location (in the Input category), which shows you the coordinates, their delta (= difference), and two properties you might be interested in: Prs, that is, pressure, and Size. It seems you could find thresholds that would distinguish a thumb from an index finger in most cases. That's just what I would infer from the values I have seen in that display.
Proof that this is possible within the allowed API is the Yet Another MultiTouch Test Android app listed on Google Play, which for me shows the pressure of each touch in its own interface (so it must be available in the standard API).

Draw route on static image map (iPhone)

I'm developing a campus navigation app.
I have an image which displays the buildings on the campus.
I want to draw a route from the user's location to the destination building the user wants to go to.
I'm wondering how to draw a route on a static custom image.
I've been searching the internet but cannot find any clue on how to develop this.
All the documentation on the internet is about drawing routes on a Google map.
Any hint will be much, much appreciated.
You'll have to manually collect latitude and longitude information for each of your map image's four corners. You'll also have to manually specify, in terms of coordinates on the image, the position of every possible turning point in the building's corridors, stairs, etc. Then you can get the device's current latitude and longitude (see the Location Awareness Programming Guide), translate it into a position on your image as sketched below, and overlay a transparent view with a red line on it stopping at each of your manually collected waypoints. That leaves the graph-theory problem of finding the shortest route through the network of waypoints; I suggest the A* algorithm.
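The translation step can be as simple as linear interpolation, provided the image is oriented north-up and the campus is small enough that the earth's curvature doesn't matter. A sketch in Swift with made-up corner coordinates (substitute the four values you collected):

    import CoreGraphics

    // Maps (latitude, longitude) to a pixel position on a north-up static map image,
    // treating the mapping as linear over a small area. Corner values are placeholders.
    struct ImageMap {
        let topLeftLat = 40.4445, topLeftLon = -79.9530
        let bottomRightLat = 40.4400, bottomRightLon = -79.9470
        let imageSize = CGSize(width: 1024, height: 768)

        func point(latitude: Double, longitude: Double) -> CGPoint {
            let xFraction = (longitude - topLeftLon) / (bottomRightLon - topLeftLon)
            let yFraction = (latitude - topLeftLat) / (bottomRightLat - topLeftLat)
            return CGPoint(x: xFraction * imageSize.width,
                           y: yFraction * imageSize.height)
        }
    }

Consecutive waypoints mapped this way can then be joined with a stroked UIBezierPath on the transparent overlay view.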

How to get current location inside a building (iPhone)

How is it possible to obtain the current location inside a building? I want to develop an application similar to this http://matadornetwork.com/goods/point-inside-indoor-map-application-for-the-iphone-and-android/ (on a smaller/less complicated scale) and am wondering what approach I should take. Are there any tutorials/examples/articles that would point me in the right direction? Thanks ...
Indoor Navigation System for Handheld Devices shows a few possibilities for how indoor navigation could work.
Using kCLLocationAccuracyBestForNavigation or kCLLocationAccuracyBest, you can expect to get a relatively accurate location for your user (see Apple's Location Awareness Programming Guide). You need to consider that:
Indoor use of the GPS will make precise location difficult
iPod Touch and iPad users won't be able to get precise location
I doubt the altitude will be precise enough to distinguish between floors in high-rises
Also consider that you'll have to gather a ton of information on the buildings you want to map (not only draw their floorplans but also get their precise coordinates).
After you have all this information, presenting a glowing dot on an image shouldn't be difficult; it is just a matter of transforming the geo coordinates into something more manageable (at the scale of the CGRect of the different images representing the buildings), much like the static-map answer above. No need to use MapKit.
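For the location fix itself, a minimal Core Location sketch (the class name is mine, and the usage-description entry in Info.plist is assumed):

    import CoreLocation

    final class IndoorLocator: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
            manager.desiredAccuracy = kCLLocationAccuracyBestForNavigation
        }

        func start() {
            manager.requestWhenInUseAuthorization()  // needs a usage string in Info.plist
            manager.startUpdatingLocation()
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateLocations locations: [CLLocation]) {
            guard let loc = locations.last else { return }
            // horizontalAccuracy says how much to trust the fix;
            // indoors it can easily be tens of meters.
            let c = loc.coordinate
            print("lat \(c.latitude), lon \(c.longitude), ±\(loc.horizontalAccuracy) m")
        }
    }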
As for the software you're linking to, I have doubts that it can accurately deliver results in terms of the user's current location (while still being useful for getting phone numbers and compass information, for example). Still, this kind of technology is very promising.