iPhone detecting touches on a map image

I have a static map image with a bunch of circles and squares on it that depict cities. I have loaded the image into an imageView that is sub-classed under a scrollView so that I can capture user touches and zoom/scroll across the map. My challenge is that I want to pop-up a label whenever a user touches one of these circles/squares for a city to tell them which city it is and possibly load a detail view for the city. I figured I could pre-load all the relative CGPoints for the cities based on the imageView map into a dictionary so I can reference them during a "touchesBegan" event, but I'm quickly getting in over my head and possibly going about this the wrong way.
So far everything is working and I can capture the CGPoint x and y coordinates of touches. The biggest issue I have is determining the proximity of the user touches to a discrete point I may have in the dictionary. In other words if the dictionary has "Boston = NSPoint: {235, 118};" how can I tell when a user is close to that point without making them repeat the touch until it is exact? Is there an easy way to determine if a user touch is "close" to a pre-existing point? Am I going about this the right way?
Any advice or slaps in the back of the head are welcome.
Thanks, Mike

You could use UIButtons to represent the cities. Then you'll get the standard touch, highlight, etc, behaviors with less effort. Adding the buttons as subviews on your map should cause them to scale and scroll along with the map.
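For illustration, a setup along those lines could look roughly like this, assuming the city points live in a dictionary called cityPoints (name -> NSValue-wrapped CGPoint in the image's coordinates) and that self.mapImageView and a cityTapped: action exist; all of those names are assumptions:

self.mapImageView.userInteractionEnabled = YES;    // UIImageView ignores touches by default
[cityPoints enumerateKeysAndObjectsUsingBlock:^(NSString *city, NSValue *value, BOOL *stop) {
    UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
    button.frame = CGRectMake(0, 0, 44, 44);       // a comfortable touch target
    button.center = [value CGPointValue];          // placed over the city marker
    [button addTarget:self action:@selector(cityTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    // keep a button -> city mapping (tag, parallel array, ...) so cityTapped: knows which city was hit
    [self.mapImageView addSubview:button];         // so it scrolls and zooms with the map
}];
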

If I understand correctly, you want to know whether the point the user tapped is "close" enough to a point that is marked as a city.
You would have to quantify "close", i.e. set a threshold distance: beyond it the tap is too far away, within it the tap counts as a hit.
Once you do that, calculate the Cartesian distance sqrt((x1-x2)^2 + (y1-y2)^2)
for each element (read: each x,y entry in your dictionary of cities) and store the results in another array. Then take the minimum of those results; its index identifies the city closest to the tap, provided that distance is less than the threshold.

You can either use an R-tree, or you can calculate the proximity of the touch to each visible point in the current view. To calculate the proximity you would normally use the Pythagorean theorem, but in this case you can skip the square root because you're only comparing relative sizes. You can also declare a distance cutoff if you like, say 50 pixels, which squared is 2500. So you'd put each result into an object containing the distance and a reference to the point, add those objects to an NSMutableArray (skipping any result over your cutoff), and select the minimum result.
So if you have a touched point pT, then for each point pN, you'd calculate:
d=(pT.x-pN.x)*(pT.x-pN.x) + (pT.y-pN.y)*(pT.y-pN.y); //d is the squared distance
The point pN with the minimum d is the point that was closest to pT. And like I said if you want only touches within 10 pixels to count, you can test that d <= 10*10;
The method of testing for touches within a 20x20 square area works too, except that if two points are within 20 pixels of each other, you then need to determine which of them is closest to the touch.
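As a rough sketch of that search (assuming, as in the question, a dictionary of city names mapped to NSValue-wrapped CGPoints):

- (NSString *)cityNearestToPoint:(CGPoint)pT inCities:(NSDictionary *)cityPoints {
    CGFloat cutoff = 10.0 * 10.0;             // only accept touches within 10 points
    __block CGFloat bestD = CGFLOAT_MAX;
    __block NSString *bestCity = nil;
    [cityPoints enumerateKeysAndObjectsUsingBlock:^(NSString *city, NSValue *value, BOOL *stop) {
        CGPoint pN = [value CGPointValue];
        CGFloat d = (pT.x - pN.x) * (pT.x - pN.x) + (pT.y - pN.y) * (pT.y - pN.y);
        if (d <= cutoff && d < bestD) {       // inside the cutoff and closer than anything so far
            bestD = d;
            bestCity = city;
        }
    }];
    return bestCity;                          // nil means nothing was close enough
}
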

Related

rotary dial - rotation limitation

I am making a vintage phone and have working starter code where the user moves a finger over the UIImageView numbers and it rotates the dial. It then moves back to its original position. See screenshot.
The three problems that I can't seem to figure out are:
How can I restrict the user to rotating only in the clockwise direction? Currently the user can move it in either direction (clockwise and counter-clockwise).
How can I detect which number the user selected, i.e. did the user touch 1 or 3 or 5? I need this so that I can stop the rotation when that number reaches the bar on the right.
In my current code, when I stop the rotation and let go of the circle, it moves back to its place counter-clockwise. This works well if I select 1, 2, 3, or 4, but for any number 5 and up the dial moves clockwise back to its original position. How can I force counter-clockwise motion on touchesEnded?
Let's assume that you're talking about a single-touch rotation gesture: dragging one finger around the dial.
Build a single-touch rotation gesture recognizer. After building the gesture recognizer correctly, you can just look at the rotation and see what to do with the rotary pad.
There are several things you'll consider when building a single-touch rotation gesture recognizer. If you look at UIRotationGestureRecognizer, it uses the line connecting two touches (one per finger) to derive the current angle, then compares that angle to the previous angle, derived from an earlier touch-change event, to get the delta.
Measuring the current angle
It takes two points to form a line and you need a line to know the angle. If you’re working with only one touch, you need an anchor point. There are many ways to send an anchor point to your gesture recognizer, and since you’re likely going to build a custom class, use delegation.
Accumulating rotation counts
If you simply note the angle and send off messages during touch changes, it’ll sometimes work. However, if you’d like to implement hysteresis (e.g. this rotary dial will only rotate once clockwise, then it tightens up), you’ll need to accumulate rotation counts for both clockwise and counter-clockwise directions.
Fortunately, you can assume that a) the touch events will not get dropped too often, and b) simply comparing the current angle against the past angle, seeing if they cross quadrant boundaries, will suffice.
For example:
If the touch moved from the top-left quadrant into the top-right quadrant, add one to the rotation count.
If the touch moved from the top-right quadrant into the top-left quadrant, subtract one from the rotation count.
(Yup, this actually works.)
Emitting the correct, accumulated rotation
If you want to emit rotation information exactly the way UIRotationGestureRecognizer does, there are a few things you'll be tracking.
Starting Angle: The angle between a connection from the anchor point to the starting touch, and a connection from the anchor point to a fixed reference point.
Current Angle: The angle between a connection from the anchor point to the current touch, and a connection from the anchor point to a fixed reference point.
Rotation Count: The number of clockwise revolutions derived from continuously comparing the current value of Current Angle against its last value (as talked about in the last section). If the touch is moving counter-clockwise, then this count will go into negative.
You'll provide Rotation Count * 2Pi + (Current Angle - Starting Angle) as the rotation.
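A minimal sketch of that bookkeeping could look like the following. This is a hypothetical class, not an existing API: the class name and the anchorPoint property are assumptions, and a plain property stands in for the delegation suggested above.

#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>

@interface OneTouchRotationGestureRecognizer : UIGestureRecognizer
@property (nonatomic) CGPoint anchorPoint;          // e.g. the dial's centre, in the attached view's coordinates
@property (nonatomic, readonly) CGFloat rotation;   // accumulated rotation in radians
@end

@implementation OneTouchRotationGestureRecognizer {
    CGFloat _startAngle, _currentAngle;
    NSInteger _rotationCount;
    NSInteger _lastQuadrant;
}

// Clockwise angle in [0, 2*Pi) of the anchor->p line, measured against a fixed
// reference (straight up). Putting the wrap at 12 o'clock makes it coincide with
// the quadrant-crossing count below, so the accumulated rotation stays continuous.
static CGFloat AngleFromAnchor(CGPoint anchor, CGPoint p) {
    CGFloat angle = atan2(p.x - anchor.x, -(p.y - anchor.y));
    return (angle < 0) ? angle + 2.0 * M_PI : angle;
}

// 0 = top-left, 1 = top-right, 2 = bottom-right, 3 = bottom-left (UIKit's y axis grows downward).
static NSInteger QuadrantFromAnchor(CGPoint anchor, CGPoint p) {
    BOOL right = (p.x >= anchor.x), top = (p.y < anchor.y);
    if (top) return right ? 1 : 0;
    return right ? 2 : 3;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    _startAngle = _currentAngle = AngleFromAnchor(self.anchorPoint, p);
    _rotationCount = 0;
    _lastQuadrant = QuadrantFromAnchor(self.anchorPoint, p);
    self.state = UIGestureRecognizerStateBegan;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    NSInteger quadrant = QuadrantFromAnchor(self.anchorPoint, p);
    if (_lastQuadrant == 0 && quadrant == 1) _rotationCount += 1;   // top-left -> top-right: one clockwise revolution
    if (_lastQuadrant == 1 && quadrant == 0) _rotationCount -= 1;   // top-right -> top-left: counter-clockwise
    _lastQuadrant = quadrant;
    _currentAngle = AngleFromAnchor(self.anchorPoint, p);
    self.state = UIGestureRecognizerStateChanged;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    self.state = UIGestureRecognizerStateEnded;
}

// Rotation Count * 2Pi + (Current Angle - Starting Angle), as described above.
- (CGFloat)rotation {
    return _rotationCount * 2.0 * M_PI + (_currentAngle - _startAngle);
}
@end
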
OK, I would take a different approach. First, you want to create a RotaryDial class to encapsulate all of the behavior. Then you can just plug it into any view as you see fit.
To keep things simple I would consider making each number button a movable UIImageView, call it RotaryDialDigit or something like that. You would instantiate and place ten of those.
The dial "frame" would just tag along for the ride as the user moves one of the RotaryDialDigit buttons. It's just an image (unless you want the user to be able to touch it and do something with it.
From there, knowing which button is being held down and limiting its rotation to a given direction as well as stopping at at the bar is fairly easy stuff.
By using a protocol you can then have the RotaryDial instance tell the container when a number has been dialed. To the container RotaryDial would feel like a keypad sending a message every time a button is pressed. You really don't want the container bothering with anything other than completed number selections.
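A possible shape for that protocol (the class and method names here are illustrative assumptions, not an existing API):

#import <UIKit/UIKit.h>

@class RotaryDial;

@protocol RotaryDialDelegate <NSObject>
// Sent once per completed digit selection, e.g. after the dial has returned home.
- (void)rotaryDial:(RotaryDial *)dial didDialDigit:(NSUInteger)digit;
@end

@interface RotaryDial : UIView
@property (nonatomic, weak) id<RotaryDialDelegate> delegate;
@end

The container just adopts RotaryDialDelegate and collects digits as they arrive, much like handling key presses from a keypad.
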
To detect which number is touched, when you create each number you should set the tag value of its UIView. Then when the user touches the number you can detect which UIView object it was by checking that tag value.
For the rotation problem, I'd suggest looking at how you are calculating the angle. At a guess I'd say for numbers greater than 4 (which you discern from the tag) you need to do something like subtract the angle you are currently calculating from 360 degrees (well 2Pi). (But I have a head cold right now so the actual math is escaping me :-) )
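A tiny illustration of the tag idea (digitImages and dialView are assumed to exist, and positioning is omitted):

for (NSInteger digit = 0; digit < 10; digit++) {
    UIImageView *digitView = [[UIImageView alloc] initWithImage:digitImages[digit]];
    digitView.tag = digit;                    // remember which number this view represents
    digitView.userInteractionEnabled = YES;   // UIImageView ignores touches by default
    // position digitView around the dial as appropriate for your artwork
    [dialView addSubview:digitView];
}

// Later, e.g. in touchesBegan:withEvent: of the containing view:
UITouch *touch = [touches anyObject];
NSInteger dialedNumber = touch.view.tag;      // which digit view the finger landed on
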
Without seeing your code, I assume the numbers are a static image and you are animating the finger holes as they rotate past each number. If so:
Detecting which number: define a CGRect around each button. When the user taps the screen, check which rectangle contains the tap location.
Controlling rotation direction: as the user drags their finger, continuously calculate the angle from the dial stop to the current tap location. If the angle moves in the wrong direction, don't update the position of the finger hole. Note that trig functions return values from +Pi to -Pi radians. For digits greater than 5, rather than handling negative angles you will probably want to add 2Pi radians (or 360 degrees) to the angle.
Rotating the wrong way: the digits below 5 are generating angles in the range of 0 to -Pi. Without seeing code, I suspect adding 2Pi to the angle will correct your rotation direction.
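For reference, a small sketch of that angle bookkeeping; dialCenter, lastAngle and the method name are assumptions about your code, not a complete solution:

// Angle of the centre->touch line, folded into 0..2*Pi so you don't have to
// juggle the negative values atan2 returns for the upper half of the dial.
static CGFloat DialAngleForTouch(CGPoint touchPoint, CGPoint dialCenter) {
    CGFloat angle = atan2(touchPoint.y - dialCenter.y, touchPoint.x - dialCenter.x);
    return (angle < 0) ? angle + 2.0 * M_PI : angle;   // add 2*Pi (360 degrees) to negative angles
}

// Called from touchesMoved:; self.dialCenter and self.lastAngle are assumed properties.
- (void)handleDragToPoint:(CGPoint)touchPoint {
    CGFloat newAngle = DialAngleForTouch(touchPoint, self.dialCenter);
    if (newAngle >= self.lastAngle) {
        self.lastAngle = newAngle;   // angle grows clockwise in UIKit coordinates: accept the move
        // rotate the finger-hole image to newAngle here
    }
    // otherwise ignore the move; a full implementation also has to handle the
    // wrap from 2*Pi back to 0 when the hole passes the zero-angle direction
}
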
Here is a better dial:
Have fun!

How do I optimize point-to-circle matching?

I have a table that contains a bunch of Earth coordinates (latitude/longitude) and associated radii. I also have a table containing a bunch of points that I want to match with those circles, and vice versa. Both are dynamic; that is, a new circle or a new point can be added or deleted at any time. When either is added, I want to be able to match the new circle or point with all applicable points or circles, respectively.
I currently have a PostgreSQL module containing a C function to find the distance between two points on earth given their coordinates, and it seems to work. The problem is scalability. In order for it to do its thing, the function currently has to scan the whole table and do some trigonometric calculations against each row. Both tables are indexed by latitude and longitude, but the function can't use them. It has to do its thing before we know whether the two things match. New information may be posted as often as several times a second, and checking every point every time is starting to become quite unwieldy.
I've looked at PostgreSQL's geometric types, but they seem more suited to rectangular coordinates than to points on a sphere.
How can I arrange/optimize/filter/precalculate this data to make the matching faster and lighten the load?
You haven't mentioned PostGIS - why have you ruled that out as a possibility?
http://postgis.refractions.net/documentation/manual-2.0/PostGIS_Special_Functions_Index.html#PostGIS_GeographyFunctions
Thinking out loud a bit here... you have a point (lat/long) and a radius, and you want to find all existing point-radius combinations that may overlap (or something like that...).
It seems you might be able to store a few more bits of information along with those numbers that could help you rule out circles that are nowhere close during your query. This might avoid a lot of trig operations.
For example, with point x,y and radius r, you could easily calculate a range of feasible lat/long (a squarish area) that can be used to rule out needless calculations against other points.
You could then store the max and min lat and long along with that point in the database. Then, before running your trig on every row, you could filter your results to eliminate points that are obviously out of bounds.
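A sketch of that precalculation, written as plain C so it could sit next to the existing distance function; the 6371 km Earth radius and the metre-based radius are assumptions, and the maths is a crude spherical approximation, fine for filtering but not for the final test:

#include <math.h>

typedef struct {
    double minLat, maxLat, minLon, maxLon;   /* degrees */
} GeoBounds;

/* Bounding box guaranteed to contain the circle (away from the poles). */
GeoBounds boundsForCircle(double latDeg, double lonDeg, double radiusMeters) {
    const double earthRadius = 6371000.0;                 /* metres, assumed */
    double dLat = (radiusMeters / earthRadius) * (180.0 / M_PI);
    double dLon = dLat / cos(latDeg * M_PI / 180.0);      /* longitude degrees shrink with latitude */
    GeoBounds b = { latDeg - dLat, latDeg + dLat, lonDeg - dLon, lonDeg + dLon };
    return b;
}

Store the four bounds as indexed columns; a new point then only needs the exact great-circle check against rows whose box contains it, and the same idea works in the other direction for a new circle.
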
If I understand you correctly, then my first idea would be to cache some data and eliminate most of the checking.
Imagine your circle is actually a box with 4 sides.
You could store the coordinates of those edges much like you have grid lines on a real map, i.e. store the east, west, north, and south edge of each circle.
If your coordinate is outside of that box, you can be sure it won't be inside the circle either, since the box is bigger than the circle.
If it isn't, then you have to check like you do now. But I guess you can eliminate most of the steps already.

How to detect a circle motion with UIGestureRecognizer

I want to be able to detect someone's finger drawing a circular motion on the screen - as if they were drawing an 'O'. Is this possible with UIGestureRecognizer?
I think the answer to this depends on your definition of circular motion and how you intend to use it. For example, do you want to know how many degrees along a circle the user's finger has travelled? Or do you only care about a circle being completed? What degree of accuracy do you require? Do you want to allow the motion to be interrupted, or does it have to be more of a touch-down > draw-circle > touch-up (in other words, a single motion)?
One approach would be to define a bunch of rectangular zones along the circumference and detect if the user is touching these in sequence. This can provide you with direction and a coarse indication of angle.
Another approach is to store the points between touch down and touch up and do some filtering and curve fitting to figure out what shape is approximated by the points. First low-pass-filter using a basic FIR filter and then look at the dx and dy from point to point. A circle (as a series of arcs) will have to fall within a certain range of slope changes from point to point, otherwise you have some other shape.
Yet another approach is to use a Neural Network to take the points and tell you what the shape looks like.
I think this may be what you need
How to detect circular gesture via Gesture Recognizer?
Instead of using a gesture recognizer, this project reacts to circular motions by tracking the angle of UITouch events.
My answer to my question:
I used this: http://iphonedevelopment.blogspot.com/2009/04/detecting-circle-gesture.html
.. but turned the CircleView into a custom UIGestureRecognizer. Everything lovely.
No, it doesn't recognize natively a circular motion.
You have to implement your own method to do that.
Here's how I needed to do it using the touches callbacks in my view controller, but this could be made into a gesture recognizer too (a rough sketch follows the steps below). Note, I was trying to detect multiple circle motions (2 or more clockwise or counter-clockwise circles made during a touch event).
Store touchesMoved CGPoints in an array.
Create a min/max rect of all the points in your history array.
Divide this min/max rect into 4 smaller rects.
Assign each history point a quadrant using CGRectContainsPoint() for each of the 4 quadrants
A clockwise motion will have quadrants ascending. A counter-clockwise motion will have quadrants descending.
Check the ratio of width/height if you want to detect circles vs ovals
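As the sketch promised above, here is roughly what those steps could look like, assuming the touchesMoved points were collected into an array of NSValue-wrapped CGPoints; the method name, touchHistory, and the quadrant numbering are assumptions:

- (void)analyzeTouchHistory:(NSArray *)touchHistory {
    // 1-2) min/max rect of all recorded points
    CGFloat minX = CGFLOAT_MAX, minY = CGFLOAT_MAX, maxX = -CGFLOAT_MAX, maxY = -CGFLOAT_MAX;
    for (NSValue *v in touchHistory) {
        CGPoint p = [v CGPointValue];
        minX = MIN(minX, p.x); maxX = MAX(maxX, p.x);
        minY = MIN(minY, p.y); maxY = MAX(maxY, p.y);
    }
    CGFloat midX = (minX + maxX) / 2.0, midY = (minY + maxY) / 2.0;

    // 3-4) assign each point a quadrant, numbered so a clockwise circle visits 1, 2, 3, 4, 1, ...
    // (equivalent to CGRectContainsPoint against four sub-rects of the min/max rect)
    NSMutableArray *quadrants = [NSMutableArray array];
    for (NSValue *v in touchHistory) {
        CGPoint p = [v CGPointValue];
        NSInteger q;
        if (p.y < midY) q = (p.x >= midX) ? 1 : 4;   // top-right : top-left
        else            q = (p.x >= midX) ? 2 : 3;   // bottom-right : bottom-left
        if (quadrants.count == 0 || [[quadrants lastObject] integerValue] != q) {
            [quadrants addObject:@(q)];
        }
    }
    // Ascending runs (1,2,3,4) indicate clockwise circles, descending runs counter-clockwise;
    // count the full cycles to get the number of circles drawn. Comparing (maxX - minX)
    // with (maxY - minY) gives the width/height ratio for the circle-vs-oval check.
}
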

How to determine if iPad user taps within an irregular shaped image?

I've hooked up a UITapGestureRecognizer to a UIImageView containing the image I'd like to display on an iPad screen and am able to consume the user taps just fine. However, my image is that of a hand on a table and I'd like to know if the user has tapped on the hand or on the table part of the image. I can get the x,y coordinates of the user tap with CGPoint tapLocation = [recognizer locationInView:self.view]; but I'm at a loss for how to map that CGPoint to, say, the region of the image that contains the hand vs. the region that contains the table. Everything I've read so far deals with determining if a CGPoint is in a particular rectangular area, but what if you need to determine if that CGPoint is located in the boundaries of a more irregular shape? Is that even possible? Any suggestions or just pointing me in the right direction would be a big help. Thanks!
You could use pointInside:withEvent: to define the hit area programmatically.
To elaborate, you just take the point and evaluate, with a series of if statements, whether it falls in the area you're after. If it does, return TRUE; if it doesn't, return FALSE. If this is related to this post, then you could use a circle test, comparing the distance from the point to the center of your circle using the Pythagorean theorem.
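For example, a circular hit area could look roughly like this in a UIView subclass; hitCenter and hitRadius are assumed properties describing your artwork:

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGFloat dx = point.x - self.hitCenter.x;
    CGFloat dy = point.y - self.hitCenter.y;
    // Inside the circle when the squared distance is within the squared radius.
    return (dx * dx + dy * dy) <= self.hitRadius * self.hitRadius;
}
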
Late to the party,
but the core tool you want here is a "point in polygon" routine.
This is a generic approach, independent of iOS.
Google has lots of info,
but the general approach is:
1) Define your closed polygon.
- It sounds like this might be a bit of work in your case.
2) Choose any second point not equal to your original point.
(Yes, any point.)
3) For each edge in the polygon,
determine if the ray from your original point through that second point intersects with that polygon edge.
- This requires a line-segment-intersects-ray routine, also available on the 'tubes.
4) If the number of intersections is odd, the point is inside the polygon;
if the count is even, it's outside.
For general geometry-type issues,
I highly recommend Paul Bourke: http://local.wasp.uwa.edu.au/~pbourke/geometry/insidepoly/
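A compact sketch of the even-odd test, assuming the hand outline is stored as a plain array of CGPoints; note it fixes the ray to run horizontally toward +x, which is the common simplification of step 2:

#import <UIKit/UIKit.h>

BOOL PointInPolygon(CGPoint test, const CGPoint *polygon, NSUInteger count) {
    BOOL inside = NO;
    for (NSUInteger i = 0, j = count - 1; i < count; j = i++) {
        CGPoint a = polygon[i], b = polygon[j];
        // Does a horizontal ray from `test` toward +x cross the edge a-b?
        if (((a.y > test.y) != (b.y > test.y)) &&
            (test.x < (b.x - a.x) * (test.y - a.y) / (b.y - a.y) + a.x)) {
            inside = !inside;   // each crossing toggles inside/outside (the odd/even rule)
        }
    }
    return inside;
}
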
You can use a bounding rectangle that covers most or all of the hand.
If the user is using his finger to tap either the hand or the table, I doubt that you want him or her to be extremely precise with the tap.
As an extension of the bounding rectangle answer,
you could define several smaller bounding rectangles that together approximate a hand without covering the rest of the screen.
OR
You could use a list of rectangles, one for each of your objects, and put the hand at the end of the list. In this case, if a tap lands on button X at the top right of the screen, which is technically also inside the hand rectangle, button X is chosen because its rectangle is found first.
Define the shape by a black and white bitmap (1 bit per pixel) and check whether the particular bit is set (see the sketch after this list). This would eat a lot of memory if you had a lot of large shapes, but for one bitmap with a hand, it should not be a big deal.
Define the shape as a polygon. Then you need to do a point-in-polygon test. Wikipedia has a wonderful article on this, with links to code here: http://en.wikipedia.org/wiki/Point_in_polygon
iPad libraries might have this already implemented. Sorry, I cannot help you there, not an iPad developer.
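One possible way to realise the bitmap idea on iOS is to treat the image's own alpha channel as the mask and test the pixel under the tap; maskImage is an assumed UIImage whose opaque pixels mark the hand, and the point must already be in the image's pixel coordinate space:

#import <UIKit/UIKit.h>

static BOOL PointHitsMask(CGPoint point, UIImage *maskImage) {
    size_t width  = CGImageGetWidth(maskImage.CGImage);
    size_t height = CGImageGetHeight(maskImage.CGImage);
    if (point.x < 0 || point.y < 0 || point.x >= width || point.y >= height) return NO;

    // Render just the alpha channel into a one-byte-per-pixel buffer.
    // (In a real app you would build this buffer once and cache it.)
    NSMutableData *alpha = [NSMutableData dataWithLength:width * height];
    CGContextRef ctx = CGBitmapContextCreate(alpha.mutableBytes, width, height,
                                             8, width, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), maskImage.CGImage);
    CGContextRelease(ctx);

    uint8_t value = ((const uint8_t *)alpha.bytes)[(size_t)point.y * width + (size_t)point.x];
    return value > 0;   // non-transparent pixel -> the tap landed on the shape
}
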

Cocoa Touch - How to check if a non-rectangular object in a UIImageView intersects another object?

Say I have a UIImageView that contains an image of an object that is not rectangular, i.e. a round ball. How can I check if another UIImageView (rectangular or not) intersects, or contains a point in, that object (not its frame)?
Basic example:
I have two balls rolling around on the screen, and I want to check for collisions. But I don't want to check whether their rects intersect each other, since the balls are not rectangular.
I think if you have a limited set of possible shapes then it is better to perform the check for each possible pair of object shapes rather than use some generic algorithm. For example, two circles intersect if the distance between their centers is less than the sum of their radii, etc.
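As a minimal sketch of that circle-vs-circle case (the Ball struct is an assumption about how you might model the objects):

#import <UIKit/UIKit.h>

typedef struct { CGPoint center; CGFloat radius; } Ball;

static BOOL BallsIntersect(Ball a, Ball b) {
    CGFloat dx = a.center.x - b.center.x;
    CGFloat dy = a.center.y - b.center.y;
    CGFloat maxDistance = a.radius + b.radius;
    // Intersecting when the centre distance is at most the sum of the radii;
    // compare squared values to avoid the square root.
    return (dx * dx + dy * dy) <= maxDistance * maxDistance;
}
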