Draw area from given points on iPhone

I want to draw an area from given points on a map using the Google API on iPhone (I always have more than 2 points). If possible, I want this area drawn with alpha 0.5, so people will be able to see routes underneath it.
I would appreciate any code and links.
There are a few similar questions, but I didn't find anything just like this, so please correct me if I'm wrong.

Just use an MKOverlayView and give it a proper frame (of type CGRect) calculated from the given points. Set the view's alpha to 0.5. Note that frame and alpha are properties of UIView, of which MKOverlayView is a subclass.
Read everything about this class in Apple's MKOverlayView class reference.
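Something like this (untested) sketch should get you started; it uses MKPolygon with MKPolygonView, which build on the overlay machinery described above. It assumes coords is a C array of CLLocationCoordinate2D holding your points, coordCount is its length, and your controller is the map view's delegate:

// Add the polygon overlay built from the given points.
MKPolygon *area = [MKPolygon polygonWithCoordinates:coords count:coordCount];
[self.mapView addOverlay:area];

// MKMapViewDelegate: supply the overlay's view, filled at alpha 0.5 so
// routes underneath remain visible.
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id<MKOverlay>)overlay
{
    MKPolygonView *polygonView = [[MKPolygonView alloc] initWithPolygon:(MKPolygon *)overlay];
    polygonView.fillColor = [[UIColor blueColor] colorWithAlphaComponent:0.5];
    polygonView.strokeColor = [UIColor blueColor];
    polygonView.lineWidth = 1.0;
    return polygonView;
}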

Related

UIGesture recognition on different areas of a UIImageView

I have an image made up of several leaves.
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the parts it consists of and add a UITapGestureRecognizer to each part) in order to trigger different actions according to the leaf tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap, and a tap on one may be recognized as a tap on another. Keeping just one image means knowing which points of the screen belong to which leaf.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer callback to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could implement it in your callback as well, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested: use Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could still construct a simple path that would be (virtually) overlaid on the image to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (but then you would basically be reimplementing CGPathContainsPoint() without a path), or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go that route, but honestly, for a shape as simple as what you've drawn, just recreate it in code with UIBezierPath.
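For instance, a minimal sketch of that approach; leafPaths is a hypothetical NSArray of UIBezierPath objects, one per leaf, built to match the drawn image:

// Gesture callback: decide which leaf (if any) was tapped by hit-testing
// the tap location against each leaf's path.
- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:self.imageView];
    for (NSUInteger i = 0; i < self.leafPaths.count; i++) {
        UIBezierPath *leafPath = self.leafPaths[i];
        if ([leafPath containsPoint:point]) {
            NSLog(@"Tapped leaf %lu", (unsigned long)i);
            return;
        }
    }
    // The tap landed outside every leaf; ignore it.
}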
Not sure if this will be helpful but if you get stuck on figuring out which leaf was clicked, you could use an old image map trick we used to use in CD-ROM projects for pixel accurate click tracking on images.
You have your full size image. Make a 25% (or less) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; anything you want to ignore make black. When the full size image is clicked, get the x/y coordinates and scale them by the percentage of your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. By determining the pixel color you will know which leaf was clicked.
Sounds clunky but it works really well and is fast.
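A sketch of the lookup; hitTestImage is the hypothetical 25%-scale, color-keyed UIImage (make it fully opaque so the premultiplied values read back unchanged):

// Return the hit-test image's color at a point given in full-size coordinates.
- (UIColor *)colorAtPoint:(CGPoint)fullSizePoint
{
    CGFloat scale = 0.25; // must match the hit-test image's scale factor
    UIImage *image = self.hitTestImage;
    CGPoint p = CGPointMake(fullSizePoint.x * scale, fullSizePoint.y * scale);

    // Draw just the pixel of interest into a 1x1 RGBA bitmap.
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Core Graphics uses a bottom-left origin, so flip y, then shift the
    // context so the pixel we want lands at (0, 0).
    CGContextTranslateCTM(context, -p.x, -(image.size.height - p.y));
    CGContextDrawImage(context,
                       CGRectMake(0, 0, image.size.width, image.size.height),
                       image.CGImage);
    CGContextRelease(context);

    return [UIColor colorWithRed:pixel[0] / 255.0 green:pixel[1] / 255.0
                            blue:pixel[2] / 255.0 alpha:pixel[3] / 255.0];
}

Compare the returned color against the key colors you filled the leaves with to decide which leaf was clicked.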
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this stackoverflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

Draw color on country (MapKit) when it is selected

I want to color a country when it is selected from a tableView. Can you help me, please?
Considering your case, let me give you a heads-up: this would require edge detection (so if you haven't done it before, it will take a LONG time), though not a lot of it. The following is just one way of approaching the problem:
1) Take out an image context from the map you have.
2) Apply a relevant edge-detection algorithm in the area you want and use a bright color to differentiate the edges. Note that this way the inside would not be colored, and I can't tell you for sure whether filling it is possible.
3) Add that context as a subView on top of the map.
Also take a look at the Quartz 2D programming guide for more tips.
I would suggest something different, though: keep pre-stored images for all the possibilities and just put a UIImageView, with the relevant image, in front of the map. This will save you a lot of headache.
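A sketch of that alternative, assuming a hypothetical naming scheme of one pre-rendered, transparent-background highlight image per country, each sized to match the map:

// Overlay the selected country's pre-rendered highlight on top of the map.
UIImage *highlightImage = [UIImage imageNamed:@"highlight_france.png"];
UIImageView *highlightView = [[UIImageView alloc] initWithImage:highlightImage];
highlightView.frame = self.mapView.frame;
highlightView.userInteractionEnabled = NO; // let touches fall through to the map
[self.mapView.superview addSubview:highlightView];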

Creating a bounding box for a UIImageView

How can a bounding box be created for a UIImageView that is not a CGRect?
I would like to have objects in my view which display images as well as detect collisions.
The issue is that I would like these objects to be whatever shape they actually are, rather than fitting them into a CGRect and detecting collisions in areas which are inside the box but are not part of the actual image.
How does one achieve this?
This is a non-trivial problem. The basics: a CGRect is a rectangle, and a hit test inside a rectangle is fairly easy to understand. However, it sounds like you want a more complex shape. UIImageView displays an image; it has no idea what shape you want to use for your collision test, so you are going to have to tell it.
One easy thing to do is to look at the alpha/transparency values of the displayed image to define the shape. To answer the question "is this point hitting the image", we find the location of the point within the image and return true if the alpha there is greater than 0. If you do this, you can create any image with a transparent background and the code will just work.
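A minimal sketch of that alpha test, assuming point is already expressed in the image's coordinate space:

// YES when the image's pixel at `point` is not fully transparent.
- (BOOL)image:(UIImage *)image isOpaqueAtPoint:(CGPoint)point
{
    unsigned char alpha = 0;

    // Render just the pixel of interest into a 1x1 alpha-only bitmap.
    CGContextRef context = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    // Core Graphics uses a bottom-left origin, so flip y before shifting.
    CGContextTranslateCTM(context, -point.x, -(image.size.height - point.y));
    CGContextDrawImage(context,
                       CGRectMake(0, 0, image.size.width, image.size.height),
                       image.CGImage);
    CGContextRelease(context);

    return alpha > 0;
}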
If that will not work for you, you can also run a hit test on a point and a polygon; this post covers that in detail:
How can I determine whether a 2D Point is within a Polygon?
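For reference, a small sketch of the even-odd (ray-casting) test that the linked post describes, for a polygon given as a C array of CGPoints:

// Count edge crossings of a horizontal ray from `p`; an odd count means inside.
static BOOL PointInPolygon(CGPoint p, const CGPoint *poly, NSUInteger count)
{
    BOOL inside = NO;
    for (NSUInteger i = 0, j = count - 1; i < count; j = i++) {
        if (((poly[i].y > p.y) != (poly[j].y > p.y)) &&
            (p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                   (poly[j].y - poly[i].y) + poly[i].x)) {
            inside = !inside;
        }
    }
    return inside;
}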

Implementing stretchable dialog borders in iPhone SDK

I want to implement dialog borders that scale to the size I require the dialog to be. Perhaps there is a better more conventional name for this sort of thing. If there is, if someone would edit the title, that'd be great.
Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or disproportionate dimensions. I have a few ideas on how this is done but am not sure which is better for iPhone. I have a few questions.
1) Should I make a containing view object that overloads its drawRect method and draws the images at their appropriate scale when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, as in an animation.
1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()?
2) Is it generally better to create
a) a 3 x 3 image where each segment sits in its own cell of the grid? If so, is it simple to draw a portion of this image onto my target view in drawRect (if the earlier assumption that I should use drawRect is correct)?
b) The pieces separately in 8 different files?
UPDATE:
To clarify, the idea is to take any customized border art and stretch the 2nd, 4th, 6th, and 8th cells (in a 3x3-cell grid) to form a border of any size using just those assets. Stretching a plain image as a whole would distort the corners, so I'd like to stretch only those even-numbered cells as needed and tack the corners on so there is no distortion. I'd seen this done before, so I thought it might be a standard thing with a standard name other than what I called it.
Anyhow, I was advised that adding 8 UIImageViews to a container would not be as efficient as drawing the UIImages on the fly in drawRect, so I took that approach, using CGContextDrawImage() after applying the necessary transformations to the context to translate and invert the Y axis. Because this function draws from the bottom-left corner of an image onto a top-left-origined UIView, the image comes out upside down without the Y-axis invert. I noticed the suggested UIImage methods like drawAtPoint: work as well, and similarly, but without the invert, since UIImage draws in the same orientation as UIViews. I will continue my implementation with the former and see how it goes, but one other question:
Would someone happen to know which of these approaches is more efficient, faster, etc.?
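For reference, a minimal sketch of the CGContextDrawImage() approach the update describes (borderImage is a hypothetical UIImage property):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Core Graphics draws images bottom-left up; flip the context so the
    // image comes out right side up in this top-left-origined UIView.
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    CGContextDrawImage(context, self.bounds, self.borderImage.CGImage);
    CGContextRestoreGState(context);
}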
I'm not sure I follow, but here's my best shot at an answer...
Using drawRect: or adding individual UIImageViews to a parent view is entirely up to you. UIImageView gives you a bit of encapsulated functionality for free, but otherwise they are the same as far as appearances go.
If you do want to go the drawRect route, you just need to use UIImage's drawAtPoint: method. Do the math for where you want it to be, and draw it. You can calculate your points based on the parent view's dimensions.
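For example, a sketch of that drawRect: approach for the border pieces; topLeftImage, topRightImage, and topEdgeImage are hypothetical UIImage properties holding the corner and edge art:

- (void)drawRect:(CGRect)rect
{
    CGSize size = self.bounds.size;
    CGSize corner = self.topLeftImage.size;

    // Corners keep their natural size...
    [self.topLeftImage drawAtPoint:CGPointZero];
    [self.topRightImage drawAtPoint:CGPointMake(size.width - corner.width, 0)];

    // ...while the edge between them stretches to fill the gap. No y-flip is
    // needed: UIImage draws in UIKit's top-left-origined coordinate space.
    [self.topEdgeImage drawInRect:CGRectMake(corner.width, 0,
                                             size.width - 2 * corner.width,
                                             self.topEdgeImage.size.height)];
    // Repeat for the remaining edges, corners, and the center piece.
}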
As far as scaling goes, it's impossible to resize these images without scaling them, so I'd plan ahead and make your originals as large as, or larger than, you ever expect to display them.
Hope that helps a little?
Cheers
If you want a border on a dialog box, assuming the box is a UIView (or subclass), then set the layer's border properties and let the system draw the border for you.
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.borderWidth = 2;
view.layer.borderColor = [UIColor whiteColor].CGColor;
view.layer.cornerRadius = 0; // 0=square corners, >0 for rounded

Getting the pixel color from an image

I'm working on a view-based application and am trying to find some code that will let me grab pixel colors from one of my images and use them for collision detection against one of my UIImageViews, but I haven't had any luck finding anything on this subject. For example: if the UIImageView for my player collides with the UIImageView of my map, and collides with the color black in the image placed inside my map view, then run the collision code, or something along those lines.
Is your question about getting the pixel color, or about doing collision detection?
If you want to get the pixel color, I'm not sure there's an easy way to do it - you may have to mess with your current graphics context to get it, and nothing is coming up in the docs.
If it's just collision detection you want to do, take a look at UIView's convertPoint:toView: and convertPoint:fromView: methods. They let you take defined points within a given view and get their equivalents in other views. With some basic math on the resultant points, you could theoretically do some pretty good collision detection without having to worry about pixel colors.
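For instance, a sketch of that rect-based check, assuming playerView and mapView share self.view as a common ancestor:

// Convert both views' bounds into the same coordinate space and intersect.
CGRect playerRect = [self.playerView convertRect:self.playerView.bounds
                                          toView:self.view];
CGRect mapRect = [self.mapView convertRect:self.mapView.bounds
                                    toView:self.view];
if (CGRectIntersectsRect(playerRect, mapRect)) {
    // The views overlap; run your collision code here.
}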