Creating a bounding box for a UIImageView - iPhone

How can a bounding box be created for a UIImageView that is not a CGRect?
I would like to have objects in my view which display images as well as detect collisions.
The issue is that I would like these objects to be whatever shape they actually are, rather than fitting them into a CGRect and detecting collisions in areas that are inside the box but are not part of the actual image.
How does one achieve this?

This is a non-trivial problem. The basics: a CGRect is a rectangle, and a hit test inside a rectangle is easy to understand. It sounds, however, like you want a more complex shape. UIImageView displays an image; it has no idea what shape you want to use for your collision test, so you are going to have to tell it.
One easy approach is to use the alpha (transparency) values of the displayed image as the shape. To answer "is this point hitting the image?", work out the point's location within the image and return true if the alpha there is greater than 0. Do it this way and you can use any image with a transparent background and the code will just work.
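A minimal sketch of that alpha test in Swift (not from the original answer): the function name is illustrative, the point is assumed to already be in the image's own pixel coordinates, and the image is drawn one pixel at a time into a tiny RGBA bitmap so the alpha channel can be read back.

    import UIKit

    // Returns true if the image's pixel at `point` (top-left origin, image
    // pixel coordinates) has non-zero alpha. Hypothetical helper.
    func isOpaquePixel(at point: CGPoint, in image: UIImage) -> Bool {
        guard let cgImage = image.cgImage else { return false }
        let width = CGFloat(cgImage.width)
        let height = CGFloat(cgImage.height)

        var pixel: [UInt8] = [0, 0, 0, 0]                  // one RGBA pixel
        return pixel.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }

            // Core Graphics uses a bottom-left origin, so flip y while shifting
            // the image so the pixel of interest lands in the 1x1 context.
            context.translateBy(x: -point.x, y: point.y - height)
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return buffer[3] > 0                           // alpha channel
        }
    }

If the image view scales its image (contentMode), you would first convert the touch point from view coordinates into image pixel coordinates.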
If that will not work for you, you can also run a hit test of a point against a polygon; this post covers that in detail:
How can I determine whether a 2D Point is within a Polygon?
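For the polygon route, Core Graphics can do the containment test for you once the shape is expressed as a CGPath; a small sketch (the triangle here is just an example shape):

    import UIKit

    // Build a closed path from the polygon's vertices.
    func polygonPath(for vertices: [CGPoint]) -> CGPath {
        let path = CGMutablePath()
        path.addLines(between: vertices)
        path.closeSubpath()
        return path
    }

    let triangle = polygonPath(for: [CGPoint(x: 0, y: 0),
                                     CGPoint(x: 100, y: 0),
                                     CGPoint(x: 50, y: 80)])
    let hit = triangle.contains(CGPoint(x: 50, y: 20))   // true: point is inside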

Related

UIImage rotating around its own axis? Possible?

I need a UIImageView initialized with an image that rotates around its own axis in the same way it does in this GIF example.
My task is actually a little more complicated because I also need to turn a flat image into a thick, coin-like 3D image, but I won't bother anyone here with that. The only thing I want to know right now is whether it's possible to animate a UIImage like the example above, and if so, then how do I do it?

Blur Partial Part of Image

I am new to iOS development. After googling, I found that it is easy to blur a whole image but difficult to blur a specific part of it, such as a rectangular or circular region. So please help me: how can I blur a specific part of an image rather than the whole image?
Thanks in advance.
Blur the whole image, then crop to the part you care about. You can use a mask for non-rectangular/non-sharp-edged blurs, but don't skip the crop.
The lovely, but sometimes tricky, thing about Core Image is that it's extremely lazy. It doesn't work from the start to the end; it's more of a pull model, working from the last thing you asked for all the way back to the original rasters. Moreover, it won't actually filter any pixels you have not asked for.
So, in your case, a crop means not asking for any blurred pixels outside of the crop. Since you didn't ask for them, they don't get blurred. The blur only runs on the pixels you ask for—the ones inside the crop.
Masking works differently; by definition, it needs to look at every pixel in the mask image, and I would be surprised if it didn't also look at every pixel in the source (even to multiply it by zero). This is why you should still crop, even with a mask.
Note that the blurred-and-cropped portion of the image will still be where it is in the original image. It doesn't copy/move the pixels within the image, because that would be expensive; instead, it returns an image with a different extent—namely, the crop rectangle. You'll want to retrieve that extent and subtract its origin from the coordinates where you want to draw the image—either that or use an affine transform filter, but, again, that would probably be expensive.
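A rough Swift sketch of that pipeline (my own illustration, not the answerer's code): CIGaussianBlur and the cropped(to:) call are standard Core Image APIs; the radius and the region are placeholders, and the region is given in Core Image's bottom-left coordinate space.

    import UIKit
    import CoreImage

    // Blur only the requested region of an image by cropping the lazy blur output.
    func blurredRegion(of image: UIImage, in region: CGRect) -> UIImage? {
        guard let input = CIImage(image: image),
              let blur = CIFilter(name: "CIGaussianBlur") else { return nil }

        // 1. Describe the blur. Nothing is computed yet; Core Image is lazy.
        blur.setValue(input, forKey: kCIInputImageKey)
        blur.setValue(8.0, forKey: kCIInputRadiusKey)
        guard let blurred = blur.outputImage else { return nil }

        // 2. Crop, so only pixels inside `region` are ever requested (and blurred).
        let cropped = blurred.cropped(to: region)

        // 3. Render. The output's extent is still `region`, so account for its
        //    origin when drawing the result back over the original image.
        let context = CIContext()
        guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }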

UIGesture recognition on different areas of a UIImageView

I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the different parts it consists of and add a UITapGestureRecognizer to each part) so that I can run different actions depending on which leaf was tapped. If I split the image into separate images, one per leaf, the UIImageViews will probably overlap, and a tap on one could be recognized as a tap on another. Keeping just one image means knowing which points of the screen belong to one leaf rather than another.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer call back to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could certainly construct a simple path that would be (virtually) overlaid to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (essentially reimplementing CGPathContainsPoint() without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up useful results if you go that route, but honestly, for a shape as simple as what you've drawn, just recreate it with some UIBezierPath code.
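To make the single-image approach concrete, here is a hedged sketch: one tap recognizer on the image view and a UIBezierPath per leaf (the view controller, outlet, and oval paths are all placeholders; you would trace your actual artwork).

    import UIKit

    class LeafViewController: UIViewController {
        @IBOutlet var imageView: UIImageView!

        // Hypothetical per-leaf paths, in the image view's coordinate space.
        lazy var leafPaths: [String: UIBezierPath] = [
            "left":  UIBezierPath(ovalIn: CGRect(x: 10,  y: 40, width: 80, height: 120)),
            "right": UIBezierPath(ovalIn: CGRect(x: 110, y: 40, width: 80, height: 120))
        ]

        override func viewDidLoad() {
            super.viewDidLoad()
            imageView.isUserInteractionEnabled = true
            let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
            imageView.addGestureRecognizer(tap)
        }

        @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
            let point = recognizer.location(in: imageView)
            // UIBezierPath.contains(_:) wraps the CGPathContainsPoint() test.
            if let (name, _) = leafPaths.first(where: { $0.value.contains(point) }) {
                print("Tapped the \(name) leaf")
            }
        }
    }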
Not sure if this will be helpful, but if you get stuck on figuring out which leaf was clicked, you could use an old image-map trick we used in CD-ROM projects for pixel-accurate click tracking on images.
You have your full size image. Make a 25% (or less) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; anything you want to ignore make black. When the full size image is clicked, get the x/y coordinates and scale them by the percentage of your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. By determining the pixel color you will know which leaf was clicked.
Sounds clunky but it works really well and is fast.
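A sketch of that hit-map lookup (purely illustrative names and colors; the map is a small image whose leaves are flood-filled with distinct solid colors and whose background is black):

    import UIKit

    // Scale the tap into the hit map's pixels, sample one pixel, map its color to a leaf.
    func leafName(forTapAt point: CGPoint, inFullSizeImageOf size: CGSize,
                  hitMap: UIImage) -> String? {
        guard let cgImage = hitMap.cgImage else { return nil }
        let scaled = CGPoint(x: point.x * CGFloat(cgImage.width) / size.width,
                             y: point.y * CGFloat(cgImage.height) / size.height)

        var pixel: [UInt8] = [0, 0, 0, 0]                  // one RGBA pixel
        return pixel.withUnsafeMutableBytes { buffer -> String? in
            guard let context = CGContext(data: buffer.baseAddress, width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return nil }
            context.translateBy(x: -scaled.x, y: scaled.y - CGFloat(cgImage.height))
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: cgImage.width, height: cgImage.height))

            // Map the sampled color to a region; black (the default) means "ignore".
            switch (buffer[0], buffer[1], buffer[2]) {
            case (255, 0, 0): return "top leaf"
            case (0, 255, 0): return "left leaf"
            case (0, 0, 255): return "right leaf"
            default:          return nil
            }
        }
    }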
(all that said, I don't think alpha areas of images trigger the gesture recognizer - so breaking the image up would be less complicated/code intensive.)
If you can break the shape apart into its constituent elements, you can put each into its own layer and use the method discussed in this stackoverflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents

Fastest / most efficient way to draw moving speech bubbles on screen - CoreAnimation, Quartz2D?

I am adding some functionality to an iPhone app, and could use some help in picking the fastest / most efficient / best practice approach for solving this problem:
At the upper-half of my screen, I have speech bubbles (think comic book) that are UIImageViews translating across the screen (dynamic x & y position). It is a UIImageView because there is an image as the background of the speech bubble.
Each speech bubble has a matching image moving around the bottom of the screen (elsewhere in the layer tree).
I would like to draw a tail (the triangle bit of a speech bubble) so that the point of the triangle tracks the lower image, with the base of the triangle attached to the bottom of the upper UIImageView. (Technically the base doesn't have to butt up against it; it can overlap, as long as I can match the color of my background image to the triangle.)
I have already done all the tracking & drawn a line with CGContextStrokePath methods, and now I am stuck on how to replace the line with a triangle.
I have looked at drawing a triangle in Quartz and filling it. My concern is the speech bubbles are repositioned every 1/10th of a second, and it looks like drawing just the line used for proof of concept had a pretty severe performance / visual smoothness impact.
One idea I have is to do the trigonometry myself, and stretch & rotate an image of a triangle to connect each of these speech bubbles with the lower spot. Something is telling me there is a more efficient / more elegant solution, but I am not able to see it looking through the documentation. Any help on how you have or would approach this issue is appreciated. Thanks.
If the speech bubbles are fixed in size, just use a static UIImage. Set the image view's layer.position property to the point of the triangle. Then you can use view animation to move the bubbles around.
If you need the speech bubbles to be different sizes, I'd create a resizeable image using resizableImageWithCapInsets. Then I'd do the same as above to position it.
If there was something special about the speech bubble that I couldn't achieve with either a static image or a resizable image, I'd probably create a custom CALayer or layers to get the effect I wanted (like a gradient layer with a shape layer as its mask).
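A minimal sketch of the resizable-image version (the asset name, sizes, and insets are placeholders; the cap insets would match the corner and tail artwork of your bubble image):

    import UIKit

    // A stretchable bubble image whose corner/tail artwork is preserved.
    let bubbleImage = UIImage(named: "bubble")!     // hypothetical asset
        .resizableImage(withCapInsets: UIEdgeInsets(top: 12, left: 12, bottom: 20, right: 12))

    let bubbleView = UIImageView(image: bubbleImage)
    bubbleView.frame.size = CGSize(width: 160, height: 90)

    // Anchor the layer at the tail's tip (here, bottom-center of the artwork)
    // so positioning the view places the tip exactly on the tracked point.
    bubbleView.layer.anchorPoint = CGPoint(x: 0.5, y: 1.0)
    bubbleView.center = CGPoint(x: 200, y: 300)     // wherever the lower image currently is

    // Moving the bubble is then just an animated change of center.
    UIView.animate(withDuration: 0.1) {
        bubbleView.center = CGPoint(x: 220, y: 310)
    }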

Getting the pixel color from an image

I'm working in a view-based application and am trying to find some code that will let me grab pixel colors from one of my images and use them for collision detection against one of my UIImageViews, but I haven't had any luck finding anything on this subject. So, if the UIImageView for my player collides with the UIImageView of my map && collides with the color black in the image that's placed inside my map view... then run collision code... or something along those lines.
Is your question about getting the pixel color, or about doing collision detection?
If you want to get the pixel color, I'm not sure there's an easy way to do it - you may have to mess with your current graphics context to get it, and nothing is coming up in the docs.
If it's just collision detection you want to do, take a look at UIView's convertPoint:toView: and convertPoint:fromView: methods. They let you take defined points within a given view and get their equivalents in other views. With some basic math on the resultant points, you could theoretically do some pretty good collision detection without having to worry about pixel colors.
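A short sketch of that rectangle-based collision test (the view names are placeholders): convert both frames into a common coordinate space, then intersect.

    import UIKit

    // True if the two views' rectangles overlap, measured in `container`'s coordinates.
    func viewsCollide(_ player: UIView, _ obstacle: UIView, in container: UIView) -> Bool {
        let playerFrame = player.convert(player.bounds, to: container)
        let obstacleFrame = obstacle.convert(obstacle.bounds, to: container)
        return playerFrame.intersects(obstacleFrame)
    }

    // Usage: only if the rectangles overlap would you go on to a per-pixel check.
    // if viewsCollide(playerView, mapView, in: view) { /* run collision code */ }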