I am new to iOS development. After googling, I found that it is easy to blur a whole image, but it is difficult to blur a specific part of an image, such as a rectangular or circular region. So please help me: how can I blur a specific part of an image rather than the whole image?
Thanks in advance.
Blur the whole image, then crop to the part you care about. You can use a mask for non-rectangular/non-sharp-edged blurs, but don't skip the crop.
The lovely, but sometimes tricky, thing about Core Image is that it's extremely lazy. It doesn't work from the start to the end; it's more of a pull model, working from the last thing you asked for all the way back to the original rasters. Moreover, it won't actually filter any pixels you have not asked for.
So, in your case, a crop means not asking for any blurred pixels outside of the crop rectangle. Since you never ask for them, they never get blurred; the blur only runs on the pixels inside the crop.
Masking works differently; by definition, it needs to look at every pixel in the mask image, and I would be surprised if it didn't also look at every pixel in the source (even to multiply it by zero). This is why you should still crop, even with a mask.
Note that the blurred-and-cropped portion of the image will still be where it is in the original image. It doesn't copy/move the pixels within the image, because that would be expensive; instead, it returns an image with a different extent—namely, the crop rectangle. You'll want to retrieve that extent and subtract its origin from the coordinates where you want to draw the image—either that or use an affine transform filter, but, again, that would probably be expensive.
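To make this concrete, here is a minimal Swift sketch of the blur-then-crop approach. The function name and `regionOfInterest` parameter are placeholders; note that Core Image uses a bottom-left origin, so the rect must be given in that coordinate space.

```swift
import UIKit
import CoreImage

// A minimal sketch of blur-then-crop; `regionOfInterest` is a rect in
// Core Image's bottom-left-origin coordinate space.
func blurredRegion(of uiImage: UIImage, regionOfInterest: CGRect) -> UIImage? {
    guard let input = CIImage(image: uiImage),
          let blur = CIFilter(name: "CIGaussianBlur") else { return nil }
    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(10.0, forKey: kCIInputRadiusKey)

    // The crop is what keeps Core Image from ever computing blurred
    // pixels outside the region; nothing else gets rendered.
    guard let cropped = blur.outputImage?.cropped(to: regionOfInterest) else { return nil }

    // The cropped image's extent is still positioned within the original
    // image, so render from that extent to account for its origin.
    let context = CIContext()
    guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```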
I want to merge an image into another image in one shape. Example:
1- People image
2- Shape image:
So how do I draw that? I already implemented the merging, but it doesn't fill that shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image-editing software to create a second mask image, the same size as your shape image, with pure black in the area where you want the people image to appear and white where you don't want it to appear. In this example, that would be the area inside the blue shape. It is important not to crop this image, or else the two won't match up exactly.
The other way to create the masking image would be to do it dynamically based on the shape image, and honestly, this is the way I would do it. It means you're including fewer images in your app, and if you made any changes to the shape image, you wouldn't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need to use a format that allows transparency (PNG is preferred), so that the part of the image outside the shape, which is white in your JPEG, has alpha transparency instead. Make sure the section in the center of the image is white (really, any color that is not used in the wanted part of the shape image would work, but I'll say white for this example) and that no parts of it drift away from pure white after image compression.
You will then use Quartz to select the area that's white, and create a mask from that. This technique is a bit more involved, but what you need can be found in the document I linked to above. Because of this, you might start with a static masking image, and then convert to the more involved technique after you've got the code to make the first technique work.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate(_:_:_:_:_:_:_:_:). You can then apply the mask to the people image using the function CGImageCreateWithMask(_:_:), which will give you an image with the person's portrait, with the correct shape cropped from the center.
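In Swift, those two calls surface as the CGImage(maskWidth:...) initializer and the masking(_:) method. Here is a rough sketch, assuming hypothetical bundled images named "people" and "mask" of identical size (the mask should be grayscale with no alpha channel):

```swift
import UIKit

// A rough sketch of the masking step. In the mask, black areas let the
// people image show through and white areas hide it.
func maskedPeopleImage() -> UIImage? {
    guard let people = UIImage(named: "people")?.cgImage,
          let maskSource = UIImage(named: "mask")?.cgImage,
          let dataProvider = maskSource.dataProvider,
          // CGImageMaskCreate: reinterpret the mask bitmap as a true image mask.
          let mask = CGImage(maskWidth: maskSource.width,
                             height: maskSource.height,
                             bitsPerComponent: maskSource.bitsPerComponent,
                             bitsPerPixel: maskSource.bitsPerPixel,
                             bytesPerRow: maskSource.bytesPerRow,
                             provider: dataProvider,
                             decode: nil,
                             shouldInterpolate: false),
          // CGImageCreateWithMask: apply the mask to the people image.
          let masked = people.masking(mask)
    else { return nil }
    return UIImage(cgImage: masked)
}
```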
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.
I'm not even sure how to phrase this question... but basically I have a small image, and I want to stretch it to cover the screen. Because of the enlargement, the image is fuzzy, and I would prefer it to look blocky, so I can see the individual pixels as squares. Is there any simple way to achieve this effect?
Draw the image as the content of a layer and then enlarge it by enlarging the layer (give it a scale transform, or even better, use the contentsRect property to make the small image area fill the layer). The layer is backed only by the original bitmap, so you'll now be seeing the bits of the bitmap. IIRC it's even blockier if you use shouldRasterize.
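For example, here is a sketch of the layer approach in Swift. Setting the layer's magnification filter to nearest-neighbor is an extra detail beyond the answer above, but it is what guarantees hard pixel edges when the layer is enlarged:

```swift
import UIKit

// A sketch: back a layer with the small bitmap and let the layer enlarge it.
func blockyLayer(for image: UIImage, frame: CGRect) -> CALayer {
    let layer = CALayer()
    layer.frame = frame
    layer.contents = image.cgImage          // backed only by the original bitmap
    layer.magnificationFilter = .nearest    // nearest-neighbor scaling: blocky, not fuzzy
    // To zoom into just part of the bitmap instead, use contentsRect
    // (unit coordinates), e.g. the top-left quarter of the image:
    // layer.contentsRect = CGRect(x: 0, y: 0, width: 0.25, height: 0.25)
    return layer
}
```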
I have this image:
What I want to do is add a UITapGestureRecognizer to this image (or split the image into the different parts it consists of and add a UITapGestureRecognizer to each part) in order to have different actions according to the leaf tapped. If I split the image into different images, one for each leaf, the UIImageViews will probably overlap, and tapping on one will be recognized as a tap on another. Having just one image means knowing which points of the screen belong to one leaf rather than another.
Any clues on how to do it would be really appreciated.
Thanks
Change your behavior by examining the gesture recognizer's locationInView:.
If you handle the image as one unit, implement this in your gesture recognizer callback to decide which "leaf" (if any) was tapped.
If you handle the image as multiple images, you could also implement it in your callback, or you could implement it in, e.g., your delegate's gestureRecognizerShouldBegin: to suppress events for touches outside the leaf as drawn.
EDIT: I didn't realize that you might also be looking for assistance on figuring out whether a point lies within a leaf. @PhillipMills is correct on this point: we need to know how you are drawing the image.
FOLLOW-UP: This is somewhat outside my area of expertise.
The easiest approach (from a hit-testing standpoint) is to do what @PhillipMills suggested, using Quartz drawing and CGPathContainsPoint(). If you have detailed graphics that you need rendered as a PNG, you could still construct a simple path that would be (virtually) overlaid to allow hit testing.
Your other options, AFAIK, are to do the hit testing mathematically (which basically means reimplementing CGPathContainsPoint() without a path) or to employ various tricks that look at the color of the pixels at your touch point. Googling will turn up some useful results if you go this route, but honestly, for a shape as simple as what you've drawn, just use some UIBezierPath code to recreate it.
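For instance, here is a sketch of that approach in Swift, where `leafPaths` is a hypothetical array of UIBezierPaths traced over the leaves in the tapped view's coordinate space:

```swift
import UIKit

// Decide which leaf (if any) was tapped by hit-testing the tap location
// against one path per leaf.
class LeafTapHandler {
    let leafPaths: [UIBezierPath]

    init(leafPaths: [UIBezierPath]) {
        self.leafPaths = leafPaths
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        let point = recognizer.location(in: recognizer.view)
        // First leaf whose path contains the tap, if any.
        if let index = leafPaths.firstIndex(where: { $0.contains(point) }) {
            print("Tapped leaf \(index)")
        }
    }
}
```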
Not sure if this will be helpful, but if you get stuck on figuring out which leaf was clicked, you could use an old image-map trick we used to use in CD-ROM projects for pixel-accurate click tracking on images.
You have your full-size image. Make a 25% (or smaller) scaled version of it. Fill each of the leaf regions you want to track clicks on with a different color; fill anything you want to ignore with black. When the full-size image is clicked, get the x/y coordinates and scale them by the same percentage as your scaled image. Then get the pixel color of the scaled image at the scaled x/y coordinate. The pixel color tells you which leaf was clicked.
Sounds clunky but it works really well and is fast.
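Here is a sketch of that trick in Swift, assuming a hypothetical color-coded, fully opaque `mapImage` at some known `mapScale` (e.g. 0.25):

```swift
import UIKit

// Render the color-coded map image once into an RGBA buffer, then look up
// the pixel under each scaled-down tap.
final class ImageMap {
    private let width: Int
    private let height: Int
    private let pixels: [UInt8]     // RGBA, row-major from the top-left
    private let mapScale: CGFloat   // scale of the map relative to the full-size image

    init?(mapImage: UIImage, mapScale: CGFloat) {
        guard let cgImage = mapImage.cgImage,
              // Let Core Graphics own the buffer, then copy it out after drawing.
              let context = CGContext(data: nil,
                                      width: cgImage.width, height: cgImage.height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: cgImage.width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                         width: cgImage.width, height: cgImage.height))
        guard let data = context.data else { return nil }

        width = cgImage.width
        height = cgImage.height
        self.mapScale = mapScale
        let bytes = data.assumingMemoryBound(to: UInt8.self)
        pixels = Array(UnsafeBufferPointer(start: bytes,
                                           count: cgImage.width * cgImage.height * 4))
    }

    /// Returns the map color under a tap given in full-size image coordinates.
    func color(atFullSizePoint point: CGPoint) -> UIColor? {
        let x = Int(point.x * mapScale)
        let y = Int(point.y * mapScale)
        guard x >= 0, y >= 0, x < width, y < height else { return nil }
        let i = (y * width + x) * 4
        return UIColor(red: CGFloat(pixels[i]) / 255,
                       green: CGFloat(pixels[i + 1]) / 255,
                       blue: CGFloat(pixels[i + 2]) / 255,
                       alpha: CGFloat(pixels[i + 3]) / 255)
    }
}
```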
(All that said, note that transparent areas of an image view do still trigger its gesture recognizers by default, so breaking the image up only helps if you also hit-test the alpha, as in the next suggestion.)
If you can break the shape apart into its constituent elements, then you can put each into its own layer and use the method discussed in this Stack Overflow discussion to determine which was touched: Hit Testing with CALayer using the alpha properties of the CALayer contents
I'm new to iPhone graphics and it's a bit daunting.
The problem: I have UIImageA and UIImageB. Both are the same picture, except UIImageB has all of its pixel values darkened.
I'd like to copy an arbitrary piece of UIImageA on top of UIImageB. The end result would be a dark image, with the part of the original image bright.
My guess is that I will need to:
Create a "path" that is the arbitrary shape to copy. I think I can figure this out.
Take UIImageA and somehow crop it or mask it to the path.
Copy the part of UIImageA onto UIImageB at the exact same position.
It's steps 2 and 3 that have me confused. I've seen many examples of cropping images to a rectangle, or masking images with another pre-defined image, but nothing that exactly does this.
Does anyone have any general pointers?
You could try Core Image.
You can do all that with CGImage in your environment. You can use a bitmap or layer context and then later render it on whatever view you wish.
There is a good Core Graphics Quartz tutorial at
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
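As a concrete sketch of steps 2 and 3, here is one way to do it in Swift with a bitmap context (via UIGraphicsImageRenderer, a convenience wrapper over Core Graphics). The function and parameter names are placeholders; it assumes both images are the same size and `path` is your arbitrary shape in image coordinates:

```swift
import UIKit

// Draw the dark image, clip to the path, then draw the bright image on
// top so only the clipped region of UIImageA shows through.
func highlight(bright: UIImage, over dark: UIImage, in path: UIBezierPath) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: dark.size)
    return renderer.image { context in
        dark.draw(at: .zero)             // UIImageB, the darkened base
        context.cgContext.saveGState()
        path.addClip()                   // clip to the arbitrary shape
        bright.draw(at: .zero)           // UIImageA lands only inside the clip,
                                         // at exactly the same position
        context.cgContext.restoreGState()
    }
}
```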
How can a bounding box be created for a UIImageView that is not a CGRect?
I would like to have objects in my view which should display images as well as detect collisions.
The issue is that I would like these objects to be whatever shape they actually are, rather than fitting them into a CGRect and detecting collisions in areas which are inside the box but are not part of the actual image.
How does one achieve this?
This is a non-trivial problem. The basics are: a CGRect is a rectangle, and a hit test inside a rectangle is fairly easy to understand. However, it sounds like you want a more complex shape. A UIImageView displays an image; it does not have any idea what shape you want to use for your collision test, so you are going to have to tell it.
One easy thing to do is to look at the alpha (transparency) values of the displayed image to define the shape. To answer the question "is a point hitting the image?", we find the location of the point within the image and return true if the alpha there is greater than 0. If you do this, you can use any image with a transparent background and the code will just work.
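For instance, here is a sketch of that alpha test as a UIImageView subclass, assuming the image fills the view's bounds (scale-to-fill) and is set before the first touch:

```swift
import UIKit

// A view that only "contains" points where its image is non-transparent.
class AlphaHitImageView: UIImageView {
    // Built once: one alpha byte per pixel, row-major from the top-left.
    private lazy var alphaMap: (width: Int, height: Int, alphas: [UInt8])? = {
        guard let cgImage = image?.cgImage else { return nil }
        let w = cgImage.width, h = cgImage.height
        guard let ctx = CGContext(data: nil, width: w, height: h,
                                  bitsPerComponent: 8, bytesPerRow: w * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: w, height: h))
        guard let data = ctx.data else { return nil }
        let bytes = data.assumingMemoryBound(to: UInt8.self)
        // Keep every fourth byte: the alpha channel of each RGBA pixel.
        return (w, h, (0..<(w * h)).map { bytes[$0 * 4 + 3] })
    }()

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard bounds.contains(point), let map = alphaMap else { return false }
        // Map the view point onto the underlying bitmap.
        let x = Int(point.x / bounds.width * CGFloat(map.width))
        let y = Int(point.y / bounds.height * CGFloat(map.height))
        guard x < map.width, y < map.height else { return false }
        return map.alphas[y * map.width + x] > 0   // hit only where the image is visible
    }
}
```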
If that will not work for you, then you can also run a hit test on a point and a polygon; this post covers that in detail:
How can I determine whether a 2D Point is within a Polygon?
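That linked answer boils down to the classic ray-casting test: cast a horizontal ray from the point and count how many polygon edges it crosses; an odd count means the point is inside. A sketch in Swift:

```swift
import CoreGraphics

// Ray-casting point-in-polygon test. `vertices` are the polygon's corners
// in order (clockwise or counter-clockwise).
func polygon(_ vertices: [CGPoint], contains point: CGPoint) -> Bool {
    var inside = false
    var j = vertices.count - 1
    for i in 0..<vertices.count {
        let a = vertices[i], b = vertices[j]
        // Toggle on each edge that a rightward horizontal ray from `point` crosses.
        if (a.y > point.y) != (b.y > point.y),
           point.x < (b.x - a.x) * (point.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}
```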