iOS: how to make a UIImage look pixelated instead of fuzzy

I'm not even sure how to phrase this question... but basically I have a small image, and I want to stretch it to cover the screen. Because of the enlargement, the image is fuzzy, and I would prefer it to look blocky, so I can see the individual pixels as squares. Is there any simple way to achieve this effect?

Draw the image as the content of a layer and then enlarge it by enlarging the layer (give it a scale transform, or even better, use the contentsRect property to make the small image area fill the layer). The layer is backed only by the original bitmap, so you'll now be seeing the bits of the bitmap. IIRC it's even blockier if you use shouldRasterize.
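A minimal sketch of that approach in Objective-C. One detail worth adding that the answer doesn't spell out, but that matters for blockiness: set the layer's magnificationFilter to kCAFilterNearest, since the default linear filter will smooth the enlarged pixels. The smallImage and containerView names below are placeholders.

```objc
#import <QuartzCore/QuartzCore.h>

// smallImage (UIImage) and containerView (UIView) are placeholder names.
CALayer *layer = [CALayer layer];
layer.contents = (__bridge id)smallImage.CGImage;
layer.frame = CGRectMake(0, 0, smallImage.size.width, smallImage.size.height);

// Nearest-neighbor magnification keeps the enlarged pixels as hard
// squares instead of the default linear smoothing.
layer.magnificationFilter = kCAFilterNearest;

// Enlarge the layer itself; the backing store stays at the original
// bitmap size, so each source pixel becomes a visible block.
CGFloat scale = containerView.bounds.size.width / smallImage.size.width;
layer.position = CGPointMake(CGRectGetMidX(containerView.bounds),
                             CGRectGetMidY(containerView.bounds));
layer.transform = CATransform3DMakeScale(scale, scale, 1);
[containerView.layer addSublayer:layer];
```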

Related

How to merge an image into another image to fill a shape?

I want to merge one image into another image within a shape. For example:
1- People image
2- Shape image
How do I draw that? I've already implemented the merging, but it doesn't fill the shape.
It's possible to do this using the masking functions in the Quartz 2D framework. It's a little more involved than using the higher-level image functions of UIKit, but Quartz 2D gives you a lot more power to do cool graphics techniques.
The relevant Apple Developer guide to this can be found here: https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_images/dq_images.html
For this example, you'd want to create a mask shape for the inside part of the shape image. There are two ways you can do this. One way is to use image-editing software to create a second mask image, the same size as your shape image, with pure black in the area where you want the people image to appear and white where you don't want it to appear. In this example, the black would be the area inside the blue shape. It is important not to crop this image, or else the two won't match up exactly.
The other way to create the mask image is to do it dynamically, based on the shape image, and honestly, this is the way I would do it. It means you're including fewer images in your app, and if you ever change the shape image, you don't have to recreate the mask image as well. You could do this by making a small change to the way your shape image is formatted. You would need a format that supports transparency (PNG is preferred), so that the part of the image outside the shape, which is white in your JPEG, is alpha-transparent. Make sure the section in the center of the image is white (really, any color that is not used in the wanted part of the shape image would work, but I'll say white for this example), and that no part of it ends up less than pure white after image compression.
You will then use Quartz to select the area that's white and create a mask from that. This technique is a bit more involved, but what you need can be found in the document linked above. Because of this, you might start with a static mask image, and then switch to the more involved technique once you've got the code for the first technique working.
When you have your masking image, you would create the mask itself with the function CGImageMaskCreate. You can then apply the mask to the people image using the function CGImageCreateWithMask, which will give you an image with the person's portrait, with the correct shape cropped from the center.
Finally, you would display this in your app by placing the masked people image on top of the shape image, and voila, you'll have what you're looking for.
Also, keep in mind, when using the Quartz 2D framework, you'll have to make sure you release images when they are no longer needed, or else you could have memory leaks.
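A minimal sketch of those two calls, assuming maskImage is the grayscale mask described above and peopleImage is the portrait (both placeholder names):

```objc
// maskImage and peopleImage are placeholder UIImages.
// Note: the mask image must be grayscale with no alpha channel for
// its raw bitmap data to be usable directly like this.
CGImageRef maskSource = maskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                    CGImageGetHeight(maskSource),
                                    CGImageGetBitsPerComponent(maskSource),
                                    CGImageGetBitsPerPixel(maskSource),
                                    CGImageGetBytesPerRow(maskSource),
                                    CGImageGetDataProvider(maskSource),
                                    NULL,   // no decode array
                                    false); // don't interpolate

// Apply the mask: black areas of the mask let the photo show through.
CGImageRef maskedRef = CGImageCreateWithMask(peopleImage.CGImage, mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];

// Quartz objects aren't managed by ARC; release them when done.
CGImageRelease(mask);
CGImageRelease(maskedRef);
```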

Blur Partial Part of Image

I am new to iOS development. After googling, I found that it is easy to blur a whole image, but difficult to blur a specific part of it, such as a rectangular or circular region. So please help me: how can I blur a specific part of an image rather than the whole image?
Thanks in advance.
Blur the whole image, then crop to the part you care about. You can use a mask for non-rectangular/non-sharp-edged blurs, but don't skip the crop.
The lovely, but sometimes tricky, thing about Core Image is that it's extremely lazy. It doesn't work from the start to the end; it's more of a pull model, working from the last thing you asked for all the way back to the original rasters. Moreover, it won't actually filter any pixels you have not asked for.
So, in your case, a crop means not asking for any blurred pixels outside of the crop. Since you didn't ask for them, they don't get blurred: the blur only runs on the pixels inside the crop rectangle.
Masking works differently; by definition, it needs to look at every pixel in the mask image, and I would be surprised if it didn't also look at every pixel in the source (even to multiply it by zero). This is why you should still crop, even with a mask.
Note that the blurred-and-cropped portion of the image will still be where it was in the original image. Core Image doesn't copy or move the pixels within the image, because that would be expensive; instead, it returns an image with a different extent, namely the crop rectangle. You'll want to retrieve that extent and subtract its origin from the coordinates where you want to draw the image, or use an affine transform filter, though that would probably also be expensive.
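A minimal sketch of blur-then-crop with Core Image, assuming sourceImage is a UIImage and regionToBlur is the rect of interest (in Core Image's coordinate space, which has its origin at the bottom left):

```objc
#import <CoreImage/CoreImage.h>

// sourceImage (UIImage) and regionToBlur (CGRect) are placeholders.
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];

// Cropping here means Core Image never computes blurred pixels
// outside this rectangle.
CIImage *cropped = [blur.outputImage imageByCroppingToRect:regionToBlur];

// The cropped image's extent is still positioned within the original
// image's coordinate space, so render exactly that extent.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [context createCGImage:cropped fromRect:cropped.extent];
UIImage *result = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);
```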

Create a masking layer without direct image processing?

I'm basically trying to achieve this effect. It can be done with, essentially, a PNG with a transparent hole cut through it, stacked on top of a UIView showing photo.jpg. Or, I've also seen a method where you can create a mask directly with CGImageMaskCreate. I don't want to use that approach, because I want the user to be able to interact with the photo.jpg layer (by moving it, rotating it, etc.).
It's essentially two UIViews stacked directly on top of each other.
However, what if, rather than using blue, I wanted to use another color, or even pattern an image with [UIColor colorWithPatternImage:] for the masking layer? I wouldn't want to make a million different PNGs for each case.
Would I need some way to programmatically recreate my mask? Is there some way to convert my mask shapes into code? Any help is appreciated. Thanks
CALayer has a property called mask, which is another CALayer that defines the mask to use. You could use a CAShapeLayer to define the mask, then set it as the mask of another layer that renders your obscuring image/color/pattern/whatever. You could also use a regular CALayer as the mask, with your semi-transparent image as its contents; it depends on whether you want the ability to customize the size and shape of the hole.
Caveats: CAShapeLayer is slower than a normal layer, and masking is slower than not masking, so you'll want to make sure the performance is acceptable for your case. You may also want to try the shouldRasterize flag, though it only improves performance (at the cost of memory) as long as the layer is static (i.e., not animating).
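A minimal sketch of the CAShapeLayer approach, assuming overlayView is the view whose color or pattern should have a hole cut in it (the pattern asset name and the hole rect here are arbitrary placeholders):

```objc
#import <QuartzCore/QuartzCore.h>

// overlayView and the "pattern" asset name are placeholders.
CALayer *overlay = overlayView.layer;
overlay.backgroundColor =
    [UIColor colorWithPatternImage:[UIImage imageNamed:@"pattern"]].CGColor;

// A path covering the whole layer, plus an inner oval; with the
// even-odd fill rule, the oval becomes a hole in the mask.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:overlay.bounds];
[path appendPath:[UIBezierPath bezierPathWithOvalInRect:
                     CGRectInset(overlay.bounds, 60.0, 60.0)]];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = overlay.bounds;
maskLayer.path = path.CGPath;
maskLayer.fillRule = kCAFillRuleEvenOdd;
overlay.mask = maskLayer;

// To change the hole's size or shape later, just update maskLayer.path;
// no new PNGs needed.
```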

How do I determine if a CALayer is fully covered by other CALayers?

I have a series of layers that are randomly placed on the screen. As each layer is added, it is positioned on top of all of the others.
Eventually, a layer is completely covered by other layers. At this point, I'd like to remove the layer from memory.
Is there any way to know when a layer is covered (either 100% or some fraction) by other layers?
Each layer has a rotation transform applied to it, so I cannot accurately make comparisons amongst all of the layers' frames.
You could do a pixel-test to find out. Init a grayscale context the size of your screen (if possible, it only needs to be 1-bit, though I don't know if iOS actually supports that configuration). Fill the area with black. Fill the area your layer covers with white (you can take the layer's transform, set it as the CTM, and then fill the rect for your layer). Then iterate over all other layers and do the same thing, except filling with black again. Once that's done, you can scan all the pixels in the context, looking to see if any of them are white. If you find a white pixel, the layer is still visible. Otherwise, it's not.
Naturally, this assumes that all your layers are completely opaque and fill their entire bounds.
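A minimal sketch of that pixel test, assuming targetLayer is the layer being tested and layersAbove are the layers stacked above it (8-bit grayscale is used here, since a 1-bit context may not be available):

```objc
#import <QuartzCore/QuartzCore.h>

// Fill one layer's footprint into the context at the given gray level,
// honoring the layer's position, anchor point, and 2D transform.
static void FillLayerFootprint(CGContextRef ctx, CALayer *layer, CGFloat gray) {
    CGContextSaveGState(ctx);
    CGContextSetGrayFillColor(ctx, gray, 1.0);
    CGContextTranslateCTM(ctx, layer.position.x, layer.position.y);
    CGContextConcatCTM(ctx, [layer affineTransform]);
    CGContextTranslateCTM(ctx,
                          -layer.bounds.size.width  * layer.anchorPoint.x,
                          -layer.bounds.size.height * layer.anchorPoint.y);
    CGContextFillRect(ctx, layer.bounds);
    CGContextRestoreGState(ctx);
}

// Returns YES if any part of targetLayer would still be visible.
static BOOL LayerStillVisible(CALayer *targetLayer, NSArray *layersAbove,
                              CGSize canvasSize) {
    size_t width  = (size_t)canvasSize.width;
    size_t height = (size_t)canvasSize.height;
    uint8_t *pixels = calloc(width * height, 1); // starts all black
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                             width, gray,
                                             (CGBitmapInfo)kCGImageAlphaNone);

    FillLayerFootprint(ctx, targetLayer, 1.0);      // paint target white
    for (CALayer *layer in layersAbove) {
        FillLayerFootprint(ctx, layer, 0.0);        // cover with black
    }

    // Any remaining white pixel means part of the layer shows through.
    BOOL visible = NO;
    for (size_t i = 0; i < width * height; i++) {
        if (pixels[i] > 0) { visible = YES; break; }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    free(pixels);
    return visible;
}
```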

Changing color of part of an image

I have a PNG image file that is partly opaque and partly transparent. I display it in a UIImageView as a mask of sorts over another UIImageView layered behind it (as a sibling subview of a common superview). It gives me perfect borders around something painted with a finger on the lower UIImageView in my stack of UIImageViews. Perhaps there are better ways to do this, but I am new-ish, and this is the best way I've come up with so far.

Nonetheless, my app is in the App Store, and now I want to enhance it to provide more images to use as the mask over the finger painting. But I don't want to bloat my bundle size by adding more static mask images as I did for the initial implementation, and I don't want to spend lots of time in Photoshop making 100 masks. I'd rather programmatically change the color of the mask, without affecting the clear portion in the middle, which is not a simple rectangle or circle but a complex shape.

So my question is this: how can I change the colored portion of my loaded image without affecting the clear portion in the middle? Is there a reasonably easy way to do this? Essentially, I want to do what is described in this post (How would I tint an image programmatically on the iPhone?) without affecting the clear portion of my image. Thanks for any insights.
Have a look at the Tinted Image sample project. Try out the different modes until you get the effect you want.
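For the specific "tint the opaque part, leave the clear part clear" case, here's a minimal sketch of one common approach (not necessarily what the sample project does): draw the image, then fill with the tint color using the kCGBlendModeSourceIn blend mode, which deposits color only where the existing pixels are opaque. maskImage and tint are placeholder names.

```objc
// maskImage (the PNG with the transparent hole) and tint (a UIColor)
// are placeholders.
UIGraphicsBeginImageContextWithOptions(maskImage.size, NO, maskImage.scale);
CGRect rect = (CGRect){ CGPointZero, maskImage.size };

// Draw the original image first; its alpha channel defines where
// the tint is allowed to land.
[maskImage drawInRect:rect];

// Source-in fills replace color only where the destination is opaque,
// so the clear hole in the middle stays clear.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);
[tint setFill];
CGContextFillRect(ctx, rect);

UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```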