Drawing an image from CALayers - iPhone

I'm building a censoring app of sorts. I've gotten as far as being able to completely pixelate an image taken with my iPhone.
But I want to achieve in the end an image like this: http://images-mediawiki-sites.thefullwiki.org/11/4/8/8/8328511755287292.jpg
So my thought was to fully pixelate my image and then add a mask on top of it to achieve the desired effect. In terms of layers it would be: originalImage + maskedPixelatedVersionOfImage. I was thinking of animating the mask when touching the image, scaling it to the desired size: the longer you hold your finger on the image, the bigger the mask becomes...
After some searching, I guess this can be done using CALayers and CAAnimation. But how do I then composite those layers into an image that I can save to the photo album on the iPhone?
Am I taking the right approach here?
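To illustrate, the kind of mask animation I have in mind is something like this sketch (assuming a maskLayer like the one in my code below; all values are placeholders):
// Grow the mask while the finger is held down.
CABasicAnimation *grow = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
grow.fromValue = [NSNumber numberWithFloat:1.0f];
grow.toValue = [NSNumber numberWithFloat:3.0f];   // final mask size
grow.duration = 2.0;                              // seconds of touch-and-hold
grow.fillMode = kCAFillModeForwards;
grow.removedOnCompletion = NO;                    // keep the final scale
[maskLayer addAnimation:grow forKey:@"growMask"];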
EDIT:
Okay, I guess Ole's solution is the correct one, though I'm still not getting what I want. The code I use is:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];
// Mask image ends with 0.15 opacity on both sides. Set the background color of the layer
// to the same value so the layer can extend the mask image.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0,0, img.size.width, img.size.height);
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100,150, maskImg.size.width, maskImg.size.height);
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So on my imageView I called setImage: with the original (unedited) version of the photo. On top of that I add a sublayer, mosaicLayer, which has a mask property, maskLayer. I thought that by rendering the root layer of the imageView, everything would turn out OK. Is that not correct?
Also, I figured out something else: my mask is stretched and rotated, which I'm guessing has something to do with imageOrientation. I noticed this by accidentally saving mosaicLayer to my library, which also explains the problem I had where the mask seemed to mask the wrong part of my image...
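For the orientation issue, what I'm now trying is to redraw the photo so its imageOrientation is baked into the pixels before handing its CGImage to the layer, since CALayer contents ignore UIImage orientation metadata. A sketch of that assumption, not a confirmed fix:
// Redraw the image so imageOrientation is applied to the pixels themselves.
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp)
        return image; // already upright, nothing to do
    UIGraphicsBeginImageContext(image.size);
    // drawInRect: honors imageOrientation, so the output bitmap is upright.
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}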

To render a layer tree, put all layers in a common container layer and call:
UIGraphicsBeginImageContext(containerLayer.bounds.size);
[containerLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

If you're willing to drop support for pre-iPhone 3GS devices (the original iPhone and the iPhone 3G), I'd suggest using OpenGL ES 2.0 shaders for this. While it may be easy to overlay a CALayer containing a pixelated version of the image, I think you'll find the performance to be lacking.
In my tests, performing a simple CPU-based calculation on every pixel of a 480 x 320 image led to a framerate of about 4 FPS on an iPhone 4. You might be able to sample only a fraction of those pixels to achieve the desired effect, but it will still be a slow operation to redraw a pixelated image to match the live video.
Instead, if you use an OpenGL ES 2.0 fragment shader to process the incoming live video image, you should be able to take in the raw camera image, apply this filter selectively over the desired area, and either display or save the resulting camera image. This processing will take place almost entirely on the GPU, which I've found to do simple operations like this at 60 FPS on the iPhone 4.
While getting a fragment shader to work just right can require a little setup, you might be able to use this sample application I wrote for processing camera input and doing color tracking as a decent starting point. You might also look at the touch gesture I use there, where I take the initial touch-down point as the location to center an effect on, and the subsequent drag distance to control the strength or radius of the effect.
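As a rough illustration of the idea (this is not the code from that sample app), a selective-pixelation fragment shader in GLSL could look something like the following; all of the uniform names are made up:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;        // camera texture
uniform highp vec2 center;           // touch location, in texture coordinates
uniform highp float radius;          // grows with touch-and-hold
uniform highp float pixelWidth;      // size of one mosaic block

void main()
{
    // Snap coordinates to a coarse grid to produce the mosaic blocks.
    highp vec2 snapped = textureCoordinate - mod(textureCoordinate, vec2(pixelWidth));
    // Pixelate only inside the circular masked region around the touch.
    highp float dist = distance(center, textureCoordinate);
    highp vec2 coord = (dist < radius) ? snapped : textureCoordinate;
    gl_FragColor = texture2D(videoFrame, coord);
}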

Related

How to scale only specific parts of image in iPhone app?

I want to scale an image in my iPhone app, but not the whole thing. I just want to scale specific parts, like the bottom part or the middle part... How do I do it?
Please help.
Thanks
It sounds like you want to do a form of 9-slice scaling or 3-slice scaling. Let's say you have the following image:
and you want to make it look like this:
(the diagonal end pieces do not stretch at all, the top and bottom pieces stretch horizontally, and the left and right pieces stretch vertically)
To do this, use -stretchableImageWithLeftCapWidth:topCapHeight: in iOS 4.x and earlier, or -resizableImageWithCapInsets: starting with iOS 5.
UIImage *myImage = [UIImage imageNamed:@"FancyButton"];
UIImage *myResizableImage = [myImage resizableImageWithCapInsets:UIEdgeInsetsMake(21.0, 13.0, 21.0, 13.0)];
[anImageView setImage:myResizableImage];
To help visualize the scaling, here is an image showing the above cap insets:
I'm not aware of any way to adjust the scale of just one part of a UIImage. I'd approach this slightly differently: create separate images from your primary image using CGImageCreateWithImageInRect, and then scale the separate images at the different rates you require (see the sketch after the links below).
See:
Cropping a UIImage
CGImage Reference
Quartz 2D Programming Guide
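A minimal sketch of that approach, assuming a sourceImage; the rects and sizes here are placeholders you'd replace with the bands you actually want to scale:
// Crop one horizontal band out of the source image.
CGImageRef bandRef = CGImageCreateWithImageInRect(sourceImage.CGImage,
                                                  CGRectMake(0, 100, sourceImage.size.width, 50));
UIImage *band = [UIImage imageWithCGImage:bandRef];
CGImageRelease(bandRef);
// Redraw the band at triple its original height.
UIGraphicsBeginImageContext(CGSizeMake(sourceImage.size.width, 150));
[band drawInRect:CGRectMake(0, 0, sourceImage.size.width, 150)];
UIImage *stretchedBand = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();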

How to create a single image from an overlayed image and a picture taken with iPhone?

There are countless apps out there that do this... but I'm curious as to what suggested way(s) exist for producing the highest-quality image.
Example of what I'm looking to do:
Be able to overlay an image of a mustache on top of the iPhone's camera.
Optionally, be able to resize/rotate that image.
Take a picture and superimpose the overlaid image (the mustache in this case) on the picture, so that a single image is produced.
Thanks much.
Here is an article on overlaying an image on the camera: http://mobile-augmented-reality.blogspot.com/2009/09/overlaying-views-on-uiimagepickercontro.html. For rotating and resizing the mustache, look at this: http://icodeblog.com/2010/10/14/working-with-uigesturerecognizers/. After that, you can use the resulting UIImage from the code below for whatever you need. Change self.view to the camera view.
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
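If screen-resolution output isn't good enough, a sketch of compositing at the photo's full resolution instead; photo, mustache, and mustacheRectInPhotoCoordinates are placeholders you'd compute from your view transform:
// Draw the full-resolution photo, then the overlay on top of it.
UIGraphicsBeginImageContext(photo.size);
[photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
[mustache drawInRect:mustacheRectInPhotoCoordinates];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();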

The antialiasing of a rotated CGImage/CGLayer seems jaggy, a UIImageView's is not

I need to mask a "texture" image with a rotated greyscale image.
I found out that I have to do it with CGImages or CGLayers (if there is a simpler way using only UIImageViews, please let me know about it).
My problem is simple: the antialiasing of any rotation-transformed CG content is quite jaggy... but the antialiasing of a rotation-transformed UIImageView is practically perfect. How can I produce those beautifully antialiased rotations?
I've uploaded a "proof" involving actual iPhone Simulator screenshots, to see what am I talkin' about: http://gotoandplay.freeblog.hu/files/Proof.png
I've tried using CGImages, CGLayers, and UIImageViews "captured" with renderInContext; I've tried setting CGContextSetInterpolationQuality to high, and I also tried CGContextSetAllowsAntialiasing / CGContextSetShouldAntialias, but every case returned the same jaggy result.
I'm planning to learn OpenGL next year, but this project has to be released using Core Graphics only. Please let me know how to get a perfectly rendered rotated image; I just can't accept that it's impossible.
To add a 1px transparent border to your image, use this:
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext(imageRect.size);
// Draw the image inset by 1 point on every side, leaving a transparent border.
[image drawInRect:CGRectMake(1, 1, image.size.width - 2, image.size.height - 2)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Have you tried adding a clear 1-pixel border around your image? I've heard that recommended as a trick to avoid aliasing, by giving Core Graphics some room to work with when blending the edges.
I am having a similar problem, looks like I'm going to move it over to OpenGL ES as well. I can't nail down an effective solution that doesn't hurt performance.
For reference for future Core Graphics explorers: putting a 1-pixel transparent border around the image did make a noticeable improvement in my experiments, but it appears that, as Eonil mentioned, you end up with multiple stages of antialiasing/smoothing/interpolation working against each other. I.e., the CGLayer does some interpolation for its rotation, then the context it's being drawn into does some interpolation/antialiasing, and so forth, until it ends up looking pretty rough.
I actually got better results by disabling interpolation and antialiasing on the destination context, though it was still obviously jaggy (fewer artifacts overall, though). I was able to achieve the best overall appearance by enabling interpolation and antialiasing when constructing the CGLayer, and disabling both on the destination context when re-drawing it. This approach, obviously, is fraught with other problems.
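In code, that combination looks roughly like this; rotatedLayer and destinationContext are placeholders for your own CGLayer and target context:
// High-quality settings while drawing the rotated image into the CGLayer...
CGContextRef layerContext = CGLayerGetContext(rotatedLayer);
CGContextSetInterpolationQuality(layerContext, kCGInterpolationHigh);
CGContextSetShouldAntialias(layerContext, true);
// ... rotate and draw the image into layerContext here ...

// ...then no interpolation/antialiasing when compositing into the destination.
CGContextSetInterpolationQuality(destinationContext, kCGInterpolationNone);
CGContextSetShouldAntialias(destinationContext, false);
CGContextDrawLayerAtPoint(destinationContext, CGPointZero, rotatedLayer);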

Resizing an image with stretchableImageWithLeftCapWidth

I'm trying to resize an image using stretchableImageWithLeftCapWidth:. It works in the simulator, but on the device, vertical green bars appear.
I've tried using imageNamed, imageWithContentsOfFile, and imageWithData to load the image; it doesn't change anything.
UIImage *bottomImage = [[UIImage imageWithData:
[NSData dataWithContentsOfFile:
[NSString stringWithFormat:@"%@/bottom_part.png",
[[NSBundle mainBundle] resourcePath]]]]
stretchableImageWithLeftCapWidth:27 topCapHeight:9];
UIImageView *bottomView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 200+73, 100, 73)];
[self.view addSubview:bottomView];
UIGraphicsBeginImageContext(CGSizeMake(100, 73));
[bottomImage drawInRect:CGRectMake(0, 0, 100, 73)];
UIImage *bottomResizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
bottomView.image = bottomResizedImage;
See the result below; the green bars shouldn't be there, and they don't appear in the simulator.
Screenshot: http://www.quicksnapper.com/files/5161/96232162149F1C14751048_m.png
Did you figure this out?
It seems like it might be a bug in UIGraphicsGetImageFromCurrentImageContext. If I draw an image that has transparency, I get red/green artifacts. These only appear on the device (not in the sim). Also, if I remove the transparency from the image, the artifacts go away.
Update: Some more weirdness. I was using PNGs before, so I tried a transparent GIF instead. Using a GIF, the artifact problem shows up in the sim as well.
Victory! Found a solution:
Turn off 'Compress PNG Files' in the build settings for your project. Disabling this makes the PNG transparency work without any artifacts.
It seems like you're writing a lot of unnecessary code, but perhaps that's just because it's out of context and there's more to it that I'm missing.
To get the image:
[[UIImage imageNamed:@"bottom_part.png"] stretchableImageWithLeftCapWidth:27 topCapHeight:9];
In what part of your code are you displaying this image? I.e., in which method are you drawing the images above?
Why not just use a UIImageView and put the image in there? Then you don't need to do any of the image context drawing etc.
In my case, I had a UIImageView that I stretched horizontally, and I found a strange vertical white line in the stretched image.
The solutions above didn't work for me. However, the following did:
Cast the width value to an int value:
myNewViewFrame.size.width = (int)newWidth;
myView.frame = myNewViewFrame;
Hope it works for you guys as well...
I spent 5 hours debugging this last night; disabling PNG compression in Xcode didn't do anything for me.
For anyone else encountering vertical green bars/lines/artifacts with stretchableImageWithLeftCapWidth (in the UIImage API): this post is for you.
I have a 21x30 image for a custom bar-button background that I want to widen, but I got the same green stripes as the OP. I found a PNG created with Photoshop and that one worked fine; mine are made with GIMP.
Anyway, stripping ALL chunks from the files (except the three essential ones) made no difference. Neither did disabling PNG compression.
What did work for me was this:
Add ONE empty row of pixels above my image (which is now 21x31).
Set topCapHeight:0 when creating the scaled image.
When I use my scaled image, I draw it into a CGContext, which is later used to make a UIImage. I use this to draw:
[image drawInRect:CGRectMake(0,-2,width,32)];
This makes the problem go away (for me).
I assume that the bug/issue has to do with not scaling vertically when drawing, so I force scaling of the first source image row (into two rows), which is drawn outside my composition.
I hope this helps someone save 5 hours.
/Krisb
I stopped using stretchableImageWithLeftCapWidth in favor of UIView's much more well-behaved contentStretch property. I've never seen the green bars you are describing, but in general I would recommend using contentStretch instead of stretchableImageWithLeftCapWidth.
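For anyone unfamiliar with it, contentStretch takes a rect in unit (0..1) coordinates. A sketch with illustrative values that stretches only the middle of the image, leaving the caps at their natural pixel size:
UIImageView *bottomView = [[UIImageView alloc] initWithImage:
    [UIImage imageNamed:@"bottom_part.png"]];
bottomView.frame = CGRectMake(10, 273, 100, 73);
// Only the middle region stretches; the edges keep their pixel size.
bottomView.contentStretch = CGRectMake(0.25, 0.1, 0.5, 0.8);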
I tried disabling PNG compression as Nate proposed, but that didn't work for me (iOS 3.1.3). I then tried using TIFF images instead of PNGs, which works.
I have found similar issues with contentStretch (on any UIView that has drawn content) when using a value of (0.5, 0.5, 0, 0), i.e. stretching on the center pixel.
I have found that only the iPhone 3G (and possibly the original iPhone) exhibits this problem; the iPhone 4 and 3GS are OK. So I assume this is a problem with the old graphics hardware.
A way I found around the problem was to stretch a slightly larger center area, e.g. (0.4, 0.4, 0.1, 0.1), as in the line below.
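In code, with the value from above (the view name is a placeholder):
// Stretch a small region near the center instead of the exact center pixel.
myView.contentStretch = CGRectMake(0.4, 0.4, 0.1, 0.1);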

After masking, the image looks blurry

I am developing a game where I want to magnify the part of the image where a magnifier image is placed.
For that I am using masking. After masking, I zoom the image, but it looks blurry. I want the image to be clearer, as if we're looking through a rifle magnifier. If anyone has a solution, kindly reply.
Are you sure that the problem is the masking?
Perhaps your resources are too low-resolution? High-resolution images scaled down always look better than low-resolution images scaled up.
Maybe you need to look at the problem backwards: make the image viewed through the rifle magnifier [scope?] display at 1:1 resolution, and when it's not viewed through the scope, show it zoomed out (1:2 resolution?). That way your "normal" mode is the zoomed-out mode, and the "magnified view" is actually just the image at 1:1.
If you have a UIImage whose size is 293x184 but you create a UIImageView with an initial size of 40x30, the iPhone SCALES the UIImage to fit, according to the contentMode property. The default contentMode is UIViewContentModeScaleToFill, which scales your image.
So even though you started with a large image, it is now only 40x30 and rendered at 40x30. When you zoom, it is STILL 40x30, but rendered at some larger size, which is what causes the blur.
One solution would be to replace the image after the zoom, then you would have a completely new UIImage at full resolution.
// Here self.view is assumed to be the UIImageView showing the photo.
[self.view setFrame:reallyBigFrame];
[self.view setImage:newUIImage];
Another would be to initially place the UIImage in a full-size 293x184 UIImageView, then use an affine transform to scale it down:
view.transform = CGAffineTransformScale(view.transform, 0.25, 0.25); // display at 25%
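A sketch of the 1:1 idea from above: crop the region under the scope out of the full-resolution source so the magnified view never upscales. fullResImage, scopeRectInImageCoords, and magnifierView are placeholders:
// Crop the full-resolution pixels under the magnifier and show them at 1:1.
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullResImage.CGImage,
                                                     scopeRectInImageCoords);
magnifierView.image = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);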