Zoom Loupe on UIImageView - iPhone

Does anyone know of a way to implement the zoom loupe functionality of a UITextField on the iPhone in a UIImageView?
Part of the app I'm building allows a user to draw a line on a UIImage, a process that might involve precision positioning of the points. In order to help the user, I want to provide the zoom loupe as seen when positioning the cursor in a UITextField. Does anyone have any idea as to how to do this? Any pointers to relevant docs?
Cheers!

This tutorial is well written and includes sample code:
http://www.craftymind.com/2009/02/10/creating-the-loupe-or-magnifying-glass-effect-on-the-iphone/
You need to comment out the line that imports CustomView.h and then it compiles and runs fine.

I do not know of any relevant docs.
What I would do to implement something like this: show a UIImage representation of your view over the view itself, then clip that overlay UIImage to just the portion of the view you want to enlarge. You can enlarge the portion using CGAffineTransformScale.
You can render an existing view into a UIImage using CALayer's renderInContext: method:
// Render the view's layer into an image context to capture a UIImage snapshot.
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:context];
UIImage *imageRepresentation = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the Begin call so the context is cleaned up
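From there, a hedged sketch of the overlay step described above: crop the snapshot to the region of interest and display it enlarged. The region, zoom scale, and loupe positioning below are arbitrary stand-in values, not from the original answer.
// Crop the snapshot to the area under the finger.
CGRect regionOfInterest = CGRectMake(60, 120, 80, 80); // stand-in region
CGFloat zoomScale = 2.0;                               // stand-in scale
CGImageRef croppedRef = CGImageCreateWithImageInRect(imageRepresentation.CGImage, regionOfInterest);
UIImageView *loupe = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:croppedRef]];
CGImageRelease(croppedRef);
// Enlarge the clipped portion with CGAffineTransformScale, as described above,
// and float the loupe just above the magnified region.
loupe.transform = CGAffineTransformScale(CGAffineTransformIdentity, zoomScale, zoomScale);
loupe.center = CGPointMake(CGRectGetMidX(regionOfInterest), CGRectGetMidY(regionOfInterest) - 60);
[self.view addSubview:loupe];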

Related

How to create a blurry to non-blurry animation

I want to create a magnifying glass type effect in my iPhone app, where the text goes from blurry to not blurry in an animation. Can anyone think of a way to do that? Thanks.
In iOS 6 we finally have the ability to make an image literally blurry, using a CIFilter. So you could, if you really wanted to, make an image of the area to be blurred, blur it with CIFilter, and superimpose that blurred image. Then you could use a timer or CADisplayLink to ask for successive "frames" of the animation, and each time you would do the same thing, only creating a less and less blurred image and showing it.
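A minimal sketch of that idea, assuming iOS 6's CIGaussianBlur filter; the method name and the idea of driving the radius down from a timer are illustrative, not a definitive implementation:
#import <CoreImage/CoreImage.h>

// Returns a blurred copy of the image; call repeatedly with a shrinking
// radius (from a timer or CADisplayLink) to animate blurry -> sharp.
- (UIImage *)blurredVersionOfImage:(UIImage *)image radius:(CGFloat)radius {
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@(radius) forKey:kCIInputRadiusKey];
    CIContext *context = [CIContext contextWithOptions:nil];
    // Crop back to the input's extent; the blur pushes pixels past the edges.
    CGImageRef cgImage = [context createCGImage:blur.outputImage fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}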
This produces the loupe effect:
- (void)drawRect:(CGRect)rect {
    // Here we're just doing some transforms on the view we're magnifying,
    // and rendering that view directly into this view,
    // rather than the previous method of copying an image.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, self.frame.size.width * 0.5, self.frame.size.height * 0.5);
    CGContextScaleCTM(context, 1.5, 1.5);
    CGContextTranslateCTM(context, -touchPoint.x, -touchPoint.y);
    [self.viewToMagnify.layer renderInContext:context];
}
Reference: http://coffeeshopped.com/2010/03/a-simpler-magnifying-glass-loupe-view-for-the-iphone
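For context, a hedged sketch of how a loupe like the one above might be driven from the touch handlers; loupeView, touchPoint, and viewToMagnify as property names are assumptions based on the snippet, not part of the linked post.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Track the finger in the magnified view's coordinate space...
    self.loupeView.touchPoint = [touch locationInView:self.loupeView.viewToMagnify];
    // ...keep the loupe floating just above the finger...
    self.loupeView.center = CGPointMake(self.loupeView.touchPoint.x, self.loupeView.touchPoint.y - 60);
    // ...and trigger -drawRect: to re-render the magnified content.
    [self.loupeView setNeedsDisplay];
}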

Hide / reveal UIImage based on finger path?

I hope that makes sense. I'll try to explain it.
I have a UIImageView on screen, and am wondering how I can take the area after "drawing" on it with a finger, and remove that section from the UIImage, or create a separate UIImage from the selection.
I'm not looking for code (unless you have it =] ), just an idea of how to go about doing it. If you have tips, I'd be very grateful, thanks.
If I understand your question, I think I would add a transparent UIView as a subview over the top of the UIImageView and draw on that. Then you can remove/hide the subview when you're done.
You need to create a UIGestureRecognizer with a target and an action like -imageIsPressed; in that -imageIsPressed method you can make the image disappear. I would suggest placing the UIImage into a UIImageView and setting imageview.hidden = YES; to hide the image, then setting it back to NO once it is no longer held by the finger.
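A minimal sketch of that setup, assuming a long-press recognizer and a UIImageView property named imageView; the selector name follows the answer's example:
- (void)viewDidLoad {
    [super viewDidLoad];
    UILongPressGestureRecognizer *press = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(imageIsPressed:)];
    self.imageView.userInteractionEnabled = YES; // UIImageView ignores touches by default
    [self.imageView addGestureRecognizer:press];
}

- (void)imageIsPressed:(UILongPressGestureRecognizer *)gesture {
    if (gesture.state == UIGestureRecognizerStateBegan) {
        self.imageView.hidden = YES; // hide while the finger holds the image
    } else if (gesture.state == UIGestureRecognizerStateEnded ||
               gesture.state == UIGestureRecognizerStateCancelled) {
        self.imageView.hidden = NO;  // reveal once the finger lifts
    }
}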
You'd need to implement something that captures the area the user 'selected' (maybe by creating a CGPath). Then you create a CALayer the size of the image view. In it you draw and fill the captured path with some arbitrary color while leaving the rest transparent. Finally you apply your generated CALayer as a mask to the UIImageView:
imageView.layer.mask = maskLayer;
Hope that gets you started.
For more info on how to draw that custom CALayer, please refer to the Quartz 2D Programming Guide.
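A minimal sketch of that masking approach; the stand-in points below would really come from your touch handling, and imageView is the UIImageView being masked:
#import <QuartzCore/QuartzCore.h>

UIBezierPath *fingerPath = [UIBezierPath bezierPath];
[fingerPath moveToPoint:CGPointMake(20, 20)];     // stand-in points; in practice
[fingerPath addLineToPoint:CGPointMake(120, 80)]; // collect these in
[fingerPath addLineToPoint:CGPointMake(60, 160)]; // -touchesMoved:withEvent:
[fingerPath closePath];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = imageView.bounds;
maskLayer.path = fingerPath.CGPath;
maskLayer.fillColor = [UIColor blackColor].CGColor; // any opaque color; only alpha matters for a mask

imageView.layer.mask = maskLayer; // only the filled region stays visible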
So basically you want to implement freehand erasing of an image? You will need to use Core Graphics and the various CGContext methods (with the blend mode set to clear) to achieve this. There are two approaches, but both start with drawing your image as the first part of drawRect:, and then either:
1) Store your strokes in an array, and stroke all of them over top of the image.
2) Stroke one stroke over the image and then store the resulting image into a UIImage. Use this UIImage as the next image that you draw in drawRect:. This approach makes undo/redo functionality difficult.
I recently implemented this myself and made the source available here. Basically I used the same methods described here with this change when setting up the graphics context:
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
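Putting those pieces together, a hedged sketch of approach 1 above; self.image and self.strokes (an array of UIBezierPath objects collected in the touch handlers) are assumed names, not from the original answers:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // 1. Draw the image as the first part of drawRect, as described above.
    [self.image drawInRect:self.bounds];
    // 2. Stroke all of the stored paths over top with the clear blend mode.
    //    The view's backgroundColor should be clearColor (and opaque = NO)
    //    so the erased pixels show whatever sits behind the view.
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextSetLineWidth(context, 20.0);
    CGContextSetLineCap(context, kCGLineCapRound);
    for (UIBezierPath *stroke in self.strokes) {
        CGContextAddPath(context, stroke.CGPath);
        CGContextStrokePath(context);
    }
}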

UIImage Not Transparent

I am trying to make a view that lets a user draw with their finger. I have a png of the brush made in Pixelmator. My drawRect: method looks like this:
- (void)drawRect:(CGRect)rect
{
    NSAssert(brush != nil, @"");
    for (UITouch *touch in touchesNeedingDrawing)
    {
        // Draw the touch
        [brush drawInRect:CGRectWithPoints([touch locationInView:self], [touch previousLocationInView:self]) blendMode:0 alpha:1];
    }
    [touchesNeedingDrawing removeAllObjects];
}
The brush image is a png with transparency, but when I run the app, there is no transparency. Does anyone know why, and how to fix it?
Edit:
I discovered that the image is transparent, but when I call drawInRect, the image draws the transparent pixels as the background color of the view. Is there a CGBlendMode I can use to fix this?
The solution is very simple (tested in my own project). In the class where you implement the custom - (void)drawRect:(CGRect)rect, set the background color for self in -(id)init:
[self setBackgroundColor:[UIColor clearColor]];
And that's all.
It seems like it could be taking the current fill color of the context.
Try setting the fill color for the context with CGContextSetFillColorWithColor() to [UIColor clearColor].CGColor
If that doesn't work, the only other solution that is simple and shouldn't have a performance hit is to have 2 views:
Background View - This is the view that has the proper background color or image.
Overlay View - This view detects the touches, etc., and then draws all of the brush strokes on top. The background color of this view can then be [UIColor clearColor], so when you draw the brush images the alpha will be filled with [UIColor clearColor]. Basically the same view you have now, just with a clear background.
Note: You shouldn't need to mess with the blend mode to make this work. You should be able to use the default drawInRect: method
Is the brush png loaded into an image view? That is, is the variable brush an instance of UIImageView?
If so, perhaps a simple
brush.backgroundColor = [UIColor clearColor];
will help
I think you should try the destination-out blend mode: kCGBlendModeDestinationOut.
You could also draw at a point instead of drawing in a rect:
[brush drawAtPoint:[touch locationInView:self] blendMode:kCGBlendModeDestinationOut alpha:1];
A possible issue is that you are setting the blendMode to 0. I suggest using the -drawInRect: method without a blend mode. The issue may also be that your view has a black background, but that is doubtful. I would also suggest displaying the UIImage in a UIImageView as a test. The issue may be related to the way that Pixelmator exports images.
Your problem is a fundamental misconception of how drawRect: actually works. Every time drawRect: is called, the context is cleared first, so anything you drew previously is gone (only the backgroundColor remains).
Since you're only drawing the current touch(es) (touchesNeedingDrawing is emptied every pass), there's nothing underneath the rectangle you're filling, so the background color shows through the transparent pixels of the image you're drawing.
You basically have two options to resolve this:
1) Keep all touches that contribute to the drawing around (don't empty the touchesNeedingDrawing array) and redraw all of them every time – this is going to be easy but quite slow.
2) Draw into a buffer of some kind (e.g. a UIImage, using UIGraphicsBeginImageContext etc.). Every time your drawing changes, create a new buffer. Draw the old buffer into the new buffer and the images for the new stroke on top of it.
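A minimal sketch of option 2, reusing the question's brush and CGRectWithPoints helper; self.buffer (a UIImage property) is an assumed name:
- (void)appendBrushStrokeFrom:(CGPoint)from to:(CGPoint)to {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [self.buffer drawInRect:self.bounds];          // draw the old buffer underneath
    [brush drawInRect:CGRectWithPoints(from, to)]; // new stroke on top
    self.buffer = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self setNeedsDisplay]; // -drawRect: now just draws self.buffer
}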

Magnifier effect with CALayers

I want to implement a magnifier exactly like the one shown when a UITextView is long-pressed.
I got the idea from here: iPhone, reproduce the magnifier effect
But I am working only with CALayers, not UIViews, so I don't have a drawRect: method to write in. I wonder where I should write this: inside the display method, or the drawInContext: method?
How can I efficiently rasterize all the layers of the original view (the view to be magnified)? Is it really a good idea to do:
UIGraphicsBeginImageContext(magnifyView.bounds.size); //magnifyView is the view to be magnified
[magnifyView.layer renderInContext:UIGraphicsGetCurrentContext()];
_cache = UIGraphicsGetImageFromCurrentImageContext(); //_cache is an UIImage
UIGraphicsEndImageContext();
and then get the portion I need from this UIImage's CGImageRef?
Thanks
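For what it's worth, a hedged sketch of the cropping step the question describes; the loupe radius is an arbitrary assumption and touchPoint is the point being magnified:
CGFloat radius = 40.0; // stand-in loupe radius
CGRect region = CGRectMake(touchPoint.x - radius, touchPoint.y - radius, radius * 2.0, radius * 2.0);
CGImageRef portionRef = CGImageCreateWithImageInRect(_cache.CGImage, region);
UIImage *portion = [UIImage imageWithCGImage:portionRef];
CGImageRelease(portionRef);
// Draw the portion, scaled up, in the layer's -drawInContext: override;
// custom Core Graphics drawing belongs there, whereas -display is for
// supplying the layer's contents bitmap directly.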

How can I change the inner image position of a UIImageView?

I have defined a UIImageView in my nib. After the app launches, I display an image in that UIImageView. Maybe I am on the wrong track, but what I want to do is the following:
The image I load into the view is bigger than the view itself. I want it displayed at its original size, with the overflowing part hidden. Then I want to slowly move that image inside the view in random directions.
Maybe you know HTML div containers with background images: there, you can set the position of the background image and change that position with JavaScript. I need something similar on the iPhone.
Maybe a UIImageView is not the right thing for this? Or must I set the UIImageView to the full size of the image and then move the UIImageView around slowly? Could it be bigger than the iPhone's screen?
You need to crop the image. See Lounges' answer to this question.
This is the gist of my answer to pretty much the same question:
There isn't a simple class method to do this, but there is a function that you can use to get the desired results: CGImageCreateWithImageInRect(CGImageRef, CGRect) will help you out.
Here's a short example using it:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[imageView setImage:[UIImage imageWithCGImage:imageRef]]; // note: send to your image view instance, not the UIImageView class
CGImageRelease(imageRef);
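To get the slow movement the question asks about, one hedged approach is to re-crop on a timer with a drifting rect; the step sizes and interval below are arbitrary assumptions, and cropRect, largeImage, and imageView are the names used above:
// Schedule once, e.g. in viewDidLoad:
// [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(nudgeCrop:) userInfo:nil repeats:YES];
- (void)nudgeCrop:(NSTimer *)timer {
    cropRect = CGRectOffset(cropRect, 1.0, 0.5); // drift the visible window slightly
    CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
    [imageView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
}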