Good way to use CGImageCreateWithImageInRect, UIImage and UIImagePickerController - iphone

I'm trying to do something which should in theory be quite simple, but I've been chasing my tail around for days now. I'm trying to take a touch event from a screen overlay, capture an image, and crop a section of the image out around where the finger touched.
All my code is working fine: the overlay, events, cropping, and so on. However, I can't seem to get the coordinate system of the touch event to match the coordinate system of the UIImage. I've read all the docs I can get my hands on, but I just can't figure it out.
My main question is: do I need to take UIImageOrientation into account when using CGImageCreateWithImageInRect, or does Quartz figure it out? The reason I ask is that I have a very simple routine that crops images just fine, but the cropped image never seems to be where my finger pressed.
The bulk of the routine is:
-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    float scaleX = image.size.width / SCREEN_WIDTH;
    float scaleY = image.size.height / SCREEN_HEIGHT;
    // lastTouch is saved from the touchesBegan method
    float x = (lastTouch.x * scaleX) - (CROP_WIDTH / 2);
    float y = (lastTouch.y * scaleY) - (CROP_WIDTH / 2);
    if (x < 0) x = 0.0;
    if (y < 0) y = 0.0;
    CGRect cropArea = CGRectMake(x, y, CROP_WIDTH, CROP_WIDTH);
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropArea);
    UIImage *swatch = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // CGImageCreateWithImageInRect follows the Create rule, so release it
    // At this point I'm just writing the images to the photo album to see if
    // my crop is lining up with my touch
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    UIImageWriteToSavedPhotosAlbum(swatch, nil, nil, nil);
}
So the problem is that my cropped area (as viewed in my photo album) never matches the actual area that I press; it's always some other random part of the photo, which makes me think my coordinate system is off.
Any pointers would be greatly appreciated, even if they're just pointers to some docs I haven't found yet.
Cheers
Adam

You must always remember that the default coordinate system differs between Core Graphics and UIKit. While UIKit puts its origin at the top-left corner of the window, Core Graphics puts it at the bottom left. I think this might be enough info to put you on the right track, but just to reiterate:
You can use this code to invert, or flip, your coordinate system.
CGContextTranslateCTM(graphicsContext, 0.0, drawingRect.size.height);
CGContextScaleCTM(graphicsContext, 1.0, -1.0);
Also review “Flipping the Default Coordinate System” in the Drawing and Printing Guide for iOS.
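A quick way to rule orientation in or out: CGImageCreateWithImageInRect works on the raw CGImage, which ignores the UIImageOrientation flag that camera photos carry, so the crop rect gets applied to the un-rotated pixels. A minimal sketch that sidesteps this by re-rendering the image upright before cropping (the method name is illustrative):
- (UIImage *)cropImage:(UIImage *)image toRect:(CGRect)cropArea {
    // Re-render so the pixel data matches the orientation UIKit displays
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Now the CGImage's pixel coordinates agree with UIKit's top-left origin
    CGImageRef croppedRef = CGImageCreateWithImageInRect(upright.CGImage, cropArea);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}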

Related

UIImage rotation on iPhone without rotating the UIImageView, clockwise or anticlockwise on button click

I have a UIImageView that is filled with an image either taken by the camera or picked from the library. After I assign the UIImage to the UIImageView, I want to rotate it clockwise or anticlockwise using two buttons, one for each direction. On every click the image should rotate, but without rotating the UIImageView in its superview. Afterwards I want to save the image in the final position the user left it in.
If there is any method or procedure for this, please share; I've been searching on this query for many days now but haven't found an accurate solution with proper details that works.
Rotation routine (I found this routine in another post, but I forget where):
-(UIImage *)rotateImage:(UIImage *)image angleInRadians:(float)angleInRadians {
    // Note: for a 90-degree turn of a non-square image you would swap width and height here
    CGSize size = image.size;
    UIGraphicsBeginImageContext(size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Rotate about the centre of the context rather than its origin
    CGContextTranslateCTM(ctx, size.width / 2, size.height / 2);
    CGContextRotateCTM(ctx, angleInRadians);
    // CGContextDrawImage uses a bottom-left origin, so un-flip before drawing
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextDrawImage(ctx, CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height), image.CGImage);
    UIImage *imageOut = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageOut;
}
The angle in radians could be M_PI/2 or -M_PI/2 to change landscape to portrait or vice versa.
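For the question's two-button setup, hedged usage might look like this (the action names and the imageView property are illustrative):
- (IBAction)rotateClockwiseTapped:(id)sender {
    self.imageView.image = [self rotateImage:self.imageView.image angleInRadians:M_PI_2];
}

- (IBAction)rotateAnticlockwiseTapped:(id)sender {
    self.imageView.image = [self rotateImage:self.imageView.image angleInRadians:-M_PI_2];
}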

Drawrect with CGBitmapContext is too slow

So I've got a basic drawing app in progress that allows me to draw lines. I draw to an off-screen bitmap, then present the image in drawRect. It works, but it's way too slow, updating about half a second after you've drawn with your finger. I took the code and adapted it from this tutorial, http://www.youtube.com/watch?v=UfWeMIL-Nu8&feature=relmfu ; as you can see in the comments there, people are also saying it's too slow, but the guy hasn't responded.
So how can I speed it up? Or is there a better way to do it? Any pointers will be appreciated.
Here's the code in my DrawView.m.
- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self setUpBuffer];
    }
    return self;
}

- (void)setUpBuffer {
    // offscreenBuffer is a CGContextRef ivar; CGContextRelease safely ignores NULL on the first call
    CGContextRelease(offscreenBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    offscreenBuffer = CGBitmapContextCreate(NULL, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // Flip the buffer so it matches UIKit's top-left origin
    CGContextTranslateCTM(offscreenBuffer, 0, self.bounds.size.height);
    CGContextScaleCTM(offscreenBuffer, 1.0, -1.0);
}

- (void)drawToBuffer:(CGPoint)coordA :(CGPoint)coordB :(UIColor *)penColor :(int)thickness {
    CGContextBeginPath(offscreenBuffer);
    CGContextMoveToPoint(offscreenBuffer, coordA.x, coordA.y);
    CGContextAddLineToPoint(offscreenBuffer, coordB.x, coordB.y);
    CGContextSetLineWidth(offscreenBuffer, thickness);
    CGContextSetLineCap(offscreenBuffer, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(offscreenBuffer, [penColor CGColor]);
    CGContextStrokePath(offscreenBuffer);
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offscreenBuffer);
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [image drawInRect:self.bounds];
}
It works perfectly on the simulator but not on the device; I imagine that's something to do with processor speed.
I'm using ARC.
I tried to fix your code, but as you only seem to have posted half of it, I couldn't get it working (copy-pasting the code results in lots of errors, let alone performance-tuning it).
However, there are some tips you can use to VASTLY improve performance.
The first, and probably most noticeable, is -setNeedsDisplayInRect: rather than -setNeedsDisplay. This means only the little rect that changed gets redrawn. For an iPad 3 with 1024*768*4 pixels, that is a lot of work. Reducing it to about 20*20 or less per frame will massively improve performance.
CGRect rect;
rect.origin.x = MIN(coordA.x, coordB.x) - (thickness * 0.5);
rect.size.width = (MAX(coordA.x, coordB.x) + (thickness * 0.5)) - rect.origin.x;
rect.origin.y = MIN(coordA.y, coordB.y) - (thickness * 0.5);
rect.size.height = (MAX(coordA.y, coordB.y) + (thickness * 0.5)) - rect.origin.y;
[self setNeedsDisplayInRect:rect];
Another big improvement you could make is to draw only the CGPath for the current touch (which you do). However, you then draw that saved/cached image in drawRect, so it is redrawn every frame anyway. A better approach is to make the draw view transparent and put a UIImageView behind it. UIImageView is the best way to display images on iOS.
- DrawView (1 finger)
-drawRect:
- BackgroundView (the image of the old touches)
-self.image
The draw view itself would then only ever draw the current touch, and only the part that changes each time. When the user lifts their finger, you can cache that to a UIImage, draw it over the current background/cache UIImageView's image, and set imageView.image to the new image.
That final bit, combining the images, involves drawing two full-screen images into an off-screen CGContext, so it will cause lag if done on the main thread; it should instead be done on a background thread, with the result pushed back to the main thread.
* touch starts *
- DrawView : draw current touch
* touch ends *
- 'background thread' : combine backgroundView.image and DrawView.drawRect
* thread finished *
send the resulting UIImage to the main queue and set backgroundView.image to it;
clear DrawView's current path, which is now in the cache;
All of this combined can make a very smooth 60fps drawing app. However, views are not updated as quickly as we'd like, so drawing looks jagged when the finger moves fast. This can be improved by using UIBezierPaths instead of CGPaths.
CGPoint lastPoint = [touch previousLocationInView:self];
CGPoint mid = midPoint(currentPoint, lastPoint);
-[UIBezierPath addQuadCurveToPoint:mid controlPoint:lastPoint];
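A slightly fuller sketch of that smoothing, assuming a UIBezierPath property named currentPath that -touchesBegan: started with -moveToPoint: (midPoint is a hypothetical helper, not a UIKit function):
static CGPoint midPoint(CGPoint a, CGPoint b) {
    return CGPointMake((a.x + b.x) * 0.5, (a.y + b.y) * 0.5);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint lastPoint = [touch previousLocationInView:self];
    CGPoint mid = midPoint(currentPoint, lastPoint);
    // Curving through the midpoints hides the corners between touch samples
    [self.currentPath addQuadCurveToPoint:mid controlPoint:lastPoint];
    // Invalidate only the new segment's bounding box, padded for the stroke width
    CGRect dirty = CGRectMake(MIN(currentPoint.x, lastPoint.x), MIN(currentPoint.y, lastPoint.y),
                              ABS(currentPoint.x - lastPoint.x), ABS(currentPoint.y - lastPoint.y));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -4, -4)];
}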
The reason it is slow is that every frame you create a bitmap image and try to draw it.
You asked for better ways of doing it? Have you looked at Apple's sample code for a drawing app on iOS? If you don't like that, you can always use cocos2d, which provides a CCRenderTexture class (and sample code).
Currently, you are using a method which you already know is not efficient.
With this approach, you should consider using a background thread for all the hard work of image rendering, and the main thread for UI updates only, i.e.:
static UIImage *__imageBuffer = nil; // shared buffer (__block is only valid on local variables)

- (UIImage *)drawSomeImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    // draw the image with Core Graphics here
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // prepare the image on a background thread
        __imageBuffer = [self drawSomeImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            // trigger drawRect: with the prepared image
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    // draw the prepared image buffer into the current context
    [__imageBuffer drawInRect:self.bounds];
}
I am omitting some details to make the optimization clearer. Even better would be to switch to UIImageView. That way you could get rid of the critically important -drawRect: method entirely and just update the image property of the UIImageView when the image is ready.
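A hedged sketch of that UIImageView variant (canvasImageView is an illustrative property; UIKit's image-context functions are safe off the main thread from iOS 4 onward):
- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // render off the main thread using the same drawSomeImage as above
        UIImage *image = [self drawSomeImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            // no -drawRect: override needed at all
            self.canvasImageView.image = image;
        });
    });
}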
Well, I think you need to change your logic. You may get some very good ideas with the help of this link:
http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/
And if you feel you have no time to work through the theory, you can go directly to this code: https://github.com/levinunnink/Smooth-Line-View :) I hope this helps you a lot.
Use CGLayer for caching your paths; read the docs, it's best for optimization.
I did something exactly like this. Check out the Pixelate app on the App Store. In order to draw, I used tiles in my code. After all, when you touch the screen and draw something, you need to re-draw the entire image, which is a very heavy operation. If you like the way Pixelate moves, here's how I did it:
1) Split the image into n x m tiles. That way I could change those values and obtain bigger/smaller tiles. In the worst-case scenario (the user taps at the intersection of 4 tiles) you have to re-draw those 4 tiles, not the entire image.
2) Make a 3-dimensional matrix storing the pixel information of each tile. So matrix[0][0][0] was the red value (each pixel has an RGB or RGBA value, depending on whether you are using PNGs or JPGs) of the first pixel of the first tile.
3) Get the location the user pressed and calculate the tiles that need to be modified (sketched below).
4) Modify the values in the matrix and update the tiles that need updating.
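A hedged sketch of step 3, mapping a touch point to tile indices (the grid dimensions are illustrative):
// n x m grid of tiles laid over the image
int tileCols = 8, tileRows = 8; // illustrative values
CGFloat tileWidth = image.size.width / tileCols;
CGFloat tileHeight = image.size.height / tileRows;
int col = (int)(touchPoint.x / tileWidth);
int row = (int)(touchPoint.y / tileHeight);
// A brush of radius r can straddle up to 4 tiles, so also check the
// neighbouring tiles within r of touchPoint and mark each one for redraw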
NOTE: This most certainly isn't the best option; it's just an alternative. I mentioned it because I think it is close to what you have right now, and it worked for me on an iPhone 3GS. If you are targeting >= iPhone 4, you should be more than OK.
Regards,
George
The method you've suggested is way too inefficient, because creating the image every time you move the finger is inappropriate.
If it's just paths that you need to draw, keep a CGMutablePathRef as a member variable, and in drawRect just move to the specified point using the CGPath functions.
More importantly, while refreshing the view, call setNeedsDisplayInRect:, passing only the area that you need to draw; a minimal sketch follows. Hope it works for you.
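A minimal sketch of that approach, assuming a CGMutablePathRef ivar named path created once with CGPathCreateMutable():
// In -touchesMoved:withEvent:, append to the path and invalidate only the changed area
CGPoint p = [[touches anyObject] locationInView:self];
CGPathAddLineToPoint(path, NULL, p.x, p.y);
[self setNeedsDisplayInRect:CGRectMake(p.x - 20, p.y - 20, 40, 40)]; // padding is illustrative

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextStrokePath(ctx);
}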

How do I rotate a UIImageView by 90 degrees inside a UIScrollView with correct image size and scrolling?

I have an image inside a UIImageView which is within a UIScrollView. What I want to do is rotate the image 90 degrees so that it is in landscape by default, set the initial zoom of the image so that the entire image fits into the scroll view, and then allow it to be zoomed up to 100% and back down to the minimum zoom again.
This is what I have so far:
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);
float minimumScale = scrollView.frame.size.width / self.imageView.frame.size.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;
scrollView.contentSize = CGSizeMake(self.imageView.frame.size.height,self.imageView.frame.size.width);
The problem is that if I set the transform, nothing shows up in the scroll view. However, if I comment out the transform, everything works except that the image is not in the landscape orientation I want!
If I apply the transform and remove the code that sets the minimumZoomScale and zoomScale properties, the image shows up in the correct orientation, but with the incorrect zoomScale, and it seems the contentSize property isn't set correctly either: the view doesn't scroll all the way to the edge of the image in the left/right direction, while top and bottom scroll well past the edge.
NB: the image is being loaded from a URL.
Maybe rotating the image itself fits your needs:
UIImage* rotateUIImage(const UIImage* src, float angleDegrees) {
    // Use a throwaway view to compute the bounding box of the rotated image
    UIView* rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, src.size.width, src.size.height)];
    float angleRadians = angleDegrees * ((float)M_PI / 180.0f);
    CGAffineTransform t = CGAffineTransformMakeRotation(angleRadians);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release]; // pre-ARC code

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    // Move the origin to the middle of the new canvas, rotate, then un-flip for CGContextDrawImage
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
    CGContextRotateCTM(bitmap, angleRadians);
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-src.size.width / 2, -src.size.height / 2, src.size.width, src.size.height), [src CGImage]);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I believe the easiest way (and thread-safe too) is to do:
// assume that the image is loaded in landscape mode from disk
UIImage *landscapeImage = [UIImage imageNamed:imgname];
UIImage *portraitImage = [[UIImage alloc] initWithCGImage:landscapeImage.CGImage
                                                    scale:1.0
                                              orientation:UIImageOrientationLeft];
Any calculations you do based on the imageView's frame should probably be done before you apply any transformations to it. But I would actually suggest doing those calculations based on the size of the UIImage, not the UIImageView, and then setting both the UIImageView's frame and the UIScrollView's contentSize from that, as sketched below.
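A hedged sketch of that order of operations (the property names are illustrative):
UIImage *image = self.imageView.image;
// After a 90-degree rotation, width and height trade places
CGSize rotatedSize = CGSizeMake(image.size.height, image.size.width);
// Size the untransformed image view from the image first...
self.imageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);
// ...then rotate it
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI / 2);
// ...and derive the scroll metrics from the rotated size, not from the view's frame
self.scrollView.contentSize = rotatedSize;
float minimumScale = self.scrollView.frame.size.width / rotatedSize.width;
self.scrollView.minimumZoomScale = minimumScale;
self.scrollView.zoomScale = minimumScale;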
Max's suggestion is a good one, although with a larger image it could be a performance killer. Are you displaying this image from your app's resources? If so, why not just rotate the images before you even build the app?
There's a much easier solution that is also faster, just do this:
#define RADIANS(degrees) ((degrees) * M_PI / 180.0f) // assumed helper macro; define it once somewhere shared

- (void)imageRotateTapped:(id)sender
{
    [UIView animateWithDuration:0.33f animations:^()
    {
        self.imageView.transform = CGAffineTransformMakeRotation(RADIANS(self.rotateDegrees += 90.0f));
        self.imageView.frame = self.imageView.superview.bounds; // change this to whatever rect you want
    }];
}
When the user is done, you will need to actually create a new rotated image, but that is very easy to do.
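One hedged way to bake it in when they finish is the orientation trick from the answer above; for a net 90 degrees clockwise:
UIImage *current = self.imageView.image;
// Displays rotated without touching the pixel data; re-render with
// UIGraphicsBeginImageContext if you need the rotation baked into the bitmap
UIImage *baked = [[UIImage alloc] initWithCGImage:current.CGImage
                                            scale:current.scale
                                      orientation:UIImageOrientationRight];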
I was using the accepted answer for a while until we noticed that non-square rotations of images taken directly from the camera seemed stretched (they were rotated as desired; the frame width/height just wasn't adjusted).
Great explanation/post here from Trevor: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
In the end, it was a very simple import of Trevor's code, which uses categories to add a resizedImage:interpolationQuality: method to UIImage. So yeah, user beware: if the accepted answer still works for you, great; but if it doesn't, I'd take a look at Trevor's library instead.

How to compensate for the flipped coordinate system of Core Graphics for easy drawing?

It's a real pain: whenever I draw a UIImage in -drawRect:, it's upside-down.
When I flip the coordinates, the image draws correctly, but at the cost of all other CG functions drawing "wrong" (flipped).
What's your strategy when you have to draw images and other things? Is there any rule of thumb for not getting stuck in this problem over and over again?
Also, one nasty thing when I flip the y-axis is that my CGRect from the UIImageView frame is wrong. Instead of the origin appearing at 10,10 upper-left as expected, it appears at the bottom.
At the same time, all the normal line-drawing functions of CGContext take correct coordinates: drawing a line in -drawRect: with origin 10,10 upper-left really does start at the upper left. That's strange, because Core Graphics actually has a flipped coordinate system with y = 0 at the bottom.
So it seems like something is really inconsistent there. Drawing with CGContext functions takes coordinates as "expected" (come on, nobody thinks in coordinates starting from the bottom left; that's silly), while drawing any kind of image still works the "wrong" way.
Do you use helper methods to draw images? Or is there anything useful that makes image drawing not a pain in the butt?
Problem: Origin is at lower-left corner; positive y goes upward (negative y goes downward).
Goal: Origin at upper-left corner; positive y going downward (negative y going upward).
Solution:
Move origin up by the view's height.
Negate (multiply by -1) the y axis.
The way to do this in code is to translate up by the view bounds' height and scale by (1, -1), in that order.
There are a couple of portions of the Quartz 2D Programming Guide that are relevant to this topic, including “Drawing to a Graphics Context on iPhone OS” and the whole chapter on Transforms. Of course, you really should read the whole thing.
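For a context whose origin really is still at the bottom left (one you created yourself, rather than the pre-flipped one UIKit hands to -drawRect:), a minimal sketch:
CGContextRef ctx = ...; // e.g. a CGBitmapContext you created
// 1. Move the origin up by the drawing area's height...
CGContextTranslateCTM(ctx, 0.0, drawingRect.size.height);
// 2. ...then negate the y axis so positive y points downward
CGContextScaleCTM(ctx, 1.0, -1.0);
// From here on, coordinates behave UIKit-style, with a top-left origin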
You can do that by applying an affine transform to the point you want to convert to UIKit-related coordinates. Following is an example.
// Create an affine transform object
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
// First translate by your image view's height
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
// Then, whenever you want a point or rect in UIKit-related coordinates, apply this transformation to it.
// To get a transformed point
CGPoint newPointForUIKit = CGPointApplyAffineTransform(oldPointInCGKit, transform);
// To get a transformed rect (note this takes the old CGRect, not a CGPoint)
CGRect newRectForUIKit = CGRectApplyAffineTransform(oldRectInCGKit, transform);
The better answer to this problem is to use the UIImage method drawInRect: to draw your image. I'm assuming you want the image to span the entire bounds of your view. This is what you'd type in your drawRect: method.
Instead of:
CGContextRef ctx = UIGraphicsGetCurrentContext();
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGImageRef img = [myImage CGImage];
CGRect bounds = [self bounds];
CGContextDrawImage(ctx, bounds, img);
Write this:
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGRect bounds = [self bounds];
[myImage drawInRect:bounds];
It's a real pain: whenever I draw a UIImage in -drawRect:, it's upside-down.
Are you telling the UIImage to draw, or getting its CGImage and drawing that?
As noted in “Drawing to a Graphics Context on iPhone OS”, UIImages are aware of the difference in co-ordinate spaces and should draw themselves correctly without you having to flip your co-ordinate space yourself.
CGImageRef flip(CGImageRef im) {
    CGSize sz = CGSizeMake(CGImageGetWidth(im), CGImageGetHeight(im));
    UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(0, 0, sz.width, sz.height), im);
    // Note: the returned CGImageRef is owned by the autoreleased UIImage;
    // retain it if you need it beyond the current autorelease scope
    CGImageRef result = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
    UIGraphicsEndImageContext();
    return result;
}
Call the above method using the code below.
This code deals with getting the left half of an image from an existing UIImageView and setting the resulting image on a new image view, imgViewLeft:
// Assumes a current graphics context is available (e.g. one opened with
// UIGraphicsBeginImageContext), with sz being the source image size and
// leftReference a CGImageRef for the left half of the source image
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con,
                   CGRectMake(0, 0, sz.width / 2.0, sz.height),
                   flip(leftReference));
imgViewLeft = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];

How to sharpen/blur a UIImage on iPhone?

I have a view with a UIImageView and a UIImage set on it. How do I sharpen or blur the image using Core Graphics?
Apple has a great sample program called GLImageProcessing that includes very fast blur/sharpen effects using OpenGL ES 1.1 (meaning it works on all iPhones, not just the 3GS).
If you're not fairly experienced with OpenGL, the code may make your head hurt.
Going down the OpenGL route felt like insane overkill for my needs (blurring a touched point on an image). Instead, I implemented a simple blurring process that takes a touch point, creates a rect containing that touch point, samples the image at that point, and then redraws the sample image upside down on top of the source rect several times, slightly offset and with slightly different opacity. This produces a pretty nice poor man's blur effect without an insane amount of code and complexity. Code follows:
- (UIImage *)imageWithBlurAroundPoint:(CGPoint)point {
    CGRect bnds = CGRectZero;
    UIImage *copy = nil;
    CGContextRef ctxt = nil;
    CGImageRef imag = self.CGImage;
    CGRect rect = CGRectZero;
    CGAffineTransform tran = CGAffineTransformIdentity;
    int indx = 0;

    rect.size.width = CGImageGetWidth(imag);
    rect.size.height = CGImageGetHeight(imag);
    bnds = rect;

    UIGraphicsBeginImageContext(bnds.size);
    ctxt = UIGraphicsGetCurrentContext();

    // Cut a sample out of the image
    CGRect fillRect = CGRectMake(point.x - 10, point.y - 10, 20, 20);
    CGImageRef sampleImageRef = CGImageCreateWithImageInRect(self.CGImage, fillRect);

    // Flip the image right side up & draw
    CGContextSaveGState(ctxt);
    CGContextScaleCTM(ctxt, 1.0, -1.0);
    CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
    CGContextConcatCTM(ctxt, tran);
    CGContextDrawImage(ctxt, rect, imag);

    // Restore the context so that the coordinate system is restored
    CGContextRestoreGState(ctxt);

    // Redraw the sample image over the source rect several times,
    // shifting the opacity and the positioning slightly
    // to produce a blurred effect
    for (indx = 0; indx < 5; indx++) {
        CGRect myRect = CGRectOffset(fillRect, 0.5 * indx, 0.5 * indx);
        CGContextSetAlpha(ctxt, 0.2 * indx);
        CGContextScaleCTM(ctxt, 1.0, -1.0); // each pass flips, drawing the sample upside down
        CGContextDrawImage(ctxt, myRect, sampleImageRef);
    }
    CGImageRelease(sampleImageRef); // Create-rule object, so it must be released

    copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return copy;
}
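Assuming the method above lives in a UIImage category, usage might look like:
// touchPoint is in the image's own coordinate space
UIImage *blurred = [self.imageView.image imageWithBlurAroundPoint:touchPoint];
self.imageView.image = blurred;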
What you really need are the image filters in the Core Image API. Unfortunately, Core Image is not supported on the iPhone (unless that changed recently and I missed it). Be careful here, as, IIRC, the filters are available in the Simulator but not on the device.
AFAIK there is no other way to do it properly with the native libraries, although I've sort of faked a blur before by creating an extra layer over the top which is a copy of what's below, offset by a pixel or two and with a low alpha value. For a proper blur effect, though, the only way I've been able to do it is offline in Photoshop or similar.
I'd be keen to hear if there is a better way too, but to my knowledge that is the situation currently.
Have a look at the following libraries:
https://github.com/coryleach/UIImageAdjust
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions