I have an application in which I am cropping an image taken from the camera. Everything works well, but after the crop the image appears blurred and stretched.
CGRect rect = CGRectMake(20,40,280,200);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// translated rectangle for drawing sub image
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y,280,200);
// clip to the bounds of the image context
// not strictly necessary as it will get clipped anyway?
CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
// draw image
[image drawInRect:drawRect];
// grab image
UIImage* croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGSize size = [croppedImage size];
NSLog(@" = %@", NSStringFromCGSize(size));
NSData* pictureData = UIImagePNGRepresentation(croppedImage);
Can anybody help me find out where I am going wrong?
Try replacing
UIGraphicsBeginImageContext(rect.size);
with
UIGraphicsBeginImageContextWithOptions(rect.size, NO, [[UIScreen mainScreen] scale]);
to account for the Retina display.
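For reference, a minimal sketch of the whole crop with the Retina-aware context (assuming image is the source photo from the question). Note that the original code drew the image into a 280x200 rect, so any source with a different aspect ratio gets stretched; drawing the image at its own size only crops it:
CGRect rect = CGRectMake(20, 40, 280, 200);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, [[UIScreen mainScreen] scale]);
// Offset so the crop rect's origin lands at (0,0) of the new context,
// and draw the image at its natural size so it is cropped rather than scaled.
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y,
                             image.size.width, image.size.height);
[image drawInRect:drawRect];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();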
I am trying to crop an image using a rectangular frame, but somehow I am not able to get the result I need.
Here is what I am trying:
Here is the result I want:
What I need is that when the user taps Done, the image is cropped to exactly the rectangle placed over it. I have tried a few things, such as masking and drawing the image using the mask image's rect, but with no success yet.
Here is my code, which is not working:
CALayer *mask = [CALayer layer];
mask.contents = (id)[imgMaskImage.image CGImage];
mask.frame = imgMaskImage.frame;
imgEditedImageView.layer.mask = mask;
imgEditedImageView.layer.masksToBounds = YES;
Can anyone suggest a better way to implement this?
I have tried many other things and wasted a lot of time, so any help would be greatly appreciated.
Thanks.
- (UIImage *)croppedPhoto {
// For dealing with Retina displays as well as non-Retina, we need to check
// the scale factor, if it is available. Note that we use the size of the cropping rect
// passed in, and not the size of the view we are taking a screenshot of.
CGRect croppingRect = CGRectMake(imgMaskImage.frame.origin.x,
imgMaskImage.frame.origin.y, imgMaskImage.frame.size.width,
imgMaskImage.frame.size.height);
imgMaskImage.hidden=YES;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(croppingRect.size, YES,
[UIScreen mainScreen].scale);
} else {
UIGraphicsBeginImageContext(croppingRect.size);
}
// Create a graphics context and translate it by the crop origin so that
// grabbing (0,0) in the context corresponds to the actual cropping origin
// we want in the view:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, -croppingRect.origin.x, -croppingRect.origin.y);
[self.view.layer renderInContext:ctx];
// Retrieve a UIImage from the current image context:
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Return the snapshot image:
return snapshotImage;
}
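A hypothetical call site (the names below are just the ones from the question; adapt as needed):
UIImage *cropped = [self croppedPhoto];
imgEditedImageView.image = cropped;                          // show it
NSData *jpegData = UIImageJPEGRepresentation(cropped, 0.9);  // or save/upload it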
Here is the way you do it:
+(UIImage *)maskImage:(UIImage *)image andMaskingImage:(UIImage *)maskingImage{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef maskImageRef = [maskingImage CGImage];
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskingImage.size.width, maskingImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
if (mainViewContentContext == NULL) {
    CGColorSpaceRelease(colorSpace);
    return NULL;
}
CGFloat ratio = 0;
ratio = maskingImage.size.width/ image.size.width;
if(ratio * image.size.height < maskingImage.size.height) {
ratio = maskingImage.size.height/ image.size.height;
}
CGRect rect1 = {{0, 0}, {maskingImage.size.width, maskingImage.size.height}};
//// CHANGE THIS RECT ACCORDING TO YOUR NEEDS
CGRect rect2 = {{-((image.size.width*ratio)-maskingImage.size.width)/2 , -((image.size.height*ratio)-maskingImage.size.height)/2}, {image.size.width*ratio, image.size.height*ratio}};
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
CGColorSpaceRelease(colorSpace);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
return theImage;
}
You need to have a mask image like this:
Note that the mask image cannot have ANY transparency. Instead, transparent areas must be white or some value between black and white. The closer a pixel is to black, the less transparent it becomes.
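A hypothetical call, where ImageUtils stands in for whichever class declares +maskImage:andMaskingImage:, and the two image views are the ones from the question:
// ImageUtils is a placeholder class name for illustration only.
UIImage *masked = [ImageUtils maskImage:imgEditedImageView.image
                        andMaskingImage:imgMaskImage.image];
imgEditedImageView.image = masked;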
I am using the following code to capture a view on the screen, but the result is not as sharp as it is on screen. The image view is 200x200 points, but the screen scale is 2.0 (Retina), and the saved image comes out at 200x200 px. How can I make the image as sharp as the original? Any help will be appreciated!
- (UIImage*)captureView:(UIView *)theView {
CGRect rect = theView.frame;
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[theView.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Try it like this:
CGRect screenRect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
You are using the old way of creating the image context. Use this instead:
UIGraphicsBeginImageContextWithOptions(rect.size, YES, [UIScreen mainScreen].scale);
The older UIGraphicsBeginImageContext function always assumes a scale of 1.0.
Replace 'YES' with 'NO' if you need the alpha channel.
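Applied to the capture helper from the question, a sketch could look like this (passing 0.0 as the scale makes UIKit use the main screen's scale automatically):
- (UIImage *)captureView:(UIView *)theView {
    // 0.0 = use the device's screen scale, so Retina screens render at 2x.
    UIGraphicsBeginImageContextWithOptions(theView.bounds.size, NO, 0.0);
    [theView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}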
I am capturing a CGRect with the following code, but the resulting image is not the image I want: it has a transparent background. What should I do to remove the transparent background, as shown in the picture?
- (UIImage *)captureScreenInRect:(CGRect)captureFrame {
CALayer *layer;
layer = imageScrollview.layer;
UIGraphicsBeginImageContext(imageScrollview.bounds.size);
CGContextClipToRect(UIGraphicsGetCurrentContext(), captureFrame);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenImage;
}
Translate your context so that its origin matches your captureFrame:
UIGraphicsBeginImageContext(imageScrollview.bounds.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, -captureFrame.origin.x, -captureFrame.origin.y);
[imageScrollview.layer renderInContext:c];
UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
(written from memory, untested)
Additionally, clipping the context is not necessary, as the image is already clipped by the image context's bounds.
Try this one
CGRect cropRect = CGRectMake(imageScrollview.frame.origin.x+15, imageScrollview.frame.origin.y+15, WIDTH, HEIGHT);
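Putting the suggestions together, a sketch of the capture method that sizes the context to the crop rect, translates, and uses an opaque context so no transparent background remains (a sketch against the names from the question, not tested code):
- (UIImage *)captureScreenInRect:(CGRect)captureFrame {
    // Context is only as large as the crop; YES makes it opaque (no alpha channel).
    UIGraphicsBeginImageContextWithOptions(captureFrame.size, YES, 0.0);
    CGContextRef c = UIGraphicsGetCurrentContext();
    // Shift the origin so captureFrame's origin maps to (0,0) in the context.
    CGContextTranslateCTM(c, -captureFrame.origin.x, -captureFrame.origin.y);
    [imageScrollview.layer renderInContext:c];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImage;
}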
I am working on a camera application. The user takes a picture, which is fine, but I want to be able to crop anywhere in that image and send it to the server. How can I do this?
Check out this link for details:
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Basic code:
- (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
//create a context to do our clipping in
UIGraphicsBeginImageContext(rect.size);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
//create a rect with the size we want to crop the image to
//the X and Y here are zero so we start at the beginning of our
//newly created context
CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
CGContextClipToRect( currentContext, clippedRect);
//create a rect equivalent to the full size of the image
//offset the rect by the X and Y we want to start the crop
//from in order to cut off anything before them
CGRect drawRect = CGRectMake(rect.origin.x * -1,
rect.origin.y * -1,
imageToCrop.size.width,
imageToCrop.size.height);
//draw the image to our clipped context using our offset rect
CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
//pull the image from our cropped context
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
//pop the context to get back to the default
UIGraphicsEndImageContext();
//Note: this is autoreleased
return cropped;
}
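A hypothetical call site for the camera-to-server flow described in the question (cameraImage and the crop rect are placeholders for the picked photo and the user's selection):
UIImage *croppedImage = [self imageByCropping:cameraImage
                                       toRect:CGRectMake(0, 0, 300, 300)];
NSData *uploadData = UIImageJPEGRepresentation(croppedImage, 0.8);
// hand uploadData to your networking code to send it to the server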
I think I can provide a better solution than that large amount of code.
- (void)viewDidLoad
{
[super viewDidLoad];
// do something......
UIImage *croppedImage = [self imageByCropping:[UIImage imageNamed:@"SomeImage.png"] toRect:CGRectMake(10, 10, 100, 100)];
}
- (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return cropped;
}
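One caveat with this short version: CGImageCreateWithImageInRect works in the CGImage's pixel coordinates and ignores the UIImage's scale and orientation, so for Retina photos you may want something like the following sketch (not the original answer's code):
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    // Convert the point-based rect into the underlying CGImage's pixel coordinates.
    CGFloat scale = imageToCrop.scale;
    CGRect pixelRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                                  rect.size.width * scale, rect.size.height * scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(imageToCrop.CGImage, pixelRect);
    // Preserve the scale and orientation of the source image.
    UIImage *cropped = [UIImage imageWithCGImage:imageRef
                                           scale:scale
                                     orientation:imageToCrop.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}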
I am developing a simple UIApplication in which I want to crop a UIImage (in .jpg format) with the help of CGContext. The code developed so far is as follows:
CGImageRef graphicOriginalImage = [originalImage.image CGImage];
UIGraphicsBeginImageContext(originalImage.image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGBitmapContextCreateImage(graphicOriginalImage);
CGFloat fltW = originalImage.image.size.width;
CGFloat fltH = originalImage.image.size.height;
CGFloat X = round(fltW/4);
CGFloat Y =round(fltH/4);
CGFloat width = round(X + (fltW/2));
CGFloat height = round(Y + (fltH/2));
CGContextTranslateCTM(ctx, 0, originalImage.image.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGRect rect = CGRectMake(X,Y ,width ,height);
CGContextDrawImage(ctx, rect, graphicOriginalImage);
croppedImage = UIGraphicsGetImageFromCurrentImageContext();
return croppedImage;
}
The above code runs fine, but it does not actually crop the image.
The "cropped" image ends up taking the same amount of memory as the original (i.e. it is the same size).
Is the above code right for cropping the image?
Here is a good way to crop an image to a CGRect:
- (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
//create a context to do our clipping in
UIGraphicsBeginImageContext(rect.size);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
//create a rect with the size we want to crop the image to
//the X and Y here are zero so we start at the beginning of our
//newly created context
CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
CGContextClipToRect( currentContext, clippedRect);
//create a rect equivalent to the full size of the image
//offset the rect by the X and Y we want to start the crop
//from in order to cut off anything before them
CGRect drawRect = CGRectMake(rect.origin.x * -1,
rect.origin.y * -1,
imageToCrop.size.width,
imageToCrop.size.height);
//draw the image to our clipped context using our offset rect
CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
//pull the image from our cropped context
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
//pop the context to get back to the default
UIGraphicsEndImageContext();
//Note: this is autoreleased
return cropped;
}
Or another way:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return cropped;
}
From http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/.
The context you create to draw the image has the same size as the original image. That's why both images end up the same size.
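A sketch of how the question's code could be reworked so the context only has the crop's size (assuming the intent is to crop the centre half of the image; the method name is hypothetical):
- (UIImage *)croppedCenterOfImage:(UIImage *)sourceImage
{
    CGFloat fltW = sourceImage.size.width;
    CGFloat fltH = sourceImage.size.height;
    // Centre half of the image.
    CGRect cropRect = CGRectMake(round(fltW / 4), round(fltH / 4),
                                 round(fltW / 2), round(fltH / 2));
    // The context is only as big as the crop, not the whole image.
    UIGraphicsBeginImageContext(cropRect.size);
    // Drawing the UIImage (instead of CGContextDrawImage) avoids flipping the CTM.
    [sourceImage drawInRect:CGRectMake(-cropRect.origin.x, -cropRect.origin.y, fltW, fltH)];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}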
If you don't want to re-invent the wheel, take a look at the TouchCode project on Google Code. You will find UIImage categories that do the job (see UIImage_ThumbnailExtensions.m).