I want to crop a UIImage to get a new aspect ratio...
bounds is a CGRect of (0, 0, newWidth, newHeight)...
- (UIImage *)croppedImage:(UIImage *)myImage :(CGRect)bounds {
CGImageRef imageRef = CGImageCreateWithImageInRect(myImage.CGImage, bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGSize asd = croppedImage.size; // debug only: inspect the cropped size
return croppedImage;
}
with the call:
[workImage croppedImage:workImage :CGRectMake(0, 0, newWidth, newHeight)];
After that, workImage has the same size as before...
What could be wrong?
Regards
Well, you aren't altering the current image, since this appears to be a category method on UIImage. You are creating a new image and returning it. So what will work is this:
workImage = [workImage croppedImage:workImage :CGRectMake(0, 0, newWidth, newHeight)];
However, I think the method is better named like this (assuming it is a category method on UIImage):
- (UIImage *)croppedImageWithRect:(CGRect)bounds {
CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return croppedImage;
}
This way you will call it like this:
workImage = [workImage croppedImageWithRect:CGRectMake(0, 0, newWidth, newHeight)];
And as a side note, don't use methods with unnamed parameters like croppedImage::. It is better to name all parameters, e.g. croppedImage:rect:.
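For completeness, a minimal sketch of what the category declaration could look like (the category name here is just a placeholder):
@interface UIImage (Cropping)
- (UIImage *)croppedImageWithRect:(CGRect)bounds;
@end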
Related
The user can change the crop-box size, which is shown by default on the edit screen. I tried the code below:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect {
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return cropped;
}
But it crops a fixed area. How do I crop the area selected by the user?
To get the cropped image:
UIImage *croppedImg = nil;
CGRect cropRect = CGRectMake(x, y, width, height); // set this to the user-selected rect
croppedImg = [self croppIngimageByImageName:self.imageView.image toRect:cropRect];
Use the following code for the croppIngimageByImageName:toRect: method, which returns a UIImage cropped to the given rect:
- (UIImage *)croppIngimageByImageName:(UIImage *)imageToCrop toRect:(CGRect)rect
{
//CGRect CropRect = CGRectMake(rect.origin.x, rect.origin.y, rect.size.width, rect.size.height+15);
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return cropped;
}
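Note that CGImageCreateWithImageInRect works in pixel coordinates and ignores the image's scale and orientation. A minimal sketch of a scale-aware variant, assuming rect is given in UIKit points and the source may be a retina image:
CGFloat scale = imageToCrop.scale;
CGRect pixelRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale, rect.size.width * scale, rect.size.height * scale);
CGImageRef imageRef = CGImageCreateWithImageInRect(imageToCrop.CGImage, pixelRect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef scale:imageToCrop.scale orientation:imageToCrop.imageOrientation];
CGImageRelease(imageRef);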
CGRect clippedRect = CGRectMake(0, 0, 180, 180);
CGImageRef imageRef = CGImageCreateWithImageInRect(imgVw1.image.CGImage, clippedRect);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
imgVw1Cliped.image = newImage;
NSLog(@"%ld", (long)imgVw1Cliped.image.imageOrientation);
I am a new iPhone developer.
I want to merge two images and get just one image in a UIImageView, without setting an alpha.
This is my code. It works using alpha,
but I want to do it without setting alpha.
My code:
-(UIImage *)maskingImage:(UIImage *)image
{
CGSize sizeR = CGSizeMake(200, 220);
// UIImage *textureImage = [UIImage imageNamed:@"tt.png"];
UIImage *textureImage = imgView2.image;
UIGraphicsBeginImageContextWithOptions(sizeR, YES, textureImage.scale);
[textureImage drawInRect:CGRectMake(0.0, 0.0, 200, 220)];
UIImage *bottomImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // end this context before opening the second one
UIImage *upperImage = image;
CGSize newSize = sizeR ;
UIGraphicsBeginImageContext(newSize);
[bottomImage drawInRect:CGRectMake(0.0, 0.0, 200, 220)];
[upperImage drawInRect:CGRectMake(0.0, 0.0, 200, 220) blendMode:kCGBlendModeNormal alpha:0.5];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Thanks in advance.
UIGraphicsBeginImageContext(YOUR_SIZE);
//FIRST IMAGE
[FIRST_IMAGE drawInRect:CGRectMake(0, 0, YOUR_SIZE_WIDTH/2, YOUR_SIZE_HEIGHT)];
//SECOND IMAGE
[SECOND_IMAGE drawInRect:CGRectMake(YOUR_SIZE_WIDTH/2, 0, YOUR_SIZE_WIDTH/2, YOUR_SIZE_HEIGHT)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
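For example, to place two images side by side on a 200x100-point canvas (the images and sizes here are just placeholders):
UIGraphicsBeginImageContext(CGSizeMake(200, 100));
[leftImage drawInRect:CGRectMake(0, 0, 100, 100)];    // left half
[rightImage drawInRect:CGRectMake(100, 0, 100, 100)]; // right half
UIImage *sideBySide = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();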
Use this function:
- (UIImage * ) mergeImage: (UIImage *) imageA
withImage: (UIImage *) imageB
strength: (float) strength X:(float )x Y:(float)y{
UIGraphicsBeginImageContextWithOptions(CGSizeMake([imageA size].width,[imageA size].height), NO, 0.0);
[imageA drawAtPoint: CGPointMake(0,0)];
[imageB drawAtPoint: CGPointMake(x,y)
blendMode: kCGBlendModeNormal // you can play with this
alpha: strength]; // 0 - 1
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return mergedImage;
}
Here x and y give the placement of the second image within the first.
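A hypothetical call (photo and watermark are placeholder images), drawing the second image fully opaque at the top-left corner:
UIImage *combined = [self mergeImage:photo withImage:watermark strength:1.0 X:0 Y:0];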
I just faced the same problem, and here is the solution I found:
CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[ image1 drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[image2 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this, it works like a charm for me; I hope it solves your problem too.
You can also vary the alpha of the second image as needed; for a fully opaque draw use:
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
Hi, I currently have a small image (about 100x160) stored as an NSData attribute in my Core Data model.
I display all entities in a table view. The UIImageView in a single cell has a size of only 50x80; just dropping the image into this frame looks a bit rough.
What would be the best way to display this image in my table view cell? Resize it on the fly in cellForRowAtIndexPath:? That will probably make my table view a bit laggy.
Or resize it on creation and save it in my Core Data entity (or perhaps on disk)?
Thank you! Please leave a comment if something is unclear.
For that you have to crop or resize the image. The following code crops or resizes the image to the required frame.
- (void)viewDidLoad
{
[super viewDidLoad];
// do something......
UIImage *img = [UIImage imageWithData:imageData]; // imageData is the NSData attribute from your model
// To crop the image
UIImage *croppedImage = [self imageByCropping:img toRect:CGRectMake(10, 10, 50, 80)];
// To resize image
UIImage *resizedImage = [self resizeImage:img width:50 height:80];
}
Crop Image:
- (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // release the Create-rule reference to avoid a leak
return cropped;
}
Resize Image:
-(UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height
{
CGImageRef imageRef = [image CGImage];
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
alphaInfo = kCGImageAlphaNoneSkipLast; // override: use an alpha format supported by CGBitmapContextCreate
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *result = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap); // release the bitmap context
CGImageRelease(ref);      // release the Create-rule image reference
return result;
}
You can go either way.
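To keep scrolling smooth, a common pattern (a minimal sketch; tableView, indexPath, and imageData are assumed from your cellForRowAtIndexPath: context) is to resize off the main thread and update the cell afterwards:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *full = [UIImage imageWithData:imageData];
    UIImage *thumb = [self resizeImage:full width:50 height:80];
    dispatch_async(dispatch_get_main_queue(), ^{
        // re-fetch the cell in case it was reused while we were resizing
        UITableViewCell *visibleCell = [tableView cellForRowAtIndexPath:indexPath];
        visibleCell.imageView.image = thumb;
    });
});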
So I have this code to create a UIImage:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
[border.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *thumbnailImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
At this point, the size on the image is correct, 80x100.
Then it runs this code:
NSData *fullImageData = UIImageJPEGRepresentation(image, 1.0f);
And the NSData of the image returns an image at the size 160x200, twice as large as it should be.
It became clear that the reason for this is the line:
UIGraphicsBeginImageContextWithOptions(border.frame.size, YES, 0);
The 0 at the end is the scale, and because it's 0 it uses the device's scale factor. I keep it this way to maintain a clear image. However, when I set the scale to 1, although the image stays the size it should be, it doesn't come out in retina quality. What I want to do is keep it in retina quality, but also keep it at the right size. Is there a way to do this?
Try resizing the UIImage before calling UIImageJPEGRepresentation:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
if ([UIScreen mainScreen].scale > 1)
{
    CGFloat screenScale = [UIScreen mainScreen].scale;
    thumbnailImage = [self resizeImage:thumbnailImage
                               newSize:CGSizeMake(thumbnailImage.size.width / screenScale,
                                                  thumbnailImage.size.height / screenScale)];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
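For the original question, a hedged sketch: render into a context with an explicit scale of 1.0 so the encoded JPEG has exactly as many pixels as the image has points (an encoded JPEG stores only pixels, so you trade away retina pixel density for the smaller size):
UIGraphicsBeginImageContextWithOptions(thumbnailImage.size, NO, 1.0); // 0.0 would use the device scale
[thumbnailImage drawInRect:CGRectMake(0, 0, thumbnailImage.size.width, thumbnailImage.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *fullImageData = UIImageJPEGRepresentation(flattened, 1.0f);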
Here's the code I have but it's crashing ... any ideas?
UIImage *tempImage = [[UIImage alloc] initWithData:imageData];
CGImageRef imgRef = [tempImage CGImage];
[tempImage release];
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGRect bounds = CGRectMake(0, 0, width, height);
CGSize size = bounds.size;
CGAffineTransform transform = CGAffineTransformMakeScale(4.0, 4.0);
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, bounds, imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What am I missing here? Basically I'm just trying to scale the image up and crop it to be the same size as the original.
Thanks
The problem is this line:
CGImageRef imgRef = [tempImage CGImage];
Or, more precisely, the direct follow-up to this line:
[tempImage release];
You are getting a Core Foundation object here, the CGImageRef. Core Foundation objects only have retain/release memory management, with no autoreleased objects. Hence, when you release the UIImage on the second line, the CGImageRef is deallocated along with it. This means the behavior is undefined when you try to draw it further down.
I can think of three fixes:
use autorelease to delay the release of the image: [tempImage autorelease];
move the release to the very bottom of your method (see the sketch after this list)
retain and release the image using CFRetain and CFRelease.
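A minimal sketch of the second fix, assuming manual reference counting as in the question:
UIImage *tempImage = [[UIImage alloc] initWithData:imageData];
CGImageRef imgRef = [tempImage CGImage];
CGRect bounds = CGRectMake(0, 0, CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, CGAffineTransformMakeScale(4.0, 4.0));
CGContextDrawImage(context, bounds, imgRef);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[tempImage release]; // safe now: imgRef is no longer needed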
Try this one:
-(CGImageRef)imageCapture
{
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rect = CGRectMake(0, 0, 320, 480); // hardcoded for a 320x480 screen; consider self.view.bounds instead
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
return imageRef; // caller owns this reference (Create rule) and must release it
}
Use the lines below whenever you want to capture the screen. Since imageCapture returns a CGImageRef it created, release it once the UIImage has been made:
CGImageRef capture = [self imageCapture];
UIImage *captureImg = [[UIImage alloc] initWithCGImage:capture];
CGImageRelease(capture);