I am using a UIImageView with a size of 320x320 to display a large image (350x783).
When I place the large image into the image view, it looks squeezed and compressed, nothing like the original quality.
My question is: how can I shrink the large image so that it keeps as much of the original quality as possible?
You can set a proper content mode on your image view, like this:
imageView.contentMode = UIViewContentModeScaleAspectFit
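A minimal sketch of that, assuming a 320x320 image view outlet named imageView (the property and image names here are assumptions, not from the question):

imageView.contentMode = UIViewContentModeScaleAspectFit; // keeps the aspect ratio, no squeezing
imageView.clipsToBounds = YES;                           // in case the image ever overflows the view
imageView.image = largeImage;                            // your 350x783 UIImage

UIViewContentModeScaleAspectFit letterboxes the image inside the view; UIViewContentModeScaleAspectFill would fill the view and crop the excess instead.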
Alternatively, you can scale the image itself, for example with a method like this in a UIImage category:
- (UIImage *)scaleToSize:(CGSize)size {
// Create a bitmap graphics context
// This will also set it as the current context
UIGraphicsBeginImageContext(size);
// Draw the scaled image in the current context
[self drawInRect:CGRectMake(0, 0, size.width, size.height)];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}
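A hedged usage sketch, assuming the method above lives in a UIImage category (e.g. a hypothetical UIImage+Scaling) and that the goal is to fit the 350x783 photo into a 320x320 image view without distortion:

UIImage *original = imageView.image; // 350 x 783 in this example
CGFloat ratio = MIN(320.0 / original.size.width,
                    320.0 / original.size.height);          // ~0.41 for 350x783
CGSize fitSize = CGSizeMake(original.size.width * ratio,
                            original.size.height * ratio);  // ~143 x 320
imageView.image = [original scaleToSize:fitSize];

Note that UIGraphicsBeginImageContext creates a context with a scale factor of 1; on Retina screens, UIGraphicsBeginImageContextWithOptions(size, NO, 0) generally gives a sharper result.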
You can also do it like this:
UIImage *sourceImage = yourImage; // the UIImage you want to scale
CGSize newSize = CGSizeMake(80, 80);
UIGraphicsBeginImageContext(newSize);
[sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
yourImageView.image = newImage;
I am creating a "cartoonizer" application, which takes an image as input and reworks it into a comic-book style.
At the moment, the original image is stored in a UIImageView, and I access it via:
imageView.image
I was wondering whether it is possible to position an object (like a speech balloon) on top of imageView.image and then save the image with the object on it, as if it had originally been part of the image content.
Thank you in advance.
You can do it like this, assuming the top image has the transparency (alpha) you want.
As you said, you get your image from imageView.image.
From here: Blend two UIImages based on alpha/transparency of top image
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"];
UIImage *image = [UIImage imageNamed:@"top.png"];
CGSize newSize = CGSizeMake(width, height); // size of the combined image
UIGraphicsBeginImageContext(newSize);
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// Apply supplied opacity
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
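If the balloon should keep the position the user gave it rather than cover the whole photo, here is a sketch of the same idea using drawAtPoint: (balloonImage and balloonView are hypothetical names, and the sketch assumes the balloon's frame is expressed in the photo's coordinate space):

UIImage *photo = imageView.image;
UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
// Draw the original photo first
[photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
// Then draw the balloon where the user dropped it
[balloonImage drawAtPoint:balloonView.frame.origin];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = flattened; // the balloon is now part of the image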
More compositing techniques here: http://mobile.tutsplus.com/tutorials/iphone/ios-sdk-advanced-uiimage-techniques/
In my app, the user is able to put stickers on top of a photo. When they go to save their creation, I do a screen grab and store it in a UIImage:
UIGraphicsBeginImageContextWithOptions(self.mainView.bounds.size, NO, [UIScreen mainScreen].scale);
[self.mainView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *resultImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
(where self.mainView has a subview UIImageView which holds the photo, and another subview UIView which holds the stickers).
I am wondering, is it possible to do a screen shot in this manner, and maintain the resolution of the aforementioned photo?
The following will 'flatten' two UIImages into one while maintaining the resolution of the original image(s):
CGSize photoSize = photoImage.size;
UIGraphicsBeginImageContextWithOptions(photoSize, NO, 0.0);
CGRect photoRect = CGRectMake(0, 0, photoSize.width, photoSize.height);
// Add the original photo into the context
[photoImage drawInRect:photoRect];
// Add the sticker image with its upper left corner set to where the user placed it
[stickerImage drawAtPoint:stickerView.frame.origin];
// Get the resulting 'flattened' image
UIImage *flattenedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The above assumes photoImage and stickerImage are both UIImage instances and that stickerView is the UIView containing the sticker image, so its frame can be used to determine the sticker's origin.
If you have multiple stickers, just iterate through the collection.
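A minimal sketch of that loop, assuming a hypothetical stickersContainerView whose subviews are the sticker UIImageViews and whose coordinate space matches the photo's:

UIGraphicsBeginImageContextWithOptions(photoImage.size, NO, 0.0);
[photoImage drawInRect:CGRectMake(0, 0, photoImage.size.width, photoImage.size.height)];
for (UIImageView *stickerView in stickersContainerView.subviews) {
    // Draw each sticker at the position the user gave it
    [stickerView.image drawAtPoint:stickerView.frame.origin];
}
UIImage *flattenedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();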
If you are looking to save an image of your current view then this might help you.
UIGraphicsBeginImageContext(self.scrollView.contentSize);
[self.scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect visibleRect = CGRectMake(self.scrollView.contentOffset.x,
                                self.scrollView.contentOffset.y,
                                self.scrollView.frame.size.width,
                                self.scrollView.frame.size.height);
CGImageRef imageRef = CGImageCreateWithImageInRect(finalImage.CGImage, visibleRect);
UIImage *screenImage = [UIImage imageWithCGImage:imageRef scale:[UIScreen mainScreen].scale orientation:UIImageOrientationUp];
CGImageRelease(imageRef);
I want to produce the following result from two images.
Please help me.
To combine two images and show the result in an image view, try this:
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"]; // background image
UIImage *image = [UIImage imageNamed:@"top.png"]; // foreground image
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext(newSize);
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then assign newImage to your UIImageView.
After combining the two images with @Sumanth's code, you still need to mask the final image, as described in the how-to-mask-an-image link.
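A hedged sketch of that masking step, assuming a hypothetical maskImage: a grayscale mask the same size as the combined image, where black areas are kept and white areas become transparent (as in the linked answer):

CGImageRef maskRef = maskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef maskedRef = CGImageCreateWithMask(newImage.CGImage, mask);
UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(mask);
CGImageRelease(maskedRef);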
I am capturing a CGRect with the following code, but the resulting image is not the image I want: it has a transparent background. What should I do to remove the transparent background, as suggested in the picture?
- (UIImage *)captureScreenInRect:(CGRect)captureFrame {
CALayer *layer;
layer = imageScrollview.layer;
UIGraphicsBeginImageContext(imageScrollview.bounds.size);
CGContextClipToRect (UIGraphicsGetCurrentContext(),captureFrame);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenImage;
}
Translate your context so that its origin matches your captureFrame:
UIGraphicsBeginImageContext(captureFrame.size); // make the context only as large as the capture area
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, -captureFrame.origin.x, -captureFrame.origin.y);
[imageScrollview.layer renderInContext:c];
UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
(written from memory, untested)
Additionally, clipping the context is not necessary, as the image is already clipped by the image context's bounds.
Alternatively, try building a crop rect like this one (WIDTH and HEIGHT are the size of the area you want to keep):
CGRect cropRect = CGRectMake(imageScrollview.frame.origin.x+15, imageScrollview.frame.origin.y+15, WIDTH, HEIGHT);
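A hedged sketch of how such a crop rect might then be applied to the captured screenImage; this assumes cropRect has already been converted into the captured image's coordinate space, which depends on your view hierarchy:

CGRect pixelRect = CGRectMake(cropRect.origin.x * screenImage.scale,
                              cropRect.origin.y * screenImage.scale,
                              cropRect.size.width * screenImage.scale,
                              cropRect.size.height * screenImage.scale); // points -> pixels
CGImageRef croppedRef = CGImageCreateWithImageInRect(screenImage.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:screenImage.scale
                                      orientation:screenImage.imageOrientation];
CGImageRelease(croppedRef);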
I am working on an image gallery that has thumbnails of different sizes. I want to convert these rectangular thumbnails to squares so that they all appear similar in size. I don't mind cropping away the extended portion, but I am not sure how to do it. Can anyone please help me?
Thanks,
Pankaj
You need to use CGImageCreateWithImageInRect, passing in the image and the required bounds:
CGImageRef imageRef = CGImageCreateWithImageInRect([anImage CGImage], requiredBounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
I have added this to a UIImage category (UIImage+Resize) in the following post; you can download the source code as well: Categories example.
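A hedged sketch of using that for the square-thumbnail case: a hypothetical helper that crops a UIImage to a centered square (CGImageCreateWithImageInRect works in pixel coordinates, hence the multiplication by the image's scale):

static UIImage *squareThumbnail(UIImage *anImage) {
    CGFloat side = MIN(anImage.size.width, anImage.size.height);
    CGRect requiredBounds = CGRectMake((anImage.size.width - side) / 2.0 * anImage.scale,
                                       (anImage.size.height - side) / 2.0 * anImage.scale,
                                       side * anImage.scale,
                                       side * anImage.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(anImage.CGImage, requiredBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef
                                                scale:anImage.scale
                                          orientation:anImage.imageOrientation];
    CGImageRelease(imageRef);
    return croppedImage;
}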
Well, if you use a UIImageView to display your images (which I am more than sure you do), you can set its contentMode property to UIViewContentModeScaleAspectFill. This should 'crop' your image to the boundaries of the UIImageView. If the image goes beyond the boundaries of the UIImageView, make sure clipsToBounds is also set to YES.
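For example (galleryImageView and thumbnailImage are assumed names for this sketch):

galleryImageView.contentMode = UIViewContentModeScaleAspectFill; // fill the square, cropping the excess
galleryImageView.clipsToBounds = YES;                            // hide whatever falls outside the bounds
galleryImageView.image = thumbnailImage;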
Let me know if that helps.
I'm using the method below. The inputs are the UIImage to scale and the size of the UIImageView's frame the image sits in. It works when the frame's height and width are equal.
One important thing: I keep the image's aspect ratio; I don't stretch the image to cover the full square. If you want that, change the drawInRect line to [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)]; and remove the if-else.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Ratio of the shorter side to the longer side; used to letterbox the
    // image inside the square while keeping its aspect ratio.
    CGFloat scaleRatio;
    if (image.size.width > image.size.height) {
        scaleRatio = image.size.height / image.size.width;
    } else {
        scaleRatio = image.size.width / image.size.height;
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    // The draw rects below already scale and center the image,
    // so no extra CGContext transform is needed.
    if (image.size.width > image.size.height) {
        // Landscape: full width, reduced height, centered vertically.
        [image drawInRect:CGRectMake(0, (newSize.height / 2) - (newSize.height * scaleRatio / 2), newSize.width, newSize.height * scaleRatio)];
    } else {
        // Portrait (or square): full height, reduced width, centered horizontally.
        [image drawInRect:CGRectMake((newSize.width / 2) - (newSize.width * scaleRatio / 2), 0, newSize.width * scaleRatio, newSize.height)];
    }
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
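A small usage sketch, assuming the method above lives in the same class and using assumed names (photo, thumbnailImageView) for a 100x100 square thumbnail:

UIImage *thumb = [self imageWithImage:photo scaledToSize:CGSizeMake(100, 100)];
thumbnailImageView.image = thumb;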