Enhance image quality after resizing - iPhone

Can anyone suggest methods to improve the quality of an image that is being resized to a larger size?
On resizing, the resulting image pixelates. I need to reduce this pixelation as much as possible.

You can use the following code to compress an image until it fits under a maximum file size (JPEG quality runs from 0.0 to 1.0):
CGFloat compression = 0.9f;    // start near maximum quality
CGFloat maxCompression = 0.1f; // lowest quality we are willing to accept
int maxFileSize = 160*165;     // target file size in bytes; fill in your own limit
NSData *imageData = UIImageJPEGRepresentation(image.image, compression);
while ([imageData length] > maxFileSize && compression > maxCompression)
{
    compression -= 0.1f;
    imageData = UIImageJPEGRepresentation(image.image, compression);
}
NSLog(@"image compressed successfully");
[image setImage:[UIImage imageWithData:imageData]];

You can also use the following code to resize an image to a fixed size:
image1 = [self imageWithImage:image1 convertToSize:CGSizeMake(100, 100)];

// Converts the image to the given size
- (UIImage *)imageWithImage:(UIImage *)image convertToSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *destImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return destImage;
}

There is an article called Resize a UIImage the right way; you should read and understand every aspect it covers. I think it will answer every question that comes to mind, and it's a nice article with proper graphical illustrations (which makes it less boring to read).

Related

Pixelated iPhone UIImageView

I've been having issues rendering images with the UIImageView class. The pixelation seems to occur mostly on the edges of the image I am trying to show.
I have tried changing the property 'Render with edge antialiasing' to no avail.
The image files contain images that are larger than what will appear on the screen.
It seems to be royally messing with the quality of the image and then displaying it. I tried to post images here, but StackOverflow is denying me that privilege. So here's a link to what's going on.
http://i.imgur.com/QpUOTOF.png
The sun in this image is the problem I'm speaking of. Any ideas?
On-the-fly image resizing is quick and of low quality. For bundled images, it is worth the extra bundle space to include downsized versions. For downloaded images, you can achieve better results by resizing with Core Graphics into a new UIImage before you set the image property.
CGSize newSize = CGSizeMake(newWidth, newHeight);
UIGraphicsBeginImageContextWithOptions(newSize, // context size
                                       NO,      // opaque?
                                       0);      // image scale; 0 means "device screen scale"
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[bigImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
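For example (a sketch; imageView here stands for whatever UIImageView displays the result):
imageView.image = newImage; // assign the pre-scaled image instead of bigImage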
Use the following method to get an image with a specific width and height:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;
    // Scale by the smaller ratio so the image fits inside the target size
    // while keeping its aspect ratio.
    if (widthRatio > heightRatio)
    {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    }
    else
    {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This method returns a new image at the size you specified, aspect-fitted.
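For example, a hypothetical call (ImageUtils stands for whichever class you put the method in, and photo for your source image):
UIImage *thumbnail = [ImageUtils resizeImage:photo withWidth:100 withHeight:100];
// thumbnail now fits inside 100x100 while keeping its aspect ratio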
How big is your image and what is the size of the imageView? Don't rely on UIImageView to scale it down for you. You probably need to resize it manually. This would also be a bit more memory efficient.
I use categories like these:
>>>github link <<<
to do image resizing.
This also gives you some other nice functions, for rounded corners etc.
Also keep in mind, that you need a transparent border at the edge of an image if you want to rotate it to avoid aliasing.
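A minimal sketch of adding such a border (the method is my own illustration, not one of the linked categories):
// Redraw the image into a context one point larger on each side, leaving
// the outer edge transparent so rotated edges antialias against
// transparency instead of being clipped hard.
- (UIImage *)imageWithTransparentBorder:(UIImage *)image
{
    CGSize paddedSize = CGSizeMake(image.size.width + 2, image.size.height + 2);
    UIGraphicsBeginImageContextWithOptions(paddedSize, NO, 0); // NO = keep alpha channel
    [image drawInRect:CGRectMake(1, 1, image.size.width, image.size.height)];
    UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return padded;
}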

Core graphics RGB data issue

I am trying to do pixel-by-pixel image filters using Core Graphics (breaking a CGImage into unsigned integers using CFData).
When I try to create an image with the processed data, however, the resulting image comes out with significantly different colors.
I commented out the entire loop where I actually alter the pixels' rgb values and nothing changes, either.
When I initialize the UIImage I am using in the filter, I resize it using drawInRect: with UIGraphicsBeginImageContext(), on an image taken from the camera.
When I remove the resize step and set my image directly from the camera, the filters seem to work just fine. Here's the code where I initialize the image I am using (from inside didFinishPickingImage):
self.editingImage is a UIImageView and self.editingUIImage is a UIImage
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo
{
    self.didAskForImage = YES;
    UIGraphicsBeginImageContext(self.editingImage.frame.size);
    float prop = image.size.width / image.size.height;
    float left, top, width, height;
    if (prop < 1) {
        height = self.editingImage.frame.size.height;
        width = (height / image.size.height) * image.size.width;
        left = (self.editingImage.frame.size.width - width) / 2;
        top = 0;
    } else {
        width = self.editingImage.frame.size.width;
        height = (width / image.size.width) * image.size.height;
        top = (self.editingImage.frame.size.height - height) / 2;
        left = 0;
    }
    [image drawInRect:CGRectMake(left, top, width, height)];
    self.editingUIImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.editingImage.image = self.editingUIImage;
    [self.contrastSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [self.brightnessSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [picker dismissModalViewControllerAnimated:YES];
    picker = nil;
}
This resizes the image just the way I need it as far as positioning goes.
Here's the image filtering function; I've taken the actual loop contents out because they're irrelevant.
- (void)doImageFilter:(id)sender {
    CGImageRef src = self.editingUIImage.CGImage;
    CFDataRef dta;
    dta = CGDataProviderCopyData(CGImageGetDataProvider(src));
    UInt8 *pixData = (UInt8 *)CFDataGetBytePtr(dta);
    int dtaLen = CFDataGetLength(dta);
    for (int i = 0; i < dtaLen; i += 3) {
        // the loop
    }
    CGContextRef ctx;
    ctx = CGBitmapContextCreate(pixData, CGImageGetWidth(src), CGImageGetHeight(src), 8,
                                CGImageGetBytesPerRow(src), CGImageGetColorSpace(src),
                                kCGImageAlphaPremultipliedLast);
    CGImageRef newCG = CGBitmapContextCreateImage(ctx);
    UIImage *new = [UIImage imageWithCGImage:newCG];
    CGContextRelease(ctx);
    CFRelease(dta);
    CGImageRelease(newCG);
    self.editingImage.image = new;
}
(Screenshots in the original post show the image looking correct at first, then with heavily shifted colors after doImageFilter runs.)
As mentioned before, this only happens when I use the resize method shown above.
Really stumped on this one, been researching it all day... any help is very much appreciated!
Cheers
Update: I've examined all the image objects' color spaces and they're all kCGColorSpaceDeviceRGB. Pretty stumped on this one, guys. I'm pretty sure something is going wrong when I break the image into unsigned integers, but I'm not sure what. Anyone?
Your problem is in this call, specifically its last argument:
ctx = CGBitmapContextCreate(pixData,
                            CGImageGetWidth(src),
                            CGImageGetHeight(src),
                            8,
                            CGImageGetBytesPerRow(src),
                            CGImageGetColorSpace(src),
                            kCGImageAlphaPremultipliedLast);
You're making an assumption about the alpha and the component ordering of the data of the source image, which is apparently not correct. You should get that from the source image via CGImageGetBitmapInfo(src).
To avoid issues like this one, if you're starting with an arbitrary CGImage and you want to manipulate the bytes of the bitmap directly, it is best to make a CGBitmapContext in a format that you specify yourself (not directly taken from the source image). Then, draw your source image into the bitmap context; CG will convert the image's data into your bitmap context's format, if necessary. Then get the data from the bitmap context and manipulate it.
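A sketch of that approach, adapted to the code above (the RGBA layout and buffer handling are my assumptions, not the original poster's code; note the loop stride is now 4 bytes per pixel, not 3):
// Define the bitmap format yourself, let CG convert the source image into
// it, then manipulate the bytes with full knowledge of the layout.
CGImageRef src = self.editingUIImage.CGImage;
size_t width = CGImageGetWidth(src);
size_t height = CGImageGetHeight(src);
size_t bytesPerRow = width * 4; // 4 bytes per pixel: RGBA
UInt8 *pixData = calloc(height * bytesPerRow, sizeof(UInt8));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixData, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
// CG converts the source image into the declared format during this draw.
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), src);
for (size_t i = 0; i < height * bytesPerRow; i += 4) {
    // pixData[i..i+3] = premultiplied R, G, B, A for one pixel
}
CGImageRef newCG = CGBitmapContextCreateImage(ctx);
self.editingImage.image = [UIImage imageWithCGImage:newCG];
CGImageRelease(newCG);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixData);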

Resize an ALAsset Photo takes a long time. Any way around this?

I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the fullResolution photos to a smaller size. However, this takes some time, especially when resizing up to 3 photos to a smaller size.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;
    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300px on the long end) and a large image (max 960px or 640px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual resizing code I grabbed from somewhere on here:
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;
    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low in resolution, and at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset and the format of the asset. But you don't have any control over that. But you do have control over where the resizing takes place. I suspect that right now you are resizing the image in your main thread, which will cause the UI to grind to a halt while you are doing the resizing. Do enough images, and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;
        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // Do whatever you need to do on the main thread here once your image is resized.
            // This is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy.
        });
    }
});
You'll have to think about things a little differently this way. For example, you've now broken up what used to be a single step into multiple steps now. Code that was running after this will end up running before the image resize is complete or before you actually do anything with the images, so you need to make sure that you didn't have any dependencies on those images or you'll likely crash.
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
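For example (a sketch reusing the scaledToWidth: category from the question; fullScreenImage is already rotated upright, so the orientation branches above become unnecessary):
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullScreenImage]; // screen-sized, orientation already applied
if (iref)
{
    UIImage *largeImage = [UIImage imageWithCGImage:iref];
    UIImage *previewImage = [largeImage scaledToWidth:300];
}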

Add 2 UIImages into One UIImage

I am adding two images on top of each other and wanted to know if this is a good way to do it. This code works and seems solid.
So my question really is: is this good, or is there a better way?
PS: Warning code written by a designer.
Call the function:
- (IBAction)combineImages:(id)sender { // method name added so this compiles
    UIImage *myFirstImage = [UIImage imageNamed:@"Image.png"];
    UIImage *myTopImage = [UIImage imageNamed:@"Image2.png"];
    CGFloat yFloat = 50;
    CGFloat xFloat = 50;
    UIImage *newImage = [self placeImageOnImage:myFirstImage topImage:myTopImage x:&xFloat y:&yFloat];
}
The Function:
- (UIImage *)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat *)x y:(CGFloat *)y {
    // If you want the top image to be added next to the image, make this CGSize bigger.
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);
    UIGraphicsBeginImageContext(newSize);
    [topImage drawInRect:CGRectMake(*x, *y, topImage.size.width, topImage.size.height)];
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeDestinationOver alpha:1];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Looks OK. Perhaps you don't really need the CGFloat pointers, but that's fine, too.
The main idea is correct. There is no better way to do what you want.
Minuses:
1) Consider UIGraphicsBeginImageContextWithOptions method. UIGraphicsBeginImageContext isn't good for retina.
2) Don't pass floats as pointers. Use x:(CGFloat)x y:(CGFloat)y instead
You should use the begin-context version, UIGraphicsBeginImageContextWithOptions, that allows you to specify options for scale (and pass 0 as the scale) so you don't lose any quality on retina displays.
If you want one image drawn on top of another image, just draw the one in back, then the one in front, exactly as if you were using paint. There is no need to use blend modes.
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[topImage drawInRect:CGRectMake(*x,*y,topImage.size.width,topImage.size.height)];
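Putting both answers' points together, a corrected version of the helper might look like this (a sketch, not the original poster's code, with plain CGFloat parameters and a retina-aware context):
- (UIImage *)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat)x y:(CGFloat)y
{
    CGSize newSize = image.size;
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0); // scale 0 = device screen scale
    // Draw back-to-front, as if painting: background first, then the overlay.
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [topImage drawInRect:CGRectMake(x, y, topImage.size.width, topImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}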

How to compress an image on the iPhone?

I'm taking images from the photo library. I have large images of 4-5 MB, but I want to compress them, since I need to store the images in the iPhone's local storage. To use less memory (and get fewer memory warnings), I need to compress these images.
I don't know how to compress images and videos, so I want to know how to compress images.
UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
NSData *data = UIImageJPEGRepresentation(image, 1.0);
NSLog(@"found an image");
NSString *path = [destinationPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.jpeg", name]];
[data writeToFile:path atomically:YES];
This is the code for saving my image. I don't want to store the whole image, as it's too big. So I want to compress it to a much smaller size, as I'll need to attach multiple images.
Thanks for the reply.
You can choose a lower quality for JPEG encoding
NSData* data = UIImageJPEGRepresentation(image, 0.8);
Something like 0.8 shouldn't be too noticeable, and should really improve file sizes.
On top of this, look into resizing the image before making the JPEG representation, using a method like this:
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Source: The simplest way to resize an UIImage?
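To combine the two steps (a sketch; ImageUtils stands for wherever you put the class method above, and the 640x480 target and 0.8 quality are example values):
UIImage *smaller = [ImageUtils imageWithImage:image scaledToSize:CGSizeMake(640, 480)];
NSData *data = UIImageJPEGRepresentation(smaller, 0.8); // lower quality = smaller file
[data writeToFile:path atomically:YES];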
UIImageJPEGRepresentation(UIImage, quality);
1.0 means maximum quality and 0.0 means minimum quality, so lower the quality parameter in the line below to reduce the file size of the image:
NSData *data = UIImageJPEGRepresentation(image, 1.0);
The function's signature is:
NSData *UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality);
OR
NSData *image_Data = UIImageJPEGRepresentation(image_Name, compressionQuality);
It returns the image as JPEG data, and may return nil if the image has no CGImageRef or an invalid bitmap format. compressionQuality 0.0 gives the most compression (lowest quality); 1.0 gives the least compression (highest quality).