I have a buffer containing JPEG image data that I need to display in a UIImageView. I convert the buffer into a UIImage and use it as follows:
NSData *data = [NSData dataWithContentsOfFile:appFile];
UIImage *theImage = [[UIImage alloc] initWithData:data];
The image is displayed, but at a lower resolution than the original. Do I need to convert it into a bitmap first and then use it with UIImage? I don't seem to be able to use NSBitmapImageRep. Any ideas on how this can be achieved?
If the UIImageView frame dimensions are different from the source image dimensions, you'll get a resized version of the image. The quality can be pretty rough depending on how much scaling is being performed.
I found this code on the net somewhere (sorry, original author, I've lost the attribution) that performs a smoother resize:
UIImage* resizedImage(UIImage *inImage, CGRect thumbRect)
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);

    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate
    // (see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section).
    // Only RGB 8-bit images with an alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast,
    // kCGImageAlphaPremultipliedFirst, or kCGImageAlphaPremultipliedLast
    // (plus a few other oddball image kinds) are supported.
    // The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                   // width
        thumbRect.size.height,                  // height
        CGImageGetBitsPerComponent(imageRef),   // really needs to always be 8
        4 * thumbRect.size.width,               // bytes per row
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get an image from the context and wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage* result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);   // ok if NULL
    CGImageRelease(ref);

    return result;
}
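A call site might look something like this (just a sketch: theImage and data are from your snippet, and imageView stands in for whatever UIImageView will display the result):
// Sketch only: "theImage" comes from the question's code,
// "imageView" is an assumed UIImageView outlet.
CGRect targetRect = CGRectMake(0, 0,
                               imageView.bounds.size.width,
                               imageView.bounds.size.height);
imageView.image = resizedImage(theImage, targetRect);
On a Retina device you may want to multiply the target size by [UIScreen mainScreen].scale so the resized bitmap has enough pixels.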
Related
I have an NSImage that I would like to save as a PNG, but with the alpha channel removed and 5-bit colour. I am currently doing this to create my PNG:
NSData *imageData = [image TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = nil;
imageData = [imageRep representationUsingType:NSPNGFileType properties:imageProps];
[imageData writeToFile:fileNameWithExtension atomically:YES];
I've read through lots of similar questions on SO but am confused as to the best/correct approach to use. Do I create a new Core Graphics context and draw into that? Can I create a new imageRep with these parameters directly? Any help, with a code snippet, would be greatly appreciated.
Cheers
Dave
I did this in the end. It looks ugly and smells bad to me; any better suggestions greatly appreciated.
// Create a graphics context (5 bits per colour, no alpha) to render the tile
static int const kNumberOfBitsPerColour = 5;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef tileGraphicsContext = CGBitmapContextCreate(NULL,
                                                         rect.size.width,
                                                         rect.size.height,
                                                         kNumberOfBitsPerColour,
                                                         2 * rect.size.width,
                                                         colorSpace,
                                                         kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst);

// Draw the clipped part of the image into the tile graphics context
NSData *imageData = [clippedNSImage TIFFRepresentation];
CGImageRef imageRef = [[NSBitmapImageRep imageRepWithData:imageData] CGImage];
CGContextDrawImage(tileGraphicsContext, rect, imageRef);

// Create an NSImage from the tile graphics context
CGImageRef newImage = CGBitmapContextCreateImage(tileGraphicsContext);
NSImage *newNSImage = [[NSImage alloc] initWithCGImage:newImage size:rect.size];

// Clean up
CGImageRelease(newImage);
CGContextRelease(tileGraphicsContext);
CGColorSpaceRelease(colorSpace);
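To actually get a PNG file out of this, one option (just a sketch, reusing fileNameWithExtension from the original snippet) is to go back through NSBitmapImageRep with the new image:
// Sketch: flatten newNSImage back through NSBitmapImageRep and write out PNG data.
// Note the PNG itself is still 8 bits per sample; the 5-bit context above has
// already quantised the colours.
NSData *newTiffData = [newNSImage TIFFRepresentation];
NSBitmapImageRep *pngRep = [NSBitmapImageRep imageRepWithData:newTiffData];
NSData *pngData = [pngRep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:fileNameWithExtension atomically:YES];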
I got a UIImage from UIImagePickerController, and I'm using the code from this site to resize the image:
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context and a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
UIImagePNGRepresentation() fails to return NSData for the resized image, but UIImageJPEGRepresentation() succeeds.
How do we know whether a UIImage can be represented as PNG or JPEG? What is missing in the above code that prevents the resized image from being represented as PNG?
According to Apple's documentation: "This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format."
Which bitmap formats are supported by the PNG representation? How do I get a UIImage into a PNG-supported format?
It turned out to be a mistake: in another part of the code the image was rescaled with the following:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width,
                                             size.height,
                                             8,
                                             0,
                                             CGImageGetColorSpace(source),
                                             kCGImageAlphaNoneSkipFirst);
Changing kCGImageAlphaNoneSkipFirst to CGImageGetBitmapInfo(source) fixed the problem.
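In other words, the context creation becomes (same size and source variables as above):
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width,
                                             size.height,
                                             8,
                                             0,
                                             CGImageGetColorSpace(source),
                                             CGImageGetBitmapInfo(source)); // carry over the source's alpha and byte-order flags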
Go to the following link:
How to check if downloaded PNG image is corrupt?
It may help you.
Let me know whether it works or not.
Happy coding!
I have a resource (a .png file) that shows a picture frame (border).
The .png file is 100x100px, and the border width is 10px.
My Question:
How can I create another UIImage from this image, with a different size, without ruining the border's width?
The Problem:
When I try to draw the new image from the original with CGContextDrawImage, I get a new image with the new size, but my border proportions are ruined.
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newWidth, newHeight));
CGImageRef imageRef = //... the image

// Build a context that's the same dimensions as the new size
CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                            newRect.size.width,
                                            newRect.size.height,
                                            CGImageGetBitsPerComponent(imageRef),
                                            0,
                                            CGImageGetColorSpace(imageRef),
                                            CGImageGetBitmapInfo(imageRef));

// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);

// Draw into the context; this scales the image
CGContextDrawImage(bitmap, newRect, imageRef);

// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

// Clean up
CGContextRelease(bitmap);
CGImageRelease(newImageRef);
For example, when I tried to create an image of size 800x100px, I got an image with a very thin top and bottom border.
What I need is for the border to stay the same width.
Note:
Using resizableImageWithCapInsets: won't help me, because I need a new image with the new size to save to disk.
You can use resizableImageWithCapInsets:
UIImage *img = [UIImage imageNamed:@"myResource"];
img = [img resizableImageWithCapInsets:UIEdgeInsetsMake(10,10,10,10)];
I've never used this approach with CGContextDrawImage, but it should work.
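If you then need a flattened UIImage at the new size so it can be written to disk (per the note in the question), one sketch that should work is to draw the cap-inset image into an image context; the 800x100 target size and outputPath below are placeholders:
// Sketch: drawInRect: honours the cap insets, so the 10px border is not stretched.
CGSize targetSize = CGSizeMake(800, 100);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
[img drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// "outputPath" is a placeholder for wherever the file should go.
[UIImagePNGRepresentation(flattened) writeToFile:outputPath atomically:YES];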
I have a UIImageView, and there is another canvas view over the UIImageView. I want to cut the UIImageView's image according to the canvas view's frame. But after cropping, the image gets stretched and blurred. Below is my code.
UIImage *images = [self captureScreenInRect1:canvas.frame];
self.imgViewCurrent.contentMode = UIViewContentModeScaleToFill;
self.imgViewCurrent.image = images;

- (UIImage *)captureScreenInRect1:(CGRect)captureFrame {
    CALayer *layer = self.view.layer;
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 1.0);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureFrame);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRef imageRef = CGImageCreateWithImageInRect([screenImage CGImage], captureFrame);
    UIImage *img = [UIImage imageWithCGImage:imageRef scale:0.0 orientation:UIImageOrientationUp];
    CGImageRelease(imageRef);
    return img;
}
Change this line:
self.imgViewCurrent.contentMode = UIViewContentModeScaleToFill;
to this line:
self.imgViewCurrent.contentMode = UIViewContentModeScaleAspectFit;
UIViewContentModeScaleToFill basically stretches the image.
After cropping your image, you can resize it to a custom size. It may help you.
- (UIImage *)resizedImage:(UIImage *)inImage thumbRect:(CGRect)thumbRect
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);

    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate
    // (see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section).
    // Only RGB 8-bit images with an alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast,
    // kCGImageAlphaPremultipliedFirst, or kCGImageAlphaPremultipliedLast
    // (plus a few other oddball image kinds) are supported.
    // The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                   // width
        thumbRect.size.height,                  // height
        CGImageGetBitsPerComponent(imageRef),   // really needs to always be 8
        4 * thumbRect.size.width,               // bytes per row
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get an image from the context and wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);   // ok if NULL
    CGImageRelease(ref);

    return result;
}
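A call site for this, using the cropped image and canvas from your snippet (and assuming the method lives in the same view controller), might be:
UIImage *images = [self captureScreenInRect1:canvas.frame];
CGRect targetRect = CGRectMake(0, 0, canvas.frame.size.width, canvas.frame.size.height);
self.imgViewCurrent.image = [self resizedImage:images thumbRect:targetRect];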
Wondering if there is a way to isolate a single color in an image, either using masks or perhaps even a custom color space. I'm ultimately looking for a fast way to isolate 14 colors out of an image; I figured if there is a masking method it might be faster than walking through the pixels.
Any help is appreciated!
You could use a custom color space (documentation here) and then substitute it for "CGColorSpaceCreateDeviceGray()" in the following code:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create an image rectangle with the current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // <- SUBSTITUTE HERE

    // Create a bitmap context with the current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw the image into the specified rectangle,
    // using the previously defined context (with the grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create a bitmap image from the pixel data in the current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release the colorspace, context and bitmap image
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    return newImage;
}
This code is from this blog, which is worth a look for removing colors from images.
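If the colour-space route doesn't pan out, the pixel walk you mentioned is straightforward, if slower. Here's a rough sketch (the method name, parameters, and tolerance handling are all hypothetical, and it assumes an opaque source image rendered into an 8-bit RGBA context):
// Rough sketch (hypothetical method): keep pixels close to one target colour,
// clear everything else. Assumes an opaque source drawn into an 8-bit RGBA context.
- (UIImage *)imageByIsolatingColorInImage:(UIImage *)image
                                  targetR:(uint8_t)targetR
                                  targetG:(uint8_t)targetG
                                  targetB:(uint8_t)targetB
                                tolerance:(int)tolerance
{
    CGImageRef source = image.CGImage;
    size_t width = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 4 * width,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);

    uint8_t *pixels = (uint8_t *)CGBitmapContextGetData(context);
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + 4 * i; // R, G, B, A for kCGImageAlphaPremultipliedLast
        BOOL match = abs(p[0] - targetR) <= tolerance &&
                     abs(p[1] - targetG) <= tolerance &&
                     abs(p[2] - targetB) <= tolerance;
        if (!match) {
            p[0] = p[1] = p[2] = p[3] = 0; // clear non-matching pixels to transparent
        }
    }

    CGImageRef isolatedRef = CGBitmapContextCreateImage(context);
    UIImage *isolated = [UIImage imageWithCGImage:isolatedRef];
    CGImageRelease(isolatedRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return isolated;
}
For your 14 colours you would extend the match test to a small table of target colours rather than calling this once per colour.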