iPhone - Image created with CGBitmapContextCreate from an OpenGL texture

I want to call CGBitmapContextCreate with texture->data to create a CGContextRef,
and create a CGImageRef by CGBitmapContextCreateImage(context).
However, the image created is not as expected :(
The one created from CGBitmapContextCreateImage:
The actual one (slightly different, since I took it with another camera):
Code (texture.bytesPerPixel = 2):
CGContextRef context = CGBitmapContextCreate(texture.data, 512, 512, 5, 512 * texture.bytesPerPixel, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
CGImageRef cg_img = CGBitmapContextCreateImage(context);
UIImage* ui_img = [UIImage imageWithCGImage: cg_img];
UIImageWriteToSavedPhotosAlbum(ui_img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
Complete Code:
http://ihome.ust.hk/~tm_lksac/OpenGLSprite.m
The application usually calls - (void)drawSelfIfNeeded:(BOOL)needed to update the texture. But I want to take a "screenshot" of the texture and save it as a UIImage for further image processing.
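Apple's supported-pixel-format list only documents one 16-bit RGB combination for bitmap contexts: 5 bits/component, 16 bits/pixel, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder16Little. A 5-6-5 (RGB565) texture has no directly matching context format and usually has to be converted to 32-bit RGBA first. Here is a minimal sketch along those lines, assuming the texture really holds 16-bit 5-5-5-style pixels and that the missing byte-order flag is the problem (both are assumptions, not something stated in the question):

// Sketch only: same call as above, but with an explicit 16-bit little-endian
// byte-order flag, and with the Core Graphics objects released afterwards.
// Note: OpenGL stores texture rows bottom-up, so the saved photo may come out flipped.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(texture.data, 512, 512,
                                             5,                            // bits per component
                                             512 * texture.bytesPerPixel,  // bytes per row
                                             colorSpace,
                                             kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder16Little);
CGColorSpaceRelease(colorSpace);
if (context) {
    CGImageRef cg_img = CGBitmapContextCreateImage(context);
    UIImage *ui_img = [UIImage imageWithCGImage:cg_img];
    UIImageWriteToSavedPhotosAlbum(ui_img, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
    CGImageRelease(cg_img);
    CGContextRelease(context);
}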

Related

<Error>: CGBitmapContextCreate: unsupported parameter combination vs. lower resolution image

- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
    // If the image does not have an alpha layer, add one
    UIImage *image = [self imageWithAlpha];
    // Build a context that's the same dimensions as the new size
    CGBitmapInfo info = CGImageGetBitmapInfo(image.CGImage);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 image.size.width,
                                                 image.size.height,
                                                 CGImageGetBitsPerComponent(image.CGImage),
                                                 0,
                                                 CGImageGetColorSpace(image.CGImage),
                                                 CGImageGetBitmapInfo(image.CGImage));
    // Create a clipping path with rounded corners
    CGContextBeginPath(context);
    [self addRoundedRectToPath:CGRectMake(borderSize, borderSize,
                                          image.size.width - borderSize * 2,
                                          image.size.height - borderSize * 2)
                       context:context
                     ovalWidth:cornerSize
                    ovalHeight:cornerSize];
    CGContextClosePath(context);
    CGContextClip(context);
    // Draw the image to the context; the clipping path will make anything
    // outside the rounded rect transparent
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    // Create a CGImage from the context
    CGImageRef clippedImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    // Create a UIImage from the CGImage
    UIImage *roundedImage = [UIImage imageWithCGImage:clippedImage];
    CGImageRelease(clippedImage);
    return roundedImage;
}
I have the method above and am adding rounded corners to Twitter profile images. For most of the images this works great. A few of them, however, cause the following error:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 96 bytes/row.
I have done some debugging, and the only difference I can find between the images that cause errors and the ones that don't is the CGImageGetBitmapInfo(image.CGImage) parameter passed when creating the context. For the failing images it produces the error above and leaves the context null. I also tried hard-coding the last parameter to kCGImageAlphaPremultipliedLast; the image is then drawn, but at much lower quality. Is there a way to get a higher-quality image on par with the rest of them? The images come from Twitter, so I'm not sure whether there are different versions you can pull.
I have seen the other questions regarding this error too; none of them have solved the issue. I saw this post, but the failing images come out completely blurry after that, and casting the width and height to NSInteger also didn't work. Below is a screenshot of the two profile images and their quality as well. The first one is causing the error.
Does anyone have any idea what the issue is here?
Thanks a ton. This has been killing me.
iOS does not support kCGImageAlphaLast. You need to use kCGImageAlphaPremultipliedLast.
You also need to handle the scale of your initial image. Your current code doesn't, so it downsamples the image if its scale is 2.0.
You can write the entire function more simply by using UIKit functions and classes. UIKit will take care of the scale for you; you just have to pass in the original image's scale when you ask it to create the graphics context.
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize {
    // If the image does not have an alpha layer, add one
    UIImage *image = [self imageWithAlpha];
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); {
        CGRect imageRect  = (CGRect){ CGPointZero, image.size };
        CGRect borderRect = CGRectInset(imageRect, borderSize, borderSize);
        UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:borderRect
                                                   byRoundingCorners:UIRectCornerAllCorners
                                                         cornerRadii:CGSizeMake(cornerSize, cornerSize)];
        [path addClip];
        [image drawAtPoint:CGPointZero];
    }
    UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return roundedImage;
}
If your imageWithAlpha method itself creates a UIImage from another UIImage, it needs to propagate the scale also.
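The imageWithAlpha helper isn't shown in the question, but if it drops down to Core Graphics and back, the last step is usually where the scale gets lost. A sketch of how such a helper might preserve it (cgImageWithAlpha is a hypothetical CGImageRef the helper has just built; this is not the asker's actual code):

// Hypothetical tail of an imageWithAlpha-style UIImage category method.
UIImage *result = [UIImage imageWithCGImage:cgImageWithAlpha
                                      scale:self.scale
                                orientation:self.imageOrientation];
CGImageRelease(cgImageWithAlpha);
return result;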

How to compose two UIImage objects into one UIImage outside of -drawRect:?

I have a few UIImage objects which I want to compose into a single UIImage and then save it to disk. I'm not displaying this on the screen so it doesn't make sense to do it in -drawRect.
Is there a way to create a context similar to the one in -drawRect: and then just draw the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage)?
I believe you want to use a CGContextRef to draw all the images at the desired positions and then get the resulting image. The code will look something like this:
CGContextRef context = CGBitmapContextCreate(nil, desired_width, desired_height, bitsPerComponent, bytesPerRow, CGColorSpaceCreateDeviceRGB(), CGImageGetBitmapInfo(image));
// This code is just to illustrate what you have to do:
for (UIImage *currentImage in yourImages) {
    // imageOrigin is wherever you want this particular image placed in the merged result
    CGContextDrawImage(context,
                       CGRectMake(imageOrigin.x, imageOrigin.y,
                                  CGImageGetWidth(currentImage.CGImage),
                                  CGImageGetHeight(currentImage.CGImage)),
                       currentImage.CGImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGContextRelease(context);
CGImageRelease(mergeResult);
CGContextRefs can be created whenever you wish, and this allows you to do all kinds of image manipulations.
Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
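A more concrete version of the same idea, with the abstract parameters above filled in with typical values (8 bits/component, Quartz-computed bytes per row, premultiplied alpha); the function name, the images array, and the canvas size are placeholders, not something from the original answer:

// Sketch only: composes an array of UIImage objects into one UIImage off screen.
// The canvas size and per-image origins are assumptions; adapt to your layout.
static UIImage *ComposeImages(NSArray *images, CGSize canvasSize)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 (size_t)canvasSize.width,
                                                 (size_t)canvasSize.height,
                                                 8,     // bits per component
                                                 0,     // let Quartz choose bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
        return nil;

    for (UIImage *img in images) {
        // Quartz's origin is the bottom-left corner; here every image is simply
        // drawn at (0,0) at its natural size.
        CGContextDrawImage(context,
                           CGRectMake(0, 0, img.size.width, img.size.height),
                           img.CGImage);
    }

    CGImageRef mergedRef = CGBitmapContextCreateImage(context);
    UIImage *merged = [UIImage imageWithCGImage:mergedRef];
    CGImageRelease(mergedRef);
    CGContextRelease(context);
    return merged;
}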

iPhone - UIImage imageWithData returning nil

I need to create a UIImage from a byte array.
Here is how I create the byte array:
image   = CGImageCreateWithImageInRect(aux.CGImage, imageRect);
context = CGBitmapContextCreate(data[i][j], TILE_WIDTH, TILE_HEIGHT,
                                bitsPerComponent, bitmapBytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
                                // also tried kCGImageAlphaNoneSkipFirst and kCGImageAlphaNone
CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
data[i][j] = CGBitmapContextGetData(context);
The data elements (data[i][j]) are unsigned char buffers.
And this is how I try to get the UIImage:
NSData *imgData = [NSData dataWithBytes:data[i][j] length:TILE_WIDTH*TILE_HEIGHT*numberOfCompponents];
UIImage *img = [UIImage imageWithData: imgData];
The img (UIImage) remains nil.
OK, now here is the background: I am trying to create a pixelate application :). The images from the iPhone 4 camera are too big, so I split the image into smaller tiles. That way, when the pixelated (touched) area needs to be updated for the pixelate effect to show, I only update a small UIImage. I had to do it like this because in earlier tests updating one full-size UIImage seemed to kill the CPU. Still, the smaller images are now around 80x100 px and the update is not as smooth as it could be; sometimes, if you move the finger too fast, it misses some spots :D. I think that using this method to create a UIImage from the byte array is faster than this one:
CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
bitsPerPixel, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
dataProvider, NULL, false, kCGRenderingIntentDefault);
UIImage *imageToBeUpdated = [UIImage imageWithCGImage:cgImage];
Am I correct?
[UIImage imageWithData:data] parses data that is in a known image file format (e.g. JPEG, PNG, or GIF; the full list is in the documentation). You're passing it raw pixel data, which is not supported.
Try this instead of CGBitmapContextGetData to get the image out of the context:
CGImageRef imgRef = CGBitmapContextCreateImage(context);
UIImage *img = [UIImage imageWithCGImage:imgRef];
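One small addition that isn't in the original answer: under manual reference counting, the CGImageRef returned by CGBitmapContextCreateImage should be released once the UIImage has been created:

CGImageRef imgRef = CGBitmapContextCreateImage(context);
UIImage *img = [UIImage imageWithCGImage:imgRef];
CGImageRelease(imgRef);   // imageWithCGImage: retains the CGImage, so this reference can go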

mask image via another image

Alright, what I am trying to do is this:
I have an image that contains a circular region which is "blank". I want to take an existing image from the user's library and mask it so that only a certain part of it shows through the "blank" area of the first image.
I have tried a few pieces of masking code, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                               8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return nil;

CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// Create a CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];

// the UIImage retains what it needs, so release the CGImageRef
CGImageRelease(mainViewContentBitmapContext);

// return the image
return theImage;
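The snippet above is a method body without its signature, and targetSize, thumbnailPoint, scaledWidth, and scaledHeight have to be computed before it runs. One plausible way to wrap it (the category and method names here are hypothetical, not from the original answer):

// Hypothetical wrapper: the masking snippet above would form the body of this method.
@interface UIImage (CircleMasking)
- (UIImage *)imageMaskedToSize:(CGSize)targetSize;
@end

// Usage sketch (asset name and geometry are placeholders):
UIImage *photo  = [UIImage imageNamed:@"libraryPhoto.jpg"];
UIImage *masked = [photo imageMaskedToSize:CGSizeMake(200, 200)];
UIImageView *overlay = [[UIImageView alloc] initWithImage:masked];
overlay.frame = CGRectMake(60, 120, 200, 200);   // position over the "blank" circle
[self.view addSubview:overlay];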

How can I draw a CGImageRef context on the screen?

I have a beautiful CGImageRef context, which I spent the whole day creating in order to get alpha values ;)
It's defined like this:
CGContextRef context = CGBitmapContextCreate(bitmapData, pixWidth, pixHeight, 8, pixWidth, NULL, kCGImageAlphaOnly);
So to my understanding, that context somehow represents my image. But only "virtually", non-visible, somewhere in memory.
Can I stuff that in an UIImageView or draw that directly to the screen? I guess that alpha would be converted to grayscale or something like that.
You can create a UIImage by calling:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage
and then draw the UIImage using:
- (void)drawAtPoint:(CGPoint)point
Go look at CGBitmapContextCreateImage(); that can give you a CGImageRef from your bitmap context. You can then draw it using the CGContext... functions, or make a UIImage with +[UIImage imageWithCGImage:].
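A minimal sketch of that second route, assuming context is the alpha-only bitmap context from the question (keep in mind an alpha-only image may not render the way you expect in a UIImageView):

// Turn the bitmap context's contents into a CGImage, wrap it in a UIImage,
// and put it on screen with a UIImageView.
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);   // the UIImage retains what it needs

UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
[self.view addSubview:imageView];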
CGSize size = ...;
UIGraphicsBeginImageContext(size);
...
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
...
CGPoint pt = ...;
[img drawAtPoint:pt];