How to compose two UIImage objects into one UIImage outside of -drawRect:? - iphone

I have a few UIImage objects which I want to compose into a single UIImage and then save to disk. I'm not displaying this on the screen, so it doesn't make sense to do it in -drawRect:.
Is there a way of creating a context similar to the one in -drawRect: and then just drawing the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage)?

I believe you want to use a CGContextRef, draw all the images into it at the desired positions, and then grab the resulting image. The code will look something like this:
// Create an offscreen bitmap context big enough to hold the composed result.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, desiredWidth, desiredHeight, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// This code is to illustrate what you have to do: draw each image at the position you want
// (originX/originY are placeholders for wherever each image should go).
for (UIImage *currentImage in yourImages) {
    CGContextDrawImage(context, CGRectMake(originX, originY, CGImageGetWidth(currentImage.CGImage), CGImageGetHeight(currentImage.CGImage)), currentImage.CGImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
UIImage *mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGImageRelease(mergeResult);
CGContextRelease(context);

CGContextRefs can be created whenever you wish, which lets you do all kinds of image manipulation offscreen.
Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
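If you would rather stay in UIKit, here is a minimal alternative sketch of the same composition using an image context; images and canvasSize are placeholders, not names from the question:
// Alternative sketch: compose offscreen with a UIKit image context.
UIGraphicsBeginImageContext(canvasSize);
for (UIImage *img in images) {
    [img drawAtPoint:CGPointZero]; // pick the offset you need for each image
}
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
One nice property of this route is that the image context already uses UIKit's flipped coordinate system, so images drawn with -drawAtPoint: come out the right way up, whereas CGContextDrawImage into a raw bitmap context uses Quartz coordinates.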

Related

iPhone - Image created with CGBitmapContextCreate as OpenGL texture

I want to call CGBitmapContextCreate with texture->data to create a CGContextRef,
and then create a CGImageRef with CGBitmapContextCreateImage(context).
However, the image created is not as expected: the output of CGBitmapContextCreateImage does not match the actual texture (the reference shot differs slightly since it was taken with another camera).
Code (texture.bytesPerPixel = 2):
CGContextRef context = CGBitmapContextCreate(texture.data, 512, 512, 5, 512 * texture.bytesPerPixel, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
CGImageRef cg_img = CGBitmapContextCreateImage(context);
UIImage* ui_img = [UIImage imageWithCGImage: cg_img];
UIImageWriteToSavedPhotosAlbum(ui_img, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
Complete Code:
http://ihome.ust.hk/~tm_lksac/OpenGLSprite.m
The application usually calls - (void)drawSelfIfNeeded:(BOOL)needed
to update the texture. But I want to take a "screenshot" of the texture and save it as a UIImage for further image processing.
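One common workaround, sketched here under the assumption that the texture can be read back as 32-bit RGBA (for example with glReadPixels while the texture is attached to the currently bound framebuffer), is to hand Core Graphics an 8-bit-per-component buffer instead of trying to match the 16-bit texture format:
// Sketch only: let Core Graphics own the pixel storage, then read the GL framebuffer into it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 512, 512, 8, 512 * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
glReadPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *uiImage = [UIImage imageWithCGImage:cgImage]; // may come out vertically flipped relative to GL
UIImageWriteToSavedPhotosAlbum(uiImage, nil, NULL, NULL);
CGImageRelease(cgImage);
CGContextRelease(ctx);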

How to layer on the CGContext

I'm using a bit of a workaround in order to successfully save an image with a mask (because you can't render Core Animation masks on iOS).
Here's my code:
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, CGColorSpaceCreateDeviceRGB(), CGImageGetBitmapInfo(image));
//the mask:
CGContextClipToMask(context, CGRectMake(0,0,1280,935), self.image.image.CGImage);
//the image to mask:
CGContextDrawImage(context, CGRectMake(0, 0, 1280,935), viewImage.CGImage);
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
saver = [[UIImage alloc] initWithCGImage:mergeResult];
So this works pretty well: the mask cuts out everything outside its shape in the target image, which leaves the surrounding area and background white. Rather than showing white, I would like to show an image/color/pattern, etc. So I'd basically like to stack another image behind all that.
How can this be done? Thanks
Draw your background into the context first, then clip, then draw your image into the same context.
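A minimal sketch of that order, reusing the rects and variables from the question; backgroundImage is an assumed UIImage holding whatever you want to show behind the mask:
// 1. Draw the background first, while nothing is clipped yet.
CGContextDrawImage(context, CGRectMake(0, 0, 1280, 935), backgroundImage.CGImage);
// 2. Install the mask as the clip...
CGContextClipToMask(context, CGRectMake(0, 0, 1280, 935), self.image.image.CGImage);
// 3. ...and draw the foreground image; only the masked area is painted over the background.
CGContextDrawImage(context, CGRectMake(0, 0, 1280, 935), viewImage.CGImage);
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
saver = [[UIImage alloc] initWithCGImage:mergeResult];
CGImageRelease(mergeResult);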

iPhone - UIImage imageWithData returning nil

I need to create a UIImage from a byte array.
Here is how I create the byte array:
image = CGImageCreateWithImageInRect(aux.CGImage, imageRect);
context = CGBitmapContextCreate (data[i][j], TILE_WIDTH, TILE_HEIGHT,
bitsPerComponent, bitmapBytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); // also tried kCGImageAlphaNoneSkipFirst and kCGImageAlphaNone
CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
data[i][j] = CGBitmapContextGetData (context);
The data variable is an unsigned char.
And this is how I try to get the UIImage:
NSData *imgData = [NSData dataWithBytes:data[i][j] length:TILE_WIDTH*TILE_HEIGHT*numberOfCompponents];
UIImage *img = [UIImage imageWithData: imgData];
The img (UIImage) remains nil.
OK, now for some background: I am trying to create a pixelate application :). The images from the iPhone 4 camera are too big, so I split the image into smaller images. That way, when the pixelated (touched) area needs to be updated for the effect to show, I only update a small UIImage. I had to do it like this because in earlier tests updating a full-size UIImage seemed to kill the CPU. Still, the smaller images are now around 80x100 px, and the update is not as smooth as it could be. Sometimes, if you move your finger too fast, it misses some spots :D. I think that using this method to create a UIImage from the byte array is faster than this one:
CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
bitsPerPixel, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
dataProvider, NULL, false, kCGRenderingIntentDefault);
UIImage *imageToBeUpdated = [UIImage imageWithCGImage:cgImage];
Am I correct?
[UIImage imageWithData:data] parses data that is in a known image file format (e.g. JPEG, PNG, or GIF; the full list is in the documentation). You're passing it raw pixel data, which is not supported.
Try this instead of CGBitmapContextGetData to get the image out of the context:
CGImageRef imgRef = CGBitmapContextCreateImage(context);
UIImage *img = [UIImage imageWithCGImage:imgRef];
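And if you really do need an NSData for the tile, encode the image instead of wrapping the raw pixel bytes; imageWithData: can then parse it:
NSData *imgData = UIImagePNGRepresentation(img); // PNG-encoded bytes, not raw pixels
UIImage *roundTripped = [UIImage imageWithData:imgData];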

mask image via another image

Alright, what I am trying to do is this:
Given an image that contains a circle which is "blank", I want to take an existing image from the user's library and mask it so that only a certain part of that image shows through the "blank" area.
I have tried a few masking snippets, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use Core Animation to do this (which would make it rather easy).
Looking at Apple's Core Animation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext==NULL)
return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
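The snippet reads like the body of a UIImage category method (note the self.CGImage). A hypothetical declaration for it could look like this; the category and method names are invented for illustration:
@interface UIImage (Masking)
// thumbnailPoint, scaledWidth and scaledHeight in the snippet above would be
// computed from targetSize and the receiver's size inside this method.
- (UIImage *)imageMaskedToSize:(CGSize)targetSize;
@end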

How can I draw a CGImageRef context on the screen?

I have a beautiful bitmap context (CGContextRef), which I spent the whole day creating in order to get alpha values ;)
It's defined like this:
CGContextRef context = CGBitmapContextCreate(bitmapData, pixWidth, pixHeight, 8, pixWidth, NULL, kCGImageAlphaOnly);
So to my understanding, that context somehow represents my image. But only "virtually", non-visible, somewhere in memory.
Can I stuff that into a UIImageView or draw it directly to the screen? I guess the alpha would be converted to grayscale or something like that.
You can create a UIImage by calling:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage
and then draw the UIImage using:
- (void)drawAtPoint:(CGPoint)point
Go look at CGBitmapContextCreateImage(); that can give you a CGImageRef from your bitmap context. You can then draw that using the CGContext... functions or make a UIImage using +[UIImage imageWithCGImage:].
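A short sketch of that route, assuming context is the alpha-only bitmap context from the question and imageView is a UIImageView that is already on screen:
// Snapshot the bitmap context's current contents, wrap it, and display it.
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
imageView.image = image; // or draw it yourself with [image drawAtPoint:...] inside -drawRect: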
CGSize size = ...;
UIGraphicsBeginImageContext(size);
...
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
...
CGPoint pt = ...;
[img drawAtPoint:pt];