I'm using a bit of a workaround in order to successfully save an image with a mask (because you can't render masks with Core Animation on iOS).
Here's my code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, colorSpace, CGImageGetBitmapInfo(image));
CGColorSpaceRelease(colorSpace);
// the mask:
CGContextClipToMask(context, CGRectMake(0, 0, 1280, 935), self.image.image.CGImage);
// the image to mask:
CGContextDrawImage(context, CGRectMake(0, 0, 1280, 935), viewImage.CGImage);
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
saver = [[UIImage alloc] initWithCGImage:mergeResult];
CGImageRelease(mergeResult);
CGContextRelease(context);
So this works pretty well: the mask cuts out everything outside of its shape in the target image, which leaves the surrounding area and background white. Rather than show white, I would like to show an image, color, pattern, etc. So I'd basically like to stack another image behind all of that.
How can this be done? Thanks
Draw your background into the context first, then clip, then draw your image into the same context.
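In terms of the code from the question, that ordering might look like this (backgroundImage is my stand-in for whatever image you want to show behind the mask):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, colorSpace, CGImageGetBitmapInfo(image));
CGColorSpaceRelease(colorSpace);
// 1. the background: drawn before clipping, so the mask never touches it
CGContextDrawImage(context, CGRectMake(0, 0, 1280, 935), backgroundImage.CGImage);
// 2. the mask: everything drawn from here on is confined to its shape
CGContextClipToMask(context, CGRectMake(0, 0, 1280, 935), self.image.image.CGImage);
// 3. the image to mask: outside its shape, the background stays visible
CGContextDrawImage(context, CGRectMake(0, 0, 1280, 935), viewImage.CGImage);
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
saver = [[UIImage alloc] initWithCGImage:mergeResult];
CGImageRelease(mergeResult);
CGContextRelease(context);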
I am new to iPhone development.
Currently I am making a coloring app.
I am using Apple's paint app as a reference to create my app.
I successfully created an app where you can color on the screen with a given texture image.
What I did is:
I created a custom UIView that draws via OpenGL, and I detect touches on it and draw accordingly.
I also kept a background UIImageView which contains the outline images, so it feels like you're drawing on top of that image.
Everything works fine,
but I want to fill color inside the black edges.
For example, if an image has four squares with black edges and blank interiors, and I touch any square, it should fill that square with the selected color (mostly I am working with irregular shapes).
Can anyone tell me how I can fill colors inside such a shape?
The flood fill algorithm looks slow, as I have some big images which will take time to fill,
so is there an easier method by which I can fill color?
Sample code would be very helpful, as I am new to iPhone development.
I implemented this kind of feature in my recent project. The difference is that I filled color in the border only.
Check my code below; it might be helpful to you.
// apply color to only border & return an image
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color
{
// load the image
UIImage *img = [UIImage imageNamed:name];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContext(img.size);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[color setFill];
// translate/flip the graphics context (for transforming from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to color burn, then draw the original image
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return the color-burned image
return coloredImg;
}
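Usage would then look something like this (assuming the method is declared in a UIImage category; adjust the receiver if you put it on a helper class instead):
UIImage *coloredBorder = [UIImage imageNamed:@"outline.png" withColor:[UIColor redColor]];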
Enjoy programming!
I have a few UIImage objects that I want to compose into a single UIImage and then save to disk. I'm not displaying this on screen, so it doesn't make sense to do it in -drawRect:.
Is there a way of creating a context similar to the one in -drawRect: and then just drawing the UIImage objects into it using something like CGContextDrawImage(context, imgRect, img.CGImage)?
I believe you want to use a CGContextRef to draw all the images in at the desired place and then get the resulting image. The code will look something like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, desired_width, desired_height, 8, desired_width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// This code is to illustrate what you have to do:
for (UIImage *currentImage in yourImages) {
    // place each image wherever you want it in the composition;
    // note that CGContextDrawImage uses a flipped coordinate system relative to UIKit
    CGContextDrawImage(context,
                       CGRectMake(desiredOrigin.x, desiredOrigin.y, currentImage.size.width, currentImage.size.height),
                       currentImage.CGImage);
}
CGImageRef mergeResult = CGBitmapContextCreateImage(context);
UIImage *mergedImage = [[UIImage alloc] initWithCGImage:mergeResult];
CGImageRelease(mergeResult);
CGContextRelease(context);
CGContextRefs can be created whenever you wish, and they allow you to do all kinds of image manipulation.
Use CGBitmapContextCreate to create the context and CGBitmapContextCreateImage to get the final result.
Alright, what I am trying to do is:
given an image that contains a "blank" circle, I want to take an existing image from the user's library and then mask it so that only a certain part of that image shows through the "blank" area.
I have tried a few masking code snippets, but they all seem to work the other way around. Any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL)
    return NULL;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
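Since the snippet uses self.CGImage, it presumably comes from the body of a UIImage category method; a hypothetical call site might look like this (the method name is my own):
// assuming the code above is wrapped as - (UIImage *)imageMaskedToSize:(CGSize)targetSize
UIImage *photo = [UIImage imageNamed:@"library_photo.png"]; // stand-in for the image picked from the library
UIImage *masked = [photo imageMaskedToSize:CGSizeMake(320.0f, 320.0f)];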
I have a small image from a database, and the image's average color needs to be altered slightly.
It's a CGImageRef, and I thought of creating a CGContext, drawing the image into this context, then somehow changing the bitmap data and finally rendering it.
But how can I alter the color information?
Thanks for your help!
Check out this Apple Q&A on pixel data manipulation for details on how you might go about that.
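The pattern in that Q&A boils down to: draw the CGImageRef into a bitmap context whose pixel format you control, walk the buffer, then create a new image from the context. A rough sketch along those lines (my own illustration, not Apple's exact code; the red-channel nudge is just an example adjustment):
CGImageRef CreateColorAdjustedImage(CGImageRef source, int redDelta)
{
    size_t width = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // RGBA, 8 bits per channel: a byte layout we know how to walk
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);
    unsigned char *data = CGBitmapContextGetData(context);
    for (size_t i = 0; i < width * height * 4; i += 4) {
        // data[i] is red, data[i+1] green, data[i+2] blue, data[i+3] alpha;
        // note the alpha is premultiplied, so large shifts can distort translucent pixels
        int red = data[i] + redDelta;
        data[i] = (unsigned char)(red < 0 ? 0 : (red > 255 ? 255 : red));
    }
    CGImageRef result = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return result; // caller is responsible for releasing this
}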
Draw a color onto the object like this:
// first draw image
[self.image drawInRect:rect];
// prepare the context to draw into
CGContextRef context = UIGraphicsGetCurrentContext();
// set the blend mode and draw rectangle on top of image
CGContextSetBlendMode(context, kCGBlendModeColor);
CGContextClipToMask(context, self.bounds, image.CGImage); // this restricts drawing to within alpha channel
CGContextSetRGBFillColor(context, 0.75, 0.0, 0.0, 1.0); // this is your color, a light reddish tint
CGContextFillRect(context, rect);
I put this into the drawRect: method of a custom UIView. That UIView has an ivar, UIImage *image, that holds the image you want to tint or color. The kCGBlendModeColor blend mode keeps the luminosity of the underlying image while adopting the hue and saturation of the fill color, which is what produces the tint effect.
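To make the setup concrete, here is a minimal sketch of such a view (the class name is mine):
@interface TintedImageView : UIView
@property (nonatomic, retain) UIImage *image; // the image to tint
@end

@implementation TintedImageView
@synthesize image;
- (void)drawRect:(CGRect)rect
{
    // the drawing code from the answer above
    [self.image drawInRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeColor);
    CGContextClipToMask(context, self.bounds, self.image.CGImage);
    CGContextSetRGBFillColor(context, 0.75, 0.0, 0.0, 1.0);
    CGContextFillRect(context, rect);
}
- (void)dealloc
{
    [image release];
    [super dealloc];
}
@end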
I have been struggling with this issue for quite some time now and couldn't find an answer so far. Basically, what I want to do is capture the content of my EAGLView and then merge it with other images. The main problem is that everything transparent in my EAGLView renders opaque when saving it to the photo album or putting it into a UIImageView. Let me share some code with you that I found somewhere else:
- (CGImageRef)glToUIImage {
    // read the raw RGBA pixels back from the framebuffer
    unsigned char buffer[320 * 480 * 4];
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, 320 * 480 * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace, kCGBitmapByteOrderDefault, ref, NULL, true, kCGRenderingIntentDefault);
    size_t width = CGImageGetWidth(iref);
    size_t height = CGImageGetHeight(iref);
    // redraw into a bitmap context, flipped vertically (OpenGL's origin is bottom-left)
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0.0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
    UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
    // clean up everything except the image we return
    CGDataProviderRelease(ref);
    CGImageRelease(iref);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return outputRef; // caller is responsible for releasing this
}
As I already mentioned, this grabs the content of my EAGLView perfectly, but I cannot get the image with its alpha values.
Any help appreciated. Thanks!
Two places I can see that you might be losing your transparency:
1. When you're drawing your scene: does your scene have a transparent background? Make sure you're doing a glClear to something like (0, 0, 0, 0) rather than (0, 0, 0, 1).
2. When you're drawing the image to flip it over: what is the default background color here? It's likely a non-transparent black, and you'll end up with that wherever the transparent parts of your scene used to be.
You could check whether point 2 is your problem by saving the image before you flip it over; if it is, you could avoid the flipping-over step entirely by flipping the memory in your pixel buffer directly rather than using Core Graphics calls.
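A minimal sketch of that in-place flip, assuming the tightly packed 320x480 RGBA buffer from the question:
#include <stdlib.h>
#include <string.h>

// In-place vertical flip of a tightly packed RGBA buffer:
// swaps row y with row (height - 1 - y); no Core Graphics involved.
static void FlipRGBABufferVertically(unsigned char *buffer, size_t width, size_t height)
{
    size_t rowBytes = width * 4; // 4 bytes per RGBA pixel
    unsigned char *tempRow = malloc(rowBytes);
    for (size_t y = 0; y < height / 2; y++) {
        unsigned char *top = buffer + y * rowBytes;
        unsigned char *bottom = buffer + (height - 1 - y) * rowBytes;
        memcpy(tempRow, top, rowBytes);
        memcpy(top, bottom, rowBytes);
        memcpy(bottom, tempRow, rowBytes);
    }
    free(tempRow);
}
You would call FlipRGBABufferVertically(buffer, 320, 480) right after glReadPixels, then create the CGImage from the already-flipped data and skip the bitmap-context redraw entirely.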