I am coding a jigsaw puzzle and I need to mask the images to create puzzle pieces.
I am using pictures from an online server, and they are *.JPG. Since JPEGs carry no alpha channel, when I mask them the area that should be transparent comes out black.
Can I add the alpha channel programmatically? If yes, can you show me how?
Thanks a lot,
Andrei
I found the answer:
// Wrapped as a UIImage category method (the signature is added here for completeness); self is the source image
- (UIImage *)imageWithAlpha {
    CGImageRef imageRef = self.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);

    // The bitsPerComponent and bitmapInfo values are hard-coded to prevent an "unsupported parameter combination" error
    CGContextRef offscreenContext = CGBitmapContextCreate(NULL,
                                                          width,
                                                          height,
                                                          8,
                                                          0, // 0 lets Quartz compute bytesPerRow
                                                          CGImageGetColorSpace(imageRef),
                                                          kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);

    // Draw the image into the context and retrieve the new image, which will now have an alpha channel
    CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef imageRefWithAlpha = CGBitmapContextCreateImage(offscreenContext);
    UIImage *imageWithAlpha = [UIImage imageWithCGImage:imageRefWithAlpha];

    // Clean up
    CGContextRelease(offscreenContext);
    CGImageRelease(imageRefWithAlpha);

    return imageWithAlpha;
}
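For reference, assuming the method above lives in a UIImage category, applying it to a downloaded JPEG before masking might look like this (pictureURL is a placeholder, not from the original post):

// Hypothetical usage: add an alpha channel to a downloaded JPEG before masking it
NSData *jpegData = [NSData dataWithContentsOfURL:pictureURL];
UIImage *jpegImage = [UIImage imageWithData:jpegData];
UIImage *maskablePiece = [jpegImage imageWithAlpha];
// maskablePiece can now be masked, and the masked-out area will be transparent instead of black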
I got a UIImage from UIImagePickerController, and I am using the code from this site to resize the image:
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context and create a UIImage from it
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
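(For reference, a call to this method, assuming the image needs no orientation fix-up, might look like the sketch below; in practice the transform and transpose arguments are derived from the image's imageOrientation.)

// Hypothetical call: scale to 320x480 with no orientation correction
UIImage *resized = [pickedImage resizedImage:CGSizeMake(320, 480)
                                   transform:CGAffineTransformIdentity
                              drawTransposed:NO
                        interpolationQuality:kCGInterpolationHigh];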
UIImagePNGRepresentation() failed to return NSData for the resized image, but UIImageJPEGRepresentation() succeeded.
How do we know whether a UIImage can be represented as PNG or JPEG? What is missing in the above code that prevents the resized image from being represented as PNG?
According to Apple's documentation: "This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format."
Which bitmap formats does the PNG representation support, and how can I put a UIImage into a PNG-supported format?
The mistake was that, in another part of the code, the image was rescaled with the following:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width,
                                             size.height,
                                             8,
                                             0,
                                             CGImageGetColorSpace(source),
                                             kCGImageAlphaNoneSkipFirst);
Changing kCGImageAlphaNoneSkipFirst to CGImageGetBitmapInfo(source) fixed the problem.
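In other words, the fix is to let the context inherit the source image's own bitmap info so the alpha configuration survives rescaling. A minimal sketch of the corrected call, assuming source is the CGImageRef being rescaled:

// Corrected: reuse the source image's bitmap info instead of forcing kCGImageAlphaNoneSkipFirst
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width,
                                             size.height,
                                             8,
                                             0,
                                             CGImageGetColorSpace(source),
                                             CGImageGetBitmapInfo(source));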
Have a look at the following question; it may help you: How to check if downloaded PNG image is corrupt?
Let me know whether it works. Happy coding!
I use a function to change the brightness of a picture (without using OpenGL), which works well.
I use another function to convert my image to grayscale, which also works well.
But when I combine them, applying my brightness function to the grayscale image, I get stripes on the image, and where the background was transparent (alpha 0) it is replaced by a black background. Do you have any idea why?
Please find my grayscale function and the brightness function below:
// ## Brightness without OpenGL call
+ (UIImage *)changeImageBrightness:(UIImage *)aInputImage withFactor:(float)aFactor
{
    CGImageRef img = aInputImage.CGImage;
    CFDataRef dataref = CGDataProviderCopyData(
        CGImageGetDataProvider(aInputImage.CGImage));
    CFIndex length = CFDataGetLength(dataref);
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(dataref);

    // Perform operation on pixels
    // NOTE: the 4-byte stride assumes a 32-bit RGBA pixel format
    for (int index = 0; index < length; index += 4) {
        // Go for BRIGHTNESS: adjust the three color components, clamping to 0..255
        for (int i = 0; i < 3; i++) {
            if (data[index + i] + aFactor < 0) {
                data[index + i] = 0;
            } else if (data[index + i] + aFactor > 255) {
                data[index + i] = 255;
            } else {
                data[index + i] += aFactor;
            }
        }
    }

    // .. Take image attributes
    size_t width = CGImageGetWidth(img);
    size_t height = CGImageGetHeight(img);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(img);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(img);
    size_t bytesPerRow = CGImageGetBytesPerRow(img);

    // .. Do the pixel manipulation
    CGColorSpaceRef colorspace = CGImageGetColorSpace(img);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(img);
    CFDataRef newData = CFDataCreate(NULL, data, length);
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(newData);

    // .. Get the image out of this raw data
    CGImageRef newImg = CGImageCreate(width, height, bitsPerComponent,
        bitsPerPixel, bytesPerRow, colorspace, bitmapInfo, provider,
        NULL, true, kCGRenderingIntentDefault);

    // .. Prepare the image from raw data
    UIImage *rawImage = [[UIImage alloc] initWithCGImage:newImg];

    // .. Done with all, so release the references
    CFRelease(newData);
    CGImageRelease(newImg);
    CGDataProviderRelease(provider);
    CFRelease(dataref);

    // return rawImage.CGImage;
    UIImage *imageApresFiltreEtRotationCGI = [UIImage
        imageWithCGImage:rawImage.CGImage];
    return imageApresFiltreEtRotationCGI;
}
+ (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width,
        image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    /* changes start here */
    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);

    // Release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    // Make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, image.size.width,
        image.size.height, 8, 0, nil, kCGImageAlphaOnly);

    // Draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);

    // Release graphics context
    CGContextRelease(context);

    // Make UIImage from grayscale image with alpha mask
    CGImageRef maskedGrayImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:maskedGrayImage
                                                  scale:image.scale
                                            orientation:image.imageOrientation];

    // Release the CG images (the UIImage retains the masked image)
    CGImageRelease(maskedGrayImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);

    // Return the new grayscale image
    return grayScaleImage;
    /* changes end here */
}
Here is the picture I got.
The background behind the baby was transparent before I applied the grayscale and brightness transformations.
I was able to find a method to help me modify the alpha value for a pixel in a UIImage in my app, but I am running into two errors (the second is most certainly caused by the first). I can't, however, figure out what is going on.
My method:
- (void)modifyAlpha:(int)x and:(int)y {
    CGContextRef ctx;
    CGImageRef imageRef = [scratchOffImage CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    rawData[byteIndex + 3] = (char)(255); // Change the pixel value

    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedFirst); // This line causes an error: incorrect colorspace
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(ctx);

    scratchOffImage = rawImage;
    [scratchOffImageView setImage:scratchOffImage];
    free(rawData);
}
The error:
<Error>: CGBitmapContextCreate: unsupported color space.
is being thrown on the line:
ctx = CGBitmapContextCreate(rawData, ...
and then the second error:
<Error>: CGBitmapContextCreateImage: invalid context 0x0
is being thrown on the line:
imageRef = CGBitmapContextCreateImage (ctx);
My image is included in the bundle. I originally made it by exporting it with Photoshop's Save for Web function, using PNG-8 with transparency as the format.
When I ran the sample code that the function was first used in, the sample image worked fine; however, my image doesn't. Does anyone have any idea how I can debug this? Might my input PNG be formatted incorrectly, and how can I check that?
Cheers,
Brett
EDIT 1: The original source code came from the example found here. The sample shows a conversion to greyscale, whereas I only need to change the alpha value.
EDIT 2: I have tried saving my image as a PNG-8 and a PNG-24, both with no luck.
A PNG-8 uses an 8-bit indexed color space. Quartz doesn't support indexed color spaces for CGBitmapContext. The CGBitmapContextCreate documentation says "Note that indexed color spaces are not supported for bitmap graphics contexts."
Instead of passing CGImageGetColorSpace( imageRef ) as the color space in your second call to CGBitmapContextCreate, you want to pass the same color space you used in your first call (your colorSpace variable).
Anyway, there's no reason to even create a second CGBitmapContext. And you're leaking the result of CGBitmapContextCreateImage. Just do this:
- (void)modifyAlpha:(int)x and:(int)y {
    CGImageRef imageRef = [scratchOffImage CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    rawData[byteIndex + 3] = (char)(255); // Change the pixel value

    imageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    scratchOffImage = rawImage;
    [scratchOffImageView setImage:scratchOffImage];
    free(rawData);
}
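(A hypothetical call site, assuming scratchOffImage and scratchOffImageView are instance variables and that view points map directly to bitmap pixels, might look like:)

// Hypothetical usage from a touch handler; clamps to the bitmap's bounds before poking the buffer
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:scratchOffImageView];
    int x = MAX(0, MIN((int)p.x, (int)scratchOffImage.size.width - 1));
    int y = MAX(0, MIN((int)p.y, (int)scratchOffImage.size.height - 1));
    [self modifyAlpha:x and:y];
}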
I'm wondering if there is a way to isolate a single color in an image, either using masks or perhaps even a custom color space. I'm ultimately looking for a fast way to isolate 14 colors out of an image; I figured if there were a masking method, it might be faster than walking through the pixels.
Any help is appreciated!
You could use a custom color space (documentation here) and then substitute it for "CGColorSpaceCreateDeviceGray()" in the following code:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // <- SUBSTITUTE HERE

    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    return newImage;
}
This code is from this blog, which is worth a look for more on removing colors from images.
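As an aside, if removing a color range (rather than keeping it) is enough, Core Graphics also provides CGImageCreateWithMaskingColors, which makes pixels whose components fall inside the given ranges transparent. A minimal sketch with illustrative ranges for near-white pixels (the {min, max} values are guesses, not from the original post; the source image must not have an alpha channel):

// Sketch: make near-white pixels transparent in an 8-bit-per-component RGB image
CGImageRef sourceImage = image.CGImage; // must have no alpha channel
const CGFloat maskingRange[6] = { 230, 255,   // red   min, max
                                  230, 255,   // green min, max
                                  230, 255 }; // blue  min, max
CGImageRef maskedImage = CGImageCreateWithMaskingColors(sourceImage, maskingRange);
UIImage *result = [UIImage imageWithCGImage:maskedImage];
CGImageRelease(maskedImage);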
I'm trying to take my gl buffer and turn it into a UIImage while retaining the per-pixel alpha within that gl buffer. It doesn't seem to work, as the result I'm getting is the buffer w/o alpha. Can anyone help? I feel like I must be missing a few key steps somewhere. I would really love any advice on this.
Basically I do:
// Read pixels from OpenGL
glReadPixels(0, 0, miDrawBufferWidth, miDrawBufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// Make data provider with the data
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, len, NULL);

// Configure image
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight, 8, 32, (4 * miDrawBufferWidth), colorSpaceRef, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);

// Use the device orientation's width/height to determine context dimensions (and consequently the resulting image's dimensions)
uint32 *pixels = (uint32 *)IQ_NEW(kHeapGfx, "snapshot_pixels") GLubyte[len];

// use kCGImageAlphaLast? :-/
CGContextRef context = CGBitmapContextCreate(pixels, iRotatedWidth, iRotatedHeight, 8, (4 * iRotatedWidth), CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, miDrawBufferWidth, miDrawBufferHeight), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage *outputImage = [[UIImage alloc] initWithCGImage:outputRef];

// Clean up (release our CGImage reference; the UIImage retains it)
CGImageRelease(outputRef);
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
return outputImage;
Yes! Luckily, it appears someone has already solved this exact problem here: http://www.iphonedevsdk.com/forum/iphone-sdk-development/23525-cgimagecreate-alpha.html
It boiled down to passing an extra kCGImageAlphaLast flag into CGImageCreate (along with the kCGBitmapByteOrderDefault flag) to incorporate the alpha. :)
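Concretely, that corresponds to changing the bitmapInfo argument of the CGImageCreate call in the question; a minimal sketch of the fix, with all other arguments unchanged:

// Corrected: declare that the last byte of each RGBA pixel is alpha
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight, 8, 32,
                                (4 * miDrawBufferWidth), colorSpaceRef,
                                kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                provider, NULL, NO, kCGRenderingIntentDefault);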