Creating UIImage with per-pixel alpha from GL buffer - iPhone

I'm trying to take my GL buffer and turn it into a UIImage while retaining the per-pixel alpha within that buffer. It doesn't seem to work; the result I'm getting is the buffer without alpha. Can anyone help? I feel like I must be missing a few key steps somewhere, and I'd really appreciate any advice on this.
Basically I do:
//Read Pixels from OpenGL
glReadPixels(0, 0, miDrawBufferWidth, miDrawBufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
//Make data provider with data (len is the RGBA buffer size: width * height * 4 bytes)
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, len, NULL);
//Configure image
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight, 8, 32, (4 * miDrawBufferWidth), colorSpaceRef, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
// use device's orientation's width/height to determine context dimensions (and consequently the resulting image's dimensions)
uint32* pixels = (uint32 *) IQ_NEW(kHeapGfx, "snapshot_pixels") GLubyte[len]; // IQ_NEW is a custom allocator macro
// use kCGImageAlphaLast? :-/
CGContextRef context = CGBitmapContextCreate(pixels, iRotatedWidth, iRotatedHeight, 8, (4 * iRotatedWidth), CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, miDrawBufferWidth, miDrawBufferHeight), iref);
CGImageRef snapshotRef = CGBitmapContextCreateImage(context);
UIImage *outputImage = [[UIImage alloc] initWithCGImage:snapshotRef];
CGImageRelease(snapshotRef); // release the intermediate CGImage to avoid a leak
//cleanup
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
return outputImage;

Yes! Luckily, someone has solved this exact problem here: http://www.iphonedevsdk.com/forum/iphone-sdk-development/23525-cgimagecreate-alpha.html
It boiled down to passing an extra kCGImageAlphaLast flag into CGImageCreate (alongside the kCGBitmapByteOrderDefault flag) so the alpha channel is actually incorporated. :)
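In other words, only the bitmapInfo argument needs to change. A minimal sketch of the corrected call, reusing the variables from the snippet above:
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGImageRef iref = CGImageCreate(miDrawBufferWidth, miDrawBufferHeight,
                                8,                      // bits per component
                                32,                     // bits per pixel
                                4 * miDrawBufferWidth,  // bytes per row
                                colorSpaceRef,
                                bitmapInfo,             // <- declares the last byte of each pixel as alpha
                                provider, NULL, NO, kCGRenderingIntentDefault);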

Related

How to capture a screenshot of a view that contains both cocos2d and UIKit

I am developing an iPad application in iOS 6 that shows designs of houses with different colors and textures, using cocos2d. For showing the textures and colors applied to a house, I am using UIKit views.
Now I want to take a screenshot of this view, which contains both the cocos2d layer and the UIKit views.
If I take the screenshot the cocos2d way:
UIImage *screenshot = [AppDelegate screenshotWithStartNode:n];
then it only captures the cocos2d layer.
If I take the screenshot the UIKit way:
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
then it only captures the UIKit components and blacks out the cocos2d part.
I want both of them in the same screenshot...
Try this method and adjust it to your requirements:
-(UIImage*) screenshotUIImage
{
    CGSize displaySize = [self displaySize];
    CGSize winSize = [self winSize];

    //Create buffer for pixels
    GLuint bufferLength = displaySize.width * displaySize.height * 4;
    GLubyte* buffer = (GLubyte*)malloc(bufferLength);

    //Read pixels from OpenGL
    glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    //Make data provider with data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    //Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * displaySize.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    //Draw the GL image into a bitmap context, flipping and rotating to match the device orientation
    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, displaySize.height);
    CGContextScaleCTM(context, 1.0f, -1.0f); // OpenGL's origin is bottom-left; flip vertically
    switch (deviceOrientation_)
    {
        case CCDeviceOrientationPortrait: break;
        case CCDeviceOrientationPortraitUpsideDown:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180));
            CGContextTranslateCTM(context, -displaySize.width, -displaySize.height);
            break;
        case CCDeviceOrientationLandscapeLeft:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90));
            CGContextTranslateCTM(context, -displaySize.height, 0);
            break;
        case CCDeviceOrientationLandscapeRight:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90));
            CGContextTranslateCTM(context, displaySize.width * 0.5f, -displaySize.height);
            break;
    }
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

    //Clean up
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);
    return outputImage;
}

- (Texture2D*) screenshotTexture {
    return [[Texture2D alloc] initWithImage:[self screenshotUIImage]];
}
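A quick usage sketch; the receiver here is hypothetical (in cocos2d this method usually lives on CCDirector or a scene/layer class):
UIImage *shot = [[CCDirector sharedDirector] screenshotUIImage]; // hypothetical receiver
UIImageWriteToSavedPhotosAlbum(shot, nil, nil, nil);             // e.g. save it to the photo album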
For more info, see this link and read through all of the answers and comments there; it's very interesting.
I hope this helps.
I researched this a lot, and at present I can't find any code that can capture a screen containing both cocos2d and UIKit. There is some code available, but it is not acceptable to the App Store, so if you use it your app will be rejected.
So for now, I found a temporary solution:
First I take the screenshot of my cocos2d layer, put it in a UIImageView, and add that UIImageView to my screen behind all of the present UIViews, so the user can't see it happen:
[self.view insertSubview:imgView atIndex:1];
At index 1 because my cocos2d layer is at index 0, so just above that...
Now that the cocos2d picture is part of my UIKit view hierarchy, I take the screenshot of the current screen the normal UIKit way. And there we are: a screenshot containing both views. A sketch of the whole sequence follows.
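Putting those steps together, here is a minimal sketch of the workaround (assuming ARC; combinedScreenshotWithNode: is a name invented for this example, and screenshotWithStartNode: is assumed to return a UIImage of the cocos2d scene):
- (UIImage *)combinedScreenshotWithNode:(CCNode *)n
{
    // 1. Capture the cocos2d layer and park it behind the UIKit views.
    UIImage *cocosShot = [AppDelegate screenshotWithStartNode:n];
    UIImageView *imgView = [[UIImageView alloc] initWithImage:cocosShot];
    [self.view insertSubview:imgView atIndex:1]; // cocos2d layer sits at index 0

    // 2. Render the whole view hierarchy the normal UIKit way.
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0f);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 3. Remove the temporary image view so the user never notices it.
    [imgView removeFromSuperview];
    return combined;
}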
This works for me for now. If anyone finds a proper solution, you're most welcome to share it; I'll be waiting for any feasible approach.
Thanks all for the help!

How to animate a texture in OpenGL on the iPhone?

I am new to iPhone development. I am currently working on a project that uses OpenGL, and I need to animate part of the view. To do that, I take a screenshot of the view and create a texture from it.
This is my code:
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_COLOR);

// Render the view into a UIImage
UIGraphicsBeginImageContext(self.introductionTextLabel.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Create a GL texture from the image
GLuint texture[1];
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

GLuint width = CGImageGetWidth(viewImage.CGImage);
GLuint height = CGImageGetHeight(viewImage.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(height * width * 4);
CGContextRef contextTexture = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(contextTexture, CGRectMake(0, 0, width, height));
CGContextTranslateCTM(contextTexture, 0, height - height); // note: height - height is 0, so this translate does nothing
CGContextDrawImage(contextTexture, CGRectMake(0, 0, width, height), viewImage.CGImage);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(contextTexture);
free(imageData);
I have searched a lot but could not find a good method for animating the texture.
Can anyone suggest a good approach? If I am doing something wrong, please point it out.
Thanks in advance.
The only way I can recommend to modify the texture is to use glTexImage2D() for a full-frame update and glTexSubImage2D() for a partial update. Of course, it would be better to use preloaded and compressed textures...
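For the partial-update route, a minimal sketch; xOffset, yOffset, regionWidth, regionHeight, and frameData are placeholders for whatever region of RGBA pixels you are animating:
// Upload a regionWidth x regionHeight patch of RGBA pixels into the
// existing texture at (xOffset, yOffset) without reallocating the texture.
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexSubImage2D(GL_TEXTURE_2D,
                0,                          // mipmap level
                xOffset, yOffset,           // where the patch lands
                regionWidth, regionHeight,  // size of the patch
                GL_RGBA, GL_UNSIGNED_BYTE,
                frameData);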

Is it possible to isolate a single color in a UIImage/CGImageRef

Wondering if there is a way to isolate a single color in an image, either using masks or perhaps even a custom color space. I'm ultimately looking for a fast way to isolate 14 colors out of an image; I figured if there were a masking method, it might be faster than walking through the pixels.
Any help is appreciated!
You could use a custom color space (documentation here) and then substitute it for "CGColorSpaceCreateDeviceGray()" in the following code:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // <- SUBSTITUTE HERE

    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    return newImage;
}
This code is from this blog, which is worth a look for more on removing colors from images.
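If the color-space route doesn't pan out, the pixel walk mentioned in the question is straightforward. A minimal sketch, assuming an RGBA8 buffer from an RGBA bitmap context; isolateColor and its parameters are names invented for this example:
// Keep only pixels close to one target color; clear everything else.
// `pixels` points to width * height RGBA bytes (e.g. from CGBitmapContextGetData).
static void isolateColor(uint8_t *pixels, size_t width, size_t height,
                         uint8_t r, uint8_t g, uint8_t b, int tolerance)
{
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4;
        if (abs(p[0] - r) > tolerance ||
            abs(p[1] - g) > tolerance ||
            abs(p[2] - b) > tolerance) {
            p[0] = p[1] = p[2] = p[3] = 0; // not one of the wanted colors
        }
    }
}
Run it once per target color (14 times here) against a copy of the source pixels, or extend it to take a table of colors.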

iPhone - Masking JPG images

I am coding a jigsaw puzzle and I need to mask the images to create puzzle pieces.
I am using pictures from an online server and they are *.JPG. When I mask them, the area that should be transparent is black.
Can I add the alpha channel programmatically? If yes, can you show me how?
Thanks a lot,
Andrei
I found the answer:
CGImageRef imageRef = self.CGImage; // `self` is a UIImage (the snippet is from a UIImage category)
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);

// The bitsPerComponent and bitmapInfo values are hard-coded to prevent an
// "unsupported parameter combination" error
CGContextRef offscreenContext = CGBitmapContextCreate(NULL,
                                                      width,
                                                      height,
                                                      8,
                                                      0,
                                                      CGImageGetColorSpace(imageRef),
                                                      kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);

// Draw the image into the context and retrieve the new image, which will
// now have an alpha channel
CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), imageRef);
CGImageRef imageRefWithAlpha = CGBitmapContextCreateImage(offscreenContext);
UIImage *imageWithAlpha = [UIImage imageWithCGImage:imageRefWithAlpha];

// Clean up
CGContextRelease(offscreenContext);
CGImageRelease(imageRefWithAlpha);
return imageWithAlpha;
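With the alpha channel in place, the masking itself works as usual. A rough sketch, assuming maskRef is a grayscale CGImageRef the size of the puzzle piece (the variable names are invented for this example):
// Build a Core Graphics mask from the grayscale image and apply it.
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef),
                                    NULL, false);
CGImageRef maskedRef = CGImageCreateWithMask([imageWithAlpha CGImage], mask);
UIImage *piece = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(mask);
CGImageRelease(maskedRef);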

Creating a reusable image from a Core Graphics context gives me nothing

I'm trying to speed up my drawing code. Instead of creating a graphics context, drawing the current image into it, and drawing over that, I'm trying to create a context from some pixel data and just modify that directly. The problem is, I'm pretty new to Core Graphics and I can't create the initial image. I want just a solid red image, but I get nothing. Here's what I'm using for the initial image; I assume the problem will be the same with the rest of the code.
pixels = malloc(320*460*4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, 320, 460, 8, 4*320, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextAddRect(context, CGRectMake(0, 0, 320, 460));
CGContextFillPath(context);
trace = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()]; // <- the problem: this bitmap context is not the UIGraphics "current image context"
CGContextRelease(context);
Edit: UIGraphicsGetImageFromCurrentImageContext() turned out to be the problem. A working solution follows; note, though, that this is no faster than the much simpler UIGraphicsBeginImageContext().
pixels = malloc(320*460*4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, 320, 460, 8, 4*320, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0); // solid red
CGContextAddRect(context, CGRectMake(0, 0, 320, 460));
CGContextFillPath(context);
CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, pixels, 320*460*4, NULL);
CGImageRef img = CGImageCreate(320, 460, 8, 32, 4*320, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dp, NULL, NO, kCGRenderingIntentDefault);
trace = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:img]];
CGDataProviderRelease(dp); // the image retains the provider
CGImageRelease(img);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
// note: `pixels` is not freed here; the data provider reads from it directly
For starters, it seems like the context you're drawing into is unconnected to the UIGraphics "current image context". Have you called UIGraphicsBeginImageContext() somewhere?
Not sure that will get you what you need, but it certainly makes it less likely you'll get nil back as the current image context.
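For comparison, a minimal sketch of that simpler route, where UIKit owns the context so UIGraphicsGetImageFromCurrentImageContext() actually has something to return:
UIGraphicsBeginImageContext(CGSizeMake(320, 460));
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(ctx, 1.0f, 0.0f, 0.0f, 1.0f); // solid red
CGContextFillRect(ctx, CGRectMake(0, 0, 320, 460));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();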