I am working on a paint app, using the GLPaint sample app as a reference. In this app there are two canvas views: one view moves from left to right (animating) and the other is used as the background view (as shown in the figure).
I am using CAEAGLLayer for filling colors in both views (using a subclassing technique). That part works as expected. Now I have to take a screenshot of the complete view (the outlines and both OpenGL views), but I am getting a screenshot of only one view (either the moving view or the background view). The screenshot code is associated with both views, but only one view's content is saved at a time.
The code snippet for the screenshot is as follows:
- (UIImage *)snapshot:(UIView *)eaglview {
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Is there any way to combine the content of both CAEAGLLayer views?
Please help.
Thank you very much.
You could create screenshots of each view separately and then combine them as follows:
UIGraphicsBeginImageContext(canvasSize);
[openGLImage1 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
[openGLImage2 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You should use an appropriate canvasSize and frame to draw each generated UIImage; this is just a sample of how you could do it.
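For instance (a sketch only; backgroundGLView and movingGLView are placeholders for your two CAEAGLLayer-backed views), you could grab each image with the snapshot: method from the question and draw each one at the frame its view occupies:

CGSize canvasSize = self.view.bounds.size;
UIImage *openGLImage1 = [self snapshot:backgroundGLView]; // background canvas
UIImage *openGLImage2 = [self snapshot:movingGLView];     // animating canvas

UIGraphicsBeginImageContext(canvasSize);
// Draw each snapshot where its view sits inside the canvas
[openGLImage1 drawInRect:backgroundGLView.frame];
[openGLImage2 drawInRect:movingGLView.frame];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();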
See here for a much better way of doing this. It basically allows you to capture a larger view that contains all of your OpenGL (and other) views into one, fully composed screenshot that's identical to what you see on the screen.
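The gist of that approach, sketched below under the assumption that containerView holds the outlines and both GL canvases (this is not the linked code verbatim), is to walk the container's subviews, drawing the snapshot: image for each CAEAGLLayer-backed view and using renderInContext: for everything else, so the whole screen ends up in one bitmap:

UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, [UIScreen mainScreen].scale);
CGContextRef context = UIGraphicsGetCurrentContext();
for (UIView *subview in containerView.subviews) {
    if ([subview.layer isKindOfClass:[CAEAGLLayer class]]) {
        // renderInContext: cannot read OpenGL ES content, so draw the GL snapshot instead
        [[self snapshot:subview] drawInRect:subview.frame];
    } else {
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, subview.frame.origin.x, subview.frame.origin.y);
        [subview.layer renderInContext:context];
        CGContextRestoreGState(context);
    }
}
UIImage *composedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();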
I'm working on a paint app for iPhone. In my code I'm using an image view that contains an outline image, over which I place a CAEAGLLayer for filling colors into the outline image. Now I am taking a screenshot of the OpenGL ES (CAEAGLLayer) rendered content using this function:
- (UIImage *)snapshot:(UIView *)eaglview {
    GLint backingWidth1, backingHeight1;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);

    NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
I combine this screenshot with the outline image using this function:
- (void)Combine:(UIImage *)Back {
    UIImage *Front = backgroundImageView.image;

    //UIGraphicsBeginImageContext(Back.size);
    UIGraphicsBeginImageContext(CGSizeMake(640, 960));

    // Draw image1
    [Back drawInRect:CGRectMake(0, 0, Back.size.width*2, Back.size.height*2)];
    // Draw image2
    [Front drawInRect:CGRectMake(0, 0, Front.size.width*2, Front.size.height*2)];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);
    UIGraphicsEndImageContext();
}
and save the resulting image to the photo album using this function:
- (void)captureToPhotoAlbum {
    [self Combine:[self snapshot:self]];

    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success" message:@"Image saved to Photo Album" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
    [alert release];
}
The above code works, but the quality of the screenshot is poor: along the outlines of the brush there is a grayish fringe. I have uploaded a screenshot of my app, which is a combination of OpenGL ES content and a UIImage.
Is there any way to get a Retina-quality screenshot of the OpenGL ES (CAEAGLLayer) content?
Thank you in advance!
I don't believe that resolution is your issue here. If you aren't seeing the grayish outlines on your drawing when it appears on the screen, odds are that you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG image, where artifacts will appear on sharp edges, like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage* im = [UIImage imageWithCGImage:myCGRef]; // make image from CGRef
NSData* imdata = UIImagePNGRepresentation ( im ); // get PNG representation
UIImage* im2 = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
While this is probably the easiest way to address your problem here, you could also try employing multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on how fill-rate limited you are, MSAA might lead to a little bit of slowdown in your application.
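If you go the multisampling route, the setup looks roughly like the following. This is only a sketch based on Apple's guide, for OpenGL ES 1.1 with the APPLE framebuffer extensions; sampleFramebuffer and sampleColorRenderbuffer are new names, while viewFramebuffer, viewRenderbuffer, backingWidth, backingHeight and context are assumed to be the usual GLPaint-style ivars:

// One-time setup: create a multisample framebuffer alongside the normal one,
// then render each frame into sampleFramebuffer instead of viewFramebuffer.
GLuint sampleFramebuffer, sampleColorRenderbuffer;
glGenFramebuffersOES(1, &sampleFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);
glGenRenderbuffersOES(1, &sampleColorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, sampleColorRenderbuffer);

// At the end of each frame: resolve the samples into the normal framebuffer and present it.
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];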
You're using kCGImageAlphaPremultipliedLast when you create the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be using kCGImageAlphaLast, but I think that'll just make the creation call fail), so you may need to premultiply the data by hand between getting it from OpenGL and making the CG context.
On the other hand, is there a reason your OpenGL context has an alpha channel? Could you just make it opaque white then use kCGImageAlphaNoneSkipLast?
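If you do keep the alpha channel and need to premultiply by hand, a simple in-place pass over the buffer returned by glReadPixels (the data and dataLength variables from the snapshot code above) would be:

// Premultiply RGB by alpha before handing the buffer to CGImageCreate
for (NSInteger i = 0; i < dataLength; i += 4) {
    GLubyte alpha = data[i + 3];
    data[i]     = (GLubyte)((data[i]     * alpha) / 255);
    data[i + 1] = (GLubyte)((data[i + 1] * alpha) / 255);
    data[i + 2] = (GLubyte)((data[i + 2] * alpha) / 255);
}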
I've been developing a 3D program for the iPad and iPhone and would like to be able to render it to an external screen. According to my understanding, you have to do something similar to the code below to implement it (found at Sunsetlakesoftware.com):
UIScreen *externalScreen = nil;

if ([[UIScreen screens] count] > 1)
{
    // External screen attached
    externalScreen = [[UIScreen screens] objectAtIndex:1];
}
else
{
    // Only local screen present
}

CGRect externalBounds = [externalScreen bounds];
externalWindow = [[UIWindow alloc] initWithFrame:externalBounds];

UIView *backgroundView = [[UIView alloc] initWithFrame:externalBounds];
backgroundView.backgroundColor = [UIColor whiteColor];
[externalWindow addSubview:backgroundView];
[backgroundView release];

externalWindow.screen = externalScreen;
[externalWindow makeKeyAndVisible];
However, I'm not sure what to change to do this in an OpenGL project. Does anyone know what you would do to implement this in the default OpenGL project for iPad or iPhone in Xcode?
All you need to do to render OpenGL ES content on the external display is to either create a UIView that is backed by a CAEAGLLayer and add it as a subview of the backgroundView above, or take such a view and move it to be a subview of backgroundView.
In fact, you can remove the backgroundView if you want and just place your OpenGL-hosting view directly on the externalWindow UIWindow instance. That window is attached to the UIScreen instance that represents the external display, so anything placed on it will show on that display. This includes OpenGL ES content.
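In code, that boils down to something like the following sketch, where glView stands for whatever CAEAGLLayer-backed view your project already creates:

if ([[UIScreen screens] count] > 1) {
    UIScreen *externalScreen = [[UIScreen screens] objectAtIndex:1];
    externalWindow = [[UIWindow alloc] initWithFrame:[externalScreen bounds]];
    externalWindow.screen = externalScreen;
    // Reparent (or create) the OpenGL-hosting view on the external window
    glView.frame = externalWindow.bounds;
    [externalWindow addSubview:glView];
    [externalWindow makeKeyAndVisible];
}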
There does appear to be an issue with particular types of OpenGL ES content, as you can see in the experimental support I've tried to add to my Molecules application. If you look in the source code there, I attempt to migrate my rendering view to an external display, but it never appears. I have done the same with other OpenGL ES applications and had their content render fine, so I believe there might be an issue with the depth buffer on the external display. I'm still working to track that down.
I've figured out how to get ANY OpenGL-ES content to render onto an external display. It's actually really straightforward. You just copy your renderbuffer to a UIImage then display that UIImage on the external screen view. The code to take a snapshot of your renderbuffer is below:
- (UIImage *)snapshot:(UIView *)eaglview
{
    // Get the size of the backing CAEAGLLayer
    GLint backingWidth, backingHeight;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, defaultFramebuffer);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
For some reason, though, I've never been able to get glGetRenderbufferParameterivOES to return the proper backingWidth and backingHeight, so I've had to use my own function to calculate those. Just pop this into your rendering implementation and place the result onto the external screen using a timer. If anyone can make any improvements to this method, please let me know.
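On the display side, a sketch of what "place the result onto the external screen using a timer" could look like (externalImageView, glView and the refresh rate are assumptions, not part of the original answer):

// Called once, after externalWindow has been created for the external UIScreen
- (void)setUpExternalMirroring {
    externalImageView = [[UIImageView alloc] initWithFrame:externalWindow.bounds];
    [externalWindow addSubview:externalImageView];
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                     target:self
                                   selector:@selector(updateExternalDisplay:)
                                   userInfo:nil
                                    repeats:YES];
}

// Timer callback: grab a fresh snapshot of the GL view and mirror it
- (void)updateExternalDisplay:(NSTimer *)timer {
    externalImageView.image = [self snapshot:glView];
}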
I have managed to use Apple's Reflection sample app to create a reflection from a UIImageView.
The problem is that when I change the picture inside the UIImageView, the reflection of the previously displayed picture remains on the screen. The new reflection of the next picture then overlaps the previous reflection.
How do I ensure that the previous reflection is removed when I change to the next picture?
Thank you so much. I hope my question is not too basic.
Here is the code which I have used so far:
// reflection
self.view.autoresizesSubviews = YES;
self.view.userInteractionEnabled = YES;
// create the reflection view
CGRect reflectionRect = currentView.frame;
// the reflection is a fraction of the size of the view being reflected
reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction;
// and is offset to be at the bottom of the view being reflected
reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height);
reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
// determine the size of the reflection to create
NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
// create the reflection image, assign it to the UIImageView and add the image view to the containerView
reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
reflectionView.alpha = kDefaultReflectionOpacity;
[self.view addSubview:reflectionView];
Then the code below is used to form the reflection:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector (straight down)
    CGPoint gradientStartPoint = CGPointZero;
    CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}

CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(nil, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);

    return bitmapContext;
}

- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
    if (!height) return nil;

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height);

    // offset the context -
    // This is necessary because, by default, the layer created by a view for caching its content is flipped.
    // But when you actually access the layer content and have it rendered it is inverted. Since we're only creating
    // a context the size of our reflection view (a fraction of the size of the main view) we have to translate the
    // context the delta in size, and render it.
    CGFloat translateVertical = fromImage.bounds.size.height - height;
    CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical);

    // render the layer into the bitmap context
    CALayer *layer = fromImage.layer;
    [layer renderInContext:mainViewContentContext];

    // create CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // create a 2 bit CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(1, height);

    // create an image by masking the bitmap of the mainView content with the gradient view
    // then release the pre-masked content bitmap and the gradient bitmap
    CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage);
    CGImageRelease(mainViewContentBitmapContext);
    CGImageRelease(gradientMaskImage);

    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
If you don't want to use IB, just add
reflectionView.image = nil;
before
reflectionView.image = [self reflectedImage:...
and don't forget this line
if (currentView.image == nil) reflectionView.image = nil;
or else you'll end up with an old reflection after the image has disappeared.
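One way to keep this in a single place (a sketch; the method name is made up, the rest uses the names from the question) is to regenerate the reflection every time the picture changes:

// Call this whenever currentView.image changes
- (void)updateReflection {
    reflectionView.image = nil; // drop the old reflection first
    if (currentView.image == nil) {
        return; // no picture, no reflection
    }
    NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
    reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
}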
Finally I chose to devote some time to finding a way/implementation to mask text inside a UITextView/UIWebView.
By now, what I'm able to do is:
- add some custom background
- add a UITextView/UIWebView with some text
- add a UIImageView (with a covering PNG) or a CAGradientLayer to create a simple mask effect (*)
Of course this is not a magic bullet and requires at least one more layer (the one pointed out with *).
Furthermore, it's not so good when you have a fully transparent background, because everyone can spot the extra view/layer used to fade away the text.
I searched all over Google but still haven't found a good solution (I've found plenty about masking an image, blah blah)...
Any tips?
Thanks in advance,
marcio
PS: maybe a screenshot will be more straightforward, here you go!
http://grab.by/KzS
Yes! I finally got it. I don't know if it's Apple's way, but it works. Maybe they have the opportunity to employ some private APIs. Anyway, this is a rough pseudo-algorithm for how I got it working:
1) get a screenshot of the window
2) crop the desired rect with CGImageCreateWithImageInRect
3) apply a gradient mask (borrowed from Apple's sample code on reflections)
4) create a UIImageView with the freshly created image
I also noted that it doesn't affect performance, even on the slowest devices.
Hope it will be helpful!
And this is a crop of the result (link text)
I've promised myself to implement a category just to make it cleaner. For now the code is spread across different classes.
Just to give a sample (only landscape orientation is supported, see the transform below, and only a top mask is implemented). In this case I overrode didMoveToWindow of the table that needs to be masked:
- (void)didMoveToWindow {
    if (self.window) {
        UIImageView *reflected = (UIImageView *)[self.superview viewWithTag:TABLE_SHADOW_TOP];
        if (!reflected) {
            UIImage *image = [UIImage screenshot:self.window];

            CGRect croppedRect = CGRectMake(480 - self.frame.size.height, self.frame.origin.x, 16, self.frame.size.width);
            CGImageRef cropImage = CGImageCreateWithImageInRect(image.CGImage, croppedRect);
            UIImage *reflectedImage = [UIImage imageMaskedWithGradient:cropImage];
            CGImageRelease(cropImage);

            reflected = [[UIImageView alloc] initWithImage:reflectedImage];
            reflected.transform = CGAffineTransformMakeRotation(-(M_PI/2));
            reflected.tag = TABLE_SHADOW_TOP;

            CGRect adjusted = reflected.frame;
            adjusted.origin = self.frame.origin;
            reflected.frame = adjusted;

            [self.superview addSubview:reflected];
            [reflected release];
        }
    }
}
And this is the UIImage category:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector (straight down)
    CGPoint gradientStartPoint = CGPointZero;
    // CGPoint gradientStartPoint = CGPointMake(0, pixelsHigh);
    CGPoint gradientEndPoint = CGPointMake(pixelsWide/1.75, 0);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}

CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);

    return bitmapContext;
}
+ (UIImage *)imageMaskedWithGradient:(CGImageRef)image {
    UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation;
    DEBUG(@"need to support deviceOrientation: %i", deviceOrientation);

    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(width, height);

    // create a 2 bit CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(width, 1);

    // create an image by masking the bitmap of the mainView content with the gradient view
    // then release the pre-masked content bitmap and the gradient bitmap
    CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, width, height), gradientMaskImage);
    CGImageRelease(gradientMaskImage);

    // draw the image into the bitmap context
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, width, height), image);

    // create CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
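The +screenshot: helper used in didMoveToWindow above isn't shown; a plausible implementation (an assumption, not marcio's original code) simply renders the window's layer hierarchy into an image context:

+ (UIImage *)screenshot:(UIView *)view {
    // Render the view's layer tree into a bitmap context.
    // Note: this does not capture OpenGL ES or video content.
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}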
Hope it helps.