I have downloaded the GLPaint sample code from Apple's developer website to draw pictures on a canvas using OpenGL.
I have made many changes to the GLPaint application to meet my requirements. Now I would like to save the drawn item into the photo library as an image.
I know how to save an image to the photo library, so I tried to create the corresponding image file after drawing a picture. What is a good way to do this? Any help on this is highly appreciated.
The code details are described below.
PaintingView.h
EAGLContext *context;
// OpenGL names for the renderbuffer and framebuffers used to render to this view
GLuint viewRenderbuffer, viewFramebuffer;
// OpenGL name for the depth buffer that is attached to viewFramebuffer, if it exists (0 if it does not exist)
GLuint depthRenderbuffer;
GLuint brushTexture;
CGPoint location;
CGPoint previousLocation;
PaintingView.m
// Handles the start of a touch
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
CGRect bounds = [self bounds];
UITouch* touch = [[event touchesForView:self] anyObject];
firstTouch = YES;
// Convert touch point from UIView referential to OpenGL one (upside-down flip)
location = [touch locationInView:self];
location.y = bounds.size.height - location.y;
}
// Handles the continuation of a touch.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGRect bounds = [self bounds];
UITouch* touch = [[event touchesForView:self] anyObject];
// Convert touch point from UIView referential to OpenGL one (upside-down flip)
if (firstTouch) {
firstTouch = NO;
previousLocation = [touch previousLocationInView:self];
previousLocation.y = bounds.size.height - previousLocation.y;
} else {
location = [touch locationInView:self];
location.y = bounds.size.height - location.y;
previousLocation = [touch previousLocationInView:self];
previousLocation.y = bounds.size.height - previousLocation.y;
}
// Render the stroke
[self renderLineFromPoint:previousLocation toPoint:location];
}
// Handles the end of a touch event when the touch is a tap.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
CGRect bounds = [self bounds];
UITouch* touch = [[event touchesForView:self] anyObject];
if (firstTouch) {
firstTouch = NO;
previousLocation = [touch previousLocationInView:self];
previousLocation.y = bounds.size.height - previousLocation.y;
[self renderLineFromPoint:previousLocation toPoint:location];
}
}
// Draws a line onscreen based on where the user touches
- (void) renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
static GLfloat* vertexBuffer = NULL;
static NSUInteger vertexMax = 64;
NSUInteger vertexCount = 0, count, i;
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Convert locations from Points to Pixels
CGFloat scale = self.contentScaleFactor;
start.x *= scale;
start.y *= scale;
end.x *= scale;
end.y *= scale;
// Allocate vertex array buffer
if(vertexBuffer == NULL)
vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
// Add points to the buffer so there are drawing points every X pixels
count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
for(i = 0; i < count; ++i) {
if(vertexCount == vertexMax) {
vertexMax = 2 * vertexMax;
vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
}
vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
vertexCount += 1;
}
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
// Erases the screen
- (void) erase
{
[EAGLContext setCurrentContext:context];
// Clear the buffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
// The GL view is stored in the nib file. When it's unarchived it's sent -initWithCoder:
- (id)initWithCoder:(NSCoder*)coder {
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
if ((self = [super initWithCoder:coder])) {
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = YES;
// In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer.
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
if (!context || ![EAGLContext setCurrentContext:context]) {
[self release];
return nil;
}
// Create a texture from an image
// First create a UIImage object from the data in a image file, and then extract the Core Graphics image
brushImage = [UIImage imageNamed:@"Particle.png"].CGImage;
// Get the width and height of the image
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
// Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
// you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
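// For example, a quick check could look like this (illustrative sketch, not part of the original sample):
//     BOOL widthIsPOT  = width  > 0 && (width  & (width  - 1)) == 0;
//     BOOL heightIsPOT = height > 0 && (height & (height - 1)) == 0;
//     if (!widthIsPOT || !heightIsPOT) { /* pad or rescale the brush image here */ }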
// Make sure the image exists
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
// Set the view's scale factor
self.contentScaleFactor = 1.0;
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
CGFloat scale = self.contentScaleFactor;
// Setup the view port in Pixels
glOrthof(0, frame.size.width * scale, 0, frame.size.height * scale, -1, 1);
glViewport(0, 0, frame.size.width * scale, frame.size.height * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Set a blending function appropriate for premultiplied alpha pixel data
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / kBrushScale);
// Make sure to start with a cleared buffer
needsErase = YES;
}
return self;
}
AppDelegate.h
PaintingWindow *window; // a class inherited from UIWindow
PaintingView *drawingView;
@property (nonatomic, retain) IBOutlet PaintingWindow *window;
@property (nonatomic, retain) IBOutlet PaintingView *drawingView;
@synthesize window;
@synthesize drawingView;
AppDelegate.m
- (void) applicationDidFinishLaunching:(UIApplication*)application
{
CGRect rect = [[UIScreen mainScreen] applicationFrame];
CGFloat components[3];
// Create a segmented control so that the user can choose the brush color.
UISegmentedControl *segmentedControl = [[UISegmentedControl alloc] initWithItems:
[NSArray arrayWithObjects:
[UIImage imageNamed:@"Red.png"],
[UIImage imageNamed:@"Yellow.png"],
[UIImage imageNamed:@"Green.png"],
[UIImage imageNamed:@"Blue.png"],
[UIImage imageNamed:@"Purple.png"],
nil]];
// Compute a rectangle that is positioned correctly for the segmented control you'll use as a brush color palette
//CGRect frame = CGRectMake(rect.origin.x + kLeftMargin, rect.size.height - kPaletteHeight - kTopMargin, rect.size.width - (kLeftMargin + kRightMargin), kPaletteHeight);
CGRect frame = CGRectMake(50, 22, (rect.size.width - (kLeftMargin + kRightMargin)) - 20, kPaletteHeight);
segmentedControl.frame = frame;
// When the user chooses a color, the method changeBrushColor: is called.
[segmentedControl addTarget:self action:@selector(changeBrushColor:) forControlEvents:UIControlEventValueChanged];
segmentedControl.segmentedControlStyle = UISegmentedControlStyleBar;
// Make sure the color of the control complements the black background
segmentedControl.tintColor = [UIColor darkGrayColor];
// Set the third color (index values start at 0)
segmentedControl.selectedSegmentIndex = 2;
// Add the control to the window
[window addSubview:segmentedControl];
// Now that the control is added, you can release it
[segmentedControl release];
[self addBackgroundSegmentControll];
// Define a starting color
HSL2RGB((CGFloat) 2.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
// Defer to the OpenGL view to set the brush color
[drawingView setBrushColorWithRed:components[0] green:components[1] blue:components[2]];
// Look in the Info.plist file and you'll see the status bar is hidden
// Set the style to black so it matches the background of the application
[application setStatusBarStyle:UIStatusBarStyleBlackTranslucent animated:NO];
// Now show the status bar, but animate to the style.
[application setStatusBarHidden:NO withAnimation:YES];
// Load the sounds
NSBundle *mainBundle = [NSBundle mainBundle];
erasingSound = [[SoundEffect alloc] initWithContentsOfFile:[mainBundle pathForResource:@"Erase" ofType:@"caf"]];
selectSound = [[SoundEffect alloc] initWithContentsOfFile:[mainBundle pathForResource:@"Select" ofType:@"caf"]];
[window setFrame:CGRectMake(0, 0, 768, 1024)];
drawingView.frame = CGRectMake(0, 0, 768, 1024);
// Erase the view when receiving a notification named "shake" from the NSNotificationCenter object
// The "shake" notification is posted by the PaintingWindow object when the user shakes the device
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(eraseView) name:@"shake" object:nil];
}
An improved version of Ramshad's answer:
This one has no memory leaks and works with new versions of iOS and for different view sizes and displays (retina and non-retina).
CGFloat scale = [[UIScreen mainScreen] scale]; // use nativeScale on iOS 8.0+
CGSize imageSize = CGSizeMake((scale * view.frame.size.width), (scale * view.frame.size.height));
NSUInteger length = imageSize.width * imageSize.height * 4;
GLubyte * buffer = (GLubyte *)malloc(length * sizeof(GLubyte));
if(buffer == NULL)
return nil;
glReadPixels(0, 0, imageSize.width, imageSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, length, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * imageSize.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(imageSize.width, imageSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
UIGraphicsBeginImageContext(imageSize);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0.0, 0.0, imageSize.width, imageSize.height), imageRef);
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
free(buffer);
return image;
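Note that this fragment assumes the GL context is current and the view's framebuffer is bound when glReadPixels runs; with the GLPaint code from the question, that means something like this beforehand:
// Same context and framebuffer names as in PaintingView.m above
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);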
Please refer to the link below to save an OpenGL-drawn item as an image in the photo library.
Save an OpenGL drawn item as an image
Code details:
Call [self captureToPhotoAlbum]; after adding the code below.
-(void)captureToPhotoAlbum {
UIImage *image = [self glToUIImage];
UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
- (UIImage *)glToUIImage {
NSInteger myDataLength = 320 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < 480; y++)
{
for(int x = 0; x < 320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
// the raw buffer has been copied into buffer2, so it can be freed now
free(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
// release what we own (the UIImage retains the CGImage, which keeps the provider alive;
// buffer2 itself must outlive the image unless a release callback is given to the provider)
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
return myImage;
}
For iPad, or to fix the scaling issue, change every width from 320 to 640 and every height from 480 to 960; adjust the width and height values to match your own scale (see the sketch below).
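A hedged sketch of deriving those values from the painting view instead of hard-coding them (assuming the code lives in the view, as in the improved answer above):
// Width and height in pixels, derived from the view's bounds and scale factor
CGFloat scale = self.contentScaleFactor;                 // or [UIScreen mainScreen].scale
NSInteger pixelWidth  = (NSInteger)(self.bounds.size.width  * scale);
NSInteger pixelHeight = (NSInteger)(self.bounds.size.height * scale);
NSInteger myDataLength = pixelWidth * pixelHeight * 4;   // replaces 320 * 480 * 4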
Manage the memory (free the buffers).
Thanks.
If the iPhone supports it, you can read from an OpenGL context using glReadPixels. After that's done, you should be able to create something like a UIImage from the pixel data you have read and save it to the photo library like you would for any other image created by an application.
The code works perfectly on iOS 5.0, built in Xcode 4.0.
However, in Xcode 4.5 with the iOS 6.0 developer preview, the code does not work as it should. The image that is saved is totally black! It is saved in "My Photos", with the resolution we choose, but it is a black image!
I guess that something has changed in
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
or in
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
It is not a matter of 320, 480, or any other resolution variables. In Xcode 4.0 it worked perfectly and fast, even at Retina resolutions.
I have implemented changing the brightness of an image using a UISlider on iPhone, without using OpenGL image processing. Now I want to change the brightness of only a particular area of the image, not the whole image — for example, only the central area, or only a particular color across the image. How can I change the brightness of a particular portion of an image? Please help me solve the problem. The code is:
CGImageRef imageRef = imageView.image.CGImage;
CFDataRef ref = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
UInt8 * buf = (UInt8 *) CFDataGetBytePtr(ref);
int length = CFDataGetLength(ref);
NSLog(@"%i", val);
float value = val/100;
for(int i=0; i<length; i+=3)
{
Byte tempR = buf[i + 1];
Byte tempG = buf[i + 2];
Byte tempB = buf[i + 3];
int outputRed = value + tempR;
int outputGreen = value + tempG;
int outputBlue = value + tempB;
if (outputRed>255) outputRed=255;
if (outputGreen>255) outputGreen=255;
if (outputBlue>255) outputBlue=255;
if (outputRed<0) outputRed=0;
if (outputGreen<0) outputGreen=0;
if (outputBlue<0) outputBlue=0;
buf[i + 1] = outputRed;
buf[i + 2] = outputGreen;
buf[i + 3] = outputBlue;
}
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
CGContextRef ctx = CGBitmapContextCreate(buf, width, height, bitsPerComponent, bytesPerRow, colorSpace, CGImageGetAlphaInfo(imageRef));
CGImageRef img = CGBitmapContextCreateImage(ctx);
if (value == 0) {
imageView.image = image;
}
else
{
[imageView setImage:[UIImage imageWithCGImage:img]];
}
CFRelease(ref);
CGContextRelease(ctx);
CGImageRelease(img);
After a lot of searching, I got the answer:
// original image
image = [UIImage imageNamed:@"images2.jpg"];
imageView = [[UIImageView alloc] initWithImage:image];
CGSize size = [image size];
[imageView setFrame:CGRectMake(0, 0, size.width, size.height)];
[[self view] addSubview:imageView];
[imageView release];
//image to be brightnessed
CGRect rect = CGRectMake([image size].width / 6, [image size].height / 6 ,
([image size].width / 2), ([image size].height / 2));
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
imageView = [[UIImageView alloc] initWithImage:img];
[imageView setFrame:CGRectMake([image size].width / 6, [image size].height / 6, ([image size].width / 2), ([image size].height / 2))];
[self.view addSubview:imageView];
[imageView release];
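Building on that, a hedged sketch (not from the original answer; image is the original UIImage and imageView the overlay added above) that applies the same clamp-and-add brightness pass from the question to the cropped region only:
// Crop the region of interest, brighten only those pixels, then show the result
CGRect region = CGRectMake(image.size.width / 6, image.size.height / 6,
                           image.size.width / 2, image.size.height / 2);
CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, region);

size_t w = CGImageGetWidth(cropped);
size_t h = CGImageGetHeight(cropped);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
UInt8 *pixels = calloc(w * h * 4, sizeof(UInt8));
CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, cs,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cropped);

int amount = 40; // brightness offset, e.g. taken from the slider
for (size_t i = 0; i < w * h * 4; i += 4) {
    for (int c = 0; c < 3; c++) {                 // R, G, B; leave alpha untouched
        int v = pixels[i + c] + amount;
        pixels[i + c] = (UInt8)MIN(MAX(v, 0), 255);
    }
}

CGImageRef brightenedRef = CGBitmapContextCreateImage(ctx);
UIImage *brightened = [UIImage imageWithCGImage:brightenedRef];
// e.g. show it in the overlay image view added in the answer above:
// [imageView setImage:brightened];

CGImageRelease(brightenedRef);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);
free(pixels);
CGImageRelease(cropped);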
I'm following the instructions outlined in this answer to fill a layer with a pattern in Core Graphics. When the layer is a CALayer subclass, the drawing works fine. However, when the layer is a CATiledLayer subclass, I get an EXC_BAD_ACCESS error at runtime.
static void drawPatternImage (void *info, CGContextRef ctx)
{
CGImageRef image = (CGImageRef) info;
CGContextDrawImage(ctx,
CGRectMake(0,0, CGImageGetWidth(image),CGImageGetHeight(image)),
image); // EXC_BAD_ACCESS here :(
}
static void releasePatternImage( void *info )
{
CGImageRelease((CGImageRef)info);
}
// pattern creation
int width = CGImageGetWidth(image);
int height = CGImageGetHeight(image);
static const CGPatternCallbacks callbacks = {0, &drawPatternImage, &releasePatternImage};
CGPatternRef pattern = CGPatternCreate (image,
CGRectMake (0, 0, width, height),
CGAffineTransformMake (1, 0, 0, 1, 0, 0),
width,
height,
kCGPatternTilingConstantSpacing,
true,
&callbacks);
CGColorSpaceRef space = CGColorSpaceCreatePattern(NULL);
CGFloat components[1] = {1.0};
CGColorRef color = CGColorCreateWithPattern(space, pattern, components);
CGColorSpaceRelease(space);
CGPatternRelease(pattern);
theLayer.backgroundColor = color;
CGColorRelease(color);
What do I need to do to draw a patterned image in a CATiledLayer subclass?
I use CGContextDrawTiledImage in the drawLayer: method for a CATiledLayer-backed class:
in .h:
@interface StarView : UIView {
UIImage *imgTextureCarta;
}
@end
in .m
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:frame])) {
NSString* imagePath = [[NSBundle mainBundle] pathForResource:@"aTexture" ofType:@"png"];
imgTextureCarta = [UIImage imageWithContentsOfFile:imagePath];
[imgTextureCarta retain];
}
return self;
}
-(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context{
CGImageRef texureCarta;
texureCarta = (imgTextureCarta.CGImage);
CGRect bounds = layer.bounds;
CGContextSetBlendMode (context, 0);
CGContextClipToRect(context, layer.bounds);
CGContextDrawTiledImage(context, CGRectMake(0,0, 128, 120), texureCarta);
}
- (void)dealloc {
[imgTextureCarta release];
[super dealloc];
}
I have a problem with my UIImageView that doesn't update.
So, it's like this:
I have a UIScrollView that contains an UIImageView (called imageView).
Now, imageView should contain more UIImageViews. I add those UIImageViews from code, but they do not appear.
This is the code:
for(i = 0 ; i < NrOfTilesPerHeight ; i++)
for(j = 0 ; j < NrOfTilesPerWidth ; j++)
{
imageRect = CGRectMake(j*TILE_WIDTH,i*TILE_HEIGHT,TILE_WIDTH, TILE_HEIGHT);
image = CGImageCreateWithImageInRect(aux.CGImage, imageRect);
if(!data[i][j])
NSLog(@"data[%d][%d] is nil", i, j);
context = CGBitmapContextCreate (data[i][j], TILE_WIDTH, TILE_HEIGHT,
bitsPerComponent, bitmapBytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
if (context == NULL)
{
free (data);
printf ("Context not created!");
CGColorSpaceRelease( colorSpace );
}
CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
data[i][j] = CGBitmapContextGetData (context);
memcpy(originalData[i][j],data[i][j],TILE_WIDTH*TILE_HEIGHT*numberOfCompponents);
CGContextFlush(context);
CGImageRelease(image);
UIImageView *imgView = [[UIImageView alloc] init];
[imgView setTag:i*10+j];
CGRect frame = imgView.frame;
frame.origin.x = j * (TILE_WIDTH+5) * initialScale;
frame.origin.y = i * (TILE_HEIGHT+5) * initialScale;
frame.size.width *= initialScale;
frame.size.height *= initialScale;
[imgView setFrame:frame];
[imageView addSubview:imgView];
[self updateTileAtLine:i andRow:j];
[imgView release];
CGDataProviderRelease(dataProvider);
CGImageRelease(cgImage);
}
- (void) updateTileAtLine: (int) i andRow: (int) j
{
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data[i][j], bitmapByteCount, NULL);
CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
bitsPerPixel, bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dataProvider, NULL, false, kCGRenderingIntentDefault);
UIImage *myImg = [UIImage imageWithCGImage:cgImage];
UIImageView *auxImageView = (UIImageView*) [imageView viewWithTag:(i*10+j)];
[auxImageView setImage:myImg];
CGDataProviderRelease(dataProvider);
CGImageRelease(cgImage);
}
Now... this doesn't crash... so everything is non-nil and OK.
If, instead of using viewWithTag:, I alloc/init a new UIImageView and add it to imageView, it will appear. But I don't want to make another copy of the view, since this updateTile method will be called quite often.
My question is: why doesn't the auxImageView appear? It very much should.
Thank you.
Regards,
George
Try iterating over the subviews directly instead of relying on viewWithTag: (note that the first tile's tag, i*10+j, is 0 when i and j are 0, which is also the default tag of every view, so viewWithTag: can return the container itself):
for(UIView *view in [imageView subviews]) {
if(view.tag == i*10+j) {
UIImageView *auxImageView = (UIImageView*) view;
[auxImageView setImage:myImg];
}
}
After applying a 3D transform to a UIImageView.layer, I need to save the resulting "view" as a new UIImage... It seemed like a simple task at first :-) but no luck so far, and searching hasn't turned up any clues :-( so I'm hoping someone will be kind enough to point me in the right direction.
A very simple iPhone project is available here.
Thanks.
- (void)transformImage {
float degrees = 12.0;
float zDistance = 250;
CATransform3D transform3D = CATransform3DIdentity;
transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation about the x-axis
imageView.layer.transform = transform3D;
}
/* FAIL : capturing layer contents doesn't get the transformed image -- just the original
CGImageRef newImageRef = (CGImageRef)imageView.layer.contents;
UIImage *image = [UIImage imageWithCGImage:newImageRef];
*/
/* FAIL : docs for renderInContext states that it does not render 3D transforms
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
*/
//
// header
//
#import <QuartzCore/QuartzCore.h>
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180)
UIImageView *imageView;
@property (nonatomic, retain) IBOutlet UIImageView *imageView;
//
// code
//
@synthesize imageView;
- (void)transformImage {
float degrees = 12.0;
float zDistance = 250;
CATransform3D transform3D = CATransform3DIdentity;
transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation about the x-axis
imageView.layer.transform = transform3D;
}
- (UIImage *)captureView:(UIImageView *)view {
UIGraphicsBeginImageContext(view.frame.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
NSString *title = @"Save to Photo Album";
NSString *message = (error ? [error description] : @"Success!");
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title message:message delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
[alert release];
}
- (IBAction)saveButtonClicked:(id)sender {
UIImage *newImage = [self captureView:imageView];
UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}
I ended up creating a render method pixel per pixel on the CPU using the inverse of the view transform.
Basically, it renders the original UIImageView into a UIImage. Then every pixel in the UIImage is multiplied by the inverse transform matrix to generate the transformed UIImage.
RenderUIImageView.h
#import <UIKit/UIKit.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
@interface RenderUIImageView : UIImageView
- (UIImage *)generateImage;
@end
RenderUIImageView.m
#import "RenderUIImageView.h"
@interface RenderUIImageView ()
@property (assign) CATransform3D transform;
@property (assign) CGRect rect;
@property (assign) float denominatorx;
@property (assign) float denominatory;
@property (assign) float denominatorw;
@property (assign) float factor;
@end
@implementation RenderUIImageView
- (UIImage *)generateImage
{
_transform = self.layer.transform;
_denominatorx = _transform.m12 * _transform.m21 - _transform.m11 * _transform.m22 + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41 - _transform.m14 * _transform.m21 * _transform.m42 +
_transform.m11 * _transform.m24 * _transform.m42;
_denominatory = -_transform.m12 *_transform.m21 + _transform.m11 *_transform.m22 - _transform.m14 *_transform.m22 *_transform.m41 + _transform.m12 *_transform.m24 *_transform.m41 + _transform.m14 *_transform.m21 *_transform.m42 -
_transform.m11* _transform.m24 *_transform.m42;
_denominatorw = _transform.m12 *_transform.m21 - _transform.m11 *_transform.m22 + _transform.m14 *_transform.m22 *_transform.m41 - _transform.m12 *_transform.m24 *_transform.m41 - _transform.m14 *_transform.m21 *_transform.m42 +
_transform.m11 *_transform.m24 *_transform.m42;
_rect = self.bounds;
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(_rect.size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(_rect.size);
}
if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
([UIScreen mainScreen].scale == 2.0)) {
_factor = 2.0f;
} else {
_factor = 1.0f;
}
UIImageView *img = [[UIImageView alloc] initWithFrame:_rect];
img.image = self.image;
[img.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *source = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[img release]; // the temporary image view is no longer needed (manual reference counting)
CGContextRef ctx;
CGImageRef imageRef = [source CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *inputData = malloc(height * width * 4);
unsigned char *outputData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(inputData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
context = CGBitmapContextCreate(outputData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace); // release the color space only once, after its last use
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
for (int ii = 0 ; ii < width * height ; ++ii)
{
int x = ii % width;
int y = ii / width;
int indexOutput = 4 * x + 4 * width * y;
CGPoint p = [self modelToScreen:(x*2/_factor - _rect.size.width)/2.0 :(y*2/_factor - _rect.size.height)/2.0];
p.x *= _factor;
p.y *= _factor;
int indexInput = 4*(int)p.x + (4*width*(int)p.y);
if (p.x >= width || p.x < 0 || p.y >= height || p.y < 0 || indexInput > width * height *4)
{
outputData[indexOutput] = 0.0;
outputData[indexOutput+1] = 0.0;
outputData[indexOutput+2] = 0.0;
outputData[indexOutput+3] = 0.0;
}
else
{
outputData[indexOutput] = inputData[indexInput];
outputData[indexOutput+1] = inputData[indexInput + 1];
outputData[indexOutput+2] = inputData[indexInput + 2];
outputData[indexOutput+3] = 255.0;
}
}
ctx = CGBitmapContextCreate(outputData, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), 8, CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);
CGImageRef outputImageRef = CGBitmapContextCreateImage(ctx);
UIImage* rawImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef); // we own this image (Create rule); the UIImage keeps its own reference
CGContextRelease(ctx);
free(inputData);
free(outputData);
return rawImage;
}
- (CGPoint) modelToScreen : (float) x: (float) y
{
float xp = (_transform.m22 *_transform.m41 - _transform.m21 *_transform.m42 - _transform.m22* x + _transform.m24 *_transform.m42 *x + _transform.m21* y - _transform.m24* _transform.m41* y) / _denominatorx;
float yp = (-_transform.m11 *_transform.m42 + _transform.m12 * (_transform.m41 - x) + _transform.m14 *_transform.m42 *x + _transform.m11 *y - _transform.m14 *_transform.m41* y) / _denominatory;
float wp = (_transform.m12 *_transform.m21 - _transform.m11 *_transform.m22 + _transform.m14*_transform.m22* x - _transform.m12 *_transform.m24* x - _transform.m14 *_transform.m21* y + _transform.m11 *_transform.m24 *y) / _denominatorw;
CGPoint result = CGPointMake(xp/wp, yp/wp);
return result;
}
@end
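A possible usage sketch (the frame and image name here are illustrative): create the RenderUIImageView in place of the plain UIImageView, apply the 3D transform to its layer as in transformImage earlier, then ask it for the flattened image:
RenderUIImageView *renderView =
    [[RenderUIImageView alloc] initWithImage:[UIImage imageNamed:@"arrow.png"]]; // illustrative image
renderView.frame = CGRectMake(0, 0, 200, 200);                                   // illustrative frame
renderView.layer.transform = transform3D;     // e.g. the perspective transform built in transformImage
UIImage *flattened = [renderView generateImage];
UIImageWriteToSavedPhotosAlbum(flattened, nil, nil, nil);
[renderView release]; // under manual reference counting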
Theoretically, you could use the (now-allowed) undocumented call UIGetScreenImage() after quickly rendering it to the screen on a black background, but in practice this will be slow and ugly, so don't use it ;P.
I had the same problem as you, and I found the solution!
I want to rotate the UIImageView, because I will have an animation.
To save the image, I use this method:
void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)
The transform parameter is the transform of your UIImageView! So anything you have done to the image view will be applied to the image as well.
I have written a category method on UIImage.
-(UIImage *)imageRotateByTransform:(CGAffineTransform)transform{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,self.size.width, self.size.height)];
rotatedViewBox.transform = transform;
CGSize rotatedSize = rotatedViewBox.frame.size;
[rotatedViewBox release];
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context using the transform
CGContextConcatCTM(bitmap, transform);
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
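A usage sketch with the image view from the question (names assumed): pass the view's own transform so the saved image matches what is on screen.
// imageView.transform is the 2D (affine) transform currently applied to the view;
// the category bakes that same transform into a new UIImage.
UIImage *transformed = [imageView.image imageRotateByTransform:imageView.transform];
UIImageWriteToSavedPhotosAlbum(transformed, self,
    @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);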
Hope this will help you.
Have you had a look at this? UIImage from UIView
I had the same problem. I was able to use UIView's drawViewHierarchyInRect:afterScreenUpdates: method, available from iOS 7.0 (Documentation).
It draws the whole view tree as it appears on the screen.
UIGraphicsBeginImageContextWithOptions(viewToRender.bounds.size, YES, 0);
[viewToRender drawViewHierarchyInRect:viewToRender.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
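To finish the original task of saving to the photo library, the captured image can then be written out exactly as in the earlier answers:
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);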
Let's say you have a UIImageView called imageView.
If you apply a 3D transform and try to render this view with UIGraphicsImageRenderer, the transform is ignored.
imageView.layer.transform = someTransform3d
But if you convert the CATransform3D to a CGAffineTransform using CATransform3DGetAffineTransform and apply it to the image view's transform property, it works.
imageView.transform = CATransform3DGetAffineTransform(someTransform3d)
And then you can use the extension below to save it as a UIImage:
extension UIView {
func asImage() -> UIImage {
let renderer = UIGraphicsImageRenderer(bounds: bounds)
return renderer.image { rendererContext in
layer.render(in: rendererContext.cgContext)
}
}
}
And just call
let image = imageView.asImage()
In your captureView: method, try replacing this line:
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
with this:
[view.layer.superlayer renderInContext:UIGraphicsGetCurrentContext()];
You may have to adjust the size you use to create the image context.
I don't see anything in the API doc that says renderInContext: ignores 3D transformations. However, the transformations apply to the layer, not its contents, which is why you need to render the superlayer to see the transformation applied.
Note that calling drawRect: on the superview definitely won't work, as drawRect: does not draw subviews.
3D transform on UIImage / CGImageRef
I've improved Marcos Fuentes's answer. You should be able to calculate the mapping of each pixel yourself. Not perfect, but it does the trick...
It is available in this repository: http://github.com/hfossli/AGGeometryKit/
The interesting files are
https://github.com/hfossli/AGGeometryKit/blob/master/Source/AGTransformPixelMapper.m
https://github.com/hfossli/AGGeometryKit/blob/master/Source/CGImageRef%2BCATransform3D.m
https://github.com/hfossli/AGGeometryKit/blob/master/Source/UIImage%2BCATransform3D.m
3D transform on UIView / UIImageView
https://stackoverflow.com/a/12820877/202451
Then you will have full control over each point in the quadrilateral. :)
A solution I found that at least worked in my case was to subclass CALayer. When a renderInContext: message is sent to a layer, that layer automatically forwards that message to all its sublayers. So all I had to do was to subclass CALayer and override the renderInContext: method and render what I needed to be rendered in the provided context.
For example, in my code I had a layer for which I was setting its contents to an image of an arrow:
UIImage *image = [UIImage imageNamed:@"arrow.png"];
MYLayer *myLayer = [[MYLayer alloc] init];
[myLayer setContents:(__bridge id)[image CGImage]];
[self.mainLayer addSublayer:myLayer];
Now when I was applying a 3D 180 degree rotation over the Y-axis on the arrow and was trying to do a [self.mainLayer renderInContext:context] afterwards I was still getting the un-rotated image.
So in my subclass MYLayer I overrode renderInContext: and used an already-rotated image to draw into the provided context:
- (void)renderInContext:(CGContextRef)ctx
{
NSLog(@"Rendered in context");
UIImage *image = [UIImage imageNamed:@"arrow_rotated.png"];
CGContextDrawImage(ctx, self.bounds, image.CGImage);
}
This worked in my case; however, I can see that if you are doing lots of 3D transforms, you may not be able to have an image ready for every possible scenario. In many other cases, though, it should be possible to render the result of the 3D transform using 2D transforms in the passed context. For example, in my case, instead of using a different image arrow_rotated.png, I could use the arrow.png image, mirror it, and draw it in the context (see the sketch below).
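For that last idea, a hedged sketch of what the override might look like if the original arrow.png is mirrored in the context instead of shipping a pre-rotated asset (a horizontal flip is equivalent to the 180-degree Y-axis rotation for this flat image):
- (void)renderInContext:(CGContextRef)ctx
{
    UIImage *image = [UIImage imageNamed:@"arrow.png"];
    CGContextSaveGState(ctx);
    // Flip horizontally around the layer's own bounds
    CGContextTranslateCTM(ctx, CGRectGetWidth(self.bounds), 0);
    CGContextScaleCTM(ctx, -1.0, 1.0);
    CGContextDrawImage(ctx, self.bounds, image.CGImage);
    CGContextRestoreGState(ctx);
}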