UIImageView doesn't update - iPhone

I have a problem with a UIImageView that doesn't update.
The setup is as follows:
I have a UIScrollView that contains a UIImageView (called imageView).
imageView should in turn contain several more UIImageViews. I add those UIImageViews from code, but they do not appear.
This is the code:
for (i = 0; i < NrOfTilesPerHeight; i++)
    for (j = 0; j < NrOfTilesPerWidth; j++)
    {
        imageRect = CGRectMake(j*TILE_WIDTH, i*TILE_HEIGHT, TILE_WIDTH, TILE_HEIGHT);
        image = CGImageCreateWithImageInRect(aux.CGImage, imageRect);
        if (!data[i][j])
            NSLog(@"data[%d][%d] is nil", i, j);
        context = CGBitmapContextCreate(data[i][j], TILE_WIDTH, TILE_HEIGHT,
                                        bitsPerComponent, bitmapBytesPerRow, colorSpace,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        if (context == NULL)
        {
            free(data);
            printf("Context not created!");
            CGColorSpaceRelease(colorSpace);
        }
        CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
        data[i][j] = CGBitmapContextGetData(context);
        memcpy(originalData[i][j], data[i][j], TILE_WIDTH*TILE_HEIGHT*numberOfComponents);
        CGContextFlush(context);
        CGImageRelease(image);

        UIImageView *imgView = [[UIImageView alloc] init];
        [imgView setTag:i*10+j];
        CGRect frame = imgView.frame;
        frame.origin.x = j * (TILE_WIDTH+5) * initialScale;
        frame.origin.y = i * (TILE_HEIGHT+5) * initialScale;
        frame.size.width *= initialScale;
        frame.size.height *= initialScale;
        [imgView setFrame:frame];
        [imageView addSubview:imgView];
        [self updateTileAtLine:i andRow:j];
        [imgView release];
        CGDataProviderRelease(dataProvider);
        CGImageRelease(cgImage);
    }
- (void)updateTileAtLine:(int)i andRow:(int)j
{
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data[i][j], bitmapByteCount, NULL);
    CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
                                       bitsPerPixel, bitmapBytesPerRow, colorSpace,
                                       kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                       dataProvider, NULL, false, kCGRenderingIntentDefault);
    UIImage *myImg = [UIImage imageWithCGImage:cgImage];
    UIImageView *auxImageView = (UIImageView *)[imageView viewWithTag:(i*10+j)];
    [auxImageView setImage:myImg];
    CGDataProviderRelease(dataProvider);
    CGImageRelease(cgImage);
}
This doesn't crash, so everything is non-nil and looks OK.
If, instead of using viewWithTag, I alloc/init a new UIImageView and add it to imageView, the image appears. But I don't want to create another copy of the view, since this updateTile method will be called quite often.
My question is: why doesn't auxImageView appear? It very much should.
Thank you.
Regards,
George

Try this. Note that the tile at i = 0, j = 0 gets tag 0, which is the default tag of every UIView, so [imageView viewWithTag:0] returns imageView itself rather than that tile. Looping over the subviews and checking the tag yourself avoids that lookup:
for (UIView *view in [imageView subviews]) {
    if (view.tag == i*10+j) {
        UIImageView *auxImageView = (UIImageView *)view;
        [auxImageView setImage:myImg];
    }
}
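Since updateTileAtLine:andRow: will run often, it may also be worth skipping the subview search entirely and keeping direct references to the tiles. A minimal sketch, assuming a tileViews NSMutableArray ivar (not in the original code), filled in the creation loop:

// in the creation loop, right after [imageView addSubview:imgView];
[tileViews addObject:imgView];

// in updateTileAtLine:andRow:, instead of the viewWithTag: lookup:
UIImageView *auxImageView = [tileViews objectAtIndex:(i * NrOfTilesPerWidth + j)];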

Related

How to apply corner radius to a UIView without issues?

Currently I'm applying a corner radius and shadow to multiple UIViews in a scroll view. I've noticed that adding a corner radius and shadow makes the scroll view lag like crazy whenever I scroll. How can I apply these effects without having my performance suffer?
Try also setting the layer's shadowPath, so the shadow shape doesn't have to be recomputed from the layer's contents on every frame:
view.layer.cornerRadius = 6.0f;
view.layer.borderWidth = 2.0f;
view.layer.borderColor = [UIColor grayColor].CGColor;
view.layer.shadowColor = [UIColor blackColor].CGColor;
view.layer.shadowOpacity = 0.3f;
view.layer.shadowOffset = CGSizeMake(0, 0.0f);
view.layer.masksToBounds = NO;
UIBezierPath *path = [UIBezierPath bezierPathWithRect:view.bounds];
view.layer.shadowPath = path.CGPath;
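Since the layer above uses a 6-point corner radius, a rounded-rect shadow path matches the visible outline more closely than a plain rectangle (a small variation, not from the original answer):

UIBezierPath *roundedPath = [UIBezierPath bezierPathWithRoundedRect:view.bounds cornerRadius:6.0f];
view.layer.shadowPath = roundedPath.CGPath;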
+ (UIImage *)makeRoundCornerImage:(UIImage *)img :(int)cornerWidth :(int)cornerHeight
{
    UIImage *newImage = nil;
    if (nil != img)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        int w = img.size.width;
        int h = img.size.height;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
        CGContextBeginPath(context);
        CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
        addRoundedRectToPath(context, rect, cornerWidth, cornerHeight);
        CGContextClosePath(context);
        CGContextClip(context);
        CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
        CGImageRef imageMasked = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        // this method does not take ownership of img, so it must not release it
        newImage = [[UIImage imageWithCGImage:imageMasked] retain];
        CGImageRelease(imageMasked);
        [pool release];
    }
    return [newImage autorelease];
}
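The method above calls addRoundedRectToPath(), which isn't shown in the answer. A sketch of that helper, along the lines of Apple's well-known rounded-rect sample code (an assumption about what the author used):

static void addRoundedRectToPath(CGContextRef context, CGRect rect, float ovalWidth, float ovalHeight)
{
    if (ovalWidth == 0 || ovalHeight == 0) {
        CGContextAddRect(context, rect);
        return;
    }
    CGContextSaveGState(context);
    // work in a coordinate system where the corner ovals become unit circles
    CGContextTranslateCTM(context, CGRectGetMinX(rect), CGRectGetMinY(rect));
    CGContextScaleCTM(context, ovalWidth, ovalHeight);
    float fw = CGRectGetWidth(rect) / ovalWidth;
    float fh = CGRectGetHeight(rect) / ovalHeight;
    CGContextMoveToPoint(context, fw, fh / 2);
    CGContextAddArcToPoint(context, fw, fh, fw / 2, fh, 1);
    CGContextAddArcToPoint(context, 0, fh, 0, fh / 2, 1);
    CGContextAddArcToPoint(context, 0, 0, fw / 2, 0, 1);
    CGContextAddArcToPoint(context, fw, 0, fw, fh / 2, 1);
    CGContextClosePath(context);
    CGContextRestoreGState(context);
}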
// With the help of this method we can round the corners of any PNG file
UIImage *imageFromFile = [UIImage imageNamed:@"FilterFields.png"];
imageFromFile = [ImageManipulator makeRoundCornerImage:imageFromFile :10 :10];
[SmallTable setImage:imageFromFile];

How to redisplay the screen in iPhone programming (Xcode)

I'm a beginner at iOS programming.
I want to display an image on screen frame by frame.
But with my code, only the first frame is drawn on screen and all the following frames are ignored.
How do I redisplay in real time?
My image-display source:
- viewDidLoad : init, display the first screen
- ChangeImage : change the display to the next screens
- setPixelColorR:G:B:X:Y: : change one pixel's color on screen
<ContentViewController.h>
#import <UIKit/UIKit.h>

@interface ContentViewController : UIViewController {
    IBOutlet UIImageView *svContent;
    CGContextRef context;
    size_t dataSize;
    UInt8 *initdata;
}
@property(nonatomic,retain) IBOutlet UIImageView *svContent;
- (IBAction)btnPhotosTouched;
- (void)ChangeImage;
- (void)setPixelColorR:(int)red G:(int)green B:(int)blue X:(int)x Y:(int)y;
@end
and
<ContentViewController.m>
#import "ContentViewController.h"

@implementation ContentViewController
@synthesize svContent;

- (void)viewDidLoad {
    [super viewDidLoad];
    UIImageView *imageView;
    CGSize screenSize = [[UIScreen mainScreen] bounds].size;
    CGFloat width = screenSize.width;
    CGFloat height = screenSize.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = 4;
    size_t bytesPerRow = (width * bitsPerComponent * bytesPerPixel + 7) / 8;
    dataSize = bytesPerRow * height;
    initdata = malloc(sizeof(UInt8) * dataSize);
    memset(initdata, 0, dataSize);
    context = CGBitmapContextCreate(initdata, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
    // a drawing for a test.
    CGContextSetRGBFillColor(context, 1, 0, 0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, 200, 100));
    CGContextSetRGBFillColor(context, 0, 0, 1, .5);
    CGContextFillRect(context, CGRectMake(0, 0, 100, 200));
    CGColorSpaceRelease(colorSpace);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [[UIImage imageWithCGImage:imageRef] retain];
    CGImageRelease(imageRef);
    imageView = [[UIImageView alloc] initWithImage:result];
    [svContent addSubview:imageView];
    [imageView release];
    // [self ChangeImage:initdata];
}
- (void)ChangeImage
{
    UIImageView *imageView;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGColorSpaceRelease(colorSpace);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [[UIImage imageWithCGImage:imageRef] retain];
    CGImageRelease(imageRef);
    imageView = [[UIImageView alloc] initWithImage:result];
    [svContent addSubview:imageView];   // note: this adds another image view on every call
    [imageView release];
    NSLog(@"Thread Image Change GO");
    [svContent setNeedsDisplay];
    [self.view setNeedsDisplay];
    [super.view setNeedsDisplay];
}
- (void)setPixelColorR:(int)red G:(int)green B:(int)blue X:(int)x Y:(int)y
{
    CGFloat r = (CGFloat)red / 255.0f;
    CGFloat g = (CGFloat)green / 255.0f;
    CGFloat b = (CGFloat)blue / 255.0f;
    CGContextSetRGBFillColor(context, r, g, b, 1);
    CGContextFillRect(context, CGRectMake(x, y, 1, 1));
}
// ...
@end
Thank you.

iPhone: take an augmented reality screenshot with AVCaptureVideoPreviewLayer

I have a small augmented reality app in development and would like to know how to save a screenshot of what the user sees with a tap of a button or on a timer.
The app works by overlaying a live camera feed above another UIView. I can save screenshots by using the power + home buttons; these are saved to the camera roll. However, Apple will not render the AVCaptureVideoPreviewLayer, even if I ask the window to save itself. It leaves a transparent piece of canvas where the preview layer is.
What's the proper way for an augmented reality app to save screenshots, including transparency and subviews?
//displaying a live preview on one of the views
- (void)startCapture
{
    captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *videoCaptureDevice = nil;
    // AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in videoDevices) {
        if (useFrontCamera) {
            if (device.position == AVCaptureDevicePositionFront) {
                // front-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        } else {
            if (device.position == AVCaptureDevicePositionBack) {
                // rear-facing camera exists
                videoCaptureDevice = device;
                break;
            }
        }
    }
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
    if (videoInput) {
        [captureSession addInput:videoInput];
    }
    else {
        // Handle the failure.
    }
    // create and configure the output before asking the session whether it can be added;
    // queue and videoSettings are assumed to be set up elsewhere (see activateCameraFeed below)
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    if ([captureSession canAddOutput:captureOutput]) {
        [captureSession addOutput:captureOutput];
    } else {
        //handle failure
    }
    previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    UIView *aView = arOverlayView;
    previewLayer.frame = CGRectMake(0, 0, arOverlayView.frame.size.width, arOverlayView.frame.size.height); // Assume you want the preview layer to fill the view.
    [aView.layer addSublayer:previewLayer];
    [captureSession startRunning];
}
//ask the entire window to draw itself in a graphics context. This call will not render
//the AVCaptureVideoPreviewLayer. It has to be replaced with a UIImageView or GL-based view.
//see the following code for creating a dynamically updating UIImageView
- (void)saveScreenshot
{
    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//image saved to camera roll callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
                                             contextInfo:(void *)contextInfo
{
    // Was there an error?
    if (error != nil)
    {
        // Show error message...
        NSLog(@"save failed");
    }
    else // No errors
    {
        NSLog(@"save successful");
        // Show message image successfully saved
    }
}
Here's the code for creating the image:
//you need to add your view controller as a delegate to the camera output to be notified of buffered data
- (void)activateCameraFeed
{
    //this is the code responsible for capturing the feed for still image processing
    dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureOutput setAlwaysDiscardsLateVideoFrames:YES];
    [captureOutput setSampleBufferDelegate:self queue:queue];
    [captureOutput setVideoSettings:videoSettings];
    dispatch_release(queue);
    //......configure audio feed, add inputs and outputs
}
//buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (ignoreImageStream)
        return;
    [self performImageCaptureFrom:sampleBuffer];
}
Create a UIImage:
- (void)performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer;
    if (CMSampleBufferGetNumSamples(sampleBuffer) != 1)
        return;
    if (!CMSampleBufferIsValid(sampleBuffer))
        return;
    if (!CMSampleBufferDataIsReady(sampleBuffer))
        return;
    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA)
        return;
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGImageRef newImage = nil;
    if (cameraDeviceSetting == CameraDeviceSetting640x480)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(newContext);
    }
    else
    {
        // higher-resolution feed: copy the full buffer, then scale it down to 640x480
        uint8_t *tempAddress = malloc(bytesPerRow * height);
        memcpy(tempAddress, baseAddress, bytesPerRow * height);
        baseAddress = tempAddress;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
        newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640 * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGContextScaleCTM(newContext, (CGFloat)640/(CGFloat)width, (CGFloat)480/(CGFloat)height);
        CGContextDrawImage(newContext, CGRectMake(0, 0, 640, 480), newImage);
        CGImageRelease(newImage);
        newImage = CGBitmapContextCreateImage(newContext);
        CGColorSpaceRelease(colorSpace);
        CGContextRelease(newContext);
        free(tempAddress);
    }
    if (newImage != nil)
    {
        //modified for iOS 5.0 with ARC
        tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation];
        CGImageRelease(newImage);
        //this call creates the illusion of a preview layer, while we are actively switching images created with this method
        [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES];
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Update the interface with a UIView that can actually be rendered in a graphics context:
- (void)newCameraImageNotification:(UIImage *)newImage
{
    if (newImage == nil)
        return;
    [arOverlayView setImage:newImage];
    //or do more advanced processing of the image
}
If you want a snapshot of what's on screen, this is what I'm doing in one of my camera apps. I haven't touched this code in a long time, so there might be a better iOS 5.0 way now, but it's solid with over 1 million downloads. There is a function for grabbing a UIView-based screen and one for grabbing an OpenGL ES 1 screen:
//
// ScreenCapture.m
// LiveEffectsCam
//
// Created by John Carter on 10/8/10.
//
#import "ScreenCapture.h"
#import <QuartzCore/CABase.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
#import <QuartzCore/CAScrollLayer.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/EAGLDrawable.h>
@implementation ScreenCapture

+ (UIImage *)GLViewToImage:(GLView *)glView
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
    return glImage;
}
+ (UIImage *)GLViewToImage:(GLView *)glView withOverlayImage:(UIImage *)overlayImage
{
    UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, glImage.size.width*glImage.scale, glImage.size.height*glImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy(overlayImage.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)glImage.size.width*glImage.scale, (int)glImage.size.height*glImage.scale, 8, (int)glImage.size.width*4*glImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, glImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return combinedViewImage;
}
+ (UIImage *)UIViewToImage:(UIView *)view withOverlayImage:(UIImage *)overlayImage
{
    UIImage *viewImage = [ScreenCapture UIViewToImage:view]; // returns an autoreleased image
    // Merge Image and Overlay
    //
    CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, viewImage.size.width*viewImage.scale, viewImage.size.height*viewImage.scale);
    CGImageRef overlayCopy = CGImageCreateCopy(overlayImage.CGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (int)viewImage.size.width*viewImage.scale, (int)viewImage.size.height*viewImage.scale, 8, (int)viewImage.size.width*4*viewImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, imageRect, viewImage.CGImage);
    CGContextDrawImage(context, imageRect, overlayCopy);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
    CGImageRelease(newImage);
    CGImageRelease(overlayCopy);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return combinedViewImage;
}
+ (UIImage *)UIViewToImage:(UIView *)view
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    //
    // CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    CGSize imageSize = CGSizeMake((CGFloat)480.0, (CGFloat)640.0); // camera image size
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Start with the view...
    //
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, [view center].x, [view center].y);
    CGContextConcatCTM(context, [view transform]);
    CGContextTranslateCTM(context, -[view bounds].size.width * [[view layer] anchorPoint].x, -[view bounds].size.height * [[view layer] anchorPoint].y);
    [[view layer] renderInContext:context];
    CGContextRestoreGState(context);
    // ...then repeat for every subview from back to front
    //
    for (UIView *subView in [view subviews])
    {
        if ([subView respondsToSelector:@selector(screen)])
            if ([(UIWindow *)subView screen] == [UIScreen mainScreen])
                continue;
        CGContextSaveGState(context);
        CGContextTranslateCTM(context, [subView center].x, [subView center].y);
        CGContextConcatCTM(context, [subView transform]);
        CGContextTranslateCTM(context, -[subView bounds].size.width * [[subView layer] anchorPoint].x, -[subView bounds].size.height * [[subView layer] anchorPoint].y);
        [[subView layer] renderInContext:context];
        CGContextRestoreGState(context);
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();
    return image;
}
+ (UIImage *)snapshot:(GLView *)eaglview
{
    NSInteger x = 0;
    NSInteger y = 0;
    NSInteger width = [eaglview backingWidth];
    NSInteger height = [eaglview backingHeight];
    NSInteger dataLength = width * height * 4;
    NSUInteger i;
    // give pending GL commands a chance to complete before reading pixels
    for (i = 0; i < 100; i++)
    {
        glFlush();
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0/(float)60.0, FALSE);
    }
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));
    // Read pixel data from the framebuffer
    //
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    //
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);
    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    //
    NSInteger widthInPoints;
    NSInteger heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        //
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else
    {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        //
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }
    CGContextRef cgcontext = UIGraphicsGetCurrentContext();
    // UIKit coordinate system is upside down relative to the GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    //
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
    UIGraphicsEndImageContext();
    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);
    return image;
}
@end
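A minimal usage sketch for the AR question above (an assumption about how the pieces fit together, not code from the answer): once the preview layer has been replaced by the UIImageView fed from newCameraImageNotification:, layer-based rendering picks up the camera frames, so the screenshot method can be as simple as:

- (void)saveScreenshot
{
    // self.view now contains arOverlayView (a UIImageView showing real frames),
    // so a renderInContext:-based grab includes the camera image
    UIImage *screenshot = [ScreenCapture UIViewToImage:self.view];
    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}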

How can I draw an image?

I'm programming in Objective-C. I have a 1 x 30 pixel image of a line.
How can I get a 50 x 30 UIImage from this line?
Create a CGBitmapContext with a size of 50 x 30, then draw the line image into it repeatedly using CGContextDrawImage.
After that, use CGBitmapContextCreateImage and [UIImage imageWithCGImage:] to create the UIImage:
CGContextRef CreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4); // RGBA
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(NULL,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    NSCAssert(context != NULL, @"cannot create bitmap context");
    CGColorSpaceRelease(colorSpace);
    return context;
}
CGContextRef context = CreateBitmapContext(50, 30);
UIImage *yourLineImage = ...;
CGImageRef cgImg = [yourLineImage CGImage];
for (int i = 0; i < 50; i++) {
    // draw the 1 x 30 line into each 1-pixel-wide column
    CGRect rect;
    rect.origin.x = i;
    rect.origin.y = 0;
    rect.size.width = 1;
    rect.size.height = 30;
    CGContextDrawImage(context, rect, cgImg);
}
CGImageRef output = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:output];
CGImageRelease(output);     // release what we created
CGContextRelease(context);
If your line is a single solid color, try this lazy method:
UIImageView *line = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 50, 30)];
[line setImage:[UIImage imageNamed:@"your gray line"]];
[self.view addSubview:line];
You can use +[UIColor colorWithPatternImage] on iOS:
NSString *path =
    [[NSBundle mainBundle] pathForResource:@"<# the pattern file #>" ofType:@"png"];
UIColor *patternColor = [UIColor colorWithPatternImage:
                            [UIImage imageWithContentsOfFile:path]];
/* use patternColor anywhere as a regular UIColor instance */
It works best with seamless patterns. On OS X you can use the +[NSColor colorWithPatternImage] method.
If you just want to draw the image, you might want to try UIImage's drawInRect: method.
You'd typically call this from your custom UIView's drawRect:.
There are different approaches to drawing in Cocoa (and Cocoa Touch), so here's Apple's Drawing and Printing Guide for iOS.
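A minimal sketch of that drawRect: approach (LineView and lineImage are illustrative names, not from the answer):

@interface LineView : UIView
@property (nonatomic, retain) UIImage *lineImage; // the 1 x 30 source image
@end

@implementation LineView
@synthesize lineImage;
- (void)drawRect:(CGRect)rect
{
    // drawInRect: stretches the 1 x 30 image to fill the view's 50 x 30 bounds
    [self.lineImage drawInRect:self.bounds];
}
@end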

Resetting a CGContextRef after drawing a PDF page using CGContextDrawPDFPage

I am trying to create thumbnail images for every page in a PDF document and place them in a UIScrollView. I have succeeded in this, but scrolling is not as smooth as I'd like when it's fast.
So I want to optimize the thumbnail creation for each PDF page.
I want to create one CGContextRef and reset its contents after each CGContextDrawPDFPage; that way I wouldn't have to create a context each time and repeat other setup, which takes a lot of resources.
Is it possible to reset a CGContextRef's content after CGContextDrawPDFPage? CGContextSaveGState and CGContextRestoreGState don't seem to help in this situation.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
GCPdfSource *pdfSource = [GCPdfSource sharedInstance];
for (int i = pageRange.location; i <= pageRange.length; i++) {
    UIView *thumbPdfView = [scrollView viewWithTag:i+1];
    if (thumbPdfView == nil) {
        CGPDFPageRef pdfPage = [pdfSource pageAt:i + 1];
        float xPosition = THUMB_H_PADDING + THUMB_H_PADDING * i + THUMB_WIDTH * i;
        CGRect frame = CGRectMake(xPosition, THUMB_H_PADDING, THUMB_WIDTH, THUMB_HEIGHT);
        thumbPdfView = [[UIView alloc] initWithFrame:frame];
        thumbPdfView.opaque = YES;
        thumbPdfView.backgroundColor = [UIColor whiteColor];
        [thumbPdfView setTag:i+1];
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     frame.size.width,
                                                     frame.size.height,
                                                     8, /* bits per component */
                                                     frame.size.width * 4, /* bytes per row */
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        CGContextClipToRect(context, CGRectMake(0, 0, frame.size.width, frame.size.height));
        CGRect pdfPageRect = CGPDFPageGetBoxRect(pdfPage, kCGPDFMediaBox);
        CGRect contextRect = CGContextGetClipBoundingBox(context);
        CGAffineTransform transform = aspectFit(pdfPageRect, contextRect);
        CGContextConcatCTM(context, transform);
        CGContextDrawPDFPage(context, pdfPage);
        CGImageRef image = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        UIImage *uiImage = [[UIImage alloc] initWithCGImage:image];
        CGImageRelease(image);
        UIImageView *imageView = [[UIImageView alloc] initWithImage:uiImage];
        [uiImage release];
        [thumbPdfView addSubview:imageView];
        [imageView release];
        [scrollView addSubview:thumbPdfView];
        [thumbPdfView release];
    }
}
[pool release];
and the aspectFit function:
CGAffineTransform aspectFit(CGRect innerRect, CGRect outerRect) {
    CGFloat scaleFactor = MIN(outerRect.size.width/innerRect.size.width, outerRect.size.height/innerRect.size.height);
    CGAffineTransform scale = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    CGRect scaledInnerRect = CGRectApplyAffineTransform(innerRect, scale);
    CGAffineTransform translation =
        CGAffineTransformMakeTranslation((outerRect.size.width - scaledInnerRect.size.width) / 2 - scaledInnerRect.origin.x,
                                         (outerRect.size.height - scaledInnerRect.size.height) / 2 - scaledInnerRect.origin.y);
    return CGAffineTransformConcat(scale, translation);
}
Try CGContextClearRect(context, contextBounds).
CGContextSaveGState and CGContextRestoreGState do not have any effect on the content of a context; they push and pop changes made to state aspects of the context, like the current fill color.
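To make the reuse concrete, here is a minimal sketch of the question's loop with a single shared context (assuming all thumbnails share the THUMB_WIDTH x THUMB_HEIGHT size; save/restore is still useful for undoing each page's CTM, even though it cannot erase pixels):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, THUMB_WIDTH, THUMB_HEIGHT,
                                             8, THUMB_WIDTH * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
for (int i = pageRange.location; i <= pageRange.length; i++) {
    CGPDFPageRef pdfPage = [pdfSource pageAt:i + 1];
    CGContextClearRect(context, CGRectMake(0, 0, THUMB_WIDTH, THUMB_HEIGHT)); // wipe the previous page's pixels
    CGContextSaveGState(context);                  // remember the untransformed state
    CGAffineTransform transform = aspectFit(CGPDFPageGetBoxRect(pdfPage, kCGPDFMediaBox),
                                            CGRectMake(0, 0, THUMB_WIDTH, THUMB_HEIGHT));
    CGContextConcatCTM(context, transform);
    CGContextDrawPDFPage(context, pdfPage);
    CGContextRestoreGState(context);               // undo this page's CTM before the next iteration
    CGImageRef image = CGBitmapContextCreateImage(context);
    // ...wrap image in a UIImage / UIImageView as in the question's code, then:
    CGImageRelease(image);
}
CGContextRelease(context);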