How to draw an image and text in an image context? - iPhone

I tried to draw text on an image. When I don't apply the transformations, the image is drawn at the bottom left corner and the image and text are fine (Fig 2), but I want the image at the top left of the view.
Below is my drawRect implementation.
How to flip the image so that text and image are aligned properly?
or
How to move the image to the top left of the view?
If I don't use the following function calls, the image gets created at the bottom of the view (Fig 2):
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, self.bounds.size.height);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
- (void)drawRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(self.bounds.size);

    // Comment out the next two lines to get Fig 2; applying them results in Fig 1
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, self.bounds.size.height);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    UIImage *natureImage = [UIImage imageNamed:@"Nature"];
    CGRect natureImageRect = CGRectMake(130, 380, 50, 50);
    [natureImage drawInRect:natureImageRect];

    UIFont *numberFont = [UIFont systemFontOfSize:28.0];
    NSFileManager *fm = [NSFileManager defaultManager];
    NSString *aNumber = @"111";
    [aNumber drawAtPoint:CGPointMake(100, 335) withFont:numberFont];

    UIFont *textFont = [UIFont systemFontOfSize:22.0];
    NSString *aText = @"Hello";
    [aText drawAtPoint:CGPointMake(220, 370) withFont:textFont];

    self.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *imageData = UIImagePNGRepresentation(self.image);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *filePath = [documentsDirectory stringByAppendingPathComponent:@"Image.png"];
    NSLog(@"filePath :%@", filePath);
    BOOL isFileCreated = [fm createFileAtPath:filePath contents:imageData attributes:nil];
    if (isFileCreated)
    {
        NSLog(@"File created at Path %@", filePath);
    }
}

Here is the code I used to draw the same image (except I did not use the nature image). It's not in a drawRect:; depending on your end goal, it might be better to do this outside of drawRect: in a custom method and set self.image to the result of -(UIImage *)imageToDraw.
Here is the code:
- (UIImage *)imageToDraw
{
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 480), NO, [UIScreen mainScreen].scale);

    UIImage *natureImage = [UIImage imageNamed:@"testImage.png"];
    [natureImage drawInRect:CGRectMake(130, 380, 50, 50)];

    UIFont *numberFont = [UIFont systemFontOfSize:28.0];
    NSString *aNumber = @"111";
    [aNumber drawAtPoint:CGPointMake(100, 335) withFont:numberFont];

    UIFont *textFont = [UIFont systemFontOfSize:22.0];
    NSString *aText = @"Hello";
    [aText drawAtPoint:CGPointMake(220, 370) withFont:textFont];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}

- (NSString *)filePath
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    return [documentsDirectory stringByAppendingPathComponent:@"Image.png"];
}

- (void)testImageWrite
{
    NSData *imageData = UIImagePNGRepresentation([self imageToDraw]);
    NSError *writeError = nil;
    BOOL success = [imageData writeToFile:[self filePath] options:0 error:&writeError];
    if (!success || writeError != nil)
    {
        NSLog(@"Error Writing: %@", writeError.description);
    }
}
Anyway - hope this helps.

Your image's frame of reference is the standard iOS one: the origin is at the top left corner. The text you are drawing with Core Text, however, uses the old frame of reference, with the origin at the bottom left corner. Just apply a transform to the text (not the graphics context), like so:
CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0f, -1.0f));
Then all your content will be laid out as if the axis origin were at the top left corner.
If you have to write entire sentences with Core Text, as opposed to just words, take a look at this blog post.
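For illustration, here is a minimal sketch of drawing a single Core Text line with a flipped text matrix inside drawRect:; it requires linking the CoreText framework, and the string, font, and position are placeholders rather than values from the question:

#import <CoreText/CoreText.h>

CGContextRef context = UIGraphicsGetCurrentContext();
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 22.0, NULL);
NSDictionary *attributes = @{ (__bridge id)kCTFontAttributeName : (__bridge id)font };
NSAttributedString *attrString = [[NSAttributedString alloc] initWithString:@"Hello"
                                                                 attributes:attributes];
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)attrString);

// Flip only the glyphs so they read correctly with UIKit's top-left origin.
CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0f, -1.0f));
CGContextSetTextPosition(context, 20.0f, 50.0f);
CTLineDraw(line, context);

CFRelease(line);
CFRelease(font);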

I want to point out that the drawRect: method is for drawing in a view, not for creating an image. There is also a performance concern: drawRect: can be called a lot. For this purpose it is better to use viewDidLoad or some custom method.
I have done an example with your request:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    UIGraphicsBeginImageContext(self.view.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGFloat margin = 10;
    CGFloat y = self.view.bounds.size.height * 0.5;
    CGFloat x = margin;

    UIImage *natureImage = [UIImage imageNamed:@"image"];
    CGRect natureImageRect = CGRectMake(x, y - 20, 40, 40);
    [natureImage drawInRect:natureImageRect];
    x += 40 + margin;

    UIFont *textFont = [UIFont systemFontOfSize:22.0];
    NSString *aText = @"Hello";
    NSMutableParagraphStyle *style = [[NSParagraphStyle defaultParagraphStyle] mutableCopy];
    style.alignment = NSTextAlignmentCenter;
    NSDictionary *attr = @{NSParagraphStyleAttributeName: style,
                           NSFontAttributeName: textFont};
    CGSize size = [aText sizeWithAttributes:attr];
    [aText drawInRect:CGRectMake(x, y - size.height * 0.5, 100, 40)
             withFont:textFont
        lineBreakMode:UILineBreakModeClip
            alignment:UITextAlignmentLeft];

    self.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    self.imageView.image = self.image;
}
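As a side note, drawInRect:withFont:lineBreakMode:alignment: has been deprecated since iOS 7; a minimal sketch of the equivalent call, reusing the attr dictionary already built above, would be:

// iOS 7+ replacement for the deprecated drawInRect:withFont:lineBreakMode:alignment:
[aText drawInRect:CGRectMake(x, y - size.height * 0.5, 100, 40)
   withAttributes:attr];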

Related

MKAnnotationView image is not displayed on result snapshot image iOS 7

I've created similar code to what was shown at WWDC for displaying a pin on snapshots, but the pin image is not displayed:
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.mapView.region;
options.scale = 2;
options.size = self.mapView.frame.size;

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error)
{
    MKAnnotationView *pin = [[MKAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:@""];

    UIImage *image;
    UIImage *finalImage;
    image = snapshot.image;
    NSLog(@"%f", image.size.height);

    UIImage *pinImage = pin.image;
    CGPoint pinPoint = [snapshot pointForCoordinate:CLLocationCoordinate2DMake(self.longtitude, self.latitude)];
    CGPoint pinCenterOffset = pin.centerOffset;
    pinPoint.x -= pin.bounds.size.width / 2.0;
    pinPoint.y -= pin.bounds.size.height / 2.0;
    pinPoint.x += pinCenterOffset.x;
    pinPoint.y += pinCenterOffset.y;

    UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
    [image drawAtPoint:CGPointMake(0, 0)];
    [pinImage drawAtPoint:pinPoint];
    finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSData *data = UIImageJPEGRepresentation(finalImage, 0.95f);
    NSArray *pathArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *path = [pathArray objectAtIndex:0];
    NSLog(@"%@", path);
    NSString *fileWithPath = [path stringByAppendingPathComponent:@"test.jpeg"];
    [data writeToFile:fileWithPath atomically:YES];
}];
Only the snapshot of the map is displayed, without the pin image.
If you're expecting the default pin image to appear, you need to create an MKPinAnnotationView instead of the plain MKAnnotationView (which has no default image -- it's blank by default).
Also, please note that the latitude and longitude parameters are backwards in this line:
CGPoint pinPoint = [snapshot pointForCoordinate:CLLocationCoordinate2DMake(
self.longtitude, self.latitude)];
In CLLocationCoordinate2DMake, latitude should be the first parameter and longitude the second.
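Putting both fixes together (keeping the question's longtitude property name as written), the relevant lines would look roughly like this:

// MKPinAnnotationView provides a default pin image; latitude comes first in the coordinate.
MKPinAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:@""];
CGPoint pinPoint = [snapshot pointForCoordinate:CLLocationCoordinate2DMake(self.latitude, self.longtitude)];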

Faster way to load Images in Collection View

I have a small smoothness problem with my UICollectionView. I want to display PDF thumbnails just like in Apple's iBooks app, but when I scroll my collection view I can see it's not really smooth. Here is the way I load my pictures:
- (UICollectionViewCell *)collectionView:(UICollectionView *)cv cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    GridCell *cell = [cv dequeueReusableCellWithReuseIdentifier:@"gridCell" forIndexPath:indexPath];
    ...
    // Set Thumbnail
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        if ([Tools checkIfLocalFileExist:cell.pdfDoc])
        {
            UIImage *thumbnail = [Tools generateThumbnailForFile:((PDFDocument *)[self.pdfList objectAtIndex:indexPath.row]).title];
            dispatch_async(dispatch_get_main_queue(), ^{
                [cell.imageView setImage:thumbnail];
            });
        }
    });
    ...
    return cell;
}
Method to get the thumbnail:
+ (UIImage *)generateThumbnailForFile:(NSString *)fileName
{
    // ----- Check if thumbnail already exists
    NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *thumbnailAddress = [documentsPath stringByAppendingPathComponent:[[fileName stringByDeletingPathExtension] stringByAppendingString:@".png"]];
    BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:thumbnailAddress];
    if (fileExists)
        return [UIImage imageWithContentsOfFile:thumbnailAddress];

    // ----- Generate Thumbnail
    NSString *filePath = [documentsPath stringByAppendingPathComponent:fileName];
    CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:filePath];
    CGPDFDocumentRef documentRef = CGPDFDocumentCreateWithURL(url);
    CGPDFPageRef pageRef = CGPDFDocumentGetPage(documentRef, 1);
    CGRect pageRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);

    UIGraphicsBeginImageContext(pageRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, -(pageRect.origin.x), -(pageRect.origin.y));
    CGContextDrawPDFPage(context, pageRef);

    // ----- Save Image
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    NSData *imageData = [NSData dataWithData:UIImagePNGRepresentation(finalImage)];
    [imageData writeToFile:thumbnailAddress atomically:YES];
    UIGraphicsEndImageContext();
    return finalImage;
}
Do you have any suggestions?
Take a look at https://github.com/rs/SDWebImage. The library works great for async image loading, particularly the method setImageWithURL:placeholderImage:.
You can call that method and set a placeholder with a loading image or a blank PNG, and once the image you are trying to retrieve is loaded it will fill in the placeholder. This should speed up your app quite a bit.
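For illustration, a sketch of that category call (this assumes SDWebImage's UIImageView+WebCache header is available; the URL and the placeholder image name are placeholders, not values from the question):

#import <SDWebImage/UIImageView+WebCache.h>

// Loads the image asynchronously, caches it, and swaps out the placeholder when done.
[cell.imageView setImageWithURL:[NSURL URLWithString:@"http://example.com/thumbnail.png"]
               placeholderImage:[UIImage imageNamed:@"placeholder"]];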

How to retrieve images from Instagram which has special hashtag?

My client wants to share an image on Instagram. I have implemented sharing the image on Instagram, but I could not share it with a special hashtag. Here is my code so far.
- (IBAction)sharePhotoOnInstagram:(id)sender {
    UIImagePickerController *imgpicker = [[UIImagePickerController alloc] init];
    imgpicker.delegate = self;
    [self storeimage];

    NSURL *instagramURL = [NSURL URLWithString:@"instagram://app"];
    if ([[UIApplication sharedApplication] canOpenURL:instagramURL])
    {
        CGRect rect = CGRectMake(0, 0, 612, 612);
        NSString *jpgPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/15717.ig"];
        NSURL *igImageHookFile = [[NSURL alloc] initWithString:[[NSString alloc] initWithFormat:@"file://%@", jpgPath]];

        dic.UTI = @"com.instagram.photo";
        dic.delegate = self;
        dic = [self setupControllerWithURL:igImageHookFile usingDelegate:self];
        dic = [UIDocumentInteractionController interactionControllerWithURL:igImageHookFile];
        dic.delegate = self;
        [dic presentOpenInMenuFromRect:rect inView:self.view animated:YES];
        // [[UIApplication sharedApplication] openURL:instagramURL];
    }
    else
    {
        // NSLog(@"instagramImageShare");
        UIAlertView *errorToShare = [[UIAlertView alloc] initWithTitle:@"Instagram unavailable " message:@"You need to install Instagram in your device in order to share this image" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
        errorToShare.tag = 3010;
        [errorToShare show];
    }
}
- (void)storeimage
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *savedImagePath = [documentsDirectory stringByAppendingPathComponent:@"15717.ig"];
    UIImage *NewImg = [self resizedImage:picTaken :CGRectMake(0, 0, 612, 612)];
    NSData *imageData = UIImagePNGRepresentation(NewImg);
    [imageData writeToFile:savedImagePath atomically:NO];
}
- (UIImage *)resizedImage:(UIImage *)inImage :(CGRect)thumbRect
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);

    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate
    // see Supported Pixel Formats in the Quartz 2D Programming Guide
    // Creating a Bitmap Graphics Context section
    // only RGB 8 bit images with alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast, kCGImageAlphaPremultipliedFirst,
    // and kCGImageAlphaPremultipliedLast, with a few other oddball image kinds are supported
    // The images on input here are likely to be png or jpeg files
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                 // width
        thumbRect.size.height,                // height
        CGImageGetBitsPerComponent(imageRef), // really needs to always be 8
        4 * thumbRect.size.width,             // rowbytes
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get an image from the context and a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);

    return result;
}
- (UIDocumentInteractionController *)setupControllerWithURL:(NSURL *)fileURL usingDelegate:(id<UIDocumentInteractionControllerDelegate>)interactionDelegate
{
    UIDocumentInteractionController *interactionController = [UIDocumentInteractionController interactionControllerWithURL:fileURL];
    interactionController.delegate = self;
    return interactionController;
}

- (void)documentInteractionControllerWillPresentOpenInMenu:(UIDocumentInteractionController *)controller
{
}

- (BOOL)documentInteractionController:(UIDocumentInteractionController *)controller canPerformAction:(SEL)action
{
    // NSLog(@"5dsklfjkljas");
    return YES;
}

- (BOOL)documentInteractionController:(UIDocumentInteractionController *)controller performAction:(SEL)action
{
    // NSLog(@"dsfa");
    return YES;
}

- (void)documentInteractionController:(UIDocumentInteractionController *)controller didEndSendingToApplication:(NSString *)application
{
    // NSLog(@"fsafasd;");
}
Note: This is working fine.
I have followed their documentation at http://instagram.com/developer/iphone-hooks/ but couldn't get a better idea from it. Now I don't know what the next step is for sharing an image with a hashtag and other information.
Secondly, I want to retrieve into the application all the images shared with a particular hashtag.
Please guide me! Thanks in advance!
First, from iPhone Hooks, under 'Document Interaction':
To include a pre-filled caption with your photo, you can set the annotation property on the document interaction request to an NSDictionary containing an NSString under the key "InstagramCaption". Note: this feature will be available on Instagram 2.1 and later.
You'll need to add something like:
dic.annotation = [NSDictionary dictionaryWithObject:@"#yourTagHere" forKey:@"InstagramCaption"];
Second, you'll need to take a look at Tag Endpoints if you want to pull down images with a specific tag.
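As a rough illustration of that second part (not code from the question), fetching recent media for a tag looks something like the sketch below; YOUR-TAG and ACCESS-TOKEN are placeholders, and the exact URL and parameters should be checked against the Tag Endpoints documentation:

// Query the tag media endpoint and log the parsed JSON; real code should handle paging and errors.
NSString *urlString = @"https://api.instagram.com/v1/tags/YOUR-TAG/media/recent?access_token=ACCESS-TOKEN";
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (data != nil && error == nil) {
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        NSLog(@"%@", json); // each entry under json[@"data"] describes one shared image
    }
}];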

iOS - How is it that changing the TextView frame makes its text unselectable after PDF export?

How is it that changing the textView frame makes its text unselectable after PDF export? It's simply rasterized with horrible compression. If I export the textView and don't change the frame, I get perfect vectors.
I have a view with a textView, and I export the PDF with the content of the textView like this:
- (void)generatePDFWithFilename:(NSString *)filename
{
    // Creates a mutable data object for updating with binary data, like a byte array
    NSMutableData *pdfData = [NSMutableData data];

    // pdf view
    _pdfView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 612, 792)];

    // adding subview - for easier positioning
    CGRect tempFrame;
    UIView *tempView = [[UIView alloc] initWithFrame:CGRectMake(10, 10, _pdfView.bounds.size.width - 20, _pdfView.bounds.size.height - 20)];
    tempView.backgroundColor = [UIColor redColor];
    _pdfView.backgroundColor = [UIColor lightGrayColor];
    [_pdfView addSubview:tempView];

    UITextView *pdfDescriptionTextView = [[UITextView alloc] initWithFrame:CGRectZero];
    pdfDescriptionTextView = _workDescriptionTextView;
    [tempView addSubview:pdfDescriptionTextView];

    // pdfDescriptionTextView.center = CGPointMake(50, 200); // working - vector data are preserved
    tempFrame = pdfDescriptionTextView.frame;
    tempFrame.origin = CGPointMake(100, 100);
    tempFrame.size = CGSizeMake(tempView.bounds.size.width - 40, pdfDescriptionTextView.contentSize.height);
    pdfDescriptionTextView.frame = tempFrame;
    pdfDescriptionTextView.backgroundColor = [UIColor greenColor];

    UIGraphicsBeginPDFContextToData(pdfData, CGRectZero, nil);
    UIGraphicsBeginPDFPageWithInfo(_pdfView.bounds, nil);
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    [_pdfView.layer renderInContext:pdfContext];

    // remove PDF rendering context
    UIGraphicsEndPDFContext();

    // Retrieves the document directories from the iOS device
    NSArray *documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentDirectory = [documentDirectories objectAtIndex:0];
    NSString *documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:filename];

    // instructs the mutable data object to write its context to a file on disk
    BOOL success = [pdfData writeToFile:documentDirectoryFilename atomically:YES];
    if (success)
    {
        [self sendEmailWithData:pdfData];
    }
}
Just change its editing mode:
pdfDescriptionTextView.editable = NO;
After this the user will not be able to change the text anymore.

Loading retina/normal images using Core Graphics

+ (UIImage *)createCGImageFromFile:(NSString *)inName
{
    if (!inName || [inName length] == 0)
        return NULL;

    // use the data provider to get a CGImage; release the data provider
    CGImageRef image = NULL;
    CGFloat scale = 1.0f;
    CGDataProviderRef dataProvider = NULL;
    NSString *fileName = nil;
    NSString *path = nil;

    if ([Utilities isRetinaDevice])
    {
        NSString *extension = [inName pathExtension];
        NSString *stringWithoutPath = [inName stringByDeletingPathExtension];
        fileName = [NSString stringWithFormat:@"/Images/%@@2x.%@", stringWithoutPath, extension];
        path = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:fileName];
        dataProvider = CGDataProviderCreateWithFilename([path UTF8String]);
        if (dataProvider)
        {
            image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
            CGDataProviderRelease(dataProvider);
            if (image)
            {
                scale = 2.0f;
            }
        }
    }

    if (image == NULL) // Try normal image
    {
        fileName = [NSString stringWithFormat:@"/Images/%@", inName];
        path = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:fileName];
        dataProvider = CGDataProviderCreateWithFilename([path UTF8String]);
        if (dataProvider)
        {
            image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
            CGDataProviderRelease(dataProvider);
        }
    }

    // make a bitmap context of a suitable size to draw to, forcing decode
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    unsigned char *imageBuffer = (unsigned char *)malloc(width * height * 4);

    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext =
        CGBitmapContextCreate(imageBuffer, width, height, 8, width * 4, colourSpace,
                              kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colourSpace);

    // draw the image to the context, release it
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);

    // now get an image ref from the context
    CGImageRef cgImageRef = CGBitmapContextCreateImage(imageContext);
    CGContextRelease(imageContext);
    free(imageBuffer);

    UIImage *outImage = nil;
    if (cgImageRef)
    {
        outImage = [[UIImage alloc] initWithCGImage:cgImageRef scale:scale orientation:UIImageOrientationUp];
        CGImageRelease(cgImageRef);
    }
    return outImage; // Needs to be released by the receiver
}
Earlier, the function was not loading retina images since I was not appending @2x to the image path; I thought Apple itself would take care of it.
Now I have modified the code to load @2x images and create the UIImage using the scale factor. Am I missing something in the above code? Is it fine?
P.S. I am not currently facing any problems using this code, but I am not 100% sure.
I would use UIImage to load it (it will correct the path); that will greatly simplify your code. Then get a CGImage from it (UIImage is basically just a thin wrapper around CG, so there's no real overhead) and proceed with your code - it looks OK to me.
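A minimal sketch of that approach (this assumes the files live in an Images folder reference inside the bundle, as the original paths suggest; imageNamed: then resolves the @2x variant and sets the scale automatically):

// Let UIImage pick the correct @2x/normal file and scale.
UIImage *uiImage = [UIImage imageNamed:[NSString stringWithFormat:@"Images/%@", inName]];
CGImageRef cgImage = uiImage.CGImage;   // thin wrapper; no copy is made here
CGFloat scale = uiImage.scale;          // 1.0 or 2.0 depending on which file was loaded
// ...then continue with the existing bitmap-context decode using cgImage and scale.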