Creating a PDF thumbnail on iPhone

I am new to Objective-C iPhone programming. I have an application in which I display a PDF in my UIWebView successfully, but now I want to create a thumbnail of my PDF. The PDF is stored in my resource folder.
So please show me how I can display a thumbnail of my PDF. Here is my code for displaying the PDF, which runs from a button action:
-(void)show:(id)sender {
    pdfView.autoresizesSubviews = NO;
    pdfView.scalesPageToFit = YES;
    pdfView.autoresizingMask = (UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth);
    [pdfView setDelegate:self];
    NSString *path = [[NSBundle mainBundle] pathForResource:@"com" ofType:@"pdf"];
    NSLog(@"Path of resource is %@", path);
    NSURL *url = [NSURL fileURLWithPath:path];
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    [pdfView loadRequest:request];
}

Try the following method:
- (UIImage *)imageFromPDFWithDocumentRef:(CGPDFDocumentRef)documentRef {
    CGPDFPageRef pageRef = CGPDFDocumentGetPage(documentRef, 1);
    CGRect pageRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);
    UIGraphicsBeginImageContext(pageRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context vertically so the PDF page is not drawn upside down.
    CGContextTranslateCTM(context, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));
    CGContextScaleCTM(context, 1, -1);
    CGContextTranslateCTM(context, -(pageRect.origin.x), -(pageRect.origin.y));
    CGContextDrawPDFPage(context, pageRef);
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
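A minimal usage sketch (pre-ARC, to match the question's code), assuming the com.pdf from the question sits in the main bundle and pdfImageView is a hypothetical UIImageView:
NSURL *pdfURL = [[NSBundle mainBundle] URLForResource:@"com" withExtension:@"pdf"];
CGPDFDocumentRef documentRef = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL); // use __bridge under ARC
UIImage *thumbnail = [self imageFromPDFWithDocumentRef:documentRef];
CGPDFDocumentRelease(documentRef);
pdfImageView.image = thumbnail;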

Swift 3
(Thanks to Prine for Swift 2!)
func getPdfThumb(url:NSURL, pageBase1:Int) -> UIImage? {
guard let document = CGPDFDocument(url as CFURL) else { return nil }
guard let firstPage = document.page(at: pageBase1) else { return nil }
let width:CGFloat = 240.0;
var pageRect:CGRect = firstPage.getBoxRect(.mediaBox)
let pdfScale:CGFloat = width/pageRect.size.width
pageRect.size = CGSize(width: pageRect.size.width*pdfScale, height: pageRect.size.height*pdfScale)
pageRect.origin = CGPoint.zero
UIGraphicsBeginImageContext(pageRect.size)
let context:CGContext = UIGraphicsGetCurrentContext()!
// White background
context.setFillColor(red: 1.0,green: 1.0,blue: 1.0,alpha: 1.0)
context.fill(pageRect)
context.saveGState()
// Handle rotation
context.translateBy(x: 0.0, y: pageRect.size.height)
context.scaleBy(x: 1.0, y: -1.0)
context.concatenate(firstPage.getDrawingTransform(.mediaBox, rect: pageRect, rotate: 0, preserveAspectRatio: true))
context.drawPDFPage(firstPage)
context.restoreGState()
let image:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return image
}

As you may already know, there are two ways of rendering PDFs:
UIWebView
Quartz Rendering
The other answers at the time of writing have focused on Quartz. There are a number of good reasons for this, mostly performance related, and in my opinion Quartz is worth it. I would recommend reading this thread to get a better idea of the pros and cons.
There is apparently an excellent, fairly new API for Quartz-based PDF rendering here.
Of course, you could present the PDF via UIWebView and render the thumbs using Quartz.
There is also a bit of confusion around thumbs. For people new to Quartz PDF magic, it might seem after some searching that there are APIs that support thumbs, but you should check whether that support is for embedded thumbnails only; many PDFs don't have these.
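If you want to check whether a given page actually carries an embedded thumbnail, you can look for the /Thumb entry in its page dictionary; a minimal sketch using plain CGPDF calls (the helper name is mine):
// Returns YES if the page dictionary contains a /Thumb stream (an embedded thumbnail).
// Many PDFs in the wild won't have one, so be prepared to render your own.
static BOOL PageHasEmbeddedThumb(CGPDFPageRef pageRef) {
    if (pageRef == NULL) return NO;
    CGPDFDictionaryRef pageDict = CGPDFPageGetDictionary(pageRef);
    CGPDFStreamRef thumbStream = NULL;
    return CGPDFDictionaryGetStream(pageDict, "Thumb", &thumbStream);
}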
Another option is to create the thumbs yourself (using Quartz); there are plenty of examples of this around the net, including the two answers above. However, if you are targeting iOS 4 or above, I would strongly recommend using blocks. (Graphics contexts have also been thread-safe since iOS 4.)
I found a significant performance increase when I generated thumbs with blocks.
What I have done in the past is:
Have a view controller for your thumbs; it has a scroll view with a content size appropriate for all your pages. Insert placeholder image views into it if you like.
On document load, kick off a thumb generator in the background (see the code below).
The code below calls a method drawImageView: that takes the index of the page, grabs the image from disk, and puts it into the scroll view (a rough sketch of such a method follows the listing below).
If you're feeling really motivated, you can implement a render scope on the thumb scroll view (only rendering the thumbs you need to; something you should be doing for the PDFs anyway).
Don't forget to delete the thumbs when you're done, unless you want to cache them.
#define THUMB_SIZE 100,144
-(void)generateThumbsWithGCD
{
    thumbQueue = dispatch_queue_create("thumbQueue", 0); // thumbQueue is a dispatch_queue_t ivar
    NSFileManager *fm = [NSFileManager defaultManager];
    //good idea to check for a previous thumb cache with NSFileManager here
    CGSize thumbSize = CGSizeMake(THUMB_SIZE);
    __block CGPDFPageRef myPageRef;
    NSString *reqSysVer = @"4.0";
    NSString *currSysVer = [[UIDevice currentDevice] systemVersion];
    //need to handle iOS versions < 4
    if ([currSysVer compare:reqSysVer options:NSNumericSearch] == NSOrderedAscending) {NSLog(@"UIKIT MULTITHREADING NOT SUPPORTED!"); return;} //thread/API safety
    dispatch_async(thumbQueue, ^{
        for (int i = 1; i <= _maxPages; i++) {
            //check if worker is valid (class member bool) for cancellations
            myPageRef = [[PDFDocument sharedPDFDocument] getPageData:i]; //PDFDocument is a singleton class
            if (!myPageRef) return;
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSString *imageName = [NSString stringWithFormat:@"%@thumb%i.png", documentName, i];
            NSString *fullPathToFile = [thumbDocPath stringByAppendingPathComponent:imageName];
            if (![fm fileExistsAtPath:fullPathToFile]) {
                //NSLog(@"Not there");
                UIGraphicsBeginImageContext(thumbSize); //thread-safe in iOS 4
                CGContextRef context = UIGraphicsGetCurrentContext(); //thread-safe in iOS 4
                CGContextTranslateCTM(context, 0, 144);
                CGContextScaleCTM(context, 0.15, -0.15);
                CGContextDrawPDFPage(context, myPageRef);
                UIImage *render = UIGraphicsGetImageFromCurrentImageContext();
                UIGraphicsEndImageContext();
                NSData *imageData = UIImagePNGRepresentation(render);
                if (imageData) {
                    NSLog(@"WROTE TO: %@", fullPathToFile);
                    if (![imageData writeToFile:fullPathToFile atomically:NO]) NSLog(@"ERROR: thumb didn't save"); //COMMENT OUT TO DISABLE WRITE
                }
            }
            else NSLog(@"Already there! %@", fullPathToFile);
            //update progress on the thumb view controller here if you wish
            [pool release];
            dispatch_sync(dispatch_get_main_queue(), ^{
                [self drawImageView:i];
            });
        }
    });
    dispatch_release(thumbQueue);
}
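The drawImageView: method isn't shown here; a rough sketch of what it might look like, assuming a thumbScrollView ivar and the same thumbDocPath/documentName used when the thumbs were saved:
// Hypothetical sketch -- loads the cached thumb for a page and drops it into the scroll view.
-(void)drawImageView:(int)pageIndex
{
    NSString *imageName = [NSString stringWithFormat:@"%@thumb%i.png", documentName, pageIndex];
    NSString *fullPathToFile = [thumbDocPath stringByAppendingPathComponent:imageName];
    UIImage *thumb = [UIImage imageWithContentsOfFile:fullPathToFile];
    if (!thumb) return;
    // Simple horizontal layout; THUMB_SIZE is 100 x 144 as defined above.
    CGRect frame = CGRectMake(10 + (pageIndex - 1) * 110, 10, 100, 144);
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:frame];
    imageView.image = thumb;
    [thumbScrollView addSubview:imageView];
    [imageView release]; // pre-ARC, matching the surrounding code
}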

I came up with a solution that uses Core Graphics and Swift 3.0. It's highly inspired by the one presented by Alexandre. In my opinion, my approach results in more 'Swifty' code, and it also fixes a couple of problems with the scaling and orientation of the resulting image.
Note that my code uses AVMakeRect(aspectRatio:insideRect:) and requires importing AVFoundation.
// Page numbering starts from 1; `url` is assumed to be a property of the enclosing type.
func generate(size: CGSize, page: Int) -> UIImage? {
guard let document = CGPDFDocument(url as CFURL), let page = document.page(at: page) else { return nil }
let originalPageRect: CGRect = page.getBoxRect(.mediaBox)
var targetPageRect = AVMakeRect(aspectRatio: originalPageRect.size, insideRect: CGRect(origin: CGPoint.zero, size: size))
targetPageRect.origin = CGPoint.zero
UIGraphicsBeginImageContextWithOptions(targetPageRect.size, true, 0)
defer { UIGraphicsEndImageContext() }
guard let context = UIGraphicsGetCurrentContext() else { return nil }
context.setFillColor(gray: 1.0, alpha: 1.0)
context.fill(targetPageRect)
context.saveGState()
context.translateBy(x: 0.0, y: targetPageRect.height)
context.scaleBy(x: 1.0, y: -1.0)
context.concatenate(page.getDrawingTransform(.mediaBox, rect: targetPageRect, rotate: 0, preserveAspectRatio: true))
context.drawPDFPage(page)
context.restoreGState()
return context.makeImage().flatMap() { UIImage(cgImage: $0, scale: UIScreen.main.scale, orientation: .up) }
}

I just rewrote the Objective-C code in Swift. Maybe someone else can use it:
func getThumbnail(url:NSURL, pageNumber:Int) -> UIImage {
var pdf:CGPDFDocumentRef = CGPDFDocumentCreateWithURL(url as CFURLRef);
var firstPage = CGPDFDocumentGetPage(pdf, pageNumber)
var width:CGFloat = 240.0;
var pageRect:CGRect = CGPDFPageGetBoxRect(firstPage, kCGPDFMediaBox);
var pdfScale:CGFloat = width/pageRect.size.width;
pageRect.size = CGSizeMake(pageRect.size.width*pdfScale, pageRect.size.height*pdfScale);
pageRect.origin = CGPointZero;
UIGraphicsBeginImageContext(pageRect.size);
var context:CGContextRef = UIGraphicsGetCurrentContext();
// White BG
CGContextSetRGBFillColor(context, 1.0,1.0,1.0,1.0);
CGContextFillRect(context,pageRect);
CGContextSaveGState(context);
// ***********
// Next 3 lines makes the rotations so that the page look in the right direction
// ***********
CGContextTranslateCTM(context, 0.0, pageRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(firstPage, kCGPDFMediaBox, pageRect, 0, true));
CGContextDrawPDFPage(context, firstPage);
CGContextRestoreGState(context);
var thm:UIImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return thm;
}

Related

How to draw text in NPAPI plugin CGContextRef?

I want to draw text in the NPAPI plugin's CGContextRef, but I don't know how to make it work.
I get the CGContextRef as follows:
int16_t NPP_HandleEvent(NPP instance, void* event)
{
int16_t iRet = 0;
PluginObject *obj = (PluginObject *)instance->pdata;
NPCocoaEvent *cocoaEvent = (NPCocoaEvent *)event;
switch(cocoaEvent->type)
{
case NPCocoaEventDrawRect:
obj->m_NPContext = CGContextRetain(cocoaEvent->data.draw.context);
DrawSealOnContext(obj->m_NPContext, obj->m_pstSealAPInfo);
iRet = 1;
break;
default:
iRet = 0;
break;
}
return iRet;
}
In the function DrawSealOnContext, I want to draw an ellipse and some text in the window. The function is as follows:
int DrawSealOnContext(CGContextRef contextRef, PSEAL_APPEARANCE_INFO pstSealAPInfo)
{
    // draw ellipse
    CGRect rect = {2, 25, 146, 100};
    CGContextSetLineWidth(contextRef, 4.0);
    CGContextSetStrokeColorWithColor(contextRef, [NSColor blueColor].CGColor);
    CGContextBeginPath(contextRef);
    CGContextAddEllipseInRect(contextRef, rect);
    CGContextDrawPath(contextRef, kCGPathStroke);
    // draw text
    CGContextSetFillColorWithColor(contextRef, [NSColor blueColor].CGColor);
    NSMutableParagraphStyle *paragraphStyle = [[[NSParagraphStyle defaultParagraphStyle] mutableCopy] autorelease];
    [paragraphStyle setAlignment:NSCenterTextAlignment];
    NSDictionary *attributes = [NSDictionary dictionaryWithObject:paragraphStyle
                                                            forKey:NSParagraphStyleAttributeName];
    NSString *mystr = @"hello\n and";
    NSRect strFrame = { { 0, 0 }, { 150, 150 } };
    [mystr drawInRect:strFrame withAttributes:attributes];
    return 0;
}
I can get the ellipse on the screen, but the text doesn't show up.
I also tried this:
// draw text
CGContextShowText(contextRef, "hello", 5);
It doesn’t work either.
What's wrong with my program? I'd really appreciate your answers.
I have solved this problem with the help of this article:
http://www.cocoabuilder.com/archive/cocoa/240527-setting-cgcontextref-to-the-current-context.html
The problem is that the function
[mystr drawInRect:strFrame withAttributes:attributes];
draws text in the current context, but my current context is not the contextRef I declared. So, before using the drawInRect method, I need to set contextRef as the current context.
For how to set a CGContextRef as the current context, please refer to the website I pasted.
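For reference, the pattern looks roughly like this (a sketch only; graphicsContextWithGraphicsPort:flipped: was the API of that era, and you may need to experiment with the flipped flag for your plugin's coordinate system):
// Make the plugin's CGContextRef the current Cocoa drawing context,
// so that -drawInRect:withAttributes: has somewhere to draw.
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *nsCtx = [NSGraphicsContext graphicsContextWithGraphicsPort:contextRef flipped:YES];
[NSGraphicsContext setCurrentContext:nsCtx];
[mystr drawInRect:strFrame withAttributes:attributes];
[NSGraphicsContext restoreGraphicsState];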
Hope this answer can solve your problem!

Fetching the first page of a PDF as a UIImage in iPad development

In iPad development, is there a way to get the first page of a PDF file as a UIImage?
If you don't have an exact solution, can you tell me which way I should proceed?
I tried this function, but UIGraphicsGetCurrentContext() returns nothing:
+(UIImage*) imageFromPDF:(CGPDFDocumentRef)pdf withPageNumber:(NSUInteger)pageNumber withScale:(CGFloat)scale
{
//if(pageNumber > 0 && pageNumber < CGPDFDocumentGetNumberOfPages(pdf))
//{
CGPDFPageRef pdfPage = CGPDFDocumentGetPage(pdf,pageNumber);
CGRect tmpRect = CGPDFPageGetBoxRect(pdfPage,kCGPDFMediaBox);
CGRect rect = CGRectMake(tmpRect.origin.x,tmpRect.origin.y,tmpRect.size.width*scale,tmpRect.size.height*scale);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context,0,rect.size.height);
CGContextScaleCTM(context,scale,-scale);
CGContextDrawPDFPage(context,pdfPage);
UIImage* pdfImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return pdfImage;
//}
//return nil;
}
Thanks
Shoeb
Here's a category I found on gist to do what you want:
https://gist.github.com/892868
Additionally, here is a post/answer from the past on SO:
Rendering a CGPDFPage into a UIImage

Proper use of MKOverlayView

I am writing an iPhone app in which I place a large PNG image (1936 × 2967) on an MKMapView using MKOverlayView. I am a little confused about how to appropriately implement the drawMapRect: function in MKOverlayView - should I manually segment my image before drawing it? Or should I let the mechanisms of MKOverlayView handle all that?
My impression from other posts is that before MKOverlayView was available, you were expected to segment images yourself for this kind of task, and use a CATiledLayer. I thought maybe MKOverlayView took care of all the dirty work.
The real reason I ask though is because when I run my app through Instruments using the allocations tool, I find that the number of live bytes my app is using steadily increases with the introduction of the custom image on the map. Right now I am NOT segmenting my image, but I also am seeing no record of memory leaks in the leaks tool in Instruments. Here is my drawMapRect: function:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
    // Load image from application bundle
    NSString* imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.png"];
    CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    MKMapRect overlayMapRect = [self.overlay boundingMapRect];
    CGRect overlayRect = [self rectForMapRect:overlayMapRect];
    // draw image
    CGContextSaveGState(context);
    CGContextDrawImage(context, overlayRect, image);
    CGContextRestoreGState(context);
    CGImageRelease(image);
}
If my drawMapRect: function is not the cause of these memory issues, does anybody know what it might be? I know through debugging that my viewForOverlay: function for the mapView only gets called once for each overlay, so it's not that memory is leaking there or something.
Any advice is welcome!
Thanks, -Matt
EDIT: so it turns out that the memory issue is actually being caused by MKMapView - every time I move the map at all the memory usage goes up very steadily and never comes down - this doesn't seem good :(
A bit of a late answer; leaving it here in case somebody else hits the same problem in the future. The flaw here is trying to render the whole image, while the documentation clearly says:
In addition, you should avoid drawing the entire contents of the overlay each time this method is called. Instead, always take the mapRect parameter into consideration and avoid drawing content outside that rectangle.
So you only have to draw the part of the image that falls in the area defined by mapRect.
Updated: keep in mind that the draw rect here can be larger than mapRect; you need to adjust the paint and cut regions accordingly.
let overlayMapRect = overlay.boundingMapRect
let overlayDrawRect = self.rect(for: overlayMapRect)
// watch out for draw rect adjustment here --
let drawRect = self.rect(for: mapRect).intersection(overlayDrawRect)
let scaleX = CGFloat(image.width) / overlayDrawRect.width
let scaleY = CGFloat(image.height) / overlayDrawRect.height
let transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
let imageCut = drawRect.applying(transform)
// omitting optionals checks, you should not
let cutImage = image.cropping(to: imageCut)!
// the usual vertical flip issue with image.draw
context.translateBy(x: 0, y: drawRect.maxY + drawRect.origin.y)
context.scaleBy(x: 1, y: -1)
context.draw(cutImage, in: drawRect, byTiling: false)
Here is the Objective-C version based on epolyakov's answer. It works great, but only without any rotation.
- (void) drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
CGImageRef overlayImage = <your_uiimage>.CGImage;
CGRect overlayRect = [self rectForMapRect:[self.overlay boundingMapRect]];
CGRect drawRect = [self rectForMapRect:mapRect];
CGRect rectPortion = CGRectIntersection(overlayRect, drawRect);
CGFloat scaleX = rotatedImage.size.width / overlayRect.size.width;
CGFloat scaleY = rotatedImage.size.height / overlayRect.size.height;
CGAffineTransform transform = CGAffineTransformMakeScale(scaleX, scaleY);
CGRect imagePortion = CGRectApplyAffineTransform(rectPortion, transform);
CGImageRef cutImage = CGImageCreateWithImageInRect(overlayImage, imagePortion);
CGRect finalRect = rectPortion;
CGContextTranslateCTM(context, 0, finalRect.origin.y + CGRectGetMaxY(finalRect));
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetAlpha(context, self.alpha);
CGContextDrawImage(context, finalRect, cutImage);
}
If you also need to manage the rotation of your image, I found a trick using a rotated version of the original image (this is because the map rendering always draws vertical rects, and rotating the image inside this method would cut it).
Using a rotated version of the original image allows rendering with the vertical rects the map expects:
UIImage* rotatedImage = [self rotatedImage:<your_uiimage> withAngle:<angle_of_image>];
CGImageRef overlayImage = rotatedImage.CGImage;
And this is the method that produces the rotated image in a bounding rect:
- (UIImage*) rotatedImage:(UIImage*)image withAngle:(CGFloat)angle
{
float radians = degreesToRadians(angle);
CGAffineTransform xfrm = CGAffineTransformMakeRotation(radians);
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
CGRect rotatedImageBoundingRect = CGRectApplyAffineTransform (imageRect, xfrm);
UIGraphicsBeginImageContext(rotatedImageBoundingRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM (ctx, rotatedImageBoundingRect.size.width/2., rotatedImageBoundingRect.size.height/2.);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextRotateCTM (ctx, radians);
CGContextDrawImage (ctx, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}

Get size of a UIImage (bytes length) not height and width

I'm trying to get the length of a UIImage. Not the width or height of the image, but the size of the data.
UIImage *img = [UIImage imageNamed:@"sample.png"];
NSData *imgData = UIImageJPEGRepresentation(img, 1.0);
NSLog(@"Size of Image (bytes): %d", [imgData length]);
The underlying data of a UIImage can vary, so for the same "image" one can have varying sizes of data. One thing you can do is use UIImagePNGRepresentation or UIImageJPEGRepresentation to get the equivalent NSData constructs for either, then check the size of that.
Use the CGImage property of UIImage. Then, using CGImageGetBytesPerRow * CGImageGetHeight, plus the size of the UIImage instance itself, you should be within a few bytes of the actual size.
This will return the uncompressed size of the image, which is useful for purposes such as malloc in preparation for bitmap manipulation (assuming a 4-byte pixel format: 3 bytes for RGB and 1 for alpha):
int height = image.size.height,
width = image.size.width;
int bytesPerRow = 4*width;
if (bytesPerRow % 16)
bytesPerRow = ((bytesPerRow / 16) + 1) * 16;
int dataSize = height*bytesPerRow;
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)editInfo
{
    UIImage *image = [editInfo valueForKey:UIImagePickerControllerOriginalImage];
    NSURL *imageURL = [editInfo valueForKey:UIImagePickerControllerReferenceURL];
    __block long long realSize;
    ALAssetsLibraryAssetForURLResultBlock resultBlock = ^(ALAsset *asset)
    {
        ALAssetRepresentation *representation = [asset defaultRepresentation];
        realSize = [representation size];
    };
    ALAssetsLibraryAccessFailureBlock failureBlock = ^(NSError *error)
    {
        NSLog(@"%@", [error localizedDescription]);
    };
    if (imageURL)
    {
        ALAssetsLibrary *assetsLibrary = [[[ALAssetsLibrary alloc] init] autorelease];
        [assetsLibrary assetForURL:imageURL resultBlock:resultBlock failureBlock:failureBlock];
    }
}
Example in Swift:
let img: UIImage? = UIImage(named: "yolo.png")
let imgData: NSData = UIImageJPEGRepresentation(img, 0)
println("Size of Image: \(imgData.length) bytes")
The following is the fastest, cleanest, most general, and least error-prone way to get the answer. In a category UIImage+MemorySize:
#import <objc/runtime.h>
- (size_t) memorySize
{
CGImageRef image = self.CGImage;
size_t instanceSize = class_getInstanceSize(self.class);
size_t pixmapSize = CGImageGetHeight(image) * CGImageGetBytesPerRow(image);
size_t totalSize = instanceSize + pixmapSize;
return totalSize;
}
Or if you only want the actual bitmap and not the UIImage instance container, then it is truly as simple as this:
- (size_t) memorySize
{
return CGImageGetHeight(self.CGImage) * CGImageGetBytesPerRow(self.CGImage);
}
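A hedged usage sketch, assuming the methods above are wrapped in a UIImage (MemorySize) category with a matching header:
#import "UIImage+MemorySize.h"

UIImage *photo = [UIImage imageNamed:@"sample.png"]; // any image you already have
NSLog(@"Approximate in-memory size: %zu bytes", [photo memorySize]);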
Swift 3:
let image = UIImage(named: "example.jpg")
if let data = UIImageJPEGRepresentation(image, 1.0) {
print("Size: \(data.count) bytes")
}
I'm not sure about your situation. If you need the actual byte size, I don't think you can do that. You can use UIImagePNGRepresentation or UIImageJPEGRepresentation to get an NSData object of the compressed data of the image.
I think you want the actual size of the uncompressed image (pixel data). You need to convert the UIImage* or CGImageRef to raw data. This is an example of converting a UIImage to an IplImage (from OpenCV). You just need to allocate enough memory and pass the pointer as CGBitmapContextCreate's first argument.
UIImage *image = //Your image
CGImageRef imageRef = image.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
iplimage->depth, iplimage->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
cvCvtColor(iplimage, ret, CV_RGBA2BGR);
cvReleaseImage(&iplimage);
SWIFT 4+
let imgData = image?.jpegData(compressionQuality: 1.0)
debugPrint("Size of Image: \(imgData!.count) bytes")
You can use this trick to find out the image size.
Swift 4 & 5:
extension UIImage {
var sizeInBytes: Int {
guard let cgImage = self.cgImage else {
// This won't work for CIImage-based UIImages
assertionFailure()
return 0
}
return cgImage.bytesPerRow * cgImage.height
}
}
If needed in human-readable form, we can use ByteCountFormatter:
if let data = UIImageJPEGRepresentation(image, 1.0) {
let fileSizeStr = ByteCountFormatter.string(fromByteCount: Int64(data.count), countStyle: ByteCountFormatter.CountStyle.memory)
print(fileSizeStr)
}
Where Int64(data.count) is what you need in numeric format.
I tried to get the image size using
let imgData = image.jpegData(compressionQuality: 1.0)
but it gives less than the actual size of the image. Then I tried to get the size using the PNG representation:
let imageData = image.pngData()
but it gives a larger byte count than the actual image size.
The only thing that worked perfectly for me:
public func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
var asset: PHAsset!
if #available(iOS 11.0, *) {
asset = info[UIImagePickerControllerPHAsset] as? PHAsset
} else {
if let url = info[UIImagePickerControllerReferenceURL] as? URL {
asset = PHAsset.fetchAssets(withALAssetURLs: [url], options: .none).firstObject!
}
}
if #available(iOS 13, *) {
PHImageManager.default().requestImageDataAndOrientation(for: asset, options: .none) { data, string, orien, info in
let imgData = NSData(data:data!)
var imageSize: Int = imgData.count
print("actual size of image in KB: %f ", Double(imageSize) / 1024.0)
}
} else {
PHImageManager.default().requestImageData(for: asset, options: .none) { data, string, orientation, info in
let imgData = NSData(data:data!)
var imageSize: Int = imgData.count
print("actual size of image in KB: %f ", Double(imageSize) / 1024.0)
}
}
}

Slicing up a UIImage on iPhone

Objective: take a UIImage, crop out a square in the middle, change size of square to 320x320 pixels, slice up the image into 16 80x80 images, save the 16 images in an array.
Here's my code:
CGImageRef originalImage, resizedImage, finalImage, tmp;
float imgWidth, imgHeight, diff;
UIImage *squareImage, *playImage;
NSMutableArray *tileImgArray;
int r, c;
originalImage = [image CGImage];
imgWidth = image.size.width;
imgHeight = image.size.height;
diff = fabs(imgWidth - imgHeight);
if(imgWidth > imgHeight){
resizedImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(floor(diff/2), 0, imgHeight, imgHeight));
}else{
resizedImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(0, floor(diff/2), imgWidth, imgWidth));
}
CGImageRelease(originalImage);
squareImage = [UIImage imageWithCGImage:resizedImage];
if(squareImage.size.width != squareImage.size.height){
NSLog(#"image cutout error!");
//*code to return to main menu of app, irrelevant here
}else{
float newDim = squareImage.size.width;
if(newDim != 320.0){
CGSize finalSize = CGSizeMake(320.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
playImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}else{
playImage = squareImage;
}
}
finalImage = [playImage CGImage];
tileImgArray = [NSMutableArray arrayWithCapacity:0];
for(int i = 0; i < 16; i++){
r = i/4;
c = i%4;
//*
tmp = CGImageCreateWithImageInRect(finalImage, CGRectMake(c*tileSize, r*tileSize, tileSize, tileSize));
[tileImgArray addObject:[UIImage imageWithCGImage:tmp]];
}
The code works correctly when the original (the variable image) has its smaller dimension either bigger or smaller than 320 pixels. When it's exactly 320, the resulting 80x80 images are almost entirely black, some with a few pixels at the edges that may (I can't really tell) be from the original image.
I tested by displaying the full image both directly:
[UIImage imageWithCGImage:finalImage];
And indirectly:
[UIImage imageWithCGImage:CGImageCreateWithImageInRect(finalImage, CGRectMake(0, 0, 320, 320))];
In both cases, the display worked. The problems only arise when I attempt to slice out some part of the image.
After some more experimentation, I found the following solution (though I still don't know why it didn't work as originally written). The slicing works once the resize code is in place, even when resizing is unnecessary:
if(newDim != 320.0){
CGSize finalSize = CGSizeMake(320.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
playImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}else{
CGSize finalSize = CGSizeMake(320.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
playImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Does anyone have any clue WHY this is going on?
P.S. Yes, the if/else is no longer required here. Removing it before I knew it was going to work would have been stupid, though.
Just out of curiosity, why did you create your mutable array with a capacity of 0 when you know you're going to put 16 things in it?
Well, aside from that, I've tried the basic techniques you used for resizing and slicing (I did not need to crop, because I'm working with images that are already square) and I'm unable to reproduce your problem in the simulator. You might want to try breaking your code into three separate functions (crop to square, resize, and slice into pieces) and then test the three separately, so you can figure out which of the three steps is causing the problems (i.e. feed in input images that you've manipulated in a normal graphics program instead of Objective-C, then inspect what you get back out!). A rough crop-to-square sketch follows the two helpers below.
I'll attach my versions of the resize and slice functions below, which will hopefully be helpful. It was nice to have your versions to look at, since I didn't have to find all the methods by myself for once. :)
Just as a note, the two dimensional array mentioned is my own class built out of NSMutableArrays, but you could easily implement your own version or use a flat NSMutableArray instead. ;)
// cut the given image into a grid of equally sized smaller images
// this assumes that the image can be equally divided in the requested increments
// the images will be stored in the return array in [row][column] order
+ (TwoDimensionalArray *) chopImageIntoGrid : (UIImage *) originalImage : (int) numberOfRows : (int) numberOfColumns
{
// figure out the size of our tiles
int tileWidth = originalImage.size.width / numberOfColumns;
int tileHeight = originalImage.size.height / numberOfRows;
// create our return array
TwoDimensionalArray * toReturn = [[TwoDimensionalArray alloc] initWithBounds : numberOfRows
: numberOfColumns];
// get a CGI image version of our image
CGImageRef cgVersionOfOriginal = [originalImage CGImage];
// loop to chop up each row
for(int row = 0; row < numberOfRows ; row++){
// loop to chop up each individual piece by column
for (int column = 0; column < numberOfColumns; column++)
{
CGImageRef tempImage =
CGImageCreateWithImageInRect(cgVersionOfOriginal,
CGRectMake(column * tileWidth,
row * tileHeight,
tileWidth,
tileHeight));
[toReturn setObjectAt : row : column : [UIImage imageWithCGImage:tempImage]];
}
}
// now return the set of images we created
return [toReturn autorelease];
}
// this method resizes an image to the requested dimensions
// be a bit careful when using this method, since the resize will not respect
// the proportions of the image
+ (UIImage *) resize : (UIImage *) originalImage : (int) newWidth : (int) newHeight
{
// translate the image to the new size
CGSize newSize = CGSizeMake(newWidth, newHeight); // the new size we want the image to be
UIGraphicsBeginImageContext(newSize); // downside: this can't go on a background thread, I'm told
[originalImage drawInRect : CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); // get our new image
UIGraphicsEndImageContext();
// return our brand new image
return newImage;
}
Eva Schiffer
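For the crop-to-square step mentioned above (not included in the pair of helpers), here is a possible companion sketch in the same style, adapted from the question's own cropping logic; the method name is made up:
// crop the largest centered square out of the given image
+ (UIImage *) cropToCenteredSquare : (UIImage *) originalImage
{
    CGImageRef cgOriginal = [originalImage CGImage];
    float imgWidth = originalImage.size.width;
    float imgHeight = originalImage.size.height;
    float diff = fabs(imgWidth - imgHeight);
    CGRect squareRect;
    if (imgWidth > imgHeight) {
        squareRect = CGRectMake(floor(diff / 2), 0, imgHeight, imgHeight);
    } else {
        squareRect = CGRectMake(0, floor(diff / 2), imgWidth, imgWidth);
    }
    CGImageRef cropped = CGImageCreateWithImageInRect(cgOriginal, squareRect);
    UIImage *squareImage = [UIImage imageWithCGImage:cropped];
    CGImageRelease(cropped); // we own the cropped copy (but not cgOriginal)
    return squareImage;
}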