Adding thumbnail-size images to a table view - iPhone

I have a table view, and I am loading images into it. The images range from 150 KB to 2 MB. Since this is too much for a table view to handle (it takes a long time to load and makes scrolling slow), I thought of using the Image I/O framework to create thumbnail versions of the images.
I found some code that does this, but I can't understand it.
1) Can someone please explain the code to me?
2) I have a table view and I need to load thumbnail images into it. How can I use the following code to display thumbnails in my table view? Can someone show me some sample code or a tutorial that does this?
Here's the code:
-(void)buildGallery
{
    for (NSUInteger i = 0; i < kMaxPictures; i++)
    {
        NSInteger imgTag = i + 1;
        // Create the view that will display the thumbnail (x, y and _thumbSize are ivars)
        NYXPictureView* v = [[NYXPictureView alloc] initWithFrame:(CGRect){.origin.x = x, .origin.y = y, .size = _thumbSize}];
        // Locate the JPEG in the bundle, e.g. "1.jpg", "2.jpg", ...
        NSString* imgPath = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%d", imgTag] ofType:@"jpg"];
        // Create an image source from the file URL; this does not decode the full image yet
        CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imgPath], NULL);
        // Options: apply the EXIF rotation, create a thumbnail if the file has none embedded,
        // and cap the thumbnail's width and height at _maxSize pixels
        CFDictionaryRef options = (CFDictionaryRef)[[NSDictionary alloc] initWithObjectsAndKeys:
            (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
            (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
            (id)[NSNumber numberWithDouble:_maxSize], (id)kCGImageSourceThumbnailMaxPixelSize,
            nil];
        // Decode only a scaled-down image instead of the full-resolution one
        CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(src, 0, options);
        CFRelease(options);
        CFRelease(src);
        UIImage* img = [[UIImage alloc] initWithCGImage:thumbnail];
        [v setImage:img];
        [img release];
        CGImageRelease(thumbnail);
    }
}

Basically, the problem you have is due to the fact that scaling an image down for display doesn't change the number of bytes stored in memory. The hardware still has to read your 2 MB image and then render it at a smaller scale. What you need to do is either change the size of the image itself (using Photoshop or similar), or, as I'm suggesting, compress your image and then scale it down. The image will look rough at normal size, but will look OK when you scale it down to a thumbnail view.
To generate an NSData version of your image encoded as a PNG:
NSData *PNGFile = UIImagePNGRepresentation(myImage);
Or as a JPEG, with a quality value set:
NSData *JPEGFile = UIImageJPEGRepresentation(myImage, 0.9f);
Both of these will give you an image smaller than the one you currently have, which will be easier to render in the table view.
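For instance, a minimal sketch of that round trip, assuming myImage is your full-size source image and myThumbView is a hypothetical thumbnail UIImageView:
// Compress heavily to JPEG, then rebuild a UIImage from the smaller data.
// The re-decoded image will look rough at full size but acceptable as a thumbnail.
NSData *jpegData = UIImageJPEGRepresentation(myImage, 0.3f); // low quality = small file
UIImage *thumbImage = [UIImage imageWithData:jpegData];
[myThumbView setImage:thumbImage]; // hypothetical thumbnail view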

In order to get better performance, you're going to have to load the image on a background thread, and after it's in memory, add the UIImage to the image view on the main thread. There are a couple of ways to go about this, but the simplest is to use GCD's block-based methods, as sketched below.
Resizing the image is definitely still important for memory considerations, but get the asynchronous image loading part down first.
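A minimal sketch of that pattern, assuming plain UITableViewCells and a hypothetical -thumbnailPathForIndexPath: helper that returns a file path for a row:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                       reuseIdentifier:@"Cell"] autorelease];
    }
    cell.imageView.image = nil; // clear any recycled image
    NSString *path = [self thumbnailPathForIndexPath:indexPath]; // hypothetical helper
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // decode the image off the main thread
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        dispatch_async(dispatch_get_main_queue(), ^{
            // the cell may have been recycled while we were loading, so ask the
            // table for whichever cell shows this row now (nil if scrolled away)
            UITableViewCell *current = [tableView cellForRowAtIndexPath:indexPath];
            current.imageView.image = image;
        });
    });
    return cell;
}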

Related

Loading large images from Core Data, memory usage spikes after 10

In my iOS app, I have a UIScrollView with paging enabled. Each page shows a view (with multiple subviews); only 3 "pages" are loaded at any time: the currently viewed one, the one to the left, and the one to the right. Loading is done lazily in the background. The app is using ARC.
The view on each page mainly consists of an image, which is retrieved from Core Data. This image may be large, so a thumbnail is loaded first and is later replaced by the larger image. This larger image is itself a scaled-down version of what is in the data store; the full-resolution version is needed for a different screen, but for this image scroller, it just needs to fit the page. The actual stored image may be much larger (2448x3264 for a photo from the camera). Note, the image property in Core Data is set to allow external storage, so in most cases it is not stored in the SQLite database.
Everything "works" fine: the scroller and images load quickly (thumbnail first, larger image soon after), and scrolling is fast. According to Instruments, memory usage is good also - until the 11th image loads, when memory spikes by ~5MB; subsequent images being loaded are likely to cause more memory spikes (not every one, maybe every other causes another ~5MB spike). There isn't a specific image causing the spike (I've changed the order of the images that get loaded, its always the 11th). This memory never seems to be released.
Here's a snippet of the code where the images are loaded in the background:
- (void)loadImageWithCardInstanceObjectId:(NSManagedObjectID *)cardInstanceObjectId
                               imageIndex:(NSUInteger)imageIndex
                  withInitialImageHandler:(void (^)(UIImage *image))initialImageHandler
                    withFinalImageHandler:(void (^)(UIImage *image))finalImageHandler
{
    NSManagedObjectContext *tempMoc = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    tempMoc.parentContext = [[DataManager sharedInstance] managedObjectContext];
    [tempMoc performBlock:^{
        CardInstance *ci = (CardInstance *)[tempMoc objectWithID:cardInstanceObjectId];
        if (ci.cardInstanceImages.count == 0) {
            // no card images, return the default image
            dispatch_async(dispatch_get_main_queue(), ^{
                initialImageHandler([UIImage imageNamed:@"CardNoImage.png"]);
            });
            return;
        }
        // have card images, pick the one according to the passed-in index
        UIImage *thumbnail = [[ci.cardInstanceImages objectAtIndex:imageIndex] thumbnailRepresentation];
        dispatch_async(dispatch_get_main_queue(), ^{
            initialImageHandler(thumbnail);
        });
        // THIS IS THE OFFENDING LINE
        UIImage *fullImage = [[ci.cardInstanceImages objectAtIndex:imageIndex] imageRepresentation];
        CGSize size = fullImage.size;
        CGFloat ratio = 0;
        if (size.width > size.height) {
            ratio = 240.0 / size.width;
        }
        else {
            ratio = 240.0 / size.height;
        }
        CGRect rect = CGRectMake(0.0, 0.0, ceilf(ratio * size.width), ceilf(ratio * size.height));
        // create the image to display
        UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0);
        [fullImage drawInRect:rect];
        UIImage *imageToDisplay = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            finalImageHandler(imageToDisplay);
        });
    }];
}
In Instruments, the responsible library/caller for the allocation is Core Data's +[_PFRoutines readExternalReferenceDataFromFile:].
Any thoughts on why this memory spike occurs? Thanks.

Loaded image in UIImage is pixelated on Retina

I parsed the data from a web page, which also contains a JPEG image. The problem is that the image looks blurry/pixelated on Retina displays. Any solution for this? Thanks.
NSData *data = [NSData dataWithContentsOfURL:linkUrl];
UIImage *img = [[UIImage alloc] initWithData:data];
// detailViewController.faces.contentScaleFactor = [UIScreen mainScreen].scale; // Attempt to solve the problem
detailViewController.faces.image = img;
After initializing your image with the data, create a new one from it with the correct scale like this:
img = [UIImage imageWithCGImage:img.CGImage scale:[UIScreen mainScreen].scale orientation:img.imageOrientation];
...but note that the image will now appear half the size on retina displays unless you scale it up, for example by stretching it in an image view.
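If stretching it in an image view is the route you take, one way to do that (assuming faces is a UIImageView, as the question suggests) is to let the view handle the scaling:
// Let the image view scale the image to fit its bounds
detailViewController.faces.contentMode = UIViewContentModeScaleAspectFit;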

Resizing an ALAsset photo takes a long time. Any way around this?

I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the full-resolution photos to a smaller size. However, this takes some time, especially when resizing up to 3 photos.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;

    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300px on the long end) and a large image (max 960px or 640px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual code I'm using to resize, I grabbed from somewhere on here:
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low in quality, and at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, and on the resolution and format of the asset; you don't have any control over that. What you do have control over is where the resizing takes place. I suspect that right now you are resizing the image on your main thread, which will cause the UI to grind to a halt while you are doing the resizing. Do enough images, and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;

        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // do whatever you need to do on the main thread here once your image is resized.
            // this is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy
        });
    }
});
You'll have to think about things a little differently this way. For example, you've now broken up what used to be a single step into multiple steps. Code that used to run after this will now end up running before the image resize is complete, or before you actually do anything with the images, so you need to make sure you don't have any dependencies on those images or you'll likely crash.
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
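For example, a minimal sketch of that substitution (asset here stands for an assumed ALAsset):
ALAssetRepresentation *rep = [asset defaultRepresentation];
// fullScreenImage is pre-scaled to roughly the device's screen size and is
// already rotated correctly, so no manual rotation step is needed
CGImageRef iref = [rep fullScreenImage];
if (iref)
{
    UIImage *screenSizedImage = [UIImage imageWithCGImage:iref];
    // scale down further for previews if needed; starting from the smaller
    // fullScreenImage is much cheaper than starting from fullResolutionImage
}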

Get image width and height before loading it completely in iPhone

I am loading an image in my UITableViewCell using
[NSData dataWithContentsOfURL:imageUrl]
For setting a custom height for my table view cell, I need the actual size of the image that I am loading.
Can we get the width and height of an image before loading it completely? Thanks in advance.
Try the Image I/O interface as shown below. This will allow you to get the image size without having to load the entire file:
#import <ImageIO/ImageIO.h>

NSMutableString *imageURL = [NSMutableString stringWithFormat:@"http://www.myimageurl.com/image.png"];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[NSURL URLWithString:imageURL], NULL);
// __bridge_transfer hands ownership of the copied properties dictionary to ARC
NSDictionary *imageHeader = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
CFRelease(source); // balance the Create call
NSLog(@"Image header %@", imageHeader);
NSLog(@"PixelHeight %@", [imageHeader objectForKey:@"PixelHeight"]);
You can do it like this:
NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
UIImage *image = [UIImage imageWithData:imageData];
NSLog(@"image height: %f", image.size.height);
NSLog(@"image width: %f", image.size.width);
Take a look at this question: How do I extract the width and height of a PNG from looking at the header in objective c, which shows how it is possible to parse image metadata.
I have created an open-source project, Ottran, that extracts the image size and type of a remote image by downloading as little as possible. It supports PNG, JPEG, BMP and GIF formats.
NSData is "opaque" data, so you cannot do much with it before converting it to something more useful (e.g., creating a UIImage by means of its -initWithData: method). At that moment you could query the image size, but it would be too late for you.
The only approach I see, if you really need to know the image size before the image is fully downloaded, is implementing a minimal server-side API so that you can ask for the image size before trying to download it.
Anyway, why do you need to know the image size before it is actually downloaded? Could you not set the row height at the moment it has been downloaded (i.e., from your request delegate method)? A sketch of that idea follows below.
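A rough sketch of that delegate-side approach (the rowHeights, receivedData, indexPath and tableView properties are assumptions about your downloader class):
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    UIImage *image = [UIImage imageWithData:self.receivedData];
    // remember the size for tableView:heightForRowAtIndexPath:
    [self.rowHeights setObject:[NSValue valueWithCGSize:image.size]
                        forKey:self.indexPath];
    // re-run the height calculation for just this row
    [self.tableView reloadRowsAtIndexPaths:[NSArray arrayWithObject:self.indexPath]
                          withRowAnimation:UITableViewRowAnimationNone];
}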
dataWithContentsOfURL is synchronous: it will block your UI until the download completes, so please use the header content to get the resolution instead. Below is Swift 3.0 code:
if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}

iPhone - Replacing a block of pixels in an image with another block of pixels

I would like to know if anyone can point me in the right direction regarding which frameworks/methods to use when replacing a block of pixels in an image with another block of pixels.
For example, let's say I have a simple image colored fully red, and I consider my pixel block size to be 5x5 pixels. Now, every 3 blocks, I want to replace the 3rd block with a blue colored block (5x5 pixels), so that the result is a red image with blue blocks appearing every 3 red blocks.
I've found information regarding pixel replacement, but am trying to figure out how to determine and process pixel blocks. Any advice would be appreciated... and any code would be a wet dream.
Although it may sound like a simple question, it's not quite easy to do something like that. You need a lot of background information to get everything going. It also depends on where the image comes from, and what you want to do with it.
I will not provide you with a working example, but here's some pointers:
Reading in an image:
UIImage *img = [UIImage imageNamed:@"myimage.png"];
// make sure you added myimage.png to your project
Creating your own "graphics context" to draw on:
UIGraphicsBeginImageContext(CGSizeMake(480.0f, 320.0f)); // the size (width,height)
CGContextRef ctx = UIGraphicsGetCurrentContext();
Painting your image onto your (current) context:
[img drawAtPoint:CGPointMake(0.0f, -480.0f)]; // (draws on current context)
Doing some other painting:
CGContextBeginPath(ctx); // start a path
CGContextSetRGBStrokeColor(ctx,1,0,0,1); // red,green,blue,alpha with range 0-1
CGContextMoveToPoint(ctx,50,0); // move to beginning of path at point (50,0)
CGContextAddLineToPoint(ctx,100,100); // add a line to point (100,100)
CGContextStrokePath(ctx); // do the drawing
Transforming the bitmap context to an image again:
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
And then, either:
Show the image to the audience, from within a view controller:
[self.view addSubview:[[[UIImageView alloc] initWithImage:resultImage] autorelease]];
Or write the image as a JPEG:
NSData *imagedata = UIImageJPEGRepresentation(img,0.8); // 0.8 = quality
[imagedata writeToFile:filename atomically:NO];
Or as PNG:
NSData *imagedata = UIImagePNGRepresentation(img);
[imagedata writeToFile:filename atomically:NO];
For those last ones you would need to have a filename, which is not so trivial either (given the phone environment you live in). That's another topic on which you'll find a lot; search for things like:
NSArray *path = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES);
NSString *filename = [[path objectAtIndex:0] stringByAppendingPathComponent:@"myimage.png"];
This is the general outline; I might have made some errors, but this should get you going at least. A rough sketch tying it all together follows below.
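To connect those pieces to the original 5x5-block question, here is a sketch that fills every 3rd 5x5 block with blue, assuming myimage.png exists in the project; treat it as an outline rather than production code:
UIImage *img = [UIImage imageNamed:@"myimage.png"];
UIGraphicsBeginImageContext(img.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[img drawAtPoint:CGPointZero]; // paint the source image first

CGContextSetRGBFillColor(ctx, 0, 0, 1, 1); // blue
NSUInteger blockIndex = 0;
for (CGFloat y = 0; y < img.size.height; y += 5.0f) {
    for (CGFloat x = 0; x < img.size.width; x += 5.0f) {
        if (blockIndex % 3 == 2) { // every 3rd 5x5 block becomes blue
            CGContextFillRect(ctx, CGRectMake(x, y, 5.0f, 5.0f));
        }
        blockIndex++;
    }
}

UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();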