I've implemented the custom cell with autolayout from this reference: https://github.com/smileyborg/TableViewCellWithAutoLayoutiOS8
In my cell I have an imageView and 5 labels with dynamic content.
What I want to achieve is something like the Facebook feed, where the image size differs in every cell. I have metadata for the image sizes.
I tried using estimatedHeightForRow and returning some average height.
Everything works fine in the layout but it jumps while scrolling.
How can I improve my tableview scroll performance?
You have posted this code in some of your comments:
let block: SDWebImageCompletionBlock! = {
    (image: UIImage!, error: NSError!, cacheType: SDImageCacheType, imageURL: NSURL!) -> Void in
    self.ImageFeedView.sd_setImageWithURL(self.cueData.attachments[0].URL)
}
self.ImageFeedView.sd_setImageWithURL(self.cueData.attachments[0].URL, completed: block)
You are setting the image two times. First with:
self.ImageFeedView.sd_setImageWithURL(self.cueData.attachments[0].URL, completed: block)
And then again, the same image in the completion block. That's unnecessary and could lead to stuttering. Just remove the block and do this:
self.ImageFeedView.sd_setImageWithURL(self.cueData.attachments[0].URL)
The stuttering should improve now, and may even be resolved entirely; please try it.
The table can still twitch when large images load. Try decoding the received photos yourself:
UIImage *anImage = ... // Assume this exists.
CGImageRef originalImage = anImage.CGImage;
assert(originalImage != NULL);
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(originalImage));
CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
if (imageData != NULL) {
CFRelease(imageData);
}
CGImageRef image = CGImageCreate(CGImageGetWidth(originalImage),
CGImageGetHeight(originalImage),
CGImageGetBitsPerComponent(originalImage),
CGImageGetBitsPerPixel(originalImage),
CGImageGetBytesPerRow(originalImage),
CGImageGetColorSpace(originalImage),
CGImageGetBitmapInfo(originalImage),
imageDataProvider,
CGImageGetDecode(originalImage),
CGImageGetShouldInterpolate(originalImage),
CGImageGetRenderingIntent(originalImage));
if (imageDataProvider != NULL) {
CGDataProviderRelease(imageDataProvider);
}
// Do something with the image.
CGImageRelease(image);
The image returned by CGImageCreate() is already decoded, which is exactly what we need. All that's left is to wrap this code in an NSOperation so it executes in the background, and finally send the result to the main thread. Using +[UIImage imageWithCGImage:] we get a UIImage from the resulting CGImage and can assign it to the required UIImageView right during scrolling. The animation then works flawlessly.
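As a minimal sketch of that background-decode step, here is one way to do it in Swift. Note that this draws into a bitmap context instead of the CGImageCreate copy shown above; the effect is the same (the returned image is already decoded), and the queue and function names here are illustrative, not from the original answer:

```swift
import UIKit

// Force-decode on a background queue, then hand the ready image back
// to the main queue so the table never decodes during scrolling.
let decodeQueue = DispatchQueue(label: "image.decode", qos: .userInitiated)

func setDecodedImage(_ source: UIImage, on imageView: UIImageView) {
    decodeQueue.async {
        // Drawing the image once forces decompression of its bitmap data.
        UIGraphicsBeginImageContextWithOptions(source.size, false, source.scale)
        source.draw(at: .zero)
        let decoded = UIGraphicsGetImageFromCurrentImageContext() ?? source
        UIGraphicsEndImageContext()
        DispatchQueue.main.async {
            imageView.image = decoded
        }
    }
}
```

The same idea works wrapped in an NSOperation if you need cancellation when cells are reused.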
In my iOS application I'm writing, I deal with PNGs because I deal with the alpha channel. For some reason, I can load a PNG into my imageView just fine, but when it comes time to either copy the image out of my application (onto the PasteBoard) or save the image to my camera roll, the image rotates 90 degrees.
I've searched everywhere on this, and one of the things I learned is that if I used JPEGs, I wouldn't have this problem (it sounds), due to the EXIF information.
My app has full copy/paste functionality, and here's the kicker (I'll write this in steps so it is easier to follow):
1. Go to my camera roll and copy an image.
2. Go into my app and press "Paste"; the image pastes just fine, and I can do that all day.
3. Click the copy function I implemented, and then click "Paste"; the image pastes but is rotated.
I am 100% sure my copy and paste code isn't what is wrong here, because if I go back to step 2 above and click "save", the photo saves to my library but is rotated 90 degrees!
What is even stranger is that it seems to work fine with images downloaded from the internet, but is very hit or miss with images I took with the phone myself. Some work, some don't...
Does anybody have any thoughts on this? Any possible workarounds I can use? I'm pretty confident in the code, as it works for about 75% of my images. I can post the code upon request, though.
For those that want a Swift solution, create an extension of UIImage and add the following method:
func correctlyOrientedImage() -> UIImage {
    if self.imageOrientation == .up {
        return self
    }
    UIGraphicsBeginImageContextWithOptions(size, false, scale)
    draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let normalizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return normalizedImage ?? self
}
If you're having trouble due to the existing image's imageOrientation property, you can construct an otherwise identical image with a different orientation, like this:
CGImageRef imageRef = [sourceImage CGImage];
UIImage *rotatedImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
You may need to experiment with just what orientation to set on your replacement images, possibly switching based on the orientation you started with.
Also keep an eye on your memory usage. Photography apps often run out, and this will double your storage per picture, until you release the source image.
Took a few days, but I finally figured it out thanks to the answer @Dondragmer posted. But I figured I'd post my full solution.
So basically I had to write a method to intelligently auto-rotate my images. The downside is that I have to call this method everywhere throughout my code and it is kind of processor intensive, especially on mobile devices, but the plus side is that I can take images, copy images, paste images, and save images, and they all rotate properly. Here's the code I ended up using (the method isn't 100% complete yet; I still need to address memory leaks and whatnot).
I ended up learning that the very first time an image was inserted into my application (whether due to the user pressing "take image", "paste image", or "select image"), for some reason it inserted just fine without auto-rotating. At that point, I stored whatever the rotation value was in a global variable called imageOrientationWhenAddedToScreen. This made my life easier because when it came time to manipulate the image and save it out of the program, I simply checked this cached global variable and determined whether I needed to properly rotate the image.
- (UIImage *)rotateImageAppropriately:(UIImage *)imageToRotate {
    // This method will properly rotate our image; we need to make sure
    // we call it pretty much everywhere.
    CGImageRef imageRef = [imageToRotate CGImage];
    // Default to the original so every path returns a valid image.
    UIImage *properlyRotatedImage = imageToRotate;
    if (imageOrientationWhenAddedToScreen == UIImageOrientationRight) {
        // We need to rotate the image back to a 3 (UIImageOrientationRight).
        properlyRotatedImage = [UIImage imageWithCGImage:imageRef
                                                   scale:1.0
                                             orientation:UIImageOrientationRight];
    } else if (imageOrientationWhenAddedToScreen == UIImageOrientationDown) {
        // We need to rotate the image back to a 1 (UIImageOrientationDown).
        properlyRotatedImage = [UIImage imageWithCGImage:imageRef
                                                   scale:1.0
                                             orientation:UIImageOrientationDown];
    }
    return properlyRotatedImage;
}
I am still not 100% sure why Apple has this weird image-rotation behavior (try this: take your phone, turn it upside down, and take a picture; you'll notice that the final picture turns out right side up. Perhaps this is why Apple has this type of functionality?).
I know I spent a great deal of time figuring this out, so I hope it helps other people!
This "weird rotation" behavior is really not that weird at all. It is smart, and by smart I mean memory efficient. When you rotate an iOS device the camera hardware rotates with it. When you take a picture that picture will be captured however the camera is oriented. The UIImage is able to use this raw picture data without copying by just keeping track of the orientation it should be in. When you use UIImagePNGRepresentation() you lose this orientation data and get a PNG of the underlying image as it was taken by the camera. To fix this instead of rotating you can tell the original image to draw itself to a new context and get the properly oriented UIImage from that context.
UIImage *image = ...;
//Have the image draw itself in the correct orientation if necessary
if (!(image.imageOrientation == UIImageOrientationUp ||
      image.imageOrientation == UIImageOrientationUpMirrored))
{
    CGSize imgsize = image.size;
    UIGraphicsBeginImageContext(imgsize);
    [image drawInRect:CGRectMake(0.0, 0.0, imgsize.width, imgsize.height)];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
NSData *png = UIImagePNGRepresentation(image);
Here is one more way to achieve that:
@IBAction func rightRotateAction(sender: AnyObject) {
    let imgToRotate = CIImage(CGImage: sourceImageView.image!.CGImage)
    let transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
    let rotatedImage = imgToRotate.imageByApplyingTransform(transform)
    let extent = rotatedImage.extent()
    let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])
    let cgImage = context.createCGImage(rotatedImage, fromRect: extent)
    adjustedImage = UIImage(CGImage: cgImage)!
    UIView.transitionWithView(sourceImageView, duration: 0.5, options: UIViewAnimationOptions.TransitionCrossDissolve, animations: {
        self.sourceImageView.image = self.adjustedImage
    }, completion: nil)
}
You can use Image I/O to save a PNG image to a file (or NSMutableData) while respecting the orientation of the image. In the example below I save the PNG image to a file at path.
- (BOOL)savePngFile:(UIImage *)image toPath:(NSString *)path {
    NSData *data = UIImagePNGRepresentation(image);
    int exifOrientation = [UIImage cc_iOSOrientationToExifOrientation:image.imageOrientation];
    NSDictionary *metadata = @{(__bridge id)kCGImagePropertyOrientation: @(exifOrientation)};
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) {
        return NO;
    }
    CFStringRef UTI = CGImageSourceGetType(source);
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, UTI, 1, NULL);
    if (!destination) {
        CFRelease(source);
        return NO;
    }
    CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)metadata);
    BOOL success = CGImageDestinationFinalize(destination);
    CFRelease(destination);
    CFRelease(source);
    return success;
}
cc_iOSOrientationToExifOrientation: is a method of UIImage category.
+ (int)cc_iOSOrientationToExifOrientation:(UIImageOrientation)iOSOrientation {
    int exifOrientation = -1;
    switch (iOSOrientation) {
        case UIImageOrientationUp:
            exifOrientation = 1;
            break;
        case UIImageOrientationDown:
            exifOrientation = 3;
            break;
        case UIImageOrientationLeft:
            exifOrientation = 8;
            break;
        case UIImageOrientationRight:
            exifOrientation = 6;
            break;
        case UIImageOrientationUpMirrored:
            exifOrientation = 2;
            break;
        case UIImageOrientationDownMirrored:
            exifOrientation = 4;
            break;
        case UIImageOrientationLeftMirrored:
            exifOrientation = 5;
            break;
        case UIImageOrientationRightMirrored:
            exifOrientation = 7;
            break;
        default:
            exifOrientation = -1;
    }
    return exifOrientation;
}
You can alternatively save the image to NSData using CGImageDestinationCreateWithData and pass NSMutableData instead of NSURL in CGImageDestinationCreateWithURL.
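As a sketch of that in-memory variant in Swift, assuming the same EXIF orientation value computed above (the function name is illustrative):

```swift
import UIKit
import ImageIO

// Write the image, with its orientation stored as EXIF metadata, into
// in-memory Data instead of a file, mirroring the file-based version above.
func pngData(for image: UIImage, exifOrientation: Int) -> Data? {
    guard let png = UIImagePNGRepresentation(image),
          let source = CGImageSourceCreateWithData(png as CFData, nil),
          let uti = CGImageSourceGetType(source) else { return nil }
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, uti, 1, nil) else {
        return nil
    }
    let metadata = [kCGImagePropertyOrientation: exifOrientation] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, metadata)
    return CGImageDestinationFinalize(destination) ? (output as Data) : nil
}
```

The returned Data can then be written wherever you need it, or handed to an upload API.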
I'm trying to reuse a small chunk of code inside a custom button class. For this to work I need to pass in either non-stretchable images (an icon) or a stretchable image (a 'swoosh'). Within the code I need to set the rect to draw into, so ideally I'd like to simply determine whether or not the image is stretchable. If it isn't, I draw it at the size of the image; if it is, I draw it at the bounds of the containing rect.
From my investigation so far capInsets (iOS 5) or leftCapWidth/topCapHeight (pre iOS 5) are not useful for this.
Is there something buried in the core or quartz information I can use?
Just curious, for now I'm coding around it with an extra parameter.
** I've since read through CGImageRef and the CI equivalent **
As far as I can tell there is no such information that we can access to identify such images, which begs the question how does the OS know?
There is no way to detect this unless you have some intense image analysis (which won't be 100% correct). UIImage is essentially some pixels with meta-information, all obtained from the file that you loaded it from. No file formats would have that information.
However, you can encode some information into the file name of the image. If you have an image called foo.png that is stretchable, why not call it foo.stretch.png? Your loading routines can analyse the file name and extract meta-information that you can associate with the UIImage (see http://labs.vectorform.com/2011/07/objective-c-associated-objects/ for associated objects) or by creating your own class that composites a UIImage with meta-information.
Good luck in your research.
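As a sketch of that naming convention (the `.stretch.` marker and the helper name are illustrative, not an established API):

```swift
import Foundation

// Illustrative helper for a "foo.stretch.png" naming convention:
// a ".stretch." marker in the file name flags the asset as stretchable.
func isStretchableAsset(named fileName: String) -> Bool {
    return fileName.contains(".stretch.")
}

// isStretchableAsset(named: "foo.stretch.png") -> true
// isStretchableAsset(named: "foo.png")         -> false
```

Your image-loading routine can check this once and attach the result as associated metadata, as described above.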
When you create a UIImage, its .size property is absolute.
If by "stretchable" you mean relative to your button view, just check the scale, for example:
- (BOOL)stretchableImage:(UIImage *)aImage toView:(UIView *)aView {
    CGFloat scaleW = aView.bounds.size.width / aImage.size.width;
    CGFloat scaleH = aView.bounds.size.height / aImage.size.height;
    if (scaleW == scaleH && scaleW < 1) {
        return YES;
    }
    return NO;
}
You can check its class.
UIImage *img = [UIImage imageNamed:@"back"];
NSString *imgClass = NSStringFromClass(img.class);
UIImage *imgStretch = [img stretchableImageWithLeftCapWidth:10 topCapHeight:10];
NSString *imgStrClass = NSStringFromClass(imgStretch.class);
NSLog(@"Normal class:\t%@\nStretchable class:\t%@", imgClass, imgStrClass);
Console:
Normal class: UIImage
Stretchable class: _UIResizableImage
I am using some CoreImage filters to process an image. Applying the filter to my input image results in an output image called filterOutputImage of type CIImage.
I now wish to display that image, and tried doing:
self.modifiedPhoto = [UIImage imageWithCIImage:filterOutputImage];
self.photoImageView.image = self.modifiedPhoto;
The view however is blank - nothing is being displayed.
If I add logging statements that print out details about both filterOutputImage and self.modifiedPhoto, those logging statements are showing me that both those vars appear to contain legitimate image data: their size is being reported and the objects are not nil.
So after doing some Googling, I found a solution that requires going through a CGImage stage, namely:
CGImageRef outputImageRef = [context createCGImage:filterOutputImage fromRect:[filterOutputImage extent]];
self.modifiedPhoto = [UIImage imageWithCGImage:outputImageRef scale:self.originalPhoto.scale orientation:self.originalPhoto.imageOrientation];
self.photoImageView.image = self.modifiedPhoto;
CGImageRelease(outputImageRef);
This second approach works: I am getting the correct image displayed in the view.
Can someone please explain to me why my first attempt failed? What am I doing wrong with the imageWithCIImage method that is resulting in an image that seems to exist but can't be displayed? Is it always necessary to "pass through" a CGImage stage in order to generate a UIImage from a CIImage?
Hoping someone can clear up my confusion :)
H.
This should do it!
- (UIImage *)makeUIImageFromCIImage:(CIImage *)ciImage
{
    self.cicontext = [CIContext contextWithOptions:nil];
    // finally!
    UIImage *returnImage;
    CGImageRef processedCGImage = [self.cicontext createCGImage:ciImage
                                                       fromRect:[ciImage extent]];
    returnImage = [UIImage imageWithCGImage:processedCGImage];
    CGImageRelease(processedCGImage);
    return returnImage;
}
I assume that self.photoImageView is a UIImageView? If so, ultimately, it is going to call -[UIImage CGImage] on the UIImage and then pass that CGImage as the contents property of a CALayer.
(See comments: my details were wrong)
Per the UIImage documentation for -[UIImage CGImage]:
If the UIImage object was initialized using a CIImage object, the
value of the property is NULL.
So the UIImageView calls -CGImage, but that results in NULL, so nothing is displayed.
I haven't tried this, but you could try making a custom UIView and then using UIImage's -draw... methods in -[UIView drawRect:] to draw the CIImage.
I have a table view, and I am loading images into it. The images range from 150 KB to 2 MB. Since this is too much for a table view to handle (it takes a long time to load and makes the scrolling slow), I thought of using the Image I/O framework to create thumbnails of the images.
I found some code that does this, but I can't understand it.
1.) Can someone please explain the code to me?
2.) My problem is that I have a table view and I need to load thumbnail images into it. How can I use the following code to display thumbnails in my table view? Can someone show me some sample code or a tutorial that does this?
Here's the code:
- (void)buildGallery
{
    for (NSUInteger i = 0; i < kMaxPictures; i++)
    {
        NSInteger imgTag = i + 1;
        // x, y, _thumbSize, kMaxPictures, and _maxSize are defined elsewhere in the original sample.
        NYXPictureView *v = [[NYXPictureView alloc] initWithFrame:(CGRect){.origin.x = x, .origin.y = y, .size = _thumbSize}];
        NSString *imgPath = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%d", imgTag] ofType:@"jpg"];
        CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imgPath], NULL);
        CFDictionaryRef options = (CFDictionaryRef)[[NSDictionary alloc] initWithObjectsAndKeys:
                                   (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                                   (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
                                   (id)[NSNumber numberWithDouble:_maxSize], (id)kCGImageSourceThumbnailMaxPixelSize, nil];
        CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(src, 0, options); // Create scaled image
        CFRelease(options);
        CFRelease(src);
        UIImage *img = [[UIImage alloc] initWithCGImage:thumbnail];
        [v setImage:img];
        [img release];
        CGImageRelease(thumbnail);
    }
}
Basically, the problem you have is that displaying an image at a smaller size doesn't change the number of bytes it occupies in memory. The hardware still has to read your 2 MB image and then render it at a smaller scale. What you need to do is either change the actual size of the image (using Photoshop or similar) or, as I'm suggesting, compress the image and then scale it down. The image will look rough at normal size, but will look OK scaled down to a thumbnail view.
To generate an NSData version of your image encoded as a PNG:
NSData *PNGFile = UIImagePNGRepresentation(myImage);
Or as a JPEG, with a quality value set:
NSData *JPEGFile = UIImageJPEGRepresentation(myImage, 0.9f);
Both of these will give you an image smaller than you currently have, which will be easier to render in the tableView.
In order to get better performance you're going to have to load the image in a background thread, and after it's in memory add the UIImage to the image view on the main thread. There are a couple ways to go about doing this, but the simplest is going to be using GCD's block based methods.
Resizing the image is definitely still important for memory considerations, but get the asynchronous image loading part down first.
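A minimal sketch of that GCD approach (the function and parameter names are illustrative; adapt them to your own model and cell classes):

```swift
import UIKit

// Load the image data on a background queue, then hop back to the main
// queue to update the cell, re-checking that the cell wasn't reused.
func loadImage(from url: URL,
               into cell: UITableViewCell,
               at indexPath: IndexPath,
               in tableView: UITableView) {
    DispatchQueue.global(qos: .userInitiated).async {
        guard let data = try? Data(contentsOf: url),
              let image = UIImage(data: data) else { return }
        DispatchQueue.main.async {
            // Only update if the cell is still on screen for this row.
            if tableView.indexPath(for: cell) == indexPath {
                cell.imageView?.image = image
            }
        }
    }
}
```

In production you would also cache the loaded images (e.g. in an NSCache) so scrolling back does not re-download them.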
I am loading an image in my UITableViewCell using
[NSData dataWithContentsOfURL:imageUrl]
To set a custom height for my table view cell, I need the actual size of the image I am loading.
Can we get the width and height of an image before it has loaded completely? Thanks in advance.
Try the Image I/O interface, as shown below. This will let you get the image size without having to load the entire file:
#import <ImageIO/ImageIO.h>
NSString *imageURL = @"http://www.myimageurl.com/image.png";
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[NSURL URLWithString:imageURL], NULL);
NSDictionary *imageHeader = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
CFRelease(source);
NSLog(@"Image header %@", imageHeader);
NSLog(@"PixelHeight %@", [imageHeader objectForKey:@"PixelHeight"]);
You can do it like this (note that this downloads the complete image data first):
NSData *imageData = [NSData dataWithContentsOfURL:imageUrl];
UIImage *image = [UIImage imageWithData:imageData];
NSLog(@"image height: %f", image.size.height);
NSLog(@"image width: %f", image.size.width);
Take a look at this question: How do I extract the width and height of a PNG from looking at the header in Objective-C, which shows how to parse image metadata.
I have created an open-source project, Ottran, that extracts the size and type of a remote image while downloading as little as possible; it supports the PNG, JPEG, BMP and GIF formats.
NSData is "opaque" data, so you cannot do much with it before converting it to something more "useful" (e.g., creating a UIImage from it via the -initWithData: method). At that moment you could query the image size, but it would be too late for you.
The only approach I see, if you really need to know the image size before the image is fully downloaded, is implementing a minimal server-side API so that you can ask for the image size before downloading the image itself.
Anyway, why do you need to know the image size before it is actually downloaded? Couldn't you set the row height at the moment it has been downloaded (i.e., from your request delegate method)?
dataWithContentsOfURL is synchronous and will block your UI until the download completes, so use the image header to get the resolution instead. Below is Swift 3.0 code:
if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}
}