I have a scenario in which I am getting images from a web service, and all the images have different resolutions. My requirement is to find the resolution of each image and use it to size the UIImageView, so that my images don't get blurred.
For example, if the image resolution is 326 pixels/inch, the image view should be sized so that the image can be displayed fully without any blur.
UIImage *img = [UIImage imageNamed:@"foo.png"];
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
UIImageView *imgView = [[UIImageView alloc] initWithFrame:rect];
[imgView setImage:img];
Image size IS its resolution.
Your problem might be the retina display!
Check for a retina display, and if you find one, make the UIImageView width/height twice smaller (so that each UIImageView point is backed by four UIImage pixels on a retina display).
How to check for retina display:
https://stackoverflow.com/a/7607087/894671
How to check the image size (without actually loading the image into memory):
#import <ImageIO/ImageIO.h> // needed for the CGImageSource functions below

NSString *mFullPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]
                       stringByAppendingPathComponent:@"imageName.png"];
NSURL *imageFileURL = [NSURL fileURLWithPath:mFullPath];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
if (imageSource == NULL)
{
    // Error loading image ...
}

// Ask ImageIO not to cache the decoded image; we only want the properties.
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:NO], (NSString *)kCGImageSourceShouldCache, nil];
CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, (CFDictionaryRef)options);
NSNumber *mImgWidth;
NSNumber *mImgHeight;
if (imageProperties)
{
    // loaded image width
    mImgWidth = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelWidth);
    // loaded image height
    mImgHeight = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelHeight);
    CFRelease(imageProperties);
}
if (imageSource != NULL)
{
    CFRelease(imageSource);
}
So - for example:
UIImageView *mImgView = [[UIImageView alloc] init];
[mImgView setImage:[UIImage imageNamed:@"imageName.png"]];
[[self view] addSubview:mImgView];

if ([UIScreen instancesRespondToSelector:@selector(scale)])
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0)
    {
        // iPhone retina screen
        [mImgView setFrame:CGRectMake(0, 0, [mImgWidth intValue] / 2, [mImgHeight intValue] / 2)];
    }
    else
    {
        // iPhone screen
        [mImgView setFrame:CGRectMake(0, 0, [mImgWidth intValue], [mImgHeight intValue])];
    }
}
Hope that helps!
You can get the image size using the following code. So first calculate the downloaded image's size, and then make the image view accordingly.
UIImage *yourImage = [UIImage imageNamed:@"image.png"];
CGFloat width = yourImage.size.width;
CGFloat height = yourImage.size.height;
Hope this will help you.
UIImage *oldimage = [UIImage imageWithContentsOfFile:imagePath]; // or you can load from a URL with NSURL
CGSize imgSize = [oldimage size];
imgview.frame = CGRectMake(10, 10, imgSize.width, imgSize.height);
[imgview setImage:oldimage];
100% working ....
To solve this problem, we need to take care of the device's display resolution.
For example, say you have an image with a resolution of 326 ppi, which is the same as the iPhone 4, iPhone 4S, and iPod touch 4th gen. In that case you can simply use the solutions suggested by @Nit and @Peko. But for other devices (or for images with a different resolution on these devices) you will need to apply some maths to calculate the size for better display.
Now suppose you have a 260 ppi image (with dimensions W x H) and you wish to display it on an iPhone 4S. Since the information it contains per inch is less than the display resolution of the iPhone, we need to reduce the image size by a factor of 326/260. The size for the image view is then
imageViewWidth = W*(260/326);
imageViewHeight = H*(260/326);
In general:
resizeFactor = imageResolution/deviceDisplayResolution;
imageViewWidth = W*resizeFactor;
imageViewHeight = H*resizeFactor;
Here I am assuming that when we set an image in the image view and resize it, no pixels are removed from or added to the image.
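A minimal sketch of the general formula in code (the ppi values are just the example's assumptions; in practice you would read the image's ppi from its metadata, and image/imageView are placeholders):

CGFloat imageResolution = 260.0;    // ppi of the downloaded image (assumed known)
CGFloat displayResolution = 326.0;  // ppi of the device's screen (assumed known)
CGFloat resizeFactor = imageResolution / displayResolution;

CGSize imageSize = image.size;      // W x H
imageView.frame = CGRectMake(0, 0,
                             imageSize.width * resizeFactor,
                             imageSize.height * resizeFactor);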
Let the UIImageView do the work: use the contentMode property to handle image resizing for you.
You probably want to display your UIImageView with a static size (the frame property) that represents the maximum size of the image you want to display, and allow the images to resize within that frame according to their own particular size requirements (overall size, aspect ratio, etc.). You can let the UIImageView do the heavy lifting of dealing with differently sized images by mastering the contentMode property. It has many different settings, one of which is UIViewContentModeScaleAspectFit, which will downsize your image as necessary to fit within the UIImageView; if the image is smaller, it will simply be displayed centered. You can play with the settings to get the results you want.
Note that with this approach, there is nothing special you need to do to deal with scaling issues associated with a Retina display.
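A minimal sketch of this approach (the frame size and the downloadedImage variable are placeholders):

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 300, 200)];
imageView.contentMode = UIViewContentModeScaleAspectFit; // scale down to fit, preserving aspect ratio
imageView.image = downloadedImage; // whatever image the web service returned
[self.view addSubview:imageView];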
As per the requirement you stated in the question body, I believe you need not change UIImageView size.
The image can be displayed fully, without any blur, using this line of code:
imageView.contentMode = UIViewContentModeScaleAspectFit;
So I have this UIImageView with an image on top of it. When I'm using a normal iPhone display, the image shows inside the UIImageView exactly as it's supposed to - within the UIImageView bounds. The problem is when I'm using a retina display device: the image becomes big and doesn't fit the UIImageView bounds, it goes all over the screen.
How can I fix this issue? I want the image in Retina display to fit inside the UIImageView size.
This is my code:
- (UIImage *)loadScreenShotImageFromDocumentsDirectory
{
    UIImage *tempimage;
    NSArray *sysPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docDirectory = [sysPaths objectAtIndex:0];
    NSString *filePath = [NSString stringWithFormat:@"%@/MeasureScreenShot.png", docDirectory];
    tempimage = [[UIImage alloc] initWithContentsOfFile:filePath];
    return tempimage;
}
- (void)viewDidLoad
{
    // Setting the image
    UIImage *image = [self loadScreenShotImageFromDocumentsDirectory];
    theImageView.frame = CGRectMake(0, 166, 290, 334);
    theImageView.contentMode = UIViewContentModeCenter;
    theImageView.image = image;
}
Thanks in advance!
Try setting the content mode to UIViewContentModeScaleAspectFit
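Using the image view from the question's code, that would be:

theImageView.contentMode = UIViewContentModeScaleAspectFit;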
When I was using an image with Core Plot, I had to scale the image for the retina display.
CPTImage *fillimage = [CPTImage imageWithCGImage:bubble.CGImage scale:bubble.scale];
The name of the image file was bubble, and I had a bubble file and a bubble@2x file. This let the image fit the image view for either display. Hope this helps.
I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the fullResolution photos to a smaller size. However, this takes some time, especially when resizing up to 3 photos at once.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;

    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300 px on the long end) and a large image (max 960 px or 640 px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual code I'm using to resize, I grabbed from somewhere on here:
// (A method in a category on UIImage.)
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
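(For reference, a scaledToHeight: built the same way would presumably look like this, with the roles of width and height swapped:)

- (UIImage *)scaledToHeight:(float)i_height
{
    float oldHeight = self.size.height;
    float scaleFactor = i_height / oldHeight;

    float newWidth = self.size.width * scaleFactor;
    float newHeight = oldHeight * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}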
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low clarity, and at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset and the format of the asset. But you don't have any control over that. But you do have control over where the resizing takes place. I suspect that right now you are resizing the image in your main thread, which will cause the UI to grind to a halt while you are doing the resizing. Do enough images, and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;

        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // do whatever you need to do on the main thread here once your image is resized.
            // this is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy
        });
    }
});
You'll have to think about things a little differently this way. For example, you've now broken up what used to be a single step into multiple steps now. Code that was running after this will end up running before the image resize is complete or before you actually do anything with the images, so you need to make sure that you didn't have any dependencies on those images or you'll likely crash.
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
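A minimal sketch of that substitution (reusing the rep from the question's code; the fullScreenImage representation is already rotated to the correct orientation, so the manual rotation step isn't needed):

CGImageRef iref = [rep fullScreenImage]; // screen-sized instead of full resolution
UIImage *previewSource = [UIImage imageWithCGImage:iref];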
I'm trying to show an image taken through the camera in portrait mode, but it always shows up in my UIImageView in landscape mode. The image is scaled before being added to the UIImageView, but it seems this is not the problem, as I tried many different solutions found on the web (even some quite smart ones, like the one coming from Trevor's Bike Shed).
Here is my code:
UIImage *image = [UIImage imageWithContentsOfFile:imgPath];
CGRect newFrame = [scrollView frame];
UIImage *resizedImage = [ImageViewController imageFromImage:image scaledToSize:newFrame.size];
imageView = [[UIImageView alloc] initWithFrame:newFrame];
[imageView setImage:resizedImage];
[scrollView setContentSize:imageView.frame.size];
[scrollView addSubview:imageView];
[imageView release];
imgPath is the path of the image coming as a parameter and scrollView is an IBOutlet linked to a UIScrollView.
Is there something I'm missing about the UIImageView?
As I wrote above, it seems that the problem is not related to the scaling...
Are you certain that imageOrientation is set as you expect on resizedImage? Incorrect scaling can absolutely mess up imageOrientation.
There is an imageOrientation property on UIImage. Check out the docs. It can be set through the initializer:
UIImage *rotatedImage = [UIImage imageWithCGImage:[resizedImage CGImage] scale:1 orientation:UIImageOrientationUp];
Creates and returns an image object with the specified scale and orientation factors.

+ (UIImage *)imageWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation

Parameters:
imageRef - The Quartz image object.
scale - The scale factor to use when interpreting the image data. Specifying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
orientation - The orientation of the image data. You can use this parameter to specify any rotation factors applied to the image.

Return Value: A new image object for the specified Quartz image, or nil if the method could not initialize the image from the specified image reference.
UIImage *image = [UIImage imageNamed:@"abc.jpg"];
UIImageOrientation orientation = image.imageOrientation;
Use the image in the image view, or wherever else you want to use it:
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
After that, reset the image to the orientation it had before usage:
UIImage *image1 = [UIImage imageWithCGImage:[image CGImage] scale:1.0 orientation:orientation];
image1 is your image in the orientation it was in before.
I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels). The image is appearing in the rendered UIImage at its full resolution, and looks great. But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level? Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results? Thanks!
The solution is to change the labels' contentsScale to 2 before you draw, then set it back immediately afterwards. I just coded up a project to verify it, and it's working just fine, making a 2x image on a normal retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale.
- (IBAction)snapShot:(id)sender
{
    [self changeScaleforView:snapView scale:2];
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageDisplay.image = img;
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
    [self changeScaleforView:snapView scale:1];
}

- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
    {
        if ([v isKindOfClass:[UILabel class]]) {
            // labels: bump the layer's contentsScale directly
            v.layer.contentsScale = scale;
        } else
        if ([v isKindOfClass:[UIImageView class]]) {
            // images: v.layer.contentsScale = scale; won't work here.
            // If the image is not "@2x", you could subclass UIImageView and set the name of the @2x
            // version on it as a property, then here set that imageNamed as the image, and undo it later.
        } else
        if ([v isMemberOfClass:[UIView class]]) {
            // container view: recurse into its subviews
            [self changeScaleforView:v scale:scale];
        }
    }];
}
Try rendering to an image at double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage = [UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where size is the real (point) size multiplied by the scale:
CGSize size = CGSizeMake(realSize.width * scale, realSize.height * scale);
I have been struggling with much the same oddities in the context of textview-to-PDF rendering. I found out that there are some documented properties on the CALayer objects that make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
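A rough sketch of that idea (textLayer is a hypothetical stand-in for whichever sublayer draws the text):

textLayer.shouldRasterize = YES;    // rasterizationScale only takes effect while rasterizing
textLayer.rasterizationScale = 2.0; // match the scale passed to UIGraphicsBeginImageContextWithOptions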
I have a grouped UITableView that contains several cells (just standard UITableViewCells), all of which are of UITableViewCellStyleSubtitle style. Bueno. However, when I insert images into them (using the provided imageView property), the corners on the left side become square.
Example image: http://files.lithiumcube.com/tableView.png
The code being used to assign the values into the cell is:
cell.textLabel.text = currentArticle.descriptorAndTypeAndDifferentiator;
cell.detailTextLabel.text = currentArticle.stateAndDaysWorn;
cell.imageView.image = currentArticle.picture;
and currentArticle.picture is a UIImage (also the pictures, as you can see, display just fine with the exception of the square corners).
It displays the same on my iPhone 3G, in the iPhone 4 simulator and in the iPad simulator.
What I'm going for is something similar to the UITableViewCells that Apple uses in its iTunes app.
Any ideas about what I'm doing wrong?
Thanks,
-Aaron
cell.imageView.layer.cornerRadius = 16; // 16 is just a guess
cell.imageView.clipsToBounds = YES;
This will round the UIImageView so it does not draw over the cell. It will also round all the corners of all your images, but that may be OK.
Otherwise, you will have to add your own image view that will just round the one corner. You can do that by setting up a clip region in drawRect: before calling super. Or just add your own image view that is not so close to the left edge.
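A rough sketch of that custom image view idea (hypothetical class name and a hard-coded radius; set the image before the view is drawn, or call setNeedsDisplay after changing it):

@interface SingleCornerImageView : UIView
@property (nonatomic, retain) UIImage *image;
@end

@implementation SingleCornerImageView
@synthesize image;

- (void)drawRect:(CGRect)rect
{
    // Clip to a path that rounds only the top-left corner, then draw the image.
    UIBezierPath *clip = [UIBezierPath bezierPathWithRoundedRect:self.bounds
                                               byRoundingCorners:UIRectCornerTopLeft
                                                     cornerRadii:CGSizeMake(12.0, 12.0)];
    [clip addClip];
    [self.image drawInRect:self.bounds];
}

@end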
You can add a category on UIImage and include this method:
// Return the image, but with rounded corners. Useful for masking an image
// being used in a UITableView which has grouped style
- (UIImage *)imageWithRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius {
    // We need to create a CGPath to set a clipping context
    CGRect aRect = CGRectMake(0.f, 0.f, self.size.width, self.size.height);
    CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:aRect byRoundingCorners:corners cornerRadii:CGSizeMake(radius, radius)].CGPath;

    // Begin drawing.
    // Starting a context with a scale of 0.0 uses the current device scale, so that this
    // doesn't unnecessarily drop resolution on a retina display.
    // Use UIGraphicsBeginImageContext(aRect.size) instead for pre-iOS 4 compatibility.
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, clippingPath);
    CGContextClip(context);

    [self drawInRect:aRect];

    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Then when you're configuring your cells, in the table view controller, call something like:
if ( *SINGLE_ROW* ) {
    // We need to clip to both corners
    cell.imageView.image = [image imageWithRoundedCorners:(UIRectCornerTopLeft | UIRectCornerBottomLeft) radius:radius];
} else if (indexPath.row == 0) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerTopLeft radius:radius];
} else if (indexPath.row == *NUMBER_OF_ITEMS* - 1) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerBottomLeft radius:radius];
} else {
    cell.imageView.image = image;
}
but replace SINGLE_ROW etc. with real logic to determine whether you've got a single row in a section, or whether it's the last row. One thing to note here is that I've found (experimentally) that the radius for a group-style table is 12, which works perfectly in the simulator, but not on an iPhone. I've not been able to test it on a non-retina device. A radius of 30 looks good on the iPhone 4 (so I'm wondering if this is an image-scale thing, as the images I'm using come from the Address Book and so don't have an implied scale factor). Therefore, I've got some code before this that modifies the radius...
CGFloat radius = GroupStyleTableCellCornerRadius;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2) {
    // iPhone 4
    radius = GroupStyleTableCellCornerRadiusForRetina;
}
Hope that helps.