Scaling the image -- Quality check - iPhone

Does scaling the image using image = [UIImage imageWithCGImage:[image CGImage] scale:2.0 orientation:UIImageOrientationUp]; reduce the image quality?

If you mean something like compression artifacts, then no, because those are only added when the image is saved to disk in a lossy format like JPEG.
There is also a yes: if you upscale the pixel dimensions, pixels that were never there have to be interpolated, and beyond a certain point that will always look blurry. Note, though, that imageWithCGImage:scale:orientation: does not resample the pixel data at all; it only changes the scale factor (and therefore the point size) that UIKit uses when drawing the image.
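If you actually want to resample the bitmap to new pixel dimensions, you have to redraw it. A minimal sketch, assuming ARC and a UIImage named image as in the question (the 2x target is just for illustration):

// Redraw the image into a context twice its current point size.
// With an explicit scale of 1.0, the context's pixel dimensions equal newSize.
CGSize newSize = CGSizeMake(image.size.width * 2.0, image.size.height * 2.0);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
[image drawInRect:CGRectMake(0.0, 0.0, newSize.width, newSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();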

Related

Objective-C: how to change image quality?

I've googled a lot about this question but couldn't find anything useful for my case.
Q: Is there any class or method to change image quality (not size or scale, but the quality, keeping the same size and scale)?
As I understand it, there are no native (default) classes or methods to do this. Am I right?
Any help would be appreciated.
If you have an image as a UIImage, you can use the UIImageJPEGRepresentation function, specifying a compression quality, to create an NSData object. This data object can then be used to create a new UIImage.
See the Apple Docs
This functionality is already available in the iOS SDK itself.
UIKit provides the function UIImageJPEGRepresentation (a free function, not a method of UIImage).
Pass it your image and a compression quality. It will not change the dimensions of the image, but it re-encodes the pixel data at the given quality, which shrinks the resulting data. That helps keep the size down in memory and on disk as well.
NSData * UIImageJPEGRepresentation(
    UIImage *image,
    CGFloat compressionQuality
);
Parameters:
- image : The original image data.
- compressionQuality: The quality of the resulting JPEG image, expressed as a value from 0.0 to 1.0. The value 0.0 represents the maximum compression (or lowest quality) while the value 1.0 represents the least compression (or best quality).
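For example, a minimal round trip (originalImage and the 0.5 quality value are placeholders for illustration):

// Re-encode the image as JPEG at 50% quality, then decode it again.
NSData *jpegData = UIImageJPEGRepresentation(originalImage, 0.5);
UIImage *recompressedImage = [UIImage imageWithData:jpegData];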
Hope this is what you required.
Enjoy Coding :)

The color of a video is wrong when it is made from UIImages of PNG files

I am taking a UIImage from a PNG file and feeding it to the video writer:
avAdaptor appendPixelBuffer:pixelBuffer
When the resulting video comes out, it seems to be lacking a color; the yellow is missing, or something like that.
I took a look at the function that makes the pixel buffer out of the UIImage:
CVPixelBufferCreateWithBytes(NULL,                            // default allocator
                             myWidth,                         // width in pixels
                             myHeight,                        // height in pixels
                             kCVPixelFormatType_32BGRA,       // expected byte order of the bytes
                             (void*)CFDataGetBytePtr(image),  // raw pixel data
                             CGImageGetBytesPerRow(cgImage),  // bytes per row of the source image
                             NULL,                            // release callback
                             0,                               // release refcon
                             NULL,                            // pixel buffer attributes
                             &pixelBuffer);
I also tried kCVPixelFormatType_32ARGB and others; it didn't help.
Any thoughts?
Please verify whether your PNG image has a transparency (alpha) element. If it doesn't contain transparency, it is 24 bits per pixel, not 32.
Also, have you tried kCVPixelFormatType_32RGBA?
Maybe the image sizes do not fit together.
Your input image should have the same width and height as the video output. If myWidth or myHeight differs from the size of the image (i.e. a different aspect ratio), a byte may be lost at the end of each line, which could lead to color shifting. kCVPixelFormatType_32BGRA seems to be the preferred (fastest) pixel format, so that part should be okay.
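One robust way to rule out byte-order mismatches is to redraw the CGImage into a bitmap context whose memory layout is guaranteed to be BGRA, and hand those bytes to the pixel buffer. A sketch, reusing the cgImage and pixelBuffer names from the question (error checking omitted):

// Let Core Video allocate the buffer, then render the image into it
// with a layout that matches kCVPixelFormatType_32BGRA exactly.
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, NULL, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Premultiplied-first + 32-bit little-endian host order == BGRA in memory.
CGContextRef context = CGBitmapContextCreate(
    CVPixelBufferGetBaseAddress(pixelBuffer),
    width, height, 8,
    CVPixelBufferGetBytesPerRow(pixelBuffer),
    colorSpace,
    kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);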
There is no yellow color in the RGB color space; yellow is made up of only the red and green components. It seems that the blue channel is the one ending up in the wrong place.
I assume you are using a CFDataRef (maybe an NSData) for the image. If it is an NSData object, you can print its bytes to the debug console using
NSLog(@"data: %@", image);
This prints a hex dump to the console. There you can see whether you have alpha and what byte order your PNG uses. If your image has alpha, every fourth byte should be the alpha value (typically the same number throughout for an opaque image).

Resize NSData to set into UIImage

I have this code:
UIImage * img = [[UIImage alloc] initWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:IMAGEURL]]];
[self.imageView setImage:img];
But IMAGEURL points to a high-resolution picture, so it takes a long time to load. Can I resize the image data to something smaller so that it loads faster?
Any help will be appreciated.
Thanks.
No. In order to resize the image you have to read it in fully at least once, so unless the server has a low-resolution version for you, there's nothing you can do about the initial download.
Unless you're brave and the image is a JPEG: libjpeg can read in images downsampled by a factor of 2, 4, or 8. E.g. for scale 1/8 it reads the DCT blocks and takes only the constant (DC) component. This is a little more complex to set up, but decode time will be drastically reduced.
See iphone-reading-an-area-of-an-image about this as well.
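Staying within the iOS SDK, ImageIO can also decode a downsampled version directly, which avoids ever materializing the full-size bitmap. A sketch, assuming the data has already been downloaded into an NSData named imageData and that a 300-pixel bound is acceptable for your use:

#import <ImageIO/ImageIO.h>

// Decode a thumbnail no larger than 300 pixels on its longest side.
CGImageSourceRef source =
    CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
NSDictionary *options = @{
    (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
    (__bridge NSString *)kCGImageSourceCreateThumbnailWithTransform: @YES,
    (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize: @300  // assumed bound
};
CGImageRef thumbnail =
    CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *smallImage = [UIImage imageWithCGImage:thumbnail];
CGImageRelease(thumbnail);
CFRelease(source);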

Writing a masked image to disk as a PNG file

Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens though, is that the image displays fine, but the version that gets written to the disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked, otherwise displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image, and then saving that?
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
See the description in CGImageCreateWithMask in CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, while saving, the image mask is for some reason treated as a plain mask image (the second case above), which flips the alpha interpretation. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
there are several ways to save correctly with UIImagePNGRepresentation:
- Use an inverted version of the image mask.
- Use a "mask image" instead of an "image mask".
- Render into a bitmap context and then save that, as epatel mentioned.
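For the first option, one hypothetical way to build an inverted mask is to draw the mask over white with the difference blend mode (maskImage here stands in for whatever UIImage holds your grayscale mask; this is a sketch, not a drop-in fix):

// Invert a grayscale mask: each sample S becomes |1 - S| = 1 - S
// when drawn over white with the "difference" blend mode.
CGSize size = maskImage.size;
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
[maskImage drawInRect:CGRectMake(0, 0, size.width, size.height)
            blendMode:kCGBlendModeDifference
                alpha:1.0];
UIImage *invertedMask = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();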

Reduce UIImage size to a manageable size (reduce bytes)

I want to reduce the number of bytes of an image captured by the device, since I believe _imageScaledToSize does not reduce the number of bytes of the picture (or does it?). I want to store a thumbnail of the image in a local dictionary object and can't afford to put full-size images in the dictionary. Any idea?
If you wish to simply compress your UIImage, you can use
NSData *dataForPNGFile = UIImagePNGRepresentation(yourImage);
to generate an NSData version of your image encoded as a PNG (easily inserted into an NSDictionary or written to disk), or you can use
NSData *dataForPNGFile = UIImageJPEGRepresentation(yourImage, 0.9f);
to do the same in JPEG format. The second parameter is the image quality of the JPEG. Both of these produce data that should be smaller, memory-wise, than your original UIImage.
Resizing a UIImage to create a smaller thumbnail (pixels-wise) using published methods is a little trickier. _imageScaledToSize is from the private API, and I'd highly recommend you not use it. For a means that works within the documented methods, see this post.
I ran into this problem the other day and did quite a bit of research. I found an awesome solution complete with code here:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
You need to draw the image into a graphics context at a smaller size. Then, release the original image.
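A minimal sketch of that approach (the 100x100 target size and the fullImage name are assumptions for illustration; it also ignores aspect ratio):

// Redraw the image into a smaller context to produce a real thumbnail.
// Scale 0.0 means "use the device's screen scale" for the context.
CGSize targetSize = CGSizeMake(100.0, 100.0);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
[fullImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();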
When you say 'physical size', are you talking about a print? Because then you can just change the printer page size.
Or are you talking about the number of pixels used to capture the image? As in, if you have a pixel array of 3000x2000 and you only want 150x150, you can crop the image. At capture time, if you have a scientific imager, you can just set the area that will be captured; the camera driver would include instructions for that. If you want to capture 3000x2000 into 1500x1000, you can try binning the image, if that's what you need.
Or you can resample post-capture to make the image smaller. One such algorithm is bicubic resampling; linear resampling is another, and there are many variations.
I'm thinking this last one is what you're most interested in, in which case check out this Wikipedia page on the algorithm. Or you can go to FreeImage and get a library that will read in the image and can also resize it.
UIImageJPEGRepresentation does the trick but I find that using the ImageIO framework often gets significantly better compression results for the same quality setting. It may be slower, but depending on your use case this may not be an issue.
(Code adapted for NSData from this blog post by Zachary West).
#import <MobileCoreServices/MobileCoreServices.h>
#import <ImageIO/ImageIO.h>
...
+ (NSData *)JPEGDataFromImage:(UIImage *)image quality:(double)quality
{
    // Write the encoded JPEG into an in-memory buffer rather than a file.
    CFMutableDataRef outputImageDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef imageDestinationRef =
        CGImageDestinationCreateWithData(outputImageDataRef, kUTTypeJPEG, 1, NULL);

    // quality is the same 0.0-1.0 scale UIImageJPEGRepresentation uses.
    NSDictionary *properties = @{
        (__bridge NSString *)kCGImageDestinationLossyCompressionQuality: @(quality)
    };
    CGImageDestinationSetProperties(imageDestinationRef, (__bridge CFDictionaryRef)properties);

    CGImageDestinationAddImage(imageDestinationRef, image.CGImage, NULL);
    CGImageDestinationFinalize(imageDestinationRef);
    CFRelease(imageDestinationRef);

    NSData *imageData = CFBridgingRelease(outputImageDataRef);
    return imageData;
}
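Hypothetical usage, assuming the method lives on a class called ImageUtils and photo is some existing UIImage:

// Encode at 70% quality; compare jpegData.length against the UIKit variant.
NSData *jpegData = [ImageUtils JPEGDataFromImage:photo quality:0.7];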