Reproduce Springboard's icon shine with WebKit - iPhone

Does anyone have any ideas on how to reproduce the gloss of iPhone app icons using WebKit and CSS3 and/or a transparent overlay image? Is this even possible?

iPhone OS uses the following images to compose the icon:
AppIconMask.png
AppIconShadow.png
AppIconOverlay.png (optional)

This is how I have implemented the above in my app, using only AppIconOverlay.png. It produces pretty much the effect I was after.
It doesn't include the shadow, but if you really wanted it you could modify the code to suit your needs. As for AppIconMask.png, I couldn't see any need for it, since I import the QuartzCore framework (#import <QuartzCore/QuartzCore.h>) and achieve the rounded-corner masking with layer.masksToBounds and layer.cornerRadius.
I hope this helps anyone who is interested in achieving the Apple Springboard overlay effect. And thank you to rpetrich for providing those images.
I apologise for the lack of comments in the code. It's a culmination of code from similar implementations scattered all over the internet, so I'd also like to thank all of the people who provided those bits and pieces.
- (UIImage *)getIconOfSize:(CGSize)size icon:(UIImage *)iconImage withOverlay:(UIImage *)overlayImage {
    UIImage *icon = [self scaleImage:iconImage toResolution:size.width];
    CGRect iconBoundingBox = CGRectMake(0, 0, size.width, size.height);
    CGRect overlayBoundingBox = CGRectMake(0, 0, size.width, size.height);

    CGContextRef myBitmapContext = [self createBitmapContextOfSize:size];
    CGContextSetRGBFillColor(myBitmapContext, 1, 1, 1, 1);
    CGContextFillRect(myBitmapContext, iconBoundingBox);

    // Draw the icon first, then composite the gloss overlay on top.
    CGContextDrawImage(myBitmapContext, iconBoundingBox, icon.CGImage);
    CGContextDrawImage(myBitmapContext, overlayBoundingBox, overlayImage.CGImage);

    // Wrap the bitmap in a UIImage and release the intermediate CGImage so it isn't leaked.
    CGImageRef cgImage = CGBitmapContextCreateImage(myBitmapContext);
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(myBitmapContext);
    return result;
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Only touch the context once we know it was created successfully.
    CGContextSetAllowsAntialiasing(context, NO);

    CGColorSpaceRelease(colorSpace);
    return context;
}
- (UIImage *)scaleImage:(UIImage *)image toResolution:(int)resolution {
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    CGRect bounds = CGRectMake(0, 0, width, height);

    // If already at or below the target resolution, return the original image; otherwise scale.
    if (width <= resolution && height <= resolution) {
        return image;
    } else {
        CGFloat ratio = width / height;
        if (ratio > 1) {
            bounds.size.width = resolution;
            bounds.size.height = bounds.size.width / ratio;
        } else {
            bounds.size.height = resolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    UIGraphicsBeginImageContext(bounds.size);
    [image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
Usage:
UIImage *overlayImage = [UIImage imageNamed:@"AppIconOverlay.png"];
UIImage *profileImage = [Helper getIconOfSize:CGSizeMake(59, 60) icon:image withOverlay:overlayImage];
[profilePictureImageView setImage:profileImage];

profilePictureImageView.layer.masksToBounds = YES;
profilePictureImageView.layer.cornerRadius = 10.0;
profilePictureImageView.layer.borderColor = [[UIColor grayColor] CGColor];
profilePictureImageView.layer.borderWidth = 1.0;

Related

iOS - Where should I be releasing my CGImageRef?

I have a method which returns a rotated image:
- (CGImageRef)rotateImage:(CGImageRef)original degrees:(float)degrees {
    if (degrees == 0.0f) {
        return original;
    } else {
        double radians = degrees * M_PI / 180;

#if TARGET_OS_EMBEDDED || TARGET_IPHONE_SIMULATOR
        radians = -1 * radians;
#endif

        size_t _width = CGImageGetWidth(original);
        size_t _height = CGImageGetHeight(original);

        CGRect imgRect = CGRectMake(0, 0, _width, _height);
        CGAffineTransform _transform = CGAffineTransformMakeRotation(radians);
        CGRect rotatedRect = CGRectApplyAffineTransform(imgRect, _transform);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     rotatedRect.size.width,
                                                     rotatedRect.size.height,
                                                     CGImageGetBitsPerComponent(original),
                                                     0,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGContextSetAllowsAntialiasing(context, FALSE);
        CGContextSetInterpolationQuality(context, kCGInterpolationNone);
        CGColorSpaceRelease(colorSpace);

        CGContextTranslateCTM(context,
                              +(rotatedRect.size.width / 2),
                              +(rotatedRect.size.height / 2));
        CGContextRotateCTM(context, radians);

        CGContextDrawImage(context, CGRectMake(-imgRect.size.width / 2,
                                               -imgRect.size.height / 2,
                                               imgRect.size.width,
                                               imgRect.size.height),
                           original);

        CGImageRef rotatedImage = CGBitmapContextCreateImage(context);
        CFRelease(context);

        return rotatedImage;
    }
}
Yet Instruments is telling me that rotatedImage is not being released. I'm running this method a lot, so the memory builds up. Should I be releasing it in the parent method which calls rotateImage:? Or should I release it before rotateImage: passes it back?
Thanks!
I would suggest releasing the CGImageRef in the parent method that calls rotateImage:.
Moreover, in that case you should follow the naming convention and rename rotateImage: to createRotateImage: for clarity.
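A minimal sketch of the calling side, assuming the rename suggested above (and, as a further assumption, that the degrees == 0 branch returns CGImageRetain(original) so the caller can release unconditionally):
// Hypothetical calling code; sourceImage is an assumed UIImage, not from the original question.
CGImageRef rotated = [self createRotateImage:sourceImage.CGImage degrees:90.0f];
if (rotated != NULL) {
    UIImage *result = [UIImage imageWithCGImage:rotated];
    CGImageRelease(rotated); // balances the "create" in createRotateImage:degrees:
    // ... use result ...
}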

Received Memory Warning in DrawRect Method

I am developing an app in which I record the main screen, and the setNeedsDisplay method is used.
The problem is that it uses a lot of memory, even when I am not recording the screen.
I want to reduce the memory usage of the code below.
Any solution to this?
Thanks in advance.
- (void)drawRect:(CGRect)rect
{
    NSDate *start = [NSDate date];
    CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
    //NSLog(@"context value %@", context);

    // not sure why this is necessary...image renders upside-down and mirrored
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
    CGContextConcatCTM(context, flipVertical);

    [self.layer renderInContext:context];

    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *background = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    self.currentScreen = background;

    if (_recording) {
        float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
        [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
    }

    float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
    delayRemaining = (1.0 / self.frameRate) - processingSeconds;

    // redraw at the specified framerate
    [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();

    if (bitmapData != NULL) {
        free(bitmapData);
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaNoneSkipFirst);
    CGContextSetAllowsAntialiasing(context, NO);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
You create a context every frame but never release it; try adding this at the end:
CGContextRelease(context);
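For instance, a sketch of the tail of the drawRect: above with the release added (same variable names as in the question):
    // ... capture the frame and schedule the next redraw as before ...
    [self performSelector:@selector(setNeedsDisplay)
               withObject:nil
               afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
    CGContextRelease(context); // release the context created at the top of drawRect:
}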

Resizing image without losing its quality

When I send imported images (other images) to the server, it takes less time than with iPhone images (pictures taken with the default camera). Is there an option to resize an image without losing its quality?
Use this code; it will help:
+ (UIImage *)scaleImage:(UIImage *)image maxWidth:(int)maxWidth maxHeight:(int)maxHeight
{
    CGImageRef imgRef = image.CGImage;
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    if (width <= maxWidth && height <= maxHeight)
    {
        return image;
    }

    CGAffineTransform transform = CGAffineTransformIdentity;
    CGRect bounds = CGRectMake(0, 0, width, height);

    if (width > maxWidth || height > maxHeight)
    {
        CGFloat ratio = width / height;
        if (ratio > 1)
        {
            bounds.size.width = maxWidth;
            bounds.size.height = bounds.size.width / ratio;
        }
        else
        {
            bounds.size.height = maxHeight;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    CGFloat scaleRatio = bounds.size.width / width;

    UIGraphicsBeginImageContext(bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, scaleRatio, -scaleRatio);
    CGContextTranslateCTM(context, 0, -height);
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
Try the method below.
Call it like this:
UIImage *imgBig = [self imageWithImage:yourImage scaledToSize:yourNewImageSize];
and use this method:
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, YES, image.scale);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Merging two UIImages faster than CGContextDrawImage

I'm merging two UIImages into one context. It works, but it performs pretty slowly and I need a faster solution. With my current code, the mergeImage:withImage: call takes about 400 ms on an iPad 1G.
Here's what I do:
- (CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGContextRef context = [ImageToolbox createARGBBitmapContextFromImageSize:CGSizeMake(size.width, size.height)];
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img1.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img2.CGImage);
    return context;
}
And here's the static methods from the ImageToolbox class:
static CGRect screenRect;

+ (CGContextRef)createARGBBitmapContextFromImageSize:(CGSize)imageSize
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = imageSize.width;
    size_t pixelsHigh = imageSize.height;
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}

+ (CGSize)getScreenSize
{
    if (screenRect.size.width == 0 && screenRect.size.height == 0)
    {
        screenRect = [[UIScreen mainScreen] bounds];
    }
    return CGSizeMake(screenRect.size.height, screenRect.size.width - 20);
}
Any suggestions to increase the performance?
I would definitely recommend using Instruments to profile which call is taking the most time so you can really break it down. Also, I have written a couple of methods which I think should do the same thing with a lot less code, though you may need everything written out the way you have it to keep things customizable. Here they are anyway:
- (CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);

    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIGraphicsEndImageContext();
    // Caveat: this context belongs to the UIGraphics image-context stack and is no longer
    // valid once UIGraphicsEndImageContext() has run, so prefer the UIImage variant below.
    return context;
}
Or if you wanted the combined image right away:
- (UIImage *)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);

    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I have no idea whether they would be faster or not, but I really wouldn't know how to speed up what you have without seeing the Instruments profile breakdown.
In any case, I hope this helps.
I did not manage to find a faster way of merging the images, so I reduced the image sizes to make the operation faster.
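A minimal sketch of that size reduction, assuming a hypothetical helper that redraws each input at half its point size before it is passed to mergeImage:withImage::
// Hypothetical helper, not from the original answer; just one way to shrink the inputs.
- (UIImage *)halfSizedImage:(UIImage *)image
{
    CGSize smallSize = CGSizeMake(image.size.width * 0.5f, image.size.height * 0.5f);
    UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return smallImage;
}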

UIImage resize (Scale proportion) [duplicate]

Possible Duplicate:
Resize UIImage with aspect ratio?
The following piece of code is resizing the image perfectly, but the problem is that it messes up the aspect ratio (resulting in a skewed image). Any pointers?
// Change image resolution (auto-resize to fit)
+ (UIImage *)scaleImage:(UIImage *)image toResolution:(int)resolution {
    CGImageRef imgRef = [image CGImage];
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);
    CGRect bounds = CGRectMake(0, 0, width, height);

    // If already at the minimum resolution, return the original image, otherwise scale.
    if (width <= resolution && height <= resolution) {
        return image;
    } else {
        CGFloat ratio = width / height;
        if (ratio > 1) {
            bounds.size.width = resolution;
            bounds.size.height = bounds.size.width / ratio;
        } else {
            bounds.size.height = resolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    UIGraphicsBeginImageContext(bounds.size);
    [image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
I used this single line of code to create a new, scaled UIImage. Set the scale and orientation parameters to achieve what you want. The first line of code just grabs the image.
// grab the original image
UIImage *originalImage = [UIImage imageNamed:@"myImage.png"];
// scaling set to 2.0 makes the image 1/2 the size
UIImage *scaledImage =
    [UIImage imageWithCGImage:[originalImage CGImage]
                        scale:(originalImage.scale * 2.0)
                  orientation:(originalImage.imageOrientation)];
That's OK, not a big problem. The thing is, you have to find the proportional width and height.
For example, if the size is 2048.0 x 1360.0 and it has to fit a 320 x 480 resolution, then the resulting image size should be 722.0 x 480.0.
Here is the formula, where w, h are the original dimensions and x, y are the resulting ones:
w/h = x/y
=>
x = (w/h) * y
Substituting w = 2048, h = 1360, y = 480 gives x ≈ 722.0 (here width > height; if height > width, take x to be 320 and calculate y instead). You can also plug the numbers into any online aspect-ratio calculator.
Confused? All right, here is a category on UIImage which will do it for you.
@interface UIImage (UIImageFunctions)
- (UIImage *)scaleToSize:(CGSize)size;
- (UIImage *)scaleProportionalToSize:(CGSize)size;
@end

@implementation UIImage (UIImageFunctions)

- (UIImage *)scaleToSize:(CGSize)size
{
    // Scaling the selected image to the targeted size
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextClearRect(context, CGRectMake(0, 0, size.width, size.height));

    if (self.imageOrientation == UIImageOrientationRight)
    {
        CGContextRotateCTM(context, -M_PI_2);
        CGContextTranslateCTM(context, -size.height, 0.0f);
        CGContextDrawImage(context, CGRectMake(0, 0, size.height, size.width), self.CGImage);
    }
    else
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage);

    CGImageRef scaledImage = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    UIImage *image = [UIImage imageWithCGImage:scaledImage];
    CGImageRelease(scaledImage);
    return image;
}

- (UIImage *)scaleProportionalToSize:(CGSize)size1
{
    if (self.size.width > self.size.height)
    {
        NSLog(@"Landscape");
        size1 = CGSizeMake((self.size.width / self.size.height) * size1.height, size1.height);
    }
    else
    {
        NSLog(@"Portrait");
        size1 = CGSizeMake(size1.width, (self.size.height / self.size.width) * size1.width);
    }
    return [self scaleToSize:size1];
}

@end
The following is the appropriate call, where img is the UIImage instance:
img=[img scaleProportionalToSize:CGSizeMake(320, 480)];
This fixes the math so the image is scaled to fit within the maximum size in both width and height, rather than in only one dimension depending on the original's width and height.
- (UIImage *)scaleProportionalToSize:(CGSize)size
{
    float widthRatio = size.width / self.size.width;
    float heightRatio = size.height / self.size.height;

    if (widthRatio > heightRatio)
    {
        size = CGSizeMake(self.size.width * heightRatio, self.size.height * heightRatio);
    } else {
        size = CGSizeMake(self.size.width * widthRatio, self.size.height * widthRatio);
    }
    return [self scaleToSize:size];
}
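For example (a hypothetical comparison, reusing the 2048 x 1360 photo and 320 x 480 target from the earlier answer):
UIImage *fitted = [img scaleProportionalToSize:CGSizeMake(320, 480)];
// original category: ~722 x 480  (overflows the 320-point width)
// this version:       320 x 212.5 (fits inside 320 x 480 in both dimensions)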
This change worked for me:
// The size returned by CGImageGetWidth(imgRef) & CGImageGetHeight(imgRef) is incorrect, as it doesn't respect the image orientation!
// CGImageRef imgRef = [image CGImage];
// CGFloat width = CGImageGetWidth(imgRef);
// CGFloat height = CGImageGetHeight(imgRef);
//
// This returns the actual width and height of the photo (and hence solves the problem):
CGFloat width = image.size.width;
CGFloat height = image.size.height;
CGRect bounds = CGRectMake(0, 0, width, height);
Try making the bounds size integral:
#include <math.h>
....
if (ratio > 1) {
    bounds.size.width = resolution;
    bounds.size.height = round(bounds.size.width / ratio);
} else {
    bounds.size.height = resolution;
    bounds.size.width = round(bounds.size.height * ratio);
}