How to get status bar images in an iPhone app

I want to capture the status bar without using UIGetScreenImage(), because that method is a private API.
Apple recommends
http://developer.apple.com/iphone/library/qa/qa2010/qa1703.html
as an alternative, but that approach cannot capture the status bar.
How can I get an image of the status bar?

This is impossible without using the Private API.
Referencing this duplicate: Emailing full screen of iPhone app
CGImageRef UIGetScreenImage();

@interface UIImage (ScreenImage)
+ (UIImage *)imageWithScreenContents;
@end

@implementation UIImage (ScreenImage)
+ (UIImage *)imageWithScreenContents
{
    CGImageRef cgScreen = UIGetScreenImage();
    if (cgScreen) {
        UIImage *result = [UIImage imageWithCGImage:cgScreen];
        CGImageRelease(cgScreen);
        return result;
    }
    return nil;
}
@end
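For reference, the QA1703 technique Apple points to renders your own window hierarchy into an image context; a rough sketch is below (simplified, and the method name is mine). As the question notes, this only captures your application's windows, so the status bar will not appear in the result:

#import <QuartzCore/QuartzCore.h>

// Renders the app's own windows into a UIImage (QA1703-style sketch).
// This does NOT include the status bar, which is drawn by a system process.
- (UIImage *)appScreenshotWithoutStatusBar
{
    CGSize size = [UIScreen mainScreen].bounds.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, 0); // 0 = main screen scale (iOS 4+)

    for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
        // Draw each window's layer into the current context.
        [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}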

Related

New image name for iPhone 5

With retina displays we name images with the @2x suffix. I see that the launch image has to be Default-568h@2x.png, but this does not seem to be the case for other images. For example, if my background is bg.png and bg@2x.png, I tried using bg-568h@2x.png but that does not work. Can somebody tell me what the images need to be named to support the iPhone 5?
There is no special suffix for the iPhone 5 (4'' display), just the specific Default-568h@2x.png launch image.
Here's a macro to handle it:
// iPhone 5 support
#define ASSET_BY_SCREEN_HEIGHT(regular, longScreen) (([[UIScreen mainScreen] bounds].size.height <= 480.0) ? regular : longScreen)
Usage (asset names: image.png, image@2x.png, image-568h@2x.png):
myImage = [UIImage imageNamed:ASSET_BY_SCREEN_HEIGHT(@"image", @"image-568h")];
There is no specific image name. Having the Default-568h@2x.png launch image will make it load on an iPhone 5 or iPod touch 5G and will enable the non-letterboxed mode. After that, you need to design your views to be flexible; there is no special "image name" or anything for the new size.
For your background, for example, you should probably be using an image that is capable of stretching or tiling and have it configured properly before setting it.
The iPhone 5 does not have a different pixel density; it has the same retina PPI as the iPhone 4/4S, just a different screen size. The @2x images will be used on the iPhone 5 as well as on the 4/4S.
To complete Jason's answer, I would propose overriding UIImage's imageNamed: method so that it appends the "-568h" suffix to the image name, or adding a new method such as resolutionAdaptedImageNamed: to UIImage, perhaps using a category.
If I have a bit of time in the next days, I will try to post the code for that.
Caution: this will not work for images referenced in nib files.
If you are using Xcode 5, you can use an asset catalog (see Apple's documentation for usage).
Once your asset catalog is created, [UIImage imageNamed:@"your_image_set"] will pull the right image for the device.
You can also make a category for this, as below.
UIImage+Retina4.h
#import <UIKit/UIKit.h>
#import <objc/runtime.h>

@interface UIImage (Retina4)
@end
UIImage+Retina4.m
#import "UIImage+Retina4.h"
static Method origImageNamedMethod = nil;
#implementation UIImage (Retina4)
+ (void)initialize {
origImageNamedMethod = class_getClassMethod(self, #selector(imageNamed:));
method_exchangeImplementations(origImageNamedMethod,
class_getClassMethod(self, #selector(retina4ImageNamed:)));
}
+ (UIImage *)retina4ImageNamed:(NSString *)imageName {
// NSLog(#"Loading image named => %#", imageName);
NSMutableString *imageNameMutable = [imageName mutableCopy];
NSRange retinaAtSymbol = [imageName rangeOfString:#"#"];
if (retinaAtSymbol.location != NSNotFound) {
[imageNameMutable insertString:#"-568h" atIndex:retinaAtSymbol.location];
} else {
CGFloat screenHeight = [UIScreen mainScreen].bounds.size.height;
if ([UIScreen mainScreen].scale == 2.f && screenHeight == 568.0f) {
NSRange dot = [imageName rangeOfString:#"."];
if (dot.location != NSNotFound) {
[imageNameMutable insertString:#"-568h#2x" atIndex:dot.location];
} else {
[imageNameMutable appendString:#"-568h#2x"];
}
}
}
NSString *imagePath = [[NSBundle mainBundle] pathForResource:imageNameMutable ofType:#"png"];
if (imagePath) {
return [UIImage retina4ImageNamed:imageNameMutable];
} else {
return [UIImage retina4ImageNamed:imageName];
}
return nil;
}
#end
You can then import this category wherever you want the -568h or the normal image to be picked automatically, and call imageNamed: as usual:
imgvBackground.image = [UIImage imageNamed:@"bkground_bg"]; // image name without extension

loading images from disk in iPhone app is slow

In my iPhone app, I am using the iPhone's camera to take a photo and save it to disk (the application's Documents folder). This is how I save it:
[UIImageJPEGRepresentation(photoTaken, 0.0) writeToFile:jpegPath atomically:YES];
Using the most compression, I figured reading the image from disk would be quick. But it's not! I use the image as the background image for a button in one of my views. I load it like this:
[self.frontButton setBackgroundImage:[UIImage imageWithContentsOfFile:frontPath] forState:UIControlStateNormal];
When I navigate to the view with this button, it is slow and choppy. How do I fix this?
+imageWithContentsOfFile: is synchronous, so the image load from disk blocks the main thread and causes the choppiness. The solution is to load the file asynchronously: wrap +imageWithContentsOfFile: in a dispatch_async() onto a background queue, then use a nested dispatch_async() onto the main queue for -setBackgroundImage:, since UIKit methods must be called on the main thread. If you want the image to appear immediately after the view loads, you'll need to pre-cache the image from disk so it is already in memory when the view appears.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    UIImage *image = [UIImage imageWithContentsOfFile:frontPath];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.frontButton setBackgroundImage:image forState:UIControlStateNormal];
    });
});
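On the pre-caching point: +imageWithContentsOfFile: typically defers JPEG decompression until the image is first drawn, so the first display can still stutter even if the UIImage object is already in memory. One common workaround (a sketch, not part of the original answer; it assumes iOS 4+ so that offscreen drawing on a background thread is allowed) is to force decoding on the background queue by drawing the image once into an offscreen context:

// Force the JPEG to be decoded off the main thread so the first
// on-screen draw is cheap. Call this from a background queue.
UIImage *DecodedImage(UIImage *image) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decoded;
}

You would call this inside the background dispatch_async block above and hand the decoded image to the main queue.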
As an aside, if the button image happens to be a gradient, consider using one of the following so that the image file loaded from disk is tiny:
- (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets
or (deprecated, only use if you need to support iOS 4.x):
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight
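For example, a sketch of the resizable approach (the asset name and cap inset values are placeholders, not from the original answer):

// A small gradient strip stretched to fill the button; only a tiny
// PNG needs to be loaded from disk.
UIImage *strip = [UIImage imageNamed:@"button_gradient"]; // hypothetical asset name
UIImage *stretchable = [strip resizableImageWithCapInsets:UIEdgeInsetsMake(4, 4, 4, 4)];
[self.frontButton setBackgroundImage:stretchable forState:UIControlStateNormal];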
This is the fastest way I know. You'll need to #import <ImageIO/ImageIO.h>.
I use this code to download and compress images during a scroll, inside a scroll view, and you barely notice the delay.
CGImageSourceRef src = CGImageSourceCreateWithData((CFDataRef)mutableData, NULL);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                         (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageIfAbsent,
                         [NSNumber numberWithDouble:200.0], (id)kCGImageSourceThumbnailMaxPixelSize,
                         nil];
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(src, 0, (CFDictionaryRef)options);
UIImage *image = [[UIImage alloc] initWithCGImage:thumbnail];
CGImageRelease(thumbnail);
CFRelease(src);

// Cache the thumbnail to disk (JPEG data, to match the .jpg file name;
// assumes the "thumbnail" subdirectory already exists)
NSString *fileName = @"fileName.jpg";
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"thumbnail"];
path = [path stringByAppendingPathComponent:fileName];
if ([UIImageJPEGRepresentation(image, 0.8) writeToFile:path atomically:YES]) {
    // Success
}
I faced a very similar issue where I had to load hundreds of images from a directory. Performance was quite slow when I used UIImage(contentsOfFile:). The method below improved my performance by about 70%.
import UIKit
import ImageIO

// ThumbnailGenerator is the answerer's own protocol.
class ImageThumbnailGenerator: ThumbnailGenerator {
    private let url: URL

    init(url: URL) {
        self.url = url
    }

    func generate(size: CGSize) -> UIImage? {
        guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else {
            return nil
        }
        let options: [NSString: Any] = [
            kCGImageSourceThumbnailMaxPixelSize: Double(max(size.width, size.height) * UIScreen.main.scale),
            kCGImageSourceCreateThumbnailFromImageIfAbsent: true
        ]
        return CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options as CFDictionary)
            .flatMap { UIImage(cgImage: $0) }
    }
}

Image storage in iOS devices, the appropriate behavior?

I'm a bit puzzled about image storage on iOS devices for an app I'm making.
My requirement is to load an image into a table view cell, say into the default image slot of a UITableViewCell, so it isn't a background image.
The user can add an image either from the photo library or by taking an entirely new photo.
If a new image is taken, where should that image preferably be stored? In the device's photo library, or in the Documents folder of the app sandbox?
Because these are image files, I'm afraid that storing them with the app can make it pretty big, and I don't want to cross any size limit.
Performance-wise, though, which would be the better option?
I have an app that does some of the things you describe. My solution was to create a singleton that I call my imageStore. You can find information about singletons here.
In this imageStore I keep all my "full size" images; however, like you, I am concerned about their size, so instead of using them directly I use thumbnails. For each object that I want to represent in the table, I make sure the object has a UIImage of roughly thumbnail size (64x64, or whatever size you want). When an object is created, I create a thumbnail and store it along with the object, and I use that thumbnail instead of the larger image wherever I can get away with it, such as in a table cell.
I'm not behind my Mac at the moment, but if you want I can post some code later to demonstrate both the singleton and the creation and usage of the thumbnail.
Here is my header file for the ImageStore
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h> // for UIImage

@interface BPImageStore : NSObject {
    NSMutableDictionary *dictionary;
}

+ (BPImageStore *)defaultImageStore;
- (void)setImage:(UIImage *)i forKey:(NSString *)s;
- (UIImage *)imageForKey:(NSString *)s;
- (void)deleteImageForKey:(NSString *)s;

@end
Here is the ImageStore.m file - my Singleton
#import "BPImageStore.h"
static BPImageStore *defaultImageStore = nil;
#implementation BPImageStore
+ (id)allocWithZone:(NSZone *)zone {
return [[self defaultImageStore] retain];
}
+ (BPImageStore *)defaultImageStore {
if(!defaultImageStore) {
defaultImageStore = [[super allocWithZone:NULL] init];
}
return defaultImageStore;
}
- (id)init
{
if(defaultImageStore) {
return defaultImageStore;
}
self = [super init];
if (self) {
dictionary = [[NSMutableDictionary alloc] init];
NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
[nc addObserver:self selector:#selector(clearCach:) name:UIApplicationDidReceiveMemoryWarningNotification object:nil];
}
return self;
}
- (void) clearCache:(NSNotification *)note {
[dictionary removeAllObjects];
}
- (oneway void) release {
// no op
}
- (id)retain {
return self;
}
- (NSUInteger)retainCount {
return NSUIntegerMax;
}
- (void)setImage:(UIImage *)i forKey:(NSString *)s {
[dictionary setObject:i forKey:s];
// Create full path for image
NSString *imagePath = pathInDocumentDirectory(s);
// Turn image into JPEG data
NSData *d = UIImageJPEGRepresentation(i, 0.5);
// Write it to full path
[d writeToFile:imagePath atomically:YES];
}
- (UIImage *)imageForKey:(NSString *)s {
// if possible, get it from the dictionary
UIImage *result = [dictionary objectForKey:s];
if(!result) {
// Create UIImage object from file
result = [UIImage imageWithContentsOfFile:pathInDocumentDirectory(s)];
if (result)
[dictionary setObject:result forKey:s];
}
return result;
}
- (void)deleteImageForKey:(NSString *)s {
if(!s) {
return;
}
[dictionary removeObjectForKey:s];
NSString *path = pathInDocumentDirectory(s);
[[NSFileManager defaultManager] removeItemAtPath:path error:NULL];
}
#end
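The pathInDocumentDirectory() helper called above isn't shown in this answer; here is a minimal sketch of what such a function might look like (the implementation is my assumption, based only on how it is used):

// Hypothetical helper: builds a path for `fileName` inside the app's
// Documents directory, matching how pathInDocumentDirectory() is called above.
NSString *pathInDocumentDirectory(NSString *fileName) {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    return [documentsDirectory stringByAppendingPathComponent:fileName];
}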
Here is where I use the image store. In my "player" object, I have a UIImage to hold the thumbnail and an NSString to hold a key that I create. Every original image I put into the store gets a key, and I store the key with my player. If I ever need the original image, I fetch it by that unique key. It's also worth noting that I don't even store the original image at full size; I cut it down a bit first. After all, in my case it is a picture of a player, and nobody looks so good as to need a full-resolution picture :)
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *oldKey = [player imageKey];
    // Did the player already have an image?
    if (oldKey) {
        // Delete the old image
        [[BPImageStore defaultImageStore] deleteImageForKey:oldKey];
    }
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Create a CFUUID object; it knows how to create unique identifiers
    CFUUIDRef newUniqueID = CFUUIDCreate(kCFAllocatorDefault);
    // Create a string from the unique identifier
    CFStringRef newUniqueIDString = CFUUIDCreateString(kCFAllocatorDefault, newUniqueID);
    // Use that unique ID to set our player's imageKey
    [player setImageKey:(NSString *)newUniqueIDString];
    // We used Create functions to make these objects, so we need to release them
    CFRelease(newUniqueIDString);
    CFRelease(newUniqueID);

    // Scale the image down a bit
    UIImage *smallImage = [self scaleImage:image toSize:CGSizeMake(160.0, 240.0)];
    // Store the image in the imageStore with this key
    [[BPImageStore defaultImageStore] setImage:smallImage forKey:[player imageKey]];
    // Put that image onto the screen in our image view
    [playerView setImage:smallImage];
    [player setThumbnailDataFromImage:smallImage];
}
Here is an example of going back to get the original image from the imageStore:
// Go get the image
NSString *imageKey = [player imageKey];
if (imageKey) {
    // Get the image for this key from the image store
    UIImage *imageToDisplay = [[BPImageStore defaultImageStore] imageForKey:imageKey];
    [playerView setImage:imageToDisplay];
} else {
    [playerView setImage:nil];
}
Finally, here is how I create a thumbnail from the original image:
- (void)setThumbnailDataFromImage:(UIImage *)image {
    CGSize origImageSize = [image size];
    CGRect newRect;
    newRect.origin = CGPointZero;
    newRect.size = [[self class] thumbnailSize]; // just provide whatever size you want here

    // How do we scale the image?
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);

    // Create a bitmap image context
    UIGraphicsBeginImageContext(newRect.size);

    // Round the corners
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    [path addClip];

    // Into what rectangle shall I composite the image?
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0;

    // Draw the image into it
    [image drawInRect:projectRect];

    // Get the image from the image context and keep it as our thumbnail
    UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
    [self setThumbnail:small];

    // Get the image as PNG data
    NSData *data = UIImagePNGRepresentation(small);
    [self setThumbnailData:data];

    // Clean up image context resources
    UIGraphicsEndImageContext();
}

Objective C: EXC_BAD_ACCESS when generating a UIImage

I have a view that generates an image based on a series of layers. I have images for the background, for the thumbnail, and finally for an overlay. Together, it makes one cohesive display.
It seems to work like a dream, except when it doesn't. For seemingly no reason, I get an EXC_BAD_ACCESS on the line marked below after it has generated somewhere between 8 and 20 images. I've run it through the memory leak tool and the allocation tool; it's not eating up tons of memory and it's not leaking. I'm totally stumped.
Here's the relevant code:
- (UIImage *)addLayer:(UIImage *)layer toImage:(UIImage *)background atPoint:(CGPoint)point {
    CGSize size = CGSizeMake(240, 240);
    UIGraphicsBeginImageContext(size);
    [background drawAtPoint:CGPointMake(0, 0)]; // <--- error here
    [layer drawAtPoint:point];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
// Build the layered image -- thingPage onto thingBackground,
// then the screenshot on that, then thingTop on top of it all.
// thingBackground, thingPage and thingTop are all preloaded UIImages.
- (UIImage *)getImageForThing:(Thing *)t {
    [self loadImageCacheIfNecessary];
    if (!t.screenshot) {
        return [UIImage imageNamed:@"NoPreview.png"];
    } else {
        UIImage *screenshot = t.screenshot;
        UIImage *currentImage = [self addLayer:thingPage toImage:thingBackground atPoint:CGPointMake(0, 0)];
        currentImage = [self addLayer:screenshot toImage:currentImage atPoint:CGPointMake(39, 59)];
        currentImage = [self addLayer:thingTop toImage:currentImage atPoint:CGPointMake(0, 1)];
        return currentImage;
    }
}
I can't find anywhere that this is going wrong, and I've been tearing my hair out for a couple of hours on this. It's the final known bug in the system, so you can imagine how antsy I am to fix it! :-)
Thanks in advance for any help.
For what it's worth, I always use -(void)drawInRect: instead of -(void)drawAtPoint:
CGRect rtDraw;
rtDraw.origin = CGPointZero;
rtDraw.size = size;
[background drawInRect:rtDraw];
[layer drawInRect:rtDraw];
And one more thing:
Drawing with UIGraphicsBeginImageContext(size) and UIGraphicsEndImageContext() is not thread-safe.
Those functions push and pop a context on a stack that is managed by the system.
EXC_BAD_ACCESS is almost always due to accessing an object that has already been released. In your code this seems to be t.screenshot. Check how the object returned by Thing's screenshot property is created (and retained, if it is an instance variable).
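For instance, under manual reference counting a retaining property declaration (a sketch, assuming screenshot is a plain property on Thing) keeps the image alive for as long as the Thing does:

// In Thing.h -- a retaining property so the screenshot isn't
// deallocated out from under the drawing code (MRC era).
@property (nonatomic, retain) UIImage *screenshot;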
As it turns out, the error wasn't in the code I posted; it was in my caching of the thingBackground, thingPage and thingTop images: I wasn't retaining them. Here's the missing code, fixed:
- (void)loadImageCacheIfNecessary {
    if (!thingBackground) {
        thingBackground = [[UIImage imageNamed:@"ThingBack.png"] retain];
    }
    if (!thingPage) {
        thingPage = [[UIImage imageNamed:@"ThingPage.png"] retain];
    }
    if (!thingTop) {
        thingTop = [[UIImage imageNamed:@"ThingTop.png"] retain];
    }
}
I will admit I'm still not comfortable with the whole retain/release/autorelease stuff in Objective C. Hopefully it'll sink in one day soon. :-)

Photos Not Being Geo-Tagged - How Can I Geo-tag Photos in my iPhone App?

When you take photos with the iPhone camera, they are automatically geo-tagged.
I have written the following code to take photos in my GPS Recorder app, but the photos are not being geo-tagged.
What am I doing wrong? Should I be using a completely different method, or is there some way I can modify my code to add the geo-tag and other data to the EXIF info?
Note that self.photoInfo is the info dictionary returned by this delegate method:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
Here's the code:
UIImage *oImage = [self.photoInfo objectForKey:UIImagePickerControllerOriginalImage];
CGSize size = oImage.size;
oImage = [CameraController imageWithImage:oImage
                             scaledToSize:CGSizeMake(size.width / 2.56, size.height / 2.56)];
NSDictionary *info = [[NSDictionary alloc] initWithObjectsAndKeys:
                      oImage, @"image", titleString, @"title", nil];
TrailTrackerAppDelegate *t = (TrailTrackerAppDelegate *)[[UIApplication sharedApplication] delegate];
[NSThread detachNewThreadSelector:@selector(saveImage:) toTarget:t withObject:info];

+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Here is a photo taken with the Camera app, which has all the good EXIF data: http://www.flickr.com/photos/33766454@N02/3978868900/
And here is one taken with my code, which doesn't have the EXIF data:
http://www.flickr.com/photos/33766454@N02/3978860380/
A UIImage does not carry any EXIF data, so the image you are handed has no EXIF attached. If you generate a JPEG or PNG yourself, you have to add any EXIF you need in place...
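A sketch of how the EXIF/GPS data could be attached using the ImageIO framework (the function, the CLLocation parameter and the output URL are my own placeholders, not code from the question or answer):

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeJPEG
#import <CoreLocation/CoreLocation.h>

// Writes `image` as a JPEG to `outputURL` with a minimal GPS dictionary
// built from `location` (e.g. the latest fix from a CLLocationManager).
BOOL WriteGeoTaggedJPEG(UIImage *image, CLLocation *location, NSURL *outputURL)
{
    NSData *jpegData = UIImageJPEGRepresentation(image, 0.8);
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)jpegData, NULL);
    if (!source) return NO;

    CLLocationDegrees lat = location.coordinate.latitude;
    CLLocationDegrees lon = location.coordinate.longitude;
    NSDictionary *gps = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithDouble:fabs(lat)], (id)kCGImagePropertyGPSLatitude,
                         (lat >= 0 ? @"N" : @"S"),              (id)kCGImagePropertyGPSLatitudeRef,
                         [NSNumber numberWithDouble:fabs(lon)], (id)kCGImagePropertyGPSLongitude,
                         (lon >= 0 ? @"E" : @"W"),              (id)kCGImagePropertyGPSLongitudeRef,
                         nil];
    NSDictionary *metadata = [NSDictionary dictionaryWithObject:gps
                                                         forKey:(id)kCGImagePropertyGPSDictionary];

    CGImageDestinationRef destination =
        CGImageDestinationCreateWithURL((CFURLRef)outputURL, kUTTypeJPEG, 1, NULL);
    if (!destination) { CFRelease(source); return NO; }

    // Copy the image over and attach the GPS metadata.
    CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)metadata);
    BOOL ok = CGImageDestinationFinalize(destination);

    CFRelease(destination);
    CFRelease(source);
    return ok;
}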