I'm trying to take a screenshot using this code:
- (void)viewDidLoad
{
[super viewDidLoad];
self.view.backgroundColor = [UIColor lightGrayColor];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)viewDidAppear:(BOOL)animated{
UIImageWriteToSavedPhotosAlbum([self screenshot], self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
- (UIImage*)screenshot
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
But the saved image has the status bar whited out, including the signal, time, and battery indicators. How can I take a screenshot that includes the content of the status bar?
Hide the status bar before taking the screenshot, as follows:
[[UIApplication sharedApplication] setStatusBarHidden:YES];
(Tested with iOS 7 and autolayout)
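If you only need the bar gone for the duration of the capture, a hide/restore wrapper is enough. This is a minimal sketch, assuming the `screenshot` method from the question and that view-controller-based status bar appearance is disabled (pre-iOS 7 behavior); the method name is hypothetical:

```objectivec
// Sketch: hide the status bar, capture, then restore the previous state.
// Assumes -screenshot is the method from the question.
- (void)saveScreenshotWithoutStatusBar
{
    UIApplication *app = [UIApplication sharedApplication];
    BOOL wasHidden = app.statusBarHidden;
    [app setStatusBarHidden:YES];        // remove the bar before rendering
    UIImage *shot = [self screenshot];
    [app setStatusBarHidden:wasHidden];  // put the bar back
    UIImageWriteToSavedPhotosAlbum(shot, nil, nil, nil);
}
```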
I use the simulator's 'take screenshot' feature combined with some conditional compilation. By uncommenting a #define, I can quickly switch to 'take screenshot' mode for my launch screen.
The main trick is to hide the status bar when the view controller of your opening screen appears, but with a very long animation, so you have plenty of time to hit the screenshot command in the simulator, which places a screenshot right on your desktop (or take a screenshot on the device if you prefer that).
Using the code below, the status bar disappears immediately thanks to UIStatusBarAnimationNone, but the 'sliding off screen' is animated over a very long period (5000.0 seconds in the code below). So before the bar 'moves' even 1 of its 20 points of vertical size, you have roughly 5000/20 = 250 seconds to take your screenshot (and have some coffee).
- (BOOL)prefersStatusBarHidden
{
return YES;
}
- (UIStatusBarAnimation)preferredStatusBarUpdateAnimation
{
#ifdef kTAKE_LAUNCHIMAGE_SCREENSHOT
return UIStatusBarAnimationNone;
#else
return UIStatusBarAnimationSlide;
#endif
}
- (void)removeStatusBarAnimated
{
#ifdef kTAKE_LAUNCHIMAGE_SCREENSHOT
[UIView animateWithDuration:5000.0 animations:^{
[self setNeedsStatusBarAppearanceUpdate];
}];
#else
[UIView animateWithDuration:2.0 animations:^{
[self setNeedsStatusBarAppearanceUpdate];
}];
#endif
}
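A hypothetical call site, wired up in the opening view controller so the slow hide starts as soon as the screen is visible:

```objectivec
// Uncomment to switch the app into 'take screenshot' mode.
//#define kTAKE_LAUNCHIMAGE_SCREENSHOT

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    [self removeStatusBarAnimated]; // starts the (very) slow status bar hide
}
```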
For more information on taking control of the status bar in iOS 7, check my answer and code here:
https://stackoverflow.com/a/20594717/1869369
This is a solution that doesn't require hiding the status bar. Simply use the correct bounds, and note that the y offset should be negative. Here "map" is a view that fills the screen below the status bar:
(Code is written inside a ViewController)
UIGraphicsBeginImageContextWithOptions(self.map.bounds.size, NO, [UIScreen mainScreen].scale);
CGRect bounds = CGRectMake(0, -(self.view.bounds.size.height - self.map.bounds.size.height), self.map.bounds.size.width, self.map.bounds.size.height);
BOOL success = [self.view drawViewHierarchyInRect:bounds afterScreenUpdates:YES];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Related
I am using a method to take a UIImage screenshot programmatically, which works very nicely. What I want to do now is remove the topmost UIView from the screenshot BUT not remove it from the actual view on screen, so simply setting topView.hidden = YES won't do. I am using this method to save the screenshot:
- (UIImage*)screenshot
{
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
My first thought was to remove the sublayer from the superlayer by doing this:
NSArray *sublayers = [[window layer] sublayers];
[[window layer] replaceSublayer:[sublayers objectAtIndex:[sublayers count] - 1] with:nil];
but I found that the sublayers array contains only one object. How can I remove the top view from the screenshot?
For example, if you have View1 in your view hierarchy and you don't want it in the screenshot, you can set View1.hidden = YES; before the screenshot and View1.hidden = NO; afterward.
Maybe this will help you.
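Because -renderInContext: runs synchronously on the main thread, the hidden state never actually flashes on screen. A sketch of that hide/capture/unhide pattern, assuming `topView` is the view to exclude and `screenshot` is the method from the question:

```objectivec
// Sketch: exclude topView from the capture without the user ever seeing it vanish.
- (UIImage *)screenshotExcludingTopView
{
    self.topView.hidden = YES;         // excluded from rendering
    UIImage *shot = [self screenshot];
    self.topView.hidden = NO;          // restored before the next screen refresh
    return shot;
}
```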
I am trying to export a screenshot from my application to Facebook using the iOS 6 Facebook integration. How can I bring up the share options inside the application when the button is pressed?
Below is my current code. I want to take the screenshot and, at the same time, export it to Facebook using the iOS 6 Facebook integration.
- (IBAction)export1:(id)sender{
NSIndexPath *indexPath = [NSIndexPath indexPathForRow:0 inSection:0];
[self.tableView scrollToRowAtIndexPath:indexPath
atScrollPosition:UITableViewScrollPositionTop
animated:YES];
exporting.hidden=YES;
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
CGRect contentRectToCrop = CGRectMake(0, 70, 740, 740);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], contentRectToCrop);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // avoid leaking the cropped CGImage
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil);
}
Here is how to post images to Facebook easily using the built-in Facebook integration in iOS 6. I'll be happy if this helps you in any way. Happy coding :)
The Apple developer site provides the new Social.framework for integrating social experiences like Twitter and Facebook.
Refer here.
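With Social.framework you don't need the Facebook SDK at all for a simple image share. A sketch using SLComposeViewController (the `croppedImage` name and method name here are illustrative, borrowed from the question's code):

```objectivec
#import <Social/Social.h>

// Sketch: present the iOS 6 native Facebook share sheet with the screenshot attached.
- (void)shareScreenshotOnFacebook:(UIImage *)croppedImage
{
    if ([SLComposeViewController isAvailableForServiceType:SLServiceTypeFacebook]) {
        SLComposeViewController *sheet =
            [SLComposeViewController composeViewControllerForServiceType:SLServiceTypeFacebook];
        [sheet setInitialText:@"Check out my screenshot!"];
        [sheet addImage:croppedImage];
        [self presentViewController:sheet animated:YES completion:nil];
    }
}
```

The availability check matters: the service is unavailable when no Facebook account is configured in Settings.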
@Clarence, I think you should check out the latest Facebook SDK for iOS;
it also has tutorials on how to post images to Facebook in iOS 6.
Just download the SDK and you will find sample code in your Documents folder, including HelloFacebookSample, which shows how to post an image to Facebook in iOS 6.
You just need to pass your image object to this method:
[FBNativeDialogs presentShareDialogModallyFrom:self initialText:nil image:croppedImage url:nil handler:nil];
Before using the above method, you need to properly follow the v3.1 installation instructions here
I'm using the sample code from http://developer.apple.com/library/ios/#qa/qa1714/_index.html almost verbatim, and I can save my background image or my overlay, but I can't combine the two. When I try to combine them using the following code, the overlay renders the background white; I'm assuming it's overwriting the background. Any idea why this is?
Here is my method which tries to combine the view overlay with the image:
-(void) annotateStillImage
{
UIImage *image = stillImage;
// Create a graphics context with the target size
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Draw the image returned by the camera sample buffer into the context.
// Draw it into the same sized rectangle as the view that is displayed on the screen.
UIGraphicsPushContext(context);
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
UIGraphicsPopContext();
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [[self view] center].x, [[self view] center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [[self view] transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[[self view] bounds].size.width * [[[self view] layer] anchorPoint].x,
-[[self view] bounds].size.height * [[[self view] layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[[self view] layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
// Retrieve the screenshot image containing both the camera content and the overlay view
stillImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Here is a snippet of my method which creates the view in case that's relevant:
CGRect layerRect = [[[self view] layer] bounds];
[[self previewLayer] setBounds:layerRect];
[[self previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
[[[self view] layer] addSublayer:[self previewLayer]];
UIButton *overlayButton = [UIButton buttonWithType:UIButtonTypeCustom];
[overlayButton setImage:[UIImage imageNamed:@"acceptButton.png"] forState:UIControlStateNormal];
[overlayButton setFrame:[self rectForAcceptButton] ];
[overlayButton setAutoresizesSubviews:TRUE];
[overlayButton setAutoresizingMask:UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleLeftMargin];
[overlayButton addTarget:self action:@selector(acceptButtonPressed) forControlEvents:UIControlEventTouchUpInside];
[[self view] addSubview:overlayButton];
I'm expecting the image in the annotate method to be the background, and the accept button from the view drawing to be added on top. All I get is the accept button on a white background; but if I don't run the renderInContext method, all I get is the image background and no overlay.
Thanks!
Turns out I was adding the overlay to the same view as the preview layer rather than putting it in its own view. I realized this by reviewing this great project, which covers all the types of screenshots possible: https://github.com/cocoacoderorg/Screenshots
The fix was to create a separate overlay view in IB with a transparent background and add my overlay elements to it.
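The same fix can be sketched in code instead of IB; names here are hypothetical:

```objectivec
// Keep the camera preview layer and the overlay in separate views, so that
// rendering the overlay view's layer never paints over the camera image.
UIView *overlayView = [[UIView alloc] initWithFrame:self.view.bounds];
overlayView.backgroundColor = [UIColor clearColor]; // transparent background
overlayView.opaque = NO;
[overlayView addSubview:overlayButton];             // overlay elements go here
[self.view addSubview:overlayView];
```

In -annotateStillImage you would then render [overlayView layer] instead of [[self view] layer], so only the transparent overlay is composited on top of the still image.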
I would like to call a method that takes a screenshot and then load that screenshot as a subview. I am using Apple's sample code for taking a screenshot (see below) and was trying to use the resulting image in my code. However, I don't really know how to get the image out of the method and into my code. This is what I tried; it's obviously wrong, but it's all I could come up with:
// Test Screenshot:
screenShot = [UIImage screenshot]; // THIS DOESN'T WORK
screenShotView = [[UIImageView alloc] initWithImage:screenShot];
[screenShotView setFrame:CGRectMake(0, 0, 320, 480)];
[self.view addSubview:screenShotView];
And this is Apple's sample code for the method:
- (UIImage*)screenshot
{
NSLog(@"Shot");
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[window bounds].size.width * [[window layer] anchorPoint].x,
-[window bounds].size.height * [[window layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Any help would be very much appreciated! Thanks.
EDIT: This is the viewDidLoad method in which I create a TextView and then try to capture a screen shot of it:
- (void)viewDidLoad {
// Setup TextView:
NSString* someText = @"Some Text";
CGRect frameText = CGRectMake(0, 0, 320, 480);
aTextView = [[UITextView alloc] initWithFrame:frameText];
aTextView.text = someText;
[self.view addSubview:aTextView];
// Test Screenshot:
screenShotView = [[UIImageView alloc] initWithImage:[self screenshot]];
[screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
[self.view addSubview:screenShotView];
[self.view bringSubviewToFront:screenShotView];
[super viewDidLoad];
}
To use the image just change this line
screenShot = [UIImage screenshot];
To
screenShot = [self screenshot];
Edit: Check whether [self screenshot] returns a valid image or nil.
- (void)viewDidLoad {
// Setup TextView:
NSString* someText = @"Some Text";
CGRect frameText = CGRectMake(0, 0, 320, 480);
aTextView = [[UITextView alloc] initWithFrame:frameText];
aTextView.text = someText;
[self.view addSubview:aTextView];
// Test Screenshot:
UIImage *screenShotImage = [self screenshot];
if (screenShotImage) {
screenShotView = [[UIImageView alloc] initWithImage:screenShotImage];
[screenShotView setFrame:CGRectMake(10, 10, 200, 200)];
[self.view addSubview:screenShotView];
[self.view bringSubviewToFront:screenShotView];
} else {
NSLog(@"Something went wrong in the screenshot method; the image is nil");
}
[super viewDidLoad];
}
Thanks in advance.
I want to add an image view as an overlay view on the camera and save both (the camera view and the image view) as a single image. I searched and tried the example given on Stack Overflow, but it displays nothing but a blank screen. If anyone knows how, please help me.
- (void)renderView:(UIView*)view inContext:(CGContextRef)context
{
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [view center].x, [view center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [view transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
-[view bounds].size.width * [[view layer] anchorPoint].x,
-[view bounds].size.height * [[view layer] anchorPoint].y);
// Render the layer hierarchy to the current context
[[view layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);
}
// this gets called when an image has been taken with the camera
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
UIImage *image = [info valueForKey:UIImagePickerControllerOriginalImage];
//cameraImage = image;
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Draw the image returned by the camera sample buffer into the context.
// Draw it into the same sized rectangle as the view that is displayed on the screen.
float menubarUIOffset = 44.0;
UIGraphicsPushContext(context);
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height-menubarUIOffset)];
UIGraphicsPopContext();
// Render the camera overlay view into the graphic context that we created above.
[self renderView:self.imagePicker.view inContext:context];
// Retrieve the screenshot image containing both the camera content and the overlay view
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
UIGraphicsEndImageContext();
[self.imagePicker dismissModalViewControllerAnimated:YES];
}
Hope this helps