I have the following code:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(mainView.bounds.size, NO, [UIScreen mainScreen].scale);
}
else {
    UIGraphicsBeginImageContext(mainView.bounds.size);
}
[mainView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saveImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In mainView, there is a masked subview that does not appear in saveImage when using this method. However, I understand there used to be a UIGetScreenImage method pre iOS 4 that did capture such activity. My question is, what is the best way to capture CALayer activities in iOS 6? Is UIGetScreenImage still private?
I think there was a similar question about a week ago: Mask does not work when capturing a uiview to a uiimage
On iOS 6 there is a problem capturing a UIView with a mask applied (by the way, in iOS 7 it has been fixed): you capture the image, but the mask is not applied.
I posted a lengthy solution which involved applying the mask manually to the captured image. It's not very difficult and I also made a demo project of this. You can download it here:
https://bitbucket.org/reydan/so_imagemask
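In outline, the manual approach looks something like this (a rough sketch, not the actual code from that project; maskedView is a placeholder, and depending on how the mask layer is built you may first need to convert the rendered mask image to the device gray color space):

// 1. Capture the view itself (the mask is ignored on iOS 6).
UIGraphicsBeginImageContextWithOptions(maskedView.bounds.size, NO, 0);
[maskedView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *unmasked = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// 2. Capture the mask layer on its own.
UIGraphicsBeginImageContextWithOptions(maskedView.bounds.size, NO, 0);
[maskedView.layer.mask renderInContext:UIGraphicsGetCurrentContext()];
UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// 3. Re-apply the mask to the captured image with Core Graphics.
CGImageRef maskRef = CGImageMaskCreate(CGImageGetWidth(maskImage.CGImage),
                                       CGImageGetHeight(maskImage.CGImage),
                                       CGImageGetBitsPerComponent(maskImage.CGImage),
                                       CGImageGetBitsPerPixel(maskImage.CGImage),
                                       CGImageGetBytesPerRow(maskImage.CGImage),
                                       CGImageGetDataProvider(maskImage.CGImage),
                                       NULL, false);
CGImageRef maskedRef = CGImageCreateWithMask(unmasked.CGImage, maskRef);
UIImage *result = [UIImage imageWithCGImage:maskedRef
                                      scale:unmasked.scale
                                orientation:unmasked.imageOrientation];
CGImageRelease(maskRef);
CGImageRelease(maskedRef);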
If I did not understand your problem correctly, please tell me so I can remove this answer.
Try rendering the presentation layer instead, as it contains the layer's current on-screen state:
[mainView.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html#//apple_ref/occ/instm/CALayer/presentationLayer
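Note that presentationLayer can be nil when nothing is animating, so a guarded version of the capture might look like this (a sketch; it falls back to the model layer when there is no presentation copy):

UIGraphicsBeginImageContextWithOptions(mainView.bounds.size, NO, [UIScreen mainScreen].scale);
CALayer *layerToRender = mainView.layer.presentationLayer ?: mainView.layer; // model layer as fallback
[layerToRender renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saveImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();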
I have the following code which I use to capture the contents of a view into an image:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
    UIGraphicsBeginImageContextWithOptions(self.mainView.bounds.size, NO, [UIScreen mainScreen].scale);
}
else
{
    UIGraphicsBeginImageContext(self.mainView.bounds.size);
}
[self.mainView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext(); // balance the Begin call so the context is popped
Recently, I tried attaching a GLKView (which I use to apply Core Image filters in real time on the GPU) to the mainView. When I execute the above code, it doesn't capture the graphics in the GLKView (instead basically just ignores it).
So my question is, is it possible to capture graphics to an image that are drawn on the GPU and haven't yet been copied back to the CPU?
You need to grab the view's framebuffer pixel data using OpenGL ES. You can't do it with renderInContext:.
There are a couple of ways to use OpenGL ES to grab the data. Look at this answer for details.
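As a rough sketch of what the readback looks like (assuming the GLKView's EAGLContext is current, drawing has finished, and width/height are the drawable's size in pixels; requires <OpenGLES/ES2/gl.h>):

- (UIImage *)snapshotFramebufferWithWidth:(GLint)width height:(GLint)height
{
    // Read the RGBA pixels straight out of the currently bound framebuffer.
    NSInteger dataLength = width * height * 4;
    GLubyte *rawData = (GLubyte *)malloc(dataLength);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rawData);

    // Wrap the bytes in a CGImage.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rawData, dataLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                        kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                        provider, NULL, NO, kCGRenderingIntentDefault);

    // OpenGL's origin is bottom-left, so flip vertically while drawing into a UIKit context.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(rawData);
    return image;
}

Also worth knowing: on iOS 5 and later, GLKView has a snapshot method that returns a UIImage directly, which may spare you the manual readback.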
In my FirstViewController I wrote this code to switch the background image depending on whether the device is an iPhone 4 or an iPhone 5:
Filenames:
bg-for5-568@2x.png
bg-for4@2x.png
bg-for4.png
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIImage *backgroundImage;
    if ([[UIScreen mainScreen] bounds].size.height == 568)
    {
        backgroundImage = [UIImage imageNamed:@"bg-for5-568h"];
    }
    else
    {
        backgroundImage = [UIImage imageNamed:@"bg-for4"];
    }
    // imageNamed: returns an autoreleased image, so no alloc/release pair is needed here
    self.view.backgroundColor = [[[UIColor alloc] initWithPatternImage:backgroundImage] autorelease];
}
When I launch the app in the simulator, the iPhone 5 background shows at double size, extending out of the view.
/* RESOLVED THANK YOU */
I am not sure if this is the solution for this problem as I am missing some infos, but:
At first: strip the .png from your imageNamed: argument; since iOS 4, you shouldn't include the extension anymore. The next thing is: what are the names of your images? Note that the iPhone 5 has a retina display, so your image should be named bg-for5-568h@2x.png but referred to in the source code as bg-for5-568h.
Besides that: in almost every case where your image isn't a photograph, what you are doing is a bad idea. And even if it is a photograph, simply use the bigger image for the iPhone 4 and 4S as well; it's not that much bigger, so the memory footprint isn't a problem here. Have a look at UIImageView's contentMode property, which you can use to adjust the position of the larger image. You may also want to set the view's clipsToBounds property to clip the image if it isn't fullscreen.
Trust me, in my company we had a lot of hooks for stuff like ~ipad, ~iphone, @2x and even stretchable images. All of these hooks worked fine until the day Apple announced something similar, or simply a new device. So I decided not to do these kinds of hooks anymore. They seem very helpful at first, but the trouble you get into when something new hits the market isn't worth it!
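As an illustration of that approach (the names are placeholders; it assumes a single retina photograph sized for the 4-inch screen):

UIImageView *background = [[UIImageView alloc] initWithFrame:self.view.bounds];
background.image = [UIImage imageNamed:@"bg"];             // no .png, no @2x in code
background.contentMode = UIViewContentModeScaleAspectFill; // position/scale the larger image
background.clipsToBounds = YES;                            // crop whatever overflows the bounds
[self.view insertSubview:background atIndex:0];
[background release]; // MRC, matching the question's memory management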
You should append the @2x suffix to all of your retina images.
In your case the image should be stored as "bg-for5-568h@2x.png".
Hope it will resolve the issue.
I would not advise doing this. What if Apple changes the screen size again and you have to go back and rewrite all your code?
A simple fix is to use:
self.view.backgroundColor = [UIColor colorWithPatternImage:/* your image*/];
This could give you some issues with stretching or tiling.
I prefer using
UIImage *image = [[UIImage imageNamed:/* image name */] resizableImageWithCapInsets:UIEdgeInsetsMake(top, left, bottom, right)];
In iOS 6 you can improve this further by specifying whether you want the image to stretch or tile. This lets you keep a fixed border while the centre of the image (tiled by default) fills the space of your image view.
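For example (a sketch; the image name and cap insets are placeholders):

UIImage *background = [[UIImage imageNamed:@"bg"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(10.0, 10.0, 10.0, 10.0)
                   resizingMode:UIImageResizingModeTile]; // or UIImageResizingModeStretch
self.backgroundImageView.image = background; // backgroundImageView is a placeholder outlet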
I'm having an issue that seems to occur only in iOS 6. I am calling renderInContext: on a view's layer. The view is fairly simple: it has a few UIButtons. The UIImage I get back from the following code seems to be corrupted, and the UIButtons don't draw correctly.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewSnapShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This code produces a bitmap that looks like this: http://imgur.com/MyMTX
When I run the app in the iPhone 5.1 simulator this problem doesn't seem to be occurring. I'm starting to wonder if it's just a bug in iOS 6.
Anyone run into a similar issue?
I'd like to get the image being displayed by the UIImagePickerController when the user uses the camera, and once I have it, process the image and display it instead of the regular camera view.
But the problem is when I want to get the camera view, the image is just a black rectangle.
Here's my code:
UIView *cameraView = [[[[[[imagePicker.view subviews] objectAtIndex:0]
                          subviews] objectAtIndex:0]
                          subviews] objectAtIndex:0];
UIGraphicsBeginImageContext(CGSizeMake(320, 427));
[cameraView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

imageToDisplay.image = [PixelProcessing processImage:viewImage]; // In this case the image is black
//imageToDisplay.image = viewImage; // In this case the image is black too
//imageToDisplay.image = [UIImage imageNamed:@"icon.png"]; // In this case the image displays properly
What am I doing wrong?
Thanks.
This one also works quite well. Use it while the camera preview is open:
UIImage *viewImage = [[(id)objc_getClass("PLCameraController")
                        performSelector:@selector(sharedInstance)]
                        performSelector:@selector(_createPreviewImage)];
But as far as I can tell, it gives the same results as the following solution, which takes a 'screenshot' of the current screen:
extern CGImageRef UIGetScreenImage();
CGImageRef cgoriginal = UIGetScreenImage();
CGImageRef cgimg = CGImageCreateWithImageInRect(cgoriginal, rect);
UIImage *viewImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgoriginal);
CGImageRelease(cgimg);
A problem I still haven't found a fix for: how can one get the camera image quickly, without any overlays?
The unofficial call is:
UIGetScreenImage()
which you declare above the @implementation as:
extern CGImageRef UIGetScreenImage();
There may be a documented way to do this in 3.1, but I'm not sure. If not, please please file a Radar with Apple asking them to make some kind of screen grab access public!!!
Filing a Radar uses the same Apple ID you use to log in to the iPhone developer portal.
Update: This call is not yet documented, but Apple explicitly has given the OK to use it with App Store apps.
At least for now, there's no way to do this (certainly no official, documented way, and as far as I know nobody has figured out an unofficial way either).
The camera preview data is drawn by the OS in a way that bypasses the normal graphics methods.
I am looking for a way to capture a screenshot on the iPhone with the top status bar included, I am currently using the following code:
UIGraphicsBeginImageContext(self.view.bounds.size); //self.view.window.frame.size
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
The above code successfully takes a screenshot of the iPhone UIView, but does not include the top status bar (in its place is just a blank 20px space).
As of iOS 7, you can do this without the use of private APIs using
UIView *screenshotView = [[UIScreen mainScreen] snapshotViewAfterScreenUpdates:NO];
See my answer to this question:
Moving status bar in iOS 7
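If you need a UIImage rather than a snapshot view, iOS 7 also added drawViewHierarchyInRect:afterScreenUpdates:, which you can point at the key window. A minimal sketch (whether the status bar appears still depends on it living in one of your app's windows):

UIWindow *window = [[UIApplication sharedApplication] keyWindow];
UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, 0);
[window drawViewHierarchyInRect:window.bounds afterScreenUpdates:NO];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();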
You can get the entire contents of the screen by calling the private API UIGetScreenImage. See my previous answer to a similar question for details.
As of Oct 8, 2009, the use of UIGetScreenImage got my app rejected! :( I would not advise using it. I believe Apple is trying to clean up all the apps and make them conform to the new 3.x OS/APIs. I'm looking for an alternative. If anyone has any suggestions. The video API?
Instead of using private API, why not render the entire UIWindow into the image context? It might be enough to replace self.view with self.view.window in your code.
You can also get the current window(s) as a property of the [UIApplication sharedApplication] instance. It's possible the status bar is in a separate window, in which case you may have to render the windows in order.
Something like this:
UIGraphicsBeginImageContext(self.view.window.frame.size);
for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *screenshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenshotImage, nil, nil, nil);
At any rate, you probably don't need to resort to private API.