I have managed to add an iAd to a table cell, and it works as expected. When the user taps a table cell I want to perform a specific animation, so I capture the view into a UIImage using the code below and then transform the image. This works perfectly apart from the fact that the captured image contains everything except the iAd. I have swapped iAds for AdMob and it works fine, so it must be something to do with the way an iAd is attached to the view tree. Does anyone have any ideas on how to capture the iAd image?
CGRect rect = view.bounds;
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[view.layer.superlayer renderInContext:context];
UIImage *imageCaptureRect = UIGraphicsGetImageFromCurrentImageContext();
self.capturedImage = imageCaptureRect;
UIGraphicsEndImageContext();
I would hazard a guess that this is by design. Apple uses the personal information it holds about its users and their account histories to target iAds at the correct demographics. If Apple allowed developers to determine which iAds a user was receiving, it would be leaking this personal information. Consider the possibility that an iAd was targeted at people under the age of 30: an application that captured your iAds could watch for this advert and determine your age bracket.
Related
I have the following code:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
UIGraphicsBeginImageContextWithOptions(mainView.bounds.size, NO, [UIScreen mainScreen].scale);
}
else {
UIGraphicsBeginImageContext(mainView.bounds.size);
}
[mainView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saveImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In mainView, there is a masked subview that does not appear in saveImage when using this method. However, I understand there used to be a UIGetScreenImage method, pre-iOS 4, that did capture such activity. My question is: what is the best way to capture CALayer activity in iOS 6? Is UIGetScreenImage still private?
I think there was a similar question about a week ago: Mask does not work when capturing a uiview to a uiimage
On iOS 6 there is a problem capturing a UIView with a mask applied (by the way, this has been fixed in iOS 7): you capture the image, but the mask is not applied.
I posted a lengthy solution which involved applying the mask manually to the captured image. It's not very difficult and I also made a demo project of this. You can download it here:
https://bitbucket.org/reydan/so_imagemask
If I did not understand your problem correctly, please tell me so I can remove this answer.
Try getting the presentation layer instead, as it will contain the layer's current on-screen state.
[mainView.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html#//apple_ref/occ/instm/CALayer/presentationLayer
I've created a custom button out of layers. I create a bunch of layers in init. Everything works as expected until I go to modify them. The problem only occurs on actual devices; on the simulator the code works as expected.
I've tried forcing the layers to render by setting needsDisplay and needsLayout to YES.
-(void) setBackgroundColor:(UIColor *)backgroundColor
{
//return; // if I return here, the button gets initialized with its default royal blue color. This works correctly on the simulator and device
[primaryBackground setColors: [NSArray arrayWithObject:(id)backgroundColor.CGColor]];
//return; //if I return here, primaryBackground will display a clear background on the device, but will display CGColor on the simulator.
CALayer* ll = [CALayer layer];
ll.frame=self.frame;
ll.cornerRadius=primaryBackground.cornerRadius;
[ll setBackgroundColor:backgroundColor.CGColor];
[primaryBackground addSublayer:ll];
//if I return here, primaryBackground will display the "expected" results on both platforms, i.e. it displays the new layer, but not its background color.
}
SIMULATOR
DEVICE
TESTS
I've tested on iOS 5.1 and 4.2 with the same results. Appears to work the same on simulators version 4.1, 4.2 and 5.1
I see at least 2 problems with your code:
You are adding a new layer every time the color is changed, without removing the old one. First, remove the previously created sublayer (or simply change its color without creating and adding a new one). Remember that the default opaque property of CALayer is set to NO, which means it is being blended with all the other sublayers.
Why add another sublayer anyway? In your code, you cover primaryBackground completely. Couldn't the setBackgroundColor: method just look like this:
- (void)setBackgroundColor:(UIColor *)backgroundColor {
[primaryBackground setBackgroundColor:backgroundColor.CGColor];
}
?
But if you really must have an additional layer, you should set ll's frame to the bounds of the superlayer (not its frame): it should be ll.frame = self.bounds. You should also override the - (void)layoutSubviews method and set the frame of the background layer to the bounds of the root layer again.
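A minimal sketch of that override (assuming the background layer is kept in an ivar named `backgroundLayer`, a hypothetical name for illustration):

```objc
- (void)layoutSubviews
{
    [super layoutSubviews];
    // Keep the background layer matched to the root layer's bounds
    // whenever the view is resized or laid out again.
    backgroundLayer.frame = self.bounds;
}
```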
Remember that, on the iPhone Simulator, things are rendered by a different subsystem, and a lot faster than on a real device, so some frame calculations may overlap, etc.
I am performing an image-sequence animation on the main thread.
At the same time I want to take snapshots of the device screen in the background.
Then I want to use those snapshots to make a video.
Thanks,
Keyur Prajapati
For taking screenshots while the image animations are running, use
[self.view.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
instead of
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
It will take screenshots while the animations are running.
iOS uses Core Animation as the rendering system. Each UIView is backed by a CALayer. Each visible layer tree is backed by a presentation tree. The layer tree contains the object model values for each layer, i.e., values you set when you assign a value to a layer property (A and B). The presentation tree contains the values that are currently being presented to the user as an animation takes place (interpolated values between A and B).
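Putting that together, a capture that reflects the in-flight animation values might look like this sketch (assuming it runs on the main thread against a view controller's `self.view`):

```objc
// Open a bitmap context at the screen's scale
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO,
                                       [UIScreen mainScreen].scale);
// Render the presentation layer, which holds the interpolated
// values currently on screen, rather than the model layer's values
[self.view.layer.presentationLayer
    renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```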
If you're doing it in Core Animation you can render the layer contents into a bitmap using -renderInContext:. Have a look at Matt Long's tutorial. It's for Objective-C on the Mac, but it can easily be converted for use on the iPhone.
Create another thread where you can do this:
// Create the rect portion for the image; if full screen then 320x480
CGRect contextRect = CGRectMake(0, 0, 320, 480);
// This is what you need
UIGraphicsBeginImageContext(contextRect.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
viewImage is the image you need.
You can write this code in a function that is called on a timed basis, e.g. every 5 seconds, or according to your requirements.
Hope this is what you needed.
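One way to drive that timed capture is an NSTimer; this is only a sketch, and `captureScreen` is a hypothetical selector you would implement yourself using the snapshot code above:

```objc
// Fire every 5 seconds on the current run loop; each tick calls
// the (hypothetical) captureScreen method that grabs one frame.
[NSTimer scheduledTimerWithTimeInterval:5.0
                                 target:self
                               selector:@selector(captureScreen)
                               userInfo:nil
                                repeats:YES];
```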
I'm working on an app in which I am looking to take a screenshot of the map to record where a person was when the action was performed, which is then sent in a report back to the server. I do not want the person to see the map pop up while the screenshot is being taken and the user should be oblivious of the screenshot until they review the report.
I have looked at the following question:
how to take a screen shot from a mapview
But they are making the map visible in order to take the screenshot.
Thank you for your time.
If I understand correctly, you have a view somewhere and you do not want it on screen, but you want to capture an image of it.
// The second argument is the opaque flag (a BOOL), not an alpha value
UIGraphicsBeginImageContextWithOptions(mapView.bounds.size, NO, mapView.contentScaleFactor);
CGContextRef context = UIGraphicsGetCurrentContext();
mapView.backgroundColor = [UIColor whiteColor];
[mapView.layer renderInContext:context];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Ok So I have this code, which allows me to put a background image in:
I would love to know how to size this so that on the iPhone 4 I get a 320x480 size, but make it nice with an image of 570x855.
self.view.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"background_stream-570x855.jpg"]];
I have tried this:
UIImageView *image = [[UIImageView alloc]
initWithImage:[UIImage imageNamed:@"background_stream-570x855.jpg"]];
[self.view sendSubviewToBack:streamBG];
image.frame = CGRectMake(0, 0, 320, 480);
Which works; however, the problem is that it's behind the view, so I can't see it. I could make the view clear, but it has objects on it that need to be displayed.
Any help would be most appreciated.
There are multiple options for placing a view at the desired position in the subview hierarchy.
[self.view sendSubviewToBack:streamBG];  // Sends the subview to the back, i.e. the last position
[self.view bringSubviewToFront:streamBG]; // Brings the subview to the front, i.e. the first position
[self.view insertSubview:streamBG atIndex:yourDesiredIndex]; // Inserts at the desired index
This is not answer to your question, though it may help you to set your imageview to desired location.
Hope it helps.
To answer the part of your question about sizing: you need to provide 2 different images for your app if you want the full resolution of the Retina display (iPhone 4). Provide a 320x480 image (i.e. myImage.png) for older iPhones and one at twice the resolution, 640x960, for the iPhone 4. Use the exact same name for the high-resolution version but add "@2x" to the end (i.e. myImage@2x.png). In your code all you ever have to call is the base name (i.e. myImage.png); the app will choose the correct image based on the hardware it's running on. That will get you the best experience in your app.
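For example, with two files named myImage.png (320x480) and myImage@2x.png (640x960) in the bundle (hypothetical names for illustration), a single call covers both screen densities:

```objc
// UIKit picks myImage@2x.png automatically on Retina hardware,
// and falls back to myImage.png on older devices.
UIImage *background = [UIImage imageNamed:@"myImage.png"];
```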
On the other part about the background view's visibility: you could make the background color clear ([UIColor clearColor]) on the view that is blocking it. That would leave the content visible in that view, but the view itself would be clear. Alternatively, you could just insert the background at a specific index, as @Jennis has suggested, instead of forcing it to the back.