I am using a UIWebView to display HTML formatted text. I am not loading a webpage, just supplying a string of HTML to the UIWebView.
Now I want to animate this UIWebView on screen, actually several of them (2-10 at a time). UIWebView is a little heavy, and although I haven't attempted it yet, I am planning for the worst. (I don't think this is premature optimization; I'm almost positive this will be an issue.)
To get around the problem, I figured I could convert the contents of the UIWebViews to UIImages and animate them instead.
So, my questions are:
1. How do you convert UIWebView contents to a UIImage (or CGImageRef)?
2. My UIWebViews have transparent backgrounds; will the transparency be carried over to the UIImage?
Thanks for any suggestions.
// Render the web view's layer into an image context and grab the result.
UIGraphicsBeginImageContext(webview.bounds.size);
[webview.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You might run into issues if the webview dimensions are large because the webview uses a CATiledLayer that doesn't draw everything for memory reasons.
The image will include transparency.
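For completeness, a minimal sketch of how a UIWebView is typically made transparent before capturing it (standard UIView properties; the HTML body background is assumed to be transparent as well):
// Let the view's transparency show through in the rendered image.
webview.opaque = NO;
webview.backgroundColor = [UIColor clearColor];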
I could be wrong, but if you set the cache parameter of the Core Animation transition, the OS will handle this optimisation for you.
e.g.
[UIView setAnimationTransition:transition forView:self.view cache:YES];
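A minimal sketch of the full animation block that call belongs in (the transition value and duration here are just placeholders):
// Old-style UIView animation block using the cached transition.
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:0.75];
[UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft
                       forView:self.view
                         cache:YES];
// ...swap subviews or update content here...
[UIView commitAnimations];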
I am trying to display a grid of documents (saved in the documents directory), but I don't know how to generate thumbnails for them. The documents can be anything that a QLPreviewController can display. PDFs and images are fine, but I'm not sure about other types such as .doc files. Any guidance would help.
Since you have a UIView that can display any of these documents, you could just take a shortcut:
- Create an instance of your preview controller with the displayed document
- Do not add this view/controller to anything
- Create an image from its layer
This might help:
+ (UIImage *)imageFromView:(UIView *)view {
CALayer *layer = view.layer;
UIGraphicsBeginImageContext([layer frame].size);
[layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
Play around a bit with layers and initialization, as I didn't test the code.
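For illustration, an untested sketch of the offscreen approach described above. It assumes you import <QuickLook/QuickLook.h>, that self adopts QLPreviewControllerDataSource (returning an NSURL for the document, since NSURL conforms to QLPreviewItem), and that imageFromView: above lives on a helper class named ThumbnailHelper (a placeholder name):
// Build the preview controller offscreen and capture its view.
QLPreviewController *preview = [[QLPreviewController alloc] init];
preview.dataSource = self;                       // supplies the document URL
preview.view.frame = CGRectMake(0, 0, 150, 200); // desired thumbnail size
[preview reloadData];
UIImage *thumbnail = [ThumbnailHelper imageFromView:preview.view];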
A better option: you can use a UIWebView and simply load the file from its file path, then take the screenshot using the code given above by Matic Oblak, and you are done.
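A rough sketch of that approach, assuming a local filePath string and that self adopts UIWebViewDelegate; the capture mirrors the snippet earlier in this thread:
// In some setup code: load the local document into a small web view.
UIWebView *docView = [[UIWebView alloc] initWithFrame:CGRectMake(0, 0, 150, 200)];
docView.delegate = self;
[docView loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:filePath]]];

// Delegate callback: capture once the document has finished loading.
- (void)webViewDidFinishLoad:(UIWebView *)aWebView {
    UIGraphicsBeginImageContext(aWebView.bounds.size);
    [aWebView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // cache or display the thumbnail here
}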
I am performing an image sequence animation on the main thread.
At the same time I want to take snapshots of the device screen in the background.
Then, using those snapshots, I want to make a video.
Thanks,
Keyur Prajapati
For taking the screenshots while the image animations are running, use
[self.view.layer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
instead of
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
It will take screenshots while the animations are running.
iOS uses Core Animation as the rendering system. Each UIView is backed by a CALayer. Each visible layer tree is backed by a presentation tree. The layer tree contains the object model values for each layer, i.e., values you set when you assign a value to a layer property (A and B). The presentation tree contains the values that are currently being presented to the user as an animation takes place (interpolated values between A and B).
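As a small illustration of the difference (view is any animating UIView; the properties used are standard CALayer API):
CGPoint target = view.layer.position;                      // model value (B)
CALayer *live = (CALayer *)[view.layer presentationLayer]; // in-flight copy
CGPoint onScreen = live ? live.position : target;          // interpolated value currently shown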
If you're doing it in Core Animation you can render the layer contents into a bitmap using -renderInContext:. Have a look at Matt Long's tutorial. It's for Objective-C on the Mac, but it can easily be converted for use on the iPhone.
Create another thread where you can do this:
// Create the rect portion for the image; for full screen use 320x480
CGRect contextRect = CGRectMake(0, 0, 320, 480);
// This is what you need
UIGraphicsBeginImageContext(contextRect.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
viewImage is the image you need.
You can put this code in a function that is called on a timed basis, for example every 5 seconds, or according to your requirements.
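For example, a minimal sketch of such a timed call (the captureScreen method name and the 5-second interval are placeholders):
// Call the snapshot method above every 5 seconds.
[NSTimer scheduledTimerWithTimeInterval:5.0
                                 target:self
                               selector:@selector(captureScreen)
                               userInfo:nil
                                repeats:YES];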
Hope this is what you needed.
Currently I am trying to show a simple table in my iPhone application, where I use UITableViewCells with the style UITableViewCellStyleValue1 (image on the left, detail label right-aligned). The cells all have the default height (50.0f). Before I add an image to a cell, I resize the image to 40x40 so that it is not the full height of the cell (I think that looks ugly).
I do this with this code:
cell.imageView.image = [UIImage imageNamed:@"icon.png"];
cell.imageView.image = [RootViewController imageWithImage:cell.imageView.image scaledToSize:CGSizeMake(40, 40)];
This is all very nice and works flawlessly. But I also want to accomplish this on the iPhone 4 (with the higher resolution screen). The problem is that everything scales without problems on the iPhone 4, but the images appear very pixelated.
The reason for this is of course that everything on the screen is scaled up to the new resolution, including the images, so the images should probably be something like 80x80. But when I resize them to 80x80 (the originals are 120x120) they appear far too big, because of that same scaling.
Is there a way to make my images less than the full height of a table cell, but still display them at the higher resolution on the iPhone 4? Should I create a completely new view for this?
Oops, after the first reply I realised that the function I wrote myself was missing:
+ (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
UIGraphicsBeginImageContextWithOptions(newSize, NO, [[UIScreen mainScreen] scale]);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
As you can see, after the first reply, I tried to get this to work with the method UIGraphicsBeginImageContextWithOptions but somehow this results in an empty image.
I assume you wrote "imageWithImage:scaledToSize:", right?
I further assume you use "UIGraphicsBeginImageContext(yourSize)" within this call. Replace that with "UIGraphicsBeginImageContextWithOptions(yourSize, NO, 2.0)" in case your platform is iPhone 4.
The "2.0" defines the scale factor for points (you define the size in points not in pixels). On pre-retina-display a point is 1x1 pixel. On retina display a point is 2x2 pixels.
Edit:
Make sure you have a high-res version of "icon.png" in your resources called "icon@2x.png". It is loaded automatically on a Retina display.
In my code I'm trying to show a UIWebView as a page is loading, and then, when it's done, capture an image from the web view to cache and display later (so I don't have to reload and render the web page).
I have something along the lines of:
CGContextRef context = CGBitmapContextCreate(…);
[[webView layer] renderInContext:context];
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:imageRef];
The problem I'm running into is that, due to UIWebView's tiling, sometimes only half of the page is rendered to the context by the time I capture the image.
Is there a way to detect or block on UIWebView's background rendering thread so that I can get the image only after all of the rendering has finished?
UPDATE: It may be that thread race conditions were a red herring (it's unclear from the documentation, at any rate, whether UIWebView's custom layer or a CATiledLayer in general blocks on its background threads).
This may instead have been an invalidation issue (despite several sorts of calls to setNeedsDisplay on both the UIWebView and its layer). Changing the bounds of the UIWebView before rendering it appears to have eliminated the "not drawing the whole thing" problem.
I still ran into a problem where a few tiles were being drawn at the old scale, but calling renderInContext: twice seems to have mitigated that sufficiently.
UIWebView is probably using a CATiledLayer or custom derivative. You may be able to replace the layer with something of your own choosing such as a simple CALayer which does not do threaded drawing. Replace the layer before you start loading content.
If replacing the layer with a standard CALayer does not work, you may have to make your own subclass that emulates the behavior of a CATiledLayer without actually being threaded.
Edit:
From CATiledLayer.h
/* Note: do not attempt to directly modify the `contents' property of
* an CATiledLayer object - doing so will effectively turn it into a
* regular CALayer. */
So you may just be able to set the contents to nil before calling renderInContext:
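A heavily hedged sketch of that idea, based only on the header note above (untested, and it may have visible side effects on the live web view):
// Setting `contents` demotes the tiled layer to a regular CALayer (per the
// header note above); the idea is to sidestep the threaded tile drawing
// before capturing.
webView.layer.contents = nil;
UIGraphicsBeginImageContext(webView.bounds.size);
[webView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();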
I'm developing an iPhone app using a coverFlow view. When the app builds the cards it uses a UIView to add labels and other content. Then I convert the UIView into a UIImage using the following code:
UIGraphicsBeginImageContext(imageView.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// returning the UIImage
return viewImage;
Every time I redraw the coverFlow I see a huge increase in allocated memory that never decreases, even if I dealloc my coverFlow view.
I think the memory leak is in the code I added. What do you think?
There is no memory leak apparent in the code snippet you provided. That operation cannot be performed on a background thread because of UIGraphicsBeginImageContext(), so an NSAutoreleasePool should already be in place (the return value of UIGraphicsGetImageFromCurrentImageContext() is autoreleased). Without further information, it's impossible to tell where the memory leak could be. I suggest you look at whatever objects eventually own the viewImage object and make sure you are properly releasing the UIImage if you retain it.
Use drawViewHierarchyInRect:afterScreenUpdates: instead of renderInContext:; it is 15x faster.
You can see the comparison in this article.
Also, I have created a Swift extension for doing this: https://stackoverflow.com/a/32042439/517707
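A minimal Objective-C sketch of the same approach (iOS 7+; view is whatever view you want to snapshot):
// Snapshot using the faster drawViewHierarchyInRect: API; 0.0 uses the screen's scale.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();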