I have the following method which takes some CAShapeLayers and converts them into a UIImage. The UIImage is used in a cell for a UITableView (much like the photos app when you select a photo from one of your libraries). This method is called from within a GCD block:
- (UIImage *)imageAtIndex:(NSUInteger)index
{
    Graphic *graphic = [[Graphic graphicWithType:index] retain];
    CALayer *layer = [[CALayer alloc] init];

    layer.bounds = CGRectMake(0, 0, graphic.size.width, graphic.size.height);
    layer.shouldRasterize = YES;
    layer.anchorPoint = CGPointZero;
    layer.position = CGPointMake(0, 0);

    for (int i = 0; i < [graphic.shapeLayers count]; i++)
    {
        [layer addSublayer:[graphic.shapeLayers objectAtIndex:i]];
    }

    CGFloat largestDimension = MAX(graphic.size.width, graphic.size.height);
    CGFloat maxDimension = self.thumbnailDimension;
    CGFloat multiplicationFactor = maxDimension / largestDimension;
    CGSize graphicThumbnailSize = CGSizeMake(multiplicationFactor * graphic.size.width,
                                             multiplicationFactor * graphic.size.height);

    layer.sublayerTransform = CATransform3DScale(layer.sublayerTransform,
                                                 graphicThumbnailSize.width / graphic.size.width,
                                                 graphicThumbnailSize.height / graphic.size.height,
                                                 1);
    layer.bounds = CGRectMake(0, 0, graphicThumbnailSize.width, graphicThumbnailSize.height);

    UIGraphicsBeginImageContextWithOptions(layer.bounds.size, NO, 0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();

    [layer release];
    [graphic release];

    return [image autorelease];
}
For whatever reason, when I'm scrolling the UITableView and loading the images in, it stutters a little. I know the GCD code is fine because it has worked previously, so it appears something in this code is causing the stuttering. Does anyone know what that could be? Is Core Animation not thread-safe? Or does anyone know a better way to take a bunch of CAShapeLayers and convert them into a UIImage?
In the end I believe:
[layer renderInContext:UIGraphicsGetCurrentContext()];
Cannot be done on a separate thread, so I had to do the following:
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
//draw the mutable paths of the CAShapeLayers to the context
UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
There is a great example of this (where I learned to do it) in the WWDC 2012 video "Building Concurrent User Interfaces on iOS".
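For reference, here is a minimal sketch of what "drawing the paths" might look like. It assumes the shape layers only use path, fillColor, strokeColor and lineWidth (fill rules, line caps, gradients, etc. are ignored), and that imageSize and scaleFactor come from the surrounding thumbnail calculation:

// Rough sketch; imageSize and scaleFactor are assumed to come from the thumbnail code above.
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextScaleCTM(context, scaleFactor, scaleFactor);

for (CAShapeLayer *shapeLayer in graphic.shapeLayers) {
    if (shapeLayer.fillColor) {
        CGContextAddPath(context, shapeLayer.path);
        CGContextSetFillColorWithColor(context, shapeLayer.fillColor);
        CGContextFillPath(context);
    }
    if (shapeLayer.strokeColor) {
        // the fill above consumes the path, so add it again before stroking
        CGContextAddPath(context, shapeLayer.path);
        CGContextSetStrokeColorWithColor(context, shapeLayer.strokeColor);
        CGContextSetLineWidth(context, shapeLayer.lineWidth);
        CGContextStrokePath(context);
    }
}

UIImage *image = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();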
I am trying to overlap two local images and show the overlapped result in a third image.
I am using this code, but the simulator shows nothing.
- (void)viewDidLoad
{
    [super viewDidLoad];
    image1 = [[UIImage alloc] init];
    image1 = [UIImage imageNamed:@"iphone.png"];
    imageA = [[UIImageView alloc] initWithImage:image1];
    [self merge];
}
- (void)merge
{
    CGSize size = CGSizeMake(320, 480);
    UIGraphicsBeginImageContext(size);

    CGPoint thumbPoint = CGPointMake(0, 0);
    imageview.image = imageA.image;
    [imageA.image drawAtPoint:thumbPoint];

    imageB = [[UIImage alloc] init];
    imageB = [UIImage imageNamed:@"Favorites.png"];
    CGPoint starredPoint = CGPointMake(0, 0);
    [imageB drawAtPoint:starredPoint];

    UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    imageview.image = imageC;
    [self.view addSubview:imageview];
}
I can't figure out where I am making a mistake.
Any help would be appreciated.
Remove all the code from everywhere except the code below in merge.
- (void)merge
{
    CGSize size = CGSizeMake(320, 480);
    UIGraphicsBeginImageContext(size);

    CGPoint point1 = CGPointMake(0, 0);
    // The second point has to be somewhere different from the first point;
    // otherwise the second image will sit exactly on top of the first one,
    // and you won't even know that two images are there.
    CGPoint point2 = CGPointMake(100, 100);

    UIImage *imageOne = [UIImage imageNamed:@"Image1.png"];
    [imageOne drawAtPoint:point1];

    UIImage *imageTwo = [UIImage imageNamed:@"Image2.png"];
    // If you want the top image to have some blending, you can do something like this:
    // [imageTwo drawAtPoint:point2 blendMode:kCGBlendModeMultiply alpha:0.5];
    [imageTwo drawAtPoint:point2];

    UIImage *imageC = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(100, 100, 200, 200)];
    iv.image = imageC;
    [self.view addSubview:iv];
}
Here's a general-purpose "merge" function written as a UIImage category; it allows image overlay/underlay.
http://saveme-dot-txt.blogspot.com/2011/06/merge-image-function.html
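The linked post has the full implementation; as a rough idea (this is not the code from that post, just a sketch of what such a category might look like):

@interface UIImage (Merge)
- (UIImage *)imageByOverlayingImage:(UIImage *)overlay atPoint:(CGPoint)point;
@end

@implementation UIImage (Merge)
- (UIImage *)imageByOverlayingImage:(UIImage *)overlay atPoint:(CGPoint)point
{
    // draw the receiver first, then the overlay on top of it
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0.0);
    [self drawAtPoint:CGPointZero];
    [overlay drawAtPoint:point];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}
@end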
I tried to capture an image from a UIWebView using the method below, but the image contains only the visible area of the screen. How do I capture the full content of the UIWebView, including the off-screen areas, i.e. the entire web page, in one single image?
- (UIImage *)captureScreen:(UIView *)viewToCapture
{
    UIGraphicsBeginImageContext(viewToCapture.bounds.size);
    [viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
Check out Rendering a UIWebView into an ImageContext, or just use this:
- (UIImage *)imageFromWebview:(UIWebView *)webview
{
    // store the original frame size so it can be restored after the snapshot
    CGRect originalFrame = webview.frame;

    // get the width and height of the web page using JavaScript
    // (you might need another call; this doesn't always work)
    int webViewHeight = [[webview stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
    int webViewWidth = [[webview stringByEvaluatingJavaScriptFromString:@"document.body.scrollWidth;"] integerValue];

    // set the web view's frame to match the size of the page
    [webview setFrame:CGRectMake(0, 0, webViewWidth, webViewHeight)];

    // take the snapshot
    UIGraphicsBeginImageContextWithOptions(webview.frame.size, NO, 0.0);
    [webview.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // restore the web view's original frame
    [webview setFrame:originalFrame];

    // and voilà :)
    return image;
}
EDIT (from a comment by Vad)
The solution was to call
webView.scalesPageToFit = YES;
at initialization, and
[webView sizeToFit];
when the page finishes loading.
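Roughly, assuming the view controller owns the web view (a hypothetical webView property) and acts as its delegate, that comes down to something like:

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.webView.scalesPageToFit = YES;
    self.webView.delegate = self;
}

- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    [webView sizeToFit];
    // the web view now matches its content size, so a snapshot taken here
    // (e.g. with the imageFromWebview: method above) captures the whole page
}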
You are currently capturing only the visible part because you are limiting the image context to what's visible. You should size it to the full content instead.
UIWebView has a scrollView property whose contentSize tells you the size of the page inside the scroll view. You can use that size to set up your image context, like this:
- (UIImage *)captureScreen:(UIWebView *)viewToCapture
{
    CGSize overallSize = viewToCapture.scrollView.contentSize;
    UIGraphicsBeginImageContext(overallSize);

    // Save the current bounds
    CGRect tmp = viewToCapture.bounds;
    viewToCapture.bounds = CGRectMake(0, 0, overallSize.width, overallSize.height);

    // Wait for the view to finish loading.
    // This is not very nice, but it should work. A better approach would be
    // to use a delegate and run the capture in the did-finish-load callback.
    while (viewToCapture.loading) {
        [NSThread sleepForTimeInterval:0.1];
    }

    [viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Restore the bounds
    viewToCapture.bounds = tmp;
    return viewImage;
}
EDIT: New answer from me, with tested code.
Add the method below to capture a UIWebView into a UIImage. It will capture the invisible areas as well.
- (UIImage *)webviewToImage:(UIWebView *)webView
{
    int currentWebViewHeight = webView.scrollView.contentSize.height;
    int scrollByY = webView.frame.size.height;

    [webView.scrollView setContentOffset:CGPointMake(0, 0)];
    NSMutableArray *images = [[NSMutableArray alloc] init];

    CGRect screenRect = webView.frame;

    int pages = currentWebViewHeight / scrollByY;
    if (currentWebViewHeight % scrollByY > 0) {
        pages++;
    }

    for (int i = 0; i < pages; i++)
    {
        if (i == pages - 1) {
            if (pages > 1)
                screenRect.size.height = currentWebViewHeight - scrollByY;
        }

        if (IS_RETINA)
            UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0);
        else
            UIGraphicsBeginImageContext(screenRect.size);

        if ([webView.layer respondsToSelector:@selector(setContentsScale:)]) {
            webView.layer.contentsScale = [[UIScreen mainScreen] scale];
        }

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);

        [webView.layer renderInContext:ctx];

        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        if (i == 0)
        {
            scrollByY = webView.frame.size.height;
        }
        else
        {
            scrollByY += webView.frame.size.height;
        }
        [webView.scrollView setContentOffset:CGPointMake(0, scrollByY)];

        [images addObject:newImage];
    }

    [webView.scrollView setContentOffset:CGPointMake(0, 0)];

    UIImage *resultImage;

    if (images.count > 1) {
        // join all the page images together
        CGSize size = CGSizeZero;
        for (int i = 0; i < images.count; i++) {
            size.width = MAX(size.width, ((UIImage *)[images objectAtIndex:i]).size.width);
            size.height += ((UIImage *)[images objectAtIndex:i]).size.height;
        }

        if (IS_RETINA)
            UIGraphicsBeginImageContextWithOptions(size, NO, 0);
        else
            UIGraphicsBeginImageContext(size);

        if ([webView.layer respondsToSelector:@selector(setContentsScale:)]) {
            webView.layer.contentsScale = [[UIScreen mainScreen] scale];
        }

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);

        // stack the page images vertically
        int y = 0;
        for (int i = 0; i < images.count; i++) {
            UIImage *img = [images objectAtIndex:i];
            [img drawAtPoint:CGPointMake(0, y)];
            y += img.size.height;
        }

        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        resultImage = [images objectAtIndex:0];
    }

    [images removeAllObjects];

    return resultImage;
}
Also add this macro for checking whether the device has a Retina display:
#define IS_RETINA ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && ([UIScreen mainScreen].scale == 2.0))
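Hypothetical usage (the webView and previewImageView properties are just placeholders), once the page has finished loading:

// e.g. in webViewDidFinishLoad:, after the content has loaded
UIImage *fullPageImage = [self webviewToImage:self.webView];
self.previewImageView.image = fullPageImage; // hypothetical image view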
I'm developing an app that manages a client's orders.
In this view I have implemented a Facebook-style menu (the kind that appears by shifting the whole window to the right), and I've added a greyscale effect to the main view while it is shifted.
I've accomplished this by creating a UIImage of the current screen, adding it over the real view, and animating its alpha between 0 and 1.
Here's the code I've used:
- (void)toggleMenu
{
    if (![menu isOpened]) {
        // take the snapshot only when opening, and make sure the context is always ended
        UIGraphicsBeginImageContext(self.view.bounds.size);
        [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        UIImage *blackAndWhiteImage = [UIImage getBlackAndWhiteVersionOfImage:viewImage];
        overlayImage = [[UIImageView alloc] initWithImage:blackAndWhiteImage];
        overlayImage.alpha = 0.1;
        overlayImage.userInteractionEnabled = NO;
        [self.view addSubview:overlayImage];
    }

    [UIView animateWithDuration:0.3 animations:^{
        CGRect newRect = [self tabBarController].view.frame;
        if (![menu isOpened]) {
            newRect.origin.x += 150;
            [menu setOpened:YES];
            overlayImage.alpha = 1.0;
        } else {
            newRect.origin.x -= 150;
            [menu setOpened:NO];
            overlayImage.alpha = 0;
        }
        [self tabBarController].view.frame = newRect;
    } completion:^(BOOL finished) {
        if (![menu isOpened]) {
            [overlayImage removeFromSuperview];
            overlayImage = nil;
        }
    }];
}
The problem is that I'm having performance issues during the animation (minor ones on the iPhone 4; I'll try a 3GS in the next few hours).
Does anyone have any suggestions on how to get better performance?
Regards
+ (UIImage *)getBlackAndWhiteVersionOfImage:(UIImage *)anImage
{
    UIImage *newImage = nil;

    if (anImage) {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGContextRef context = CGBitmapContextCreate(nil,
                                                     anImage.size.width * anImage.scale,
                                                     anImage.size.height * anImage.scale,
                                                     8,
                                                     anImage.size.width * anImage.scale,
                                                     colorSpace,
                                                     kCGImageAlphaNone);
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
        CGContextSetShouldAntialias(context, NO);
        // the bitmap context is in pixels, so draw into the full pixel rect
        CGContextDrawImage(context,
                           CGRectMake(0, 0, anImage.size.width * anImage.scale, anImage.size.height * anImage.scale),
                           [anImage CGImage]);

        CGImageRef bwImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);

        UIImage *resultImage = [UIImage imageWithCGImage:bwImage];
        CGImageRelease(bwImage);

        UIGraphicsBeginImageContextWithOptions(anImage.size, NO, anImage.scale);
        [resultImage drawInRect:CGRectMake(0.0, 0.0, anImage.size.width, anImage.size.height)];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }

    return newImage;
}
Making an image of the screen is costly. I'm not clear exactly what effect you want, but I would overlay another view or animate a move of the main view instead.
For example, you can overlay a UIView that has a black background but an alpha of 0.1 to grey out a region.
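A minimal sketch of that overlay approach, assuming the same 150pt menu offset as the original code:

UIView *dimmingView = [[UIView alloc] initWithFrame:self.view.bounds];
dimmingView.backgroundColor = [UIColor blackColor];
dimmingView.alpha = 0.0;
dimmingView.userInteractionEnabled = NO;
[self.view addSubview:dimmingView];

[UIView animateWithDuration:0.3 animations:^{
    dimmingView.alpha = 0.4; // darken instead of desaturating; no snapshot needed
    CGRect newRect = [self tabBarController].view.frame;
    newRect.origin.x += 150;
    [self tabBarController].view.frame = newRect;
}];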
I'm wanting to know if there's a way I can transform my view to look something like iPhone folders. In other words, I want my view to split somewhere in the middle and reveal a view underneath it. Is this possible?
EDIT:
Per the suggestion below, I could take a screenshot of my application by doing this:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Not sure what to do with this, however.
EDIT:2
I've figured out how to add some shadows to my view, and here's what I've achieved (cropped to show relevant part):
EDIT:3
http://github.com/jwilling/JWFolders
The basic thought would be to take a picture of your current state and split it somewhere, then animate both parts by setting new frames. I don't know how to take a screenshot programmatically, so I can't provide sample code…
EDIT: hey hey it's not looking great but it works ^^
// This wouldn't be sharp on Retina displays; use the "WithOptions" variant and set scale to 0.0.
// UIGraphicsBeginImageContext(self.view.bounds.size);
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGRect fstRect = CGRectMake(0, 0, 320, 200);
CGRect sndRect = CGRectMake(0, 200, 320, 260); // was 0,200,320,280

CGImageRef fImageRef = CGImageCreateWithImageInRect([f CGImage], fstRect);
UIImage *fCroppedImage = [UIImage imageWithCGImage:fImageRef];
CGImageRelease(fImageRef);

CGImageRef sImageRef = CGImageCreateWithImageInRect([f CGImage], sndRect);
UIImage *sCroppedImage = [UIImage imageWithCGImage:sImageRef];
CGImageRelease(sImageRef);

UIImageView *first = [[UIImageView alloc] initWithFrame:fstRect];
first.image = fCroppedImage;
//first.contentMode = UIViewContentModeTop;

UIImageView *second = [[UIImageView alloc] initWithFrame:sndRect];
second.image = sCroppedImage;
//second.contentMode = UIViewContentModeBottom;

UIView *blank = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];
blank.backgroundColor = [UIColor darkGrayColor];

[self.view addSubview:blank];
[self.view addSubview:first];
[self.view addSubview:second];

[UIView animateWithDuration:2.0 animations:^{
    second.center = CGPointMake(second.center.x, second.center.y + 75);
}];
You can uncomment the two .contentMode lines and the quality will improve, but in my case the subview had an offset of 10px or so (you can see it by setting a background color on both subviews).
EDIT 2: OK, found that bug. I had used the whole 320x480 screen, but the status bar has to be cut off, so it should be 320x460 and everything works great ;)
Instead of taking a snapshot of the view, you could use a separate view for each row of icons. You'll have to do a bit more work with repositioning stuff, but the rows won't be static when the folder is open (in other words, they'll keep redrawing as necessary).
I took relikd's code as a base and made it a bit more dynamic.
You can specify the split position and direction when calling the function, and I added a border to the split images.
#define splitAnimationTime 0.5

- (void)split:(SplitDirection)splitDirection
    atYPosition:(int)splitYPosition
    withRevealedViewHeight:(int)revealedViewHeight
{
    // This wouldn't be sharp on Retina displays; use the "WithOptions" variant and set scale to 0.0.
    // UIGraphicsBeginImageContext(self.view.bounds.size);
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *f = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGRect fullScreenRect = [self getScreenFrameForCurrentOrientation];
    CGRect upperSplitRect = CGRectMake(0, 0, fullScreenRect.size.width, splitYPosition);
    CGRect lowerSplitRect = CGRectMake(0, splitYPosition, fullScreenRect.size.width, fullScreenRect.size.height - splitYPosition);

    CGImageRef upperImageRef = CGImageCreateWithImageInRect([f CGImage], upperSplitRect);
    UIImage *upperCroppedImage = [UIImage imageWithCGImage:upperImageRef];
    CGImageRelease(upperImageRef);

    CGImageRef lowerImageRef = CGImageCreateWithImageInRect([f CGImage], lowerSplitRect);
    UIImage *lowerCroppedImage = [UIImage imageWithCGImage:lowerImageRef];
    CGImageRelease(lowerImageRef);

    UIImageView *upperImage = [[UIImageView alloc] initWithFrame:upperSplitRect];
    upperImage.image = upperCroppedImage;
    //upperImage.contentMode = UIViewContentModeTop;

    UIView *upperBorder = [[UIView alloc] initWithFrame:CGRectMake(0, splitYPosition, fullScreenRect.size.width, 1)];
    upperBorder.backgroundColor = [UIColor whiteColor];
    [upperImage addSubview:upperBorder];

    UIImageView *lowerImage = [[UIImageView alloc] initWithFrame:lowerSplitRect];
    lowerImage.image = lowerCroppedImage;
    //lowerImage.contentMode = UIViewContentModeBottom;

    UIView *lowerBorder = [[UIView alloc] initWithFrame:CGRectMake(0, 0, fullScreenRect.size.width, 1)];
    lowerBorder.backgroundColor = [UIColor whiteColor];
    [lowerImage addSubview:lowerBorder];

    int revealedViewYPosition = splitYPosition;
    if (splitDirection == SplitDirectionUp) {
        revealedViewYPosition = splitYPosition - revealedViewHeight;
    }

    UIView *revealedView = [[UIView alloc] initWithFrame:CGRectMake(0, revealedViewYPosition, fullScreenRect.size.width, revealedViewHeight)];
    revealedView.backgroundColor = [UIColor scrollViewTexturedBackgroundColor];

    [self.view addSubview:revealedView];
    [self.view addSubview:upperImage];
    [self.view addSubview:lowerImage];

    [UIView animateWithDuration:splitAnimationTime animations:^{
        if (splitDirection == SplitDirectionUp) {
            upperImage.center = CGPointMake(upperImage.center.x, upperImage.center.y - revealedViewHeight);
        } else { // assume down
            lowerImage.center = CGPointMake(lowerImage.center.x, lowerImage.center.y + revealedViewHeight);
        }
    }];
}
This means I can call it like this:
[self split:SplitDirectionUp atYPosition:500 withRevealedViewHeight:200];
I used these convenience functions in the updated split function:
- (CGRect)getScreenFrameForCurrentOrientation
{
    return [self getScreenFrameForOrientation:[UIApplication sharedApplication].statusBarOrientation];
}

- (CGRect)getScreenFrameForOrientation:(UIInterfaceOrientation)orientation
{
    UIScreen *screen = [UIScreen mainScreen];
    CGRect fullScreenRect = screen.bounds; // implicitly in portrait orientation
    BOOL statusBarHidden = [UIApplication sharedApplication].statusBarHidden;

    if (orientation == UIInterfaceOrientationLandscapeRight || orientation == UIInterfaceOrientationLandscapeLeft) {
        CGRect temp = CGRectZero;
        temp.size.width = fullScreenRect.size.height;
        temp.size.height = fullScreenRect.size.width;
        fullScreenRect = temp;
    }

    if (!statusBarHidden) {
        CGFloat statusBarHeight = 20;
        fullScreenRect.size.height -= statusBarHeight;
    }

    return fullScreenRect;
}
and this enum:
typedef enum SplitDirection {
    SplitDirectionDown,
    SplitDirectionUp
} SplitDirection;
Adding a return-to-normal function and adding the arrow would be a great addition.
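For what it's worth, a return-to-normal function might look roughly like the sketch below. It assumes the split function stores upperImage, lowerImage and revealedView in properties (hypothetical names) so they can be found and removed again:

- (void)unsplit:(SplitDirection)splitDirection withRevealedViewHeight:(int)revealedViewHeight
{
    [UIView animateWithDuration:splitAnimationTime animations:^{
        if (splitDirection == SplitDirectionUp) {
            // slide the upper half back down over the revealed view
            self.upperImage.center = CGPointMake(self.upperImage.center.x,
                                                 self.upperImage.center.y + revealedViewHeight);
        } else { // assume down
            self.lowerImage.center = CGPointMake(self.lowerImage.center.x,
                                                 self.lowerImage.center.y - revealedViewHeight);
        }
    } completion:^(BOOL finished) {
        // remove the snapshot halves and the revealed view once the screen is whole again
        [self.upperImage removeFromSuperview];
        [self.lowerImage removeFromSuperview];
        [self.revealedView removeFromSuperview];
    }];
}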
I'm not sure I understood this correctly from the Apple documentation. I'm considering using CATiledLayer to display a JPEG image. However, I only have the entire JPEG file at my disposal and no small tiles. Is it still possible to use a CATiledLayer and let it "tile" the JPEG?
Thanks!
No. You will have to tile it yourself, unfortunately. The WWDC 2010 videos on Core Animation discuss how to do this, and in their sample code they demonstrate how to use a CATiledLayer when the tiles already exist.
Correction
I meant to say watch the scroll view session: session 104, "Designing Apps with Scroll Views".
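If you do need to pre-tile a large image yourself, a rough sketch could look like the following (this is not the WWDC sample code; the tile naming scheme and the output directory are made up for illustration):

- (void)writeTilesForImage:(UIImage *)image toDirectory:(NSString *)directory
{
    CGFloat tileSize = 256.0;
    CGImageRef fullImage = image.CGImage;
    size_t width = CGImageGetWidth(fullImage);
    size_t height = CGImageGetHeight(fullImage);

    for (CGFloat y = 0; y < height; y += tileSize) {
        for (CGFloat x = 0; x < width; x += tileSize) {
            // clamp the last row/column of tiles to the image edge
            CGRect tileRect = CGRectMake(x, y,
                                         MIN(tileSize, width - x),
                                         MIN(tileSize, height - y));
            CGImageRef tileRef = CGImageCreateWithImageInRect(fullImage, tileRect);
            UIImage *tile = [UIImage imageWithCGImage:tileRef];
            CGImageRelease(tileRef);

            NSString *name = [NSString stringWithFormat:@"tile_%d_%d.png",
                              (int)(x / tileSize), (int)(y / tileSize)];
            [UIImagePNGRepresentation(tile) writeToFile:[directory stringByAppendingPathComponent:name]
                                             atomically:YES];
        }
    }
}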
You can use the following to create a CATiledLayer from an image, with zoom functionality in a scroll view.
- (void)viewDidLoad
{
    [super viewDidLoad];

    // You do not need to (and must not) autorelease [NSBundle mainBundle].
    NSString *path = [[NSBundle mainBundle] pathForResource:@"img" ofType:@"jpg"];
    NSData *data = [NSData dataWithContentsOfFile:path];
    image = [UIImage imageWithData:data];

    // CGRect pageRect = CGRectMake(0, 0, 1600, 2400);
    CGRect pageRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);

    CATiledLayer *tiledLayer = [CATiledLayer layer];
    tiledLayer.delegate = self;
    tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
    tiledLayer.levelsOfDetail = 5;     // was 1000, which makes no sense. Each level of detail is a power of 2.
    tiledLayer.levelsOfDetailBias = 5; // was 1000, which also makes no sense.
    // Do not change tiledLayer.frame.
    tiledLayer.bounds = pageRect; // I think you meant bounds instead of frame.
    // Flip the image vertically.
    tiledLayer.transform = CATransform3DMakeScale(1.0f, -1.0f, 1.0f);

    myContentView = [[UIView alloc] initWithFrame:CGRectMake(500, 400, image.size.width, image.size.height)];
    [myContentView.layer addSublayer:tiledLayer];

    // UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    scrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(10, 10, 300, 400)];
    scrollView.backgroundColor = [UIColor whiteColor];
    // [scrollView setContentSize:CGSizeMake(image.size.width, image.size.height)];
    [scrollView setContentSize:image.size];
    scrollView.maximumZoomScale = 32; // = 2 ^ 5
    scrollView.delegate = self;
    // scrollView.contentSize = pageRect.size;
    [scrollView addSubview:myContentView];

    [self.view addSubview:scrollView];
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return myContentView;
}

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    // Note: this loads and draws the full image for every tile, which defeats much of
    // the point of CATiledLayer; ideally you would draw only the requested tile's rect.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"img" ofType:@"jpg"];
    NSData *data = [NSData dataWithContentsOfFile:path];
    image = [UIImage imageWithData:data];

    CGRect imageRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
    CGContextDrawImage(context, imageRect, [image CGImage]);
}
An easier way is to just downscale the image until it fits (I think devices support up to 2044x2044 or so). You can create subimages of a CGImage with CGImageCreateWithImageInRect().
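If you go the downscaling route, a minimal sketch might look like this (the method name and maximum dimension are just for illustration):

- (UIImage *)imageWithImage:(UIImage *)image scaledToFitDimension:(CGFloat)maxDimension
{
    CGFloat largestSide = MAX(image.size.width, image.size.height);
    if (largestSide <= maxDimension) {
        return image; // already small enough
    }

    CGFloat scale = maxDimension / largestSide;
    CGSize newSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return scaled;
}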