I am trying to capture (screenshot) a view. For that I am using the piece of code shown below, which saves it to my Documents directory as a PNG image.
UIGraphicsBeginImageContextWithOptions(highlightViewController.fhView.centerView.frame.size, YES, 1.0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"1.png"];
NSData *imageData = UIImagePNGRepresentation(screenshot);
[imageData writeToFile:appFile atomically:YES];
UIGraphicsEndImageContext();
Question: can I capture part of the view? In the above code I can't change the origin (frame). If anyone has another approach to capturing a particular part of a view, please share it.
You could crop the image:
http://iosdevelopertips.com/graphics/how-to-crop-an-image.html
CGRect rect = CGRectMake(0,0,10,10);
CGImageRef imageRef = CGImageCreateWithImageInRect([screenshot CGImage], rect);
UIImage *croppedScreenshot = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
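One Retina caveat worth adding (my note, not from the linked post): CGImageCreateWithImageInRect works in pixels, while UIKit frames are in points, so if the screenshot was captured at a scale other than 1.0 you may need to scale the crop rect first:
// Convert the point-based crop rect to pixels before cropping; `screenshot`
// and `rect` are the variables from the snippet above.
CGFloat scale = screenshot.scale;
CGRect pixelRect = CGRectMake(rect.origin.x * scale,
                              rect.origin.y * scale,
                              rect.size.width * scale,
                              rect.size.height * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect([screenshot CGImage], pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:scale
                                      orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);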
Try this code. It surely works, as I have implemented it in many of my projects:
- (UIImage *)image
{
    if (cachedImage == nil) {
        // YOU CAN CHANGE THE FRAME HERE TO WHATEVER YOU WANT TO CAPTURE
        CGRect imageFrame = CGRectMake(0, 0, 400, 300);
        UIView *imageView = [[UIView alloc] initWithFrame:imageFrame];
        [imageView setOpaque:YES];
        [imageView setUserInteractionEnabled:NO];
        [self renderInView:imageView withTheme:nil];

        UIGraphicsBeginImageContext(imageView.bounds.size);
        CGContextRef c = UIGraphicsGetCurrentContext();
        // Flip the context's y-axis before rendering the layer.
        CGContextScaleCTM(c, 1, -1);
        CGContextTranslateCTM(c, 0, -imageView.bounds.size.height);
        [imageView.layer renderInContext:c];

        // Rescale the rendered image down to a thumbnail. (The original code
        // retained the full-size image first and then overwrote that pointer,
        // leaking it; assigning cachedImage once avoids the leak.)
        UIImage *bigImage = UIGraphicsGetImageFromCurrentImageContext();
        CGImageRef scaledImage = [self newCGImageFromImage:[bigImage CGImage] scaledToSize:CGSizeMake(100.0f, 75.0f)];
        cachedImage = [[UIImage imageWithCGImage:scaledImage] retain];
        CGImageRelease(scaledImage);
        UIGraphicsEndImageContext();

        [imageView release];
    }
    return cachedImage;
}
I hope this will help you.
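Note that the method above relies on a newCGImageFromImage:scaledToSize: helper that isn't shown. Purely as a sketch of what such a helper might look like (my assumption, not the original author's code; per the Create rule the caller must CGImageRelease the result):
// Hypothetical helper: scale a CGImage by drawing it into a bitmap context
// of the target size. The returned CGImage is owned by the caller.
- (CGImageRef)newCGImageFromImage:(CGImageRef)image scaledToSize:(CGSize)size
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 (size_t)size.width,
                                                 (size_t)size.height,
                                                 8,
                                                 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    CGImageRef scaled = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return scaled;
}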
See if you can specify the rect like this and then take the screenshot.
CGRect requiredRect = CGRectMake(urView.frame.origin.x, urView.frame.origin.y, urView.bounds.size.width, urView.bounds.size.height);
UIGraphicsBeginImageContext(requiredRect.size);
You can alter the origin and see if it works.
If this doesn't work out, you can try cropping the image as mentioned by @mcb.
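Note that UIGraphicsBeginImageContext only takes a size, so altering the rect's origin alone has no effect on what gets rendered. A hedged alternative: translate the context by the negative origin before rendering, so the region you want lands at (0, 0). The captureRect values below are just an example:
// Capture only `captureRect` (hypothetical region) from self.view by shifting
// the context so captureRect's origin maps to (0, 0) in the output image.
CGRect captureRect = CGRectMake(50, 100, 200, 150);
UIGraphicsBeginImageContext(captureRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, -captureRect.origin.x, -captureRect.origin.y);
[self.view.layer renderInContext:ctx];
UIImage *partialImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();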
You can use this code:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGRect rect = CGRectMake(250, 61, 410, 255);
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);
I have an app in which I am taking a screenshot of a view and saving that image to the Documents folder.
I am using the following code.
CGSize size = self.view.bounds.size;
CGRect cropRect;
CGRect screenBounds = [[UIScreen mainScreen] bounds];
if ([self isPad])
{
    cropRect = CGRectMake(145, 110, 476, 476);
}
else
{
    if (screenBounds.size.height == 568)
    {
        cropRect = CGRectMake(40, 69, 240, 240);
    }
    else
    {
        cropRect = CGRectMake(40, 62, 240, 240);
    }
}
/* Get the entire on screen map as Image */
UIGraphicsBeginImageContext(size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Crop the desired region */
CGImageRef imageRef = CGImageCreateWithImageInRect(mapImage.CGImage, cropRect);
UIImage * cropImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
/* Save the cropped image
UIImageWriteToSavedPhotosAlbum(cropImage, nil, nil, nil);*/
//save to document folder
NSData * imageData = UIImageJPEGRepresentation(cropImage, 1.0);
NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString* documentsDirectory = [paths objectAtIndex:0];
imagename = [NSString stringWithFormat:@"Fff.jpg"];
NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imagename];
//NSLog(@"full path %@", fullPathToFile);
[imageData writeToFile:fullPathToFile atomically:NO];
It works fine if I take the screenshot 15 to 20 times, but after that I get a low-memory warning and the app crashes on this code.
Is there more optimized code I can use which does not cause such memory problems?
Please help me.
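One likely culprit, though this is an assumption on my part: the autoreleased UIImage and NSData objects created on each pass are not drained until the run loop turns, so repeated captures pile up in memory. Wrapping each capture in its own autorelease pool may keep the peak down; the method name captureAndSaveScreenshot below is hypothetical:
// Hedged sketch: drain the autoreleased image/data objects after every
// screenshot instead of letting them accumulate across 15-20 captures.
- (void)captureAndSaveScreenshot
{
    @autoreleasepool {
        UIGraphicsBeginImageContext(self.view.bounds.size);
        [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        NSData *imageData = UIImageJPEGRepresentation(mapImage, 1.0);
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *path = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"Fff.jpg"];
        [imageData writeToFile:path atomically:NO];
    }
}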
Capture the screen with my method below.
- (UIImage *)captureView {
    // Hide controls here if needed.
    // Define this CGRect with whatever part of the screen you want to capture.
    CGRect rect = [self.view bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.view.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
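A hypothetical call site, mirroring the save-to-documents code from the question above:
// Capture the view and write the PNG into the Documents directory.
UIImage *shot = [self captureView];
NSData *pngData = UIImagePNGRepresentation(shot);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *path = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"capture.png"];
[pngData writeToFile:path atomically:YES];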
See my other answer: howe-to-capture-uiview-top-uiview
I want to save my WebView as an image or a PDF of any format.
I tried saving the web page using this code:
UIGraphicsBeginImageContext(WebPage.frame.size);
[WebPage.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
but I'm getting only the part which is visible.
I want to know how to get an image of the entire web page, or how Apple makes AirPrint print the entire web page. I don't want to know about the AirPrint function; I want to know how to get the whole web page image on the iPhone.
As I'm a fresher to iOS development, can anyone help me with working code for saving a web page?
- (void)printAndSave
{
    webViewHeight = [[self.myWebView stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
    CGRect screenRect = self.myWebView.frame;
    double currentWebViewHeight = webViewHeight;
    while (currentWebViewHeight > 0)
    {
        imageName++;
        UIGraphicsBeginImageContext(screenRect.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);
        [self.myWebView.layer renderInContext:ctx];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *pngPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%d.png", imageName]];
        // The last page is usually shorter than one full screen (460 points
        // here), so crop away the part that was already captured.
        if (currentWebViewHeight < 460)
        {
            CGRect lastImageRect = CGRectMake(0, 457 - currentWebViewHeight, self.myWebView.frame.size.width, currentWebViewHeight);
            CGImageRef lastImageRef = CGImageCreateWithImageInRect([newImage CGImage], lastImageRect);
            newImage = [UIImage imageWithCGImage:lastImageRef];
            CGImageRelease(lastImageRef);
        }
        [UIImagePNGRepresentation(newImage) writeToFile:pngPath atomically:YES];
        // Scroll to the next page and repeat.
        [self.myWebView stringByEvaluatingJavaScriptFromString:@"window.scrollBy(0,460);"];
        currentWebViewHeight -= 460;
    }
}
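A caveat of my own (an assumption, not stated in the original answer): document.body.scrollHeight is only meaningful once the page has finished loading, so a natural call site is the web view's delegate callback:
// Hypothetical call site: capture only after the page finishes loading,
// otherwise scrollHeight may still be 0.
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    [self printAndSave];
}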
- (IBAction)printSaveTheWebView:(id)sender
{
    UIImage *viewImage;
    UIScrollView *Scroll_view = webView.scrollView;
    CGRect savedFrame;
    UIGraphicsBeginImageContext(Scroll_view.contentSize);
    {
        // Temporarily expand the scroll view to its full content size so the
        // whole page renders, then restore the saved geometry.
        CGPoint savedContentOffset = Scroll_view.contentOffset;
        savedFrame = Scroll_view.frame;
        Scroll_view.contentOffset = CGPointZero;
        Scroll_view.frame = CGRectMake(0, 0, Scroll_view.contentSize.width, Scroll_view.contentSize.height);
        [Scroll_view.layer renderInContext:UIGraphicsGetCurrentContext()];
        viewImage = UIGraphicsGetImageFromCurrentImageContext();
        Scroll_view.contentOffset = savedContentOffset;
        Scroll_view.frame = savedFrame;
    }
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, self, nil, nil);
    [UIImagePNGRepresentation(viewImage) writeToFile:[NSHomeDirectory() stringByAppendingPathComponent:@"view.png"] atomically:YES];
}
Call this method when you click your button.
To get what you require, you have to change the context of the web view by changing its frame; for that, take a for loop and iterate over the number of pages, changing the context each time. Check this one out.
I want to take a screenshot of a view controller.
I have written this:
- (UIImage *)captureScreenInRectWOScroll:(CGRect)captureFrame {
    CALayer *layer = self.view.layer;
    UIGraphicsBeginImageContext(captureFrame.size);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureFrame);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImage;
}
The way I am calling this method:
UIImage *img_woscroll1 =[self captureScreenInRectWOScroll:CGRectMake(35,147,497,260)];
I want to take a screenshot of the address area from the attached image.
When I take the screenshot with the above code, I get an image with a lot of blank space (the green area) at the top.
Please help me: how do I take a proper screenshot so that I don't get blank space at the top?
Try This
UIGraphicsBeginImageContext(myView.frame.size);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data=UIImagePNGRepresentation(viewImage);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *strPath = [documentsDirectory stringByAppendingPathComponent:@"myimage.png"];
[data writeToFile:strPath atomically:YES];
Try this:
CGRect screenRect = CGRectMake(0, 90, 320,460);
UIGraphicsBeginImageContext(screenRect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextFillRect(ctx, screenRect);
//you probably need an offset, adjust here
CGContextTranslateCTM(ctx, -20, -20);
[self.view.layer renderInContext:ctx];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
It may help you. Thanks :)
Try with this. This may help you.
- (UIImage *)captureView:(UIView *)view {
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(screenRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] set];
    CGContextFillRect(ctx, screenRect);
    [view.layer renderInContext:ctx];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I'm trying to create a UIPickerView with some images in it, but I can't seem to figure out how to get the images to fit in the view (right now they're too large and are overlapping each other).
I'm trying to use a function to resize each image when it's drawn, but I'm getting errors when the function is called, although the program compiles and runs fine (with the exception of the image not resizing). The resizing function and initialization functions are:
-(UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height {
    NSLog(@"resizing");
    CGImageRef imageRef = [image CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    //if (alphaInfo == kCGImageAlphaNone)
    alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef),
                                                4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}
- (void)viewDidLoad {
    UIImage *h1 = [UIImage imageNamed:@"h1.png"];
    h1 = [self resizeImage:h1 width:50 height:50];
    UIImageView *h1View = [[UIImageView alloc] initWithImage:h1];
    NSArray *imageViewArray = [[NSArray alloc] initWithObjects:h1View, nil];
    NSString *fieldName = [[NSString alloc] initWithFormat:@"column1"];
    [self setValue:imageViewArray forKey:fieldName];
    [fieldName release];
    [imageViewArray release];
    [h1View release];
}
Console Output:
TabTemplate[29322:207] resizing
TabTemplate[29322] : CGBitmapContextCreate: unsupported colorspace
TabTemplate[29322] : CGContextDrawImage: invalid context
TabTemplate[29322] : CGBitmapContextCreateImage: invalid context
I can't figure out what's going wrong. Any help is greatly appreciated.
You don't need to resize your UIImage if you use the contentMode property of UIImageView.
myImageView.contentMode = UIViewContentModeScaleAspectFit;
Or if you still want to resize your UIImage, have a look at the SO posts below.
resizing a UIImage without loading it entirely into memory?
UIImage: Resize, then Crop
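If you do go the manual-resize route, a minimal sketch using a UIKit image context (my assumption, not taken from the linked posts; the helper name resizedImage:toSize: is hypothetical) sidesteps CGBitmapContextCreate and its colorspace restrictions:
// Resize by drawing into a UIKit image context; passing 0.0 as the scale
// uses the device's screen scale automatically, so results stay sharp on Retina.
- (UIImage *)resizedImage:(UIImage *)image toSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}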
Use the code below to scale the image preserving its aspect ratio, then clip it to the image view's bounds.
imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.clipsToBounds = YES;
In Swift:
imageView.contentMode = .ScaleAspectFill
imageView.clipsToBounds = true
// Draw the image into a context at the destination size, then save the thumbnail
// (the original snippet never opened a context and saved the unscaled image).
UIImage *image = [UIImage imageNamed:@"myImage"];
UIGraphicsBeginImageContext(destinationRect.size);
[image drawInRect:destinationRect];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(thumbnail, nil, nil, nil);
I've been trying to use a method commonly used to resize an image. Without using this method, here is the code that takes a URL of an image:
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *img = [[UIImage alloc] initWithData:data];
cell.imageView.image = img;
This works fine. But when I try to use this method:
-(UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
and call it using this:
UIImage *scaledImage = [self imageWithImage:img scaledToSize:CGSizeMake(10.0f,10.0f)];
then putting it into my table like this:
cell.imageView.image = scaledImage;
Nothing shows up. Is there something I'm missing here?
This is similar to Exporting customized UITableViewCells into UIImage
Here's what you need to do in your -imageWithImage:scaledToSize: method, modified from my answer to that question:
-(UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize
{
    // Create a bitmap context. (kCGImageAlphaNone is not a supported
    // combination for an RGB context, so skip the alpha channel instead.)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContextForScaledImage = CGBitmapContextCreate(nil, newSize.width, newSize.height, 8, 0, colorSpace, kCGImageAlphaNoneSkipLast);
    CGColorSpaceRelease(colorSpace);

    // Flip the y-axis: layers render with a top-left origin, while a raw
    // bitmap context has a bottom-left origin.
    CGContextTranslateCTM(bitmapContextForScaledImage, 0, newSize.height);
    CGContextScaleCTM(bitmapContextForScaledImage, 1.0, -1.0);

    // Draw the image's layer into the context, resized to the target size.
    UIImageView * imageView = [[UIImageView alloc] initWithImage:image];
    imageView.frame = CGRectMake(0, 0, newSize.width, newSize.height);
    [imageView.layer renderInContext:bitmapContextForScaledImage];
    [imageView release];

    // Create a CGImage from the context.
    CGImageRef cgScaledImage = CGBitmapContextCreateImage(bitmapContextForScaledImage);

    // Create a UIImage from the CGImage.
    UIImage * scaledImage = [UIImage imageWithCGImage:cgScaledImage];

    // Clean up.
    CGImageRelease(cgScaledImage);
    CGContextRelease(bitmapContextForScaledImage);

    return scaledImage;
}