I want to add an image view as an overlay view to the camera and save both views (the camera view and the image view) as a single image. I searched for it and tried the example given on Stack Overflow, but it displays nothing but a blank screen. If anyone knows how to do this, please help me. Thanks in advance.
- (void)renderView:(UIView *)view inContext:(CGContextRef)context
{
    // -renderInContext: renders in the coordinate space of the layer,
    // so we must first apply the layer's geometry to the graphics context
    CGContextSaveGState(context);
    // Center the context around the view's anchor point
    CGContextTranslateCTM(context, [view center].x, [view center].y);
    // Apply the view's transform about the anchor point
    CGContextConcatCTM(context, [view transform]);
    // Offset by the portion of the bounds left of and above the anchor point
    CGContextTranslateCTM(context,
                          -[view bounds].size.width * [[view layer] anchorPoint].x,
                          -[view bounds].size.height * [[view layer] anchorPoint].y);
    // Render the layer hierarchy to the current context
    [[view layer] renderInContext:context];
    // Restore the context
    CGContextRestoreGState(context);
}
// This gets called when an image has been taken with the camera
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    // Use the scale-aware context function when it is available (iOS 4+)
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Draw the image returned by the camera into the context.
    // Draw it into the same sized rectangle as the view that is displayed on the screen.
    float menubarUIOffset = 44.0;
    UIGraphicsPushContext(context);
    [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height - menubarUIOffset)];
    UIGraphicsPopContext();
    // Render the camera overlay view into the graphics context that we created above.
    [self renderView:self.imagePicker.view inContext:context];
    // Retrieve the screenshot image containing both the camera content and the overlay view
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);
    UIGraphicsEndImageContext();
    [self.imagePicker dismissModalViewControllerAnimated:YES];
}
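For this to capture anything, the image view has to be installed as the picker's camera overlay before the camera is presented; renderView:inContext: above then composites the picker's view hierarchy (including the overlay) on top of the captured photo. A minimal sketch, assuming a hypothetical overlayImage.png resource and the imagePicker property used above:

// Attach an image view as the picker's camera overlay.
self.imagePicker = [[UIImagePickerController alloc] init];
self.imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
self.imagePicker.delegate = self;

UIImageView *overlay = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"overlayImage.png"]];
overlay.frame = CGRectMake(60, 60, 200, 200); // placeholder position and size
self.imagePicker.cameraOverlayView = overlay;

[self presentModalViewController:self.imagePicker animated:YES];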
Hope this helps
I have an application in which I am cropping the image taken from the camera. Everything is going well, but after cropping, the image appears blurred and stretched.
CGRect rect = CGRectMake(20, 40, 280, 200);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// translated rectangle for drawing sub image
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, 280, 200);
// clip to the bounds of the image context
// not strictly necessary as it will get clipped anyway?
CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
// draw image
[image drawInRect:drawRect];
// grab image
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGSize size = [croppedImage size];
NSLog(@" = %@", NSStringFromCGSize(size));
NSData *pictureData = UIImagePNGRepresentation(croppedImage);
Can anybody help me figure out where I am going wrong?
Try replacing
UIGraphicsBeginImageContext(rect.size);
with
UIGraphicsBeginImageContextWithOptions(rect.size, NO, [[UIScreen mainScreen] scale]);
to account for the Retina display's scale factor.
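That should fix the blurriness (the context was being created at 1x on a Retina screen). The stretching is a separate issue: the snippet draws the entire camera image into a fixed 280x200 rect, which distorts its aspect ratio. A rough sketch of one aspect-preserving alternative, under the assumption that an aspect-fill crop is what's wanted:

// Scale the source uniformly so it covers the crop rect, then offset it;
// the exact offsets depend on which region of the photo you want to keep.
CGSize imgSize = [image size];
CGFloat fillScale = MAX(rect.size.width / imgSize.width,
                        rect.size.height / imgSize.height);
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y,
                             imgSize.width * fillScale,
                             imgSize.height * fillScale);
[image drawInRect:drawRect];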
I am using a method to take a UIImage screenshot programmatically, which works very nicely. What I want to do now is exclude the topmost UIView from the screenshot WITHOUT removing it from the actual view on screen, so simply setting topView.hidden = YES won't do. I am using this method to save the screenshot:
- (UIImage *)screenshot
{
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
My first thought was to remove the sublayer from its superlayer like this:
NSArray *sublayers = [[window layer] sublayers];
[[window layer] replaceSublayer:[sublayers objectAtIndex:[sublayers count] - 1] with:nil];
but I found that the sublayers array contains only one object. How can I remove the top view from the screenshot?
For example, if you have View1 in your view hierarchy and you don't want it in the screenshot, you can set View1.hidden = YES; before taking the screenshot and View1.hidden = NO; afterwards.
Maybe this will help you.
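A minimal sketch of that approach, reusing the screenshot method above and assuming view1 is the view to exclude:

// Hide the view just for the capture; both assignments run in the same
// run-loop pass, so the view never actually disappears on screen.
view1.hidden = YES;
UIImage *shot = [self screenshot];
view1.hidden = NO;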
I'm taking the screenshot in the following way.
- (UIImage *)screenshot
{
    UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
    CGRect rect = [keyWindow bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [keyWindow.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
On a UIView I'm drawing paths using UIBezierPath.
The screenshots that I'm getting are incorrect. Any ideas why the upper part is cut off to blank? On the UIView itself, all drawing is displayed correctly.
UPDATE: This happens while I am drawing a long path with UIBezierPath; if I release my brush and then take the screenshot, it is captured correctly.
I saw the other thread that you posted.
According to the Apple Technical Q&A, you need to adjust the geometry coordinates first.
So, before rendering the window's layer, do the following:
// -renderInContext: renders in the coordinate space of the layer,
// so we must first apply the layer's geometry to the graphics context
CGContextSaveGState(context);
// Center the context around the window's anchor point
CGContextTranslateCTM(context, [window center].x, [window center].y);
// Apply the window's transform about the anchor point
CGContextConcatCTM(context, [window transform]);
// Offset by the portion of the bounds left of and above the anchor point
CGContextTranslateCTM(context,
                      -[window bounds].size.width * [[window layer] anchorPoint].x,
                      -[window bounds].size.height * [[window layer] anchorPoint].y);
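After those transforms, render the layer and balance the saved graphics state; these two lines complete the snippet (they mirror the full method earlier in this thread):

// Render the layer hierarchy to the current context
[[window layer] renderInContext:context];
// Restore the context
CGContextRestoreGState(context);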
- (UIImage *)screenshot:(UIView *)vw
{
    CGRect rect = [vw bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [vw.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Use this one:
- (IBAction)takeScreenShot:(id)sender
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
}
I want to send form details (name, age, etc.) and the person's signature as an image from an iPad to a Java web application server:
http://192.168.1.100:8080/Intranet/checkforimage.htm?name=meena&image=**
I want to know how I should send the image from the iPad to the server via HTTP.
Since the server-side code is in Java, I am comfortable receiving it as a byte array.
But the problem is: how do I convert the image on the iPad to a byte array and send it via HTTP?
Kindly help!
You can do the following:
1) Capture a screenshot of your form after filling in all the details and save it as a UIImage object. You can capture the screenshot using the function below:
- (UIImage *)screenshot
{
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
2) Convert the UIImage object to an NSData object.
3) Send the NSData to the server using NSURLConnection.
OK, try the code below:
NSData *imageData = UIImagePNGRepresentation(image);
This will return a PNG representation of your UIImage object in the form of NSData.
In case your image is a JPEG/JPG, you can use the following instead:
NSData *imageData = UIImageJPEGRepresentation(image, 1.0);
You can send this "imageData" directly to the server using NSURLConnection.
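A minimal sketch of the upload step, assuming the server accepts the raw image bytes in a POST body (the URL, content type, and synchronous call are illustrative assumptions, not code from the question):

// POST the raw image bytes; the Java side can read the request body
// straight into a byte array.
NSURL *url = [NSURL URLWithString:@"http://192.168.1.100:8080/Intranet/checkforimage.htm?name=meena"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setHTTPMethod:@"POST"];
[request setValue:@"application/octet-stream" forHTTPHeaderField:@"Content-Type"];
[request setHTTPBody:imageData];

// Blocking call, shown for brevity; prefer an asynchronous request in production code.
NSURLResponse *response = nil;
NSError *error = nil;
NSData *responseData = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];
if (error != nil)
    NSLog(@"Upload failed: %@", error);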
I'm using the sample code from http://developer.apple.com/library/ios/#qa/qa1714/_index.html almost verbatim, and I can save my background image or my overlay, but I can't combine the two. When I try to combine them using the following code, the overlay renders the background white; I'm assuming it's overwriting the background. Any idea why this is?
Here is my method which tries to combine the view overlay with the image:
- (void)annotateStillImage
{
    UIImage *image = stillImage;
    // Create a graphics context with the target size
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Draw the image returned by the camera sample buffer into the context.
    // Draw it into the same sized rectangle as the view that is displayed on the screen.
    UIGraphicsPushContext(context);
    [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
    UIGraphicsPopContext();
    // -renderInContext: renders in the coordinate space of the layer,
    // so we must first apply the layer's geometry to the graphics context
    CGContextSaveGState(context);
    // Center the context around the view's anchor point
    CGContextTranslateCTM(context, [[self view] center].x, [[self view] center].y);
    // Apply the view's transform about the anchor point
    CGContextConcatCTM(context, [[self view] transform]);
    // Offset by the portion of the bounds left of and above the anchor point
    CGContextTranslateCTM(context,
                          -[[self view] bounds].size.width * [[[self view] layer] anchorPoint].x,
                          -[[self view] bounds].size.height * [[[self view] layer] anchorPoint].y);
    // Render the layer hierarchy to the current context
    [[[self view] layer] renderInContext:context];
    // Restore the context
    CGContextRestoreGState(context);
    // Retrieve the screenshot image containing both the camera content and the overlay view
    stillImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Here is a snippet of the method that creates the view, in case that's relevant:
CGRect layerRect = [[[self view] layer] bounds];
[[self previewLayer] setBounds:layerRect];
[[self previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
[[[self view] layer] addSublayer:[self previewLayer]];

UIButton *overlayButton = [UIButton buttonWithType:UIButtonTypeCustom];
[overlayButton setImage:[UIImage imageNamed:@"acceptButton.png"] forState:UIControlStateNormal];
[overlayButton setFrame:[self rectForAcceptButton]];
[overlayButton setAutoresizesSubviews:TRUE];
[overlayButton setAutoresizingMask:UIViewAutoresizingFlexibleTopMargin | UIViewAutoresizingFlexibleLeftMargin];
[overlayButton addTarget:self action:@selector(acceptButtonPressed) forControlEvents:UIControlEventTouchUpInside];
[[self view] addSubview:overlayButton];
I'm expecting the image in the annotate method to be the background, with the accept button from the view drawn on top. All I get is the accept button on a white background; but if I don't run the renderInContext call, all I get is the image background and no overlay.
Thanks!
Turns out I was adding the overlay to the same view as the preview layer rather than giving it its own view. I realized this by reviewing this great project, which covers all the types of screenshots possible: https://github.com/cocoacoderorg/Screenshots
The fix was to create a separate overlay view in IB with a transparent background and add my overlay elements to it.
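The same separation can be sketched in code, reusing the overlayButton name from the snippet above (the exact wiring is an assumption; the IB approach described works just as well):

// Keep overlay elements on their own transparent view, separate from
// the view whose layer hosts the camera preview.
UIView *overlayView = [[UIView alloc] initWithFrame:[[self view] bounds]];
overlayView.backgroundColor = [UIColor clearColor];
[overlayView addSubview:overlayButton];
[[self view] addSubview:overlayView];

// Then, in annotateStillImage, render only the overlay view's layer so the
// preview layer (which renders as white) never enters the bitmap:
[[overlayView layer] renderInContext:context];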