I have a bit of code that does what I want and takes a screenshot. Apple has now said that screenshots must not be taken with UIGetScreenImage(), but with UIGraphicsBeginImageContextWithOptions. Could anyone point me in the right direction? Here's a snippet of code that shows how I am doing it at the moment.
// Grab the whole screen (the call Apple no longer allows).
CGImageRef inImage = UIGetScreenImage();

UIScreen *mainscr = [UIScreen mainScreen];
CGSize screenSize = mainscr.currentMode.size;

// Crop out the single pixel I'm interested in (Retina vs. non-Retina coordinates).
CGImageRef handRef;
if (screenSize.height > 500)
{
    handRef = CGImageCreateWithImageInRect(inImage, CGRectMake(320, 460, 1, 1));
}
else
{
    handRef = CGImageCreateWithImageInRect(inImage, CGRectMake(160, 230, 1, 1));
}

// Bitmap context backed by rawData, so the pixel's RGBA value can be read back after drawing.
unsigned char rawData[4];
CGContextRef context = CGBitmapContextCreate(
    rawData,
    CGImageGetWidth(handRef),
    CGImageGetHeight(handRef),
    CGImageGetBitsPerComponent(handRef),
    CGImageGetBytesPerRow(handRef),
    CGImageGetColorSpace(handRef),
    kCGImageAlphaPremultipliedLast
);
Does anyone know how I can do this now?
Thanks to any answers in advance!
It should be something like this:
UIGraphicsBeginImageContext(theView.bounds.size);
[theView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *yourFinalScreenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Where theView is the topmost view you want to render; you could, for example, use [[UIApplication sharedApplication] keyWindow].
It looks like you are accessing only part of your screenshot. That can be done more easily with this approach: just use the view you want to capture, it does not need to be the window.
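For the pixel-sampling part of the question, a minimal sketch might look like the following (the method name readPixelAtCenter and the choice of the screen's centre point are illustrative, not from the original code):

// Render the key window into an image context and sample a single pixel.
- (void)readPixelAtCenter
{
    UIWindow *window = [[UIApplication sharedApplication] keyWindow];
    CGSize size = window.bounds.size;

    // Scale 0 means "use the device's screen scale", so this stays sharp on Retina displays.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Crop out the pixel of interest, just as with the old UIGetScreenImage() code.
    CGFloat scale = screenshot.scale;
    CGRect pixelRect = CGRectMake(size.width / 2 * scale, size.height / 2 * scale, 1, 1);
    CGImageRef handRef = CGImageCreateWithImageInRect(screenshot.CGImage, pixelRect);

    // Draw that pixel into a 1x1 bitmap context to read its RGBA value.
    unsigned char rawData[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, 1, 1, 8, 4, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), handRef);
    // rawData[0..3] now holds the pixel's R, G, B, A components.

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(handRef);
}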
I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the fullResolution photos down to a smaller size. However, this takes some time, especially when resizing up to 3 photos at once.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;

    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the fullresolution image, I am creating two images: a preview image (max 300px on the long end) and a large image (max 960px or 640px on the long end). The preview image is what is shown on the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual code I'm using to resize I grabbed from somewhere on here:
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
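scaledToHeight: is essentially the same, just with the roles of width and height swapped. Roughly:

// Same idea as scaledToWidth:, scaling by height instead.
- (UIImage *)scaledToHeight:(float)i_height
{
    float oldHeight = self.size.height;
    float scaleFactor = i_height / oldHeight;

    float newWidth = self.size.width * scaleFactor;
    float newHeight = oldHeight * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}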
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low in clarity, and at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, and on the resolution and format of the asset, but you don't have any control over that. What you do have control over is where the resizing takes place. I suspect that right now you are resizing the image on your main thread, which will cause the UI to grind to a halt while the resizing runs. Process enough images, and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;

        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // Do whatever you need to do on the main thread here once your images are resized.
            // This is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy.
        });
    }
});
You'll have to think about things a little differently this way. For example, you've now broken what used to be a single step into multiple steps. Code that used to run after this point will now run before the image resize is complete, and before you actually do anything with the images, so you need to make sure you don't have any dependencies on those images or you'll likely crash.
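One way to keep those dependencies explicit is to wrap the background work in a method that takes a completion block. A sketch, reusing the same scaledToWidth:/scaledToHeight:/imageRotatedByDegrees: helpers as above (the method name resizeAsset:completion: is illustrative):

// Hand the resized images back through a completion block, so anything that
// depends on them runs only after the resize has actually finished.
- (void)resizeAsset:(ALAsset *)asset
         completion:(void (^)(UIImage *previewImage, UIImage *largeImage))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        ALAssetRepresentation *rep = [asset defaultRepresentation];
        CGImageRef iref = [rep fullResolutionImage];
        if (iref == NULL) return;

        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        UIImage *preview;
        UIImage *large;

        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            large = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            preview = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            preview = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            large = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(preview, large);
        });
    });
}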
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
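A sketch of that substitution (same structure as the code above; only the representation call changes):

ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];

// fullScreenImage is already sized to roughly the device's screen and is already
// adjusted for orientation, so it is much cheaper to work with than fullResolutionImage.
CGImageRef iref = [rep fullScreenImage];
if (iref)
{
    UIImage *largeImage = [UIImage imageWithCGImage:iref];
    UIImage *previewImage = [largeImage scaledToWidth:300];
    // ...use previewImage / largeImage as before...
}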
I am adding 2 images on top of each other and wanted to know if this is a good way to do it. This code works and seems powerful enough.
So my question really is: is this good, or is there a better way?
PS: Warning, code written by a designer.
Call the function:
- (IBAction)addImages:(id)sender
{
    UIImage *MyFirstImage = [UIImage imageNamed:@"Image.png"];
    UIImage *MyTopImage = [UIImage imageNamed:@"Image2.png"];

    CGFloat yFloat = 50;
    CGFloat xFloat = 50;

    UIImage *newImage = [self placeImageOnImage:MyFirstImage topImage:MyTopImage x:&xFloat y:&yFloat];
}
The Function:
- (UIImage *)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat *)x y:(CGFloat *)y
{
    // If you want the top image to be added next to the image, make this CGSize bigger.
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);

    UIGraphicsBeginImageContext(newSize);
    [topImage drawInRect:CGRectMake(*x, *y, topImage.size.width, topImage.size.height)];
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeDestinationOver alpha:1];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Looks OK. Perhaps you don't really need the CGFloat pointers, but that's fine, too.
The main idea is correct. There is no better way to do what you want.
Downsides:
1) Consider the UIGraphicsBeginImageContextWithOptions method; plain UIGraphicsBeginImageContext isn't good for Retina displays.
2) Don't pass floats as pointers. Use x:(CGFloat)x y:(CGFloat)y instead.
You should use the begin-context version that allows you to specify options, UIGraphicsBeginImageContextWithOptions, and pass 0 as the scale so you don't lose any quality on Retina displays.
If you want one image drawn on top of another image, just draw the one in back, then the one in front, exactly as if you were using paint. There is no need to use blend modes.
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[topImage drawInRect:CGRectMake(*x,*y,topImage.size.width,topImage.size.height)];
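Putting both suggestions together, a revised version of the method might look like this (a sketch only, not the only way to write it):

// Same merge, but Retina-aware and with plain CGFloat parameters.
- (UIImage *)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat)x y:(CGFloat)y
{
    CGSize newSize = image.size;

    // Scale 0 picks up the device's screen scale, so the result stays sharp on Retina displays.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);

    // Draw back-to-front; no blend mode needed.
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    [topImage drawInRect:CGRectMake(x, y, topImage.size.width, topImage.size.height)];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}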
I need to take a snapshot of my current iPad view. That view loads and plays a video.
I found this function, which almost works really well.
- (UIImage *)captureView:(UIView *)view
{
    CGRect rect = [[UIScreen mainScreen] bounds];

    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return img;
}
(Source from: LINK)
The problem is that the current frame of the playing video is not captured; I get the view, but without the video content. Did I forget to update the display or anything else before saving the image? Is there a special function that reads the latest screen values?
Thanks for your time and help.
Have you tried using the UIGetScreenImage() function?
It's private, but Apple allows you to use it, so your app will pass App Store validation even if you use it.
You just need to declare its prototype, so the compiler won't complain:
CGImageRef UIGetScreenImage( void );
Note: you can create a UIImage from the CGImageRef with the following UIImage method:
- (id)initWithCGImage:(CGImageRef)cgImage;
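Putting it together, a sketch (assuming the usual rule that you own, and must release, the CGImageRef the function returns; the method name is illustrative):

// Declare the private function's prototype so the compiler doesn't complain.
CGImageRef UIGetScreenImage(void);

- (UIImage *)captureScreenIncludingVideo
{
    // UIGetScreenImage() reads back the actual framebuffer, so the current video frame is included.
    CGImageRef screen = UIGetScreenImage();
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    return image;
}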
I'm having problems drawing rotated images into a PDF; my output comes out wrong.
In my case we don't have any fixed coordinates; x, y, rotation, etc. depend on the image view itself. I select the image view and rotate it with sliders.
Check out the ZOSH application; I need to implement functionality like that app's. I want to build the PDF by adding images one by one.
Please send me a link to any example that can help me out; I'm stuck here. I'm drawing the image on the PDF based on the center of the image view.
Please help me. Thank you.
I was having the same problem; after a fair amount of time I was able to find a solution. It might be helpful to you:
- (UIImage *)RotateImage:(UIImage *)image :(float)angle
{
    // Convert to radians; the sign is flipped to match the slider's rotation direction.
    CGFloat angleInRadians = -1 * angle * (M_PI / 180.0);

    // Work out how big the bounding box of the rotated image will be.
    CGAffineTransform transform = CGAffineTransformMakeRotation(angleInRadians);
    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0, 0, image.size.width, image.size.height),
                                                    transform);

    UIGraphicsBeginImageContext(rotatedRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Flip the context vertically, since CGContextDrawImage uses a bottom-left origin.
    CGContextTranslateCTM(context, 0, rotatedRect.size.height);
    CGContextScaleCTM(context, 1, -1);

    // Shift the origin so the rotated bounding box starts at (0,0), then rotate and draw.
    CGContextTranslateCTM(context, (rotatedRect.origin.x) * -1, (rotatedRect.origin.y) * -1);
    CGContextRotateCTM(context, angleInRadians);

    CGImageRef temp = [image CGImage];
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), temp);

    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
For zooming issues, I suggest you try making a subclass of the application window.
I found this web page that shows how to subclass the application window so as to observe taps and forward those taps to the view controller.
http://mithin.in/2009/08/26/detecting-taps-and-events-on-uiwebview-the-right-way/
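A minimal sketch of that idea (the class name and the block property are illustrative; the linked article forwards the events to a view controller instead):

// A UIWindow subclass that sees every event before the rest of the app does.
@interface EventInterceptWindow : UIWindow
@property (nonatomic, copy) void (^eventHandler)(UIEvent *event);
@end

@implementation EventInterceptWindow

- (void)sendEvent:(UIEvent *)event
{
    // Let the observer peek at the event, then deliver it normally.
    if (self.eventHandler) {
        self.eventHandler(event);
    }
    [super sendEvent:event];
}

@end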
Hope that helps you...
The answer is in Apple's documentation. Listings 13-3 and 13-4 seem like what you are after.
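For reference, the basic pattern for drawing an image into an open PDF context, rotated about the point where the image view's centre should be, looks roughly like this (a sketch; the function name and parameters are placeholders):

// Draw an image into an already-open PDF context, rotated about a centre point.
void DrawRotatedImageInPDF(CGContextRef pdfContext, UIImage *image,
                           CGPoint center, CGFloat degrees)
{
    CGContextSaveGState(pdfContext);

    // Move the origin to where the image's centre should land, then rotate around it.
    CGContextTranslateCTM(pdfContext, center.x, center.y);
    CGContextRotateCTM(pdfContext, degrees * M_PI / 180.0);

    // Depending on how the PDF context was created (UIKit vs. raw Quartz),
    // you may also need to flip the y-axis here before drawing.
    CGRect rect = CGRectMake(-image.size.width / 2, -image.size.height / 2,
                             image.size.width, image.size.height);
    CGContextDrawImage(pdfContext, rect, image.CGImage);

    CGContextRestoreGState(pdfContext);
}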
I'm trying to write an animation on the iPhone, without much success; I'm getting crashes and nothing seems to work.
What I want to do appears simple: create a UIImage and draw part of another UIImage into it. I got a bit confused with the contexts, layers, and so on.
Could someone please explain how to write something like that (efficiently), with example code?
For the record, this turns out to be fairly straightforward - everything you need to know is somewhere in the example below:
+ (UIImage *)addStarToThumb:(UIImage *)thumb
{
    CGSize size = CGSizeMake(50, 50);
    UIGraphicsBeginImageContext(size);

    // Draw the thumbnail vertically centred in the 50x50 canvas.
    CGPoint thumbPoint = CGPointMake(0, 25 - thumb.size.height / 2);
    [thumb drawAtPoint:thumbPoint];

    // Draw the star overlay on top.
    UIImage *starred = [UIImage imageNamed:@"starred.png"];
    CGPoint starredPoint = CGPointMake(0, 0);
    [starred drawAtPoint:starredPoint];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
I just want to add a comment about the answer above by dpjanes: it is a good answer, but it will look blocky on the iPhone 4 (with its high-resolution Retina display), since UIGraphicsBeginImageContext() does not render at the full resolution of an iPhone 4.
Use the "...WithOptions()" variant instead. But since WithOptions is not available until iOS 4.0, you could weak-link it (discussed here) and then use the following code to take the hi-res path only when it is supported:
if (UIGraphicsBeginImageContextWithOptions != NULL) {
    // Scale 0.0 means "use the main screen's scale", so Retina devices render at full resolution.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
}
else {
    // Fallback for iOS versions before 4.0, where the WithOptions variant doesn't exist.
    UIGraphicsBeginImageContext(size);
}
Here is an example that merges two images of the same size into one. I don't know if this is the best way, or whether this kind of code is posted somewhere else. Here are my two cents.
+ (UIImage *)mergeBackImage:(UIImage *)backImage withFrontImage:(UIImage *)frontImage
{
    UIImage *newImage;
    CGRect rect = CGRectMake(0, 0, backImage.size.width, backImage.size.height);

    // Begin context (scale 0 uses the device's screen scale).
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);

    // Draw images, back first, then front.
    [backImage drawInRect:rect];
    [frontImage drawInRect:rect];

    // Grab the composited image.
    newImage = UIGraphicsGetImageFromCurrentImageContext();

    // End context.
    UIGraphicsEndImageContext();

    return newImage;
}
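A quick usage sketch (the ImageMerger class name and the image names are placeholders for wherever the method actually lives):

UIImage *background = [UIImage imageNamed:@"background.png"];
UIImage *overlay = [UIImage imageNamed:@"overlay.png"];

// Both images are assumed to be the same size, as noted above.
UIImage *merged = [ImageMerger mergeBackImage:background withFrontImage:overlay];
imageView.image = merged;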
Hope this helps.