Changing an image view's bounds depending on the image's size - iPhone

I am trying to make the UIImageView's bounds change depending on the image size, but I do not know why the bounds will not update:
@interface AsunaEventViewController ()
@property (weak, nonatomic) IBOutlet UIImageView *AsunaImageView;
@end

@implementation AsunaEventViewController

// (inside a method; CGRect has no `center` field, so use CGRectGetMidX/MidY)
CGFloat x = CGRectGetMidX(self.AsunaImageView.bounds);
CGFloat y = CGRectGetMidY(self.AsunaImageView.bounds);
if (self.answer == 0) {
    CGSize imageSize = [[UIImage imageNamed:@"qwr.png"] size];
    CGRect imageBounds = CGRectMake(x - imageSize.width / 2, y - imageSize.height / 2, imageSize.width, imageSize.height);
    self.AsunaImageView.frame = imageBounds;
    self.AsunaImageView.image = [UIImage imageNamed:@"rts.png"];
}

Change the image name of the UIImageView:
if (self.answer == 0) {
    CGSize imageSize = [[UIImage imageNamed:@"qwr.png"] size];
    CGRect imageBounds = CGRectMake(x - imageSize.width / 2, y - imageSize.height / 2, imageSize.width, imageSize.height);
    self.AsunaImageView.frame = imageBounds;
    self.AsunaImageView.image = [UIImage imageNamed:@"qwr.png"];
}
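For completeness, a minimal self-contained version of the same idea (the method name is hypothetical; the image names are taken from the question): size the frame from the image first, then recentre the view.
- (void)showAnswerImage
{
    // Remember where the view was centred before resizing it.
    CGPoint oldCenter = self.AsunaImageView.center;
    UIImage *image = [UIImage imageNamed:@"qwr.png"];
    // Match the view's size to the image's natural size, then restore the centre.
    self.AsunaImageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);
    self.AsunaImageView.center = oldCenter;
    self.AsunaImageView.image = image;
}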

Related

Resize UIImage and change the size of UIImageView

I have this UIImageView and I have the values of its max height and max width. What I want to achieve is to take an image (of any aspect ratio and any resolution) and make it fit within those borders, so the picture never exceeds them but can shrink them as it needs (marked red in the picture):
Right now the image fits the necessary size properly, but I have 2 worries:
1. The UIImageView is not equal to the size of the resized image, thus leaving red background showing (and I don't want that).
2. If the image is smaller than the height of my UIImageView, it is not resized to be smaller; it stays the same height.
Here's my code, and I know it's wrong:
UIImage *actualImage = [attachmentsArray lastObject];
UIImageView *attachmentImageNew = [[UIImageView alloc] initWithFrame:CGRectMake(5.5, 6.5, 245, 134)];
attachmentImageNew.image = actualImage;
attachmentImageNew.backgroundColor = [UIColor redColor];
attachmentImageNew.contentMode = UIViewContentModeScaleAspectFit;
So how do I dynamically change the size not only of the UIImageView.image, but of the whole UIImageView, making its size fully adjustable to its content? Any help would be much appreciated, thanks!
Once you get the width and height of the resized image (see "Get width of a resized image after UIViewContentModeScaleAspectFit"), you can resize your imageView:
imageView.frame = CGRectMake(0, 0, resizedWidth, resizedHeight);
imageView.center = imageView.superview.center;
I haven't checked if it works, but I think all should be OK
- (UIImage *)image:(UIImage*)originalImage scaledToSize:(CGSize)size
{
//avoid redundant drawing
if (CGSizeEqualToSize(originalImage.size, size))
{
return originalImage;
}
//create drawing context
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
//draw
[originalImage drawInRect:CGRectMake(0.0f, 0.0f, size.width, size.height)];
//capture resultant image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return image
return image;
}
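A hypothetical call site for the method above (image name and target size are illustrative):
UIImage *original = [UIImage imageNamed:@"photo.png"];
UIImage *thumbnail = [self image:original scaledToSize:CGSizeMake(100.0f, 100.0f)];
imageView.image = thumbnail;
imageView.frame = CGRectMake(0, 0, thumbnail.size.width, thumbnail.size.height);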
This is the Swift equivalent of Rajneesh071's answer, using an extension:
extension UIImage {
func scaleToSize(aSize :CGSize) -> UIImage {
if (CGSizeEqualToSize(self.size, aSize)) {
return self
}
UIGraphicsBeginImageContextWithOptions(aSize, false, 0.0)
self.drawInRect(CGRectMake(0.0, 0.0, aSize.width, aSize.height))
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
Usage:
let image = UIImage(named: "Icon")
item.icon = image?.scaleToSize(CGSize(width: 30.0, height: 30.0))
Use the category below, and then apply a border from Quartz to your image:
[yourimage.layer setBorderColor:[[UIColor whiteColor] CGColor]];
[yourimage.layer setBorderWidth:2];
The category:
UIImage+AutoScaleResize.h
#import <Foundation/Foundation.h>
@interface UIImage (AutoScaleResize)
- (UIImage *)imageByScalingAndCroppingForSize:(CGSize)targetSize;
@end
UIImage+AutoScaleResize.m
#import "UIImage+AutoScaleResize.h"
@implementation UIImage (AutoScaleResize)
- (UIImage *)imageByScalingAndCroppingForSize:(CGSize)targetSize
{
UIImage *sourceImage = self;
UIImage *newImage = nil;
CGSize imageSize = sourceImage.size;
CGFloat width = imageSize.width;
CGFloat height = imageSize.height;
CGFloat targetWidth = targetSize.width;
CGFloat targetHeight = targetSize.height;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetWidth;
CGFloat scaledHeight = targetHeight;
CGPoint thumbnailPoint = CGPointMake(0.0,0.0);
if (CGSizeEqualToSize(imageSize, targetSize) == NO)
{
CGFloat widthFactor = targetWidth / width;
CGFloat heightFactor = targetHeight / height;
if (widthFactor > heightFactor)
{
scaleFactor = widthFactor; // scale to fit height
}
else
{
scaleFactor = heightFactor; // scale to fit width
}
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the image
if (widthFactor > heightFactor)
{
thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
}
else
{
if (widthFactor < heightFactor)
{
thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
}
}
}
UIGraphicsBeginImageContext(targetSize); // this will crop
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = thumbnailPoint;
thumbnailRect.size.width = scaledWidth;
thumbnailRect.size.height = scaledHeight;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
if(newImage == nil)
{
NSLog(@"could not scale image");
}
//pop the context to get back to the default
UIGraphicsEndImageContext();
return newImage;
}
@end
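Usage might then look like this, reusing the names from the question (the target size is illustrative):
#import "UIImage+AutoScaleResize.h"
UIImage *scaled = [actualImage imageByScalingAndCroppingForSize:CGSizeMake(245.0f, 134.0f)];
attachmentImageNew.image = scaled;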
If you have the size of the image, why don't you set the frame.size of the image view to be of this size?
EDIT----
Ok, so seeing your comment I propose this:
UIImageView *imageView;
//so let's say you're image view size is set to the maximum size you want
CGFloat maxWidth = imageView.frame.size.width;
CGFloat maxHeight = imageView.frame.size.height;
CGFloat viewRatio = maxWidth / maxHeight;
CGFloat imageRatio = image.size.height / image.size.width;
if (imageRatio > viewRatio) {
CGFloat imageViewHeight = round(maxWidth * imageRatio);
imageView.frame = CGRectMake(0, ceil((self.bounds.size.height - imageViewHeight) / 2.f), maxWidth, imageViewHeight);
}
else if (imageRatio < viewRatio) {
CGFloat imageViewWidth = roundf(maxHeight / imageRatio);
imageView.frame = CGRectMake(ceil((maxWidth - imageViewWidth) / 2.f), 0, imageViewWidth, maxHeight);
} else {
//your image view is already at the good size
}
This code will resize your image view to its image ratio, and also position the image view to the same centre as your "default" position.
PS: I hope you're setting imageView.layer.shouldRasterize = YES
and imageView.layer.rasterizationScale = [UIScreen mainScreen].scale;
if you're using CALayer shadow effect ;) It will greatly improve the performance of your UI.
I think what you want is a different content mode. Try using UIViewContentModeScaleToFill. This will scale the content to fit the size of your UIImageView, changing the aspect ratio of the content if necessary.
Have a look at the content-mode section of the official docs to get a better idea of the different content modes available (it is illustrated with images).
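For example, with the image view from the question:
// Stretch the content to fill the whole image view, distorting the aspect ratio if needed.
attachmentImageNew.contentMode = UIViewContentModeScaleToFill;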
if([[SDWebImageManager sharedManager] diskImageExistsForURL:[NSURL URLWithString:@"URL STRING1"]])
{
NSString *key = [[SDWebImageManager sharedManager] cacheKeyForURL:[NSURL URLWithString:@"URL STRING1"]];
UIImage *tempImage=[self imageWithImage:[[SDImageCache sharedImageCache] imageFromDiskCacheForKey:key] scaledToWidth:cell.imgview.bounds.size.width];
cell.imgview.image=tempImage;
}
else
{
[cell.imgview sd_setImageWithURL:[NSURL URLWithString:@"URL STRING1"] placeholderImage:nil completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL)
{
UIImage *tempImage=[self imageWithImage:image scaledToWidth:cell.imgview.bounds.size.width];
cell.imgview.image=tempImage;
// [tableView beginUpdates];
// [tableView endUpdates];
}];
}
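Note that this snippet relies on an imageWithImage:scaledToWidth: helper that isn't shown; a minimal sketch of what it might look like, preserving the aspect ratio:
- (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToWidth:(CGFloat)width
{
    // Scale the height proportionally so the aspect ratio is preserved.
    CGFloat scaleFactor = width / sourceImage.size.width;
    CGFloat height = sourceImage.size.height * scaleFactor;
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 0.0f);
    [sourceImage drawInRect:CGRectMake(0.0f, 0.0f, width, height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}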

Extract a part of UIImageView

I was wondering if it's possible to "extract" a part of UIImageView.
For example, I select a part of the UIImageView using warp affine, and I know the selected part's frame,
like in this image:
Is it possible to get from the original UIImageView only the selected part without losing quality?
Get the snapshot of the view via category method:
@implementation UIView (Snapshot)
-(UIImage*)makeSnapshot
{
CGRect wholeRect = self.bounds;
UIGraphicsBeginImageContextWithOptions(wholeRect.size, YES, [UIScreen mainScreen].scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, wholeRect);
[self.layer renderInContext:ctx];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
@end
then crop it to your rect via another category method:
@implementation UIImage (Crop)
-(UIImage*)cropFromRect:(CGRect)fromRect
{
fromRect = CGRectMake(fromRect.origin.x * self.scale,
fromRect.origin.y * self.scale,
fromRect.size.width * self.scale,
fromRect.size.height * self.scale);
CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, fromRect);
UIImage* crop = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
CGImageRelease(imageRef);
return crop;
}
@end
in your VC:
UIImage* snapshot = [self.imageView makeSnapshot];
UIImage* imageYouNeed = [snapshot cropFromRect:selectedRect];
selectedRect should be in your self.imageView coordinate system; if it is not, then use
selectedRect = [self.imageView convertRect:selectedRect fromView:...]
Yes, it's possible. First you should get the UIImageView's image, using this property:
@property(nonatomic, retain) UIImage *image;
And then the UIImage's CGImage property:
@property(nonatomic, readonly) CGImageRef CGImage;
Then you get the cut image:
CGImageRef cutImage = CGImageCreateWithImageInRect(yourCGImageRef, CGRectMake(x, y, w, h));
If you want again a UIImage you should use this UIImage's method:
+ (UIImage *)imageWithCGImage:(CGImageRef)cgImage;
PS: I don't know how to do it directly, without converting to a CGImageRef; maybe there's a way.
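Put together, the whole round trip might look like this sketch (imageView and selectedRect are placeholder names; note that the rect is in the pixel coordinates of the underlying CGImage):
CGImageRef cutImageRef = CGImageCreateWithImageInRect(imageView.image.CGImage, selectedRect);
UIImage *cutImage = [UIImage imageWithCGImage:cutImageRef];
CGImageRelease(cutImageRef); // CGImageCreateWithImageInRect returns an owned reference
imageView.image = cutImage;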

How can I dynamically set the size of the imageView according to the size of the image

I use Interface Builder to lay out my view and image view. I put an imageView on the view and set it as an outlet. The problem is that no matter what size of image (768×1024, 1024×700) I set in my program:
self.imageView.image = [UIImage imageNamed:self.filename];
The size on the screen is always the size of the imageView I set in Interface Builder. How can I dynamically set the size of the imageView according to the size of the image? Thanks!
Something like:
UIImageView *imageView = [[UIImageView alloc] init];
UIImage *image = [UIImage imageNamed:@"image.png"];
CGSize imageSize = [image size];
imageView.frame = CGRectMake(0, 0, imageSize.width, imageSize.height);
UIImage *image = [UIImage imageNamed:self.filename];
CGFloat width = image.size.width;
CGFloat height = image.size.height;
self.imageView.frame = CGRectMake(x, y, width, height); // x and y are whatever origin you want
You could subclass your image view and override setImage: so the frame is updated each time a new image is set:
@interface MyImageSubClass : UIImageView
@end

@implementation MyImageSubClass

- (void)setImage:(UIImage *)image {
    // Let the original UIImageView parent class do its job
    [super setImage:image];
    // Now set the frame according to the image size,
    // keeping the original position of your image frame
    if (image) {
        CGRect r = self.frame;
        CGSize imageSize = [image size];
        self.frame = CGRectMake(r.origin.x, r.origin.y, imageSize.width, imageSize.height);
    }
}

@end
UIImage *buttonImage2 = [UIImage imageNamed:@"blank_button_blue.png"];
oButton=[[UIButton alloc] init];
// [oButton setImage:buttonImage2 forState:UIControlStateNormal];
[oButton setFrame:CGRectMake(0, 0, buttonImage2.size.width, buttonImage2.size.height)];
UIImage *sourceImage = [UIImage imageNamed:self.filename];
CGRect rect = CGRectZero; // origin stays at (0, 0)
rect.size.width = CGImageGetWidth(sourceImage.CGImage);
rect.size.height = CGImageGetHeight(sourceImage.CGImage);
[yourImageView setFrame:rect];

10 degrees rotation image cut off

I rotate my image with:
UIImage *image = [UIImage imageNamed:@"doneBtn.png"];
CGImageRef imgRef = image.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
transform = CGAffineTransformRotate(transform, degreesToRadians(10));
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, transform);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0,0,width,height), imgRef);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
img.image=imageCopy;
When I display the image, I see it rotated by my 10 degrees, but the image is cut off at the edges.
Does somebody know how to fix this? Here's an image for clarification:
(screenshot: http://img197.imageshack.us/img197/7077/problemkfr.png)
I was trying all day yesterday, but I can't figure it out.
Another example image:
(screenshot: http://img25.imageshack.us/img25/3370/afbeelding6b.png)
CODE IN SliderAppViewController.h
#import <UIKit/UIKit.h>
@interface SliderAppViewController : UIViewController {
IBOutlet UIImageView *imgView;
IBOutlet UISlider *mySlider;
}
@property (nonatomic, retain) IBOutlet UISlider *mySlider;
- (IBAction) sliderValueChanged:(id)sender;
@end
CODE IN SliderAppViewController.m
#import "SliderAppViewController.h"
@implementation SliderAppViewController
@synthesize mySlider;
- (IBAction) sliderValueChanged:(UISlider *)sender {
imgView.transform = CGAffineTransformMakeRotation(45); // note: the argument is in radians, so this is 45 radians, not 45 degrees
}
- (void)viewDidLoad {
mySlider.minimumValue = 1.0;
mySlider.maximumValue = 100.0;
//other code
}
Then make the connections in IB.
Your problem here is the anchor point of your CALayer. I don't have example code right here with me, but if you play with the anchorPoint property of the view's CALayer you should be able to get the view to look the way you want; a rough sketch of the idea follows.
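A rough, untested sketch of the anchor-point idea (here rotating imgView around its top-left corner instead of its centre):
CALayer *layer = imgView.layer;
CGPoint oldPosition = layer.position;
// anchorPoint is in unit coordinates; (0, 0) is the top-left corner.
layer.anchorPoint = CGPointMake(0.0, 0.0);
// Moving the anchor shifts the layer on screen, so compensate via position.
layer.position = CGPointMake(oldPosition.x - layer.bounds.size.width / 2.0,
                             oldPosition.y - layer.bounds.size.height / 2.0);
imgView.transform = CGAffineTransformMakeRotation(10.0 * M_PI / 180.0);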
I solved it now: I had to change the width/height values of the bounds and the x/y in CGContextDrawImage:
CGRect bounds = CGRectMake(0, 0, width+10, height+10);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(10,0,width,height), imgRef);
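More generally, the +10 padding can be derived from the rotation angle instead of being hard-coded. A sketch (following the same drawing pattern, assuming the same degreesToRadians macro): compute the bounding box of the rotated image, then rotate around the centre of the enlarged canvas.
CGFloat radians = degreesToRadians(10);
// Bounding box of a width x height rect rotated by `radians`.
CGFloat rotatedWidth = fabs(width * cos(radians)) + fabs(height * sin(radians));
CGFloat rotatedHeight = fabs(width * sin(radians)) + fabs(height * cos(radians));
UIGraphicsBeginImageContext(CGSizeMake(rotatedWidth, rotatedHeight));
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, rotatedWidth / 2, rotatedHeight / 2);
CGContextRotateCTM(context, radians);
CGContextScaleCTM(context, 1.0, -1.0); // CGImages draw flipped in a UIKit context
CGContextDrawImage(context, CGRectMake(-width / 2, -height / 2, width, height), imgRef);
UIImage *rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();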

How do I create/render a UIImage from a 3D transformed UIImageView?

After applying a 3d transform to an UIImageView.layer, I need to save the resulting "view" as a new UIImage... Seemed like a simple task at first :-) but no luck so far, and searching hasn't turned up any clues :-( so I'm hoping someone will be kind enough to point me in the right direction.
A very simple iPhone project is available here.
Thanks.
- (void)transformImage {
float degrees = 12.0;
float zDistance = 250;
CATransform3D transform3D = CATransform3DIdentity;
transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective transform on y-axis
imageView.layer.transform = transform3D;
}
/* FAIL : capturing layer contents doesn't get the transformed image -- just the original
CGImageRef newImageRef = (CGImageRef)imageView.layer.contents;
UIImage *image = [UIImage imageWithCGImage:newImageRef];
*/
/* FAIL : docs for renderInContext states that it does not render 3D transforms
UIGraphicsBeginImageContext(imageView.image.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
*/
//
// header
//
#import <QuartzCore/QuartzCore.h>
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180)
UIImageView *imageView;
@property (nonatomic, retain) IBOutlet UIImageView *imageView;
//
// code
//
@synthesize imageView;
- (void)transformImage {
float degrees = 12.0;
float zDistance = 250;
CATransform3D transform3D = CATransform3DIdentity;
transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective transform on y-axis
imageView.layer.transform = transform3D;
}
- (UIImage *)captureView:(UIImageView *)view {
UIGraphicsBeginImageContext(view.frame.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
NSString *title = @"Save to Photo Album";
NSString *message = (error ? [error description] : @"Success!");
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title message:message delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
[alert release];
}
- (IBAction)saveButtonClicked:(id)sender {
UIImage *newImage = [self captureView:imageView];
UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}
I ended up creating a render method that works pixel by pixel on the CPU, using the inverse of the view transform.
Basically, it renders the original UIImageView into a UIImage. Then every pixel in the UIImage is multiplied by the inverse transform matrix to generate the transformed UIImage.
RenderUIImageView.h
#import <UIKit/UIKit.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
@interface RenderUIImageView : UIImageView
- (UIImage *)generateImage;
@end
RenderUIImageView.m
#import "RenderUIImageView.h"
@interface RenderUIImageView ()
@property (assign) CATransform3D transform;
@property (assign) CGRect rect;
@property (assign) float denominatorx;
@property (assign) float denominatory;
@property (assign) float denominatorw;
@property (assign) float factor;
@end
@implementation RenderUIImageView
- (UIImage *)generateImage
{
_transform = self.layer.transform;
_denominatorx = _transform.m12 * _transform.m21 - _transform.m11 * _transform.m22 + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41 - _transform.m14 * _transform.m21 * _transform.m42 +
_transform.m11 * _transform.m24 * _transform.m42;
_denominatory = -_transform.m12 *_transform.m21 + _transform.m11 *_transform.m22 - _transform.m14 *_transform.m22 *_transform.m41 + _transform.m12 *_transform.m24 *_transform.m41 + _transform.m14 *_transform.m21 *_transform.m42 -
_transform.m11* _transform.m24 *_transform.m42;
_denominatorw = _transform.m12 *_transform.m21 - _transform.m11 *_transform.m22 + _transform.m14 *_transform.m22 *_transform.m41 - _transform.m12 *_transform.m24 *_transform.m41 - _transform.m14 *_transform.m21 *_transform.m42 +
_transform.m11 *_transform.m24 *_transform.m42;
_rect = self.bounds;
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(_rect.size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(_rect.size);
}
if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
([UIScreen mainScreen].scale == 2.0)) {
_factor = 2.0f;
} else {
_factor = 1.0f;
}
UIImageView *img = [[UIImageView alloc] initWithFrame:_rect];
img.image = self.image;
[img.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *source = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGContextRef ctx;
CGImageRef imageRef = [source CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *inputData = malloc(height * width * 4);
unsigned char *outputData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(inputData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
context = CGBitmapContextCreate(outputData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
// release the colour space once, after both contexts have been created
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
for (int ii = 0 ; ii < width * height ; ++ii)
{
int x = ii % width;
int y = ii / width;
int indexOutput = 4 * x + 4 * width * y;
CGPoint p = [self modelToScreen:(x*2/_factor - _rect.size.width)/2.0 :(y*2/_factor - _rect.size.height)/2.0];
p.x *= _factor;
p.y *= _factor;
int indexInput = 4*(int)p.x + (4*width*(int)p.y);
if (p.x >= width || p.x < 0 || p.y >= height || p.y < 0 || indexInput > width * height *4)
{
outputData[indexOutput] = 0.0;
outputData[indexOutput+1] = 0.0;
outputData[indexOutput+2] = 0.0;
outputData[indexOutput+3] = 0.0;
}
else
{
outputData[indexOutput] = inputData[indexInput];
outputData[indexOutput+1] = inputData[indexInput + 1];
outputData[indexOutput+2] = inputData[indexInput + 2];
outputData[indexOutput+3] = 255.0;
}
}
ctx = CGBitmapContextCreate(outputData, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), 8, CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef), kCGImageAlphaPremultipliedLast);
CGImageRef outputRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:outputRef];
CGImageRelease(outputRef); // we own this image; UIImage keeps its own reference
CGContextRelease(ctx);
free(inputData);
free(outputData);
return rawImage;
}
- (CGPoint) modelToScreen : (float) x: (float) y
{
float xp = (_transform.m22 *_transform.m41 - _transform.m21 *_transform.m42 - _transform.m22* x + _transform.m24 *_transform.m42 *x + _transform.m21* y - _transform.m24* _transform.m41* y) / _denominatorx;
float yp = (-_transform.m11 *_transform.m42 + _transform.m12 * (_transform.m41 - x) + _transform.m14 *_transform.m42 *x + _transform.m11 *y - _transform.m14 *_transform.m41* y) / _denominatory;
float wp = (_transform.m12 *_transform.m21 - _transform.m11 *_transform.m22 + _transform.m14*_transform.m22* x - _transform.m12 *_transform.m24* x - _transform.m14 *_transform.m21* y + _transform.m11 *_transform.m24 *y) / _denominatorw;
CGPoint result = CGPointMake(xp/wp, yp/wp);
return result;
}
@end
Theoretically, you could use the (now-allowed) undocumented call UIGetScreenImage() after quickly rendering it to the screen on a black background, but in practice this will be slow and ugly, so don't use it ;P.
I had the same problem as you, and I found the solution!
I wanted to rotate the UIImageView, because I had an animation on it.
To save the image, I use this method:
void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)
The transform parameter is the transform of your UIImageView, so anything you have done to the imageView will be applied to the image as well.
I wrote a category method on UIImage:
-(UIImage *)imageRotateByTransform:(CGAffineTransform)transform{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,self.size.width, self.size.height)];
rotatedViewBox.transform = transform;
CGSize rotatedSize = rotatedViewBox.frame.size;
[rotatedViewBox release];
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
//Rotate the image context using the transform
CGContextConcatCTM(bitmap, transform);
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Hope this will help you.
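A hypothetical call site, baking an image view's current transform into its image:
UIImage *rotated = [imageView.image imageRotateByTransform:imageView.transform];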
Have you had a look at this? UIImage from UIView
I had the same problem. I was able to use UIView's drawViewHierarchyInRect:afterScreenUpdates: method, available from iOS 7.0
(Documentation)
It draws the whole tree as it appears on the screen.
UIGraphicsBeginImageContextWithOptions(viewToRender.bounds.size, YES, 0);
[viewToRender drawViewHierarchyInRect:viewToRender.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Let's say you have a UIImageView called imageView.
If you apply a 3D transform and try to render this view with UIGraphicsImageRenderer, the transform is ignored:
imageView.layer.transform = someTransform3d
But if you convert the CATransform3D to a CGAffineTransform using CATransform3DGetAffineTransform and apply it to the image view's transform property, it works (note that this only applies when the 3D transform can actually be represented as an affine transform):
imageView.transform = CATransform3DGetAffineTransform(someTransform3d)
And then, you can use the extension below to save it as UIImage
extension UIView {
func asImage() -> UIImage {
let renderer = UIGraphicsImageRenderer(bounds: bounds)
return renderer.image { rendererContext in
layer.render(in: rendererContext.cgContext)
}
}
}
And just call
let image = imageView.asImage()
In your captureView: method, try replacing this line:
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
with this:
[view.layer.superlayer renderInContext:UIGraphicsGetCurrentContext()];
You may have to adjust the size you use to create the image context.
I don't see anything in the API doc that says renderInContext: ignores 3D transformations. However, the transformations apply to the layer, not its contents, which is why you need to render the superlayer to see the transformation applied.
Note that calling drawRect: on the superview definitely won't work, as drawRect: does not draw subviews.
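Put together, an adjusted capture method based on this suggestion might look like the sketch below (untested; the capture size is a judgment call, since the superlayer can be larger than the image view):
- (UIImage *)captureTransformedView:(UIImageView *)view
{
    // Render the superlayer so the transform applied to view.layer is baked in.
    CGSize captureSize = view.superview.bounds.size;
    UIGraphicsBeginImageContext(captureSize);
    [view.layer.superlayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}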
3D transform on UIImage / CGImageRef
I've improved on Marcos Fuentes's answer. You should be able to calculate the mapping of each pixel yourself. Not perfect, but it does the trick...
It is available in this repository: http://github.com/hfossli/AGGeometryKit/
The interesting files are:
https://github.com/hfossli/AGGeometryKit/blob/master/Source/AGTransformPixelMapper.m
https://github.com/hfossli/AGGeometryKit/blob/master/Source/CGImageRef%2BCATransform3D.m
https://github.com/hfossli/AGGeometryKit/blob/master/Source/UIImage%2BCATransform3D.m
3D transform on UIView / UIImageView
https://stackoverflow.com/a/12820877/202451
Then you will have full control over each point in the quadrilateral. :)
A solution I found that at least worked in my case was to subclass CALayer. When a renderInContext: message is sent to a layer, that layer automatically forwards that message to all its sublayers. So all I had to do was to subclass CALayer and override the renderInContext: method and render what I needed to be rendered in the provided context.
For example, in my code I had a layer for which I was setting its contents to an image of an arrow:
UIImage *image = [UIImage imageNamed:@"arrow.png"];
MYLayer *myLayer = [[MYLayer alloc] init]; // instantiate the subclass, not plain CALayer
[myLayer setContents:(__bridge id)[image CGImage]];
[self.mainLayer addSublayer:myLayer];
Now when I applied a 3D 180-degree rotation over the Y-axis on the arrow and tried to do a [self.mainLayer renderInContext:context] afterwards, I was still getting the un-rotated image.
So in my subclass MYLayer I overrode renderInContext: and used an already-rotated image to draw in the provided context:
- (void)renderInContext:(CGContextRef)ctx
{
NSLog(@"Rendered in context");
UIImage *image = [UIImage imageNamed:@"arrow_rotated.png"];
CGContextDrawImage(ctx, self.bounds, image.CGImage);
}
This worked in my case, however I can see that if you are doing lots of 3D transforms you may not be able to have an image ready for every possible scenario. In many other cases though it should be possible to render the result of 3D transform using 2D transforms in the passed context. For example in my case instead of using a different image arrow_rotated.png I could use the arrow.png image and mirror it and draw it in the context.