Image drawing on PDF fails with transformations and produces wrong output - iPhone

I have a problem drawing images into a PDF. I apply scaling, rotation, translation, etc. to an image and then draw that image into the PDF, but the output is not correct. When I rotate the image, it only scales and does not rotate.
Here is the problem in more detail:
I place an image on a UIWebView to simulate how the image will look on the PDF. Then, while generating the PDF, I draw the image that is on the UIWebView into the PDF.
But when I generate the PDF with the modified image, which has had several transformations applied, the image is scaled instead of rotated.
Here, img_copyright is a custom class that inherits from UIImageView.
// We compute the X,Y of the image here to decide where to place it on the PDF.
CGFloat orgY = (newY - whiteSpace) + img_copyright.frame.size.height;
CGFloat modY = pageRect.size.height - orgY; // flip Y for the PDF coordinate space
CGFloat orgX = img_copyright.frame.origin.x - PDF_PAGE_PADDING_WIDTH;
CGFloat modX = orgX;

// Prepare the rectangle in which the image is to be drawn.
CGRect drawRect = CGRectMake(modX, modY, img_copyright.frame.size.width, img_copyright.frame.size.height);

// Actually draw the image.
CGContextDrawImage(pdfContext, drawRect, [img_copyright.image CGImage]);
[img_copyright release];
But when I look at the PDF, the image is not drawn into it properly.
What would you suggest? Is the problem caused by drawing the image based on its X,Y?
How should we decide where to place the image on the PDF if we don't rely on X,Y?
What is the correct way to draw an image into a PDF with rotation and scale?
Screenshot (freeimagehosting.net): the image as placed on the UIWebView's scroll view.
Screenshot (freeimagehosting.net): the image as drawn onto the PDF.

This is the rotation code I currently use before drawing the image:
CGFloat angleInRadians = -1 * Angle * (M_PI / 180.0);
CGAffineTransform transform = CGAffineTransformMakeRotation(angleInRadians);
CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0, 0, Image.size.width, Image.size.height), transform);

UIGraphicsBeginImageContext(rotatedRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();

// Flip the context vertically so the image is not drawn upside down.
CGContextTranslateCTM(context, 0, rotatedRect.size.height);
CGContextScaleCTM(context, 1, -1);

// Shift so the rotated rect's origin lands at (0,0), then apply the rotation.
CGContextTranslateCTM(context, -rotatedRect.origin.x, -rotatedRect.origin.y);
CGContextRotateCTM(context, angleInRadians);

CGImageRef temp = [Image CGImage];
CGContextDrawImage(context, CGRectMake(0, 0, Image.size.width, Image.size.height), temp);

UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return viewImage;
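For what it's worth, one way to get a rotated image onto the PDF (a minimal sketch, not the code above) is to rotate the PDF context itself around the target rect's center, bracketed by CGContextSaveGState/CGContextRestoreGState, instead of pre-rendering a rotated UIImage. The helper name and the sign convention for the angle are assumptions:

// Hypothetical helper: draws image into pdfContext inside drawRect,
// rotated by angleDegrees around the rect's center.
static void DrawRotatedImageInPDF(CGContextRef pdfContext, UIImage *image, CGRect drawRect, CGFloat angleDegrees)
{
    CGContextSaveGState(pdfContext);

    // Move the origin to the center of the target rect so the rotation
    // happens around the image's center rather than the page origin.
    CGContextTranslateCTM(pdfContext, CGRectGetMidX(drawRect), CGRectGetMidY(drawRect));
    CGContextRotateCTM(pdfContext, -angleDegrees * M_PI / 180.0);

    // Draw centered on the (translated) origin.
    CGRect centered = CGRectMake(-drawRect.size.width / 2.0, -drawRect.size.height / 2.0,
                                 drawRect.size.width, drawRect.size.height);
    CGContextDrawImage(pdfContext, centered, image.CGImage);

    CGContextRestoreGState(pdfContext);
}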

Related

Divide an image into two parts using a divider

I'm working on an app where I need to divide an image into two parts using a red line:
the left part for labels,
the right part for prices.
Question 1.
How can I draw a red line on the image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code in the -(void)viewDidLoad method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the divider view (i.e. bar in koray's code). Once the user has grabbed bar, just update its origin's X position with the touch position, for example:
CGRect barFrame = bar.frame;
barFrame.origin.x = touchLocation.x; // the x-coordinate of the user's touch in the superview
bar.frame = barFrame;
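A fuller sketch of that touch handling, assuming a UIPanGestureRecognizer attached to the view controller's view and a bar property for the divider (names are illustrative):

// Assumed setup elsewhere: a UIPanGestureRecognizer added to self.view with
// this action, and self.bar is the divider view created in viewDidLoad.
- (void)handlePan:(UIPanGestureRecognizer *)recognizer
{
    CGPoint location = [recognizer locationInView:self.view];

    CGRect barFrame = self.bar.frame;
    barFrame.origin.x = location.x - barFrame.size.width / 2.0f; // center the bar on the finger
    self.bar.frame = barFrame;
}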
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper; it will not do what you need.
Try this on for size:
Header:
@interface UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;

@end
Implementation:
@implementation UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
    //pattern image
    UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
    CGFloat width = patternImage.size.width;

    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color from the pattern image color
    CGContextSetFillColorWithColor(context, patternColor.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //the joy of image color patterns being based on a 0,0 origin! must set the phase
    CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));

    //fill the divider rect with the repeating pattern from the image
    CGContextFillRect(context, dividerRect);

    //get your new image and voila!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color for your divider
    CGContextSetFillColorWithColor(context, color.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //fill the divider's rect with the provided color
    CGContextFillRect(context, dividerRect);

    //get your new image and voila!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
    NSMutableArray *slices = [NSMutableArray array];

    //first image
    {
        UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));
        //draw the existing image into the context
        [self drawAtPoint:CGPointZero];
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    //second image
    {
        UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));
        //draw the existing image, shifted left, into the context
        [self drawAtPoint:CGPointMake(-position, 0)];
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    return slices;
}

@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, override drawRect:, or use any number of other solutions. I'd rather give you this category. It just uses a few quick Core Graphics calls to generate an image with your desired divider, be it a pattern image or a color, at the specified position. If you want support for horizontal dividers as well, it is trivial to modify this accordingly. Bonus: you can use a tiled image as your divider!
Now to answer your primary question. Using the category is rather self-explanatory - just call one of the two methods on your source image to generate one with the divider, and then apply that image rather than the original source image.
The second question is simple - when the divider has been moved, regenerate the image based on the new divider position. This is a relatively inefficient way of doing it, but it ought to be lightweight enough for your purposes, and it is only an issue while the divider is being moved. Premature optimization is just as much a sin as anything else.
The third question is also simple - call imagesBySlicingAt:; it will return an array of two images, generated by slicing through the image at the provided position. Use them as you wish. A short usage sketch follows.
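A minimal usage sketch of the category (sourceImage, dividerPosition and imageView are assumed names for your original image, the divider's x-position in image coordinates, and the view displaying the result):

//draw the original image with a 5pt red divider at the current position
UIImage *withDivider = [sourceImage imageWithDividerAt:dividerPosition
                                                 width:5.0f
                                                 color:[UIColor redColor]];
imageView.image = withDivider;

//when the user commits the split, slice the original (divider-free) image
NSArray *slices = [sourceImage imagesBySlicingAt:dividerPosition];
UIImage *labelsImage = [slices objectAtIndex:0]; //left part, for labels
UIImage *pricesImage = [slices objectAtIndex:1]; //right part, for prices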
This code has been tested and is functional. I strongly suggest that you fiddle around with it, not for any utilitarian purpose, but to better understand the mechanisms used, so that next time you can be on the answering side of things.
For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
Hope this can help you :)
If you want to draw a line, you could just use a UIView with a red background, make its height the size of your image, and its width around 5 pixels.
UIView *imageToSplit; //the image I'm trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];

Scale and Save UIImage from Photo Library in iPhone?

I am trying to scale down an image I get from the photo library on the user's touchesMoved, similar to what happens when you take a picture with the camera using UIImagePickerController with editing enabled (or like the Camera app).
I am trying to use the following method, passing in parameters based on touchesMoved, but I am not getting the desired effect. What am I doing wrong?
-(UIImage*)scaleToSize:(UIImage *)img:(CGSize)size
{
    // Create a bitmap graphics context.
    // This will also set it as the current context.
    UIGraphicsBeginImageContext(size);

    // Draw the scaled image in the current context.
    [img drawInRect:CGRectMake(0, 0, size.width, size.height)];

    // Create a new image from the current context.
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();

    // Pop the current context from the stack.
    UIGraphicsEndImageContext();

    // Return our new scaled image.
    return scaledImage;
}

-(void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIImage *img = [self scaleToSize:imgView.image:CGSizeMake(touch1.x, touch1.y)];
    imgView.image = img;
}
Also, how can I save the scaled image once I have scaled it?
Building on your comment: the image distorts because it is drawn into exactly the rectangle you specify, and if the new dimensions do not have the same aspect ratio (width / height) as the original image, it will appear stretched.
You need some logic to ensure that your new width and height have the same aspect ratio, for example:
CGFloat newHeight = imageView.frame.size.height * size.width / imageView.frame.size.width;
If you make your graphics context size.width wide and newHeight tall and then draw your image into that rect, it will maintain the aspect ratio.
You will likely want to add some extra logic to either derive a new width from the height or a new height from the width, depending on which dimension changed the most. A sketch of that aspect-fit calculation follows.
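A minimal sketch, assuming you want the largest size that fits within some target size while preserving the image's aspect ratio (the helper name is illustrative):

// Hypothetical helper: returns the largest size that fits inside targetSize
// while keeping the aspect ratio of imageSize.
static CGSize AspectFitSize(CGSize imageSize, CGSize targetSize)
{
    CGFloat widthRatio  = targetSize.width  / imageSize.width;
    CGFloat heightRatio = targetSize.height / imageSize.height;
    CGFloat scale = MIN(widthRatio, heightRatio); //the smaller ratio keeps both dimensions inside the target

    return CGSizeMake(imageSize.width * scale, imageSize.height * scale);
}

Pass the result to your scaleToSize: method instead of the raw touch coordinates. To save the scaled image to the photo library, UIImageWriteToSavedPhotosAlbum(scaledImage, nil, nil, nil) should do the job.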
Hope this helps,
Dave

iPhone SDK: Problem saving one image over another

Basically, I am making an app in which the user takes a photo, or selects one already on their device, and then places an overlay onto the image.
I seem to have coded everything correctly apart from one thing: after the user has selected and positioned the overlay, the saved image shows the overlay at a different size, while the x and y values seem correct.
And so this is the code I use to add the overlay ("image" being the users photo):
float wid = (overlay.image.size.width);
float hei = (overlay.image.size.height);
overlay.frame = CGRectMake(0, 0, wid, hei);
[image addSubview:overlay];
And this is the code used to save the resulting image:
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
// Draw the overlay
float xx = (overlay.center.x);
float yy = (overlay.center.y);
CGRect aaFrame = overlay.frame;
float width = aaFrame.size.width;
float height = aaFrame.size.height;
[overlay.image drawInRect:CGRectMake(xx, yy, width, height)];
UIGraphicsEndImageContext();
Any help? Thanks
The problem is that you are using the image's size rather than the image view's frame size. The image seems to be much larger than its image view, so when you use the image's size, the overlay ends up much smaller in comparison even though it is still its correct size. You can modify your snippet to this:
UIGraphicsBeginImageContext(image.frame.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.frame.size.width, image.frame.size.height)];
[overlay.image drawInRect:overlay.frame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Avoiding loss of quality
While the above method loses resolution, drawing the parent image at its full resolution can have an unwanted effect on the overlay: if the overlay isn't high resolution itself, it may end up stretched. However, you can try this code to draw at the parent image's resolution (untested, let me know if you have problems):
float verticalScale = image.image.size.height / image.frame.size.height;
float horizontalScale = image.image.size.width / image.frame.size.width;
CGRect overlayFrame = overlay.frame;
overlayFrame.origin.x *= horizontalScale;
overlayFrame.origin.y *= verticalScale;
overlayFrame.size.width *= horizontalScale;
overlayFrame.size.height *= verticalScale;
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
[overlay.image drawInRect:overlayFrame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

How do I rotate a UIImageView by 90 degrees inside a UIScrollView with the correct image size and scrolling?

I have an image inside a UIImageView which is within a UIScrollView. What I want to do is rotate the image 90 degrees so that it is in landscape by default, set the initial zoom so that the entire image fits into the scroll view, and then allow it to be zoomed up to 100% and back down to the minimum zoom again.
This is what I have so far:
self.imageView.transform = CGAffineTransformMakeRotation(-M_PI/2);
float minimumScale = scrollView.frame.size.width / self.imageView.frame.size.width;
scrollView.minimumZoomScale = minimumScale;
scrollView.zoomScale = minimumScale;
scrollView.contentSize = CGSizeMake(self.imageView.frame.size.height,self.imageView.frame.size.width);
The problem is that if I set the transform, nothing shows up in the scroll view. However, if I comment out the transform, everything works except that the image is not in the landscape orientation I want.
If I apply the transform and remove the code that sets the minimumZoomScale and zoomScale properties, the image shows up in the correct orientation, but with the wrong zoom scale, and the contentSize property doesn't seem to be set correctly either: scrolling doesn't reach the left/right edges of the image, while the top and bottom scroll well past the edges.
NB: the image is being loaded from a URL.
Maybe rotating the image itself fits your needs:
UIImage* rotateUIImage(const UIImage* src, float angleDegrees) {
    // Use a throwaway view to compute the bounding box of the rotated image.
    UIView* rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, src.size.width, src.size.height)];
    float angleRadians = angleDegrees * ((float)M_PI / 180.0f);
    CGAffineTransform t = CGAffineTransformMakeRotation(angleRadians);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    [rotatedViewBox release];

    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Rotate around the center of the new canvas, flipping to match UIKit coordinates.
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
    CGContextRotateCTM(bitmap, angleRadians);
    CGContextScaleCTM(bitmap, 1.0, -1.0);

    CGContextDrawImage(bitmap, CGRectMake(-src.size.width / 2, -src.size.height / 2, src.size.width, src.size.height), [src CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I believe the easiest way (and thread safe too) is to do:
//assume that the image is loaded in landscape mode from disk
UIImage * LandscapeImage = [UIImage imageNamed: imgname];
UIImage * PortraitImage = [[UIImage alloc] initWithCGImage: LandscapeImage.CGImage
scale: 1.0
orientation: UIImageOrientationLeft];
Any calculations that you do based on the imageView's frame should probably be done before you apply any transformations to it. But I would actually suggest doing those calculations based on the size of the UIImage, not the UIImageView. Then set both the UIImageView's frame and the UIScrollView's contentSize based on that.
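A minimal sketch of that idea, assuming the image has already been rotated (for example with the category approach above) and that scrollView and imageView are already wired up; the property names are illustrative:

// Assumed: rotatedImage is the already-rotated UIImage.
self.imageView.image = rotatedImage;
self.imageView.frame = CGRectMake(0, 0, rotatedImage.size.width, rotatedImage.size.height);

self.scrollView.contentSize = rotatedImage.size;

// Fit the whole image initially, allow zooming up to 100%.
CGFloat minimumScale = self.scrollView.bounds.size.width / rotatedImage.size.width;
self.scrollView.minimumZoomScale = minimumScale;
self.scrollView.maximumZoomScale = 1.0;
self.scrollView.zoomScale = minimumScale;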
Max's suggestion is a good one, although with a larger image it could be a performance killer. Are you displaying this image from your app's resources? If so, why not just rotate the images before you even build the app?
There's a much easier solution that is also faster; just do this:
- (void) imageRotateTapped:(id)sender
{
[UIView animateWithDuration:0.33f animations:^()
{
// RADIANS() is assumed to be a degrees-to-radians helper, e.g. #define RADIANS(d) ((d) * M_PI / 180.0)
self.imageView.transform = CGAffineTransformMakeRotation(RADIANS(self.rotateDegrees += 90.0f));
self.imageView.frame = self.imageView.superview.bounds; // change this to whatever rect you want
}];
}
When the user is done, you will need to actually create a new rotated image, but that is very easy to do.
I was using the accepted answer for a while until we noticed that non-square rotations based on images taken directly from the camera seemed stretched (they were rotated as desired, just the frame width/height wasn't adjusted).
Great explanation/post here from Trevor: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
In the end, it was a very simple import of Trevor's code, which uses a category to add a resizedImage:interpolationQuality: method to UIImage. So yeah, user beware: if the accepted answer still works for you, great. But if it doesn't, I'd take a look at that library instead.
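For reference, a hedged usage sketch of that category (the exact header name and method signature should be verified against Trevor's UIImage+Resize source linked above):

#import "UIImage+Resize.h" // Trevor Harmon's category, assumed filename

// Assumed signature: takes the target size and a CGInterpolationQuality.
UIImage *resized = [originalImage resizedImage:CGSizeMake(320.0f, 480.0f)
                          interpolationQuality:kCGInterpolationHigh];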

Save a UIImage from a UIImageView with CGAffineTransform

I have a UIImageView within a UIScrollView on which I let the user perform any number of flip and rotation operations. I have this all working, so the user can zoom, pan, flip and rotate. Now I want to save the final image out to a PNG.
However, trying to work this out is doing my head in...
I have seen quite a few other posts similar to this, but most only require applying a single transform such as a rotation, e.g. Creating a UIImage from a rotated UIImageView.
I would like to apply whatever transform the user has "created", which will be a series of flips and rotations concatenated together.
As the user applies the various rotations, flips, etc., I store the concatenated transform using CGAffineTransformConcat. For example, when they rotate I do:
CGAffineTransform newTransform = CGAffineTransformMakeRotation(angle);
self.theFullTransform = CGAffineTransformConcat(self.theFullTransform, newTransform);
self.fullPhotoImageView.transform = self.theFullTransform;
The following method is the best I have gotten so far for creating a UIImage with the full transform; however, the image always ends up translated to the wrong place, i.e. the image is "offset". My guess is that this is related to passing the wrong bounds to CGAffineTransformTranslate or CGContextDrawImage.
Does anyone have any ideas? This seems a lot harder than I thought it should be...
- (UIImage *)translateImageFromImageView:(UIImageView *)imageView withTransform:(CGAffineTransform)aTransform
{
    UIImage *rotatedImage;

    // Get the width and height of the bounding rectangle.
    CGRect boundingRect = CGRectApplyAffineTransform(imageView.bounds, aTransform);

    // Create a graphics context the size of the bounding rectangle.
    UIGraphicsBeginImageContext(boundingRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGAffineTransform transform = CGAffineTransformIdentity;
    // I think this translation is the problem?
    transform = CGAffineTransformTranslate(transform, boundingRect.size.width / 2, boundingRect.size.height / 2);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    transform = CGAffineTransformConcat(transform, aTransform);
    CGContextConcatCTM(context, transform);

    // Draw the image into the context
    // ...or is the boundingRect incorrect here?
    CGContextDrawImage(context, boundingRect, imageView.image.CGImage);

    // Get an image from the context.
    rotatedImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];

    // Clean up.
    UIGraphicsEndImageContext();
    return rotatedImage;
}
Is the offset predictable, like always half the image, or does it depend on aTransform?
struct CGAffineTransform {
CGFloat a, b, c, d;
CGFloat tx, ty;
};
If the latter, set tx and ty to zero in aTransform before using it.
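A minimal sketch of what that might look like (a hedged suggestion, not a tested fix for the method above):

// Copy the user's transform and strip its translation components,
// so only the rotation/flip/scale portion is applied when rendering.
CGAffineTransform renderTransform = aTransform;
renderTransform.tx = 0.0;
renderTransform.ty = 0.0;

// Use renderTransform in place of aTransform when concatenating into the context.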