Draw image constrain aspects - iPhone

I have an image view whose image you choose from the photo roll. I also have a button; when you tap it, an image is added to a view with addSubview. This added image is draggable, resizable and rotatable.
The problem: when I am finished I combine everything with drawInRect, which draws all the layers onto each other and produces a single image. However, the layers end up in the wrong place and at the wrong size, and rotation is never applied. I don't know how to fix this; the relevant code is below. Is it possible to keep the original image size and still have the layers drawn in the same place I dragged them onto the image view? If not, how do I choose a new size and still get the result I want? And how do I draw an image rotated?
UIGraphicsBeginImageContext(imageView2.image.size);
// Draw the base image
[imageView2.image drawInRect:CGRectMake(0, 0, imageView2.image.size.width, imageView2.image.size.height)];
// Draw each overlay subview's image
for (UIImageView *viewsSub in [self.imageViewer subviews])
{
    [viewsSub.image drawInRect:CGRectMake(viewsSub.frame.origin.x, viewsSub.frame.origin.y, viewsSub.frame.size.width, viewsSub.frame.size.height)];
}
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
pld.imageChosen2 = resultingImage;
UIGraphicsEndImageContext();

So you want something like taking a "screenshot" of your actual image view (subviews included), don't you?
I used this piece of code to do something similar, but I don't know whether it will work for you.
- (UIImage *)screenshot {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.5) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
You should add this method to your image view (the one that contains all the subviews you're adding).
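If you also need the output at the original photo's pixel size rather than the on-screen size, one variation is to scale the context before rendering the layer. This is only a sketch, not tested: it assumes self.imageViewer is the image view that shows the photo and holds the draggable overlays, and that imageView2.image is the full-resolution photo (both names taken from the question).
CGSize imageSize = imageView2.image.size;
CGSize viewSize = self.imageViewer.bounds.size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 1.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Map on-screen points onto the full-resolution image's pixels
CGContextScaleCTM(ctx, imageSize.width / viewSize.width, imageSize.height / viewSize.height);
// renderInContext: draws the sublayers with their positions and transforms,
// so dragged, resized and rotated overlays land where they appear on screen
[self.imageViewer.layer renderInContext:ctx];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();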

Related

UIImage rotation on iPhone without rotating the UIImageView, on button click, clockwise or anticlockwise

I have a UIImageView that is filled with an image either taken by the camera or picked from the library. After I assign the UIImage to the UIImageView, I want to rotate it clockwise or anticlockwise using two buttons, one for each direction. On every click the image itself should rotate, without rotating the UIImageView in its superview. Afterwards I want to save the image in the final position the user left it in.
If there is a method or procedure for this, please share it; I have been searching for many days and have not found an accurate, working solution with proper details.
Rotation routine (I found this routine in another post, but I forget where):
-(UIImage *)rotateImage:(UIImage *)image angleInRadians:(float)angleInRadians {
    UIGraphicsBeginImageContext(image.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Rotate around the centre of the image rather than the top-left corner
    CGContextTranslateCTM(ctx, image.size.width / 2, image.size.height / 2);
    CGContextRotateCTM(ctx, angleInRadians);
    CGContextTranslateCTM(ctx, -image.size.width / 2, -image.size.height / 2);
    // drawInRect: keeps UIKit's flipped coordinates, unlike CGContextDrawImage
    [image drawInRect:(CGRect){CGPointZero, image.size}];
    UIImage *imageOut = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageOut;
}
The angle in radians can be M_PI/2 or -M_PI/2 to go from landscape to portrait or vice versa.
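Wiring it up to the two buttons could look like this (just a sketch; the action names and the imageView outlet are hypothetical, not from the question):
- (IBAction)rotateClockwise:(id)sender {
    // replace the displayed image with a version rotated 90 degrees clockwise
    imageView.image = [self rotateImage:imageView.image angleInRadians:M_PI_2];
}
- (IBAction)rotateAnticlockwise:(id)sender {
    // same routine, opposite direction
    imageView.image = [self rotateImage:imageView.image angleInRadians:-M_PI_2];
}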

Rendering views, scaled @2x, renderInContext, iPhone

I have a view (called outPutView) that contains graphics, like UIImageViews and labels. I need to render an image of outPutView and its subviews. I am using renderInContext:UIGraphicsGetCurrentContext() to do so. It works fine, except that I need to scale the views. I am using a transform on outPutView. This successfully scales the view and its subviews on screen, but the transform does not render: the final render shows the views at their original size, while the render context is at the target size (here the @2x iPhone view size).
Thanks for reading!
[outPutView setTransform:CGAffineTransformMake(2, 0, 0, 2, 0, 0)];
CGSize renderSize = CGSizeMake(self.view.bounds.size.width*2, self.view.bounds.size.height*2);
UIGraphicsBeginImageContext(renderSize);
[[outPutView layer] renderInContext:UIGraphicsGetCurrentContext()];
renderedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I've just made this work, although in the opposite direction (scaling down). Here's a summary of the relevant code:
// destination size is half of self (self is a UIView)
float rescale = 0.5;
CGSize resize = CGSizeMake(self.bounds.size.width * rescale, self.bounds.size.height * rescale);
// make the destination context
UIGraphicsBeginImageContextWithOptions(resize, YES, 0);
// apply the scale to the destination context
CGContextScaleCTM(UIGraphicsGetCurrentContext(), rescale, rescale);
// render self into the destination context
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
// grab the resulting UIImage
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
To scale up instead of down, it should be fine to have rescale = 2.
I solved this by re-ordering my views: I added another view between the output view (the one the rendered context is taken from) and the view that is scaled via the transform. It worked, but I have no idea why at this point. Any thoughts on this would be appreciated. Thanks for reading.

Capturing a 3D object with its background view on iPhone

I am new to OpenGL ES.
I am stuck on capturing a 3D object together with its background image.
I am adding the CAEAGLLayer view as a subview of my view, and I am able to take an image of the 3D object, but it comes out with a black background. I want an image of the whole view on which I am showing my 3D object.
Please help me resolve this issue.
You need to give more information. What is the background?
If you use a single view and it is the EAGLView, then I would expect you to already get the correct result.
If you use the EAGLView as a subview of your background view, then you need to capture two images and combine them into one:
Render the background view's layer with [CALayer renderInContext:] to get an image of the background view (a sketch of this step follows the combine code below).
Combine the two images:
+ (UIImage *)imageFromImage:(UIImage *)img1 andImage:(UIImage *)img2 {
    UIGraphicsBeginImageContext(img1.size);
    [img1 drawAtPoint:CGPointMake(0, 0)];
    [img2 drawAtPoint:CGPointMake(0, 0)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
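The background-view capture mentioned above could look like this (a minimal sketch; the imageOfView: helper name is illustrative, not from the answer):
+ (UIImage *)imageOfView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
    // renders the view's layer and its sublayers into the current context
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}
Note that renderInContext: does not capture the OpenGL content itself; the EAGLView image still has to come from your existing capture code (typically glReadPixels), and the two results are then merged with imageFromImage:andImage: above.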

Overlaying a UIImageView over a UIImageView and saving

I'm trying to merge two UIImageViews. The first UIImageView (theimageView) is the background, and the second UIImageView (Birdie) is an image overlaying the first UIImageView. You can load the first UIImageView from a map or take a picture. After this you can drag, rotate and scale the second UIImageView over the first one. I want the output (saved image) to look the same as what I see on the screen.
I got that working, but I get borders and the quality and size are bad. I want the size to be the same as that of the image which is chosen, and the quality to be good. Also I get a crash if I save it a second time, right after the first time.
Here is my current code:
// save actual design to the photo library
- (void)captureScreen {
    UIImage *myImage = [self addImage:theImageView toImage:Birdie];
    [myImage retain];
    UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), self);
}
- (UIImage *)addImage:(UIImage *)theimageView toImage:(UIImage *)Birdie {
    CGSize size = CGSizeMake(theimageView.size.width, theimageView.size.height);
    UIGraphicsBeginImageContext(size);
    CGPoint pointImg1 = CGPointMake(0, 0);
    [theimageView drawAtPoint:pointImg1];
    CGPoint pointImg2 = CGPointMake(0, 0);
    [Birdie drawAtPoint:pointImg2];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
But I only get errors with this code!
Thanks in advance!
Take a look at Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation
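In case that link goes stale: the core of the technique is UIImage's drawInRect:blendMode:alpha:. A minimal sketch, reusing the two UIImage arguments from the question's helper:
UIGraphicsBeginImageContext(theimageView.size);
// draw the background image normally
[theimageView drawInRect:(CGRect){CGPointZero, theimageView.size}];
// draw the overlay with an explicit blend mode and alpha
[Birdie drawInRect:(CGRect){CGPointZero, Birdie.size} blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();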

UIImage from UIView: higher than on-screen resolution?

I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels).  The image is appearing in the rendered UIImage at its full resolution, and looks great.  But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level?  Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results?   Thanks!
The solution is to change the labels' contentsScale to 2 before you draw, then set it back immediately afterwards. I just coded up a project to verify it, and it's working just fine, producing a 2x image on a normal Retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale.
- (IBAction)snapShot:(id)sender
{
    [self changeScaleforView:snapView scale:2];
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageDisplay.image = img; // contentsScale
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
    [self changeScaleforView:snapView scale:1];
}

- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
    {
        if ([v isKindOfClass:[UILabel class]]) {
            // labels: bump the layer's contentsScale
            v.layer.contentsScale = scale;
        } else if ([v isKindOfClass:[UIImageView class]]) {
            // images
            // v.layer.contentsScale = scale; won't work
            // if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
            // version on it as a property, then here you would set that imageNamed as the image, and undo it later
        } else if ([v isMemberOfClass:[UIView class]]) {
            // container view: recurse
            [self changeScaleforView:v scale:scale];
        }
    }];
}
Try rendering to an image with double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage = [UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale;
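Spelled out for a @2x capture (a sketch; snapView stands in for whatever view you are rendering):
CGFloat scale = 2.0;
CGSize realSize = snapView.bounds.size;
CGSize size = CGSizeMake(realSize.width * scale, realSize.height * scale);
// render at double pixel size but with a 1.0 context scale
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
[snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// re-wrap the bitmap as a @2x image
newImage = [UIImage imageWithCGImage:[newImage CGImage] scale:scale orientation:UIImageOrientationUp];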
I have been struggling with much the same oddities in the context of text-view-to-PDF rendering. I found out that there are some documented properties on the CALayer objects that make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
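Something along these lines, for example (untested; label stands in for the view whose layer you want rendered sharply, and contentsScale / rasterizationScale are the documented CALayer properties, with rasterizationScale only taking effect when shouldRasterize is YES):
label.layer.contentsScale = 2.0;
label.layer.shouldRasterize = YES;
label.layer.rasterizationScale = 2.0;
// force the layer to redraw its contents at the new scale
[label.layer setNeedsDisplay];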