Overlaying UIImageView over UIImageView and saving - iPhone

I'm trying to merge two UIImageViews. The first UIImageView (theimageView) is the background, and the second UIImageView (Birdie) is an image overlaying the first UIImageView. You can load the first UIImageView from a map or take a picture. After this you can drag, rotate and scale the second UIImageView over the first one. I want the output (saved image) to look the same as what I see on the screen.
I got that working, but the output has borders and its quality and size are poor. I want the size to match the chosen image and the quality to be good. I also get a crash if I save a second time right after the first.
Here is my current code:
//save actual design in photo library
- (void)captureScreen {
    UIImage *myImage = [self addImage:theImageView.image toImage:Birdie.image];
    UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}

- (UIImage *)addImage:(UIImage *)theimageView toImage:(UIImage *)Birdie {
    CGSize size = CGSizeMake(theimageView.size.width, theimageView.size.height);
    UIGraphicsBeginImageContext(size);
    // Draw the background image first, then the overlay on top of it
    CGPoint pointImg1 = CGPointMake(0, 0);
    [theimageView drawAtPoint:pointImg1];
    CGPoint pointImg2 = CGPointMake(0, 0);
    [Birdie drawAtPoint:pointImg2];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
But I only get errors with this code!
Thanks in advance!

Take a look at Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation
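Beyond that, here is a minimal sketch of one way to do the merge at the full resolution of the chosen image. This is an assumption-laden sketch, not the article's code: it presumes Birdie has been added as a subview of theImageView (or of a container sized to match it), so renderInContext: picks up the drag/rotate/scale transforms, and scaling the CTM keeps the output at the background image's size rather than the screen's:
// Sketch only: assumes Birdie is a subview of theImageView.
- (UIImage *)mergedImage {
    CGSize imageSize = theImageView.image.size;
    // How much bigger the full image is than the on-screen view
    CGFloat scale = imageSize.width / theImageView.bounds.size.width;

    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 1.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(ctx, scale, scale);       // draw at the image's resolution, not the screen's
    [theImageView.layer renderInContext:ctx];   // background plus Birdie, transforms included
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}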

Related

UIImage rotation in iPhone without rotating the UIImageView, on button click, clockwise or anticlockwise

I have a UIImageView which is filled with an image either taken by the camera or picked from the library. After I assign the UIImage to the UIImageView, I want to rotate it clockwise or anticlockwise using two buttons, one for each direction. On every click the image should rotate, but without rotating the enclosing UIImageView. Afterwards I want to save the image in the final position the user left it in.
If there is any method or procedure for this, please share it. I have been searching for many days but have not found an accurate, working solution with proper details.
Rotation routine (I found this routine in another post but forget where):
-(UIImage *)rotateImage:(UIImage *)image angleInRadians:(float)angleInRadians {
    CGSize size = image.size;
    UIGraphicsBeginImageContext(size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Rotate around the centre of the image, not the top-left corner
    CGContextTranslateCTM(ctx, size.width / 2.0f, size.height / 2.0f);
    CGContextRotateCTM(ctx, angleInRadians);
    // drawInRect: keeps UIKit's coordinate system, so the result is not flipped
    [image drawInRect:CGRectMake(-size.width / 2.0f, -size.height / 2.0f, size.width, size.height)];
    UIImage *imageOut = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageOut;
}
The angle in radians could be M_PI/2 or -M_PI/2 to change landscape to portrait or vice versa.
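For example, the two button actions could be as simple as this (assuming the routine above lives in your view controller and imageView is the UIImageView outlet; the names are only illustrative). Note that the view itself is never transformed; only the UIImage is replaced:
// Illustrative button handlers; imageView is assumed to be the UIImageView outlet.
- (IBAction)rotateClockwise:(id)sender {
    imageView.image = [self rotateImage:imageView.image angleInRadians:M_PI_2];
}

- (IBAction)rotateAnticlockwise:(id)sender {
    imageView.image = [self rotateImage:imageView.image angleInRadians:-M_PI_2];
}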

Draw image constrain aspects

I have an image view; the image in it is chosen from the photo roll. I also have a button; when you tap it, an image is added to a view with addSubview. This added image is draggable, resizable and rotatable.
One problem: when I finish, I flatten everything with drawInRect. This draws all the layers on top of each other and creates an image. However, the layers end up in the wrong place and at the wrong size, and the rotation is never applied. I don't know how to fix this; the piece of code is below. Is it possible to keep the original image size and still have the layers drawn in the same place I dragged them onto the image view? If not, how do I pick a new size and still get the result I want? And how do I draw the image rotated?
UIGraphicsBeginImageContext(imageView2.image.size);

// Draw image1
[imageView2.image drawInRect:CGRectMake(0, 0, imageView2.image.size.width, imageView2.image.size.height)];

// Draw image2
for (UIImageView *viewsSub in [self.imageViewer subviews])
{
    [viewsSub.image drawInRect:CGRectMake(viewsSub.frame.origin.x, viewsSub.frame.origin.y, viewsSub.frame.size.width, viewsSub.frame.size.height)];
}

UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
pld.imageChosen2 = resultingImage;
UIGraphicsEndImageContext();
So you want something like taking a "screenshot" of your actual image view (with its subviews included), don't you?
I used this piece of code to do something similar, but I don't know whether it will work for you.
- (UIImage *)screenshot {
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.5) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    // Render the view's layer (including its subviews) into the context
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
You should add this method inside your imageview (the one that contains all the subviews you're adding).
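If you add it in a category on UIView (or directly in your image view's class), the call site might then look something like this (a sketch using the names from the question):
// Sketch: flatten the image view and its draggable subviews in one call.
UIImage *flattened = [self.imageViewer screenshot];
pld.imageChosen2 = flattened;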

Snapshot of screen with transformation

I'm trying to take a snapshot of an iPad screen that has two or three views, and I'm able to do that.
My problem is that when the app is in landscape mode, I rotate one view with a transform:
CGAffineTransform transform;
transform = CGAffineTransformMakeRotation(M_PI/2);
imgPlayBoard.transform = transform;
Now, when I take the snapshot, the image in the image view appears in portrait. I can't work out what is happening. I am using the following function to take the snapshot.
-(UIImage *)saveImage {
    UIGraphicsBeginImageContext(imgPlayBoard.frame.size);
    // Render each view's layer into the same context, back to front
    [imgPlayBoard.layer renderInContext:UIGraphicsGetCurrentContext()];
    [imagesView.layer renderInContext:UIGraphicsGetCurrentContext()];
    [drawBoard.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);
    UIGraphicsEndImageContext();
    return resultingImage;
}
And this is the image I'm getting. The size of the image is the same in the snapshot, but all that appears is white.
OK, I found the solution myself.
Everything was working fine, but since I was transforming the image view and then rendering its layer into the context, it was using the original image of the image view and discarding the transformation. So I added all three views I wanted in the image to a single UIView and then took a snapshot of that view.
This solved my issue. :)
So now my function looks like this:
-(UIImage *)saveImage {
    UIGraphicsBeginImageContext(imgPlayBoard.frame.size);
    // Rendering the container picks up the subviews with their transforms applied
    [containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
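For completeness, the container setup implied above might look roughly like this (a sketch; containerView is assumed to be created once, e.g. in viewDidLoad, and the subview frames may need adjusting since they become relative to the container):
// Sketch of the container that saveImage renders.
UIView *containerView = [[UIView alloc] initWithFrame:imgPlayBoard.frame];
[containerView addSubview:imgPlayBoard];   // the transformed view
[containerView addSubview:imagesView];
[containerView addSubview:drawBoard];
[self.view addSubview:containerView];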

Capturing a 3D object with its background view on iPhone

I am new to OpenGL ES.
I am stuck on capturing a 3D object together with its background image.
I am adding the CAEAGLLayer-backed view as a subview of my view, and I am able to take an image of the 3D object, but it comes out with a black background. I want to capture an image of the whole view on which I am showing the 3D object.
Please help me resolve this issue.
You need to give more information: what is the background?
If you use only one view and it is the EAGLView, then I would expect you to get the correct result.
If you use the EAGLView as a subview of your background, then you need to capture two images and combine them into one.
Call -[CALayer renderInContext:] to get an image of your background view.
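For instance (a sketch; backgroundView is just a placeholder name for whatever your background view is):
// Sketch: render the background view's layer into a UIImage.
UIGraphicsBeginImageContext(backgroundView.bounds.size);
[backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *backgroundImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();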
Then combine the two images:
+ (UIImage *)imageFromImage:(UIImage *)img1 andImage:(UIImage *)img2 {
    UIGraphicsBeginImageContext(img1.size);
    // Draw the first image, then the second on top of it
    [img1 drawAtPoint:CGPointMake(0, 0)];
    [img2 drawAtPoint:CGPointMake(0, 0)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
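Hypothetical usage, assuming glImage is the snapshot of the 3D object you already capture and the helper above lives in the same class:
// Composite the background render with the existing GL snapshot.
UIImage *finalImage = [[self class] imageFromImage:backgroundImage andImage:glImage];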

UIImage from UIView: higher than on-screen resolution?

I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels).  The image is appearing in the rendered UIImage at its full resolution, and looks great.  But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level?  Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results?   Thanks!
The solution is to change the labels' contentsScale to 2 before you draw, then set it back immediately afterwards. I just coded up a project to verify it, and it's working just fine, producing a 2x image on a normal retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale
- (IBAction)snapShot:(id)sender
{
    // Bump the labels to 2x before rendering, then restore afterwards
    [self changeScaleforView:snapView scale:2];

    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    imageDisplay.image = img; // contentsScale
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;

    [self changeScaleforView:snapView scale:1];
}

- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
    {
        if ([v isKindOfClass:[UILabel class]]) {
            // labels: render their text at the requested scale
            v.layer.contentsScale = scale;
        } else if ([v isKindOfClass:[UIImageView class]]) {
            // images
            // v.layer.contentsScale = scale; won't work
            // if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
            // image on it as a property, then here you would set that imageNamed as the image, then undo it later
        } else if ([v isMemberOfClass:[UIView class]]) {
            // container view - recurse into it
            [self changeScaleforView:v scale:scale];
        }
    }];
}
Try rendering to an image at double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage = [UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale; // i.e. the view's point size multiplied component-wise by the scale factor
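Filling in the "// Do stuff" part, the whole thing might look roughly like this (a sketch, assuming snapView is the view being captured):
// Sketch: render snapView at twice its point size into a 1.0-scale context,
// then wrap the bitmap with scale 2 so its point size matches the original view.
CGFloat scale = 2.0;
CGSize size = CGSizeMake(snapView.bounds.size.width * scale,
                         snapView.bounds.size.height * scale);

UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextScaleCTM(ctx, scale, scale);          // everything drawn below is scaled up 2x
[snapView.layer renderInContext:ctx];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

newImage = [UIImage imageWithCGImage:newImage.CGImage
                               scale:scale
                         orientation:UIImageOrientationUp];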
I have been struggling with much the same oddities in the context of text view to PDF rendering. I found out that there are some documented properties on the CALayer objects that make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
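For what it's worth, that idea would look something like this (untested, purely illustrative; snapView stands in for the view being rendered):
// Illustrative only: raise the contents/rasterization scale on the label layers
// before calling renderInContext:, and restore the old values afterwards.
for (UIView *sub in snapView.subviews) {
    if ([sub isKindOfClass:[UILabel class]]) {
        sub.layer.contentsScale = 2.0;
        sub.layer.rasterizationScale = 2.0;
        [sub.layer setNeedsDisplay];   // force the text to be re-rendered at the new scale
    }
}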