ios create fixed size image collage with background - iphone

I am working on an application whose job is to build an image (JPEG) that is a collage of images selected from the gallery. I can crop the gallery images to the needed size using the technique described in the question here.
However, I want to create a collage that is 2400x1600 pixels (configurable) and arrange the cropped images on a white background.
I couldn't find a good example of creating a canvas and setting its background color. I believe I need to create a Core Graphics context, create a canvas, set the background to white, save it as an image, and work on that image object. However, I am not able to find the right way to do it. I'd appreciate any help.
Edit:
Found this code to save a view to an image. Now the problem is reduced to creating a view that has a 2400x1600 canvas.
-(UIImage *)makeImage {
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}

You should look up the methods used in your example code. self.view.bounds.size is a CGSize, so if you replace the argument to UIGraphicsBeginImageContext with the following, it'll get you an image of the size you want:
UIGraphicsBeginImageContext(CGSizeMake(2400.0, 1600.0));
Good luck!
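Putting the pieces together, here is a minimal sketch of the whole task: paint a white 2400x1600 canvas, draw each cropped image at its target position, and return the result. The method name and the images/frames inputs are placeholders for whatever data model you actually use.

```objectivec
// Sketch: compose pre-cropped images onto a white canvas.
// `images` holds UIImage objects; `frames` holds boxed CGRects (NSValue).
- (UIImage *)collageWithImages:(NSArray *)images frames:(NSArray *)frames
{
    CGSize collageSize = CGSizeMake(2400.0, 1600.0); // configurable

    UIGraphicsBeginImageContext(collageSize);

    // Paint the white background first.
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, collageSize.width, collageSize.height));

    // Draw each cropped image at its target rect on the canvas.
    for (NSUInteger i = 0; i < images.count; i++) {
        UIImage *image = images[i];
        CGRect frame = [frames[i] CGRectValue];
        [image drawInRect:frame];
    }

    UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return collage;
}
```

The result can then be encoded with UIImageJPEGRepresentation(collage, 0.9) for saving as a JPEG.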

Related

Merging UIImageViews to save as 1 image - Transparent areas shows previous images

My app lets users choose or take a picture, then add other objects or text on top and rotate/resize them.
For saving, I'm just taking a screenshot of the iPhone screen, because after trying for hours I just couldn't figure out how to save the original image with the added objects placed at the right spots, with the right rotation/size, etc. (If anyone knows a good example/tutorial of how to do this, it would be incredibly helpful!)
I have a UIView with a size of 320x366. When the user chooses an image, I load it inside that UIView and it gets sized to fit with its aspect ratio preserved. When the user is done adding/editing objects on the image, he can then save it.
-(UIImage *)createMergedImage
{
    // contentView is the 320x366 view with all the images.
    UIGraphicsBeginImageContextWithOptions(contentView.frame.size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, CGRectMake(0, 0, contentView.frame.size.width, contentView.frame.size.height));
    [contentView.layer renderInContext:context];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenShot;
}
That's the code I'm using to save the UIView as a picture. Since the images are fitted with their correct shrunk aspect ratio, there's either a transparent border at the top/bottom or at the left/right.
Now it saves and works; when I open the image it's exactly what I'd expect. The problem is that when I look at the preview image, it shows other images (that I've previously seen on my iPhone) in the transparent part of the picture, as you can see in this following image.
When I go into the Camera Roll, the transparent part looks black (like it should), as seen in this second image.
Also, when I'm scrolling through my Camera Roll and get to the image my app saved, I'll see those extra random images in the transparent area for up to a second before they disappear and become black (leaving the correct image the way it should be).
I'm hoping someone has seen something like this before and knows how to fix it.
Thanks!
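This is not confirmed by the thread, but one plausible explanation is that CGContextClearRect leaves fully transparent pixels, and when the image is later flattened to a format without an alpha channel those pixels are undefined, so whatever was in memory shows through. Under that assumption, a sketch of a workaround is to make the context opaque and fill it with a solid color before rendering:

```objectivec
// Sketch (assumption, not a confirmed fix): fill the context with
// opaque black before rendering, so the letterbox areas contain
// defined pixels instead of uninitialized memory.
UIGraphicsBeginImageContextWithOptions(contentView.frame.size, YES, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[[UIColor blackColor] setFill];
UIRectFill(CGRectMake(0, 0, contentView.frame.size.width, contentView.frame.size.height));
[contentView.layer renderInContext:context];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

The YES passed as the opaque flag tells UIKit the bitmap has no alpha, which matches the black borders seen in the Camera Roll.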

how to remove cropped image from original image

I am developing an image-processing application in iOS 5. In this application I want to select part of an image and then delete that part from the original image.
I tried cropping the image. Cropping gives me the part I selected, but it does not delete that part from the original image.
Any idea how I can achieve this?
This is the code I have used in one of my applications, and it works well:
UIGraphicsBeginImageContext(someImageView.frame.size);
// Draw the image at the context's origin (not at the view's frame origin,
// which would offset the drawing if the view isn't at (0,0)).
[someImageView.image drawInRect:CGRectMake(0, 0, someImageView.frame.size.width, someImageView.frame.size.height)];
// Clear the selected rectangle, leaving it transparent.
CGRect rect = CGRectMake(somex, somey, somewidth, someheight);
CGContextClearRect(UIGraphicsGetCurrentContext(), rect);
someImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
For this to work with your code, you need to create an image view if you don't have one, and then replace the someImageView variable with your image view.
You need to provide the values for somex, somey, somewidth, and someheight.
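If you'd rather not go through a UIImageView at all, the same idea can be sketched directly on a UIImage. The method name and holeRect parameter below are illustrative, not part of the original answer:

```objectivec
// Sketch: return a copy of `image` with `holeRect` (in image points)
// cleared to transparency.
- (UIImage *)imageByRemovingRect:(CGRect)holeRect fromImage:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGContextClearRect(UIGraphicsGetCurrentContext(), holeRect);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

Note that the hole only stays transparent if you keep the result in a format with alpha; save it with UIImagePNGRepresentation rather than as a JPEG.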

How to merge two images into a single image in iPhone Apps?

In my app I designed a frame for the camera viewer. When I tap the capture button, I should get a single image with my frame merged over the captured image.
What I did to achieve something like this was basically take a programmatic screenshot of the area. You could take the picture first, then apply the frame over it, and then use the following code to take a screenshot. Make sure both the image and the frame are subviews of a UIView; in the example, both of them would need to be part of saveView.
UIGraphicsBeginImageContext(saveView.bounds.size);
[saveView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
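If you have the captured photo and the frame available as UIImage objects, you can also merge them directly without a container view. This is a sketch; photo and frameImage are placeholder names:

```objectivec
// Sketch: draw the frame over the photo at the photo's full resolution,
// instead of screenshotting a view at screen resolution.
UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
[photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
[frameImage drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Drawing into a context sized to the photo preserves the camera's native resolution, which the screenshot approach cannot.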

How to create a single image from an overlayed image and a picture taken with iPhone?

There are countless apps out there that do this ... but I'm curious as to what suggested way(s) exists for producing the highest quality image.
Example of what I'm looking to do:
Be able to overlay an image of a mustache on top of the iPhone's camera.
Optionally, be able to resize/rotate that image.
Take a picture and superimpose the overlayed image (the mustache in the case) on the picture so a single image is produced.
Thanks much.
Here is an article on overlaying an image on the camera: http://mobile-augmented-reality.blogspot.com/2009/09/overlaying-views-on-uiimagepickercontro.html. Also, for rotating and resizing the mustache, look at this: http://icodeblog.com/2010/10/14/working-with-uigesturerecognizers/. After that, you can use the resulting UIImage from the code below for whatever you need. Change self.view to the camera view.
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
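Since renderInContext: captures at screen resolution, the highest-quality result comes from compositing into the full-resolution photo instead. A sketch, under the assumption that you track the overlay's frame within the camera preview (photo, overlay, overlayRectInPreview, and previewBounds are all placeholder names):

```objectivec
// Sketch: map the overlay's on-screen rect into photo coordinates,
// then composite at the photo's native resolution.
CGFloat scaleX = photo.size.width / previewBounds.size.width;
CGFloat scaleY = photo.size.height / previewBounds.size.height;
CGRect target = CGRectMake(overlayRectInPreview.origin.x * scaleX,
                           overlayRectInPreview.origin.y * scaleY,
                           overlayRectInPreview.size.width * scaleX,
                           overlayRectInPreview.size.height * scaleY);

UIGraphicsBeginImageContextWithOptions(photo.size, NO, photo.scale);
[photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
[overlay drawInRect:target];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Rotation would additionally require applying the overlay view's transform with CGContextConcatCTM before drawing, which is omitted here for brevity.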

iPhone SDK - How to draw a UIImage onto another UIImage?

I have several rectangular images (in landscape and portrait orientation) and want to draw them onto a transparent square image, so that all images become the same size without being cropped. How would I create a transparent UIImage and draw another one on top?
Thanks for any hints.
Create a bitmap graphics context with CGBitmapContextCreate. You'll need to determine the size of the resulting composite image here. You can think of this as a sort of canvas.
Draw the images using CGContextDrawImage. This will draw images onto the same context.
Once you're done drawing all the images into the same context, create an image from that context with CGBitmapContextCreateImage.
Convert the Core Graphics image from step #3 into a UIImage with [UIImage imageWithCGImage:].
Code examples can be found here.
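The four steps above can be sketched as follows; the method name and the centered layout are illustrative choices, not part of the original answer:

```objectivec
// Sketch of the steps above: draw `image` centered on a transparent
// square canvas whose side length is `side` pixels.
- (UIImage *)squareImageFrom:(UIImage *)image side:(size_t)side
{
    // 1. Create an RGBA bitmap context (the "canvas"); pixels start
    //    out fully transparent.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, side, side, 8, 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // 2. Draw the image centered; the rest of the canvas stays transparent.
    CGRect rect = CGRectMake((side - image.size.width) / 2.0,
                             (side - image.size.height) / 2.0,
                             image.size.width, image.size.height);
    CGContextDrawImage(context, rect, image.CGImage);

    // 3. Create a CGImage from the context, and
    // 4. wrap it in a UIImage.
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    return result;
}
```

Passing 0 for bytesPerRow lets Core Graphics compute the row stride itself, which avoids alignment mistakes.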