I have several rectangular images (in landscape and portrait mode) and want to draw them onto a transparent square image, so that all images become the same size without cropping them. How would I create a transparent UIImage and draw another on top?
Thanks for any hints.
Create a bitmap graphics context with CGBitmapContextCreate. You'll need to determine the size of the resulting composite image here. You can think of this as a sort of canvas.
Draw the images using CGContextDrawImage. This will draw images onto the same context.
Once you're done drawing all the images into the same context, create an image from that context with CGBitmapContextCreateImage.
Convert the Core Graphics image from step #3 into a UIImage with [UIImage imageWithCGImage:].
Code examples can be found here.
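As a minimal sketch of the steps above, here is a version using the UIKit context helpers (which wrap the CGBitmap calls); the method name and `side` parameter are my own:

```objc
// Sketch: draw an image centered on a transparent square canvas,
// without scaling or cropping it. Assumes `side` is at least as large
// as both image dimensions.
- (UIImage *)squareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // NO = the context keeps an alpha channel, so the padding stays transparent
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), NO, 0.0f);

    // Center the image on the square canvas
    CGRect drawRect = CGRectMake((side - image.size.width) / 2.0f,
                                 (side - image.size.height) / 2.0f,
                                 image.size.width,
                                 image.size.height);
    [image drawInRect:drawRect];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

To composite several images, just call drawInRect: once per image before extracting the result.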
My app lets users choose or take a picture, then add other objects or text on top and rotate/resize them.
For saving, I'm just taking a screenshot of the iPhone screen, because after trying for hours and hours I just couldn't figure out how to save the original image with the added objects placed at the right spots, with the right rotation/resizing, etc. (If anyone knows a good example/tutorial of how to do this, it would be incredibly helpful!)
I have a UIView with a size of 320x366. When the user chooses an image I load it inside that UIView, and it gets sized to fit properly with its aspect ratio. When the user is done adding/editing objects on his image, he can then save it.
-(UIImage *)createMergedImage
{
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(contentView.frame.size.width, contentView.frame.size.height), NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, CGRectMake(0, 0, contentView.frame.size.width, contentView.frame.size.height));

    // contentView is the 320x366 view with all the images
    [contentView.layer renderInContext:context];

    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenShot;
}
That's the code I'm using to save the UIView as a picture. Because the images are fit at their correct shrunken aspect ratio, there's either a transparent border at the top/bottom or at the left/right.
Now, saving works: when I open the image, it's exactly what I'd expect. The problem is that when I'm looking at the preview image, it shows other images (that I've previously seen on my iPhone) in the transparent part of the picture, as you can see in the following image.
When I go into the Camera Roll, the transparent part looks black (like it should), as seen in this second image.
Also, when I'm scrolling through my Camera Roll and get to the image my app saved, I'll see those extra random images in the transparent area for 0-1 seconds before they disappear and the area becomes black (leaving the correct image the way it should be).
I'm hoping someone has seen something like this before and knows how to fix it.
Thanks!
I am working on an application whose job is to build a JPEG image that is a collage of selected images from the gallery. I can crop the gallery images to the needed size using the technique specified in the question here.
However, I want to create a collage that is 2400x1600 (configurable) pixels and arrange cropped images on white background.
I couldn't find the right example for creating a canvas and setting its background color. I believe I need to create a Core Graphics context, create a canvas, set the background to white, save it as an image, and work on that image object. However, I'm not able to find the right way to do it. I'd appreciate any help.
Edit:
I found this code to save a view to an image. Now the problem is reduced to creating a view that has a canvas of 2400x1600.
-(UIImage *)makeImage
{
    UIGraphicsBeginImageContext(self.view.bounds.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
You should look up the methods in your example code. self.view.bounds.size is a CGSize, so if you replace the call to UIGraphicsBeginImageContext with the following, it'll get you an image of the size you want:
UIGraphicsBeginImageContext(CGSizeMake(2400.0,1600.0));
Good luck!
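If you'd rather skip the view entirely, you can draw the white background and the cropped images straight into the context. A sketch, where the method name, image array, and frame values are placeholders of my own:

```objc
// Sketch: render a white 2400x1600 canvas and draw cropped images
// onto it at the given frames. `images` holds UIImage objects,
// `frames` holds NSValue-wrapped CGRects, one per image.
- (UIImage *)collageWithImages:(NSArray *)images frames:(NSArray *)frames
{
    CGSize canvasSize = CGSizeMake(2400.0f, 1600.0f);
    UIGraphicsBeginImageContext(canvasSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Paint the white background first
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, CGRectMake(0, 0, canvasSize.width, canvasSize.height));

    // Draw each cropped image at its slot
    for (NSUInteger i = 0; i < images.count; i++) {
        UIImage *image = [images objectAtIndex:i];
        CGRect frame = [[frames objectAtIndex:i] CGRectValue];
        [image drawInRect:frame];
    }

    UIImage *collage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return collage;
}
```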
There are countless apps out there that do this, but I'm curious what suggested way(s) exist for producing the highest-quality image.
Example of what I'm looking to do:
Be able to overlay an image of a mustache on top of the iPhone's camera.
Optionally, be able to resize/rotate that image.
Take a picture and superimpose the overlaid image (the mustache in this case) on the picture so a single image is produced.
Thanks much.
Here is an article on overlaying an image on the camera: http://mobile-augmented-reality.blogspot.com/2009/09/overlaying-views-on-uiimagepickercontro.html. Also, for rotating and resizing the mustache, look at this: http://icodeblog.com/2010/10/14/working-with-uigesturerecognizers/. After that, you can use the resulting UIImage from the code below for whatever you need. Change self.view to the camera view.
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
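If you want the full camera resolution rather than a screen-sized capture, another option is to composite the mustache image into the captured photo directly. A sketch, assuming you've already mapped the overlay's on-screen frame into the photo's coordinate space (the method name is my own):

```objc
// Sketch: merge an overlay image (with alpha) into a photo at full
// resolution. `overlayRect` must already be expressed in the photo's
// pixel coordinates, not the screen's.
- (UIImage *)photo:(UIImage *)photo withOverlay:(UIImage *)overlay inRect:(CGRect)overlayRect
{
    UIGraphicsBeginImageContext(photo.size);
    [photo drawInRect:CGRectMake(0, 0, photo.size.width, photo.size.height)];
    [overlay drawInRect:overlayRect];  // drawInRect: respects the overlay's transparency
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}
```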
I have a UIImage that displays a grayscale button. Is there a way to "shade" the image with a color when I draw it? That way I could effectively have one grayscale button image, but buttons of any color. I would also like the transparent pixels to stay transparent, if possible.
Thanks!
To do this, you'll need to use Quartz 2D with the following steps:
Create a CGImage for the PNG background with CGImageCreateWithPNGDataProvider, or use UIImage as usual and grab a reference to the Quartz image data using the CGImage property of the UIImage.
Set the blend mode on the context with CGContextSetBlendMode. The mode you're after is probably kCGBlendModeOverlay.
Create a new single-tone CGImage for the shading layer.
Use CGContextDrawImage to composite the shading layer over the background layer.
Detailed instructions on how to do this are available in the Quartz 2D Programming Guide, in the section titled "Using Blend Modes with Images".
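Putting those steps together, here is one possible sketch that also keeps the transparent pixels transparent by clipping to the image's alpha channel (the method name is my own; the clip-to-mask step is an addition beyond the steps above):

```objc
// Sketch: tint a grayscale image with a color, preserving transparency.
- (UIImage *)tintImage:(UIImage *)image withColor:(UIColor *)color
{
    CGRect bounds = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Flip the context so CGContextDrawImage and the mask aren't upside down
    CGContextTranslateCTM(context, 0, image.size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    // Clip to the image's alpha so transparent pixels stay transparent
    CGContextClipToMask(context, bounds, image.CGImage);

    // Draw the grayscale image, then blend the tint color over it
    CGContextDrawImage(context, bounds, image.CGImage);
    CGContextSetBlendMode(context, kCGBlendModeOverlay);
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, bounds);

    UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tinted;
}
```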
I have a 320x480 PNG that I would like to texture map/manipulate on the iPhone, but these dimensions are obviously not powers of 2. I already tested my texture map manipulation algorithm on a 512x512 PNG that is a black background with a 320x480 image superimposed on it, centered at the origin (lower left corner (0,0)), where the 320x480 area is properly oriented/centered/scaled on the iPhone screen.
What I would like to do now is progress to the point where I can take 320x480 source images and apply them to a blank/black 512x512 background texture generated in code, so that the two combine into one texture and I can apply the vertices and texture coordinates I used in my 512x512 test. This will eventually be used for camera-captured and camera-roll-sourced images, etc.
Any thoughts? (must be for OpenGL ES 1.1 without use of GL util toolkit, etc.).
Thanks,
Ari
One method I've found to work is to simply draw both images into the current context and then extract the resulting combined image. Is there another way, more geared towards OpenGL, that may be more efficient?
// backgroundImage: CGImageRef for the blank 512x512 background
// foregroundImage: CGImageRef for the 320x480 foreground
CGSize contextSize = CGSizeMake(512.0f, 512.0f);
UIGraphicsBeginImageContext(contextSize);
CGContextRef currentContext = UIGraphicsGetCurrentContext();

// Two rectangles, one for the background and one for the foreground image
CGRect backgroundRect = CGRectMake(0, 0, 512.0f, 512.0f);
CGRect foregroundRect = CGRectMake(0, 0, 320.0f, 480.0f);

CGContextDrawImage(currentContext, backgroundRect, backgroundImage);
CGContextDrawImage(currentContext, foregroundRect, foregroundImage);

UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
spriteImage = finalImage.CGImage;
UIGraphicsEndImageContext();
At this point you can use spriteImage as the image source for the texture, and it will be a combination of a blank 512x512 PNG with a 320x480 PNG, for example.
I'll eventually replace the 512x512 blank PNG with an image generated in code, but this does work.