Computing a UIImage to be saved to the photo album - iPhone

I basically want to automatically create a tiled image from a bunch of source images and then save it to the user's photo album. I'm not having any success drawing a bunch of small UIImages into one big UIImage. What's the best way to accomplish this? Currently I'm using UIGraphicsBeginImageContext() and [UIImage drawAtPoint:], etc., but all I ever end up with is a 512x512 black square. How should I be doing this? I've been looking at CGLayers, etc.; there seem to be a lot of options, but none that work particularly easily.
Let me actually put my code in:
CGSize size = CGSizeMake(512, 512);
UIGraphicsBeginImageContext(size);
UIGraphicsPushContext(UIGraphicsGetCurrentContext());
for (int i = 0; i < 4; i++)
{
    for (int j = 0; j < 4; j++)
    {
        UIImage *image = [self getImageAt:i :j];
        [image drawAtPoint:CGPointMake(i * 128, j * 128)];
    }
}
UIGraphicsPopContext();
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
I should note that the above isn't exactly what happens in my code. What really happens is that I run every line up to and including UIGraphicsPushContext(), then in an animation timer I incrementally draw into the context, and once that's all done I run everything from UIGraphicsPopContext() onward.

Oh, then you can just capture the onscreen view after it has been rendered:
UIGraphicsBeginImageContext(myBigView.bounds.size);
// Don't call -drawRect: directly; render the view's layer into the context instead.
[myBigView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Are you storing it back to an image?
UIImage *myBigImage = UIGraphicsGetImageFromCurrentImageContext();

To do exactly what I wanted to do...
Make your GLView as big as the total image you want. Also make sure glOrtho and your viewport have the right size. Then just draw whatever you want wherever you want and take a single OpenGL screenshot. That way you don't need to worry about combining into a single UIImage over multiple OpenGL rendering passes, which is no doubt what was causing my issue.
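For reference, a minimal sketch of the single-screenshot step (a hedged sketch, assuming an ES 1.x-era RGBA framebuffer; the method name is mine, and the vertical flip needed because glReadPixels returns rows bottom-up is omitted for brevity):
static void releasePixels(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage *)glScreenshotWithWidth:(int)width height:(int)height {
    // Read the framebuffer back as raw RGBA bytes.
    GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Wrap the bytes in a CGImage; the provider frees the buffer when done.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, width * height * 4, releasePixels);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}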

Related

Resizing an ALAsset photo takes a long time. Any way around this?

I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the full-resolution photos down. However, this takes some time, especially when resizing up to 3 of them.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;

    if ([rep orientation] == ALAssetOrientationUp) // landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300px on the long end) and a large image (max 960px or 640px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual resizing code I grabbed from somewhere on here:
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
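The code above also relies on a scaledToHeight: category method that isn't shown; presumably it is the mirror image of scaledToWidth:. A minimal sketch under that assumption:
- (UIImage *)scaledToHeight:(float)i_height
{
    // Mirror of scaledToWidth:, scaling so the height matches i_height.
    float scaleFactor = i_height / self.size.height;
    float newWidth = self.size.width * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, i_height));
    [self drawInRect:CGRectMake(0, 0, newWidth, i_height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}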
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low-resolution for me to use, but at the same time I don't need the entire full-resolution image. It all works now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset, and the format of the asset; you don't have any control over that. What you do have control over is where the resizing takes place. I suspect that right now you are resizing the image on the main thread, which will cause the UI to grind to a halt while the resizing runs. Process enough images and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;

        if ([rep orientation] == ALAssetOrientationUp) // landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // Do whatever you need to do on the main thread here once your image is resized:
            // e.g. setting UIImageViews to show your new images,
            // or adding new views to your view hierarchy.
        });
    }
});
You'll have to think about things a little differently this way. For example, what used to be a single step is now broken up into multiple steps. Code that used to run after this will now run before the image resize is complete, and before you have actually done anything with the images, so you need to make sure you don't have any dependencies on those images in that code or you'll likely crash.
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
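A minimal sketch of that substitution, reusing the setup from the question (and assuming, per the ALAssetRepresentation documentation, that fullScreenImage is already rotated to the correct orientation, so the manual rotation step can likely be dropped):
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullScreenImage]; // roughly screen-sized and already orientation-corrected
if (iref)
{
    UIImage *largeImage = [UIImage imageWithCGImage:iref];
    UIImage *previewImage = [largeImage scaledToWidth:300];
}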

Add 2 UIImages into One UIImage

I am adding 2 images on top of each other and wanted to know if this is a good way to do it. The code works and seems reasonably fast.
So my question really is: is this good, or is there a better way?
PS: Warning, code written by a designer.
Call the function:
- (IBAction)combineImages:(id)sender { // hypothetical action name; the original was blank
    UIImage *myFirstImage = [UIImage imageNamed:@"Image.png"];
    UIImage *myTopImage = [UIImage imageNamed:@"Image2.png"];

    CGFloat yFloat = 50;
    CGFloat xFloat = 50;

    UIImage *newImage = [self placeImageOnImage:myFirstImage topImage:myTopImage x:&xFloat y:&yFloat];
}
The Function:
- (UIImage *)placeImageOnImage:(UIImage *)image topImage:(UIImage *)topImage x:(CGFloat *)x y:(CGFloat *)y {
    // If you want the top image to be added next to the base image, make this CGSize bigger.
    CGSize newSize = CGSizeMake(image.size.width, image.size.height);
    UIGraphicsBeginImageContext(newSize);

    [topImage drawInRect:CGRectMake(*x, *y, topImage.size.width, topImage.size.height)];
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeDestinationOver alpha:1];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Looks OK. Perhaps you don't really need the CGFloat pointers, but that's fine, too.
The main idea is correct. There is no better way to do what you want.
Minuses:
1) Consider the UIGraphicsBeginImageContextWithOptions method; plain UIGraphicsBeginImageContext isn't Retina-aware.
2) Don't pass floats as pointers. Use x:(CGFloat)x y:(CGFloat)y instead.
You should use the with-options variant, UIGraphicsBeginImageContextWithOptions, which allows you to specify a scale; pass 0 as the scale so you don't lose any quality on Retina displays.
If you want one image drawn on top of another image, just draw the one in back, then the one in front, exactly as if you were using paint. There is no need to use blend modes.
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[topImage drawInRect:CGRectMake(*x,*y,topImage.size.width,topImage.size.height)];
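Putting those suggestions together, a hedged sketch of a revised helper (the selector rename and parameter order are mine): value CGFloats, a Retina-aware context, and plain back-to-front drawing with no blend mode:
- (UIImage *)placeImage:(UIImage *)topImage onImage:(UIImage *)image atX:(CGFloat)x y:(CGFloat)y {
    // Scale 0 means "use the device's screen scale", so Retina quality is preserved.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    [topImage drawInRect:CGRectMake(x, y, topImage.size.width, topImage.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}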

Averaging multiple UIImages

I have been searching for this answer for a while, but I haven't been able to find it.
I would like to average the pixels of 30 UIImages. To do so, I would like to use Quartz 2D instead of looping over all the pixels of all the images. It occurred to me that, in order to paint 30 images together, I could just set the alpha of each of them to 1/30; then, after painting one on top of the other, I would get the desired effect.
The desired formula is: destPx = (img[0].px + ... + img[29].px) / 30
I have tried to achieve it using an image context and blending the images together, with no luck:
UIGraphicsBeginImageContext(CGSizeMake(sz.width, sz.height));
for (int i = 0; i < 30; i++) {
    UIImage *img = [self.delegate requestImage:self at:i];
    CGPoint coord = [self.delegate requestTranslation:self at:i];
    [img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1/30];
}
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
How could I get an averaged image of many UIImages?
I have also tried adding an image with many sublayers, but I also get washed-out images.
Thanks!
Try changing the following:
[img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1/30];
to
[img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1.0/30.0];
1/30 (using integer values) == 0, so you'll be drawing the images completely transparent. By adding the .0, you clarify that you want a CGFloat.
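One further caveat, offered as a hedged refinement rather than part of the fix above: with kCGBlendModeNormal each draw computes dst = src*alpha + dst*(1 - alpha), so a constant 1/30 alpha still weights earlier images less than later ones. Drawing image i at alpha 1/(i + 1) maintains a running mean and should give an exact average:
for (int i = 0; i < 30; i++) {
    UIImage *img = [self.delegate requestImage:self at:i];
    CGPoint coord = [self.delegate requestTranslation:self at:i];
    // Image 0 draws at alpha 1, image 1 at 1/2, image 2 at 1/3, ...
    // so after drawing image i the context holds the mean of images 0..i.
    [img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1.0 / (i + 1)];
}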

Overlaying a UIImageView over a UIImageView and saving

I'm trying to merge two UIImageViews. The first UIImageView (theimageView) is the background, and the second UIImageView (Birdie) is an image overlaying the first UIImageView. You can load the first UIImageView from a map or take a picture. After this you can drag, rotate and scale the second UIImageView over the first one. I want the output (saved image) to look the same as what I see on the screen.
I got that working, but I get borders, and the quality and size are bad. I want the size to be the same as that of the chosen image, and the quality to be good. Also, I get a crash if I save a second time right after the first.
Here is my current code:
// Save the actual design to the photo library
- (void)captureScreen {
    UIImage *myImage = [self addImage:theImageView toImage:Birdie];
    [myImage retain];
    UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), self);
}

- (UIImage *)addImage:(UIImage *)theimageView toImage:(UIImage *)Birdie {
    CGSize size = CGSizeMake(theimageView.size.height, theimageView.size.width);
    UIGraphicsBeginImageContext(size);

    CGPoint pointImg1 = CGPointMake(0, 0);
    [theimageView drawAtPoint:pointImg1];

    CGPoint pointImage2 = CGPointMake(0, 0);
    [Birdie drawAtPoint:pointImage2];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
But I only get errors with this code!
Thanks in advance!
Take a look at Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation
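Separately from that link, a commonly used alternative (a hedged sketch; canvasView is my assumed name for a common superview containing both image views) is to render the composed view hierarchy with renderInContext:, which avoids merging the UIImages by hand:
#import <QuartzCore/QuartzCore.h>

- (void)captureScreen {
    // Render background + overlay exactly as they are composed on screen.
    UIGraphicsBeginImageContext(canvasView.bounds.size);
    [canvasView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(myImage, self,
        @selector(image:didFinishSavingWithError:contextInfo:), NULL);
}
Note that the completion callback for UIImageWriteToSavedPhotosAlbum must have the form image:didFinishSavingWithError:contextInfo:; a mismatched selector signature is a classic cause of crashes on save.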

Help on capturing screen

I want to take screenshots in landscape mode.
Currently my code below takes screenshots in PORTRAIT mode.
I also want to store the images at a given location, not in the photo library.
How can I attain this?
Thanks for any help.
Below is my code:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, self, nil, nil);
As far as I know there is no way to take screenshots in landscape mode. Images taken with the iPhone contain a piece of information called imageOrientation, which is then used to rotate a UIImageView displaying that image.
But you should be able to take an image in portrait mode and rotate it by 90 degree before saving it. I can't try it right now but the following should work:
Create a method for rotation and pass the UIImage as an argument.
- (UIImage *)rotateImage:(UIImage *)image byRadians:(CGFloat)angleInRadians {
    CGSize size = image.size;
    UIGraphicsBeginImageContext(size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Rotate around the canvas center rather than the top-left origin,
    // so the rotated content stays inside the context.
    CGContextTranslateCTM(ctx, size.width / 2, size.height / 2);
    CGContextRotateCTM(ctx, angleInRadians); // (M_PI/2) or (3*M_PI/2) depending on left/right rotation
    CGContextDrawImage(ctx,
                       CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height),
                       image.CGImage);

    UIImage *copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return copy;
}
To store to a given location you can use NSData, as I answered on your other question ( Saving images to a given location ).
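For completeness, a minimal sketch of the NSData route (the screenshot.png filename is arbitrary):
// Write the screenshot as a PNG into the app's Documents directory.
NSData *data = UIImagePNGRepresentation(viewImage);
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [docsDir stringByAppendingPathComponent:@"screenshot.png"];
[data writeToFile:path atomically:YES];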