Averaging multiple UIImages - iPhone

I have been searching for an answer to this for a while, but I haven't been able to find it.
I would like to average the pixels of 30 UIImages. To do so, I would like to use Quartz 2D instead of looping over all the pixels of all the images. It occurred to me that, in order to paint 30 images together, I should just set the alpha of each of them to 1/30. Then, after painting one on top of the other, I would get the desired effect.
The desired formula is: destPx = (img[0].px + ... + img[29].px) / 30
I have tried to achieve this using an image context, blending the images together, with no luck:
UIGraphicsBeginImageContext(CGSizeMake(sz.width, sz.height));
for (int i = 0; i < 30; i++) {
    UIImage* img = [self.delegate requestImage:self at:i];
    CGPoint coord = [self.delegate requestTranslation:self at:i];
    [img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1/30];
}
UIImage* im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
How can I get an averaged image of many UIImages?
I have also tried adding an image with many sublayers, but I also get washed-out images.
Thanks!

Try changing the following:
[img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1/30];
to
[img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1.0/30.0];
1/30 (integer division) == 0, so you'll be drawing the images completely transparent. By adding the .0, you make it a floating-point literal, which is what the CGFloat alpha parameter expects.
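Note that even with the division fixed, compositing every image at a constant alpha of 1.0/30.0 with normal blending does not compute an exact mean: each draw evaluates dest = src * α + dest * (1 - α), so earlier images end up attenuated more than later ones. One variant that does maintain a true running average is to draw image i at alpha 1.0/(i + 1). A minimal sketch, reusing the question's delegate calls:

UIGraphicsBeginImageContext(CGSizeMake(sz.width, sz.height));
for (int i = 0; i < 30; i++) {
    UIImage *img = [self.delegate requestImage:self at:i];
    CGPoint coord = [self.delegate requestTranslation:self at:i];
    // After this draw, the context holds the average of images 0..i:
    // dest = img[i] * 1/(i+1) + dest * i/(i+1)
    [img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1.0 / (i + 1)];
}
UIImage *averaged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Keep in mind that 8-bit compositing rounds at every step, so over 30 draws the result is only an approximation of the exact per-pixel mean.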

Related

divide image into two parts using divider

I'm working on an app where I need to divide an image into two parts using a red line:
left part for labels
right part for prices
Question 1: How can I draw a red line on the image?
Question 2: How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3: How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code in the -(void)viewDidLoad{} method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the view (i.e., bar in koray's code). Once you have determined that the user has grabbed bar, just update its origin's X position with the touch position, as sketched below:
CGRect barFrame = bar.frame;
barFrame.origin.x = touchLocation.x; // the X coordinate of the user's touch
bar.frame = barFrame;
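A hypothetical sketch of that gesture handling, assuming bar is kept as an ivar rather than a local variable, and where draggingBar is an assumed BOOL property (neither is part of koray's code):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint touchLocation = [[touches anyObject] locationInView:self.view];
    // Treat the bar as grabbed if the touch lands in its (slightly padded) frame
    self.draggingBar = CGRectContainsPoint(CGRectInset(bar.frame, -20, 0), touchLocation);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (!self.draggingBar) return;
    CGPoint touchLocation = [[touches anyObject] locationInView:self.view];
    CGRect barFrame = bar.frame;
    barFrame.origin.x = touchLocation.x;
    bar.frame = barFrame;
}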
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper; it will not do what you need.
Try this on for size:
Header:
@interface UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;

@end
Implementation:
@implementation UIImage (ImageDivider)

- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
    //pattern image
    UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
    CGFloat width = patternImage.size.width;

    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color from the pattern image color
    CGContextSetFillColorWithColor(context, patternColor.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //the joy of image color patterns being based on a 0,0 origin! must set the phase
    CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));

    //fill the divider rect with the repeating pattern from the image
    CGContextFillRect(context, dividerRect);

    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];

    //set the fill color for your divider
    CGContextSetFillColorWithColor(context, color.CGColor);

    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);

    //fill the divider's rect with the provided color
    CGContextFillRect(context, dividerRect);

    //get your new image and voilà!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
    NSMutableArray *slices = [NSMutableArray array];

    //first image
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));

        //draw the existing image into the context
        [self drawAtPoint:CGPointZero];

        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    //second
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));

        //draw the existing image into the context
        [self drawAtPoint:CGPointMake(-position, 0)];

        //get your new image and voilà!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }

    return slices;
}

@end
The concept is simple: you want an image with the divider drawn over it. You could overlay a view, override drawRect:, or use any number of other solutions, but I'd rather give you this category. It uses a few quick Core Graphics calls to generate an image with your desired divider, be it a pattern image or a solid color, at the specified position. If you want support for horizontal dividers as well, it is trivial to modify. Bonus: you can use a tiled image as your divider!
Now to answer your first question: using the category is self-explanatory. Just call one of the two divider methods on your source image to generate one with the divider, and then display that image rather than the original (see the usage sketch below).
The second question is also simple: when the divider has been moved, regenerate the image at the new divider position. This is a relatively inefficient way of doing it, but it ought to be lightweight enough for your purposes, and it only costs anything while the divider is actually moving. Premature optimization is just as much a sin as anything else.
The third question is simple as well: call imagesBySlicingAt:. It returns an array of two images, produced by slicing through the source at the provided position. Use them as you wish.
This code has been tested and works. I strongly suggest that you fiddle around with it, not for any practical purpose, but to better understand the mechanisms used, so that next time you can be on the answering side of things.
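For reference, a usage sketch (theImageView, sourceImage, and dividerX are assumed names):

//regenerate the displayed image whenever the divider moves
UIImage *withDivider = [sourceImage imageWithDividerAt:dividerX width:5.0f color:[UIColor redColor]];
theImageView.image = withDivider;

//when the user confirms the position, slice the original at the same spot
NSArray *slices = [sourceImage imagesBySlicingAt:dividerX];
UIImage *labelsImage = [slices objectAtIndex:0]; //left part
UIImage *pricesImage = [slices objectAtIndex:1]; //right part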
For cropping, you can also try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
Hope this can help you. :)
If you want to draw a line, you could just use a UIView with a red background, make its height the height of your image, and its width around 5 pixels:
UIView *imageToSplit; //the image I'm trying to split using a red bar

CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width / 2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];

High-quality rounded-corner image on iPhone

In my app, I want a high-quality image. The image is loaded from the Facebook friend list. When the image is loaded at a small size (50 × 50), its quality is fine. But when I try to get the image at a bigger size (280 × 280), the quality is diminished.
For the rounded corners I'm doing this:
self.mImageView.layer.cornerRadius = 10.0;
self.mImageView.layer.borderColor = [UIColor blackColor].CGColor;
self.mImageView.layer.borderWidth = 1.0;
self.mImageView.layer.masksToBounds = YES;
For getting the image I'm using the following code:
self.mImageView.image = [self imageWithImage:profileImage scaledToSize:CGSizeMake(280, 280)];

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
    CGContextRef context = CGContextRetain(UIGraphicsGetCurrentContext());
    CGContextTranslateCTM(context, 0.0, newSize.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetInterpolationQuality(context, kCGInterpolationLow);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextSetShouldAntialias(context, TRUE);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, newSize.width, newSize.height), image.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I have checked my code several times but could not figure out how to make the image look right. So, how can the quality of the image be improved?
Thanks in advance...
…quality of image diminished.
The 'quality' of the image is still present. (Technically, you introduce a small amount of error by resizing it, but that's not the real problem…)
So, you want to scale a 50×50 px image up to 280×280 px? The information/detail simply does not exist in the source signal. Ideally, you would download an image more appropriately sized for the size you want to display it at.
If that's not an option, you can reduce the pixelation by means of proper resampling and/or interpolation. This simply smooths out the pixels your program magnifies by 5.6×; the image will then look like a cross between pixelated and blurred. See CGContextSetAllowsAntialiasing, CGContextSetShouldAntialias, CGContextSetInterpolationQuality and related APIs to accomplish this using Quartz, as in the sketch below.
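A minimal sketch of such a scale, adapted from the question's own method; the key change is requesting high-quality interpolation, and drawInRect: also removes the need for the manual CTM flip and the unbalanced CGContextRetain:

- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, TRUE);
    //drawInRect: draws the image right side up, so no CTM flip is needed
    [image drawInRect:CGRectMake(0.0f, 0.0f, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}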

Core Graphics RGB data issue

I am trying to do pixel-by-pixel image filters using Core Graphics (breaking a CGImage into unsigned integers using CFData).
When I try to create an image with the processed data, however, the resulting image comes out with significantly different colors.
I commented out the entire loop where I actually alter the pixels' RGB values, and nothing changes either.
When I initialize the UIImage I am using in the filter, I resize it using drawInRect: with UIGraphicsBeginImageContext() on an image taken from the camera.
When I remove the resize step and set my image directly from the camera, the filters seem to work just fine. Here's the code where I initialize the image I am using (from inside didFinishPickingImage).
self.editingImage is a UIImageView and self.editingUIImage is a UIImage:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo
{
    self.didAskForImage = YES;

    UIGraphicsBeginImageContext(self.editingImage.frame.size);

    float prop = image.size.width / image.size.height;
    float left, top, width, height;
    if (prop < 1) {
        height = self.editingImage.frame.size.height;
        width = (height / image.size.height) * image.size.width;
        left = (self.editingImage.frame.size.width - width) / 2;
        top = 0;
    } else {
        width = self.editingImage.frame.size.width;
        height = (width / image.size.width) * image.size.height;
        top = (self.editingImage.frame.size.height - height) / 2;
        left = 0;
    }

    [image drawInRect:CGRectMake(left, top, width, height)];
    self.editingUIImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.editingImage.image = self.editingUIImage;

    [self.contrastSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [self.brightnessSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];

    [picker dismissModalViewControllerAnimated:YES];
    picker = nil;
}
This resizes the image just the way I need it as far as positioning goes.
Here's the image-filtering function; I've taken the actual loop contents out because they're irrelevant:
- (void)doImageFilter:(id)sender {
    CGImageRef src = self.editingUIImage.CGImage;
    CFDataRef dta = CGDataProviderCopyData(CGImageGetDataProvider(src));
    UInt8 *pixData = (UInt8 *)CFDataGetBytePtr(dta);
    int dtaLen = CFDataGetLength(dta);
    for (int i = 0; i < dtaLen; i += 3) {
        //the loop
    }

    CGContextRef ctx = CGBitmapContextCreate(pixData, CGImageGetWidth(src), CGImageGetHeight(src), 8, CGImageGetBytesPerRow(src), CGImageGetColorSpace(src), kCGImageAlphaPremultipliedLast);

    CGImageRef newCG = CGBitmapContextCreateImage(ctx);
    UIImage *new = [UIImage imageWithCGImage:newCG];
    CGContextRelease(ctx);
    CFRelease(dta);
    CGImageRelease(newCG);

    self.editingImage.image = new;
}
The image looks fine at first, but after doImageFilter runs the colors come out visibly wrong (screenshots omitted).
As mentioned before, this only happens when I use the resize method shown above.
I'm really stumped on this one and have been researching it all day... any help is very much appreciated!
Cheers
Update: I've examined all the image objects' color spaces and they're all kCGColorSpaceDeviceRGB. I'm still stumped; I'm pretty sure something is going wrong when I break the image into unsigned integers, but I'm not sure what. Anyone?
Your problem is in the call that creates the bitmap context:
ctx = CGBitmapContextCreate(pixData,
                            CGImageGetWidth(src),
                            CGImageGetHeight(src),
                            8,
                            CGImageGetBytesPerRow(src),
                            CGImageGetColorSpace(src),
                            kCGImageAlphaPremultipliedLast);
You're making an assumption about the alpha and the component ordering of the source image's data, which is apparently not correct. You should get that information from the source image via CGImageGetBitmapInfo(src).
To avoid issues like this one, if you're starting with an arbitrary CGImage and you want to manipulate the bytes of the bitmap directly, it is best to make a CGBitmapContext in a format that you specify yourself (not taken directly from the source image). Then draw your source image into that bitmap context; CG will convert the image's data into your bitmap context's format if necessary. Then get the data from the bitmap context and manipulate it, as sketched below.
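A minimal sketch of that approach, reusing the question's variable names and assuming 8-bit RGBA is an acceptable working format:

CGImageRef src = self.editingUIImage.CGImage;
size_t width = CGImageGetWidth(src);
size_t height = CGImageGetHeight(src);
size_t bytesPerRow = width * 4; //a layout we chose: 4 bytes per pixel

//create a bitmap context in a format we define ourselves;
//passing NULL lets CG allocate and own the pixel buffer
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

//CG converts the source image into our known format as it draws
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), src);

//now every pixel is a predictable 4-byte RGBA quad
UInt8 *pixData = (UInt8 *)CGBitmapContextGetData(ctx);
for (size_t i = 0; i < height * bytesPerRow; i += 4) {
    //pixData[i], pixData[i+1], pixData[i+2], pixData[i+3] = R, G, B, A
}

CGImageRef newCG = CGBitmapContextCreateImage(ctx);
self.editingImage.image = [UIImage imageWithCGImage:newCG];
CGImageRelease(newCG);
CGContextRelease(ctx);

Note that the loop steps by 4, not 3: with an alpha channel there are four bytes per pixel, so the question's i += 3 would drift across channels even if the format assumption were otherwise right.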

Overlaying a UIImageView over a UIImageView and saving

I'm trying to merge two UIImageViews. The first UIImageView (theimageView) is the background, and the second UIImageView (Birdie) is an image overlaying the first. You can load the first UIImageView from a map or by taking a picture. After this you can drag, rotate, and scale the second UIImageView over the first one. I want the output (the saved image) to look the same as what I see on the screen.
I got that working, but I get borders, and the quality and size are bad. I want the size to be the same as that of the chosen image, and the quality to be good. I also get a crash if I save a second time, right after the first time.
Here is my current code:
//save actual design in photo library
- (void)captureScreen {
    UIImage *myImage = [self addImage:theImageView ToImage:Birdie];
    [myImage retain];
    UIImageWriteToSavedPhotosAlbum(myImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), self);
}

- (UIImage*)addImage:(UIImage*)theimageView toImage:(UIImage*)Birdie {
    CGSize size = CGSizeMake(theimageView.size.height, theimageView.size.width);
    UIGraphicsBeginImageContext(size);

    CGPoint pointImg1 = CGPointMake(0, 0);
    [theimageView drawAtPoint:pointImg1];

    CGPoint pointImage2 = CGPointMake(0, 0);
    [Birdie drawAtPoint:pointImage2];

    UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
But I only get errors with this code!
Thanks in advance!
Take a look at Drawing a PNG Image Into a Graphics Context for Blending Mode Manipulation
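Beyond the linked article, a minimal sketch of one common way to make the saved image match the screen: render the container view that holds both image views into an image context. Here containerView is an assumed name for the superview of theimageView and Birdie, and renderInContext: requires linking QuartzCore:

#import <QuartzCore/QuartzCore.h>

- (UIImage *)captureContainerView:(UIView *)containerView {
    UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
    //renders the view hierarchy, including the dragged/rotated/scaled
    //overlay, exactly as it is composited on screen
    [containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return merged;
}

This also sidesteps the type mix-up in the posted code, which passes UIImageView objects to a method whose parameters are typed UIImage* (and calls addImage:ToImage: with a capital T while the method is declared addImage:toImage:).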

Computing a UIImage to be saved to the photo album

I basically want to automatically create a tiled image from a bunch of source images and then save it to the user's photo album. I'm not having any success drawing a bunch of small UIImages into one big UIImage. What's the best way to accomplish this? Currently I'm using UIGraphicsBeginImageContext() and [UIImage drawAtPoint:], etc. All I ever end up with is a 512×512 black square. How should I be doing this? I've looked at CGLayers, etc.; there seem to be a lot of options, but none that work particularly easily.
Let me actually put my code in:
CGSize size = CGSizeMake(512, 512);
UIGraphicsBeginImageContext(size);
UIGraphicsPushContext(UIGraphicsGetCurrentContext());

for (int i = 0; i < 4; i++)
{
    for (int j = 0; j < 4; j++)
    {
        UIImage *image = [self getImageAt:i :j];
        [image drawAtPoint:CGPointMake(i * 128, j * 128)];
    }
}

UIGraphicsPopContext();
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil);
I should note that the above is not exactly what happens in my code. What really happens is that I run every line up to and including UIGraphicsPushContext(), then in an animation timer I slowly increment the drawing and draw into the context. Then, after it's all done, I run everything from UIGraphicsPopContext() onward.
Oh, then you can just save the onscreen view after it has been rendered on screen:
UIGraphicsBeginImageContext(myBigView.bounds.size);
//render the view's layer into the current context (calling drawRect:
//directly is not supported; renderInContext: requires QuartzCore)
[myBigView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Are you storing it back into an image?
UIImage *myBigImage = UIGraphicsGetImageFromCurrentImageContext();
To do exactly what I wanted to do: make your GL view as big as the total image you want. Also make sure glOrtho and your viewport have the right size. Then just draw whatever you want wherever you want, and take a single OpenGL screenshot. That way you don't need to worry about combining multiple OpenGL rendering passes into a single UIImage, which is no doubt what was causing my issue.
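For reference, a minimal sketch of taking that single OpenGL screenshot with glReadPixels; backingWidth and backingHeight are assumed to hold the framebuffer's pixel dimensions, and this runs right after drawing, before the buffer is presented:

GLint width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *buffer = (GLubyte *)malloc(dataLength);

//read the framebuffer back as RGBA bytes
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

//wrap the bytes in a CGImage
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, dataLength, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast, provider, NULL, NO, kCGRenderingIntentDefault);

//OpenGL's origin is bottom-left, so flip vertically while drawing
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
free(buffer);

UIImageWriteToSavedPhotosAlbum(screenshot, nil, nil, nil);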