How do I save a section of my screen to the user's images (in Swift)?

I want my user to be able to upload some images into a little square, and then I want all of them to be saved into one image on the user's iPhone.
I'm basically making an app that combines the user's pictures beside each other (there are a ton of apps like that, but I want to learn how they work) and then saves the combined result as an image on their phone.

Save all your images in an array (arrImages) and use the following method to merge them:
- (UIImage *)mergeImages:(NSArray *)arrImages {
    float width = 2024;  // set the width of the merged image
    float height = 2024; // set the height of the merged image
    CGSize mergedImageSize = CGSizeMake(width, height);
    float x = 0;
    float y = 0;
    UIGraphicsBeginImageContext(mergedImageSize);
    for (UIImage *img in arrImages) {
        // each image gets an equal share of the canvas; since x and y both
        // advance, the images are laid out along the diagonal; drop the y
        // increment if you want them side by side
        CGRect rect = CGRectMake(x, y, width / arrImages.count, height / arrImages.count);
        [img drawInRect:rect];
        x = x + (width / arrImages.count);
        y = y + (height / arrImages.count);
    }
    // returns an image based on the contents of the current bitmap-based graphics context
    UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergedImage;
}
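To actually get the merged result into the user's photo library (the original question), you can hand it to UIImageWriteToSavedPhotosAlbum; a minimal sketch, assuming arrImages already holds the user's pictures:
UIImage *merged = [self mergeImages:arrImages];
// write the image to the Saved Photos album; pass a target/selector in the
// middle arguments if you need a completion callback
UIImageWriteToSavedPhotosAlbum(merged, nil, nil, NULL);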

Related

divide image into two parts using divider

I'm working on an app where I need to divide an image into two parts using a red line:
left part for labels
right part for prices
Question 1.
How can I draw a red line on the image?
Question 2.
How can I divide the image into two parts using the red line? (The red line's position is not fixed; the user can move it wherever they want.)
Question 3.
How can I get the line's current position, and how can I use that position to divide the image?
Thanks in advance
I would approach this in somewhat the same manner as koray was suggesting:
1) I am assuming that your above image/view is going to be managed by a view controller, which I will call ImageSeperatorViewController from here on.
Inside of ImageSeperatorViewController, insert koray's code in the -(void)viewDidLoad method. Make sure you change the imageToSplit variable to be a UIImageView instead of a plain UIView.
2) Next, I assume that you know how to detect user gestures. You will detect these gestures and determine whether the user has selected the view (i.e. bar in koray's code). Once you have determined that the user has selected bar, just update its frame's X origin with the touch position (touchLocation below stands for the point you got from your gesture handling; see the sketch after the snippet):
CGRect barFrame = bar.frame;
barFrame.origin.x = touchLocation.x; // touchLocation is a placeholder for the user's touch point
bar.frame = barFrame;
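If gesture handling is the unfamiliar part, here is a minimal sketch using a UIPanGestureRecognizer (bar is the divider view from koray's code; handlePan: is a name chosen here for illustration):
// in viewDidLoad, after creating bar:
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[self.view addGestureRecognizer:pan];

// move the bar to follow the user's horizontal drag
- (void)handlePan:(UIPanGestureRecognizer *)gesture {
    CGPoint touchLocation = [gesture locationInView:self.view];
    CGRect barFrame = bar.frame;
    barFrame.origin.x = touchLocation.x;
    bar.frame = barFrame;
}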
3) For cropping, I would not use github.com/bilalmughal/NLImageCropper; it will not do what you need.
Try this on for size:
Header:
@interface UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor*)color;
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage;
- (NSArray*)imagesBySlicingAt:(CGFloat)position;
@end
Implementation:
@implementation UIImage (ImageDivider)
- (UIImage*)imageWithDividerAt:(CGFloat)position patternImage:(UIImage*)patternImage
{
    //pattern image
    UIColor *patternColor = [UIColor colorWithPatternImage:patternImage];
    CGFloat width = patternImage.size.width;
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];
    //set the fill color from the pattern image color
    CGContextSetFillColorWithColor(context, patternColor.CGColor);
    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
    //the joy of image color patterns being based on 0,0 origin! must set phase
    CGContextSetPatternPhase(context, CGSizeMake(dividerRect.origin.x, 0));
    //fill the divider rect with the repeating pattern from the image
    CGContextFillRect(context, dividerRect);
    //get your new image and voila!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (UIImage*)imageWithDividerAt:(CGFloat)position width:(CGFloat)width color:(UIColor *)color
{
    //set up context
    UIGraphicsBeginImageContext(self.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    //draw the existing image into the context
    [self drawAtPoint:CGPointZero];
    //set the fill color for your divider
    CGContextSetFillColorWithColor(context, color.CGColor);
    //this is your divider's area
    CGRect dividerRect = CGRectMake(position - (width / 2.0f), 0, width, self.size.height);
    //fill the divider's rect with the provided color
    CGContextFillRect(context, dividerRect);
    //get your new image and voila!
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (NSArray*)imagesBySlicingAt:(CGFloat)position
{
    NSMutableArray *slices = [NSMutableArray array];
    //first image
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(position, self.size.height));
        //draw the existing image into the context
        [self drawAtPoint:CGPointZero];
        //get your new image and voila!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }
    //second
    {
        //context!
        UIGraphicsBeginImageContext(CGSizeMake(self.size.width - position, self.size.height));
        //draw the existing image into the context
        [self drawAtPoint:CGPointMake(-position, 0)];
        //get your new image and voila!
        [slices addObject:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();
    }
    return slices;
}
@end
The concept is simple - you want an image with the divider drawn over it. You could just overlay a view, or override drawRect:, or any number of other solutions. I'd rather give you this category. It just uses some quick Core Graphics calls to generate an image with your desired divider, be it a pattern image or a color, at the specified position. If you want support for horizontal dividers as well, it is rather trivial to modify this code accordingly. Bonus: you can use a tiled image as your divider!
Now to answer your primary question. Using the category is rather self-explanatory - just call one of the two methods on your source background to generate one with the divider, and then apply that image rather than the original source image.
Now, the second question is simple - when the divider has been moved, regenerate the image based on the new divider position. This is actually a relatively inefficient way of doing it, but this ought to be lightweight enough for your purposes as well as only being an issue when moving the divider. Premature optimization is just as much a sin.
Third question is also simple - call imagesBySlicingAt: - it will return an array of two images, as generated by slicing through the image at the provided position. Use them as you wish.
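Putting it together, a minimal usage sketch (sourceImage, dividerPosition, and the image view are placeholder names):
// draw a 4pt red divider over the image at the divider's current position
UIImage *withDivider = [sourceImage imageWithDividerAt:dividerPosition width:4.0f color:[UIColor redColor]];
self.imageView.image = withDivider;

// when the user is done, slice into the left (labels) and right (prices) parts
NSArray *slices = [sourceImage imagesBySlicingAt:dividerPosition];
UIImage *leftPart = [slices objectAtIndex:0];
UIImage *rightPart = [slices objectAtIndex:1];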
This code has been tested to be functional. I strongly suggest that you fiddle around with it, not for any purpose of utility, but to better understand the mechanisms used, so that next time you can be on the answering side of things.
For cropping, you can try this:
UIImage *image = [UIImage imageNamed:@"yourImage.png"];
CGImageRef tmpImgRef = image.CGImage;
CGImageRef topImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, 0, image.size.width, image.size.height / 2.0));
UIImage *topImage = [UIImage imageWithCGImage:topImgRef];
CGImageRelease(topImgRef);
CGImageRef bottomImgRef = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(0, image.size.height / 2.0, image.size.width, image.size.height / 2.0));
UIImage *bottomImage = [UIImage imageWithCGImage:bottomImgRef];
CGImageRelease(bottomImgRef);
Hope this can help you :)
If you want to draw a line, you could just use a UIView with a red background, making its height the size of your image and its width around 5 pixels.
UIView *imageToSplit; //the image I'm trying to split using a red bar
CGRect i = imageToSplit.frame;
int x = i.origin.x + i.size.width/2;
int y = i.origin.y;
int width = 5;
int height = i.size.height;
UIView *bar = [[[UIView alloc] initWithFrame:CGRectMake(x, y, width, height)] autorelease];
bar.backgroundColor = [UIColor redColor];
[self.view addSubview:bar];

Image cropping in UIImageView in iOS [duplicate]

Possible Duplicate:
Cropping a UIImage
I have one UIImageView. I need to crop some area of that UIImageView and store it as a UIImage, and then pass that saved image to another view.
There are quite a few ways to crop images using the Core Graphics functions; the most basic one would be:
CGRect cropRect = CGRectMake(0, 0, 100, 100);
CGImageRef croppedImgRef = CGImageCreateWithImageInRect(yourUIImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedImgRef];
CGImageRelease(croppedImgRef);
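One caveat worth adding (not in the original answer, but a common pitfall): CGImageCreateWithImageInRect works in the CGImage's pixel coordinates, while UIImage sizes are in points, so on a Retina screen you may need to scale the crop rect by the image's scale factor first:
// scale a point-based rect into the CGImage's pixel coordinate space
CGFloat imgScale = yourUIImage.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * imgScale,
                              cropRect.origin.y * imgScale,
                              cropRect.size.width * imgScale,
                              cropRect.size.height * imgScale);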
Also please check below tutorial
How to Crop an Image (UIImage) On iOS
This might help. It's mostly copy-paste from my code.
I have a big UIImageView with the original image - mImageViewCropper,
Then I have a semitransparent view,
then a smaller UIImageView overlaid - mImageViewCropperSmallWindow. (It looks like the cropper in Instagram.)
On pinch and pan gestures, I resize the small image view, and then I call the following function, which loads into the small image view the corresponding cropped image from the big picture.
-(void)refreshImageInMImageViewCropperSmallWindow
{
    double imageBeginX = 0; //we need to set these because of a possible ratio mismatch (black stripes)
    double imageEndX = 0;
    double imageBeginY = 0;
    double imageEndY = 0;
    CGSize imageSize = mImageViewCropper.image.size;
    imageBeginX = mImageViewCropper.frame.size.width / 2 - imageSize.width / 2;
    imageEndX = mImageViewCropper.frame.size.width / 2 + imageSize.width / 2;
    imageBeginY = mImageViewCropper.frame.size.height / 2 - imageSize.height / 2;
    imageEndY = mImageViewCropper.frame.size.height / 2 + imageSize.height / 2;
    CGRect smallFrame = mImageViewCropperSmallWindow.frame;
    UIImage *croppedImage = [mImageViewCropper.image crop:CGRectMake(smallFrame.origin.x - imageBeginX,
                                                                     smallFrame.origin.y - imageBeginY,
                                                                     smallFrame.size.width,
                                                                     smallFrame.size.height)];
    mImageViewCropperSmallWindow.image = croppedImage;
}
The method is far from perfect, but it's a starting point.
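Note that crop: is not a stock UIKit method, so the snippet above presumably relies on a UIImage category; a minimal sketch of what such a category might look like (the selector name is assumed from the call above):
@implementation UIImage (Crop)
- (UIImage *)crop:(CGRect)rect
{
    // convert the point-based rect into the CGImage's pixel space
    CGRect pixelRect = CGRectMake(rect.origin.x * self.scale,
                                  rect.origin.y * self.scale,
                                  rect.size.width * self.scale,
                                  rect.size.height * self.scale);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(self.CGImage, pixelRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}
@end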

Core graphics RGB data issue

I am trying to do pixel-by-pixel image filters using Core Graphics (breaking a CGImage into unsigned integers using CFData).
When I try to create an image with the processed data, however, the resulting image comes out with significantly different colors.
I commented out the entire loop where I actually alter the pixels' rgb values and nothing changes, either.
When I initialize the UIImage I am using in the filter, I do a resize using drawInRect: with UIGraphicsBeginImageContext() on an image taken from the camera.
When I remove the resize step and set my image directly from the camera, the filters seem to work just fine. Here's the code where I initialize the image I am using (from inside didFinishPickingImage):
self.editingImage is a UIImageView and self.editingUIImage is a UIImage
-(void)imagePickerController:(UIImagePickerController *)picker
       didFinishPickingImage:(UIImage *)image
                 editingInfo:(NSDictionary *)editingInfo
{
    self.didAskForImage = YES;
    UIGraphicsBeginImageContext(self.editingImage.frame.size);
    float prop = image.size.width / image.size.height;
    float left, top, width, height;
    if (prop < 1) {
        height = self.editingImage.frame.size.height;
        width = (height / image.size.height) * image.size.width;
        left = (self.editingImage.frame.size.width - width) / 2;
        top = 0;
    } else {
        width = self.editingImage.frame.size.width;
        height = (width / image.size.width) * image.size.height;
        top = (self.editingImage.frame.size.height - height) / 2;
        left = 0;
    }
    [image drawInRect:CGRectMake(left, top, width, height)];
    self.editingUIImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.editingImage.image = self.editingUIImage;
    [self.contrastSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [self.brightnessSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [picker dismissModalViewControllerAnimated:YES];
    picker = nil;
}
This resizes the image just the way I need it as far as positioning goes.
Here's the image filtering function, I've taken the actual loop contents out because they're irrelevant.
- (void)doImageFilter:(id)sender {
    CGImageRef src = self.editingUIImage.CGImage;
    CFDataRef dta;
    dta = CGDataProviderCopyData(CGImageGetDataProvider(src));
    UInt8 *pixData = (UInt8 *)CFDataGetBytePtr(dta);
    int dtaLen = CFDataGetLength(dta);
    for (int i = 0; i < dtaLen; i += 3) {
        //the loop
    }
    CGContextRef ctx;
    ctx = CGBitmapContextCreate(pixData, CGImageGetWidth(src), CGImageGetHeight(src), 8, CGImageGetBytesPerRow(src), CGImageGetColorSpace(src), kCGImageAlphaPremultipliedLast);
    CGImageRef newCG = CGBitmapContextCreateImage(ctx);
    UIImage *new = [UIImage imageWithCGImage:newCG];
    CGContextRelease(ctx);
    CFRelease(dta);
    CGImageRelease(newCG);
    self.editingImage.image = new;
}
The image looks like this at first
and then after doing doImageFilter...
As mentioned before, this only happens when I use the resize method shown above.
Really stumped on this one, been researching it all day... any help very appreciated!
Cheers
Update: I've examined all the image objects' color spaces and they're all kCGColorSpaceDeviceRGB. Pretty stumped on this one, guys. I'm pretty sure something is going wrong when I break the image into unsigned integers, but I'm not sure what. Anyone?
Your problem is on the last line:
ctx = CGBitmapContextCreate(pixData,
CGImageGetWidth(src),
CGImageGetHeight(src),
8,
CGImageGetBytesPerRow(src),
CGImageGetColorSpace(src),
kCGImageAlphaPremultipliedLast);
You're making an assumption about the alpha and the component ordering of the data of the source image, which is apparently not correct. You should get that from the source image via CGImageGetBitmapInfo(src).
To avoid issues like this one, if you're starting with an arbitrary CGImage and you want to manipulate the bytes of the bitmap directly, it is best to make a CGBitmapContext in a format that you specify yourself (not directly taken from the source image). Then, draw your source image into the bitmap context; CG will convert the image's data into your bitmap context's format, if necessary. Then get the data from the bitmap context and manipulate it.
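A minimal sketch of that advice (assuming the same src variable from the question; the other names are placeholders): create an RGBA bitmap context in a format you define, draw the source into it, and manipulate those bytes instead:
size_t width = CGImageGetWidth(src);
size_t height = CGImageGetHeight(src);
size_t bytesPerRow = width * 4; // 4 bytes per pixel in our chosen RGBA format
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// a context whose format *we* control: 8 bits per component, premultiplied alpha last
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// CG converts the source image's data into this format while drawing
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), src);
// the bytes are now in a known layout: R, G, B, A, four bytes per pixel
UInt8 *pixData = (UInt8 *)CGBitmapContextGetData(ctx);
for (size_t i = 0; i < height * bytesPerRow; i += 4) {
    // pixData[i], pixData[i+1], pixData[i+2] are R, G, B; pixData[i+3] is alpha
}
CGImageRef newCG = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);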

iPhone SDK: Problem saving one image over another

basically I am making an app that involves a user taking a photo, or selecting one already on their device, and then placing an overlay onto the image.
So, I seem to have coded everything fine, apart from one thing: after the user has selected the overlay and positioned it, the size of the overlay has changed in the saved image, whereas the x and y values seem correct.
And so this is the code I use to add the overlay ("image" being the users photo):
float wid = (overlay.image.size.width);
float hei = (overlay.image.size.height);
overlay.frame = CGRectMake(0, 0, wid, hei);
[image addSubview:overlay];
And this is the code used to save the resulting image:
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
// Draw the overlay
float xx = (overlay.center.x);
float yy = (overlay.center.y);
CGRect aaFrame = overlay.frame;
float width = aaFrame.size.width;
float height = aaFrame.size.height;
[overlay.image drawInRect:CGRectMake(xx, yy, width, height)];
UIGraphicsEndImageContext();
Any help? Thanks
The problem is that you are using the image's size rather than the image view's frame size. The image seems to be much larger than its image view, so when you use the image's size, the overlay ends up much smaller in comparison, although it is still the correct size. You can modify your snippet to this –
UIGraphicsBeginImageContext(image.frame.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.frame.size.width, image.frame.size.height)];
[overlay.image drawInRect:overlay.frame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Avoiding loss of quality
While the above method leads to loss of resolution, trying to draw the parent image at its proper resolution might have an unwanted effect on its child image, i.e. if the overlay wasn't high-resolution itself, it can end up being stretched. However, you can try this code to draw at the parent image's resolution (untested, let me know if you have problems) –
float verticalScale = image.image.size.height / image.frame.size.height;
float horizontalScale = image.image.size.width / image.frame.size.width;
CGRect overlayFrame = overlay.frame;
overlayFrame.origin.x *= horizontalScale;
overlayFrame.origin.y *= verticalScale;
overlayFrame.size.width *= horizontalScale;
overlayFrame.size.height *= verticalScale;
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
[overlay.image drawInRect:overlayFrame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Retina display of images - iPhone 3 to 4

I have developed a tile game application for the iPhone 3.
In it, I took an image from my resources and divided it into a number of tiles using the CGImageCreateWithImageInRect(originalImage.CGImage, frame) function.
It works great on all iPhones, but now I want it to work on Retina displays too.
So, as per this link, I have taken another image double the size of the current image and renamed it by adding the suffix @2x. But the problem is that it takes only the upper half of the Retina image. I think that's because of the frame I set while using CGImageCreateWithImageInRect. So what should be done to make this work?
Any kind of help will be really appreciated.
Thanks in advance...
The problem is likely that the @2x image scale is only automatically set up properly for certain initializers of UIImage... Try loading your UIImages using code like this from Tasty Pixel. The entry at that link talks more about this issue.
Using the UIImage+TPAdditions category from the link, you'll implement it like so (after making sure that the images and their @2x counterparts are in your project):
NSString *baseImagePath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
NSString *myImagePath = [baseImagePath stringByAppendingPathComponent:@"myImage.png"]; // note no need to add @2x.png here
UIImage *myImage = [UIImage imageWithContentsOfResolutionIndependentFile:myImagePath];
Then you should be able to use CGImageCreateWithImageInRect(myImage.CGImage, frame);
Here's how I got it to work in an app I did:
//this is a method that takes a UIImage and slices it into 16 tiles (GridSize * GridSize)
#define GridSize 4
- (void)sliceImage:(UIImage *)image {
    CGSize imageSize = [image size];
    CGSize square = CGSizeMake(imageSize.width/GridSize, imageSize.height/GridSize);
    CGFloat scaleMultiplier = [image scale];
    square.width *= scaleMultiplier;
    square.height *= scaleMultiplier;
    CGFloat scale = ([self frame].size.width/GridSize)/square.width;
    CGImageRef source = [image CGImage];
    if (source != NULL) {
        for (int r = 0; r < GridSize; ++r) {
            for (int c = 0; c < GridSize; ++c) {
                CGRect slice = CGRectMake(c*square.width, r*square.height, square.width, square.height);
                CGImageRef sliceImage = CGImageCreateWithImageInRect(source, slice);
                if (sliceImage) {
                    //we have a tile (as a CGImageRef) from the source image
                    //do something with it
                    CFRelease(sliceImage);
                }
            }
        }
    }
}
The trick is using the -[UIImage scale] property to figure out how big of a rect you should be slicing.
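If you need each tile back as a UIImage (for example, to hand to a UIImageView), you can preserve the Retina scale when wrapping the CGImageRef; a small addition that would go inside the inner loop above:
// wrap the tile in a UIImage, keeping the source image's scale so it
// renders at the right point size on Retina displays
UIImage *tile = [UIImage imageWithCGImage:sliceImage scale:scaleMultiplier orientation:UIImageOrientationUp];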