Return a subimage from a UIImage - iPhone

I want to grab a subimage from a UIImage. I've looked around for a similar question, to no avail.
I know the range of pixels I want to grab - how can I return this subimage, from an existing image?

This should help: http://iphonedevelopment.blogspot.com/2010/11/drawing-part-of-uiimage.html
That snippet is written as a category on UIImage, but it can easily be adapted to work without being a category.
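The approach in that post boils down to drawing the wanted sub-rectangle of the image into a fresh context. A rough Swift sketch of the same idea (written as an extension rather than a category; subimage(in:) and its rect-in-points parameter are my own names, not from the post) could look like this:
extension UIImage {
    // Draws the given sub-rectangle (in points, in this image's coordinate space)
    // into a new image of that size.
    func subimage(in rect: CGRect) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(rect.size, false, scale)
        defer { UIGraphicsEndImageContext() }
        // Offset the drawing so that rect.origin lands at the new context's origin.
        draw(at: CGPoint(x: -rect.origin.x, y: -rect.origin.y))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}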
A shorter way of doing the same thing is the following:
CGRect fromRect = CGRectMake(0, 0, 320, 480); // or whatever rectangle
CGImageRef drawImage = CGImageCreateWithImageInRect(image.CGImage, fromRect);
UIImage *newImage = [UIImage imageWithCGImage:drawImage];
CGImageRelease(drawImage);
Hope this helps!

Updated @donkim's answer for Swift 3:
let fromRect = CGRect(x: 0, y: 0, width: 320, height: 480)
let drawImage = image.cgImage!.cropping(to: fromRect)
let bimage = UIImage(cgImage: drawImage!)

In Swift 4, taking into account screen scale (otherwise your new image will be too large):
let img = UIImage(named: "existingImage")!
let scale = UIScreen.main.scale
let dy: CGFloat = 6 * scale // say you want 6pt from bottom
let area = CGRect(x: 0, y: img.size.height * scale - dy, width: img.size.width * scale, height: dy)
let crop = img.cgImage!.cropping(to: area)!
let subImage = UIImage(cgImage: crop, scale: scale, orientation: .up)

Related

Creating thumbnail -> ugly quality (Swift - preparingThumbnail)

This is how I create a thumbnail from Data:
let image = UIImage(data: data)!
    .preparingThumbnail(of: .init(width: size, height: size))!
try image.pngData()!.write(to: url)
The data variable contains the original image. That looks good, but I want to create thumbnails from lists.
The size variable holds a value which is the same height as my Image in SwiftUI. The problem is, it looks horrible:
Thumbnail: (screenshot)
Original: (screenshot)
The 'thumbnail' is the same size as the image above; it really looks that bad on the device, it is not stretched out. What is the correct way to create a thumbnail of the same quality on iOS 15.0+?
Have you tried taking the aspect ratio into account as well, instead of just the size? Pass in your image (the UIImage(data: data)! from your code) and see if that works:
func resizeImageWithAspect(image: UIImage, scaledToMaxWidth width: CGFloat, maxHeight height: CGFloat) -> UIImage? {
    let oldWidth = image.size.width
    let oldHeight = image.size.height
    let scaledBy = (oldWidth > oldHeight) ? width / oldWidth : height / oldHeight
    let newHeight = oldHeight * scaledBy
    let newWidth = oldWidth * scaledBy
    let newSize = CGSize(width: newWidth, height: newHeight)

    UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
    image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
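A possible way to call it with the variables from the question (just a sketch; data, size and url are the names from your snippet, and size is assumed to be a CGFloat):
if let original = UIImage(data: data),
   let thumbnail = resizeImageWithAspect(image: original, scaledToMaxWidth: size, maxHeight: size) {
    // Write the scaled-down thumbnail instead of the full-size image.
    try? thumbnail.pngData()?.write(to: url)
}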

Getting masked layer as UIImage on Swift on top of UIImageView

I'm trying to get the UIImage of the mask that I applied to a UIImageView.
I'm adding the mask using UIBezierPath and want the actual masked layer as UIImage, not the whole image. Think of it as a crop feature.
I'm cropping the image using:
func cropImage() {
    shapeLayer.fillColor = UIColor.black.cgColor
    viewSource.imageView.layer.mask = shapeLayer
    viewSource.imageView.layer.masksToBounds = true

    UIGraphicsBeginImageContextWithOptions(viewSource.imageView.bounds.size, false, 1)
    viewSource.imageView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    self.completionObservable.onNext(newImage)
}
This eventually gives me the masked image, but at the old dimensions (the initial imageView width and height). I want only the masked area, excluding the white background around it.
The screens are as shown:
I see what you mean now. The answer is to update the size of the image context:
UIGraphicsBeginImageContextWithOptions((shapeLayer.path?.boundingBoxOfPath)!.size, false, 1)
If it isn't that simple, you can try a CIImage pipeline instead:
let context = CIContext()
let m1 = newImage?.cgImage
let m = CIImage(cgImage: m1!)
let bounds = imageView.layer.bounds
let cgImage = context.createCGImage(m, from: CGRect(x: 0, y: bounds.size.height, width: bounds.size.width, height: bounds.size.height))
let newUIImage = UIImage(cgImage: cgImage!)
You may need to adjust the transform.
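Putting the first suggestion into the original cropImage might look like this (a sketch, not verbatim from the answer: besides resizing the context it also translates it by the bounding box origin so the masked region lands at (0, 0); shapeLayer, viewSource and completionObservable are the names from the question):
func cropImage() {
    shapeLayer.fillColor = UIColor.black.cgColor
    viewSource.imageView.layer.mask = shapeLayer
    viewSource.imageView.layer.masksToBounds = true

    // Size the context to the path's bounding box instead of the whole image view.
    let pathBounds = shapeLayer.path!.boundingBoxOfPath
    UIGraphicsBeginImageContextWithOptions(pathBounds.size, false, 1)
    let context = UIGraphicsGetCurrentContext()!
    // Shift the drawing so the bounding box's origin maps to the context's origin.
    context.translateBy(x: -pathBounds.origin.x, y: -pathBounds.origin.y)
    viewSource.imageView.layer.render(in: context)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    self.completionObservable.onNext(newImage)
}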

How to save a UIImage to Parse

I've taken an image and converted it to the jpeg format as such
let jpgImage = UIImageJPEGRepresentation(image1, 0.5)
Next I want to upload this image to Parse as part of a PFUser object. Whenever I try this I get the error:
[Error]: The object is too large -- should be less than 128 kB
I don't know how to fix this error and just sign up the user. Thanks for any help or advice!
Objects are limited to 128 kB, as the error states. If your image is larger than that, you can shrink it with the function below.
func ResizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let size = image.size
    let widthRatio = targetSize.width / image.size.width
    let heightRatio = targetSize.height / image.size.height

    // Figure out what our orientation is, and use that to form the rectangle
    var newSize: CGSize
    if widthRatio > heightRatio {
        newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
    } else {
        newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
    }

    // This is the rect that we've calculated out and this is what is actually used below
    let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)

    // Actually do the resizing to the rect using the image context
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    image.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
Using the function above you can resize your image to whatever dimensions you need. For example, to resize an image to 200×200, call it like this:
self.ResizeImage(image: UIImage(named: "yourImageName")!, targetSize: CGSize(width: 200.0, height: 200.0))
So start by checking the image size by:
let sizeOfImage = image?.size
if let image = image {
    let sizeOfImage = image.size
    // Check the size here and resize if needed
}
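Tying the two together might look something like this (a rough sketch only: image1 is the image from the question, ResizeImage is the helper above, and the halving step and 0.5 quality are arbitrary choices):
var candidate = image1
// UIImageJPEGRepresentation(_:_:) in older Swift; jpegData(compressionQuality:) today.
var jpgData = candidate.jpegData(compressionQuality: 0.5)
while let data = jpgData, data.count >= 128 * 1024 {
    // Still over Parse's 128 kB object limit: halve the dimensions and re-encode.
    let smallerSize = CGSize(width: candidate.size.width / 2, height: candidate.size.height / 2)
    candidate = ResizeImage(image: candidate, targetSize: smallerSize)
    jpgData = candidate.jpegData(compressionQuality: 0.5)
}
// jpgData is now under 128 kB (or nil) and ready to store.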

iPhone, how does one overlay one image onto another to create a new image to save? (watermark)

Basically I want to take an image that the user chooses from their photo library and then apply a watermark, a triangle in the lower right that has the app name on it. I have the second image already made with a transparent layer in photoshop.
I tried a function, whose exact name I can't remember, that involved CGImages and masks. It combined the two images, but as a mask, which made the image darker where the transparent layer was; the images were not merged per se, just masked.
How would I get the watermark image to merge with another image, to make a UIImage, without displaying the images on the screen?
Thank you.
It's pretty easy:
UIImage *backgroundImage = [UIImage imageNamed:@"image.png"];
UIImage *watermarkImage = [UIImage imageNamed:@"watermark.png"];
UIGraphicsBeginImageContext(backgroundImage.size);
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
[watermarkImage drawInRect:CGRectMake(backgroundImage.size.width - watermarkImage.size.width, backgroundImage.size.height - watermarkImage.size.height, watermarkImage.size.width, watermarkImage.size.height)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If you want the background and watermark to be of the same size then use this code
...
[backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
[watermarkImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
...
The solution provided by omz also works in Swift, like so:
let backgroundImage = UIImage(named: "image.png")!
let watermarkImage = UIImage(named: "watermark.png")!
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, false, 0.0)
backgroundImage.draw(in: CGRect(x: 0.0, y: 0.0, width: backgroundImage.size.width, height: backgroundImage.size.height))
watermarkImage.draw(in: CGRect(x: backgroundImage.size.width - watermarkImage.size.width, y: backgroundImage.size.height - watermarkImage.size.height, width: watermarkImage.size.width, height: watermarkImage.size.height))
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
SWIFT 4
let backgroundImage = imageData!
let watermarkImage = #imageLiteral(resourceName: "jodi_url_icon")
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, false, 0.0)
backgroundImage.draw(in: CGRect(x: 0.0, y: 0.0, width: backgroundImage.size.width, height: backgroundImage.size.height))
watermarkImage.draw(in: CGRect(x: 10, y: 10, width: watermarkImage.size.width, height: backgroundImage.size.height - 40))
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
self.imageView.image = result
Assign result to your UIImageView (tested).
You can use this method, which is more flexible: you can specify the starting position of the second image and the total size of the resulting image.
-(UIImage *) addImageToImage:(UIImage *)img withImage2:(UIImage *)img2 andRect:(CGRect)cropRect withImageWidth:(int)width {
    CGSize size = CGSizeMake(width, 40);
    UIGraphicsBeginImageContext(size);

    CGPoint pointImg1 = CGPointMake(0, 0);
    [img drawAtPoint:pointImg1];

    CGPoint pointImg2 = cropRect.origin;
    [img2 drawAtPoint:pointImg2];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
SWIFT 5 Function:
func addWaterMark(image: UIImage) -> UIImage {
    let backgroundImage = image // UIImage(named: "image.png")
    let watermarkImage = UIImage(named: "waterMark.png")

    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, false, 0.0)
    backgroundImage.draw(in: CGRect(x: 0.0, y: 0.0, width: backgroundImage.size.width, height: backgroundImage.size.height))
    watermarkImage!.draw(in: CGRect(x: backgroundImage.size.width - watermarkImage!.size.width, y: backgroundImage.size.height - watermarkImage!.size.height, width: watermarkImage!.size.width, height: watermarkImage!.size.height))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result!
}

Get zoomed image from UIImageView

I am using a UIImageview inside a UIScrollView to let users pinch and zoom the image. I now want to get the image which the user has modified (small or zoomed in). How can I do this?
Thanks.
OK, I'm trying something new now. Can I get a thumbnail from somewhere in the middle of an image?
I can get the thumbnail from (0,0) for a fixed size. Can I get it from, say, (10, 20) or (30, 30)? Thanks again.
I tried using CGImageCreateWithImageInRect with a rect having those values for x and y, but it still gives me an image starting from (0,0).
When you zoom a UIImageView inside a UIScrollView, you change only the UIImageView's frame; the UIImage in its image property remains unchanged.
If you only want to show this image scaled in some other UIImageView, you should not change the UIImage's size. Just set your new UIImageView's frame to match the frame of the zoomed UIImageView in the UIScrollView and use the same image:
UIImageView * ImageViewOnScroll;
//set image and zoom it
...
UIImageView * newImageView = [[UIImageView alloc] initWithImage:ImageViewOnScroll.image];
newImageView.frame = ImageViewOnScroll.frame;
If for some reason you want to create a new UIImage with a different size, you can do it, for example, with the following simple method:
UIImage * resizeImage(UIImage * img, CGSize newSize) {
    UIGraphicsBeginImageContext(newSize);
    // or another CGInterpolationQuality value
    CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationDefault);
    [img drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I solved it by creating a tiny UIImage extension (Swift 5.2 compatible) that works perfectly when you have a UIImageView embedded in a UIScrollView:
extension UIImage {
    func crop(from scrollView: UIScrollView) -> UIImage? {
        let zoom: CGFloat = 1.0 / scrollView.zoomScale
        let xOffset: CGFloat = (size.width / scrollView.contentSize.width) * scrollView.contentOffset.x
        let yOffset: CGFloat = (size.height / scrollView.contentSize.height) * scrollView.contentOffset.y
        let cropRect = CGRect(x: xOffset * scale,
                              y: yOffset * scale,
                              width: size.width * zoom * scale,
                              height: size.height * zoom * scale)
        guard let croppedImageRef = cgImage?.cropping(to: cropRect) else { return nil }
        let croppedImage = UIImage(cgImage: croppedImageRef, scale: scale, orientation: imageOrientation)
        return croppedImage
    }
}
Use it to get zoomed part of the image:
let croppedImage = sourceImage.crop(from: scrollView)