Cropping CGRect from AVCapturePhotoOutput (resizeAspectFill) - swift

I have run into the following problem, and unfortunately other posts have not helped me reach a working solution.
I have a simple app that shows the camera preview (AVCaptureVideoPreviewLayer) with the video gravity set to resizeAspectFill (videoGravity = .resizeAspectFill).
From my understanding, this only stretches the image in width to make it fill the screen.
On my preview layer I have also applied a CGRect as a mask, with fixed x, y, width, and height.
Now, once I take a photo, I'm trying to crop that exact rectangle out of the image. From my understanding, I'm supposed to use some math to convert the CGRect to the same aspect ratio as the image I get from the AVCapturePhotoOutput method, but it never seems to crop correctly in the width.
private func cropImage(image: UIImage) {
    let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
    // Normalize the rect to the view's size, then flip the y-axis for Core Image.
    let scale = CGAffineTransform(scaleX: 1 / self.view.frame.width, y: 1 / self.view.frame.height)
    let flip = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    let bounds = rect.applying(scale).applying(flip)
    // topLeft/topRight/..., scaled(to:) and forceSameOrientation() are custom helper extensions (not shown).
    let topLeft = bounds.topLeft.scaled(to: image.size)
    let topRight = bounds.topRight.scaled(to: image.size)
    let bottomLeft = bounds.bottomLeft.scaled(to: image.size)
    let bottomRight = bounds.bottomRight.scaled(to: image.size)
    var ciImage = CIImage(image: image.forceSameOrientation())!
    ciImage = ciImage.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": CIVector(cgPoint: bottomLeft),
        "inputTopRight": CIVector(cgPoint: bottomRight),
        "inputBottomLeft": CIVector(cgPoint: topLeft),
        "inputBottomRight": CIVector(cgPoint: topRight)
    ])
    let context = CIContext()
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    let output = UIImage(cgImage: cgImage!)
    let vc = PreviewViewController()
    vc.imageView.image = output
    self.present(vc, animated: true, completion: nil)
}
So again: it does crop at the correct height, but the width never comes out right.
Image example of what I would want to capture.
https://imgur.com/a/8GryEgX
As you can see, the bounding box in the top left stops after the "Q" button.
Result:
https://imgur.com/FwKRWxK
As you can see in this image, it does crop correctly in the height; however, looking at the top left, it also includes half of the button to the left of the "Q" (the Tab button).
Any help towards the solution would be appreciated!

I managed to solve the issue with this code.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }

    // previewLayer is the AVCaptureVideoPreviewLayer on which resizeAspectFill
    // and portrait videoOrientation have been set.
    // metadataOutputRectConverted(fromLayerRect:) maps a rect in layer coordinates
    // to the normalized (0...1) coordinate space of the capture output, taking the
    // layer's video gravity into account.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)

    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)

    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage of the code in my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
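For reference, a minimal sketch of where such a call typically lives, in the standard AVCapturePhotoCaptureDelegate callback (previewLayer, imageView and the fixed rect are taken from above; the rest of the capture setup is assumed):
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil,
          let data = photo.fileDataRepresentation(),
          let image = UIImage(data: data) else { return }
    // Same fixed rect that masks the preview layer, in layer (point) coordinates.
    let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
    DispatchQueue.main.async {
        self.imageView.image = self.cropToPreviewLayer(from: image, toSizeOf: rect)
    }
}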

Related

How to combine two UIImages into a single Image in Swift?

I've been trying to merge two images, one on top and the other at the bottom. The code below doesn't seem to work: the x coordinate is correct, but the y doesn't seem right, and it crops the top image when I alter it. What am I doing wrong?
func combine(bottomImage: Data, topImage: Data) -> UIImage {
    let bottomImage = UIImage(data: topImage)
    let topImage = UIImage(data: bottomImage)
    let size = CGSize(width: bottomImage!.size.width, height: bottomImage!.size.height + topImage!.size.height)
    UIGraphicsBeginImageContext(size)
    let areaSizeb = CGRect(x: 0, y: 0, width: bottomImage!.size.width, height: bottomImage!.size.height)
    let areaSize = CGRect(x: 0, y: 0, width: topImage!.size.width, height: topImage!.size.height)
    bottomImage!.draw(in: areaSizeb)
    topImage!.draw(in: areaSize)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
You are drawing both images into the same rect. You also should not use force-unwrapping. That causes your app to crash if anything goes wrong.
There are also various other small mistakes.
Change your function like this:
// Return an Optional so we can return nil if something goes wrong.
func combine(bottomImage: Data, topImage: Data) -> UIImage? {
    // Use a guard statement to make sure
    // the data can be converted to images.
    guard
        let bottomImage = UIImage(data: bottomImage),
        let topImage = UIImage(data: topImage) else {
        return nil
    }
    // Use a width wide enough for the widest image.
    let width = max(bottomImage.size.width, topImage.size.width)
    // Make the height tall enough to stack the images on top of each other.
    let size = CGSize(width: width, height: bottomImage.size.height + topImage.size.height)
    UIGraphicsBeginImageContext(size)
    // Draw the top image at the top of the canvas...
    let topRect = CGRect(
        x: 0,
        y: 0,
        width: topImage.size.width,
        height: topImage.size.height)
    // ...and position the bottom image under the top image.
    let bottomRect = CGRect(
        x: 0,
        y: topImage.size.height,
        width: bottomImage.size.width,
        height: bottomImage.size.height)
    topImage.draw(in: topRect)
    bottomImage.draw(in: bottomRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
(And you should really be using a UIGraphicsImageRenderer rather than calling UIGraphicsBeginImageContext() / UIGraphicsEndImageContext().)
Edit:
Note that if the two images have different widths, the above code will leave "dead space" to the right of the narrower image. You could also make the code center the narrower image, or scale it up to the same width. (If you do scale it up, I suggest scaling it in both dimensions to preserve the original aspect ratio. Otherwise it will look stretched and unnatural.)
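A minimal sketch of the same stacking done with UIGraphicsImageRenderer, as suggested above, centering the narrower image horizontally (same Data-based signature; adjust to taste):
func combine(bottomImage: Data, topImage: Data) -> UIImage? {
    guard let bottomImage = UIImage(data: bottomImage),
          let topImage = UIImage(data: topImage) else { return nil }
    let width = max(bottomImage.size.width, topImage.size.width)
    let size = CGSize(width: width, height: bottomImage.size.height + topImage.size.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Center each image horizontally; the top image goes at the top,
        // the bottom image directly below it.
        topImage.draw(at: CGPoint(x: (width - topImage.size.width) / 2, y: 0))
        bottomImage.draw(at: CGPoint(x: (width - bottomImage.size.width) / 2,
                                     y: topImage.size.height))
    }
}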

How to take a snapshot from UIImage of a UIImageView that is scaleAspectFit mode?

I have an image in a UIImageView:
imageView.contentMode = .scaleAspectFit
imageView.backgroundColor = .red
Because of .scaleAspectFit, the image view has some red borders, and that's OK:
The user can add some UIViews, like labels or images, over the imageView.
In the final step I use the following code to save the edited image so the user can share it or save it to the photo library:
private func generateImage() -> UIImage? {
    var finalImage: UIImage?
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageView.frame.size.width, height: imageView.frame.size.height), true, 0)
    imageView.drawHierarchy(in: CGRect(x: 0, y: 0, width: imageView.frame.size.width, height: imageView.frame.size.height), afterScreenUpdates: true)
    finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return finalImage
}
The problem is that the finalImage still has the red borders from imageView.
You can get the CGRect of the UIImage displayed in the UIImageView in aspect-fit content mode. Create an extension of UIImageView like this:
extension UIImageView {
    var contentClippingRect: CGRect {
        guard let image = image else { return bounds }
        guard contentMode == .scaleAspectFit else { return bounds }
        guard image.size.width > 0 && image.size.height > 0 else { return bounds }

        let scale: CGFloat
        if image.size.width > image.size.height {
            scale = bounds.width / image.size.width
        } else {
            scale = bounds.height / image.size.height
        }

        let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let x = (bounds.width - size.width) / 2.0
        let y = (bounds.height - size.height) / 2.0
        return CGRect(x: x, y: y, width: size.width, height: size.height)
    }
}
You can now use imageView.contentClippingRect to read the position and size of the image inside.
You only have to make minor changes in your method: call your drawing code with contentClippingRect as the bounds.
Let me know in case of any queries.
UPDATE
Please try this UIImageView+Extension; it might help you. It is Objective-C code, so convert it to Swift.
You can try this as well:
let image = #imageLiteral(resourceName: "Cat03")
let x: CGRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView1.frame)
print(x)
AVMakeRect comes from AVFoundation, and the code above gives you the displayed image's rect precisely.
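Putting the extension to work, a minimal sketch of generateImage that renders only the visible image area, assuming the contentClippingRect extension above and the imageView from the question:
private func generateImage() -> UIImage? {
    // Only render the area actually covered by the aspect-fit image.
    let clipRect = imageView.contentClippingRect
    let renderer = UIGraphicsImageRenderer(size: clipRect.size)
    return renderer.image { _ in
        // Offset the drawn hierarchy so the clipped region lands at the origin.
        imageView.drawHierarchy(in: CGRect(x: -clipRect.origin.x,
                                           y: -clipRect.origin.y,
                                           width: imageView.bounds.width,
                                           height: imageView.bounds.height),
                                afterScreenUpdates: true)
    }
}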

Image in NSTextAttachment too big and blurry

I'm trying to place an icon (in the form of an image) next to the text in a UILabel. The icons are imported into the asset catalog in all three sizes and are not blurry at all when I simply place them in a normal UIImageView.
However, within the NSTextAttachment they suddenly become extremely blurry and too big as well.
I have already tried several things on my own and also tried nearly every snippet I could find online, but nothing helped. This is what I'm left with:
func updateWinnableCoins(coins: Int) {
    let attachImg = NSTextAttachment()
    attachImg.image = resizeImage(image: #imageLiteral(resourceName: "geld"), targetSize: CGSize(width: 17.0, height: 17.0))
    attachImg.setImageHeight(height: 17.0)
    let imageOffsetY: CGFloat = -3.0
    attachImg.bounds = CGRect(x: 0, y: imageOffsetY, width: attachImg.image!.size.width, height: attachImg.image!.size.height)
    let attchStr = NSAttributedString(attachment: attachImg)
    let completeText = NSMutableAttributedString(string: "")
    let tempText = NSMutableAttributedString(string: "You can win " + String(coins) + " ")
    completeText.append(tempText)
    completeText.append(attchStr)
    self.lblWinnableCoins.textAlignment = .left
    self.lblWinnableCoins.attributedText = completeText
}

func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let newRect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height).integral
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    let context = UIGraphicsGetCurrentContext()
    // Set the quality level to use when rescaling.
    context!.interpolationQuality = CGInterpolationQuality.default
    let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: targetSize.height)
    context!.concatenate(flipVertical)
    // Draw into the context; this scales the image.
    context?.draw(image.cgImage!, in: CGRect(x: 0.0, y: 0.0, width: newRect.width, height: newRect.height))
    // Get the resized image from the context as a UIImage.
    let newImageRef = context!.makeImage()!
    let newImage = UIImage(cgImage: newImageRef)
    UIGraphicsEndImageContext()
    return newImage
}

extension NSTextAttachment {
    func setImageHeight(height: CGFloat) {
        guard let image = image else { return }
        let ratio = image.size.width / image.size.height
        bounds = CGRect(x: bounds.origin.x, y: bounds.origin.y, width: ratio * height, height: height)
    }
}
And this is how it looks:
The font size of the UILabel is 17, so I set the text attachment to 17 points as well. When I set it to 9 it fits, but it's still very blurry.
What can I do about that?
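One likely cause, as a hedged guess: resizeImage draws into a screen-scale context but then wraps the resulting CGImage in a UIImage with scale 1.0, so the attachment ends up two to three times too large in points and is drawn blurry. A minimal sketch of the same resize with UIGraphicsImageRenderer, which preserves the screen scale (same signature as the function above):
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    // The renderer keeps the screen scale, so the result measures targetSize
    // in points but is backed by a Retina-resolution bitmap.
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}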

Cropping an image from the top in Swift

I am trying to crop this image, which is an SKSpriteNode:
I am trying to crop this image from the top, so that I maintain the bottom semi circle of this shape. For instance, it'd be cropped to this:
So I use these two methods to accomplish this task:
func recalculateScore() {
    currentScore -= decreaseRate
    let image = UIImage(cgImage: (vial.texture?.cgImage())!)
    vial.texture = SKTexture(image: cropBottomImage(image: image))
}

func cropBottomImage(image: UIImage) -> UIImage {
    let height = CGFloat(image.size.height / 3)
    let rect = CGRect(x: 0, y: image.size.height - height, width: image.size.width, height: height)
    return cropImage(image: image, toRect: rect)
}

func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    let imageRef: CGImage = image.cgImage!.cropping(to: rect)!
    let croppedImage: UIImage = UIImage(cgImage: imageRef)
    return croppedImage
}
However, this leads to this result:
It is as if the image were being compressed. I think my issue might be in this line:
let rect = CGRect(x: 0, y: image.size.height - height, width: image.size.width, height: height)
Does the CGRect coordinate (0, 0) lie in the top-left corner? I am a bit confused about what the x and y parameters of the CGRect mean.
Resize your sprite. What is happening is that the cropped texture is stretched to fill the sprite, and since you only crop vertically, it only stretches vertically. (As for the coordinates: for cgImage.cropping(to:), the origin (0, 0) is the top-left corner of the bitmap, so your rect does select the bottom third.)
func recalculateScore() {
    currentScore -= decreaseRate
    let image = UIImage(cgImage: (vial.texture?.cgImage())!)
    // Give the sprite the cropped texture's size so the texture is not stretched.
    let croppedTexture = SKTexture(image: cropBottomImage(image: image))
    vial.texture = croppedTexture
    vial.size = croppedTexture.size()
}
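An alternative sketch (an assumption on my part, not the answer above): SKTexture(rect:in:) crops a texture directly with a normalized unit rect, whose origin is at the bottom-left, so the UIImage/CGImage round trip isn't needed:
func cropBottomTexture(of sprite: SKSpriteNode, keepingFraction fraction: CGFloat) {
    guard let texture = sprite.texture else { return }
    // The rect is in unit coordinates (0...1) with the origin at the bottom-left,
    // so this keeps the bottom `fraction` of the texture.
    let cropRect = CGRect(x: 0, y: 0, width: 1, height: fraction)
    let cropped = SKTexture(rect: cropRect, in: texture)
    sprite.texture = cropped
    sprite.size = cropped.size()
}
Calling cropBottomTexture(of: vial, keepingFraction: 1.0 / 3.0) keeps the bottom third and resizes the sprite in one step.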

Crop/Mask circular image node in sprite kit gives jagged edges

Is it possible to give a circular mask/crop to an image node without jagged edges?
Following this example from Apple (https://developer.apple.com/reference/spritekit/skcropnode), the result is not ideal. You can click on the link to see.
let shapeNode = SKShapeNode()
shapeNode.physicsBody = SKPhysicsBody(circleOfRadius: radius)
shapeNode.physicsBody?.allowsRotation = false
shapeNode.strokeColor = SKColor.clearColor()
// Add a crop node to mask the profile image
// profile images (start off with place holder)
let scale = 1.0
let profileImageNode = SKSpriteNode(imageNamed: "PlaceholderUser")
profileImageNode.setScale(CGFloat(scale))
let circlePath = CGPathCreateWithEllipseInRect(CGRectMake(-radius, -radius, radius*2, radius*2), nil)
let circleMaskNode = SKShapeNode()
circleMaskNode.path = circlePath
circleMaskNode.zPosition = 12
circleMaskNode.name = "connection_node"
circleMaskNode.fillColor = SKColor.whiteColor()
circleMaskNode.strokeColor = SKColor.clearColor()
let zoom = SKAction.fadeInWithDuration(0.25)
circleMaskNode.runAction(zoom)
let cropNode = SKCropNode()
cropNode.maskNode = circleMaskNode
cropNode.addChild(profileImageNode)
cropNode.position = shapeNode.position
shapeNode.addChild(cropNode)
self.addChild(shapeNode)
UPDATE:
OK, so here's one solution I came up with. Not super ideal, but it works perfectly. Essentially, I size/scale and cut the image exactly the way it would go on the SKSpriteNode, so I don't have to use SKCropNode or some variation of SKShapeNode.
I used these UIImage extensions by Leo Dabus to resize/shape the image exactly as needed: Cut a UIImage into a circle Swift(iOS)
// These live in an extension on UIImage (from the linked answer).
extension UIImage {
    var circle: UIImage? {
        let square = CGSize(width: min(size.width, size.height), height: min(size.width, size.height))
        let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 0, y: 0), size: square))
        imageView.contentMode = .ScaleAspectFill
        imageView.image = self
        imageView.layer.cornerRadius = square.width / 2
        imageView.layer.masksToBounds = true
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }

    func resizedImageWithinRect(rectSize: CGSize) -> UIImage {
        let widthFactor = size.width / rectSize.width
        let heightFactor = size.height / rectSize.height
        var resizeFactor = widthFactor
        if size.height > size.width {
            resizeFactor = heightFactor
        }
        let newSize = CGSizeMake(size.width / resizeFactor, size.height / resizeFactor)
        // resizedImage(_:) is another helper from the same linked answer.
        let resized = resizedImage(newSize)
        return resized
    }
}
The final code looks like this:
//create/shape image
let image = UIImage(named: "TestImage")
let scaledImage = image?.resizedImageWithinRect(CGSize(width: 100, height: 100))
let circleImage = scaledImage?.circle
//create sprite
let sprite = SKSpriteNode(texture: SKTexture(image: circleImage!))
sprite.position = CGPoint(x: view.frame.width/2, y: view.frame.height/2)
//set texture/image
sprite.texture = SKTexture(image: circleImage!)
sprite.physicsBody = SKPhysicsBody(texture: SKTexture(image: circleImage!), size: CGSizeMake(100, 100))
if let physics = sprite.physicsBody {
    // Add the physics properties here.
}
//scale node
sprite.setScale(1.0)
addChild(sprite)
So if you have a perfectly scaled asset/image, you probably don't need to do all this work, but I'm getting images from the backend that can come in any size.
There are two different techniques that can be combined to reduce the aliasing of the edges created by cropping.
1. Create bigger images than you need, both the target (to be cropped) and the mask, perform the cropping, then scale down to the required size.
2. Apply very subtle blurring to the cropping shape to soften its edges. This is best done in Photoshop or a similar editing program, to taste and need.
When these two techniques are combined, the results can be very good.
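A rough sketch of the first technique, assuming UIKit-side rendering of the circular crop at an oversampled size (the helper name and oversample factor are mine):
// Requires UIKit and SpriteKit.
func circularTexture(from image: UIImage, diameter: CGFloat, oversample: CGFloat = 2) -> SKTexture {
    let side = diameter * oversample
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: side, height: side))
    let rendered = renderer.image { _ in
        let rect = CGRect(x: 0, y: 0, width: side, height: side)
        // Clip to a circle instead of using SKCropNode; note this stretches
        // a non-square source image to fill the rect.
        UIBezierPath(ovalIn: rect).addClip()
        image.draw(in: rect)
    }
    return SKTexture(image: rendered)
}
Creating the sprite as SKSpriteNode(texture: circularTexture(from: image, diameter: 100), size: CGSize(width: 100, height: 100)) then scales the oversized render down, which smooths the edge.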
Let the stroke color be displayed. Also, you can make the line width a little thicker and the jagged edges will disappear.
circleMaskNode.strokeColor = SKColor.whiteColor()
All you have to do is change the SKShapeNode's lineWidth property to be twice the radius of the circle:
func circularCropNode(radius: CGFloat, add: SKNode) {
    let cropper = SKCropNode()
    cropper.addChild(add)
    addChild(cropper)

    let circleMask = SKShapeNode(circleOfRadius: radius / 2)
    circleMask.lineWidth = radius
    cropper.maskNode = circleMask
}
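A hedged usage example, reusing the "PlaceholderUser" asset name from the question's code:
// Inside an SKScene: mask a profile sprite to a 50-point-radius circle.
let profileNode = SKSpriteNode(imageNamed: "PlaceholderUser")
circularCropNode(radius: 50, add: profileNode)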