Crop NSImage (macOS) Square without distorting - swift

I am using the code below to resize an image in Swift on macOS. It works, but if the image is not square to begin with, the resizing squashes it.
How can I resize the image so that it is drawn in the center with its aspect ratio preserved, preventing the squashing when the image is not square to begin with?
func resize(image: NSImage, w: CGFloat, h: CGFloat) -> NSImage {
    let destSize = NSMakeSize(w, h)
    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.draw(in: NSMakeRect(0, 0, destSize.width, destSize.height),
               from: NSMakeRect(0, 0, image.size.width, image.size.height),
               operation: .sourceOver,
               fraction: 1.0)
    newImage.unlockFocus()
    newImage.size = destSize
    return NSImage(data: newImage.tiffRepresentation!)!
}
Thank you

The following line of code is the problem:
image.draw(in: NSMakeRect(0, 0, destSize.width, destSize.height), from: NSMakeRect(0, 0, image.size.width, image.size.height), operation: NSCompositingOperation.sourceOver, fraction: CGFloat(1))
You are drawing from the whole rectangle of the source into the whole rectangle of the destination. This scales the image to fill the destination, whereas you want to maintain the aspect ratio. You need to decide how you want the final image to appear and adjust the source or destination rectangle accordingly.
For example, if you want to scale the result so that the whole image appears, adjust either the width or the height of the destination rectangle so that it has the same aspect ratio as the source.
Alternatively, if you want to crop the result, adjust the width or height of the source rectangle, once again maintaining the aspect ratio. You will also have to adjust the origin of the source rectangle if you want to crop (for example) the top and bottom equally.
Note that draw(in:from:operation:fraction:) scales the source rectangle into the destination rectangle for you; variants such as draw(at:from:operation:fraction:) draw without scaling, so depending on your needs one of those may be a better fit.
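For the square-crop case in the question title, a minimal sketch of the second (cropping) approach on macOS might look like this; the function name and structure are illustrative, not the asker's original code:
import AppKit

func resizeCropped(image: NSImage, to destSize: NSSize) -> NSImage {
    // Aspect-fill scale: the larger ratio guarantees the source covers the destination.
    let scale = max(destSize.width / image.size.width,
                    destSize.height / image.size.height)
    // The portion of the source that maps onto the destination, centered.
    let cropSize = NSSize(width: destSize.width / scale,
                          height: destSize.height / scale)
    let cropRect = NSRect(x: (image.size.width - cropSize.width) / 2,
                          y: (image.size.height - cropSize.height) / 2,
                          width: cropSize.width,
                          height: cropSize.height)
    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.draw(in: NSRect(origin: .zero, size: destSize),
               from: cropRect,
               operation: .sourceOver,
               fraction: 1.0)
    newImage.unlockFocus()
    return newImage
}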

This assumes you want to fit the entire source image into the destination size, maintaining the aspect ratio of the original image and adding some blank space where the image doesn't fill the destination.
There are three situations to consider:
1. The aspect ratios of the source and destination are the same. No problem, just resize.
2. The destination aspect ratio is wider than the source's. Height is the driving value (because its ratio is the smaller one). Figure out the height ratio from source to destination, then use that ratio to calculate the destination width from the source width.
3. The destination aspect ratio is taller than the source's. Width is the driving value. Figure out the width ratio from source to destination, then use that ratio to calculate the destination height from the source height.
I created this function to calculate the size you need.
func calcNewSize(source: NSSize, destination: NSSize) -> NSSize {
    let widthRatio = destination.width / source.width
    let heightRatio = destination.height / source.height
    var newSize = NSSize()
    print("widthRatio \(widthRatio) heightRatio \(heightRatio)")
    if widthRatio == heightRatio {
        print("use same ratio")
        newSize = destination
    } else if widthRatio > heightRatio {
        // Height is the constraining dimension.
        print("use height ratio")
        newSize.height = source.height * heightRatio
        newSize.width = source.width * heightRatio
    } else {
        // Width is the constraining dimension.
        print("use width ratio")
        newSize.height = source.height * widthRatio
        newSize.width = source.width * widthRatio
    }
    return newSize
}
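To produce the padded result described above, here is a hedged sketch of how calcNewSize might be combined with centered drawing on a white canvas; the wrapper function is illustrative, not part of the original answer:
import AppKit

func resizeWithPadding(image: NSImage, to destination: NSSize) -> NSImage {
    let fitted = calcNewSize(source: image.size, destination: destination)
    // Center the fitted image inside the destination canvas.
    let origin = NSPoint(x: (destination.width - fitted.width) / 2,
                         y: (destination.height - fitted.height) / 2)
    let newImage = NSImage(size: destination)
    newImage.lockFocus()
    // Fill the canvas with white so the unused space isn't transparent.
    NSColor.white.setFill()
    NSBezierPath(rect: NSRect(origin: .zero, size: destination)).fill()
    image.draw(in: NSRect(origin: origin, size: fitted),
               from: NSRect(origin: .zero, size: image.size),
               operation: .sourceOver,
               fraction: 1.0)
    newImage.unlockFocus()
    return newImage
}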

Related

How do you get the aspect-fit size of a UIImage in a UIImageView?

When print(uiimage.size) is called, it only gives the width and height of the original image, before it was scaled up or down. Is there any way to get the dimensions of the aspect-fitted image?
Actually, there is a function in AVFoundation that can calculate this for you:
import AVFoundation
let fitRect = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
Now fitRect.size is the size of the image fitted inside the image view's bounds while maintaining the original aspect ratio.
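Since AVMakeRect centers the fitted rectangle inside insideRect, its origin also gives the offsets that the next answer computes by hand:
let offsetX = fitRect.origin.x // horizontal gap between image and view edge
let offsetY = fitRect.origin.y // vertical gap between image and view edge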
You're going to need to calculate the resulting image size in points yourself.*
* It turns out you don't; see Alladinian's answer. I'm going to leave this answer here to explain what the library function is doing.
Here's the math:
let imageAspectRatio = image.size.width / image.size.height
let viewAspectRatio = imageView.frame.width / imageView.frame.height

var fitWidth: CGFloat   // scaled width in points
var fitHeight: CGFloat  // scaled height in points
var offsetX: CGFloat    // horizontal gap between image and frame
var offsetY: CGFloat    // vertical gap between image and frame

if imageAspectRatio <= viewAspectRatio {
    // Image is narrower than view so with aspectFit, it will touch
    // the top and bottom of the view, but not the sides
    fitHeight = imageView.frame.height
    fitWidth = fitHeight * imageAspectRatio
    offsetY = 0
    offsetX = (imageView.frame.width - fitWidth) / 2
} else {
    // Image is wider than view so with aspectFit, it will touch
    // the sides of the view but not the top and bottom
    fitWidth = imageView.frame.width
    fitHeight = fitWidth / imageAspectRatio
    offsetX = 0
    offsetY = (imageView.frame.height - fitHeight) / 2
}
Explanation:
It helps to draw the pictures. Draw a rectangle that represents the imageView. Then draw a rectangle that is narrow but extends from the top of the image view to the bottom; that is the first case. Then draw one where the image is short but extends to the two sides of the image view; that is the second case. At that point, we know one of the dimensions. The other is just that value multiplied or divided by the image's aspect ratio, because .scaleAspectFit keeps the image's original aspect ratio.
A note about frame vs. bounds: the frame is in the coordinate system of the view's superview, while the bounds are in the coordinate system of the view itself. I chose to use the frame in this example because the OP was interested in how far to move the imageView in its superview's coordinates. For a standard imageView that has not been rotated or scaled further, the width and height of the frame match the width and height of the bounds. Things get interesting, though, when a rotation is applied to an imageView: the frame expands to contain the whole imageView, but the bounds remain the same.

Swift: Retaining floating-point precision in Auto Layout

This is a follow-on question from my post here. I am attempting to fix the height constraint of my imageView dynamically depending on the size of my image, and then add a drawing layer over it. However, I noticed that the imageView size loses precision due to floating-point division, and this resulted in a blurred image when I start to draw on my image, i.e., the placement of the bitmap is not exactly at the position overlaying my original image.
My code is as follows, with print statements to observe the decimal places.
func setupViews() {
    view.backgroundColor = .black
    view.addSubview(canvasImageView)
    canvasImageView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
    canvasImageView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor).isActive = true
    canvasImageView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor).isActive = true

    let aspectRatio = getImageAspectRatio(image: image)
    let screenWidth = UIScreen.main.bounds.width
    let height = CGFloat(1.0) / CGFloat(aspectRatio) * screenWidth
    canvasImageView.heightAnchor.constraint(equalToConstant: height).isActive = true

    print("ImageSize", image.size.height)
    print("Aspect Ratio image at start:", aspectRatio)
    print("Calculated height:", height)
    DispatchQueue.main.async {
        print("Aspect Ratio imageView at start:", self.getAspectRatio(frame: self.canvasImageView.frame))
        print("ImageViewSize", self.canvasImageView.frame.height)
    }
}
My print statements are here:
ImageSize 3000.0 2002.0
Aspect Ratio image at start: 1.4985014985015
Calculated height: 213.546666666667
Aspect Ratio imageView at start: 1.49532710280374
ImageViewSize 320.0 214.0
As you can see, my attempt to use heightAnchor.constraint(equalToConstant: height) actually loses precision, resulting in a loss of resolution. Is there a way to eliminate this? An image of the outcome is here.
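For what it's worth, one common way to avoid fractional point values like the 213.546... above is to round the constant to a whole number of physical pixels before installing the constraint. A minimal sketch of that technique, assuming the same height value as in setupViews; this is an illustration, not a confirmed fix for the OP's issue:
// Round a point value to the nearest physical pixel on the main screen.
func pixelAligned(_ value: CGFloat) -> CGFloat {
    let scale = UIScreen.main.scale            // e.g. 2.0 or 3.0 on Retina displays
    return (value * scale).rounded() / scale   // whole pixels, expressed in points
}

canvasImageView.heightAnchor.constraint(equalToConstant: pixelAligned(height)).isActive = true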

I need help integrating a specific UIImage resizing extension into my current draw CGRect function

I found this extension online. It allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale to fill" regardless of the image view's content mode. I suspect this is because I draw the image at the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using the extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fill a bounds with a size governed by the passed size,
    /// keeping the aspect ratio. As written (using max), this is aspect fill;
    /// switch max to min for aspect fit instead.
    ///
    /// - parameter newSize: the size of the bounds the image must fill.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)
        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        // Center the scaled rect so any overflow is cropped evenly.
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0
        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so it can be saved to the camera roll (it combines two images: a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width,
                                                        height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
All the tutorials I've found on how to use extensions don't cover how to pass variables in and out like this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension; is that correct? Since it just adds a method to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize), passing the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage".)
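Putting those comments together, the call presumably slots into drawImagesAndText along these lines (a sketch based on the discussion above, not the asker's confirmed final code):
img = renderer.image { ctx in
    // Aspect-scale the photo to the image view's size before drawing it,
    // instead of drawing currentImage directly.
    let scaled = currentImage?.scaleImageToSize(newSize: imageView.bounds.size)
    scaled?.draw(in: CGRect(origin: .zero, size: imageView.bounds.size))
    frames = UIImage(named: framesAr)
    frames?.draw(in: CGRect(origin: .zero, size: imageView.bounds.size))
}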

Image downsizing while loop, can't get below 400kB no matter how many iterations are run

func downsizeImage(image: UIImage) -> Data {
    var imagePointer = image
    let targetDataSize: CGFloat = 256.0 * 256
    var imageData = UIImageJPEGRepresentation(image, 1.0)!
    while CGFloat(imageData.count) > targetDataSize {
        let newProportion = targetDataSize / CGFloat(imageData.count)
        print("image data size is \(imageData.count)\n")
        imageData = UIImageJPEGRepresentation(imagePointer, newProportion)!
        imagePointer = UIImage(data: imageData)!
    }
    return imageData
}
image data size is 6581432
image data size is 391167
image data size is 394974
image data size is 394915
image data size is 394845
Any clue what my problem is?
The quality parameter to UIImageJPEGRepresentation is not a proportion of the image size. Rather, it specifies how lossy the JPEG compression can be. A quality of zero produces the lossiest, smallest file the algorithm can generate for your data, but there is no guarantee it will be a particular size. The actual size depends on the dimensions and complexity of the image, in terms of color and spatial frequency, since that's what the compression algorithm discards. If you need to make your image a certain size on disk, I suggest you downsample it to a lower resolution and then use a JPEG quality setting between 0.4 and 0.6. Lower values tend to produce a lot of artifacts, depending on the type of image you are compressing.
EDIT: added code for a downsampling extension to UIImage:
extension UIImage {
    func resize(to proportion: CGFloat) -> UIImage {
        let newSize = CGSize(width: size.width * proportion, height: size.height * proportion)
        return UIGraphicsImageRenderer(size: newSize).image { _ in
            self.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}
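A hedged sketch of how the original loop might use this extension, per the suggestion above; the 0.5 quality and 0.7 shrink factor are illustrative choices, not values from the post:
func downsizeImage(image: UIImage, targetByteCount: Int) -> Data {
    var current = image
    var imageData = UIImageJPEGRepresentation(current, 0.5)!
    while imageData.count > targetByteCount {
        // Reduce resolution each pass; keep quality fixed in the 0.4-0.6 range.
        current = current.resize(to: 0.7)
        imageData = UIImageJPEGRepresentation(current, 0.5)!
    }
    return imageData
}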

Image Cropping grabbing the wrong portion of UIImage during crop

I've been working on a view controller that crops an image down to a specific size, with draggable control points and the background image outside of the crop zone dimmed.
For some reason, whenever the image is cropped, it grabs the wrong region. I've looked at just about every other post dealing with cropping.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL, UR, DL, DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    func rad(deg: CGFloat) -> CGFloat {
        return deg / 180.0 * CGFloat(M_PI)
    }
    // Determine the orientation of the image and apply a transformation to the
    // crop rectangle to shift it to the correct position.
    var rectTransform: CGAffineTransform
    switch image.imageOrientation {
    case .Left:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
    case .Right:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
    case .Down:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
    default:
        rectTransform = CGAffineTransformIdentity
    }
    // Adjust the transformation scale based on the image scale.
    rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)
    // Apply the transformation to the rect to create a new, shifted rect.
    let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)
    // Use the rect to crop the image.
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)
    // Create a new UIImage and set the scale and orientation appropriately.
    let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
    return result
}
Here are the functions to set and translate the mask view
func setTopMask() {
    let path = CGPathCreateWithRect(cropViewMask.frame, nil)
    topMaskLayer.path = path
    topImageView.layer.mask = topMaskLayer
}

func translateMask(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.view)
    sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
    sender.setTranslation(CGPointZero, inView: self.view)
    if sender.state == .Ended {
        printFrames()
    }
}

func setCropMaskFrame() {
    let x = ulCorner.center.x
    let y = ulCorner.center.y
    let width = urCorner.center.x - ulCorner.center.x
    let height = blCorner.center.y - ulCorner.center.y
    cropViewMask.frame = CGRectMake(x, y, width, height)
    setTopMask()
}
I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping were most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view holding the picture with the actual picture size (in points). Then the cropping area cropped what was selected. I know this is probably not a full solution, just sharing my experience; hope it helps to turn on some lightbulbs :)
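To illustrate that idea, here is a hedged sketch of mapping a crop rectangle from view points to image points when the two sizes differ; the function and the assumption that the image fills the view are mine, not the answerer's:
func convertToImageRect(_ viewRect: CGRect, viewSize: CGSize, imageSize: CGSize) -> CGRect {
    // Assumes the image is drawn filling the whole view (no aspect-fit letterboxing).
    let scaleX = imageSize.width / viewSize.width
    let scaleY = imageSize.height / viewSize.height
    return CGRect(x: viewRect.origin.x * scaleX,
                  y: viewRect.origin.y * scaleY,
                  width: viewRect.width * scaleX,
                  height: viewRect.height * scaleY)
}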