Swift - OSX - resize NSImage data

I am trying to resize an NSImage to use it with Parse as a PFFile, and I need to change its dimensions to reduce its data size, but it is not working. The resized image has the new width and height inside Swift, but the data keeps the dimensions of the original image.
The image is a JPG at 2560 x 1706 and 721 KB.
When I convert it to data it keeps the same dimensions and grows to 5.7 MB.
Here is my code:
let fotoNSImage = NSImage(byReferencing: url)
let fotoNSImageRedim = redimensionaNSImage(imagem: fotoNSImage, tamanho: NSSize(width: 200, height: 200))
let fotoNSImageRep = NSBitmapImageRep(data: (fotoNSImageRedim.tiffRepresentation)!)
let fotoNSImagePng = fotoNSImageRep!.representation(using: NSBitmapImageRep.FileType.png, properties: [:])
let fotoProduto = FotoProduto(foto: PFFile(data: fotoNSImagePng!)!, categorias: [])
The method that resizes the image:
static func redimensionaNSImage(imagem: NSImage, tamanho: NSSize) -> NSImage {
    var ratio: Float = 0.0
    let imageWidth = Float(imagem.size.width)
    let imageHeight = Float(imagem.size.height)
    let maxWidth = Float(tamanho.width)
    let maxHeight = Float(tamanho.height)
    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }
    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: Int(newWidth), height: Int(newHeight))
    // Get a CGImage from the NSImage
    var imageRect = CGRect(x: 0, y: 0, width: Int(newWidth), height: Int(newHeight))
    let imageRef = imagem.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)
    // Create an NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    // Return the new image
    return imageWithNewSize
}
I have already tried the following approaches without success.
1 - Changing pixelsWide and pixelsHigh directly on fotoNSImageRep:
fotoNSImageRep?.pixelsWide = 200
fotoNSImageRep?.pixelsHigh = 133
2 - Creating a new NSBitmapImageRep and replacing the image representation with it:
let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                           pixelsWide: 200,
                           pixelsHigh: 133,
                           bitsPerSample: 8,
                           samplesPerPixel: 4,
                           hasAlpha: true,
                           isPlanar: false,
                           colorSpaceName: NSDeviceRGBColorSpace,
                           bytesPerRow: Int(newSize.width * 4),
                           bitsPerPixel: 32)
3 - Following this approach:
How to resize - resample - change file size - NSImage
4 - Changing the NSBitmapImageRep size values:
fotoNSImageRep?.size.width = 200
fotoNSImageRep?.size.height = 133
Nothing has worked so far.
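The root cause here is that NSImage.size is measured in points, not pixels: NSImage(cgImage:size:) only tags the wrapped CGImage with a new point size, while the backing bitmap keeps the original pixel dimensions, and those pixels are what get encoded into the PNG data. A quick way to see this, using fotoNSImageRedim from the code above:
// The point size says 200 x 133, but the backing bitmap is still 2560 x 1706
if let tiff = fotoNSImageRedim.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiff) {
    print(fotoNSImageRedim.size)          // (200.0, 133.0) points
    print(rep.pixelsWide, rep.pixelsHigh) // 2560 1706 pixels
}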

I have modified the method that resizes the image:
static func redimensionaNSImage(imagem: NSImage, tamanho: NSSize) -> NSImage {
    var ratio: Float = 0.0
    let imageWidth = Float(imagem.size.width)
    let imageHeight = Float(imagem.size.height)
    let maxWidth = Float(tamanho.width)
    let maxHeight = Float(tamanho.height)
    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }
    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    // Let ImageIO resample the pixel data down to the target size
    let imageSource = CGImageSourceCreateWithData(imagem.tiffRepresentation! as CFData, nil)
    let options: [NSString: NSObject] = [
        kCGImageSourceThumbnailMaxPixelSize: max(imageWidth, imageHeight) * ratio as NSObject,
        kCGImageSourceCreateThumbnailFromImageAlways: true as NSObject
    ]
    let size1 = NSSize(width: Int(newWidth), height: Int(newHeight))
    let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource!, 0, options as CFDictionary).map {
        NSImage(cgImage: $0, size: size1)
    }
    return scaledImage!
}
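Unlike the earlier version, CGImageSourceCreateThumbnailAtIndex actually resamples the pixel data, so the PNG written from the result shrinks accordingly. A usage sketch (url is the same file URL as in the question; assumes import Cocoa and import ImageIO at the top):
let foto = NSImage(byReferencing: url)
let fotoRedim = redimensionaNSImage(imagem: foto, tamanho: NSSize(width: 200, height: 200))
if let tiff = fotoRedim.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiff),
   let png = rep.representation(using: .png, properties: [:]) {
    print(rep.pixelsWide, rep.pixelsHigh) // 200 133 - real pixel dimensions now
    print(png.count)                      // a small fraction of the earlier 5.7 MB
}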

Related

Incorrect saving of transparent UIImage to Photo Library as png with UIImageWriteToSavedPhotosAlbum

I have a function cropAlpha() that trims the extra transparent space around the image.
func cropAlpha() -> UIImage {
    let cgImage = self.cgImage!
    let width = cgImage.width
    let height = cgImage.height
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    let bitsPerComponent = 8
    let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue
    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
          let ptr = context.data?.assumingMemoryBound(to: UInt8.self)
    else { return self }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    // Find the bounding box of the fully opaque pixels
    var minX = width
    var minY = height
    var maxX = 0
    var maxY = 0
    for x in 0 ..< width {
        for y in 0 ..< height {
            let i = bytesPerRow * y + bytesPerPixel * x
            let a = CGFloat(ptr[i + 3]) / 255.0
            if a == 1 {
                if x < minX { minX = x }
                if x > maxX { maxX = x }
                if y < minY { minY = y }
                if y > maxY { maxY = y }
            }
        }
    }
    let rect = CGRect(x: CGFloat(minX), y: CGFloat(minY), width: CGFloat(maxX - minX), height: CGFloat(maxY - minY))
    let croppedImage = cgImage.cropping(to: rect)!
    return UIImage(cgImage: croppedImage)
}
The image returned by this function has transparent areas, and when I put it in the image view (presenterImageView.image = imagePNG) it works as it should. But when I try to save the UIImage to the photo library, the transparent background turns white.
let image = maskedImage?.cropAlpha()
let imagePNGData = image!.pngData()
let imagePNG = UIImage(data: imagePNGData!)
UIImageWriteToSavedPhotosAlbum(imagePNG!, nil, nil, nil)
If I don't use that function, I get the result I want, but the image has too much wasted space. I don't understand what could be the reason. Any ideas?
The problem is that UIImageWriteToSavedPhotosAlbum does not properly handle saving a UIImage with premultiplied alpha (or at least the result of saving such an image is not what you expect), and your cropping method uses the premultipliedLast format. You also can't simply change CGImageAlphaInfo to a non-premultiplied format, because it is not supported there (you will see the error CGBitmapContextCreate: unsupported parameter combination if you try). What you can do is convert the cropped image to a CIImage, unpremultiply the alpha, and convert back to a UIImage. Your saving code could then look like the following (though I recommend removing the force unwrapping if you plan to use this in a final app):
let image = maskedImage?.cropAlpha()
let ciImage = CIImage(image: image!)!.unpremultiplyingAlpha()
let uiImage = UIImage(ciImage: ciImage)
let imagePNGData = uiImage.pngData()
let imagePNG = UIImage(data: imagePNGData!)
UIImageWriteToSavedPhotosAlbum(imagePNG!, nil, nil, nil)
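If you want to confirm that premultiplied alpha is really what you are dealing with before adding the CIImage round trip, you can inspect the alphaInfo of the cropped image's CGImage. A small diagnostic sketch, reusing the image constant from the snippet above:
// .premultipliedLast here would confirm the diagnosis: the bitmap context in
// cropAlpha() produces premultiplied alpha, which the direct save mishandles.
if let alphaInfo = image?.cgImage?.alphaInfo {
    switch alphaInfo {
    case .premultipliedLast, .premultipliedFirst:
        print("premultiplied alpha")
    default:
        print("straight or no alpha")
    }
}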

How to increase quality of rendered image for UIImageView?

I have a UIImageView with contentMode = .aspectFit, and an image in the image view whose dimensions are bigger than the size of the image view. The user can draw some lines and save them as a sublayer. After that I need to save the edited image, but the quality of the saved image is worse than the quality of the image I loaded.
What am I doing wrong? I tried to use transform, but it didn't work.
import UIKit

extension UIImageView {
    var contentClippingRect: CGRect {
        let imgViewSize = self.frame.size
        let imgSize = self.image?.size ?? .zero
        let scaleW = imgViewSize.width / imgSize.width
        let scaleH = imgViewSize.height / imgSize.height
        let aspect = fmin(scaleW, scaleH)
        let width = imgSize.width * aspect
        let height = imgSize.height * aspect
        let imageRect = CGRect(x: (imgViewSize.width - width) / 2 + self.frame.origin.x,
                               y: (imgViewSize.height - height) / 2 + self.frame.origin.y,
                               width: width,
                               height: height)
        return imageRect
    }

    func asImage() -> UIImage {
        let imageRect = self.contentClippingRect
        let renderer = UIGraphicsImageRenderer(bounds: imageRect)
        let renderedImage = renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
        return renderedImage
    }
}
You can use UIGraphicsImageRendererFormat and set its scale; it will increase the quality of the rendered image a bit.
import UIKit

extension UIImageView {
    var contentClippingRect: CGRect {
        let imgViewSize = self.frame.size
        let imgSize = self.image?.size ?? .zero
        let scaleW = imgViewSize.width / imgSize.width
        let scaleH = imgViewSize.height / imgSize.height
        let aspect = fmin(scaleW, scaleH)
        let width = imgSize.width * aspect
        let height = imgSize.height * aspect
        let imageRect = CGRect(x: (imgViewSize.width - width) / 2 + self.frame.origin.x,
                               y: (imgViewSize.height - height) / 2 + self.frame.origin.y,
                               width: width,
                               height: height)
        return imageRect
    }

    func asImage() -> UIImage {
        let imageRect = self.contentClippingRect
        // Add this: render at a higher scale for more pixels per point
        let format = UIGraphicsImageRendererFormat()
        format.scale = 2
        let renderer = UIGraphicsImageRenderer(bounds: imageRect, format: format)
        let renderedImage = renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
        return renderedImage
    }
}
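If a fixed scale of 2 still loses detail, one option is to derive the scale from the source image itself, so the rendered copy keeps roughly the source's pixel resolution. This is a sketch of that idea, not part of the original answer; the helper name asImageMatchingSourceResolution is hypothetical:
extension UIImageView {
    // Hypothetical variation: choose the renderer scale so the clipped image
    // rect is rendered at (about) the source image's pixel width.
    func asImageMatchingSourceResolution() -> UIImage {
        let imageRect = contentClippingRect
        let format = UIGraphicsImageRendererFormat()
        let sourcePixelWidth = (image?.size.width ?? imageRect.width) * (image?.scale ?? 1)
        format.scale = max(1, sourcePixelWidth / max(imageRect.width, 1))
        let renderer = UIGraphicsImageRenderer(bounds: imageRect, format: format)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}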

How to convert pixel dimensions to CGSize in Swift?

I have large images uploaded by users in Swift and I need to resize them all to 100x100px to create thumbnails to store on my server. So far I have found that this resizes an image given a CGSize:
func resizedImage(image: UIImage, size: CGSize) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { (context) in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
Is there any way to create a CGSize knowing that my target size is strictly 100x100px?
Got this to work:
import UIKit
import AVFoundation // for AVMakeRect

extension UIImage {
    func resizedImage(pixelSize: (width: Int, height: Int)) -> UIImage? {
        // Divide by the screen scale so the rendered bitmap ends up at the exact pixel size
        let size = CGSize(width: CGFloat(pixelSize.width) / UIScreen.main.scale,
                          height: CGFloat(pixelSize.height) / UIScreen.main.scale)
        let rect = AVMakeRect(aspectRatio: self.size, insideRect: CGRect(origin: .zero, size: size))
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { (context) in
            self.draw(in: rect)
        }
    }
}
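A quick way to check that the output really is 100 x 100 pixels regardless of the device scale (a sketch; largeImage stands for any source UIImage):
let thumbnail = largeImage.resizedImage(pixelSize: (width: 100, height: 100))!
print(thumbnail.size)                         // points, e.g. (33.3, 33.3) on a 3x screen
print(thumbnail.size.width * thumbnail.scale) // 100.0 pixels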
You should initialize your renderer based on the user's device scale and multiply its width and height instead of dividing them:
extension UIImage {
    func aspectFitScaled(to size: CGSize) -> UIImage {
        let format = imageRendererFormat
        format.opaque = false
        format.scale = UIScreen.main.scale
        let isLandscape = self.size.width > self.size.height
        let ratio = isLandscape ? size.width / self.size.width : size.height / self.size.height
        let drawSize = CGSize(width: self.size.width * ratio, height: self.size.height * ratio)
        // Center the drawn image inside the target canvas
        let x = (size.width - drawSize.width) / 2
        let y = (size.height - drawSize.height) / 2
        let origin = CGPoint(x: x, y: y)
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            draw(in: CGRect(origin: origin, size: drawSize))
        }
    }
}
usage:
class ViewController: UIViewController {
    // imageView frame is 200 x 200
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // original image size is (719.0, 808.0)
        let image = UIImage(data: try! Data(contentsOf: URL(string: "https://i.stack.imgur.com/Xs4RX.jpg")!))!
        imageView.backgroundColor = .gray
        let ivImage = image.aspectFitScaled(to: imageView.frame.size)
        imageView.image = ivImage
        print("ivImage.size", ivImage.size)   // (200.0, 200.0)
        print("ivImage.scale", ivImage.scale) // screen scale, 3.0 on an iPhone 8 Plus
        // let's check the real image dimensions
        let data = ivImage.jpegData(compressionQuality: 1)!
        let savedSize = UIImage(data: data)!.size
        print("savedSize", savedSize) // savedSize (600.0, 600.0)
    }
}

White Border around Resized Image

I'm using the following code to resize an image, but I keep getting a white border at the bottom.
func resize(withSize targetSize: NSSize) -> NSImage? {
    let size = NSSize(width: floor(targetSize.width / (NSScreen.main?.backingScaleFactor)!),
                      height: floor(targetSize.height / (NSScreen.main?.backingScaleFactor)!))
    let frame = NSRect(x: 0, y: 0, width: floor(size.width), height: floor(size.height))
    guard let representation = self.bestRepresentation(for: frame, context: nil, hints: nil) else {
        return nil
    }
    let image = NSImage(size: size, flipped: false, drawingHandler: { (_) -> Bool in
        return representation.draw(in: frame)
    })
    return image
}
I have tried using the floor function to round off the values, but I still have the same issue.
extension NSImage {
    func resize(withSize targetSize: NSSize) -> NSImage? {
        let ratioX = targetSize.width / size.width / (NSScreen.main?.backingScaleFactor ?? 1)
        let ratioY = targetSize.height / size.height / (NSScreen.main?.backingScaleFactor ?? 1)
        let ratio = ratioX < ratioY ? ratioX : ratioY
        let canvasSize = NSSize(width: size.width * ratio, height: size.height * ratio)
        let frame = NSRect(origin: .zero, size: canvasSize)
        guard let representation = bestRepresentation(for: frame, context: nil, hints: nil)
        else { return nil }
        return NSImage(size: canvasSize, flipped: false) { _ in representation.draw(in: frame) }
    }
}
let image = NSImage(contentsOf: URL(string: "http://i.stack.imgur.com/Xs4RX.jpg")!)!
let resized = image.resize(withSize: .init(width: 400, height: 300)) // w 267 h 300
If you would like to take the screen scale into account, just remove the division by (NSScreen.main?.backingScaleFactor ?? 1) from the method when calculating ratioX and ratioY.
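For reference, here is the same method with that division removed (a direct transcription of the suggestion above, nothing else changed):
extension NSImage {
    func resize(withSize targetSize: NSSize) -> NSImage? {
        // Same as above, but ratioX/ratioY no longer divide by the backing scale factor
        let ratioX = targetSize.width / size.width
        let ratioY = targetSize.height / size.height
        let ratio = ratioX < ratioY ? ratioX : ratioY
        let canvasSize = NSSize(width: size.width * ratio, height: size.height * ratio)
        let frame = NSRect(origin: .zero, size: canvasSize)
        guard let representation = bestRepresentation(for: frame, context: nil, hints: nil)
        else { return nil }
        return NSImage(size: canvasSize, flipped: false) { _ in representation.draw(in: frame) }
    }
}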

Resizing of NSImage not working

I am trying to resize an NSImage using code that I got from this website: https://gist.github.com/eiskalteschatten/dac3190fce5d38fdd3c944b45a4ca469, but it's not working.
Here is the code:
static func redimensionaNSImage(imagem: NSImage, tamanho: NSSize) -> NSImage {
    var imagemRect = CGRect(x: 0, y: 0, width: imagem.size.width, height: imagem.size.height)
    let imagemRef = imagem.cgImage(forProposedRect: &imagemRect, context: nil, hints: nil)
    return NSImage(cgImage: imagemRef!, size: tamanho)
}
I forgot to calculate the ratio. Now it's working fine.
static func redimensionaNSImage(imagem: NSImage, tamanho: NSSize) -> NSImage {
    var ratio: Float = 0.0
    let imageWidth = Float(imagem.size.width)
    let imageHeight = Float(imagem.size.height)
    let maxWidth = Float(tamanho.width)
    let maxHeight = Float(tamanho.height)
    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }
    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: Int(newWidth), height: Int(newHeight))
    // Get a CGImage from the NSImage
    var imageRect = CGRect(x: 0, y: 0, width: imagem.size.width, height: imagem.size.height)
    let imageRef = imagem.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)
    // Create an NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    // Return the new image
    return imageWithNewSize
}
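A usage sketch (the file URL is a placeholder; the printed size assumes the 2560 x 1706 example from the first question). Note that, as discussed at the top of this page, this approach changes the NSImage's point size but does not resample the underlying pixel data:
let url = URL(fileURLWithPath: "/caminho/para/foto.jpg") // placeholder path
let imagem = NSImage(byReferencing: url)
let redimensionada = redimensionaNSImage(imagem: imagem, tamanho: NSSize(width: 200, height: 200))
print(redimensionada.size) // (200.0, 133.0) points for a 2560 x 1706 source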