How do I crop a JPEG image from/to a URL, Swift, macOS

I want to make a rectangular crop of a JPEG image. The following code creates a duplicate of the image; it uses an NSImage. I do not know how to create a cropped image.
func crop(index: Int) {
    let croppedImageUrl = ...
    let imageUrl = ...
    // Create a cropped image.
    let data = try? Data(contentsOf: imageUrl)
    let image = NSImage(data: data!)
    let tiffRepresentation = (image?.tiffRepresentation)!
    let bitmap = NSBitmapImageRep(data: tiffRepresentation)
    let representation = bitmap?.representation(using: NSBitmapImageRep.FileType.jpeg, properties: [:])
    do {
        try representation?.write(to: croppedImageUrl, options: [.withoutOverwriting])
    } catch let error as NSError {
        print(error.localizedDescription)
    }
}

Something like...
func crop(nsImage: NSImage, rect: CGRect) -> NSImage {
    let cgImage = (nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)?.cropping(to: rect))!
    let size = NSSize(width: rect.width, height: rect.height)
    return NSImage(cgImage: cgImage, size: size)
}
Sorry, I haven't compiled this code fragment, but the general method worked in my code. It is probably better done as an extension on NSImage, if that is possible; a sketch of that is below.
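For example, a minimal sketch of such an extension (untested; the method name is my own):
extension NSImage {
    // Returns a new image containing only the pixels inside `rect`,
    // where `rect` is given in the image's coordinate space.
    func cropped(to rect: CGRect) -> NSImage? {
        guard let cgImage = cgImage(forProposedRect: nil, context: nil, hints: nil)?.cropping(to: rect) else {
            return nil
        }
        return NSImage(cgImage: cgImage, size: NSSize(width: rect.width, height: rect.height))
    }
}
Used as, e.g., image.cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100)).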

This may help you to crop the image:
func crop() -> UIImage? {
    let imageUrl = URL(string: "imageUrl")!
    guard let data = try? Data(contentsOf: imageUrl),
          let image = UIImage(data: data) else { return nil }
    // Crop rectangle: a square whose side is the shorter image dimension.
    let side = min(image.size.width, image.size.height)
    let size = CGSize(width: side, height: side)
    // To crop the center of the image, offset the drawing origin so the
    // surplus on each axis falls outside the context.
    let origin = CGPoint(x: (size.width - image.size.width) / 2,
                         y: (size.height - image.size.height) / 2)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.draw(in: CGRect(origin: origin, size: image.size))
    let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return croppedImage
}
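If you also need to write the cropped result back to disk as a JPEG, as in the original question, a minimal follow-up sketch (the destination URL is a placeholder):
// Hypothetical destination; use your own croppedImageUrl.
let croppedImageUrl = URL(fileURLWithPath: "/tmp/cropped.jpg")
if let croppedImage = crop(),
   let jpegData = croppedImage.jpegData(compressionQuality: 0.9) {
    try? jpegData.write(to: croppedImageUrl)
}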

Related

How can I add a square image to a QRCode | Swift

Essentially I have the following QR code function that successfully creates a QR code from a given string. How can I add a square image to the center of this QR code that stays the same no matter what string the code represents?
The following is the function I use to generate:
func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
Sample code from one of my apps, only lightly commented.
The size calculations may not be required for your app.
func generateImage(code: String, size pointSize: CGSize, logo: UIImage? = nil) -> UIImage? {
    let pixelScale = UIScreen.main.scale
    let pixelSize = CGSize(width: pointSize.width * pixelScale, height: pointSize.height * pixelScale)
    guard
        let codeData = code.data(using: .isoLatin1),
        let generator = CIFilter(name: "CIQRCodeGenerator")
    else {
        return nil
    }
    generator.setValue(codeData, forKey: "inputMessage")
    // set a higher error correction level so the logo can cover part of the code
    generator.setValue("Q", forKey: "inputCorrectionLevel")
    guard let codeImage = generator.outputImage else {
        return nil
    }
    // calculate transform depending on required size
    let transform = CGAffineTransform(
        scaleX: pixelSize.width / codeImage.extent.width,
        y: pixelSize.height / codeImage.extent.height
    )
    let scaledCodeImage = UIImage(ciImage: codeImage.transformed(by: transform), scale: 0, orientation: .up)
    guard let logo = logo else {
        return scaledCodeImage
    }
    // create a drawing buffer
    UIGraphicsBeginImageContextWithOptions(pointSize, false, 0)
    defer {
        UIGraphicsEndImageContext()
    }
    // draw QR code into the buffer
    scaledCodeImage.draw(in: CGRect(origin: .zero, size: pointSize))
    // calculate scale to cover the central 25% of the image
    let logoScaleFactor: CGFloat = 0.25
    // update depending on logo width/height ratio
    let logoScale = min(
        pointSize.width * logoScaleFactor / logo.size.width,
        pointSize.height * logoScaleFactor / logo.size.height
    )
    // size of the logo
    let logoSize = CGSize(width: logoScale * logo.size.width, height: logoScale * logo.size.height)
    // draw the logo
    logo.draw(in: CGRect(
        x: (pointSize.width - logoSize.width) / 2,
        y: (pointSize.height - logoSize.height) / 2,
        width: logoSize.width,
        height: logoSize.height
    ))
    return UIGraphicsGetImageFromCurrentImageContext()!
}
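A hypothetical usage example (the point size, string, asset name, and imageView outlet are placeholders):
// 200x200 pt QR code with a centred logo from the asset catalog.
let qrWithLogo = generateImage(code: "https://example.com",
                               size: CGSize(width: 200, height: 200),
                               logo: UIImage(named: "AppLogo"))
imageView.image = qrWithLogo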

How to make an ellipse/circular UIImage with transparent background?

This is the code I am using
extension UIImage {
    var ellipseMasked: UIImage? {
        guard let cgImage = cgImage else { return nil }
        let rect = CGRect(origin: .zero, size: size)
        return UIGraphicsImageRenderer(size: size, format: imageRendererFormat)
            .image { _ in
                UIBezierPath(ovalIn: rect).addClip()
                UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
                    .draw(in: rect)
            }
    }
}
This is the image I got; the background color is black.
How can I make the background transparent?
I have tried different ways but haven't made it work yet.
You can subclass UIImageView and mask its CALayer instead of clipping the image itself:
extension CAShapeLayer {
    convenience init(path: UIBezierPath) {
        self.init()
        self.path = path.cgPath
    }
}

class EllipsedView: UIImageView {
    override func layoutSubviews() {
        super.layoutSubviews()
        layer.mask = CAShapeLayer(path: .init(ovalIn: bounds))
    }
}
let profilePicture = UIImage(data: try! Data(contentsOf: URL(string:"http://i.stack.imgur.com/Xs4RX.jpg")!))!
let iv = EllipsedView(image: profilePicture)
edit/update
If you need to clip the UIImage itself, you can do it as follows:
extension UIImage {
    var ellipseMasked: UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        UIBezierPath(ovalIn: .init(origin: .zero, size: size)).addClip()
        draw(in: .init(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
For iOS 10+ you can use UIGraphicsImageRenderer.
extension UIImage {
    var ellipseMasked: UIImage {
        let rect = CGRect(origin: .zero, size: size)
        let format = imageRendererFormat
        format.opaque = false
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            UIBezierPath(ovalIn: rect).addClip()
            draw(in: rect)
        }
    }
}
let profilePicture = UIImage(data: try! Data(contentsOf: URL(string:"http://i.stack.imgur.com/Xs4RX.jpg")!))!
profilePicture.ellipseMasked
Here are two solutions using SwiftUI.
This solution can be used to clip the image to a circle:
Image("imagename").resizable()
    .clipShape(Circle())
    .scaledToFit()
This solution can be used to get more of an ellipse or oval shape from the image:
Image("imagename").resizable()
    .cornerRadius(100)
    .scaledToFit()
    .padding()
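If you want a true ellipse rather than rounded corners, clipping to SwiftUI's built-in Ellipse shape should also work; a minimal sketch:
Image("imagename").resizable()
    .scaledToFit()
    .clipShape(Ellipse())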

How to use scale factor to scale image in Swift

I have a large image, 1920x1080 pixels. I'm trying to scale the image in two different ways:
First: using CIFilter
func resize(image: UIImage, scale: Float, aspect: Float = 1) -> UIImage? {
    return autoreleasepool(invoking: { [weak self] () -> UIImage? in
        var filter: CIFilter! = CIFilter(name: "CILanczosScaleTransform")!
        filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
        filter.setValue(NSNumber(value: scale as Float), forKey: kCIInputScaleKey)
        filter.setValue(NSNumber(value: aspect as Float), forKey: kCIInputAspectRatioKey)
        var result: UIImage?
        var cgImage: CGImage? = nil
        if let outputImage = filter.outputImage {
            cgImage = self?.ctx?.createCGImage(outputImage, from: outputImage.extent)
        }
        if let cgImg = cgImage {
            result = self?.convertUIImage(fromCGImage: cgImg)
        }
        if #available(iOS 10.0, *) {
            self?.ctx?.clearCaches()
        }
        cgImage = nil
        filter.setValue(nil, forKey: kCIInputImageKey)
        filter.setValue(nil, forKey: kCIInputScaleKey)
        filter.setValue(nil, forKey: kCIInputAspectRatioKey)
        filter.setDefaults()
        filter = nil
        return result
    })
}
Second: using UIImage()
func scaleImage(scale: CGFloat) -> UIImage? {
    if let cgImage = self.cgImage {
        return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
    }
    return nil
}
But I realized that the scale factors in the two methods produce conflicting results. For example, with scale set to 2, the first method produces a 3840x2160 image, while the second produces 960x540.
I'm really confused. Can anyone explain why this happens?
In the future, when a new function takes a scale parameter, how do I know whether it will make my image smaller or larger?
The two scale parameters mean different things. CILanczosScaleTransform's inputScale multiplies the pixel dimensions, so a scale of 2 doubles them. UIImage(cgImage:scale:orientation:) does not touch the pixels at all; it only sets the image's point-to-pixel scale factor, so the reported size in points is divided by the scale, which is why your second method appears to shrink the image. If you want to actually resize the bitmap, try this:
func scaleImage(image: UIImage, scale: CGFloat) -> UIImage? {
    let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    let drawRect = CGRect(origin: .zero, size: size)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.draw(in: drawRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
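To illustrate the difference, a minimal sketch (the asset name is a placeholder; assume it loads a 1920x1080-pixel photo at scale 1):
import UIKit

// Hypothetical source image, 1920x1080 pixels.
let original = UIImage(named: "landscape1920x1080")!
let cgImage = original.cgImage!

// Wrapping the same pixels with scale 2 only changes the reported point size.
let wrapped = UIImage(cgImage: cgImage, scale: 2, orientation: .up)
print(wrapped.size)                  // (960.0, 540.0) points
print(cgImage.width, cgImage.height) // 1920 1080 pixels, unchanged

// Redrawing with scaleImage(image:scale:) above resizes the bitmap itself.
let doubled = scaleImage(image: original, scale: 2)
print(doubled!.size)                 // (3840.0, 2160.0) points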

resizing image and saving to file

I have these functions that I've cobbled together to resize and save an image, but they don't seem to be resizing my images properly: a 150x150 image that I try to resize to 50x50 ends up saved as 100x100. Any ideas what's causing this?
extension NSImage {
    @discardableResult
    func saveAsPNG(url: URL) -> Bool {
        guard let tiffData = self.tiffRepresentation else {
            print("failed to get tiffRepresentation. url: \(url)")
            return false
        }
        let imageRep = NSBitmapImageRep(data: tiffData)
        guard let imageData = imageRep?.representation(using: .PNG, properties: [:]) else {
            print("failed to get PNG representation. url: \(url)")
            return false
        }
        do {
            try imageData.write(to: url)
            return true
        } catch {
            print("failed to write to disk. url: \(url)")
            return false
        }
    }
}
enum error: Error {
    case imageCreationFailure
}

func resizeImageByFactor(_ url: URL) throws {
    let image = NSImage(byReferencing: url)
    guard image.isValid else { throw error.imageCreationFailure }
    let reSize = NSSize(width: 50, height: 50)
    let oldRect = CGRect(x: 0.0, y: 0.0, width: image.size.width, height: image.size.height)
    let newRect = CGRect(x: 0.0, y: 0.0, width: reSize.width, height: reSize.height)
    let newImage = NSImage(size: reSize)
    newImage.lockFocus()
    image.draw(in: newRect, from: oldRect, operation: .copy, fraction: 1.0)
    newImage.unlockFocus()
    newImage.size
    let url = URL(fileURLWithPath: "test.jpg", relativeTo: url.deletingLastPathComponent())
    newImage.saveAsPNG(url: url)
}
OS X & iOS devices have scaling factors. The iPhone 5, 5S, 6, etc. all have a scaling factor of 2x. The iPhone 6 Plus has a scaling factor of 3x. The old non-retina iPhones have a 1x scaling factor. My OS X machine with a 4K display has a scaling factor of 2x.
What you should do is this:
let scalingFactor = NSScreen.main?.backingScaleFactor ?? 1
let size = NSSize(width: 50 / scalingFactor, height: 50 / scalingFactor)
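Alternatively, if you want the saved file at an exact pixel size regardless of the screen's backing scale factor, here is a minimal sketch that draws into an explicitly sized NSBitmapImageRep instead of using lockFocus (the helper name and paths are my own):
import AppKit

// Hypothetical helper: renders `image` into a bitmap with exact pixel dimensions,
// independent of the current screen's scale factor.
func bitmap(from image: NSImage, pixelSize: NSSize) -> NSBitmapImageRep? {
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(pixelSize.width),
                                     pixelsHigh: Int(pixelSize.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = pixelSize // 1 point == 1 pixel in this rep
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
    image.draw(in: NSRect(origin: .zero, size: pixelSize),
               from: .zero, operation: .copy, fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()
    return rep
}

// Usage: a 50x50-pixel PNG, no matter which display the code runs on.
let source = NSImage(byReferencing: URL(fileURLWithPath: "/path/to/input.png"))
if let rep = bitmap(from: source, pixelSize: NSSize(width: 50, height: 50)),
   let pngData = rep.representation(using: .png, properties: [:]) {
    try? pngData.write(to: URL(fileURLWithPath: "/path/to/output_50.png"))
}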

change resolution and size of image with cocoa/osx/swift (no mobile apps)

I am trying to change the size and the resolution of an image programmatically, and afterwards save the image.
The image size in the image view changes, but when I look at my file "file3.png" it always has the original resolution of 640x1142.
I googled around but can't find a solution. I tried redrawing the image, but maybe that's the wrong strategy.
Thanks.
@IBAction func pickOneImageBtn(sender: AnyObject) {
    // load image from path
    pickedImage.image = loadImageFromPath(fileInDocumentsDirectory("Angebote.png"))
    let newSize = NSSize(width: 10, height: 10)
    if let image = pickedImage.image {
        print("found image")
        // cast to CGImage
        var imageRect: CGRect = CGRectMake(0, 0, image.size.width, image.size.height)
        let imageRef = image.CGImageForProposedRect(&imageRect, context: nil, hints: nil)
        if let imageRefExists = imageRef {
            print("Cast to CGImage worked \(imageRefExists)")
        }
        // redraw to NSImage with new size
        let imageWithNewSize = NSImage(CGImage: imageRef!, size: newSize)
        // save on disk
        let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
        let bitmap: NSBitmapImageRep! = NSBitmapImageRep(data: imgData!)
        if let pngCoverImage = bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:]) {
            pngCoverImage.writeToFile("/...correctpath.../imageSourceForResize/file3.png", atomically: false)
            print("saved new image")
        }
        // the size is smaller
        pickedImage.image = imageWithNewSize
    }
}
Change
let imgData: NSData! = pickedImage.image!.TIFFRepresentation!
to
let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
I tried to change the size of an NSImage for a Mac application, and here is a working function to resize an image, written in Swift.
func resize(image: NSImage, w: Int, h: Int) -> NSImage {
    let destSize = NSMakeSize(CGFloat(w), CGFloat(h))
    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.drawInRect(NSMakeRect(0, 0, destSize.width, destSize.height), fromRect: NSZeroRect, operation: NSCompositingOperation.CompositeCopy, fraction: 1.0)
    newImage.unlockFocus()
    newImage.size = destSize
    return NSImage(data: newImage.TIFFRepresentation!)!
}
You pass three parameters to this function, i.e. an NSImage, a width, and a height, and it returns the resized image.
targetimage = resize(source, w: Int(targetwidth), h: Int(targetheight))