Downsampling Images with SwiftUI

I'm displaying images in my app that are downloaded from the network, but I'd like to downsample them so each one doesn't take up multiple megabytes of memory. I could previously do this quite easily with UIKit:
func resizedImage(image: UIImage, for size: CGSize) -> UIImage? {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
There are other methods as well, but they all depend on knowing the image view's desired size, which isn't straightforward in SwiftUI.
Is there a good API/method specifically for downsampling SwiftUI images?

I ended up solving it with GeometryReader, which isn't ideal since it affects the layout a bit.
@State var image: UIImage

var body: some View {
    GeometryReader { geo in
        Image(uiImage: self.image)
            .resizable()
            .aspectRatio(contentMode: .fit)
            .onAppear {
                let imageFrame = CGRect(x: 0, y: 0, width: geo.size.width, height: geo.size.height)
                self.downsize(frame: imageFrame) // call whatever downsizing function you want
            }
    }
}
Use the geometry proxy to determine the image's frame, then downsample to that frame. I wish SwiftUI had its own API for this.
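The downsize function itself isn't shown above; here is a minimal sketch of what it might look like, assuming the @State image property from the snippet and reusing the UIGraphicsImageRenderer approach from the question:
private func downsize(frame: CGRect) {
    // Re-render the image at the displayed size; assigning back to the
    // @State property swaps in the smaller bitmap.
    let renderer = UIGraphicsImageRenderer(size: frame.size)
    image = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: frame.size))
    }
}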

For resizing, use this function. It is fast even in a List or LazyVStack, and it reduces the memory consumption of the images.
public var body: some View {
    GeometryReader { proxy in
        let image = UIImage(named: imageName)?
            .resize(height: proxy.size.height)
        Image(uiImage: image ?? UIImage())
            .resizable()
            .scaledToFill()
    }
}
public extension UIImage {
    /// Resizes the image while keeping the aspect ratio.
    func resize(height: CGFloat) -> UIImage {
        let scale = height / self.size.height
        let width = self.size.width * scale
        let newSize = CGSize(width: width, height: height)
        let renderer = UIGraphicsImageRenderer(size: newSize)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}
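Note that UIGraphicsImageRenderer's default format matches the device's screen scale, so the resulting bitmap is height * UIScreen.main.scale pixels tall. If the goal is to cap pixel dimensions rather than points, a sketch of a variant that pins the renderer's scale to 1 (the resizePixels name is my own):
public extension UIImage {
    /// Variant of resize(height:) that renders at 1x, so `height` is
    /// interpreted as pixels rather than points. A sketch, not a drop-in
    /// replacement: pinning scale to 1 trades sharpness for memory.
    func resizePixels(height: CGFloat) -> UIImage {
        let scale = height / size.height
        let newSize = CGSize(width: size.width * scale, height: height)
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = 1 // 1 point == 1 pixel in the output bitmap
        let renderer = UIGraphicsImageRenderer(size: newSize, format: format)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}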

This method uses a CIFilter (CILanczosScaleTransform) to scale down a UIImage:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CILanczosScaleTransform
import CoreImage
import UIKit

public extension UIImage {
    func downsampled(by reductionAmount: Float) -> UIImage? {
        let image = CIImage(image: self)
        guard let lanczosFilter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
        lanczosFilter.setValue(image, forKey: kCIInputImageKey)
        lanczosFilter.setValue(NSNumber(value: reductionAmount), forKey: kCIInputScaleKey)
        guard let outputImage = lanczosFilter.outputImage else { return nil }
        let context = CIContext(options: [CIContextOption.useSoftwareRenderer: false])
        guard let cgImage = context.createCGImage(outputImage, from: outputImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}
And then you can use it in a SwiftUI View:
struct ContentView: View {
    var body: some View {
        if let uiImage = UIImage(named: "sample")?.downsampled(by: 0.3) {
            Image(uiImage: uiImage)
        }
    }
}
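One caveat: creating a CIContext is expensive, and Apple's Core Image guidance is to build one context and reuse it rather than allocating one per image. A small tweak (a sketch; the sharedCIContext name is my own) caches the context statically:
public extension UIImage {
    /// Shared context for all downsampled(by:) calls; CIContext creation
    /// is costly, so build it once and reuse it.
    static let sharedCIContext = CIContext(options: [.useSoftwareRenderer: false])
}
Then replace let context = CIContext(...) inside downsampled(by:) with UIImage.sharedCIContext.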

Related

Take snapshot from UIView with lower resolution

I'm taking snapshots from a PDFView in PDFKit for streaming (20 times per second), and I use this extension:
extension UIView {
    func asImageBackground(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
        return renderer.image { rendererContext in
            viewLayer.render(in: rendererContext.cgContext)
        }
    }
}
But the output UIImage from this extension has a high resolution, which makes it difficult to stream. I can reduce it with this extension:
extension UIImage {
    func resize(_ max_size: CGFloat) -> UIImage {
        // adjust for device pixel density: the renderer draws at the
        // device scale, so requested points = desired pixels / scale
        let max_size_pixels = max_size / UIScreen.main.scale
        // work out aspect ratio
        let aspectRatio = size.width / size.height
        var width: CGFloat
        var height: CGFloat
        if aspectRatio > 1 {
            // landscape
            width = max_size_pixels
            height = max_size_pixels / aspectRatio
        } else {
            // portrait
            height = max_size_pixels
            width = max_size_pixels * aspectRatio
        }
        // create an image renderer of the correct size and render the image
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height),
                                               format: UIGraphicsImageRendererFormat.default())
        return renderer.image { context in
            self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        }
    }
}
But it adds additional work, which makes the process even worse. Is there a better way?
Thanks
You can downsample it using ImageIO, which is the approach Apple recommends:
extension UIImage {
    func downsample(to resolution: CGSize) -> UIImage? {
        let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
        guard let data = self.jpegData(compressionQuality: 0.75),
              let imageSource = CGImageSourceCreateWithData(data as CFData, imageSourceOptions) else {
            return nil
        }
        let maxDimensionInPixels = Swift.max(resolution.width, resolution.height) * 3
        let downsampleOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
        ] as CFDictionary
        guard let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions) else {
            return nil
        }
        return UIImage(cgImage: downsampledImage)
    }
}
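Note that this round-trips the already-decoded snapshot through JPEG just to obtain a CGImageSource. If you have the encoded bytes anyway (a file, a download, or your stream encoder's input), a sketch of downsampling straight from Data avoids that extra encode; the free function and the default scale of 3 are my own choices:
import ImageIO
import UIKit

/// Sketch: downsample directly from encoded image data, skipping the
/// jpegData round trip. `scale` is the screen scale to target.
func downsample(data: Data, to pointSize: CGSize, scale: CGFloat = 3) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithData(data as CFData, sourceOptions) else {
        return nil
    }
    let maxDimensionInPixels = Swift.max(pointSize.width, pointSize.height) * scale
    let options = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}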

White strip on iPad when rendering a view to an image

I am trying to render a black rectangle to an image and save it to the photo library. But every time I render it on my iPad, the picture has a white strip at the top, which doesn't happen on the iPhone.
I am using Swift Playgrounds 4, so maybe that's the reason. It's a bit strange, since both views, the small one and the bigger one, are "iPads".
Thank you for your help!
That’s my code so far:
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Button("Snapshot") {
                // Save screenshot
                let image = snapshotView.snapshot()
                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
            }
        }
    }

    var snapshotView: some View {
        VStack {
            Rectangle()
                .frame(width: 200, height: 200)
        }
    }
}
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}
[Image of the rectangle]
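One plausible culprit for an iPad-only strip is the hosting controller applying safe-area insets to the rendered hierarchy. A hedged variant of snapshot() that neutralizes the safe area before hosting — an assumption, not a confirmed fix:
extension View {
    /// Sketch: same as snapshot(), but wraps the root view so the hosting
    /// controller has no safe area to inset for. Assumes the white strip
    /// comes from the iPad's safe-area inset; needs iOS 14+ for
    /// ignoresSafeArea().
    func snapshotIgnoringSafeArea() -> UIImage {
        let controller = UIHostingController(rootView: self.ignoresSafeArea())
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
}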

Cocoa: Capture Screen and scale image on saving in Swift

Below is the code I am using to capture the screen in a macOS application:
let img = CGDisplayCreateImage(CGMainDisplayID())
guard let destination = FileManager.default.urls(for: .downloadsDirectory,
                                                 in: .userDomainMask).first?.appendingPathComponent("shot.jpg", isDirectory: false)
else {
    print("Unable to save captured image!")
    return
}
let properties: CFDictionary = [
    kCGImagePropertyPixelWidth: "900",
    kCGImagePropertyPixelHeight: "380"
] as CFDictionary
if let dest = CGImageDestinationCreateWithURL(destination as CFURL, kUTTypeJPEG, 1, properties) {
    CGImageDestinationAddImage(dest, img!, properties)
    CGImageDestinationFinalize(dest)
} else {
    print("Unable to create captured image to the destination!")
}
I have to scale the image to a particular size while saving, so I used a CFDictionary with the width and height properties of the image. But it seems I am doing it wrong. Please help me find the correct solution. Thank you!
First, you can't resize using CGImageDestinationCreateWithURL or CGImageDestinationAddImage. If you look at the docs here and here you will notice that neither kCGImagePropertyPixelWidth nor kCGImagePropertyPixelHeight is supported.
You will need to resize manually. You can use this tool, or modify it, if you find it helpful. It supports fill (stretch) and fit (scale while keeping the original aspect ratio) content modes. If you specify .fit it will center the drawing in the resulting image. If you specify .fill it will fill the whole space stretching whichever dimension it needs to.
enum ImageResizer {
    enum ContentMode {
        case fill
        case fit
    }

    enum Error: Swift.Error {
        case badOriginal
        case resizeFailed
    }

    static func resize(_ source: CGImage, to targetSize: CGSize, mode: ContentMode) throws -> CGImage {
        let context = CGContext(
            data: nil,
            width: Int(targetSize.width),
            height: Int(targetSize.height),
            bitsPerComponent: source.bitsPerComponent,
            bytesPerRow: 0,
            space: source.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
            bitmapInfo: source.bitmapInfo.rawValue
        )
        guard let context = context else {
            throw Error.badOriginal
        }

        let drawingSize: CGSize
        switch mode {
        case .fill:
            drawingSize = targetSize
        case .fit:
            drawingSize = CGSize(width: source.width, height: source.height)
                .scaledToFit(target: targetSize)
        }

        let drawRect = CGRect(origin: .zero, size: targetSize)
            .makeCenteredRect(withSize: drawingSize)

        context.interpolationQuality = .high
        context.draw(source, in: drawRect)

        guard let result = context.makeImage() else {
            throw Error.resizeFailed
        }
        return result
    }
}
ImageResizer depends on these CG extensions for scaling the source image and centering the scaled image:
extension CGSize {
    var maxDimension: CGFloat {
        Swift.max(width, height)
    }

    var minDimension: CGFloat {
        Swift.min(width, height)
    }

    func scaled(by scalar: CGFloat) -> CGSize {
        CGSize(width: width * scalar, height: height * scalar)
    }

    func scaleFactors(to target: CGSize) -> CGSize {
        CGSize(
            width: target.width / width,
            height: target.height / height
        )
    }

    func scaledToFit(target: CGSize) -> CGSize {
        return scaled(by: scaleFactors(to: target).minDimension)
    }
}

extension CGRect {
    func makeCenteredRect(withSize size: CGSize) -> CGRect {
        let origin = CGPoint(
            x: midX - size.width / 2.0,
            y: midY - size.height / 2.0
        )
        return CGRect(origin: origin, size: size)
    }
}
Also, make sure you set up permissions if you're going to save to .downloadsDirectory.
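For instance, wiring ImageResizer into the capture code from the question might look like this (a sketch: the 900x380 target comes from the question, and destination is the URL computed there):
// Resize the captured frame with ImageResizer, then write the result
// without any (unsupported) resize properties on the destination.
if let captured = CGDisplayCreateImage(CGMainDisplayID()) {
    do {
        let resized = try ImageResizer.resize(captured,
                                              to: CGSize(width: 900, height: 380),
                                              mode: .fit)
        if let dest = CGImageDestinationCreateWithURL(destination as CFURL, kUTTypeJPEG, 1, nil) {
            CGImageDestinationAddImage(dest, resized, nil)
            CGImageDestinationFinalize(dest)
        }
    } catch {
        print("Resize failed: \(error)")
    }
}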

How to make an ellipse/circular UIImage with transparent background?

This is the code I am using
extension UIImage {
    var ellipseMasked: UIImage? {
        guard let cgImage = cgImage else { return nil }
        let rect = CGRect(origin: .zero, size: size)
        return UIGraphicsImageRenderer(size: size, format: imageRendererFormat)
            .image { _ in
                UIBezierPath(ovalIn: rect).addClip()
                UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
                    .draw(in: rect)
            }
    }
}
This is the image I got
The background color is black.
How can I make the background transparent?
I tried different ways but haven't made it work yet.
You can subclass UIImageView and mask its CALayer instead of clipping the image itself:
extension CAShapeLayer {
    convenience init(path: UIBezierPath) {
        self.init()
        self.path = path.cgPath
    }
}

class EllipsedView: UIImageView {
    override func layoutSubviews() {
        super.layoutSubviews()
        layer.mask = CAShapeLayer(path: .init(ovalIn: bounds))
    }
}

let profilePicture = UIImage(data: try! Data(contentsOf: URL(string: "http://i.stack.imgur.com/Xs4RX.jpg")!))!
let iv = EllipsedView(image: profilePicture)
edit/update
If you need to clip the UIImage itself, you can do it as follows:
extension UIImage {
    var ellipseMasked: UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        UIBezierPath(ovalIn: .init(origin: .zero, size: size)).addClip()
        draw(in: .init(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
For iOS 10+, you can use UIGraphicsImageRenderer:
extension UIImage {
    var ellipseMasked: UIImage {
        let rect = CGRect(origin: .zero, size: size)
        let format = imageRendererFormat
        format.opaque = false
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            UIBezierPath(ovalIn: rect).addClip()
            draw(in: rect)
        }
    }
}

let profilePicture = UIImage(data: try! Data(contentsOf: URL(string: "http://i.stack.imgur.com/Xs4RX.jpg")!))!
profilePicture.ellipseMasked
Here are two solutions using SwiftUI.
This solution can be used to clip the image to a circle.
Image("imagename").resizable()
.clipShape(Circle())
.scaledToFit()
This solution can be used to get more of an ellipse or oval shape from the image.
Image("imagename").resizable()
.cornerRadius(100)
.scaledToFit()
.padding()
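If you want a true ellipse rather than rounded corners, SwiftUI also has a dedicated Ellipse shape that works with clipShape, just like Circle above:
Image("imagename")
    .resizable()
    .scaledToFit()
    .clipShape(Ellipse()) // clips to an ellipse inscribed in the frame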

SwiftUI: getting an image's displayed dimensions

I'm trying to get the dimensions of a displayed image to draw bounding boxes over the text I have recognized using Apple's Vision framework.
So I run the VNRecognizeTextRequest upon the press of a button with this function:
func readImage(image: NSImage,
               completionHandler: @escaping ([VNRecognizedText]?, Error?) -> (),
               comp: @escaping (Double?, Error?) -> ()) {
    var recognizedTexts = [VNRecognizedText]()
    var rr = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage(forProposedRect: &rr, context: nil, hints: nil)!,
                                               options: [:])
    let textRequest = VNRecognizeTextRequest { (request, error) in
        guard let observations = request.results as? [VNRecognizedTextObservation] else {
            completionHandler(nil, error)
            return
        }
        for currentObservation in observations {
            let topCandidate = currentObservation.topCandidates(1)
            if let recognizedText = topCandidate.first {
                recognizedTexts.append(recognizedText)
            }
        }
        completionHandler(recognizedTexts, nil)
    }
    textRequest.recognitionLevel = .accurate
    textRequest.recognitionLanguages = ["es"]
    textRequest.usesLanguageCorrection = true
    textRequest.progressHandler = { (request, value, error) in
        comp(value, nil)
    }
    try? requestHandler.perform([textRequest])
}
and compute the bounding-box offsets using this struct and function:
struct DisplayingRect: Identifiable {
    var id = UUID()
    var width: CGFloat = 0
    var height: CGFloat = 0
    var xAxis: CGFloat = 0
    var yAxis: CGFloat = 0

    init(width: CGFloat, height: CGFloat, xAxis: CGFloat, yAxis: CGFloat) {
        self.width = width
        self.height = height
        self.xAxis = xAxis
        self.yAxis = yAxis
    }
}
func createBoundingBoxOffSet(recognizedTexts: [VNRecognizedText], image: NSImage) -> [DisplayingRect] {
    var rects = [DisplayingRect]()
    let imageSize = image.size
    let imageTransform = CGAffineTransform.identity.scaledBy(x: imageSize.width, y: imageSize.height)
    for obs in recognizedTexts {
        let observationBounds = try? obs.boundingBox(for: obs.string.startIndex..<obs.string.endIndex)
        let rectangle = observationBounds?.boundingBox.applying(imageTransform)
        print("Rectangle: \(rectangle!)")
        let width = rectangle!.width
        let height = rectangle!.height
        let xAxis = rectangle!.origin.x - imageSize.width / 2 + rectangle!.width / 2
        let yAxis = -(rectangle!.origin.y - imageSize.height / 2 + rectangle!.height / 2)
        let rect = DisplayingRect(width: width, height: height, xAxis: xAxis, yAxis: yAxis)
        rects.append(rect)
    }
    return rects
}
I place the rects using this code in the ContentView
ZStack {
    Image(nsImage: self.img!)
        .scaledToFit()
    ForEach(self.rects) { rect in
        Rectangle()
            .fill(Color(.sRGB, red: 1, green: 0, blue: 0, opacity: 0.2))
            .frame(width: rect.width, height: rect.height)
            .offset(x: rect.xAxis, y: rect.yAxis)
    }
}
If I use the original image's dimensions, I get these results:
But if I add
Image(nsImage: self.img!)
    .resizable()
    .scaledToFit()
I get these results
Is there a way to get the dimensions of the image as it is displayed and pass them along, so I get the proper size? I also need this because sometimes I can't show the whole image and need to scale it.
Thanks a lot
I would use a GeometryReader on the background so it reads exactly the size of the image, as below:
@State var imageSize: CGSize = .zero // << or initial from NSImage
...
Image(nsImage: self.img!)
    .resizable()
    .scaledToFit()
    .background(rectReader())

// ... somewhere below
private func rectReader() -> some View {
    return GeometryReader { (geometry) -> Color in
        let imageSize = geometry.size
        DispatchQueue.main.async {
            print(">> \(imageSize)") // use image actual size in your calculations
            self.imageSize = imageSize
        }
        return .clear
    }
}
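With self.imageSize captured this way, the bounding-box math from the question can be driven by the displayed size instead of the bitmap's native size. A sketch, where displaySize is a hypothetical new parameter fed from self.imageSize:
func createBoundingBoxOffSet(recognizedTexts: [VNRecognizedText],
                             displaySize: CGSize) -> [DisplayingRect] {
    var rects = [DisplayingRect]()
    // Vision bounding boxes are normalized (0...1); scaling by the
    // *displayed* size maps them onto the on-screen image.
    let transform = CGAffineTransform.identity
        .scaledBy(x: displaySize.width, y: displaySize.height)
    for obs in recognizedTexts {
        guard let box = try? obs.boundingBox(for: obs.string.startIndex..<obs.string.endIndex)?
            .boundingBox.applying(transform) else { continue }
        rects.append(DisplayingRect(width: box.width,
                                    height: box.height,
                                    xAxis: box.origin.x - displaySize.width / 2 + box.width / 2,
                                    yAxis: -(box.origin.y - displaySize.height / 2 + box.height / 2)))
    }
    return rects
}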
Rather than passing the frame to every view, Apple elected to give you a separate GeometryReader view that gets its frame passed in as a parameter to its child closure.
struct Example: View {
    var body: some View {
        GeometryReader { geometry in
            Image(systemName: "check")
                .onAppear {
                    print(geometry.frame(in: .local))
                }
        }
    }
}