Scaling an image OSX Swift

I'm currently trying to scale an image using Swift. This shouldn't be a difficult task, since I implemented a scaling solution in C# in 30 minutes - however, I've been stuck on this for 2 days now.
I've tried googling and crawling through Stack posts, but to no avail. The two main solutions I have seen people use are:
1. A function written in Swift to resize an NSImage proportionately
2. resizeNSImage.swift, an Obj-C implementation of the above link
I would prefer to use the most efficient/least CPU-intensive solution, which according to my research is option 2. However, because option 2 uses NSImage.lockFocus() and NSImage.unlockFocus(), the image scales fine on non-Retina Macs but comes out at double the intended size on Retina Macs. I know this is due to the pixel density of Retina Macs and is to be expected, but I need a scaling solution that ignores HiDPI specifications and just performs a normal scale operation.
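For reference, the lockFocus approach from option 2 boils down to something like this (my untested sketch of the pattern, not a fix); the doubling comes from the fact that drawing happens at the focused screen's backing scale:
func resizeWithLockFocus(_ image: NSImage, to newSize: NSSize) -> NSImage {
    let resized = NSImage(size: newSize)
    resized.lockFocus()
    // Drawing happens at the current screen's backing scale, so on a 2x
    // display the bitmap behind this NSImage ends up with doubled pixels.
    image.draw(in: NSRect(origin: .zero, size: newSize),
               from: NSRect.zero, operation: .copy, fraction: 1.0)
    resized.unlockFocus()
    return resized
}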
This led me to do more research into option 1. It seems like a sound function, but it literally doesn't scale the input image, and the file size doubles when I save the returned image (presumably due to pixel density). I found another Stack post from someone with exactly the same problem, using exactly the same implementation (found here). Of the two suggested answers, the first doesn't work, and the second is the other implementation I've been trying to use.
If people could post Swift-ified answers, as opposed to Obj-C, I'd appreciate it very much!
EDIT:
Here's a copy of my implementation of the first solution. I've divided it into two functions:
func getSizeProportions(oWidth: CGFloat, oHeight: CGFloat) -> NSSize {
    var ratio: Float = 0.0
    let imageWidth = Float(oWidth)
    let imageHeight = Float(oHeight)
    var maxWidth = Float(0)
    var maxHeight = Float(600)

    if maxWidth == 0 {
        maxWidth = imageWidth
    }
    if maxHeight == 0 {
        maxHeight = imageHeight
    }

    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }

    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    return NSMakeSize(CGFloat(newWidth), CGFloat(newHeight))
}
func resizeImage(image: NSImage) -> NSImage {
    print("original: ", image.size.width, image.size.height)

    // Cast the NSImage to a CGImage
    var imageRect: CGRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)

    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: CGFloat(450), height: CGFloat(600))
    //let newSize = getSizeProportions(oWidth: CGFloat(image.size.width), oHeight: CGFloat(image.size.height))

    // Create NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    print("scaled: ", imageWithNewSize.size.width, imageWithNewSize.size.height)

    return NSImage(data: imageWithNewSize.tiffRepresentation!)!
}
EDIT 2:
As pointed out by Zneak, I need to save the returned image to disk. With both implementations, my save function writes the file to disk successfully. I don't think my save function could be interfering with my current resizing implementation, but I've attached it anyway, just in case:
func saveAction(image: NSImage, url: URL) {
    if let tiffdata = image.tiffRepresentation,
        let bitmaprep = NSBitmapImageRep(data: tiffdata) {

        let props = [NSImageCompressionFactor: Appearance.imageCompressionFactor]
        if let bitmapData = NSBitmapImageRep.representationOfImageReps(in: [bitmaprep], using: .JPEG, properties: props) {
            let path: NSString = "~/Desktop/out.jpg"
            let resolvedPath = path.expandingTildeInPath
            try! bitmapData.write(to: URL(fileURLWithPath: resolvedPath), options: [])
            print("Your image has been saved to \(resolvedPath)")
        }
    }
}

To anyone else experiencing this problem: I ended up spending countless hours trying to find a way to do this, and in the end just got the scaling factor of the screen (1 for regular Macs, 2 for Retina Macs). The code looks like this:
func getScaleFactor() -> CGFloat {
    return NSScreen.main()!.backingScaleFactor
}
Then, once you have the scale factor, you either scale normally or halve the dimensions for Retina:
if scaleFactor == 2 {
    // halve size proportions for saving on Retina Macs
    return NSMakeSize(CGFloat(oWidth * ratio) / 2, CGFloat(oHeight * ratio) / 2)
} else {
    return NSMakeSize(CGFloat(oWidth * ratio), CGFloat(oHeight * ratio))
}

Related

Create transparent texture in Swift

I just need to create a transparent texture (pixels with alpha 0).
func layerTexture() -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    return temparyTexture!
}
When I open temparyTexture using preview, it appears to be black. What is missing here?
UPDATE:
I just tried to create the texture using a transparent image. Code:
func layerTexture(imageData: Data) -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let bytesPerRow = width * 4
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    let region = MTLRegionMake2D(0, 0, width, height)
    imageData.withUnsafeBytes { (u8Ptr: UnsafePointer<UInt8>) in
        let rawPtr = UnsafeRawPointer(u8Ptr)
        temparyTexture?.replace(region: region, mipmapLevel: 0, withBytes: rawPtr, bytesPerRow: bytesPerRow)
    }
    return temparyTexture!
}
The method is called as follows:
let image = UIImage(named: "layer1.png")!
let imageData = UIImagePNGRepresentation(image)
self.layerTexture(imageData: imageData!)
where layer1.png is a transparent PNG. However, it crashes with the message "Thread 1: EXC_BAD_ACCESS (code=1, address=0x107e8c000)" at the point where I try to replace the texture contents. I believe this is because the image data is compressed, and the raw pointer should point to uncompressed data. How can I resolve this?
Am I on the right path, or headed in completely the wrong direction? Are there any other alternatives? All I need is to create a transparent texture.
Pre-edit: When you quick-look a transparent texture, it will appear black. I just double-checked with some code I have running stably in production - that is the expected result.
Post-edit: You are correct, you should not be copying PNG or JPEG data to a MTLTexture's contents directly. I would recommend doing something like this:
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue]
var status = CVPixelBufferCreate(nil, Int(image.size.width), Int(image.size.height),
                                 kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == noErr)

let coreImage = CIImage(image: image)!
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
context.render(coreImage, to: pixelBuffer!)

var textureWrapper: CVMetalTexture?
status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    GPUManager.shared.textureCache, pixelBuffer!, nil, .bgra8Unorm,
    CVPixelBufferGetWidth(pixelBuffer!), CVPixelBufferGetHeight(pixelBuffer!), 0, &textureWrapper)

let texture = CVMetalTextureGetTexture(textureWrapper!)!
// use texture now for your Metal texture. the texture is now map-bound to the CVPixelBuffer's underlying memory.
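Note that GPUManager.shared.textureCache above comes from a singleton that isn't shown here; you will need your own CVMetalTextureCache. A minimal sketch of creating one (create it once and reuse it, rather than per image):
import Metal
import CoreVideo

var textureCache: CVMetalTextureCache?
let device = MTLCreateSystemDefaultDevice()!
// The cache should outlive individual images; creating one per image is wasteful.
let cacheStatus = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
assert(cacheStatus == kCVReturnSuccess)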
The issue you are running into is that it is actually pretty hard to fully grasp how bitmaps work and how they can be laid out differently. Graphics is a very closed field with lots of esoteric terminology, some of which refers to things that take years to grasp, some of which refers to things that are trivial but people just picked a weird word to call them by. My main pointers are:
Get out of UIImage land as early in your code as possible. The best way to avoiding overhead and delays when you go into Metal land is to get your images into a GPU-compatible representation as soon as you can.
Once you are outside of UIImage land, always know your channel order (RGBA, BGRA). At any point in code that you are editing, you should have a mental model of what pixel format each CVPixelBuffer / MTLTexture has.
Read up on premultiplied vs non-premultiplied alpha, you may not run into issues with this, but it threw me off repeatedly when I was first learning.
total byte size of a bitmap/pixelbuffer = bytesPerRow * height
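That last identity also explains the EXC_BAD_ACCESS above: compressed PNG data is almost always smaller than the raw bitmap the texture expects, so replace(region:...) reads past the end of the buffer. A hypothetical pre-flight check, reusing the names from the question's layerTexture:
// replace(region:mipmapLevel:withBytes:bytesPerRow:) reads
// bytesPerRow * height bytes from the pointer; compressed PNG
// data is shorter than that, hence the crash.
let expectedBytes = bytesPerRow * height
precondition(imageData.count >= expectedBytes,
             "need \(expectedBytes) bytes of raw pixels, got \(imageData.count)")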

Images being flipped when adding to NSAttributedString

I have a strange problem when resizing an image that's in an NSAttributedString. The resizing extension works fine, but when the image is added to the NSAttributedString, it gets flipped vertically for some reason.
This is the resizing extension:
extension NSImage {
    func resize(containerWidth: CGFloat) -> NSImage {
        var scale: CGFloat = 1.0
        let currentWidth = self.size.width
        let currentHeight = self.size.height
        if currentWidth > containerWidth {
            scale = (containerWidth * 0.9) / currentWidth
        }
        let newWidth = currentWidth * scale
        let newHeight = currentHeight * scale
        self.size = NSSize(width: newWidth, height: newHeight)
        return self
    }
}
And here is the enumeration over the images in the attributed string:
newAttributedString.enumerateAttribute(NSAttributedStringKey.attachment, in: NSMakeRange(0, newAttributedString.length), options: []) { value, range, stop in
    if let attachement = value as? NSTextAttachment {
        let image = attachement.image(forBounds: attachement.bounds, textContainer: NSTextContainer(), characterIndex: range.location)!
        let newImage = image.resize(containerWidth: markdown.bounds.width)
        let newAttribute = NSTextAttachment()
        newAttribute.image = newImage
        newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
    }
}
I've set breakpoints and inspected the images, and they are all correctly oriented until execution reaches this line:
newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
where the image gets flipped vertically.
I have no clue what could be causing this vertical flip. Is there a way to fix this?
If you look at the developer docs for NSTextAttachment:
https://developer.apple.com/documentation/uikit/nstextattachment
The bounds parameter is defined as follows:
“Defines the layout bounds of the receiver's graphical representation in the text coordinate system.”
I know that when using CoreText to lay out text, you need to flip the coordinates, so I imagine you need to transform your bounds parameter with a vertical reflection too.
Hope that helps.
I figured it out, and it was so much simpler than I was making it.
Because the image was in an NSAttributedString being appended into an NSTextView, I didn't need to resize each image in the NSAttributedString; rather, I just had to set the attachment scaling inside the NSTextView with
markdown.layoutManager?.defaultAttachmentScaling = NSImageScaling.scaleProportionallyDown
One line is all it took
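For context, a minimal sketch of where that line lives (assuming, as in the question, that markdown is the NSTextView receiving the attributed string):
// Let the layout manager scale oversized attachment images down to fit the
// text container, instead of resizing each NSTextAttachment by hand.
markdown.layoutManager?.defaultAttachmentScaling = NSImageScaling.scaleProportionallyDown
markdown.textStorage?.setAttributedString(newAttributedString)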

I need help integrating a specific UIImage resizing extension into my current draw CGRect function

I found this extension online. It allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when an image is saved to the camera roll after my draw function runs, it reverts to "scale fill" regardless of the content mode of the image view. I suspect this is because I'm drawing the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using this extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fit within bounds of the passed size, keeping the aspect ratio.
    /// As written (using max), this is aspect fill; switch max to min for aspect fit.
    ///
    /// - parameter newSize: the size of the bounds the image must fit within.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)

        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0

        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so it can be saved to the camera roll (it combines two images: a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))

    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
None of the tutorials I've found on using extensions cover how to pass variables in and out the way this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage".)
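For concreteness, a sketch of how that call might slot into the draw function from the question (untested; currentImage, imageView, framesAr, and img are the question's own properties):
func drawImagesAndText() {
    let targetSize = CGSize(width: imageView.bounds.size.width,
                            height: imageView.bounds.size.height)
    let renderer = UIGraphicsImageRenderer(size: targetSize)

    img = renderer.image { ctx in
        // Scale the photo with the extension so it aspect-fills the canvas
        // instead of being stretched to the image view's bounds.
        let scaledBackground = currentImage?.scaleImageToSize(newSize: targetSize)
        scaledBackground?.draw(in: CGRect(origin: .zero, size: targetSize))

        // The frame overlay is drawn at full canvas size, as in the question.
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}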

Image downsizing while loop, can't get below 400kB no matter how many iterations are run

func downsizeImage(image: UIImage) -> Data {
    var imagePointer = image
    let targetDataSize: CGFloat = 256.0 * 256
    var imageData = UIImageJPEGRepresentation(image, CGFloat(1.0))!

    while CGFloat(imageData.count) > targetDataSize {
        let newProportion = targetDataSize / CGFloat(imageData.count)
        print("image data size is \(imageData.count)\n")
        imageData = UIImageJPEGRepresentation(imagePointer, CGFloat(newProportion))!
        imagePointer = UIImage(data: imageData)!
    }
    return imageData
}
image data size is 6581432
image data size is 391167
image data size is 394974
image data size is 394915
image data size is 394845
Any clue what my problem is?
The quality parameter to UIImageJPEGRepresentation is not a proportion of image size. Rather, it specifies how lossy the JPEG compression can be. A quality of zero produces the lossiest and smallest file the algorithm can generate for your data, but there is no guarantee it will be a certain size. The actual size depends on the dimensions and complexity of the image, in terms of color/spatial frequency, since that's what the compression algorithm discards. If you need your image to be a certain size on disk, I suggest you downsample it to a lower resolution and then use a JPEG quality setting between 0.4 and 0.6; lower values tend to produce a bunch of artifacts, depending on the type of image you are compressing.
EDIT: added code for a downsampling extension to UIImage:
extension UIImage {
    func resize(to proportion: CGFloat) -> UIImage {
        let newSize = CGSize(width: size.width * proportion, height: size.height * proportion)
        return UIGraphicsImageRenderer(size: newSize).image { context in
            self.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
}
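A sketch of how the question's loop might use this extension, holding JPEG quality constant and shrinking the resolution instead (the 0.5 quality and 0.9 step are illustrative values, not tuned):
func downsizeImage(image: UIImage, targetBytes: Int = 400 * 1024) -> Data {
    var current = image
    // Hold JPEG quality constant and reduce resolution until under target.
    var data = UIImageJPEGRepresentation(current, 0.5)!
    while data.count > targetBytes {
        current = current.resize(to: 0.9)   // shave 10% off each dimension per pass
        data = UIImageJPEGRepresentation(current, 0.5)!
    }
    return data
}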

Continually scrolling dynamic bitmap in Core Graphics

I have been struggling with a performance-related issue in a simple Cocoa-based visualization program that continually draws a heat map as a bitmap and displays the bitmap on screen. Broadly, the heat map is a continually scrolling source of realtime information. At each redraw, new pixels are added at the right side of the bitmap and the bitmap is shifted to the left, creating an ongoing flow of information.
My current approach is implemented in a custom NSView, with a circular buffer for the underlying pixels. The image is actually drawn rotated 90 degrees, so adding new data is simply a matter of appending to the buffer (adding rows to the image). I then use CGDataProviderCreateWithData to create a Core Graphics image and draw it to a rotated context. (The code is below, although it is less important to the question.)
My hope is to find a more performant way of achieving this. Redrawing the full bitmap each time seems excessive, and in my attempts it uses a surprisingly large amount of CPU (~20-30%). I feel like there should be a way to have the GPU cache but shift the existing pixels, and then append the new data.
I am considering an approach that uses two bitmap contexts created with CGBitmapContextCreate: one context for what is currently on the screen, and one for modification. Prior pixels would be copied from one context to the other, but I am not sure that will improve performance significantly. Before I proceed too far down that rabbit hole, are there better ways to handle such updating?
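In sketch form, the copy-shift step I have in mind would look something like this (untested, and using the newer CGContext API for brevity):
import CoreGraphics

// One persistent context holds the current frame; each update shifts the
// old pixels left and leaves a vacant strip on the right for new columns.
let width = 1024, height = 512
let context = CGContext(data: nil, width: width, height: height,
                        bitsPerComponent: 8, bytesPerRow: width * 4,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)!

func scroll(by newColumns: Int) {
    // Snapshot the context's current contents and redraw them shifted left.
    if let snapshot = context.makeImage() {
        context.draw(snapshot, in: CGRect(x: -newColumns, y: 0,
                                          width: width, height: height))
    }
    // ... then draw only the fresh columns into
    // CGRect(x: width - newColumns, y: 0, width: newColumns, height: height)
}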
Here is the relevant code from my current implementation, although I think I am more in need of higher-level guidance:
class PlotView: NSView {
    private let bytesPerPixel = 4
    private let height = 512
    private let bufferLength = 5242880
    private var buffer: TPCircularBuffer

    // ... Other code that appends data to buffer and sets needsDisplay ...

    override func drawRect(dirtyRect: NSRect) {
        super.drawRect(dirtyRect)

        // get context
        guard let context = NSGraphicsContext.currentContext() else {
            return
        }

        // get colorspace
        guard let colorSpace = CGColorSpaceCreateDeviceRGB() else {
            return
        }

        // number of samples to draw
        let maxSamples = Int(frame.width) * height
        let maxBytes = maxSamples * bytesPerPixel

        // bytes for reading
        var availableBytes: Int32 = 0
        let bufferStart = UnsafeMutablePointer<UInt8>(TPCircularBufferTail(&buffer, &availableBytes))

        let samplesStart: UnsafeMutablePointer<UInt8>
        let samplesCount: Int
        let samplesBytes: Int

        // buffer management
        if Int(availableBytes) > maxBytes {
            // advance buffer start
            let bufferOffset = Int(availableBytes) - maxBytes

            // set sample start
            samplesStart = bufferStart.advancedBy(bufferOffset)

            // consume values
            TPCircularBufferConsume(&buffer, Int32(bufferOffset))

            // number of samples
            samplesCount = maxSamples
            samplesBytes = maxBytes
        }
        else {
            // set to start
            samplesStart = bufferStart

            // number of samples
            samplesBytes = Int(availableBytes)
            samplesCount = samplesBytes / bytesPerPixel
        }

        // get dimensions
        let rows = height
        let columns = samplesCount / rows

        // bitmap info
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)

        // counts
        let bytesPerComponent = sizeof(UInt8)
        let bytesPerPixel = sizeof(UInt8) * bytesPerPixel

        // prepare image
        let provider = CGDataProviderCreateWithData(nil, samplesStart, samplesBytes, nil)
        let image = CGImageCreate(rows, columns, 8 * bytesPerComponent, 8 * bytesPerPixel, rows * bytesPerPixel, colorSpace, bitmapInfo, provider, nil, false, .RenderingIntentDefault)

        // make rotated size
        let rotatedSize = CGSize(width: frame.width, height: frame.height)
        let rotatedCenterX = rotatedSize.width / 2, rotatedCenterY = rotatedSize.height / 2

        // rotate, from: http://stackoverflow.com/questions/10544887/rotating-a-cgimage
        CGContextTranslateCTM(context.CGContext, rotatedCenterX, rotatedCenterY)
        CGContextRotateCTM(context.CGContext, CGFloat(0 - M_PI_2))
        CGContextScaleCTM(context.CGContext, 1, -1)
        CGContextTranslateCTM(context.CGContext, -rotatedCenterY, -rotatedCenterX)

        // draw
        CGContextDrawImage(context.CGContext, NSRect(origin: NSPoint(x: rotatedSize.height - CGFloat(rows), y: 0), size: NSSize(width: rows, height: columns)), image)
    }
}
}