Memory leak when making CGImage from MTLTexture (Swift, macOS)

I have a Metal app and I'm trying to export frames to a QuickTime movie. I'm rendering frames at super high resolution and then scaling them down before writing, in order to antialias the scene.
To scale a frame down, I take the hi-res texture, convert it to a CGImage, resize that image, and write out the smaller version. I have this extension, which I found online, for converting an MTLTexture to a CGImage:
extension MTLTexture {
func bytes() -> UnsafeMutableRawPointer {
let width = self.width
let height = self.height
let rowBytes = self.width * 4
let p = malloc(width * height * 4)
self.getBytes(p!, bytesPerRow: rowBytes, from: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)
return p!
}
func toImage() -> CGImage? {
let p = bytes()
let pColorSpace = CGColorSpaceCreateDeviceRGB()
let rawBitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue // noneSkipFirst
let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)
let size = self.width * self.height * 4
let rowBytes = self.width * 4
let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
// https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
// N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
return
}
if let provider = CGDataProvider(dataInfo: nil, data: p, size: size, releaseData: releaseMaskImagePixelData) {
let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
p.deallocate() //this fixes the memory leak
return cgImageRef
}
p.deallocate() //this fixes the memory leak
return nil
}
} // end extension
I'm not positive, but it seems like something in this function is causing the memory leak -- with every frame it holds on to the amount of memory in the giant texture / CGImage and never releases it.
The CGDataProvider initializer takes that 'releaseData' callback argument, but I was under the impression that it was no longer needed.
I also have a resizing extension on CGImage -- this might also cause a leak, I don't know. However, I can comment out the resizing and writing of the frame and the memory leak still builds up, so it seems to me that the conversion to CGImage is the main problem.
extension CGImage {
func resize(_ scale:Float) -> CGImage? {
let imageWidth = Float(width)
let imageHeight = Float(height)
let w = Int(imageWidth * scale)
let h = Int(imageHeight * scale)
guard let colorSpace = colorSpace else { return nil }
guard let context = CGContext(data: nil, width: w, height: h, bitsPerComponent: bitsPerComponent, bytesPerRow: Int(Float(bytesPerRow)*scale), space: colorSpace, bitmapInfo: alphaInfo.rawValue) else { return nil }
// draw image to context (resizing it)
context.interpolationQuality = .high
let r = CGRect(x: 0, y: 0, width: w, height: h)
context.clear(r)
context.draw(self, in:r)
// extract resulting image from context
return context.makeImage()
}
}
Finally, here is the big function that I call every frame when exporting. I'm sorry for the length, but it is probably better to provide too much information than too little. Basically, at the start of rendering I allocate a giant MTLTexture ('exportTextureBig'), the size of my normal screen multiplied by 'zoom_subdivisions' in each direction. I render the scene in chunks, one for each spot on the grid, and assemble the large frame by using blitCommandEncoder.copy() to copy each small chunk onto the large texture. Once the entire frame is filled in, I try to make a CGImage from it, scale it down to another CGImage, and write that out.
I'm calling commandBuffer.waitUntilCompleted() every frame while exporting -- hoping to avoid having the renderer hold on to textures that it is still using.
func exportFrame2(_ commandBuffer:MTLCommandBuffer, _ texture:MTLTexture) { // texture is the offscreen render target for the screen-size chunks
if zoom_index < zoom_subdivisions*zoom_subdivisions { // copy screen-size chunk to large texture
if let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder() {
let dx = Int(BigRender.globals_L.displaySize.x) * (zoom_index%zoom_subdivisions)
let dy = Int(BigRender.globals_L.displaySize.y) * (zoom_index/zoom_subdivisions)
blitCommandEncoder.copy(from:texture,
sourceSlice: 0,
sourceLevel: 0,
sourceOrigin: MTLOrigin(x:0,y:0,z:0),
sourceSize: MTLSize(width:Int(BigRender.globals_L.displaySize.x),height:Int(BigRender.globals_L.displaySize.y), depth:1),
to:BigVideoWriter!.exportTextureBig!,
destinationSlice: 0,
destinationLevel: 0,
destinationOrigin: MTLOrigin(x:dx,y:dy,z:0))
blitCommandEncoder.synchronize(resource: BigVideoWriter!.exportTextureBig!)
blitCommandEncoder.endEncoding()
}
}
commandBuffer.commit()
commandBuffer.waitUntilCompleted() // do this instead
// is big frame complete?
if (zoom_index == zoom_subdivisions*zoom_subdivisions-1) {
// shrink the big texture here
if let cgImage = self.exportTextureBig!.toImage() { // memory leak here?
// this can be commented out and memory leak still happens
if let smallImage = cgImage.resize(1.0/Float(zoom_subdivisions)) {
writeFrame(nil, smallImage)
}
}
}
}
This all works, except for the huge memory leak. Is there something I can do to make it release the CGImage data each frame? Why is it holding onto it?
Thanks very much for any suggestions!

I think you've misunderstood the issue with CGDataProviderReleaseDataCallback and CGDataProviderRelease() being unavailable.
CGDataProviderRelease() is (in C) used to release the CGDataProvider object itself. But that's not the same thing as the byte buffer that you've provided to the CGDataProvider when you created it.
In Swift, the lifetime of the CGDataProvider object is managed for you, but that doesn't help deallocate the byte buffer.
Ideally, CGDataProvider would be able to automatically manage the lifetime of the byte buffer, but it can't. CGDataProvider doesn't know how to release that byte buffer because it doesn't know how it was allocated. That's why you have to provide a callback that it can use to release it. You are essentially providing the knowledge of how to release the byte buffer.
Since you're using malloc() to allocate the byte buffer, your callback needs to free() it.
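For example, a minimal sketch of such a callback (using the same signature as in the question; with this in place the provider owns the buffer, so the later p.deallocate() calls should be removed):
let releasePixelData: CGDataProviderReleaseDataCallback = { info, data, size in
    // 'data' is the malloc()'d pointer that was handed to the CGDataProvider, so free() it here.
    free(UnsafeMutableRawPointer(mutating: data))
}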
That said, you'd be much better off using CFMutableData rather than UnsafeMutableRawPointer. Then, create the data provider using CGDataProvider(data:). In this case, all of the memory is managed for you.
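A minimal sketch of toImage() rewritten that way, inside the same MTLTexture extension (this copies the pixels into a CFData; the bitmap parameters mirror the question's, and a CFMutableData filled in place would avoid the extra copy):
func toImage() -> CGImage? {
    let rowBytes = self.width * 4
    let byteCount = rowBytes * self.height
    var pixels = [UInt8](repeating: 0, count: byteCount)
    self.getBytes(&pixels, bytesPerRow: rowBytes,
                  from: MTLRegionMake2D(0, 0, self.width, self.height), mipmapLevel: 0)
    // The CFData (and the provider built on it) is memory-managed; no manual free is needed.
    guard let data = CFDataCreate(nil, pixels, byteCount),
          let provider = CGDataProvider(data: data) else { return nil }
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
    return CGImage(width: self.width, height: self.height,
                   bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes,
                   space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: bitmapInfo,
                   provider: provider, decode: nil, shouldInterpolate: true,
                   intent: .defaultIntent)
}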

I'm using very similar code. Once I added code to deallocate p, the issue was solved:
func toImage() -> CGImage? {
let p = bytes()
let pColorSpace = CGColorSpaceCreateDeviceRGB()
let rawBitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue // noneSkipFirst
let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)
let size = self.width * self.height * 4
let rowBytes = self.width * 4
let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
// https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
// N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
return
}
if let provider = CGDataProvider(dataInfo: nil, data: p, size: size, releaseData: releaseMaskImagePixelData) {
let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
p.deallocate() //this fixes the memory leak
return cgImageRef
}
p.deallocate() //this fixes the memory leak, but the data provider is no longer available (you just deallocated its backing store)
return nil
}
Anywhere you need to rapidly use CGImage, wrap the work in an autoreleasepool:
autoreleasepool {
let lastDrawableDisplayed = self.metalView?.currentDrawable?.texture
let cgImage = lastDrawableDisplayed?.toImage() // your code to convert drawable to CGImage
// do work with cgImage
}

Related

iOS: How to get pixel data array from CGImage in Swift

I need to get pixel data as a byte array from a CGImage that can be RGB8, RGB16, GRAYSCALE8 or GRAYSCALE16. Previous solutions such as this one produce a dark or distorted image.
Per the link provided in your question, you can get the pixel data by doing:
extension UIImage {
func pixelData() -> [UInt8]? {
let size = self.size
let dataSize = size.width * size.height * 4
var pixelData = [UInt8](repeating: 0, count: Int(dataSize))
let colorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: &pixelData,
width: Int(size.width),
height: Int(size.height),
bitsPerComponent: 8,
bytesPerRow: 4 * Int(size.width),
space: colorSpace,
bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
guard let cgImage = self.cgImage else { return nil }
context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
return pixelData
}
}
However, the parameters to pay attention to here are bitmapInfo and colorSpace. Your image may come out distorted or with the wrong colors depending on the values provided. The exact solution depends on how you obtained the image and what color space it uses; you may just need to experiment with these values.
I've never had an issue using CGColorSpaceCreateDeviceRGB() as my colorSpace, but I have had to alter my bitmapInfo many times because my images were coming in with a different value.
See the CGBitmapInfo documentation for the different types of bitmaps; more than likely, though, you only need a variation of CGImageAlphaInfo.
If necessary, you can change the colorSpace as well; see the CGColorSpace documentation, though you can probably get away with one of the default color spaces.
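For example, a grayscale variant of the same idea might look like this (a sketch only, assuming UIKit is imported; the context parameters are assumptions you'd adjust to match your source image):
extension UIImage {
    // Sketch: extract one byte per pixel by drawing into a grayscale bitmap context.
    func grayscalePixelData() -> [UInt8]? {
        guard let cgImage = self.cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height
        var pixelData = [UInt8](repeating: 0, count: width * height)
        let colorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: &pixelData,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width, // 1 byte per pixel
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return nil }
        // Core Graphics converts the source into the grayscale color space while drawing.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelData
    }
}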

Convert a CGImage to MTLTexture without premultiplication

I have a UIImage which I've previously created from a png file:
let strokeUIImage = UIImage(data: pngData)
I want to convert strokeUIImage (which has opacity) to an MTLTexture for display in an MTKView, but doing the conversion seems to perform an unwanted premultiplication, which darkens all the semitransparent edges.
My blending settings are as follows:
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
I've tried two methods of conversion:
let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
and the more elaborate dataProvider-driven method:
let image = strokeUIImage.cgImage!
let imageWidth = image.width
let imageHeight = image.height
let bytesPerPixel:Int! = 4
let rowBytes = imageWidth * bytesPerPixel
let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
width: imageWidth,
height: imageHeight,
mipmapped: false)
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }
let srcData: CFData! = image.dataProvider?.data
let pixelData = CFDataGetBytePtr(srcData)
let region = MTLRegionMake2D(0, 0, imageWidth, imageHeight)
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
both of which yield the same unwanted premultiplied result.
I tried the latter because some posts suggested that the old Swift 3 method CGDataProviderCopyData() extracts raw pixel data from the image that is not premultiplied. Sadly, the equivalent:
let srcData: CFData! = image.dataProvider?.data
does not seem to do the trick. Am I missing something?
Any pointers would be appreciated.
After much experimenting, I've come to a solution which addresses the pre-multiplication issue inherent in Core Graphics images. Thanks to Warren's tip about using an Accelerate function (vImageUnpremultiplyData_ARGB8888 in particular), I thought: why not build a CGImage using vImage_CGImageFormat, which allows me to play with the bitmapInfo setting that specifies how to interpret alpha? The result is not perfect, as demonstrated by the image attachment below:
Somehow, in the translation the alpha values are getting punched up slightly (possibly the RGB as well, but not significantly). By the way, I should point out that the PNG pixel format is sRGB, and the MTKView I'm using is set to MTLPixelFormat.rgba16Float (an app requirement).
Below is the full metalDrawStrokeUIImage routine I implemented. Of particular note is the line:
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)
which essentially unassociates the alpha (I think) without calling vImageUnpremultiplyData_ARGB8888. Looking at the result, it certainly looks like an un-premultiplied image...
Lastly, to get back a premultiplied texture on the MTKView side, I let the fragment shader handle the pre-multiplication:
fragment float4 premult_fragment(VertexOut interpolated [[stage_in]],
texture2d<float> texture [[texture(0)]],
sampler sampler2D [[sampler(0)]]) {
float4 sampled = texture.sample(sampler2D, interpolated.texCoord);
// this fragment shader premultiplies incoming rgb with texture's alpha
return float4(sampled.r * sampled.a,
sampled.g * sampled.a,
sampled.b * sampled.a,
sampled.a );
} // end of premult_fragment
The result is pretty close to the input source, but the image is maybe 5% more opaque than the incoming PNG. Again, the PNG pixel format is sRGB, and the MTKView I'm using to display it is set to MTLPixelFormat.rgba16Float. So I'm sure something is getting mushed somewhere. If anyone has any pointers, I'd sure appreciate it.
Below is the rest of the relevant code:
func metalDrawStrokeUIImage (strokeUIImage: UIImage, strokeBbox: CGRect) {
self.metalSetupRenderPipeline(compStyle: compMode.strokeCopy) // needed so stampTexture is not modified by fragmentFunction
let bytesPerPixel = 4
let bitsPerComponent = 8
let width = Int(strokeUIImage.size.width)
let height = Int(strokeUIImage.size.height)
let rowBytes = width * bytesPerPixel
//
let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
width: width,
height: height,
mipmapped: false)
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }
//let cgImage: CGImage = strokeUIImage.cgImage!
//let sourceColorSpace = cgImage.colorSpace else {
guard
let cgImage = strokeUIImage.cgImage,
let sourceColorSpace = cgImage.colorSpace else {
print("Unable to initialize cgImage or colorSpace.")
return
}
var format = vImage_CGImageFormat(
bitsPerComponent: UInt32(cgImage.bitsPerComponent),
bitsPerPixel: UInt32(cgImage.bitsPerPixel),
colorSpace: Unmanaged.passRetained(sourceColorSpace),
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue),
version: 0, decode: nil,
renderingIntent: CGColorRenderingIntent.defaultIntent)
var sourceBuffer = vImage_Buffer()
defer {
free(sourceBuffer.data)
}
var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else {
print ("[MetalBrushStrokeView]: can't vImageBuffer_InitWithCGImage")
return
}
//vImagePremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, numericCast(kvImageNoFlags))
// create a CGImage from vImage_Buffer
var destCGImage = vImageCreateCGImageFromBuffer(&sourceBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else {
print ("[MetalBrushStrokeView]: can't vImageCreateCGImageFromBuffer")
return
}
let dstData: CFData = (destCGImage!.dataProvider!.data)!
let pixelData = CFDataGetBytePtr(dstData)
destCGImage = nil
let region = MTLRegionMake2D(0, 0, Int(width), Int(height))
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
let stampColor = UIColor.white
let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
self.stampAppendToVertexBuffer(stampLayer: stampLayerMode.stampLayerFG, stampCorners: stampCorners, stampColor: stampColor)
self.metalRenderStampSingle(stampTexture: stampTexture)
self.initializeStampArray() // clears out the stamp array so we always draw 1 stamp at a time
} // end of func metalDrawStrokeUIImage (strokeUIImage: UIImage, strokeBbox: CGRect)

How can you make a CVPixelBuffer directly from a CIImage instead of a UIImage in Swift?

I am recording filtered video through an iPhone camera, and there is a huge increase in CPU usage when converting a CIImage to a UIImage in real time while recording. My buffer function to make a CVPixelBuffer uses a UIImage, which so far requires me to make this conversion. I'd like to instead make a buffer function that takes a CIImage, if possible, so I can skip the conversion from CIImage to UIImage. I'm thinking this will give me a huge boost in performance when recording video, since there won't be any handoff between the CPU and GPU.
This is what I have right now. Within my captureOutput function, I create a UIImage from the CIImage, which is the filtered image. I create a CVPixelBuffer from the buffer function using the UIImage, and append it to the assetWriter's pixelBufferInput:
let imageUI = UIImage(ciImage: ciImage)
let filteredBuffer:CVPixelBuffer? = buffer(from: imageUI)
let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)
My buffer function that uses a UIImage:
func buffer(from image: UIImage) -> CVPixelBuffer? {
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
var pixelBuffer : CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
guard (status == kCVReturnSuccess) else {
return nil
}
CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
let videoRecContext = CGContext(data: pixelData,
width: Int(image.size.width),
height: Int(image.size.height),
bitsPerComponent: 8,
bytesPerRow: videoRecBytesPerRow,
space: (MTLCaptureView?.colorSpace)!, // It's getting the current colorspace from a MTKView
bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
videoRecContext?.translateBy(x: 0, y: image.size.height)
videoRecContext?.scaleBy(x: 1.0, y: -1.0)
UIGraphicsPushContext(videoRecContext!)
image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
UIGraphicsPopContext()
CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pixelBuffer
}
Create a CIContext and use it to render the CIImage directly to your CVPixelBuffer using CIContext.render(_: CIImage, to buffer: CVPixelBuffer).
rob mayoff's answer sums it up, but there's a VERY-VERY-VERY important thing to keep in mind:
Core Image defers the rendering until the client requests access to the frame buffer, i.e. via CVPixelBufferLockBaseAddress.
I learned this from speaking with an Apple technical support engineer and couldn't find it in any of the docs. I've only used this on macOS, but I imagine it's no different on iOS.
Keep in mind that if you lock the buffer before rendering, it will still work, but it will run one frame behind and the first render will be empty.
Finally, it's mentioned more than once on SO and even in this thread: avoid creating a new CVPixelBuffer for each render, because each buffer takes up a ton of system resources. This is why we have CVPixelBufferPool – Apple uses it in their frameworks, and so can you, to achieve even better performance! ✌️
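A rough sketch of the pool idea (the dimensions, pixel format, and the ciContext / ciImage names are illustrative, not taken from the answers above):
// Set up the pool once, matching your output size and format.
var pixelBufferPool: CVPixelBufferPool?
let bufferAttrs: [CFString: Any] = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
                                    kCVPixelBufferWidthKey: 1920,
                                    kCVPixelBufferHeightKey: 1080]
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, bufferAttrs as CFDictionary, &pixelBufferPool)
// Per frame: draw a buffer from the pool instead of calling CVPixelBufferCreate.
var pixelBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool!, &pixelBuffer)
// Render first (Core Image defers the work until the buffer is read/locked), then append it.
ciContext.render(ciImage, to: pixelBuffer!)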
To extend the answer I got from rob mayoff, I'll show what I changed below:
Within the captureOutput function, I changed my code to:
let filteredBuffer : CVPixelBuffer? = buffer(from: ciImage)
filterContext?.render(ciImage, to: filteredBuffer!)
let success = self.assetWriterPixelBufferInput?.append(filteredBuffer!, withPresentationTime: self.currentSampleTime!)
Notice the buffer function now takes a ciImage. I reworked my buffer function to accept a CIImage, and was able to get rid of a lot of what was inside:
func buffer(from image: CIImage) -> CVPixelBuffer? {
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
var pixelBuffer : CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.extent.width), Int(image.extent.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
guard (status == kCVReturnSuccess) else {
return nil
}
return pixelBuffer
}

Swift Metal save bgra8Unorm texture to PNG file

I have a kernel that outputs a texture, and it is a valid MTLTexture object. I want to save it to a png file in the working directory of my project. How should this be done?
The texture format is .bgra8Unorm, and the target output format is PNG.
The texture is stored in an MTLTexture object.
EDIT: I am on macOS, using Xcode.
If your app is using Metal on macOS, the first thing you need to do is ensure that your texture data can be read by the CPU. If the texture that's being written by the kernel is in .private storage mode, that means you'll need to blit (copy) from the texture into another texture in .managed mode. If your texture is starting out in .managed storage, you probably need to create a blit command encoder and call synchronize(resource:) on the texture to ensure that its contents on the GPU are reflected on the CPU:
if let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
blitEncoder.synchronize(resource: outputTexture)
blitEncoder.endEncoding()
}
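For the .private case mentioned above, a rough sketch of copying into a CPU-readable texture might look like this (device, commandBuffer, and outputTexture are assumed names, matching the snippet above):
// Sketch: blit the .private texture into a .managed texture the CPU can read.
let readableDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: outputTexture.pixelFormat,
                                                                  width: outputTexture.width,
                                                                  height: outputTexture.height,
                                                                  mipmapped: false)
readableDescriptor.storageMode = .managed
let readableTexture = device.makeTexture(descriptor: readableDescriptor)!
if let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
    blitEncoder.copy(from: outputTexture,
                     sourceSlice: 0, sourceLevel: 0,
                     sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                     sourceSize: MTLSize(width: outputTexture.width, height: outputTexture.height, depth: 1),
                     to: readableTexture,
                     destinationSlice: 0, destinationLevel: 0,
                     destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blitEncoder.synchronize(resource: readableTexture)
    blitEncoder.endEncoding()
}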
Once the command buffer completes (which you can wait on by calling waitUntilCompleted or by adding a completion handler to the command buffer), you're ready to copy the data and create an image:
func makeImage(for texture: MTLTexture) -> CGImage? {
assert(texture.pixelFormat == .bgra8Unorm)
let width = texture.width
let height = texture.height
let pixelByteCount = 4 * MemoryLayout<UInt8>.size
let imageBytesPerRow = width * pixelByteCount
let imageByteCount = imageBytesPerRow * height
let imageBytes = UnsafeMutableRawPointer.allocate(byteCount: imageByteCount, alignment: pixelByteCount)
defer {
imageBytes.deallocate()
}
texture.getBytes(imageBytes,
bytesPerRow: imageBytesPerRow,
from: MTLRegionMake2D(0, 0, width, height),
mipmapLevel: 0)
swizzleBGRA8toRGBA8(imageBytes, width: width, height: height)
guard let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB) else { return nil }
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
guard let bitmapContext = CGContext(data: nil,
width: width,
height: height,
bitsPerComponent: 8,
bytesPerRow: imageBytesPerRow,
space: colorSpace,
bitmapInfo: bitmapInfo) else { return nil }
bitmapContext.data?.copyMemory(from: imageBytes, byteCount: imageByteCount)
let image = bitmapContext.makeImage()
return image
}
You'll notice a call in the middle of this function to a utility function called swizzleBGRA8toRGBA8. This function swaps the bytes in the image buffer so that they're in the RGBA order expected by CoreGraphics. It uses vImage (be sure to import Accelerate) and looks like this:
func swizzleBGRA8toRGBA8(_ bytes: UnsafeMutableRawPointer, width: Int, height: Int) {
var sourceBuffer = vImage_Buffer(data: bytes,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width * 4)
var destBuffer = vImage_Buffer(data: bytes,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width * 4)
var swizzleMask: [UInt8] = [ 2, 1, 0, 3 ] // BGRA -> RGBA
vImagePermuteChannels_ARGB8888(&sourceBuffer, &destBuffer, &swizzleMask, vImage_Flags(kvImageNoFlags))
}
Now we can write a function that enables us to write a texture to a specified URL:
func writeTexture(_ texture: MTLTexture, url: URL) {
guard let image = makeImage(for: texture) else { return }
if let imageDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypePNG, 1, nil) {
CGImageDestinationAddImage(imageDestination, image, nil)
CGImageDestinationFinalize(imageDestination)
}
}
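Putting it together, a rough usage sketch (commandBuffer, outputTexture, and the output path are placeholders from the snippets above, not a fixed API):
// Schedule the synchronize blit, wait for the GPU, then write the PNG.
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
let outputURL = URL(fileURLWithPath: "output.png") // illustrative path
writeTexture(outputTexture, url: outputURL)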

Drawing a simple line on a JPEG image

I'm stuck again with an apparently simple question.
I loaded a JPEG file into a CGImage. I got the correct values for width and height (in pixels) and was able to show "myImage" in an image view controller. But I wanted to add some graphics on this image and found that I should instead get it into an NSImage. So I did, but got different (proportional) values for width and height: 595.08 instead of 1653, and 841.68 instead of 2338, respectively.
I tried to create an NSGraphicsContext from a CGContext 'gc' for drawing (a simple line and a rectangle), which resulted in a "Value of optional type 'CGContext?' not unwrapped, did you mean to use '!' or '?'?" error... I'm lost...
// with NSData
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
let imageWidth = myImage!.width
let imageHeight = myImage!.height
// with NSImage, now
//
let imageAsNSImage=NSImage(contentsOf: chosenFiles[0])
let imageSize=imageAsNSImage?.size // ---> 0.36 * pixels
// creating a CG context and drawing
//
let colorSpace:CGColorSpace = CGColorSpaceCreateDeviceRGB()
let gc = CGContext(data: nil, width: imageWidth, height: imageHeight, bitsPerComponent: 8, bytesPerRow: 0,space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
let NSGContext = NSGraphicsContext(cgContext: gc, flipped: true)
let currentContext = NSGraphicsContext.current() // Cocoa GC object appropriate for the current drawing environment
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.current = NSGContext
NSGContext?.beginPath()
NSGContext?.setStrokeColor(redColor)
NSGContext?.setLineWidth(50.0)
NSGContext?.move(to: targetStart)
NSGContext?.addLine(to: targetEnd)
NSGContext?.setStrokeColor(grayColor)
NSGContext?.setFillColor(grayColor)
NSGContext?.addRect(ROIRect)
NSGContext?.closePath()
NSGContext.restoreGraphicsState()
imageAsNSImage?.draw(at: NSZeroPoint, from: NSZeroRect, operation: NSCompositeSourceOver, fraction: 1.0)
imageAsNSImage?.unlockFocus()
NSGraphicsContext.setcurrent(currentContext)
myImageView.image = imageAsNSImage // image & drawings should show in View
Drawing a simple line on a JPEG image
// load JPEG from main bundle
guard let path = Bundle.main.pathForImageResource(NSImage.Name("picture.jpg")),
let image = NSImage(contentsOfFile: path)
else { fatalError() }
let size = image.size
image.lockFocus() // prepare image for drawing
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: .zero, to: NSPoint(x: size.width, y: size.height))
image.unlockFocus() // drawing commands done
The code above strokes a red line from the lower left corner to the top right.
If you have an NSImageView at hand you can use the image directly:
@IBOutlet weak var imageView: NSImageView!
...
imageView.image = image
Thanks to djromero, here is the solution I just reached:
// Load the JPEG image from disk into a CGImage
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
// Create a NSImage from the CGImage (with the same width and height in pixels)
//
let imageAsNSImage=NSImage(cgImage: myImage!, size: NSZeroSize)
// Drawing a simple line
//
imageAsNSImage.lockFocusFlipped(true) // Otherwise, the origin is at the lower left corner
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: targetStart, to: targetEnd)
imageAsNSImage.unlockFocus()
// Show the NSImage in the NSImageView
//
myImageView.image = imageAsNSImage