CGImage to MPSTexture or MPSImage - swift

I have a CGImage which is constructed from a CVPixelBuffer (ARGB). I want to convert that CGImage into an MTLTexture. I use:
let texture: MTLTexture = try m_textureLoader.newTexture(with: cgImage, options: [MTKTextureLoaderOptionSRGB : NSNumber(value: true)] )
Later I want to use the texture in an MPSImage having 3 channels:
let sid = MPSImageDescriptor(channelFormat: MPSImageFeatureChannelFormat.float16, width: 40, height: 40, featureChannels: 3)
preImage = MPSTemporaryImage(commandBuffer: commandBuffer, imageDescriptor: sid)
lanczos.encode(commandBuffer: commandBuffer, sourceTexture: texture!, destinationTexture: preImage.texture)
scale.encode (commandBuffer: commandBuffer, sourceImage: preImage, destinationImage: srcImage)
Now my questions:
How does textureLoader.newTexture(...) map the four ARGB channels to the 3 channels specified in the MPSImageDescriptor?
How can I ensure that the RGB components are used, and not e.g. ARG?
Is there a way to specify that channel mapping?
Thanks, Chris

Why not construct the MTLTexture from the CVPixelBuffer directly? It's much quicker!
Do this once at the beginning of your program:
// declare this somewhere, so we can re-use it
var textureCache: CVMetalTextureCache?
// create the texture cache object
guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
print("Error: could not create a texture cache")
return false
}
Do this once you have your CVPixelBuffer:
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
var texture: CVMetalTexture?
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache!,
pixelBuffer, nil, .bgra8Unorm, width, height, 0, &texture)
if let texture = texture {
metalTexture = CVMetalTextureGetTexture(texture)
}
Now metalTexture contains an MTLTexture object with the contents of the CVPixelBuffer.
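For reference, feeding that texture into the scaling pass from the question might look like this; a minimal sketch (not from the original answer) reusing the lanczos, preImage, and commandBuffer names from the question above:
// Use the CVPixelBuffer-backed texture as the source of the Lanczos resampling pass.
if let sourceTexture = metalTexture {
    lanczos.encode(commandBuffer: commandBuffer,
                   sourceTexture: sourceTexture,
                   destinationTexture: preImage.texture)
}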

Related

Convert a CGImage to MTLTexture without premultiplication

I have a UIImage which I've previously created from a png file:
let strokeUIImage = UIImage(data: pngData)
I want to convert strokeUIImage (which has opacity) to an MTLTexture for display in an MTKView, but the conversion seems to perform an unwanted premultiplication, which darkens all the semitransparent edges.
My blending settings are as follows:
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
I've tried two methods of conversion:
let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
and the more elaborate dataProvider-driven method:
let image = strokeUIImage.cgImage!
let imageWidth = image.width
let imageHeight = image.height
let bytesPerPixel = 4
let rowBytes = imageWidth * bytesPerPixel
let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
width: imageWidth,
height: imageHeight,
mipmapped: false)
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }
let srcData: CFData! = image.dataProvider?.data
let pixelData = CFDataGetBytePtr(srcData)
let region = MTLRegionMake2D(0, 0, imageWidth, imageHeight)
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
both of which yield the same unwanted premultiplied result.
I tried the latter because some posts suggested that the old Swift 3 method CGDataProviderCopyData() extracts raw pixel data from the image which is not premultiplied. Sadly, the equivalent:
let srcData: CFData! = image.dataProvider?.data
does not seem to do the trick. Am I missing something?
Any pointers would be appreciated.
After much experimenting, I've come to a solution which addresses the pre-multiplication issue inherent in CoreGraphics images. Thanks to Warren's tip about using an Accelerate function (vImageUnpremultiplyData_ARGB8888 in particular), I thought: why not build a CGImage using vImage_CGImageFormat, which lets me play with the bitmapInfo setting that specifies how to interpret alpha? The result is not perfect, as demonstrated by the image attachment below:
Somehow, in the translation the alpha values are getting punched up slightly (possibly the RGB as well, but not significantly). By the way, I should point out that the png pixel format is sRGB, and the MTKView I'm using is set to MTLPixelFormat.rgba16Float (an app requirement).
Below is the full metalDrawStrokeUIImage routine I implemented. Of particular note is the line:
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)
which essentially unassociates the alpha (I think) without calling vImageUnpremultiplyData_ARGB8888. The resulting image certainly looks like an un-premultiplied image...
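For comparison, the explicit Accelerate route would be to unpremultiply the vImage buffer in place right after vImageBuffer_InitWithCGImage; a rough sketch (not part of the original solution), assuming the pixel data is laid out RGBA with alpha last — use the _ARGB8888 variant for alpha-first data:
// Explicit in-place unpremultiply of the vImage buffer.
vImageUnpremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, vImage_Flags(kvImageNoFlags))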
Lastly, to get back a premultiplied texture on the MTKView side, I let the fragment shader handle the pre-multiplication:
fragment float4 premult_fragment(VertexOut interpolated [[stage_in]],
texture2d<float> texture [[texture(0)]],
sampler sampler2D [[sampler(0)]]) {
float4 sampled = texture.sample(sampler2D, interpolated.texCoord);
// this fragment shader premultiplies incoming rgb with texture's alpha
return float4(sampled.r * sampled.a,
sampled.g * sampled.a,
sampled.b * sampled.a,
sampled.a );
} // end of premult_fragment
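On the host side, that shader expects a texture and a sampler bound at index 0. A minimal sketch of that setup (not from the original post; device, renderEncoder, and stampTexture are assumed to exist):
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .linear
samplerDescriptor.magFilter = .linear
let samplerState = device.makeSamplerState(descriptor: samplerDescriptor)
// Bind the un-premultiplied texture and the sampler for premult_fragment.
renderEncoder.setFragmentTexture(stampTexture, index: 0)
renderEncoder.setFragmentSamplerState(samplerState, index: 0)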
The result is pretty close to the input source, but the image is maybe 5% more opaque than the incoming png. Again, the png pixel format is sRGB, and the MTKView I'm using to display is set to MTLPixelFormat.rgba16Float. So I'm sure something is getting mushed somewhere. If anyone has any pointers, I'd sure appreciate it.
Below is the rest of the relevant code:
func metalDrawStrokeUIImage (strokeUIImage: UIImage, strokeBbox: CGRect) {
self.metalSetupRenderPipeline(compStyle: compMode.strokeCopy) // needed so stampTexture is not modified by fragmentFunction
let bytesPerPixel = 4
let bitsPerComponent = 8
let width = Int(strokeUIImage.size.width)
let height = Int(strokeUIImage.size.height)
let rowBytes = width * bytesPerPixel
//
let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
width: width,
height: height,
mipmapped: false)
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }
guard
let cgImage = strokeUIImage.cgImage,
let sourceColorSpace = cgImage.colorSpace else {
print("Unable to initialize cgImage or colorSpace.")
return
}
var format = vImage_CGImageFormat(
bitsPerComponent: UInt32(cgImage.bitsPerComponent),
bitsPerPixel: UInt32(cgImage.bitsPerPixel),
colorSpace: Unmanaged.passRetained(sourceColorSpace),
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue),
version: 0, decode: nil,
renderingIntent: CGColorRenderingIntent.defaultIntent)
var sourceBuffer = vImage_Buffer()
defer {
free(sourceBuffer.data)
}
var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else {
print ("[MetalBrushStrokeView]: can't vImageBuffer_InitWithCGImage")
return
}
//vImagePremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, numericCast(kvImageNoFlags))
// create a CGImage from vImage_Buffer
var destCGImage = vImageCreateCGImageFromBuffer(&sourceBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else {
print ("[MetalBrushStrokeView]: can't vImageCreateCGImageFromBuffer")
return
}
let dstData: CFData = (destCGImage!.dataProvider!.data)!
let pixelData = CFDataGetBytePtr(dstData)
destCGImage = nil
let region = MTLRegionMake2D(0, 0, Int(width), Int(height))
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
let stampColor = UIColor.white
let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
self.stampAppendToVertexBuffer(stampLayer: stampLayerMode.stampLayerFG, stampCorners: stampCorners, stampColor: stampColor)
self.metalRenderStampSingle(stampTexture: stampTexture)
self.initializeStampArray() // clears out the stamp array so we always draw 1 stamp at a time
} // end of func metalDrawStrokeUIImage (strokeUIImage: UIImage, strokeBbox: CGRect)

Convert [UInt8] into white and transparent image

I'm trying to create a white and transparent image in Swift from a [UInt8] array. The array has width * height elements, and each element corresponds to the transparency (alpha value).
So far, I managed to create a black and white image using this :
guard let providerRef = CGDataProvider(data: Data.init(bytes: bitmapArray) as CFData) else { return nil }
guard let cgImage = CGImage(
width: width,
height: height,
bitsPerComponent: 8,
bitsPerPixel: 8,
bytesPerRow: width,
space: CGColorSpaceCreateDeviceGray(),
bitmapInfo: CGBitmapInfo.init(rawValue: CGImageAlphaInfo.none.rawValue),
provider: providerRef,
decode: nil,
shouldInterpolate: true,
intent: .defaultIntent
) else {
return nil
}
let image = UIImage(cgImage: cgImage)
Unfortunately, as I said, this gives me a black and white image.
What I would like is to turn every black pixel (0 in my initial array) into a completely transparent pixel (my array contains only either 0 or 255). How do I do that?
PS: I've tried to use CGImageAlphaInfo.alphaOnly but I get "CGImageCreate: invalid image alphaInfo: 7".
Any help would be appreciated.
I found a solution that doesn't exactly satisfy me in terms of code elegance, but it does the job. The solution is to create that black and white, fully opaque image and mask all black pixels using a CIFilter.
Here's a working code:
guard let providerRef = CGDataProvider(data: Data.init(bytes: bitmapArray) as CFData) else { return nil }
guard let cgImage = CGImage(
width: width,
height: height,
bitsPerComponent: 8,
bitsPerPixel: 8,
bytesPerRow: width,
space: CGColorSpaceCreateDeviceGray(),
bitmapInfo: CGBitmapInfo.init(rawValue: CGImageAlphaInfo.none.rawValue),
provider: providerRef,
decode: nil,
shouldInterpolate: true,
intent: .defaultIntent
) else {
return nil
}
let context = CIContext(options: nil)
let ciimage = CIImage(cgImage: cgImage)
guard let filter = CIFilter(name: "CIMaskToAlpha") else { return nil }
filter.setDefaults()
filter.setValue(ciimage, forKey: kCIInputImageKey)
guard let result = filter.outputImage else { return nil }
guard let newCgImage = context.createCGImage(result, from: result.extent) else { return nil }
return UIImage(cgImage: newCgImage)
Feel free to provide your own (perhaps more elegant/optimal) solution!
I found a workaround: since kCGImageAlphaOnly is supported by CGBitmapContext, you can create a bitmap context from the data, then create an image from that context. This is Objective-C, but it shouldn't be too hard to translate into Swift:
CGContextRef ctx = CGBitmapContextCreate(
bitmapArray, width, height,
8, width, NULL, (CGBitmapInfo)kCGImageAlphaOnly
);
CGImageRef image = CGBitmapContextCreateImage(ctx);

Swift Metal save bgra8Unorm texture to PNG file

I have a kernel that outputs a texture, and it is a valid MTLTexture object. I want to save it to a png file in the working directory of my project. How should this be done?
The texture format is .bgra8Unorm, and the target output format is PNG.
The texture is stored in a MTLTexture object.
EDIT: I am on macOS, using Xcode.
If your app is using Metal on macOS, the first thing you need to do is ensure that your texture data can be read by the CPU. If the texture that's being written by the kernel is in .private storage mode, that means you'll need to blit (copy) from the texture into another texture in .managed mode. If your texture is starting out in .managed storage, you probably need to create a blit command encoder and call synchronize(resource:) on the texture to ensure that its contents on the GPU are reflected on the CPU:
if let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
blitEncoder.synchronize(resource: outputTexture)
blitEncoder.endEncoding()
}
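If the kernel's output texture is in .private storage instead, it can't be synchronized directly; you would first blit its contents into a CPU-accessible texture. A minimal sketch of that path, not part of the original answer (the cpuTexture name and descriptor are assumptions):
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: outputTexture.pixelFormat,
                                                          width: outputTexture.width,
                                                          height: outputTexture.height,
                                                          mipmapped: false)
descriptor.storageMode = .managed // use .shared on iOS
if let cpuTexture = device.makeTexture(descriptor: descriptor),
   let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
    // Copy the private texture into the CPU-accessible one, then synchronize it.
    blitEncoder.copy(from: outputTexture,
                     sourceSlice: 0,
                     sourceLevel: 0,
                     sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                     sourceSize: MTLSize(width: outputTexture.width, height: outputTexture.height, depth: 1),
                     to: cpuTexture,
                     destinationSlice: 0,
                     destinationLevel: 0,
                     destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blitEncoder.synchronize(resource: cpuTexture)
    blitEncoder.endEncoding()
}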
Once the command buffer completes (which you can wait on by calling waitUntilCompleted or by adding a completion handler to the command buffer), you're ready to copy the data and create an image:
func makeImage(for texture: MTLTexture) -> CGImage? {
assert(texture.pixelFormat == .bgra8Unorm)
let width = texture.width
let height = texture.height
let pixelByteCount = 4 * MemoryLayout<UInt8>.size
let imageBytesPerRow = width * pixelByteCount
let imageByteCount = imageBytesPerRow * height
let imageBytes = UnsafeMutableRawPointer.allocate(byteCount: imageByteCount, alignment: pixelByteCount)
defer {
imageBytes.deallocate()
}
texture.getBytes(imageBytes,
bytesPerRow: imageBytesPerRow,
from: MTLRegionMake2D(0, 0, width, height),
mipmapLevel: 0)
swizzleBGRA8toRGBA8(imageBytes, width: width, height: height)
guard let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB) else { return nil }
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
guard let bitmapContext = CGContext(data: nil,
width: width,
height: height,
bitsPerComponent: 8,
bytesPerRow: imageBytesPerRow,
space: colorSpace,
bitmapInfo: bitmapInfo) else { return nil }
bitmapContext.data?.copyMemory(from: imageBytes, byteCount: imageByteCount)
let image = bitmapContext.makeImage()
return image
}
You'll notice a call in the middle of this function to a utility function called swizzleBGRA8toRGBA8. This function swaps the bytes in the image buffer so that they're in the RGBA order expected by CoreGraphics. It uses vImage (be sure to import Accelerate) and looks like this:
func swizzleBGRA8toRGBA8(_ bytes: UnsafeMutableRawPointer, width: Int, height: Int) {
var sourceBuffer = vImage_Buffer(data: bytes,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width * 4)
var destBuffer = vImage_Buffer(data: bytes,
height: vImagePixelCount(height),
width: vImagePixelCount(width),
rowBytes: width * 4)
var swizzleMask: [UInt8] = [ 2, 1, 0, 3 ] // BGRA -> RGBA
vImagePermuteChannels_ARGB8888(&sourceBuffer, &destBuffer, &swizzleMask, vImage_Flags(kvImageNoFlags))
}
Now we can write a function that enables us to write a texture to a specified URL (CGImageDestination comes from the ImageIO framework):
func writeTexture(_ texture: MTLTexture, url: URL) {
guard let image = makeImage(for: texture) else { return }
if let imageDestination = CGImageDestinationCreateWithURL(url as CFURL, kUTTypePNG, 1, nil) {
CGImageDestinationAddImage(imageDestination, image, nil)
CGImageDestinationFinalize(imageDestination)
}
}
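Putting it together, a usage sketch (the outputTexture name and output path are assumptions, not from the original answer):
commandBuffer.commit()
commandBuffer.waitUntilCompleted() // or use addCompletedHandler(_:) instead
writeTexture(outputTexture, url: URL(fileURLWithPath: "output.png"))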

Drawing a simple line on a JPEG image

I'm stuck again with an apparently simple question.
I loaded a JPEG file into a CGImage. I got the correct values for width and height (in pixels) and was able to show "myImage" in an image view controller. But I wanted to add some graphics on top of this image and found that I should instead get it into an NSImage. So I did, but got different (proportional) values for width and height: 595.08 instead of 1653, and 841.68 instead of 2338, respectively.
I tried to create an NSGraphicsContext from a CGContext 'gc' for drawing (a simple line and a rectangle), which resulted in a "Value of optional type 'CGContext?' not unwrapped, did you mean to use '!' or '?'?" error... I'm lost...
// with NSData
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
let imageWidth = myImage!.width
let imageHeight = myImage!.height
// with NSImage, now
//
let imageAsNSImage=NSImage(contentsOf: chosenFiles[0])
let imageSize=imageAsNSImage?.size // ---> 0.36 * pixels
// creating a CG context and drawing
//
let colorSpace:CGColorSpace = CGColorSpaceCreateDeviceRGB()
let gc = CGContext(data: nil, width: imageWidth, height: imageHeight, bitsPerComponent: 8, bytesPerRow: 0,space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
let NSGContext = NSGraphicsContext(cgContext: gc, flipped: true)
let currentContext = NSGraphicsContext.current() // Cocoa GC object appropriate for the current drawing environment
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.current = NSGContext
NSGContext?.beginPath()
NSGContext?.setStrokeColor(redColor)
NSGContext?.setLineWidth(50.0)
NSGContext?.move(to: targetStart)
NSGContext?.addLine(to: targetEnd)
NSGContext?.setStrokeColor(grayColor)
NSGContext?.setFillColor(grayColor)
NSGContext?.addRect(ROIRect)
NSGContext?.closePath()
NSGContext.restoreGraphicsState()
imageAsNSImage?.draw(at: NSZeroPoint, from: NSZeroRect, operation: NSCompositeSourceOver, fraction: 1.0)
imageAsNSImage?.unlockFocus()
NSGraphicsContext.setcurrent(currentContext)
myImageView.image = imageAsNSImage // image & drawings should show in View
// load JPEG from main bundle
guard let path = Bundle.main.pathForImageResource(NSImage.Name("picture.jpg")),
let image = NSImage(contentsOfFile: path)
else { fatalError() }
let size = image.size
image.lockFocus() // prepare image for drawing
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: .zero, to: NSPoint(x: size.width, y: size.height))
image.unlockFocus() // drawing commands done
The code above strokes a red line from the lower left corner to the top right.
If you have an NSImageView at hand you can use the image directly:
@IBOutlet weak var imageView: NSImageView!
...
imageView.image = image
Thanks to djromero, here is the solution I just reached:
// Load the JPEG image from disk into a CGImage
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
// Create a NSImage from the CGImage (with the same width and height in pixels)
//
let imageAsNSImage=NSImage(cgImage: myImage!, size: NSZeroSize)
// Drawing a simple line
//
imageAsNSImage.lockFocusFlipped(true) // Otherwise, the origin is at the lower left corner
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: targetStart, to: targetEnd)
imageAsNSImage.unlockFocus()
// Show the NSImage in the NSImageView
//
myImageView.image = imageAsNSImage

How to create a MTLTexture backed by a CVPixelBuffer

What's the correct way to generate an MTLTexture backed by a CVPixelBuffer?
I have the following code, but it seems to leak:
func PixelBufferToMTLTexture(pixelBuffer:CVPixelBuffer) -> MTLTexture
{
var texture:MTLTexture!
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let format:MTLPixelFormat = .BGRA8Unorm
var textureRef : Unmanaged<CVMetalTextureRef>?
let status = CVMetalTextureCacheCreateTextureFromImage(nil,
videoTextureCache!.takeUnretainedValue(),
pixelBuffer,
nil,
format,
width,
height,
0,
&textureRef)
if(status == kCVReturnSuccess)
{
texture = CVMetalTextureGetTexture(textureRef!.takeUnretainedValue())
}
return texture
}
Ah, I was missing: textureRef?.release()
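For reference, with the current Core Video bridging the texture reference is a plain optional managed by ARC, so no Unmanaged handling or manual release is needed; a rough sketch along the lines of the first answer above (the function and parameter names are assumptions):
func makeTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> MTLTexture? {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                                           .bgra8Unorm, width, height, 0, &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
    // Note: in a real app, keep cvTexture alive until the GPU has finished using the MTLTexture.
    return CVMetalTextureGetTexture(cvTexture)
}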