RGB values of CIImage pixel - swift

I want to access the average colour value of a specific area of the CVPixelBuffer that I get from ARFrame in real time. I managed to crop the image, use a filter to calculate the average colour and, after converting to a CGImage, read the value from the pixel, but unfortunately it affects the performance of my app (FPS drops below 30). I think the reason for that is using CGContext. Is there a way to access the colour without converting the CIImage to a CGImage?
This is the code that I'm using at the moment:
func fun() -> UIColor? {
    let croppVector = CIVector(cgRect: inputImageRect)
    guard let filter = CIFilter(
            name: "CIAreaAverage",
            parameters: [kCIInputImageKey: image, kCIInputExtentKey: croppVector]
          ),
          let outputImage = filter.outputImage,
          // Converting to a CGImage just to read one pixel is the expensive part.
          let cgImage = context.createCGImage(
            outputImage,
            from: CGRect(x: 0, y: 0, width: 1, height: 1)
          ),
          let dataProvider = cgImage.dataProvider,
          let data = CFDataGetBytePtr(dataProvider.data) else { return nil }
    let color = UIColor(
        red: CGFloat(data[0]) / 255,
        green: CGFloat(data[1]) / 255,
        blue: CGFloat(data[2]) / 255,
        alpha: CGFloat(data[3]) / 255
    )
    return color
}

I think there is not too much you can do here – reduction operations like this are expensive.
A few things you can try:
Set up your CIContext to not perform any color management by setting the .workingColorSpace and .outputColorSpace options to NSNull().
Render directly into a piece of memory instead of going through a CGImage. The context has the method render(_ image: CIImage, toBitmap data: UnsafeMutableRawPointer, rowBytes: Int, bounds: CGRect, format: CIFormat, colorSpace: CGColorSpace?) that you can use for that. Also pass nil as the color space here. You should be able to just pass a "pointer" to a simd_uchar4 var as data; rowBytes would be 4 and format would be .BGRA8 in this case, I think (see the sketch after this list).
You can also try to scale down your image (which already is a reduction operation) before you do the average calculation. It wouldn't be the same value, but a fair approximation – and it might be faster.
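Putting the first two suggestions together, here is a minimal, untested sketch (my wording, not the original answer's). It assumes the same image and inputImageRect as in the question, reuses one color-management-free context, and renders the 1×1 CIAreaAverage output straight into a 4-byte buffer, skipping the CGImage round trip. I use .RGBA8 so the byte order matches the UIColor read below; the .BGRA8/simd_uchar4 variant suggested above would also work with the channels swapped.

import CoreImage
import UIKit

// Reuse one context; color management disabled as suggested above.
let context = CIContext(options: [
    .workingColorSpace: NSNull(),
    .outputColorSpace: NSNull()
])

func averageColor(of image: CIImage, in inputImageRect: CGRect) -> UIColor? {
    guard let filter = CIFilter(
        name: "CIAreaAverage",
        parameters: [kCIInputImageKey: image,
                     kCIInputExtentKey: CIVector(cgRect: inputImageRect)]
    ), let outputImage = filter.outputImage else { return nil }

    // 1 pixel × 4 bytes (RGBA8); render directly into this buffer.
    var pixel = [UInt8](repeating: 0, count: 4)
    context.render(outputImage,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: nil)

    return UIColor(red: CGFloat(pixel[0]) / 255,
                   green: CGFloat(pixel[1]) / 255,
                   blue: CGFloat(pixel[2]) / 255,
                   alpha: CGFloat(pixel[3]) / 255)
}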

Related

Alpha blending for frame averaging in Core Image

In my app, I'm trying to realize a motion blur feature that stacks different frames (averaging them) coming from the video output into a single image. The effect I'm trying to obtain is well explained here: https://photographylife.com/image-averaging-technique.
I tried using a custom CIKernel that performs the averaging operation on each color channel as follows:
float4 makeAverage(sample_t currentStack, sample_t newImage, float stackCount) {
    float4 cstack = unpremultiply(currentStack);
    float4 nim = unpremultiply(newImage);
    float4 avg = ((cstack * stackCount) + nim) / (stackCount + 1.0);
    return premultiply(avg);
}
You can find more details on the complete code here: Problems with frame averaging with Core Image
It works but, after a while, weird patches start to appear in the image, hinting that the color channels are clipping.
Is there a way I could achieve the same results using alpha blending in core image? Maybe, instead of doing the stacking operation on the color channels, could I stack subsequent images with a decreasing alpha value?
If so, what would be the procedure/algorithm to do it?
You can accomplish the same thing as your combination of CIColorMatrix and CISourceOverCompositing in a simpler way, just by using the CIMix filter like this:
func makeCompositeImage(stackImage: CIImage, newImage: CIImage?, count: Double) -> CIImage {
    let opacity = 1.0 / count
    return newImage?
        .applyingFilter("CIMix", parameters: [
            kCIInputBackgroundImageKey: stackImage,
            kCIInputAmountKey: opacity
        ]) ?? stackImage
}
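As a usage sketch (my assumption of how this helper would be driven per frame, not part of the original answer), the accumulation across frames could look like this:

// Hypothetical per-frame driver: keep the running stack and frame count.
var stack: CIImage?
var frameCount = 0.0

func accumulate(_ frame: CIImage) {
    frameCount += 1
    if let current = stack {
        // Each new frame is mixed in with weight 1/count, so all frames
        // end up equally weighted in the average.
        stack = makeCompositeImage(stackImage: current, newImage: frame, count: frameCount)
    } else {
        stack = frame
    }
}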
Please check out this app I just published: https://apps.apple.com/us/app/filter-magic/id1594986951. It lets you play with every single filter out there.
After some trying, I found the solution. Basically, here is how to achieve the desired effect:
I lower the opacity of each successive frame using the CIColorMatrix filter. Below is the code for the function:
func setOpacity(image: CIImage, alpha: Double) -> CIImage {
    guard let overlayFilter: CIFilter = CIFilter(name: "CIColorMatrix") else { fatalError() }
    let overlayRgba: [CGFloat] = [0, 0, 0, alpha]
    let alphaVector: CIVector = CIVector(values: overlayRgba, count: 4)
    overlayFilter.setValue(image, forKey: kCIInputImageKey)
    overlayFilter.setValue(alphaVector, forKey: "inputAVector")
    return overlayFilter.outputImage!
}
Each time a frame arrives, the alpha value is calculated as 1/count. Then, I perform the alpha blending using the CISourceOverCompositing filter:
func makeCompositeImage(stackImage: CIImage?, newImage: CIImage?, count: Double) -> CIImage {
    let bgImage = stackImage
    let opacity = 1.0 / count
    let newImageFiltered = setOpacity(image: newImage!, alpha: opacity)
    // Filter part
    let currentFilter = CIFilter(name: "CISourceOverCompositing")
    currentFilter?.setValue(newImageFiltered, forKey: kCIInputImageKey)
    currentFilter?.setValue(bgImage, forKey: kCIInputBackgroundImageKey)
    guard let outputImage = currentFilter?.outputImage else { return bgImage! }
    return outputImage
}
The method works as expected and I can simulate motion blur from successive frames. However, the original problem remains and after a while, noticeable color bands and patches start to appear in the image, ruining the final results.
I hope, however, that the code above could be useful for somebody.

Why the SceneKit Material looks different, even when the image is the same?

The material contents property supports many types to be loaded; two of these are NSImage (or UIImage) and SKTexture.
I noticed that when loading the same image file (.png) with different loaders, the material is rendered differently.
I'm fairly sure it is an extra property applied by the SpriteKit transformation, but I don't know what it is.
Why does the SceneKit material look different, even when the image is the same?
This is the rendered example:
About the code:
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSColor.green

let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSImage(named: "texture")

let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = SKTexture(imageNamed: "texture")
The complete example is here: https://github.com/Maetschl/SceneKitExamples/tree/master/MaterialTests
I think this has something to do with color spaces/gamma correction. My guess is that textures loaded via the SKTexture(imageNamed:) initializer aren't properly gamma corrected. You would think this would be documented somewhere, or other people would have noticed, but I can't seem to find anything.
Here's some code to swap with the last image in your linked sample project. I've force unwrapped as much as possible for brevity:
// Create the texture using the SKTexture(cgImage:) init
// to prove it has the same output image as SKTexture(imageNamed:)
let originalDogNSImage = NSImage(named: "dog")!
var originalDogRect = CGRect(x: 0, y: 0, width: originalDogNSImage.size.width, height: originalDogNSImage.size.height)
let originalDogCGImage = originalDogNSImage.cgImage(forProposedRect: &originalDogRect, context: nil, hints: nil)!
let originalDogTexture = SKTexture(cgImage: originalDogCGImage)
// Create the ciImage of the original image to use as the input for the CIFilter
let imageData = originalDogNSImage.tiffRepresentation!
let ciImage = CIImage(data: imageData)
// Create the gamma adjustment Core Image filter
let gammaFilter = CIFilter(name: "CIGammaAdjust")!
gammaFilter.setValue(ciImage, forKey: kCIInputImageKey)
// 0.75 is the default. 2.2 makes the dog image mostly match the NSImage(named:) intializer
gammaFilter.setValue(2.2, forKey: "inputPower")
// Create a SKTexture using the output of the CIFilter
let gammaCorrectedDogCIImage = gammaFilter.outputImage!
let gammaCorrectedDogCGImage = CIContext().createCGImage(gammaCorrectedDogCIImage, from: gammaCorrectedDogCIImage.extent)!
let gammaCorrectedDogTexture = SKTexture(cgImage: gammaCorrectedDogCGImage)
// Looks bad, like in StackOverflow question image.
// let planeWithSKTextureDog = planeWith(diffuseContent: originalDogTexture)
// Looks correct
let planeWithSKTextureDog = planeWith(diffuseContent: gammaCorrectedDogTexture)
Using a CIGammaAdjust filter with an inputPower of 2.2 makes the SKTexture almost? match the NSImage(named:) init. I've included the original image being loaded through SKTexture(cgImage:) to rule out any changes caused by using that initializer versus the SKTexture(imageNamed:) you asked about.

create transparent texture in swift

I just need to create a transparent texture (pixels with alpha 0).
func layerTexture() -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    return temparyTexture!
}
When I open temparyTexture using the preview (Quick Look), it appears to be black. What is missing here?
UPDATE
I just tried to create the texture using a transparent image.
Code:
func layerTexture(imageData: Data) -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let bytesPerRow = width * 4
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    let region = MTLRegionMake2D(0, 0, width, height)
    imageData.withUnsafeBytes { (u8Ptr: UnsafePointer<UInt8>) in
        let rawPtr = UnsafeRawPointer(u8Ptr)
        temparyTexture?.replace(region: region, mipmapLevel: 0, withBytes: rawPtr, bytesPerRow: bytesPerRow)
    }
    return temparyTexture!
}
The method is called as follows:
let image = UIImage(named: "layer1.png")!
let imageData = UIImagePNGRepresentation(image)
self.layerTexture(imageData: imageData!)
where layer1.png is a transparent PNG. But it crashes with the message "Thread 1: EXC_BAD_ACCESS (code=1, address=0x107e8c000)" at the point where I try to replace the texture contents. I believe that's because the image data is compressed and the raw pointer should point to uncompressed data. How can I resolve this?
Am I on the correct path or going in completely the wrong direction? Are there any other alternatives? What I need is to create a transparent texture.
Pre-edit: When you quick-look a transparent texture, it will appear black. I just double-checked with some code I have running stably in production - that is the expected result.
Post-edit: You are correct, you should not be copying PNG or JPEG data to a MTLTexture's contents directly. I would recommend doing something like this:
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue]
var status = CVPixelBufferCreate(nil, Int(image.size.width), Int(image.size.height),
                                 kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == noErr)

let coreImage = CIImage(image: image)!
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
context.render(coreImage, to: pixelBuffer!)

var textureWrapper: CVMetalTexture?
status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    GPUManager.shared.textureCache, pixelBuffer!, nil, .bgra8Unorm,
    CVPixelBufferGetWidth(pixelBuffer!), CVPixelBufferGetHeight(pixelBuffer!), 0, &textureWrapper)

let texture = CVMetalTextureGetTexture(textureWrapper!)!
// use texture now for your Metal texture. the texture is now map-bound to the CVPixelBuffer's underlying memory.
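The snippet above assumes a CVMetalTextureCache already exists (GPUManager.shared.textureCache is the answerer's own property, not a system API). As a hedged sketch, creating such a cache might look like this:

import Metal
import CoreVideo

// Create a texture cache once (e.g. at setup time) and keep it around.
func makeTextureCache(device: MTLDevice) -> CVMetalTextureCache? {
    var cache: CVMetalTextureCache?
    let status = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)
    return status == kCVReturnSuccess ? cache : nil
}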
The issue you are running into is that it is actually pretty hard to fully grasp how bitmaps work and how they can be laid out differently. Graphics is a very closed field with lots of esoteric terminology, some of which refers to things that take years to grasp, and some of which refers to things that are trivial but that people just picked a weird word for. My main pointers are:
Get out of UIImage land as early in your code as possible. The best way to avoiding overhead and delays when you go into Metal land is to get your images into a GPU-compatible representation as soon as you can.
Once you are outside of UIImage land, always know your channel order (RGBA, BGRA). At any point in code that you are editing, you should have a mental model of what pixel format each CVPixelBuffer / MTLTexture has.
Read up on premultiplied vs non-premultiplied alpha, you may not run into issues with this, but it threw me off repeatedly when I was first learning.
total byte size of a bitmap/pixelbuffer = bytesPerRow * height
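To make that last pointer concrete, here is an illustrative sketch (my addition, not from the original answer). Note that bytesPerRow can include row padding, so it is safer to query it than to compute width * 4 by hand:

import CoreVideo

// Illustrative helper: total byte size of a pixel buffer's backing store.
func totalByteSize(of pixelBuffer: CVPixelBuffer) -> Int {
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    return bytesPerRow * height
}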

Core Image filter CISourceOverCompositing not appearing as expected with alpha overlay

I’m using CISourceOverCompositing to overlay text on top of an image and I’m getting unexpected results when the text image is not fully opaque. Dark colors are not dark enough and light colors are too light in the output image.
I recreated the issue in a simple Xcode project. It creates an image with orange, white, black text drawn with 0.3 alpha, and that looks correct. I even threw that image into Sketch placing it on top of the background image and it looks great. The image at the bottom of the screen shows how that looks in Sketch. The problem is, after overlaying the text on the background using CISourceOverCompositing, the white text is too opaque as if alpha was 0.5 and the black text is barely visible as if alpha was 0.1. The top image shows that programmatically created image. You can drag the slider to adjust the alpha (defaulted at 0.3) which will recreate the result image.
The code is included in the project of course, but also included here. This creates the text overlay with 0.3 alpha, which appears as expected.
let colorSpace = CGColorSpaceCreateDeviceRGB()
let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue
let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
bitmapContext.setAlpha(0.3)
bitmapContext.setTextDrawingMode(CGTextDrawingMode.fill)
bitmapContext.textPosition = CGPoint(x: 20, y: 20)
let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: "hello world", attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 50)]))
CTLineDraw(displayLineTextWhite, bitmapContext)
let textCGImage = bitmapContext.makeImage()!
let textImage = CIImage(cgImage: textCGImage)
Next that text image is overlaid on top of the background image, which does not appear as expected.
let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
combinedFilter.setValue(textImage, forKey: "inputImage")
combinedFilter.setValue(backgroundImage, forKey: "inputBackgroundImage")
let outputImage = combinedFilter.outputImage!
After a lot of back and forth trying different things (thanks @andy and @Juraj Antas for pushing me in the right direction), I finally have the answer.
So drawing into a Core Graphics context results in the correct appearance, but it requires more resources to draw images using that approach. It seemed the problem was with CISourceOverCompositing, but the problem actually lies with the fact that, by default, Core Image filters work in linear sRGB space whereas Core Graphics works in perceptual RGB space, which explains the different results: sRGB is better at preserving dark blacks, and linear sRGB is better at preserving bright whites. So the original code is fine, but you can output the image in a different way to get a different appearance.
You could create a Core Graphics image from the Core Image filter using a Core Image context that performs no color management. This essentially causes it to interpret the color values "incorrectly" as device RGB (since that's the default for no color management), which can cause red from the standard color range to appear as even more red from the wide color range for example. But this addresses the original concern with alpha compositing.
let ciContext = CIContext(options: [.workingColorSpace: NSNull()])
let outputCGImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent)
It is probably more desirable to keep color management enabled and specify the working color space to be sRGB. This too resolves the issue and results in "correct" color interpretation. Note if the image being composited were Display P3, you'd need to specify displayP3 as the working color space to preserve the wide colors.
let workingColorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
let ciContext = CIContext(options: [.workingColorSpace: workingColorSpace])
let outputCGImage = ciContext.createCGImage(outputCIImage, from: outputCIImage.extent)
For black-and-white text
If you're using the .normal compositing operation you'll definitely get a different result than with .hardLight. Your picture shows the result of the .hardLight operation.
The .normal operation is the classical OVER op with the formula: (Image1 * A1) + (Image2 * (1 – A1)).
Here the text is premultiplied (RGB * A), so the RGB pattern depends on A's opacity in this particular case. The RGB of the text image can contain any color, including black. If A = 0 (black alpha) and RGB = 0 (black color) and your image is premultiplied, the whole image is totally transparent; if A = 1 (white alpha) and RGB = 0 (black color), the image is opaque black.
If your text has no alpha when you use the .normal operation, you'll get the ADD op: Image1 + Image2.
To get what you want, you need to set the compositing operation to .hardLight.
The .hardLight compositing operation works as .multiply if the alpha of the text image is less than 50 percent (A < 0.5, the image is almost transparent).
Formula for .multiply: Image1 * Image2
The .hardLight compositing operation works as .screen if the alpha of the text image is greater than or equal to 50 percent (A >= 0.5, the image is semi-opaque).
Formula 1 for .screen: (Image1 + Image2) – (Image1 * Image2)
Formula 2 for .screen: 1 – (1 – Image1) * (1 – Image2)
The .screen operation has a much softer result than .plus, and it keeps alpha from exceeding 1 (the plus operation adds the alphas of Image1 and Image2, so you might get alpha = 2 if you have two alphas). The .screen compositing operation is good for making reflections.
func editImage() {
    print("Drawing image with \(selectedOpacity) alpha")
    let text = "hello world"
    let backgroundCGImage = #imageLiteral(resourceName: "background").cgImage!
    let backgroundImage = CIImage(cgImage: backgroundCGImage)
    let imageRect = backgroundImage.extent

    // set up transparent context and draw text on top
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let alphaInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    let bitmapContext = CGContext(data: nil, width: Int(imageRect.width), height: Int(imageRect.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: alphaInfo)!
    bitmapContext.draw(backgroundCGImage, in: imageRect)
    bitmapContext.setAlpha(CGFloat(selectedOpacity))
    bitmapContext.setTextDrawingMode(.fill)

    // TRY THREE COMPOSITING OPERATIONS HERE
    bitmapContext.setBlendMode(.hardLight)
    //bitmapContext.setBlendMode(.multiply)
    //bitmapContext.setBlendMode(.screen)

    // white text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
    let displayLineTextWhite = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.white, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextWhite, bitmapContext)

    // black text
    bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: 20 * UIScreen.main.scale)
    let displayLineTextBlack = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.black, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
    CTLineDraw(displayLineTextBlack, bitmapContext)

    let outputImage = bitmapContext.makeImage()!
    topImageView.image = UIImage(cgImage: outputImage)
}
So for recreating this compositing operation you need the following logic:
// rgb1 – text image
// rgb2 – background
// a1  – alpha of text image
if a1 >= 0.5 {
    // use this formula for compositing: 1 – (1 – rgb1) * (1 – rgb2)
} else {
    // use this formula for compositing: rgb1 * rgb2
}
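As an illustrative transcription of that logic into Swift (my addition, applied per color channel; in practice you would let the .hardLight blend mode do this for you):

// rgb1: text image channel, rgb2: background channel, a1: text alpha.
func hardLightChannel(rgb1: Double, rgb2: Double, a1: Double) -> Double {
    if a1 >= 0.5 {
        return 1 - (1 - rgb1) * (1 - rgb2)   // .screen
    } else {
        return rgb1 * rgb2                   // .multiply
    }
}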
I recreated the image using the compositing app The Foundry NUKE 11. Here Offset = 0.5 is the same as Add = 0.5.
I used the property Offset = 0.5 because transparency = 0.5 is the pivot point of the .hardLight compositing operation.
For color text
You need to use the .sourceAtop compositing operation in case you have ORANGE (or any other color) text in addition to the black-and-white text. Applying the .sourceAtop case of the setBlendMode method, you make Swift use the luminance of the background image to determine what to show. Alternatively, you can employ the CISourceAtopCompositing Core Image filter instead of CISourceOverCompositing.
bitmapContext.setBlendMode(.sourceAtop)
or
let compositingFilter = CIFilter(name: "CISourceAtopCompositing")
The .sourceAtop operation has the following formula: (Image1 * A2) + (Image2 * (1 – A1)). As you can see, you need two alpha channels: A1 is the alpha for the text and A2 is the alpha for the background image.
bitmapContext.textPosition = CGPoint(x: 15 * UIScreen.main.scale, y: (20 + 60) * UIScreen.main.scale)
let displayLineTextOrange = CTLineCreateWithAttributedString(NSAttributedString(string: text, attributes: [.foregroundColor: UIColor.orange, .font: UIFont.systemFont(ofSize: 58 * UIScreen.main.scale)]))
CTLineDraw(displayLineTextOrange, bitmapContext)
Final answer: the formula in CISourceOverCompositing is a good one. It is the right thing to do.
BUT
it is working in the wrong color space. In graphics programs you most likely work in an sRGB color space. On iOS, the Generic RGB color space is used. This is why the results don't match.
Using a custom CIFilter, I recreated the CISourceOverCompositing filter.
s1 is the text image.
s2 is the background image.
The kernel for it is this:
kernel vec4 opacity(__sample s1, __sample s2) {
    vec3 text = s1.rgb;
    float textAlpha = s1.a;
    vec3 background = s2.rgb;
    vec3 res = background * (1.0 - textAlpha) + text;
    return vec4(res, 1.0);
}
So to fix this color 'issue' you must convert the text image from RGB to sRGB. I guess your next question will be how to do that ;)
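Picking up on that question, one hedged possibility (my own sketch, not part of the original answer) is to tag the text image with an sRGB color space so that Core Image converts it into its working space before compositing:

import CoreImage

// Hypothetical helper: interpret the text image as sRGB and convert it
// into the Core Image working space before it is composited.
func tagAsSRGB(_ textImage: CIImage) -> CIImage {
    guard let sRGB = CGColorSpace(name: CGColorSpace.sRGB) else { return textImage }
    return textImage.matchedToWorkingSpace(from: sRGB) ?? textImage
}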
Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead.
Apple doc about color spaces

My pixel map image (NSImage) sometimes doesn't look like the RGB values I pass in, Swift Core Graphics Quartz

I have 2D pixel maps ([Double]) that I encode (correctly) into a [UInt32], which is fed into a CGDataProvider, which is the source for a CGImage, converted to an NSImage, and finally displayed in a custom view with:
class SingleStepMapView: NSView {
    @IBOutlet var dataSource: SingleStepViewController!

    override func drawRect(dirtyRect: NSRect) {
        let mapImage = TwoDPixMap(data: dataSource.mapData,
                                  width: dataSource.mapWidth, color: dataSource.mapColor).image
        mapImage.drawInRect(self.bounds, fromRect: NSZeroRect, operation: NSCompositingOperation.CompositeSourceAtop, fraction: 1.0)
        return
    }
}
The part where the NSImage is being constructed is in the image property of the TwoDPixMap instance...
var image: NSImage {
    var buffer = [UInt32]()
    let bufferScale = 1.0 / (mapMax - mapMin)
    for d in data {
        let scaled = bufferScale * (d - mapMin)
        buffer.append(color.rgba(scaled))
    }
    let bufferLength = 4 * height * width
    let dataProvider = CGDataProviderCreateWithData(nil, buffer, bufferLength, nil)!
    let bitsPerComponent = 8
    let bitsPerPixel = 32
    let bytesPerRow = 4 * width
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var bitMapInfo = CGBitmapInfo()
    bitMapInfo.insert(.ByteOrderDefault)
    // let bitMapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
    let interpolate = false
    let renderingIntent = CGColorRenderingIntent.RenderingIntentDefault
    let theImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitMapInfo, dataProvider, nil, interpolate, renderingIntent)!
    let value = NSImage(CGImage: theImage, size: NSSize(width: width, height: height))
    return value
}
I have checked that the created buffer values are always correct when fed into the CGDataProviderCreateWithData call. In particular, the following examples use sixty-four Double values (from 0.0 to 63.0) which are mapped into sixty-four UInt32 values using a color bar I've constructed such that the first pixel is pure red (0xff0000ff) and the last pixel is pure white (0xffffffff). In none of the values do we see anything that translates to black. When arrayed as a 64-by-one map (it is essentially the color bar) it should look like this...
but sometimes, just re-invoking the drawRect method with identical data and an identically created buffer, it looks completely wrong,
or sometimes almost right (one pixel the wrong color). I would post more examples but I'm restricted to only two links.
The problem is that you've created a data provider with an unsafe pointer to buffer, filled that pixel buffer, and created the image using that provider, but then allowed buffer to fall out of scope and be released even though the data provider still had an unsafe reference to that memory address.
But once that memory is deallocated, it can be reused for other things, producing weird behavior ranging from a single pixel that is off to wholesale alteration of that memory range as it's used for other things.
In an Objective-C environment, I'd do a malloc of the memory when I created the provider, and then the last parameter to CGDataProviderCreateWithData would be a C function that I'd write to free that memory. (And that memory-freeing function may not be called until much later, not until the image was released.)
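For illustration only (my sketch, using current Swift syntax rather than the Swift 2 syntax used elsewhere in this answer), that malloc-plus-release-callback approach might look roughly like this:

import CoreGraphics
import Foundation

// Hypothetical helper: copy the pixel array into malloc'd memory owned by
// the provider, and free it in the release callback when the image is done.
func makeProvider(from buffer: [UInt32]) -> CGDataProvider? {
    let byteCount = buffer.count * MemoryLayout<UInt32>.stride
    guard let bytes = malloc(byteCount) else { return nil }
    memcpy(bytes, buffer, byteCount)
    return CGDataProvider(dataInfo: nil,
                          data: bytes,
                          size: byteCount,
                          releaseData: { _, data, _ in
                              // Called when the image no longer needs the bytes.
                              free(UnsafeMutableRawPointer(mutating: data))
                          })
}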
Another approach that I've used is to call CGBitmapContextCreate to create a context, and then use CGBitmapContextGetData to get the buffer it created for the image. I then can fill that buffer as I see fit. But because that buffer was created for me, the OS takes care of the memory management:
func createImageWithSize(size: NSSize) -> NSImage {
    let width = Int(size.width)
    let height = Int(size.height)
    let bitsPerComponent = 8
    let bytesPerRow = 4 * width
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue // this depends upon how your `rgba` method was written; use whatever `bitmapInfo` makes sense for your app
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)!
    let pixelBuffer = UnsafeMutablePointer<UInt32>(CGBitmapContextGetData(context))
    var currentPixel = pixelBuffer
    for row in 0 ..< height {
        for column in 0 ..< width {
            let red = ...
            let green = ...
            let blue = ...
            currentPixel.memory = rgba(red: red, green: green, blue: blue, alpha: 255)
            currentPixel++
        }
    }
    let cgImage = CGBitmapContextCreateImage(context)!
    return NSImage(CGImage: cgImage, size: size)
}
That yields:
One could make an argument that the first technique (creating the data provider, doing a malloc of the memory, and then providing CGDataProviderCreateWithData a function parameter to free it) might be better, but then you're stuck writing a C function to do that cleanup, something I'd rather not do in a Swift environment. But it's up to you.