My pixel map image (NSImage) sometimes doesn't look like the RGB values I pass in (Swift, Core Graphics, Quartz)

I have two-dimensional pixel maps ([Double]) that I encode (correctly) into a [UInt32], which is fed into a CGDataProvider, which is the source for a CGImage, which is converted to an NSImage and finally displayed in a custom view with
class SingleStepMapView: NSView {
    @IBOutlet var dataSource: SingleStepViewController!

    override func drawRect(dirtyRect: NSRect) {
        let mapImage = TwoDPixMap(data: dataSource.mapData,
                                  width: dataSource.mapWidth,
                                  color: dataSource.mapColor).image
        mapImage.drawInRect(self.bounds, fromRect: NSZeroRect,
                            operation: NSCompositingOperation.CompositeSourceAtop,
                            fraction: 1.0)
    }
}
The part where the NSImage is being constructed is in the image property of the TwoDPixMap instance...
var image: NSImage {
    var buffer = [UInt32]()
    let bufferScale = 1.0 / (mapMax - mapMin)
    for d in data {
        let scaled = bufferScale * (d - mapMin)
        buffer.append(color.rgba(scaled))
    }
    let bufferLength = 4 * height * width
    let dataProvider = CGDataProviderCreateWithData(nil, buffer, bufferLength, nil)!
    let bitsPerComponent = 8
    let bitsPerPixel = 32
    let bytesPerRow = 4 * width
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var bitMapInfo = CGBitmapInfo()
    bitMapInfo.insert(.ByteOrderDefault)
    // let bitMapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
    let interpolate = false
    let renderingIntent = CGColorRenderingIntent.RenderingIntentDefault
    let theImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                 colorSpace, bitMapInfo, dataProvider, nil, interpolate, renderingIntent)!
    let value = NSImage(CGImage: theImage, size: NSSize(width: width, height: height))
    return value
}
I have checked that the created buffer values are always correct when fed into the CGDataProviderCreateWithData call. In particular, for the following examples, the data is sixty-four Double values (from 0.0 to 63.0) that will be mapped into sixty-four UInt32 values using a color bar I've constructed such that the first pixel is pure red (0xff0000ff) and the last pixel is pure white (0xffffffff). None of the values translate to black. When arrayed as a 64-by-1 map (it is essentially the color bar itself) it should look like this...
but sometimes, just by re-invoking the drawRect method with identical data and an identical buffer, it looks completely wrong,
or sometimes almost right (one pixel the wrong color). I would post more examples, but I'm restricted to only two links.

The problem is that you've created a data provider with an unsafe pointer to buffer, filled that pixel buffer, and created the image using that provider, but then allowed buffer to fall out of scope and be released, even though the data provider still had an unsafe reference to that memory address.
Once that memory is deallocated, it can be reused, producing weird behavior ranging from a single off-color pixel to wholesale alteration of the image as that memory range is used for other things.
In an Objective-C environment, I'd malloc the memory when I created the provider, and then the last parameter to CGDataProviderCreateWithData would be a C function I'd write to free that memory. (And that memory-freeing function may not be called until much later, not until the image is released.)
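For reference, later Swift versions do let you pass a no-capture closure as that release callback, so a minimal sketch of the malloc-plus-callback idea might look like the following. The helper name makePixelProvider and the caller-supplied fill step are illustrative, not API:

import Foundation
import CoreGraphics

// Sketch: wrap a malloc'd pixel buffer in a CGDataProvider that frees the
// memory only when Core Graphics is finished with it.
func makePixelProvider(width: Int, height: Int,
                       fill: (UnsafeMutablePointer<UInt32>) -> Void) -> CGDataProvider? {
    let byteCount = 4 * width * height
    guard let raw = malloc(byteCount) else { return nil }
    fill(raw.bindMemory(to: UInt32.self, capacity: width * height))
    return CGDataProvider(dataInfo: nil, data: raw, size: byteCount,
                          releaseData: { _, bytes, _ in
        // Called when the provider (and any image built on it) is released,
        // possibly long after the creating scope has exited.
        free(UnsafeMutableRawPointer(mutating: bytes))
    })
}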
Another approach that I've used is to call CGBitmapContextCreate to create a context, and then use CGBitmapContextGetData to get the buffer it created for the image. I then can fill that buffer as I see fit. But because that buffer was created for me, the OS takes care of the memory management:
func createImageWithSize(size: NSSize) -> NSImage {
    let width = Int(size.width)
    let height = Int(size.height)
    let bitsPerComponent = 8
    let bytesPerRow = 4 * width
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // This depends upon how your `rgba` method was written; use whatever
    // `bitmapInfo` makes sense for your app.
    let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)!
    let pixelBuffer = UnsafeMutablePointer<UInt32>(CGBitmapContextGetData(context))
    var currentPixel = pixelBuffer
    for row in 0 ..< height {
        for column in 0 ..< width {
            let red = ...
            let green = ...
            let blue = ...
            currentPixel.memory = rgba(red: red, green: green, blue: blue, alpha: 255)
            currentPixel++
        }
    }
    let cgImage = CGBitmapContextCreateImage(context)!
    return NSImage(CGImage: cgImage, size: size)
}
That yields the correct image.
One could make an argument that the first technique (create the data provider, doing a malloc of the memory and then providing CGDataProviderCreateWithData a function parameter to free it) might be better, but then you're stuck writing a C function to do that cleanup, something I'd rather not do in a Swift environment. But it's up to you.

Related

create transparent texture in swift

I just need to create a transparent texture (pixels with alpha 0).
func layerTexture() -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                                 width: width, height: height,
                                                                 mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    return temparyTexture!
}
When I open temparyTexture using the preview, it appears to be black. What is missing here?
UPDATE
I just tried to create the texture using a transparent image. Code:
func layerTexture(imageData: Data) -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let bytesPerRow = width * 4
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width, height: height,
                                                                 mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    let region = MTLRegionMake2D(0, 0, width, height)
    imageData.withUnsafeBytes { (u8Ptr: UnsafePointer<UInt8>) in
        let rawPtr = UnsafeRawPointer(u8Ptr)
        temparyTexture?.replace(region: region, mipmapLevel: 0, withBytes: rawPtr, bytesPerRow: bytesPerRow)
    }
    return temparyTexture!
}
The method is called as follows:
let image = UIImage(named: "layer1.png")!
let imageData = UIImagePNGRepresentation(image)
self.layerTexture(imageData: imageData!)
where layer1.png is a transparent PNG. But it crashes with the message "Thread 1: EXC_BAD_ACCESS (code=1, address=0x107e8c000)" at the point where I try to replace the texture's contents. I believe it's because the image data is compressed and the raw pointer should point to uncompressed data. How can I resolve this?
Am I on the right path, or headed in completely the wrong direction? Are there any other alternatives? All I need is to create a transparent texture.
Pre-edit: When you quick-look a transparent texture, it will appear black. I just double-checked with some code I have running stably in production - that is the expected result.
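If you want to guarantee alpha-0 contents rather than relying on the texture's undefined initial memory, one option is to overwrite it with zeroed bytes. A minimal sketch (the helper name is illustrative; it assumes a .bgra8Unorm texture with a CPU-accessible storage mode, so replace(region:...) is permitted):

import Metal

// Sketch: explicitly clear a newly created texture to transparent
// (all-zero BGRA bytes, hence alpha 0).
func makeClearTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width, height: height,
                                                              mipmapped: false)
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }
    let bytesPerRow = width * 4
    let zeroes = [UInt8](repeating: 0, count: bytesPerRow * height)
    texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                    mipmapLevel: 0,
                    withBytes: zeroes,
                    bytesPerRow: bytesPerRow)
    return texture
}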
Post-edit: You are correct, you should not be copying PNG or JPEG data to a MTLTexture's contents directly. I would recommend doing something like this:
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue]
var status = CVPixelBufferCreate(nil, Int(image.size.width), Int(image.size.height),
                                 kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == noErr)

let coreImage = CIImage(image: image)!
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
context.render(coreImage, to: pixelBuffer!)

var textureWrapper: CVMetalTexture?
status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   GPUManager.shared.textureCache, pixelBuffer!, nil, .bgra8Unorm,
                                                   CVPixelBufferGetWidth(pixelBuffer!),
                                                   CVPixelBufferGetHeight(pixelBuffer!), 0, &textureWrapper)

let texture = CVMetalTextureGetTexture(textureWrapper!)!
// Use texture now as your Metal texture. It is map-bound to the CVPixelBuffer's underlying memory.
The issue you are running into is that it is actually pretty hard to fully grasp how bitmaps work and how they can be laid out differently. Graphics is a very closed field with lots of esoteric terminology, some of which refers to things that take years to grasp, some of which refers to things that are trivial but people just picked a weird word to call them by. My main pointers are:
Get out of UIImage land as early in your code as possible. The best way to avoid overhead and delays when you go into Metal land is to get your images into a GPU-compatible representation as soon as you can.
Once you are outside of UIImage land, always know your channel order (RGBA, BGRA). At any point in the code you are editing, you should have a mental model of what pixel format each CVPixelBuffer / MTLTexture has.
Read up on premultiplied vs. non-premultiplied alpha; you may not run into issues with it, but it threw me off repeatedly when I was first learning.
total byte size of a bitmap/pixelbuffer = bytesPerRow * height
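As a quick sanity check of that last formula, note that bytesPerRow can include row-alignment padding, so query it rather than assuming width * 4. A sketch for a non-planar buffer (the helper name is illustrative):

import CoreVideo

// Sketch: the true byte size of a non-planar CVPixelBuffer.
func byteSize(of pixelBuffer: CVPixelBuffer) -> Int {
    // bytesPerRow may exceed width * bytesPerPixel because of alignment padding.
    return CVPixelBufferGetBytesPerRow(pixelBuffer) * CVPixelBufferGetHeight(pixelBuffer)
}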

Swift : Convert byte array into CIImage

I am looking for a way to convert a byte array to a CIImage so that I can feed it into an ML model for classification. I am using a REST HTTP server, where I send a POST request to the server with the payload being the image. The image bytes received by the server need to be processed and converted into a CIImage on macOS so that it can be fed into an ML model that accepts requests of type VNImageRequestHandler(ciImage: <ciimage>).
Could someone give an example of how to do this in Swift?
VNImageRequestHandler : NSObject
let data = Data(bytes)
let imgHandler = VNImageRequestHandler(ciImage: data)
The above data variable needs to be converted to type CIImage.
On the HTTP server side, I am receiving the bytes of the image like this:
imageData = request.body.bytes
Convert the byte array to a CGImage using this method. You must make sure that your bytes are raw 32-bit RGBA pixel bytes.
func byteArrayToCGImage(raw: UnsafeMutablePointer<UInt8>, // your byte array
                        w: Int,                           // your image's width
                        h: Int                            // your image's height
                        ) -> CGImage! {
    // 4 bytes (RGBA channels) for each pixel
    let bytesPerPixel: Int = 4
    // 8 bits per channel
    let bitsPerComponent: Int = 8
    let bitsPerPixel = bytesPerPixel * bitsPerComponent
    // channels in each row (width)
    let bytesPerRow: Int = w * bytesPerPixel
    let cfData = CFDataCreate(nil, raw, w * h * bytesPerPixel)
    let cgDataProvider = CGDataProvider(data: cfData!)!
    let deviceColorSpace = CGColorSpaceCreateDeviceRGB()
    let image: CGImage! = CGImage(width: w,
                                  height: h,
                                  bitsPerComponent: bitsPerComponent,
                                  bitsPerPixel: bitsPerPixel,
                                  bytesPerRow: bytesPerRow,
                                  space: deviceColorSpace,
                                  bitmapInfo: [],
                                  provider: cgDataProvider,
                                  decode: nil,
                                  shouldInterpolate: true,
                                  intent: CGColorRenderingIntent.defaultIntent)
    return image
}
Using this method, you can convert to CIImage like this.
let cgimage = byteArrayToCGImage(raw: <#Pointer to Your byte array#>,
                                 w: <#your image's width#>,
                                 h: <#your image's height#>)
if cgimage != nil {
    let ciImage = CIImage(cgImage: cgimage)
}
According to the comments, your data might be raw RGB bytes rather than RGBA. In that case, you will have to allocate a new buffer, put 255 into each alpha channel manually, and pass that buffer to the method.
Updated for conversion of RGB (3 bytes per pixel) to 32-bit RGBA:
func convertTo32bitsRGBA(from32bitsRGB pointer: UnsafeMutablePointer<UInt8>!,
                         width: Int,
                         height: Int) -> UnsafeMutablePointer<UInt8> {
    let pixelCount = width * height
    let memorySize = pixelCount * 4
    let newBuffer = malloc(memorySize).bindMemory(to: UInt8.self, capacity: pixelCount)
    var i = 0
    while i < pixelCount {
        let oldBufferIndex = i * 3
        let newBufferIndex = i * 4
        // red channel
        newBuffer.advanced(by: newBufferIndex).pointee = pointer.advanced(by: oldBufferIndex).pointee
        // green channel
        newBuffer.advanced(by: newBufferIndex + 1).pointee = pointer.advanced(by: oldBufferIndex + 1).pointee
        // blue channel
        newBuffer.advanced(by: newBufferIndex + 2).pointee = pointer.advanced(by: oldBufferIndex + 2).pointee
        // alpha channel
        newBuffer.advanced(by: newBufferIndex + 3).pointee = 0xff
        // &+ is used for a small performance gain
        i = i &+ 1
    }
    return newBuffer
}
You can call the converter method with your RGB image buffer as follows:
let newImageBuffer = convertTo32bitsRGBA(from32bitsRGB: <#Your RGB image buffer#>,
width: <#Your image pixel row count or width#>,
height: <#Your image pixel column count or height#>)
But remember: as in C, C++, or Objective-C, you are responsible for releasing the memory returned by this method. These are pointers whose memory is not managed by the compiler.
You can release it with a simple call:
newImageBuffer.deallocate()
After deallocation, you must not access that memory. If you do, you will get EXC_BAD_ACCESS (the bad-access exception thrown by the OS for accessing memory you do not own).
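One defensive pattern, reusing the two methods above, is to tie the deallocation to scope exit with defer. This works because CFDataCreate copies the bytes, so the intermediate buffer can be freed once the CGImage exists. A sketch (the wrapper function and its parameter names are illustrative):

// Sketch: scope-bound cleanup for the manually allocated RGBA buffer.
func makeImage(rgbPointer: UnsafeMutablePointer<UInt8>, width: Int, height: Int) -> CGImage? {
    let rgbaBuffer = convertTo32bitsRGBA(from32bitsRGB: rgbPointer, width: width, height: height)
    // byteArrayToCGImage copies the bytes via CFDataCreate, so freeing the
    // buffer after the image is created is safe.
    defer { rgbaBuffer.deallocate() }
    return byteArrayToCGImage(raw: rgbaBuffer, w: width, h: height)
}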

Getting Pixel Information From CGImage and Changing It with Swift 4

The thing is, I can get pixel information with the following code but cannot write it back.
var pixelsCopy: [UInt8] = []
let data: NSData = cgImage!.dataProvider!.data!
var pixels = data.bytes.assumingMemoryBound(to: UInt8.self)
var index = 0
while index < widthOfImage * heightOfImage * 4 {
    pixelsCopy.append(pixels[index])
    pixelsCopy.append(pixels[index + 1])
    pixelsCopy.append(pixels[index + 2])
    pixelsCopy.append(pixels[index + 3])
    index += 4
}
By writing back, I mean something like the following:
pixels[index] = newRedValue
I am getting a "subscript is get-only" error.
We could do this in Obj-C, as far as I recall.
You can't write pixels back to a CGImage. A CGImage is read-only. You need to:
Create a CGContext.
Draw the image into it.
Draw whatever you want on top of the image.
Ask the context to create a new image.
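A minimal sketch of those four steps (assuming an RGBA image with 8 bits per component; the helper name is illustrative, and you should adjust the bitmapInfo to match your pixel layout):

import CoreGraphics

// Sketch: copy a read-only CGImage into a writable bitmap context,
// mutate the pixels, and mint a new image from the context.
func modifiedImage(from cgImage: CGImage) -> CGImage? {
    let width = cgImage.width
    let height = cgImage.height
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 4 * width,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    // Steps 1-2: create the context and draw the existing image into it.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    // Step 3: draw on top, or poke pixel memory directly.
    if let pixels = context.data?.assumingMemoryBound(to: UInt8.self) {
        pixels[0] = 255 // e.g. set the first pixel's red channel
    }
    // Step 4: ask the context for a new image.
    return context.makeImage()
}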
This is a kind of answer.
I could not change the pixels array, but I created the pixelsCopy array, played with the RGB and alpha values of that array, and then created a CGImage and from that a UIImage.
But sometimes the final image becomes distorted; I do not know why at the moment.
To convert the pixelsCopy array to a CGImage, I used this code:
let numComponents = 4
let length = givenImageWidth * givenImageHeight * numComponents
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let colorspace = CGColorSpaceCreateDeviceRGB()
let rgbData = CFDataCreate(nil, pixelsCopy, length)!
let provider = CGDataProvider(data: rgbData)!
let rgbImageRef = CGImage(width: size_w,
                          height: size_h,
                          bitsPerComponent: 8,
                          bitsPerPixel: 8 * numComponents,
                          bytesPerRow: givenImageWidth * numComponents,
                          space: colorspace,
                          bitmapInfo: bitmapInfo,
                          provider: provider,
                          decode: nil,
                          shouldInterpolate: true,
                          intent: CGColorRenderingIntent.defaultIntent)!
let finalUIImage = UIImage(cgImage: rgbImageRef)
anImageView.image = finalUIImage
pixelsCopy.removeAll()
If you want to see the good and distorted images, check out the R letters at the bottom of the images below.
The R on the left is from the given image; the one on the right is from my finalUIImage.

Continually scrolling dynamic bitmap in Core Graphics

I have been struggling with a performance-related issue in a simple Cocoa-based visualization program that continually draws a heat map as a bitmap and displays the bitmap on screen. Broadly, the heat map is a continually scrolling source of realtime information. At each redraw, new pixels are added at the right side of the bitmap and the bitmap is shifted to the left, creating an ongoing flow of information.
My current approach is implemented in a custom NSView, with a circular buffer for the underlying pixels. The image is actually drawn rotated 90 degrees, so adding new data is simply a matter of appending to the buffer (adding rows to the image). I then use CGDataProviderCreateWithData to create a Core Graphics image and draw that to a rotated context. (The code is below, although it is less important to the question.)
My hope is to figure out a more performant way of achieving this. It seems like redrawing the full bitmap each time is excessive, and in my attempts it uses a surprisingly high amount of CPU (~20-30%). I feel like there should be a way to somehow instruct the GPU to cache, but shift, the existing pixels and then append new data.
I am considering an approach that uses two contexts created with CGBitmapContextCreate: one context for what is currently on the screen and one for modification. Prior pixels would be copied from one context to the other, but I am not sure that will improve performance significantly. Before I proceed too far down that rabbit hole, are there better ways to handle this kind of updating?
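For illustration only, the copy step of that idea can also be done in place within a single bitmap context. A rough CPU-side sketch of shift-left-and-append (unrotated orientation, 4 bytes per pixel, the function name is illustrative, and this is not benchmarked; the rotated, row-append variant would move whole rows instead):

import Foundation
import CoreGraphics

// Sketch: scroll a bitmap context's contents one pixel to the left and
// append a new column of pixels on the right.
func scrollLeft(context: CGContext, newColumn: [UInt32]) {
    guard let base = context.data?.assumingMemoryBound(to: UInt8.self) else { return }
    let width = context.width
    let height = context.height
    let bytesPerRow = context.bytesPerRow
    for row in 0 ..< min(height, newColumn.count) {
        let rowStart = base + row * bytesPerRow
        // Shift the row left by one pixel...
        memmove(rowStart, rowStart + 4, (width - 1) * 4)
        // ...then write the incoming pixel into the rightmost column.
        withUnsafeBytes(of: newColumn[row]) { src in
            memcpy(rowStart + (width - 1) * 4, src.baseAddress, 4)
        }
    }
}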
Here is the relevant code from my current implementation, although I think I am more in need of higher level guidance:
class PlotView: NSView {

    private let bytesPerPixel = 4
    private let height = 512
    private let bufferLength = 5242880
    private var buffer: TPCircularBuffer

    // ... Other code that appends data to buffer and sets needsDisplay ...

    override func drawRect(dirtyRect: NSRect) {
        super.drawRect(dirtyRect)

        // get context
        guard let context = NSGraphicsContext.currentContext() else {
            return
        }

        // get colorspace
        guard let colorSpace = CGColorSpaceCreateDeviceRGB() else {
            return
        }

        // number of samples to draw
        let maxSamples = Int(frame.width) * height
        let maxBytes = maxSamples * bytesPerPixel

        // bytes for reading
        var availableBytes: Int32 = 0
        let bufferStart = UnsafeMutablePointer<UInt8>(TPCircularBufferTail(&buffer, &availableBytes))

        let samplesStart: UnsafeMutablePointer<UInt8>
        let samplesCount: Int
        let samplesBytes: Int

        // buffer management
        if Int(availableBytes) > maxBytes {
            // advance buffer start
            let bufferOffset = Int(availableBytes) - maxBytes

            // set sample start
            samplesStart = bufferStart.advancedBy(bufferOffset)

            // consume values
            TPCircularBufferConsume(&buffer, Int32(bufferOffset))

            // number of samples
            samplesCount = maxSamples
            samplesBytes = maxBytes
        }
        else {
            // set to start
            samplesStart = bufferStart

            // number of samples
            samplesBytes = Int(availableBytes)
            samplesCount = samplesBytes / bytesPerPixel
        }

        // get dimensions
        let rows = height
        let columns = samplesCount / rows

        // bitmap info
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)

        // counts
        let bytesPerComponent = sizeof(UInt8)
        let bytesPerPixel = sizeof(UInt8) * bytesPerPixel

        // prepare image
        let provider = CGDataProviderCreateWithData(nil, samplesStart, samplesBytes, nil)
        let image = CGImageCreate(rows, columns, 8 * bytesPerComponent, 8 * bytesPerPixel,
                                  rows * bytesPerPixel, colorSpace, bitmapInfo, provider,
                                  nil, false, .RenderingIntentDefault)

        // make rotated size
        let rotatedSize = CGSize(width: frame.width, height: frame.height)
        let rotatedCenterX = rotatedSize.width / 2, rotatedCenterY = rotatedSize.height / 2

        // rotate, from: http://stackoverflow.com/questions/10544887/rotating-a-cgimage
        CGContextTranslateCTM(context.CGContext, rotatedCenterX, rotatedCenterY)
        CGContextRotateCTM(context.CGContext, CGFloat(0 - M_PI_2))
        CGContextScaleCTM(context.CGContext, 1, -1)
        CGContextTranslateCTM(context.CGContext, -rotatedCenterY, -rotatedCenterX)

        // draw
        CGContextDrawImage(context.CGContext,
                           NSRect(origin: NSPoint(x: rotatedSize.height - CGFloat(rows), y: 0),
                                  size: NSSize(width: rows, height: columns)),
                           image)
    }
}

Is programmatically inverting the colors of an image possible?

I want to take an image and invert the colors in iOS.
To expand on quixoto's answer and because I have relevant source code from a project of my own, if you were to need to drop to on-CPU pixel manipulation then the following, which I've added exposition to, should do the trick:
@implementation UIImage (NegativeImage)

- (UIImage *)negativeImage
{
    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGB+alpha bitmap context in BGRA colour space
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a
    // UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    return returnImage;
}

@end
So that adds a category method to UIImage that:
creates a clear CoreGraphics bitmap context that it can access the memory of
draws the UIImage to it
runs through every pixel, converting from premultiplied alpha to uninflected RGB, inverting each channel separately, multiplying by alpha again and storing back
gets an image from the context and wraps it into a UIImage
cleans up after itself, and returns the UIImage
With CoreImage:
#import <CoreImage/CoreImage.h>

@implementation UIImage (ColorInverse)

+ (UIImage *)inverseColor:(UIImage *)image {
    CIImage *coreImage = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorInvert"];
    [filter setValue:coreImage forKey:kCIInputImageKey];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    return [UIImage imageWithCIImage:result];
}

@end
Sure, it's possible. One way is to use the "difference" blend mode (kCGBlendModeDifference). See this question (among others) for an outline of the code to set up the image processing. Use your image as the bottom (base) image, and then draw a pure white bitmap on top of it.
You can also do the per-pixel operation manually by getting the CGImageRef and drawing it into a bitmap context, and then looping over the pixels in the bitmap context.
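A minimal Swift sketch of the blend-mode approach (the function name is illustrative; it assumes an opaque source image, since fully transparent regions come out white with this technique):

import UIKit

// Sketch: invert by drawing the image, then filling with white in
// .difference blend mode, so each channel becomes 255 - value.
func invertedImage(of image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { ctx in
        image.draw(at: .zero)
        ctx.cgContext.setBlendMode(.difference)
        ctx.cgContext.setFillColor(UIColor.white.cgColor)
        ctx.cgContext.fill(CGRect(origin: .zero, size: image.size))
    }
}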
Swift 3 update (from @BadPirate's answer):
extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
        }
        return UIImage(ciImage: result)
    }
}
Created a Swift extension to do just this. Also, because CIImage-based UIImages break down (most libraries assume the CGImage is set), I added an option to return a UIImage backed by a rendered CGImage instead:
extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self)
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.valueForKey(kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(CGImage: CIContext(options: nil).createCGImage(result, fromRect: result.extent))
        }
        return UIImage(CIImage: result)
    }
}
Tommy's answer is THE answer, but I'd like to point out that this could be a really intensive and time-consuming task for bigger images. There are two frameworks that could help you with manipulating images:
CoreImage
Accelerate
It is also really worth mentioning the amazing GPUImage framework from Brad Larson. GPUImage makes the routines run on the GPU using custom fragment shaders in an OpenGL ES 2.0 environment, with remarkable speed improvements. With Core Image, if a negative filter is available, you can choose CPU or GPU; with Accelerate, all routines run on the CPU but use vector-math image processing.
Updated to a Swift 5 version of @MLBDG's answer:
extension UIImage {
    func inverseImage(cgResult: Bool) -> UIImage? {
        let coreImage = UIKit.CIImage(image: self) // note: self.ciImage is nil for CGImage-backed UIImages, so build the CIImage here
        guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        guard let result = filter.value(forKey: kCIOutputImageKey) as? UIKit.CIImage else { return nil }
        if cgResult { // I've found that UIImages that are based on CIImages don't work properly with a lot of calls
            return UIImage(cgImage: CIContext(options: nil).createCGImage(result, from: result.extent)!)
        }
        return UIImage(ciImage: result)
    }
}