Creating CGImage via byte array sometimes results in EXC_BAD_ACCESS - swift

I'm trying to create a CGImage from a byte array, with something similar to this example function that generates a red square on a black field:
var bgrArray: [UInt8] = Array(repeating: 0, count: 480*480*4)
for i in 50..<250 {
    for j in 50..<250 {
        bgrArray[(i*480+j)*4] = 255
        bgrArray[(i*480+j)*4+1] = 0
        bgrArray[(i*480+j)*4+2] = 0
        bgrArray[(i*480+j)*4+3] = 0
    }
}
let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
    return
}
let provider = CGDataProvider(dataInfo: nil, data: bgrArray, size: bgrArray.count, releaseData: releaseMaskImagePixelData)!
let colorspace = CGColorSpaceCreateDeviceRGB()
let bitsPerComponent = 8
let bitsPerPixel = 32
let bytesPerRow = 4 * 480
let bitmapInfo: CGBitmapInfo = [.byteOrder32Big, CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue)]
let img = CGImage(width: 480, height: 480, bitsPerComponent: bitsPerComponent, bitsPerPixel: bitsPerPixel, bytesPerRow: bytesPerRow, space: colorspace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: false, intent: .defaultIntent)
let uiimage = UIImage(cgImage: img!)
return CIImage(image: uiimage)!
I have this exact bit of code copied into two different projects. In one project it always succeeds, and in the other the let img = CGImage(...) line always fails with an EXC_BAD_ACCESS. The fact that it succeeds in one project and fails in another confuses me. Can this bit of code be affected by other things going on in the app?

Turns out this is related to optimization: the exception only occurs when compiler optimization is turned on.
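A likely explanation (my reading of the crash, not something confirmed in the original post): passing the Swift array bgrArray directly to the UnsafeRawPointer parameter of CGDataProvider(dataInfo:data:size:releaseData:) gives Core Graphics a pointer that is only guaranteed valid for the duration of that call, so the image can end up reading memory the optimizer has already reclaimed. A minimal sketch of a safer variant (the function name is mine), which copies the pixels into a Data so the provider owns its backing storage:

import UIKit

func redSquareImage() -> CGImage? {
    let width = 480, height = 480
    var rgba = [UInt8](repeating: 0, count: width * height * 4)
    for i in 50..<250 {
        for j in 50..<250 {
            rgba[(i * width + j) * 4] = 255   // red component; the other channels stay 0
        }
    }
    // Data(rgba) copies the bytes, so the provider keeps them alive on its own.
    guard let provider = CGDataProvider(data: Data(rgba) as CFData) else { return nil }
    let bitmapInfo: CGBitmapInfo = [.byteOrder32Big, CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue)]
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: 8,
                   bitsPerPixel: 32,
                   bytesPerRow: width * 4,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .defaultIntent)
}

Because the copied buffer's lifetime is managed by the provider rather than by the surrounding function, this version behaves the same with optimization on or off.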

Related

What's the most efficient way from raw pixels to a PixelBuffer?

I'm receiving raw pixels of an image and successfully transforming them into a CGImage with the following function:
func imageFromTexturePixels(raw: UnsafePointer<UInt8>, w: Int, h: Int) -> CGImage? {
    // 4 bytes (RGBA channels) for each pixel
    let bytesPerPixel: Int = 4
    // 8 bits per channel
    let bitsPerComponent: Int = 8
    let bitsPerPixel = bytesPerPixel * bitsPerComponent
    // bytes in each row (width * bytesPerPixel)
    let bytesPerRow: Int = w * bytesPerPixel
    let cfData = CFDataCreate(nil, raw, w * h * bytesPerPixel)
    let cgDataProvider = CGDataProvider(data: cfData!)!
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)
    let image: CGImage! = CGImage(width: w,
                                  height: h,
                                  bitsPerComponent: bitsPerComponent,
                                  bitsPerPixel: bitsPerPixel,
                                  bytesPerRow: bytesPerRow,
                                  space: deviceColorSpace,
                                  bitmapInfo: bitmapInfo,
                                  provider: cgDataProvider,
                                  decode: nil,
                                  shouldInterpolate: false,
                                  intent: CGColorRenderingIntent.defaultIntent)
    return image
}
What I really want in the end is a CVPixelBuffer, so I'm transforming that image into a pixel buffer using this extension.
Although all this works, it looks a bit inefficient, and I'd like to know how to transform the raw pixels directly into a CVPixelBuffer without converting them into a CGImage first.
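One way to skip the CGImage step entirely (a minimal sketch, assuming the incoming bytes are tightly packed 32-bit pixels and stay alive as long as the buffer is in use, which the question doesn't spell out) is to wrap the raw bytes with CVPixelBufferCreateWithBytes:

import CoreVideo

func pixelBufferFromTexturePixels(raw: UnsafeMutableRawPointer, w: Int, h: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    // kCVPixelFormatType_32BGRA is an assumption; use the constant that matches
    // the actual channel order of the incoming bytes.
    let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                              w,
                                              h,
                                              kCVPixelFormatType_32BGRA,
                                              raw,
                                              w * 4,   // bytesPerRow
                                              nil,     // releaseCallback
                                              nil,     // releaseRefCon
                                              nil,     // pixelBufferAttributes
                                              &pixelBuffer)
    return status == kCVReturnSuccess ? pixelBuffer : nil
}

If the source buffer can't be kept alive, CVPixelBufferCreate followed by locking the base address and copying the bytes in is the copying alternative.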

how to properly extract the array of numbers from an image in swift?

I'm trying to extract the array of numbers from a UIImage in Swift, but at the end I get only a bunch of zeros and no useful information at all.
This is the code I wrote to try to accomplish this:
var photo = UIImage(named: "myphoto.jpg")!
var withAlpha = true
var bytesPerPixels: Int = withAlpha ? 4 : 3
var width: Int = Int(photo.size.width)
var height: Int = Int(photo.size.height)
var bitsPerComponent: Int = 8
var bytesPerRow = bytesPerPixels * width
var totalPixels = (bytesPerPixels * width) * height
var alignment = MemoryLayout<UInt32>.alignment
var data = UnsafeMutableRawPointer.allocate(byteCount: totalPixels, alignment: alignment )
var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue).rawValue
var colorSpace = CGColorSpaceCreateDeviceRGB()
let ctx = CGContext(data: data, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
let bindedPointer: UnsafeMutablePointer<UInt32> = data.bindMemory(to: UInt32.self, capacity: totalPixels)
var pixels = UnsafeMutableBufferPointer.init(start: bindedPointer, count: totalPixels)
for p in pixels {
    print(p, Date())
}
At the end I tried to bind the UnsafeMutableRawPointer to extract the values, but had no success.
What could I be missing here?
Thank you all in advance.
A few observations:
You need to draw the image to the context.
I'd also suggest that rather than creating a buffer you have to manage manually, you pass nil and let the OS create (and manage) that buffer for you.
Note that totalPixels should be just width * height.
Your code assumes the scale of the image is 1. That’s not always a valid assumption. I’d grab the cgImage and use its width and height.
Even if you have only three components, you still need to use 4 bytes per pixel.
Thus:
guard
    let photo = UIImage(named: "myphoto.jpg"),
    let cgImage = photo.cgImage
else { return }

let bytesPerPixels = 4
let width = cgImage.width
let height = cgImage.height
let bitsPerComponent: Int = 8
let bytesPerRow = bytesPerPixels * width
let totalPixels = width * height
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue).rawValue
let colorSpace = CGColorSpaceCreateDeviceRGB()

guard
    let ctx = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
    let data = ctx.data
else { return }

ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

let pointer = data.bindMemory(to: UInt32.self, capacity: totalPixels)
let pixels = UnsafeMutableBufferPointer(start: pointer, count: totalPixels)
for p in pixels {
    print(String(p, radix: 16), Date())
}
You need to draw the image into the context.
ctx?.draw(photo.cgImage!, in: CGRect(origin: .zero, size: photo.size))
Add that just after creating the CGContext.

Bitmap to Metal Texture and back - How does the pixelFormat work?

I'm having problems understanding how the pixelFormat of an MTLTexture relates to the properties of an NSBitmapImageRep.
In particular, I want to use a Metal compute kernel (or the built-in MPS method) to subtract one image from another and KEEP the negative values temporarily.
I have a method that creates a MTLTexture from a bitmap with a specified pixelFormat:
func textureFrom(bitmap: NSBitmapImageRep, pixelFormat: MTLPixelFormat) -> MTLTexture? {
    guard !bitmap.isPlanar else {
        return nil
    }
    let region = MTLRegionMake2D(0, 0, bitmap.pixelsWide, bitmap.pixelsHigh)
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: pixelFormat, width: bitmap.pixelsWide, height: bitmap.pixelsHigh, mipmapped: false)
    guard let texture = device.makeTexture(descriptor: textureDescriptor),
          let src = bitmap.bitmapData else { return nil }
    texture.replace(region: region, mipmapLevel: 0, withBytes: src, bytesPerRow: bitmap.bytesPerRow)
    return texture
}
Then I use the textures to do some computation (like a subtraction) and when I'm done, I want to get a bitmap back. In the case of textures with a .r8Snorm pixelFormat, I thought I could do:
func bitmapFrom(r8SnormTexture: MTLTexture?) -> NSBitmapImageRep? {
    guard let texture = r8SnormTexture,
          texture.pixelFormat == .r8Snorm else { return nil }
    let bytesPerPixel = 1
    let imageByteCount = Int(texture.width * texture.height * bytesPerPixel)
    let bytesPerRow = texture.width * bytesPerPixel
    var src = [Float](repeating: 0, count: imageByteCount)
    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    texture.getBytes(&src, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let bitsPerComponent = 8
    let context = CGContext(data: &src, width: texture.width, height: texture.height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    guard let dstImageFilter = context?.makeImage() else {
        return nil
    }
    return NSBitmapImageRep(cgImage: dstImageFilter)
}
But the negative values are not preserved; they are clamped to zero somehow.
Any insight on how Swift goes from bitmap to texture and back would be appreciated.
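For what it's worth, a minimal sketch (my assumption about the readback, not an answer from the original thread) of pulling the .r8Snorm data back while keeping the sign: each pixel is one signed byte, so copying into [Int8] preserves the negative values, whereas an 8-bit grayscale CGContext only stores unsigned components and will clamp them.

import Metal

func signedValues(from texture: MTLTexture) -> [Float]? {
    guard texture.pixelFormat == .r8Snorm else { return nil }
    let bytesPerRow = texture.width   // 1 byte per pixel for .r8Snorm
    var raw = [Int8](repeating: 0, count: texture.width * texture.height)
    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    raw.withUnsafeMutableBytes { buffer in
        texture.getBytes(buffer.baseAddress!,
                         bytesPerRow: bytesPerRow,
                         from: region,
                         mipmapLevel: 0)
    }
    // snorm mapping: -128...127 becomes roughly -1.0...1.0
    return raw.map { max(-1.0, Float($0) / 127.0) }
}

Building an NSBitmapImageRep from those values would then require remapping them into an unsigned range (or using a float pixel format), since the bitmap itself cannot hold negatives.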

Empty CGContext

In Objective-C I was able to use CGBitmapContextCreate to create an empty context. I am trying to do the same in Swift 3, but for some reason it is nil. What am I missing?
let inImage: UIImage = ...
let width = Int(inImage.size.width)
let height = Int(inImage.size.height)
let bitmapBytesPerRow = width * 4
let bitmapByteCount = bitmapBytesPerRow * height
let pixelData = UnsafeMutablePointer<UInt8>.allocate(capacity: bitmapByteCount)
let context = CGContext(data: pixelData,
                        width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bytesPerRow: bitmapBytesPerRow,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue)
I'm not sure what the code in the linked article would do, but two things are different in your Swift code:
bytesPerRow: width // width * 4 (== bitmapBytesPerRow)
space: NULL // CGColorSpaceCreateDeviceRGB()
The documentation of CGBitmapContextCreate does not say anything about supplying NULL for the colorspace, but the header doc says "The number of components for each pixel is specified by space", so, at the very least, CGColorSpaceCreateDeviceRGB() is not appropriate for alphaOnly (which should have only 1 component per pixel).
As far as I tested, this code returns a non-nil CGContext:
let bitmapBytesPerRow = width //<-
let bitmapByteCount = bitmapBytesPerRow * height
let pixelData = UnsafeMutablePointer<UInt8>.allocate(capacity: bitmapByteCount)
let context = CGContext(data: pixelData,
                        width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bytesPerRow: bitmapBytesPerRow,
                        space: CGColorSpaceCreateDeviceGray(), //<-
                        bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue)
But, not sure if this works for your purpose or not.
I was working on this and faced the same issue.
The solution I found is to use:
var colorSpace = CGColorSpace.init(name: CGColorSpace.sRGB)!
let context = CGContext(data: nil,
                        width: Int(outputSize.width),
                        height: Int(outputSize.height),
                        bitsPerComponent: self.bitsPerComponent,
                        bytesPerRow: bitmapBytesPerRow,
                        space: colorSpace,
                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
Actually, my image's color space was indexed, which can't be used to make a context.
So instead of using the image's own colorSpace I made my own by using
var colorSpace = CGColorSpace.init(name: CGColorSpace.sRGB)!
and passed it to the context.
That resolved my error (the nil context issue).
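Putting the two answers together, a minimal sketch (my combination, with a helper name of my own) of a context that comes back non-nil for ordinary RGBA work: keep bytesPerRow at width * 4, pair an RGB color space with a premultiplied alpha format, and pass data: nil so Core Graphics allocates and manages the buffer itself.

import UIKit

func makeRGBAContext(for image: UIImage) -> CGContext? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    return CGContext(data: nil,                 // let Core Graphics own the pixel buffer
                     width: width,
                     height: height,
                     bitsPerComponent: 8,
                     bytesPerRow: width * 4,    // 4 bytes per pixel (RGBA)
                     space: CGColorSpaceCreateDeviceRGB(),
                     bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
}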

Xcode 8 / Swift 3 - Type 'CGColorRenderingIntent' has no member 'RenderingIntentDefault'

I've successfully fixed many of the Swift 3 conversion errors except for the last line. It works in Xcode 7 but not in Xcode 8.
It's also worth noting that Xcode 7 has documentation on CGColorRenderingIntent but Xcode 8 doesn't.
Type 'CGColorRenderingIntent' has no member 'RenderingIntentDefault'
Code I'm working with:
import CoreImage
// omitted code
public func imageFromPixels(pixels: ([Pixel], width: Int, height: Int)) -> CIImage {
    let bitsPerComponent = 8
    let bitsPerPixel = 32
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue) // alpha is last
    let providerRef = CGDataProvider(data: NSData(bytes: pixels.0, length: pixels.0.count * sizeof(Pixel)))
    let image = CGImageCreate(pixels.1, pixels.2, bitsPerComponent, bitsPerPixel, pixels.1 * sizeof(Pixel), rgbColorSpace, bitmapInfo, providerRef!, nil, true, CGColorRenderingIntent.RenderingIntentDefault)
    return CIImage(CGImage: image!)
}
Apple documentation:
enum CGColorRenderingIntent : Int32 {
    case RenderingIntentDefault
    case RenderingIntentAbsoluteColorimetric
    case RenderingIntentRelativeColorimetric
    case RenderingIntentPerceptual
    case RenderingIntentSaturation
}
Updated Code:
let image = CGImage(width: pixels.1,
                    height: pixels.2,
                    bitsPerComponent: bitsPerComponent,
                    bitsPerPixel: bitsPerPixel,
                    bytesPerRow: pixels.1 * sizeof(Pixel),
                    space: rgbColorSpace,
                    bitmapInfo: bitmapInfo,
                    provider: providerRef!,
                    decode: nil,
                    shouldInterpolate: true,
                    intent: .defaultIntent)
return CGImage(CGImage: image!) // Incorrect argument label in call (have 'CGImage:', expected 'copy:')
⌘-click on the symbol CGColorRenderingIntent and you will see
public enum CGColorRenderingIntent : Int32 {
    case defaultIntent
    case absoluteColorimetric
    case relativeColorimetric
    case perceptual
    case saturation
}
So it's
let image = CGImage(width: pixels.1,
                    height: pixels.2,
                    bitsPerComponent: bitsPerComponent,
                    bitsPerPixel: bitsPerPixel,
                    bytesPerRow: pixels.1 * MemoryLayout<Pixel>.stride, // sizeof(Pixel) was removed in Swift 3
                    space: rgbColorSpace,
                    bitmapInfo: bitmapInfo,
                    provider: providerRef!,
                    decode: nil,
                    shouldInterpolate: true,
                    intent: .defaultIntent)
return CIImage(cgImage: image!)
Even the initializers of CGImage and CIImage have changed.