Convert [UInt8] into white and transparent image - swift

I'm trying to create a white and transparent image in Swift from a [UInt8] array. The array has width * height elements, and each element corresponds to the transparency (alpha value).
So far, I managed to create a black and white image using this:
guard let providerRef = CGDataProvider(data: Data(bytes: bitmapArray) as CFData) else { return nil }
guard let cgImage = CGImage(
    width: width,
    height: height,
    bitsPerComponent: 8,
    bitsPerPixel: 8,
    bytesPerRow: width,
    space: CGColorSpaceCreateDeviceGray(),
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
    provider: providerRef,
    decode: nil,
    shouldInterpolate: true,
    intent: .defaultIntent
) else {
    return nil
}
let image = UIImage(cgImage: cgImage)
Unfortunately, as I said, this gives me a black and white image.
What I would like is to turn every black pixel (0 in my initial array) into a completely transparent pixel (my array contains only either 0 or 255). How do I do that?
PS: I've tried to use CGImageAlphaInfo.alphaOnly, but I get "CGImageCreate: invalid image alphaInfo: 7".
Any help would be appreciated.

I found a solution that doesn't exactly satisfy me in terms of code elegance, but it does the job. The solution is to create that black and white, fully opaque image, and then mask all black pixels using a CIFilter.
Here's working code:
guard let providerRef = CGDataProvider(data: Data(bytes: bitmapArray) as CFData) else { return nil }
guard let cgImage = CGImage(
    width: width,
    height: height,
    bitsPerComponent: 8,
    bitsPerPixel: 8,
    bytesPerRow: width,
    space: CGColorSpaceCreateDeviceGray(),
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
    provider: providerRef,
    decode: nil,
    shouldInterpolate: true,
    intent: .defaultIntent
) else {
    return nil
}
let context = CIContext(options: nil)
let ciimage = CIImage(cgImage: cgImage)
guard let filter = CIFilter(name: "CIMaskToAlpha") else { return nil }
filter.setDefaults()
filter.setValue(ciimage, forKey: kCIInputImageKey)
guard let result = filter.outputImage else { return nil }
guard let newCgImage = context.createCGImage(result, from: result.extent) else { return nil }
return UIImage(cgImage: newCgImage)
Feel free to provide your own (perhaps more elegant/optimal) solution!

I found a workaround: since kCGImageAlphaOnly is supported by CGBitmapContext, you can create a bitmap context from the data, then create an image from that context. This is Objective-C, but it shouldn't be too hard to translate into Swift:
CGContextRef ctx = CGBitmapContextCreate(
    bitmapArray, width, height,
    8, width, NULL, (CGBitmapInfo)kCGImageAlphaOnly
);
CGImageRef image = CGBitmapContextCreateImage(ctx);
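For reference, here is a possible Swift translation of that workaround (a sketch, untested; it assumes the same bitmapArray, width, and height as in the question, and that the colorspace parameter is imported as optional, since the C API accepts NULL for alpha-only contexts):
func alphaOnlyImage(from bitmapArray: [UInt8], width: Int, height: Int) -> UIImage? {
    var pixels = bitmapArray // the context needs mutable storage
    return pixels.withUnsafeMutableBytes { buffer -> UIImage? in
        // kCGImageAlphaOnly is accepted by bitmap contexts even though
        // the CGImage initializer rejects it; alpha-only uses no colorspace
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width,
                                      space: nil,
                                      bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue),
              let cgImage = context.makeImage()
        else { return nil }
        return UIImage(cgImage: cgImage)
    }
}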


Adding gradient behind image returns empty image

I have a round avatar image with a transparent background. I want to create a new round image of the same size out of the initial image, with a gradient background behind it, so it looks like it's standing in the sky instead of having a transparent background.
Since I will use this image as a tab bar item's image, I couldn't use a UIView and edit its background layer.
And to make it reusable, I wanted to create a UIImage extension.
Below is what I do:
extension UIImage {
    func gradientImage() -> UIImage? {
        let width = self.size.width
        let height = self.size.height
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard let bitmapContext = CGContext(data: nil,
                                            width: Int(width),
                                            height: Int(height),
                                            bitsPerComponent: 8,
                                            bytesPerRow: 0,
                                            space: colorSpace,
                                            bitmapInfo: bitmapInfo.rawValue) else { return nil }
        let locations: [CGFloat] = [0.0, 1.0]
        let top = R.color.duckDimDarkGrey()?.cgColor
        let bottom = R.color.duckPencilDark()?.cgColor
        let colors = [top, bottom] as CFArray
        guard let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: locations) else {
            return nil
        }
        bitmapContext.drawLinearGradient(gradient, start: CGPoint.zero, end: CGPoint(x: 0, y: size.height), options: CGGradientDrawingOptions())
        guard let cgImage = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else { return nil }
        UIGraphicsEndImageContext()
        let img = UIImage(cgImage: cgImage)
        return img
    }
}
Here is how I use it:
let image1 = UIImage(named: "test.png")
self.tabBar.items?[3].image = image1?.gradientImage()
However I am getting an empty image somehow.
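One likely cause of the empty image: the gradient is drawn into bitmapContext, but UIGraphicsGetImageFromCurrentImageContext() reads from the context opened by UIGraphicsBeginImageContextWithOptions, into which nothing is ever drawn; the avatar itself is never composited either. Below is a sketch of a version that draws both the gradient and the image into the current UIGraphics context (untested; UIColor.darkGray and UIColor.black stand in for the R.color values):
extension UIImage {
    func gradientImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let colors = [UIColor.darkGray.cgColor, UIColor.black.cgColor] as CFArray
        let locations: [CGFloat] = [0.0, 1.0]
        guard let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: locations) else { return nil }
        // fill the background with the gradient, then composite the avatar on top
        context.drawLinearGradient(gradient,
                                   start: .zero,
                                   end: CGPoint(x: 0, y: size.height),
                                   options: [])
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
Note that tab bar items render their images as templates by default, so the result may also need withRenderingMode(.alwaysOriginal) to keep its colors.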

How can I change this black shade (gradient) to some colored shade?

I want to use the actual colors of the image, apply an alpha effect pixel by pixel, and create something like this but in color. How can I do it?
I tried giving it a color, but it turned my image from white to pink.
In this code, I go pixel by pixel and change the pixel alpha. If I make the RGB black (e.g. 0), it creates the right gradient; however, when I use RGB values other than 0, it turns into white shades, which is not what is required.
func processPixels(in image: UIImage) -> UIImage? {
    let cgImage = convertCIImageToCGImage(inputImage: image)
    guard let inputCGImage = cgImage else {
        print("unable to get cgImage")
        return nil
    }
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo
    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("unable to create context")
        return nil
    }
    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let buffer = context.data else {
        print("unable to get context data")
        return nil
    }
    let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)
    var rowAlpha: UInt8 = 0
    for row in 0 ..< height {
        rowAlpha = UInt8(row)
        for column in 0 ..< width {
            let offset = row * width + column
            if pixelBuffer[offset].alphaComponent > 0 {
                pixelBuffer[offset] = RGBA32(red: pixelBuffer[offset].redComponent,
                                             green: pixelBuffer[offset].greenComponent,
                                             blue: pixelBuffer[offset].blueComponent,
                                             alpha: rowAlpha)
            }
        }
    }
    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
    return outputImage
}

func convertCIImageToCGImage(inputImage: UIImage) -> CGImage? {
    guard let ciImage = inputImage.ciImage else {
        print("unable to get ciImage")
        return nil
    }
    let context = CIContext(options: nil)
    if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
        return cgImage
    }
    return nil
}
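RGBA32 is referenced above but never defined in the question. A definition along these lines (adapted from the widely shared processPixels answer this code appears to be based on; the exact byte order is an assumption) makes the snippet compile:
struct RGBA32: Equatable {
    private var color: UInt32

    var redComponent: UInt8   { return UInt8((color >> 24) & 255) }
    var greenComponent: UInt8 { return UInt8((color >> 16) & 255) }
    var blueComponent: UInt8  { return UInt8((color >> 8) & 255) }
    var alphaComponent: UInt8 { return UInt8((color >> 0) & 255) }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
}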
Original Image
Gradient Image without color

Convert a CGImage to MTLTexture without premultiplication

I have a UIImage which I've previously created from a png file:
let strokeUIImage = UIImage(data: pngData)
I want to convert strokeUIImage (which has opacity) to an MTLTexture for display in an MTKView, but doing the conversion seems to perform an unwanted premultiplication, which darkens all the semitransparent edges.
My blending settings are as follows:
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
I've tried two methods of conversion:
let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
and the more elaborate dataProvider-driven method:
let image = strokeUIImage.cgImage!
let imageWidth = image.width
let imageHeight = image.height
let bytesPerPixel = 4
let rowBytes = imageWidth * bytesPerPixel

let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
                                                             width: imageWidth,
                                                             height: imageHeight,
                                                             mipmapped: false)
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }

let srcData: CFData! = image.dataProvider?.data
let pixelData = CFDataGetBytePtr(srcData)
let region = MTLRegionMake2D(0, 0, imageWidth, imageHeight)
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: rowBytes)
both of which yield the same unwanted premultiplied result.
The latter I tried because there were some posts suggesting that the old Swift 3 method CGDataProviderCopyData() extracts raw pixel data from the image that is not premultiplied. Sadly, the equivalent:
let srcData: CFData! = image.dataProvider?.data
does not seem to do the trick. Am I missing something?
Any pointers would be appreciated.
After much experimenting, I've come to a solution which addresses the pre-multiplication issue inherent in CoreGraphics images. Thanks to Warren's tip regarding using an Accelerate function (vImageUnpremultiplyData_ARGB8888 in particular), I thought: why not build a CGImage using vImage_CGImageFormat, which allows me to play with the bitmapInfo setting that specifies how to interpret alpha... The result is not perfect, as demonstrated by the image attachment below:
Somehow, in the translation, the alpha values are getting punched up slightly (possibly the RGB as well, but not significantly). By the way, I should point out that the PNG pixel format is sRGB, and the MTKView I'm using is set to MTLPixelFormat.rgba16Float (an app requirement).
Below is the full metalDrawStrokeUIImage routine I implemented. Of particular note is the line:
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)
which essentially unassociates the alpha (I think) without calling vImageUnpremultiplyData_ARGB8888. The resulting image certainly looks like an un-premultiplied image...
Lastly, to get back a premultiplied texture on the MTKView side, I let the fragment shader handle the pre-multiplication:
fragment float4 premult_fragment(VertexOut interpolated [[stage_in]],
                                 texture2d<float> texture [[texture(0)]],
                                 sampler sampler2D [[sampler(0)]]) {
    float4 sampled = texture.sample(sampler2D, interpolated.texCoord);
    // this fragment shader premultiplies incoming rgb with the texture's alpha
    return float4(sampled.r * sampled.a,
                  sampled.g * sampled.a,
                  sampled.b * sampled.a,
                  sampled.a);
} // end of premult_fragment
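(This matches the blend settings above: with sourceRGBBlendFactor = .one, the blend equation assumes the fragment's color has already been multiplied by its alpha.)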
The result is pretty close to the input source, but the image is maybe 5% more opaque than the incoming PNG. Again, the PNG pixel format is sRGB, and the MTKView I'm using to display it is set to MTLPixelFormat.rgba16Float. So I'm sure something is getting mushed somewhere. If anyone has any pointers, I'd sure appreciate it.
Below is the rest of the relevant code:
func metalDrawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect) {
    self.metalSetupRenderPipeline(compStyle: compMode.strokeCopy) // needed so stampTexture is not modified by fragmentFunction
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let width = Int(strokeUIImage.size.width)
    let height = Int(strokeUIImage.size.height)
    let rowBytes = width * bytesPerPixel

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }

    guard
        let cgImage = strokeUIImage.cgImage,
        let sourceColorSpace = cgImage.colorSpace else {
            print("Unable to initialize cgImage or colorSpace.")
            return
    }

    var format = vImage_CGImageFormat(
        bitsPerComponent: UInt32(cgImage.bitsPerComponent),
        bitsPerPixel: UInt32(cgImage.bitsPerPixel),
        colorSpace: Unmanaged.passRetained(sourceColorSpace),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue),
        version: 0, decode: nil,
        renderingIntent: CGColorRenderingIntent.defaultIntent)

    var sourceBuffer = vImage_Buffer()
    defer {
        free(sourceBuffer.data)
    }

    var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
    guard error == kvImageNoError else {
        print("[MetalBrushStrokeView]: can't vImageBuffer_InitWithCGImage")
        return
    }

    //vImagePremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, numericCast(kvImageNoFlags))

    // create a CGImage from vImage_Buffer
    var destCGImage = vImageCreateCGImageFromBuffer(&sourceBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
    guard error == kvImageNoError else {
        print("[MetalBrushStrokeView]: can't vImageCreateCGImageFromBuffer")
        return
    }

    let dstData: CFData = (destCGImage!.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    destCGImage = nil

    let region = MTLRegionMake2D(0, 0, Int(width), Int(height))
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))

    let stampColor = UIColor.white
    let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
    self.stampAppendToVertexBuffer(stampLayer: stampLayerMode.stampLayerFG, stampCorners: stampCorners, stampColor: stampColor)
    self.metalRenderStampSingle(stampTexture: stampTexture)
    self.initializeStampArray() // clears out the stamp array so we always draw 1 stamp at a time
} // end of func metalDrawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect)
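For completeness: Accelerate also provides the inverse of the premultiply call left commented out above. If the vImage_CGImageFormat were instead set to the image's native premultiplied layout (CGImageAlphaInfo.premultipliedLast), the buffer could be unpremultiplied explicitly rather than reinterpreted via bitmapInfo. A sketch (untested), placed right after the vImageBuffer_InitWithCGImage guard:
error = vImageUnpremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, numericCast(kvImageNoFlags))
guard error == kvImageNoError else {
    print("[MetalBrushStrokeView]: can't vImageUnpremultiplyData_RGBA8888")
    return
}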

Drawing a simple line on a JPEG image

I'm stuck again with an apparently simple question.
I loaded a JPEG file into a CGImage. I got the correct values for width and height (in pixels) and was able to show "myImage" in an image view. But I wanted to add some graphics on this image and found that I should instead get it into an NSImage. So I did, but got different (proportional) values for width and height: 595.08 instead of 1653, and 841.68 instead of 2338, respectively.
I tried to create an NSGraphicsContext from a CGContext 'gc' for drawing (a simple line and a rectangle), which resulted in a "Value of optional type 'CGContext?' not unwrapped; did you mean to use '!' or '?'?" error... I'm lost...
// with NSData
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
let imageWidth = myImage!.width
let imageHeight = myImage!.height

// with NSImage, now
//
let imageAsNSImage = NSImage(contentsOf: chosenFiles[0])
let imageSize = imageAsNSImage?.size // ---> 0.36 * pixels

// creating a CG context and drawing
//
let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
let gc = CGContext(data: nil, width: imageWidth, height: imageHeight, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
let NSGContext = NSGraphicsContext(cgContext: gc, flipped: true)
let currentContext = NSGraphicsContext.current() // Cocoa GC object appropriate for the current drawing environment

NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.current = NSGContext
NSGContext?.beginPath()
NSGContext?.setStrokeColor(redColor)
NSGContext?.setLineWidth(50.0)
NSGContext?.move(to: targetStart)
NSGContext?.addLine(to: targetEnd)
NSGContext?.setStrokeColor(grayColor)
NSGContext?.setFillColor(grayColor)
NSGContext?.addRect(ROIRect)
NSGContext?.closePath()
NSGContext.restoreGraphicsState()
imageAsNSImage?.draw(at: NSZeroPoint, from: NSZeroRect, operation: NSCompositeSourceOver, fraction: 1.0)
imageAsNSImage?.unlockFocus()
NSGraphicsContext.setcurrent(currentContext)
myImageView.image = imageAsNSImage // image & drawings should show in View
// load JPEG from main bundle
guard let path = Bundle.main.path(forImageResource: NSImage.Name("picture.jpg")),
      let image = NSImage(contentsOfFile: path)
else { fatalError() }

let size = image.size
image.lockFocus() // prepare image for drawing
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: .zero, to: NSPoint(x: size.width, y: size.height))
image.unlockFocus() // drawing commands done
The code above strokes a red line from lower left corner to top right.
If you have an NSImageView at hand you can use the image directly:
@IBOutlet weak var imageView: NSImageView!
...
imageView.image = image
Thanks to djromero, here is the solution I just reached:
// Load the JPEG image from disk into a CGImage
//
let imageAsData = try Data(contentsOf: chosenFiles[0])
let imageProvider = CGDataProvider(data: imageAsData as CFData)
var myImage = CGImage(jpegDataProviderSource: imageProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)

// Create an NSImage from the CGImage (with the same width and height in pixels)
//
let imageAsNSImage = NSImage(cgImage: myImage!, size: NSZeroSize)

// Draw a simple line
//
imageAsNSImage.lockFocusFlipped(true) // otherwise, the origin is at the lower left corner
NSColor.red.setStroke()
NSBezierPath.strokeLine(from: targetStart, to: targetEnd)
imageAsNSImage.unlockFocus()

// Show the NSImage in the NSImageView
//
myImageView.image = imageAsNSImage

How to reconstruct grayscale image from intensity values?

It is commonly required to get the pixel data from an image or reconstruct that image from pixel data. How can I take an image, convert it to an array of pixel values and then reconstruct it using the pixel array in Swift using CoreGraphics?
The quality of the answers to this question has been all over the place, so I'd like a canonical answer.
Get pixel values as an array
This function can easily be extended to a color image. For simplicity I'm using grayscale, but I have commented the changes to get RGB.
func pixelValuesFromImage(imageRef: CGImage?) -> (pixelValues: [UInt8]?, width: Int, height: Int) {
    var width = 0
    var height = 0
    var pixelValues: [UInt8]?

    if let imageRef = imageRef {
        let totalBytes = imageRef.width * imageRef.height // for RGBA, multiply by 4 bytes per pixel
        let colorSpace = CGColorSpaceCreateDeviceGray()   // for RGB, use CGColorSpaceCreateDeviceRGB()

        pixelValues = [UInt8](repeating: 0, count: totalBytes)
        pixelValues?.withUnsafeMutableBytes({
            width = imageRef.width
            height = imageRef.height
            let contextRef = CGContext(data: $0.baseAddress,
                                       width: width,
                                       height: height,
                                       bitsPerComponent: 8,
                                       bytesPerRow: width, // for RGBA, 4 * width
                                       space: colorSpace,
                                       bitmapInfo: 0)      // for RGBA, CGImageAlphaInfo.premultipliedLast.rawValue
            let drawRect = CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height))
            contextRef?.draw(imageRef, in: drawRect)
        })
    }

    return (pixelValues, width, height)
}
Get image from pixel values
Here I reconstruct an image, in this case grayscale at 8 bits per pixel, back into a CGImage.
func imageFromPixelValues(pixelValues: [UInt8]?, width: Int, height: Int) -> CGImage? {
    var imageRef: CGImage?

    if let pixelValues = pixelValues {
        let bitsPerComponent = 8
        let bytesPerPixel = 1
        let bitsPerPixel = bytesPerPixel * bitsPerComponent
        let bytesPerRow = bytesPerPixel * width
        let totalBytes = width * height
        let unusedCallback: CGDataProviderReleaseDataCallback = { optionalPointer, pointer, valueInt in }
        let providerRef = CGDataProvider(dataInfo: nil, data: pixelValues, size: totalBytes, releaseData: unusedCallback)
        let bitmapInfo: CGBitmapInfo = [CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                        CGBitmapInfo(rawValue: CGImageByteOrderInfo.orderDefault.rawValue)]

        imageRef = CGImage(width: width,
                           height: height,
                           bitsPerComponent: bitsPerComponent,
                           bitsPerPixel: bitsPerPixel,
                           bytesPerRow: bytesPerRow,
                           space: CGColorSpaceCreateDeviceGray(),
                           bitmapInfo: bitmapInfo,
                           provider: providerRef!,
                           decode: nil,
                           shouldInterpolate: false,
                           intent: .defaultIntent)
    }

    return imageRef
}
Demoing the code in a Playground
You'll need an image copied into the playground's shared data directory (playgroundSharedDataDirectory points at ~/Documents/Shared Playground Data); change the filename and extension below to match. The result on the last line is a UIImage constructed from the CGImage.
import Foundation
import CoreGraphics
import UIKit
import PlaygroundSupport

let URL = playgroundSharedDataDirectory.appendingPathComponent("zebra.jpg")
print("URL \(URL)")

var image: UIImage? = nil
if FileManager().fileExists(atPath: URL.path) {
    do {
        try NSData(contentsOf: URL, options: .mappedIfSafe)
    } catch let error as NSError {
        print("Error: \(error.localizedDescription)")
    }
    image = UIImage(contentsOfFile: URL.path)
} else {
    print("File not found")
}

let (intensityValues, width, height) = pixelValuesFromImage(imageRef: image?.cgImage)
let roundTrippedImage = imageFromPixelValues(pixelValues: intensityValues, width: width, height: height)
let zebra = UIImage(cgImage: roundTrippedImage!)
I was having trouble getting Cameron's code above to work, so I wanted to test another method. I found vacawama's code, which relies on ARGB pixels. You can use that solution and convert each grayscale value to an ARGB value by simply mapping over each value:
/// Assuming grayscalePixels contains floats in the range 0...1
let grayscalePixels: [Float] = ...
let pixels: [PixelData] = grayscalePixels.map {
    let intensity = UInt8(round($0 * Float(UInt8.max))) // scale 0...1 up to 0...255
    return PixelData(a: UInt8.max, r: intensity, g: intensity, b: intensity)
}
let image = UIImage(pixels: pixels, width: width, height: height)
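PixelData and the UIImage(pixels:width:height:) initializer referenced above come from vacawama's answer and are not shown here; a definition along these lines (a sketch of that approach) makes the snippet self-contained:
struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

extension UIImage {
    convenience init?(pixels: [PixelData], width: Int, height: Int) {
        guard width > 0, height > 0, pixels.count == width * height else { return nil }
        var data = pixels
        guard let providerRef = CGDataProvider(data: Data(bytes: &data,
                                                          count: data.count * MemoryLayout<PixelData>.stride) as CFData),
              let cgImage = CGImage(width: width,
                                    height: height,
                                    bitsPerComponent: 8,
                                    bitsPerPixel: 32,
                                    bytesPerRow: width * MemoryLayout<PixelData>.stride,
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
                                    provider: providerRef,
                                    decode: nil,
                                    shouldInterpolate: true,
                                    intent: .defaultIntent)
        else { return nil }
        self.init(cgImage: cgImage)
    }
}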