How to reconstruct a grayscale image from intensity values? - swift

It is commonly required to get the pixel data from an image, or to reconstruct an image from pixel data. How can I take an image, convert it to an array of pixel values, and then reconstruct the image from that array, in Swift using CoreGraphics?
The quality of the answers to this question has been all over the place, so I'd like a canonical answer.

Get pixel values as an array
This function can easily be extended to a color image. For simplicity I'm using grayscale here; the changes needed for RGB are sketched after the function.
func pixelValuesFromImage(imageRef: CGImage?) -> (pixelValues: [UInt8]?, width: Int, height: Int)
{
    var width = 0
    var height = 0
    var pixelValues: [UInt8]?

    if let imageRef = imageRef {
        width = imageRef.width
        height = imageRef.height
        let totalBytes = width * height
        let colorSpace = CGColorSpaceCreateDeviceGray()

        pixelValues = [UInt8](repeating: 0, count: totalBytes)
        pixelValues?.withUnsafeMutableBytes({
            let contextRef = CGContext(data: $0.baseAddress,
                                       width: width,
                                       height: height,
                                       bitsPerComponent: 8,
                                       bytesPerRow: width,
                                       space: colorSpace,
                                       bitmapInfo: 0)
            let drawRect = CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height))
            contextRef?.draw(imageRef, in: drawRect)
        })
    }

    return (pixelValues, width, height)
}
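For color, the changes are an RGB color space, four bytes per pixel, and a bitmapInfo that includes alpha. A minimal sketch under those assumptions (the name rgbaValuesFromImage is mine, not part of the original answer):

// Sketch: the RGBA variant of the function above.
func rgbaValuesFromImage(imageRef: CGImage?) -> (pixelValues: [UInt8]?, width: Int, height: Int)
{
    var width = 0
    var height = 0
    var pixelValues: [UInt8]?

    if let imageRef = imageRef {
        width = imageRef.width
        height = imageRef.height
        let bytesPerPixel = 4                     // R, G, B, A
        let bytesPerRow = bytesPerPixel * width
        let colorSpace = CGColorSpaceCreateDeviceRGB()

        pixelValues = [UInt8](repeating: 0, count: bytesPerRow * height)
        pixelValues?.withUnsafeMutableBytes({
            let contextRef = CGContext(data: $0.baseAddress,
                                       width: width,
                                       height: height,
                                       bitsPerComponent: 8,
                                       bytesPerRow: bytesPerRow,
                                       space: colorSpace,
                                       bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            contextRef?.draw(imageRef, in: CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height)))
        })
    }

    return (pixelValues, width, height)
}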
Get image from pixel values
Here I reconstruct the pixel values, in this case grayscale at 8 bits per pixel, back into a CGImage.
func imageFromPixelValues(pixelValues: [UInt8]?, width: Int, height: Int) -> CGImage?
{
    var imageRef: CGImage?

    if let pixelValues = pixelValues {
        let bitsPerComponent = 8
        let bytesPerPixel = 1
        let bitsPerPixel = bytesPerPixel * bitsPerComponent
        let bytesPerRow = bytesPerPixel * width
        let totalBytes = width * height
        let unusedCallback: CGDataProviderReleaseDataCallback = { optionalPointer, pointer, valueInt in }
        let providerRef = CGDataProvider(dataInfo: nil, data: pixelValues, size: totalBytes, releaseData: unusedCallback)

        let bitmapInfo: CGBitmapInfo = [CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                        CGBitmapInfo(rawValue: CGImageByteOrderInfo.orderDefault.rawValue)]

        imageRef = CGImage(width: width,
                           height: height,
                           bitsPerComponent: bitsPerComponent,
                           bitsPerPixel: bitsPerPixel,
                           bytesPerRow: bytesPerRow,
                           space: CGColorSpaceCreateDeviceGray(),
                           bitmapInfo: bitmapInfo,
                           provider: providerRef!,
                           decode: nil,
                           shouldInterpolate: false,
                           intent: .defaultIntent)
    }

    return imageRef
}
Demoing the code in a Playground
You'll need an image in the Playground's shared data directory (playgroundSharedDataDirectory, i.e. ~/Documents/Shared Playground Data), and you should change the filename and extension below to match your file. The result on the last line is a UIImage constructed from the CGImage.
import Foundation
import CoreGraphics
import UIKit
import PlaygroundSupport

let URL = playgroundSharedDataDirectory.appendingPathComponent("zebra.jpg")
print("URL \(URL)")
var image: UIImage? = nil

if FileManager().fileExists(atPath: URL.path) {
    do {
        try NSData(contentsOf: URL, options: .mappedIfSafe)
    } catch let error as NSError {
        print("Error: \(error.localizedDescription)")
    }
    image = UIImage(contentsOfFile: URL.path)
} else {
    print("File not found")
}

let (intensityValues, width, height) = pixelValuesFromImage(imageRef: image?.cgImage)
let roundTrippedImage = imageFromPixelValues(pixelValues: intensityValues, width: width, height: height)
let zebra = UIImage(cgImage: roundTrippedImage!)
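As a quick sanity check, you can run the reconstructed image back through the first function and compare the arrays; for a grayscale round trip this should normally print true:

// Round-trip check: extracting pixels from the reconstructed image
// should give back the original intensity values.
let (checkValues, _, _) = pixelValuesFromImage(imageRef: roundTrippedImage)
print(checkValues == intensityValues)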

I was having trouble getting Cameron's code above to work, so I wanted to test another method. I found vacawama's code, which relies on ARGB pixels. You can use that solution and convert each grayscale value to an ARGB value by simply mapping each value:
/// Assuming grayscalePixels contains floats in the range 0...1
let grayscalePixels: [Float] = ...

let pixels = grayscalePixels.map {
    // Scale each 0...1 intensity up to the full 0...255 range
    let intensity = UInt8(round($0 * Float(UInt8.max)))
    return PixelData(a: UInt8.max, r: intensity, g: intensity, b: intensity)
}
let image = UIImage(pixels: pixels, width: width, height: height)
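For reference, here is a minimal sketch of the PixelData struct and UIImage initializer that snippet relies on, reconstructed from vacawama's answer; treat the exact details as assumptions and check the original:

// Sketch of the ARGB helper used above, adapted from vacawama's answer.
struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

extension UIImage {
    convenience init?(pixels: [PixelData], width: Int, height: Int) {
        guard width > 0, height > 0, pixels.count == width * height else { return nil }
        var data = pixels
        guard let providerRef = CGDataProvider(data: Data(bytes: &data,
                                                          count: data.count * MemoryLayout<PixelData>.size) as CFData)
        else { return nil }
        guard let cgImage = CGImage(width: width,
                                    height: height,
                                    bitsPerComponent: 8,
                                    bitsPerPixel: 32,
                                    bytesPerRow: width * MemoryLayout<PixelData>.size,
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
                                    provider: providerRef,
                                    decode: nil,
                                    shouldInterpolate: true,
                                    intent: .defaultIntent)
        else { return nil }
        self.init(cgImage: cgImage)
    }
}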

Related

Incorrect saving of transparent UIImage to Photo Library as png with UIImageWriteToSavedPhotosAlbum

I have a function cropAlpha() that trims away the extra transparent space around an image.
func cropAlpha() -> UIImage {
    let cgImage = self.cgImage!
    let width = cgImage.width
    let height = cgImage.height

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bytesPerPixel: Int = 4
    let bytesPerRow = bytesPerPixel * width
    let bitsPerComponent = 8
    let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
          let ptr = context.data?.assumingMemoryBound(to: UInt8.self)
    else { return self }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    var minX = width
    var minY = height
    var maxX: Int = 0
    var maxY: Int = 0

    for x in 0 ..< width {
        for y in 0 ..< height {
            let i = bytesPerRow * y + bytesPerPixel * x
            let a = CGFloat(ptr[i + 3]) / 255.0

            if a == 1 {
                if x < minX { minX = x }
                if x > maxX { maxX = x }
                if y < minY { minY = y }
                if y > maxY { maxY = y }
            }
        }
    }

    // +1 so the rect includes the last opaque row and column
    let rect = CGRect(x: CGFloat(minX), y: CGFloat(minY), width: CGFloat(maxX - minX + 1), height: CGFloat(maxY - minY + 1))
    let croppedImage = cgImage.cropping(to: rect)!
    return UIImage(cgImage: croppedImage)
}
The image returned by this function has transparent elements, and I put it in the image view: presenterImageView.image = imagePNG. It works as it should. But when I try to save the UIImage to the Photo Gallery, the transparent background turns white.
let image = maskedImage?.cropAlpha()
let imagePNGData = image!.pngData()
let imagePNG = UIImage(data: imagePNGData!)
UIImageWriteToSavedPhotosAlbum(imagePNG!, nil, nil, nil)
If I don't use that function, I get the result I want, but the image has too much wasted space. I don't understand what could be the reason. Any ideas?
The problem is that UIImageWriteToSavedPhotosAlbum does not properly handle saving a UIImage with premultiplied alpha (or at least the result of saving such an image is not what you expect), and your cropping method uses the premultipliedLast format. You also can't simply change CGImageAlphaInfo to a non-premultiplied format, because CGContext does not support that combination (you will see the error CGBitmapContextCreate: unsupported parameter combination if you try). What you can do is convert the cropped image to a CIImage, unpremultiply the alpha, and convert back to a UIImage. Your saving code could then look like this (though I recommend removing the force unwrapping if you plan to use it in a final app):
let image = maskedImage?.cropAlpha()
let ciImage = CIImage(image: image!)!.unpremultiplyingAlpha()
let uiImage = UIImage(ciImage: ciImage)
let imagePNGData = uiImage.pngData()
let imagePNG = UIImage(data: imagePNGData!)
UIImageWriteToSavedPhotosAlbum(imagePNG!, nil, nil, nil)

how to properly extract the array of numbers from an image in swift?

I'm trying to extract an array of pixel values from a UIImage in Swift, but at the end I get only a bunch of zeros, no useful information at all.
Here's the code I wrote to try to accomplish this:
var photo = UIImage(named: "myphoto.jpg")!
var withAlpha = true
var bytesPerPixels: Int = withAlpha ? 4 : 3
var width: Int = Int(photo.size.width)
var height: Int = Int(photo.size.height)
var bitsPerComponent: Int = 8
var bytesPerRow = bytesPerPixels * width
var totalPixels = (bytesPerPixels * width) * height
var alignment = MemoryLayout<UInt32>.alignment
var data = UnsafeMutableRawPointer.allocate(byteCount: totalPixels, alignment: alignment)
var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue).rawValue
var colorSpace = CGColorSpaceCreateDeviceRGB()
let ctx = CGContext(data: data, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
let bindedPointer: UnsafeMutablePointer<UInt32> = data.bindMemory(to: UInt32.self, capacity: totalPixels)
var pixels = UnsafeMutableBufferPointer.init(start: bindedPointer, count: totalPixels)
for p in pixels {
    print(p, Date())
}
At the end I tried to bind the unsafeMutableRawPointer to extract the values but got no success,
what could I be missing here?
Thank you all in advance.
A few observations:
- You need to draw the image to the context.
- I'd also suggest that rather than creating a buffer that you have to manage manually, you pass nil and let the OS create (and manage) that buffer for you.
- Note that totalPixels should be just width * height.
- Your code assumes the scale of the image is 1. That's not always a valid assumption. I'd grab the cgImage and use its width and height.
- Even if you have only three components, you still need to use 4 bytes per pixel.
Thus:
guard
    let photo = UIImage(named: "myphoto.jpg"),
    let cgImage = photo.cgImage
else { return }

let bytesPerPixels = 4
let width = cgImage.width
let height = cgImage.height
let bitsPerComponent: Int = 8
let bytesPerRow = bytesPerPixels * width
let totalPixels = width * height
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue).rawValue
let colorSpace = CGColorSpaceCreateDeviceRGB()

guard
    let ctx = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
    let data = ctx.data
else { return }

ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

let pointer = data.bindMemory(to: UInt32.self, capacity: totalPixels)
let pixels = UnsafeMutableBufferPointer(start: pointer, count: totalPixels)

for p in pixels {
    print(String(p, radix: 16), Date())
}
You need to draw the image into the context.
ctx?.draw(photo.cgImage!, in: CGRect(origin: .zero, size: photo.size))
Add that just after creating the CGContext.

Swift CGContext with retina

I have a UIImage extension that can change the color of its image, which I pulled from somewhere. The problem is that it downgrades the image's resolution after coloring it. I've seen other answers about this, but I'm not sure how to adapt them to render a retina image in this instance:
extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskImage = cgImage!

        let width = size.width
        let height = size.height
        let bounds = CGRect(x: 0, y: 0, width: width, height: height)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!

        context.clip(to: bounds, mask: maskImage)
        context.setFillColor(color.cgColor)
        context.fill(bounds)

        if let cgImage = context.makeImage() {
            let coloredImage = UIImage(cgImage: cgImage)
            return coloredImage
        } else {
            return nil
        }
    }
}
I've seen people using UIGraphicsBeginImageContextWithOptions and setting its scale to the main screen's, but I don't think that works when I'm creating the CGContext directly.
I think you want:
let width = size.width * scale
let height = size.height * scale
and:
let coloredImage = UIImage(cgImage: cgImage, scale:scale, orientation:.up)
(You may need to use imageOrientation instead of .up.)
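Putting those suggestions together, the scale-aware version of maskWithColor would look something like this (an untested sketch of the changes, not the answerer's exact code):

extension UIImage {
    func maskWithColor(color: UIColor) -> UIImage? {
        let maskImage = cgImage!

        // Work in pixels, not points, so retina images keep their resolution
        let width = size.width * scale
        let height = size.height * scale
        let bounds = CGRect(x: 0, y: 0, width: width, height: height)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!

        context.clip(to: bounds, mask: maskImage)
        context.setFillColor(color.cgColor)
        context.fill(bounds)

        guard let cgImage = context.makeImage() else { return nil }
        // Hand the scale back so UIKit reports the original point size
        return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
    }
}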

Save NSImage as png in Swift

I'm trying to create and save a .png from raw pixel data. I create an array of UInt8 where each number is an rgba value, kinda like this: [r, g, b, a, r, g, b, a...]. I can use this array to create a CGImage just fine. I can even use the CGImage to create an NSImage. The NSImage displays how I expect it to when it's loaded into an NSImageView.
What I want to do is to save an NSImage to disk. I've tried calling TIFFRepresentation on the NSImage and saving the NSData to "~/Desktop", but no file is saved. Any thoughts?
var pixels = [UInt8]()
for wPixel in 0...width {
    for hPixel in 0...height {
        pixels.append(0xff)
        pixels.append(0xaa)
        pixels.append(UInt8(wPixel % 200))
        pixels.append(0x00)
    }
}

let image = createImage(100, height: 100, pixels: pixels)
let nsImage = NSImage(CGImage: image, size: CGSize(width: 100, height: 100))
NSBitmapImageRep(data: nsImage.TIFFRepresentation!)!.representationUsingType(.NSPNGFileType, properties: [:])!.writeToFile("~/Desktop/image.png", atomically: true)
func createImage(width: Int, height: Int, pixels: Array<UInt8>) -> CGImage {
    let componentsPerPixel: Int = 4  // rgba

    let provider: CGDataProviderRef = CGDataProviderCreateWithData(nil,
                                                                   pixels,
                                                                   width * height * componentsPerPixel,
                                                                   nil)!

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo: CGBitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.None.rawValue)
    let bitsPerComponent = 8
    let bitsPerPixel = 32
    let bytesPerRow = (bitsPerComponent * width)

    let cgImage = CGImageCreate(
        width,
        height,
        bitsPerComponent,
        bitsPerPixel,
        bytesPerRow,
        rgbColorSpace,
        bitmapInfo,
        provider,
        nil,
        true,
        .RenderingIntentDefault
    )

    print(cgImage.debugDescription)
    return cgImage!
}
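A likely reason no file is saved: string-based paths like "~/Desktop/image.png" are not tilde-expanded, so writeToFile fails (and its Bool result is discarded above). A minimal sketch of the save step with the tilde expanded, written with current API names rather than the Swift 2 spellings used above:

import AppKit

// Expand "~" before writing; assumes `nsImage` from the snippet above.
let path = NSString(string: "~/Desktop/image.png").expandingTildeInPath
if let tiff = nsImage.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiff),
   let png = rep.representation(using: .png, properties: [:]) {
    do {
        try png.write(to: URL(fileURLWithPath: path))
    } catch {
        print("Save failed: \(error)")
    }
}

Note that a sandboxed app still can't write directly to ~/Desktop, regardless of path expansion.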

Convert an image into a 2D array (or equivalent) in Apple Swift

I'm wondering how I can turn a UIImage into something usable and modifiable. Java code to handle what I need would look something like this:
BufferedImage img = ImageIO.read(file);
Raster raster = img.getData();
int w = raster.getWidth(), h = raster.getHeight();
int pixels[][] = new int[w][h];
for (int x = 0; x < w; x++) {
    for (int y = 0; y < h; y++) {
        pixels[x][y] = raster.getSample(x, y, 0);
    }
}
I need to modify the alpha values in an image by visiting each pixel in the image.
Untested, but I think this will either work or should be very close.
import UIKit
import CoreGraphics

var uiimage = UIImage(contentsOfFile: "/PATH/TO/image.png")
var image = uiimage.CGImage

let width = CGImageGetWidth(image)
let height = CGImageGetHeight(image)

let colorspace = CGColorSpaceCreateDeviceRGB()
let bytesPerRow = (4 * width)
let bitsPerComponent: UInt = 8
let pixels = UnsafePointer<UInt8>(malloc(width * height * 4))

var context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent, bytesPerRow, colorspace,
                                    CGBitmapInfo())

CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(width), CGFloat(height)), image)

for x in 0..<width {
    for y in 0..<height {
        // Here are your raw pixels
        let offset = 4 * ((Int(width) * Int(y)) + Int(x))
        let alpha = pixels[offset]
        let red = pixels[offset + 1]
        let green = pixels[offset + 2]
        let blue = pixels[offset + 3]
    }
}
If you really need a 2D array, render the image into a byte array via CGContext and then split the array into parts. CGContext uses a 0...255 color range instead of 0...1, and the byte array will be in RGBA order.
Sample code, with conversion to 0...1:
import UIKit
import CoreGraphics

extension UIImage {
    func pixelData() -> [UInt8]? {
        let dataSize = size.width * size.height * 4
        var pixelData = [UInt8](repeating: 0, count: Int(dataSize))

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: &pixelData,
                                width: Int(size.width),
                                height: Int(size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: 4 * Int(size.width),
                                space: colorSpace,
                                bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
        guard let cgImage = self.cgImage,
              let context = context else { return nil }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))

        return pixelData
    }

    func pixelMatrix() -> [[[Float]]]? {
        guard let pixels = pixelData() else {
            return nil
        }

        var data: [[[Float]]] = []
        let width = Int(size.width)
        let height = Int(size.height)

        for y in 0..<height {
            var row: [[Float]] = []
            for x in 0..<width {
                let offset = 4 * ((width * y) + x)
                let red = Float(pixels[offset]) / 255
                let green = Float(pixels[offset + 1]) / 255
                let blue = Float(pixels[offset + 2]) / 255
                let alpha = Float(pixels[offset + 3]) / 255
                let pixel = [red, green, blue, alpha]
                row.append(pixel)
            }
            data.append(row)
        }

        return data
    }
}
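A quick usage sketch of the extension above (the image name is a placeholder):

// Hypothetical usage: matrix[y][x] is [red, green, blue, alpha], each in 0...1
if let image = UIImage(named: "example.png"),
   let matrix = image.pixelMatrix() {
    print(matrix[0][0])
}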