Continually scrolling dynamic bitmap in Core Graphics - Swift

I have been struggling with a performance-related issue in a simple Cocoa-based visualization program that continually draws a heat map as a bitmap and displays the bitmap on screen. Broadly, the heat map is a continually scrolling source of real-time information. At each redraw, new pixels are added at the right side of the bitmap and the bitmap is shifted to the left, creating an ongoing flow of information.
My current approach is implemented in a custom NSView, with a circular buffer for the underlying pixels. The image is actually drawn rotated 90 degrees, so adding new data is simply a matter of appending to the buffer (adding rows to the image). I then use CGDataProviderCreateWithData to create a Core Graphics image and draw that to a rotated context. (The code is below, although it is less important to the question.)
My hope is to figure out a more performant way of achieving this. Redrawing the full bitmap on every pass seems excessive, and in my attempts it uses a surprisingly high amount of CPU (~20-30%). I feel like there should be a way to have the GPU cache the existing pixels, shift them, and then append only the new data.
I am considering an approach that uses two bitmap contexts created with CGBitmapContextCreate: one context for what is currently on the screen and one for modification. Prior pixels would be copied from one context to the other, but I am not sure that will improve performance significantly. Before I proceed too far down that rabbit hole, are there better ways to handle this kind of updating?
Here is the relevant code from my current implementation, although I think I am more in need of higher-level guidance:
class PlotView: NSView
{
    private let bytesPerPixel = 4
    private let height = 512
    private let bufferLength = 5242880

    private var buffer: TPCircularBuffer

    // ... Other code that appends data to the buffer and sets needsDisplay ...

    override func drawRect(dirtyRect: NSRect) {
        super.drawRect(dirtyRect)

        // get the destination context
        guard let context = NSGraphicsContext.currentContext() else {
            return
        }

        // get the color space
        guard let colorSpace = CGColorSpaceCreateDeviceRGB() else {
            return
        }

        // maximum number of samples that fit on screen
        let maxSamples = Int(frame.width) * height
        let maxBytes = maxSamples * bytesPerPixel

        // bytes available for reading
        var availableBytes: Int32 = 0
        let bufferStart = UnsafeMutablePointer<UInt8>(TPCircularBufferTail(&buffer, &availableBytes))

        let samplesStart: UnsafeMutablePointer<UInt8>
        let samplesCount: Int
        let samplesBytes: Int

        // buffer management
        if Int(availableBytes) > maxBytes {
            // advance the buffer start past the samples that no longer fit
            let bufferOffset = Int(availableBytes) - maxBytes
            samplesStart = bufferStart.advancedBy(bufferOffset)

            // consume the skipped values
            TPCircularBufferConsume(&buffer, Int32(bufferOffset))

            samplesCount = maxSamples
            samplesBytes = maxBytes
        }
        else {
            // draw everything that is available
            samplesStart = bufferStart
            samplesBytes = Int(availableBytes)
            samplesCount = samplesBytes / bytesPerPixel
        }

        // dimensions of the (rotated) image
        let rows = height
        let columns = samplesCount / rows

        // bitmap info
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)

        // byte counts (the local constant shadows the property of the same name)
        let bytesPerComponent = sizeof(UInt8)
        let bytesPerPixel = sizeof(UInt8) * self.bytesPerPixel

        // prepare the image
        let provider = CGDataProviderCreateWithData(nil, samplesStart, samplesBytes, nil)
        let image = CGImageCreate(rows, columns, 8 * bytesPerComponent, 8 * bytesPerPixel, rows * bytesPerPixel, colorSpace, bitmapInfo, provider, nil, false, .RenderingIntentDefault)

        // make rotated size
        let rotatedSize = CGSize(width: frame.width, height: frame.height)
        let rotatedCenterX = rotatedSize.width / 2, rotatedCenterY = rotatedSize.height / 2

        // rotate, from: http://stackoverflow.com/questions/10544887/rotating-a-cgimage
        CGContextTranslateCTM(context.CGContext, rotatedCenterX, rotatedCenterY)
        CGContextRotateCTM(context.CGContext, CGFloat(0 - M_PI_2))
        CGContextScaleCTM(context.CGContext, 1, -1)
        CGContextTranslateCTM(context.CGContext, -rotatedCenterY, -rotatedCenterX)

        // draw
        CGContextDrawImage(context.CGContext, NSRect(origin: NSPoint(x: rotatedSize.height - CGFloat(rows), y: 0), size: NSSize(width: rows, height: columns)), image)
    }
}
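For what it's worth, here is a rough sketch of the "shift and append" idea discussed above: keep one persistent bitmap context as a backing store, shift its pixels left with a per-row memmove, and render only the newly arrived column(s) at the right edge, so the full image never has to be rebuilt from the circular buffer. This is an untested sketch under my own assumptions (tightly packed 32-bit pixels, row-major newPixels), not a drop-in replacement for the code above; names like ScrollingHeatMap and appendColumns are illustrative.

import Foundation
import CoreGraphics

final class ScrollingHeatMap {
    private let width: Int
    private let height: Int
    private let bytesPerPixel = 4
    private let context: CGContext

    init?(width: Int, height: Int) {
        self.width = width
        self.height = height
        guard let ctx = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }
        self.context = ctx
    }

    // Shift the existing pixels left by `columns`, then fill the freed
    // space at the right edge with the newly arrived samples.
    // `newPixels` is assumed to hold `columns` pixels per row, row-major,
    // `columns * height` elements in total.
    func appendColumns(_ newPixels: [UInt32], columns: Int) {
        guard let base = context.data else { return }
        let rowBytes = context.bytesPerRow
        let shiftBytes = columns * bytesPerPixel
        newPixels.withUnsafeBytes { src in
            for row in 0..<height {
                let rowStart = base.advanced(by: row * rowBytes)
                // Source and destination ranges overlap, hence memmove.
                memmove(rowStart, rowStart.advanced(by: shiftBytes), rowBytes - shiftBytes)
                // Copy this row's slice of the new samples into the freed columns.
                rowStart.advanced(by: rowBytes - shiftBytes)
                        .copyMemory(from: src.baseAddress!.advanced(by: row * shiftBytes),
                                    byteCount: shiftBytes)
            }
        }
    }

    // Snapshot for drawing; makeImage() is copy-on-write, so it stays cheap
    // as long as the context is not written to while the image is alive.
    var image: CGImage? { return context.makeImage() }
}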

Related

Why did my large for-loop go from a fraction of a second to almost 8 seconds after I updated to Xcode 11 and iOS 13?

I am using a for-loop to iterate over the pixels in a camera image in order to determine the percentage of a certain color (green). The camera image is 12 megapixels. The loop used to complete in a fraction of a second; after updating to Xcode 11, it takes around 8 seconds on all the iPhones I have tested.
func scanImageForPixels(_ image: CGImage) -> Float {
    // Create the context where we'll draw the image.
    let context = getContext(for: image)

    // Specify the area in the context to draw in. It should be the same size as the image.
    let imageWidth = image.width
    let imageHeight = image.height
    let imageRect = CGRect(x: 0, y: 0, width: CGFloat(imageWidth), height: CGFloat(imageHeight))

    // Draw the image so the context will contain the raw image data.
    context.draw(image, in: imageRect)

    // Get a pointer to the raw image data.
    let RGBAPointer = context.data!.assumingMemoryBound(to: UInt8.self)

    // Set up the numbers used to calculate "% Green".
    let totalPixels = imageWidth * imageHeight
    var greenPixels = 0

    // A 32-bit image will have 4 bytes per pixel: 1 byte for each component (RGBA).
    // For example: 32 bits / 8 bits = 4 bytes per pixel.
    // Note: the indexing below assumes tightly packed rows (bytesPerRow == width * bytesPerPixel).
    let bytesPerPixel = image.bitsPerPixel / image.bitsPerComponent

    let minRed: Int16 = 10
    let minBlue: Int16 = 10

    let start = Date()
    for pixelIndex in 0..<totalPixels {
        // move through the memory 4 bytes at a time
        let offset = pixelIndex * bytesPerPixel

        // Widen from UInt8 to Int16 so we can do saturation/value calculations later
        // (multiplying by 100 will produce values greater than 255); also make them
        // signed, so we don't have to worry about negative numbers causing errors.
        let red = Int16(RGBAPointer[offset])
        let green = Int16(RGBAPointer[offset + 1])
        let blue = Int16(RGBAPointer[offset + 2])
        //let alpha = Int16(RGBAPointer[offset + 3]) // not used

        if (green - minRed) >= red && (green - minBlue) >= blue {
            greenPixels += 1
        }
    }
    let elapsed = Date().timeIntervalSince(start)
    print(elapsed)

    return Float(100 * Double(greenPixels) / Double(totalPixels))
}
Right now the for-loop completes in 7.063616037368774 seconds. I expected this to be 10 times faster, based on running the same code on previous versions of Xcode.
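Not from the original thread, but two things are worth ruling out before blaming the loop itself: make sure the timing comes from a Release (-O) build, since Debug (-Onone) builds leave Swift's bounds and exclusivity checks in place and can easily cost an order of magnitude in tight pixel loops, and avoid repeated raw-pointer subscripting where a buffer pointer gives the optimizer a known count. A sketch of the same loop rewritten that way, assuming the same tightly packed RGBA layout as above:

// Drop-in fragment for the loop above; `RGBAPointer`, `totalPixels`,
// and `bytesPerPixel` come from the surrounding function.
let byteCount = totalPixels * bytesPerPixel
let pixelBytes = UnsafeBufferPointer(start: RGBAPointer, count: byteCount)

var greenPixels = 0
var offset = 0
while offset < byteCount {
    let red   = Int(pixelBytes[offset])
    let green = Int(pixelBytes[offset + 1])
    let blue  = Int(pixelBytes[offset + 2])
    if green - 10 >= red && green - 10 >= blue {   // minRed = minBlue = 10
        greenPixels += 1
    }
    offset += bytesPerPixel
}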

Create a transparent texture in Swift

I just need to create a transparent texture (pixels with alpha 0).
func layerTexture() -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    return temparyTexture!
}
When I open temparyTexture using preview, it appears to be black. What is missing here?
UPDATE
I just tried to create the texture using a transparent image:
func layerTexture(imageData: Data) -> MTLTexture {
    let width = Int(self.drawableSize.width)
    let height = Int(self.drawableSize.height)
    let bytesPerRow = width * 4
    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: false)
    let temparyTexture = self.device?.makeTexture(descriptor: texDescriptor)
    let region = MTLRegionMake2D(0, 0, width, height)
    imageData.withUnsafeBytes { (u8Ptr: UnsafePointer<UInt8>) in
        let rawPtr = UnsafeRawPointer(u8Ptr)
        temparyTexture?.replace(region: region, mipmapLevel: 0, withBytes: rawPtr, bytesPerRow: bytesPerRow)
    }
    return temparyTexture!
}
The method gets called as follows:
let image = UIImage(named: "layer1.png")!
let imageData = UIImagePNGRepresentation(image)
self.layerTexture(imageData: imageData!)
where layer1.png is a transparent PNG. But it crashes with the message "Thread 1: EXC_BAD_ACCESS (code=1, address=0x107e8c000)" at the point where I try to replace the texture's contents. I believe it's because the image data is compressed, and the raw pointer should point to uncompressed data. How can I resolve this?
Am I on the right path, or headed in completely the wrong direction? Are there any other alternatives? All I need is to create a transparent texture.
Pre-edit: When you quick-look a transparent texture, it will appear black. I just double-checked with some code I have running stably in production - that is the expected result.
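If all that's needed is a texture whose pixels are explicitly alpha 0 (a new MTLTexture's contents are undefined, not guaranteed to be zero), one way is to write zeroes into it after creation. A sketch under my own assumptions, not part of the original answer:

import Metal

func makeTransparentTexture(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width,
                                                              height: height,
                                                              mipmapped: false)
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }

    // Every byte zero means B = G = R = A = 0 for each pixel: fully transparent.
    let bytesPerRow = width * 4
    let zeroes = [UInt8](repeating: 0, count: bytesPerRow * height)
    texture.replace(region: MTLRegionMake2D(0, 0, width, height),
                    mipmapLevel: 0,
                    withBytes: zeroes,
                    bytesPerRow: bytesPerRow)
    return texture
}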
Post-edit: You are correct; you should not be copying PNG or JPEG data into an MTLTexture's contents directly. I would recommend doing something like this:
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue]
var status = CVPixelBufferCreate(nil, Int(image.size.width), Int(image.size.height),
                                 kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == noErr)

let coreImage = CIImage(image: image)!
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
context.render(coreImage, to: pixelBuffer!)

var textureWrapper: CVMetalTexture?
status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    GPUManager.shared.textureCache, pixelBuffer!, nil, .bgra8Unorm,
    CVPixelBufferGetWidth(pixelBuffer!), CVPixelBufferGetHeight(pixelBuffer!), 0, &textureWrapper)

let texture = CVMetalTextureGetTexture(textureWrapper!)!
// Use `texture` now for your Metal rendering; it is mapped to the
// CVPixelBuffer's underlying memory.
The issue you are running into is that it is actually pretty hard to fully grasp how bitmaps work and how they can be laid out differently. Graphics is a very closed field with lots of esoteric terminology, some of which refers to things that take years to grasp, some of which refers to things that are trivial but people just picked a weird word to call them by. My main pointers are:
Get out of UIImage land as early in your code as possible. The best way to avoid overhead and delays when you go into Metal land is to get your images into a GPU-compatible representation as soon as you can.
Once you are outside of UIImage land, always know your channel order (RGBA, BGRA). At any point in code that you are editing, you should have a mental model of what pixel format each CVPixelBuffer / MTLTexture has.
Read up on premultiplied vs non-premultiplied alpha, you may not run into issues with this, but it threw me off repeatedly when I was first learning.
total byte size of a bitmap/pixelbuffer = bytesPerRow * height
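As a concrete illustration of that last point (my own example, not from the answer): rows are often padded for alignment, so query bytesPerRow instead of computing width * 4:

// `pixelBuffer` as created in the snippet above.
let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer!)   // may exceed width * 4
let totalBytes = bytesPerRow * CVPixelBufferGetHeight(pixelBuffer!)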

Getting Pixel Information From CGImage and Changing It with Swift 4

The thing is, I can get pixel information with the following code, but I cannot write it back.
var pixelsCopy: [UInt8] = []
let data: NSData = cgImage!.dataProvider!.data!
let pixels = data.bytes.assumingMemoryBound(to: UInt8.self)

var index = 0
while index < widthOfImage * heightOfImage * 4 {
    pixelsCopy.append(pixels[index])
    pixelsCopy.append(pixels[index + 1])
    pixelsCopy.append(pixels[index + 2])
    pixelsCopy.append(pixels[index + 3])
    index += 4
}
By writing back, I mean something like the following:
pixels[index] = newRedValue
I am getting a "subscript is get-only" error.
We could do this in Obj-C, as far as I recall.
You can't write pixels back to a CGImage. A CGImage is read-only. You need to:

1. Create a CGContext.
2. Draw the image into it.
3. Draw whatever you want on top of the image.
4. Ask the context to create a new image.
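A minimal sketch of those four steps (my wording, assuming a tightly packed 32-bit RGBA image):

func modifiedImage(from cgImage: CGImage) -> CGImage? {
    let width = cgImage.width
    let height = cgImage.height
    // 1. Create a CGContext (tightly packed RGBA).
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
        return nil
    }
    // 2. Draw the image into it.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    // 3. Modify pixels through the context's writable buffer
    //    (or draw on top with ordinary CGContext calls).
    if let pixels = context.data?.assumingMemoryBound(to: UInt8.self) {
        pixels[0] = 255 // e.g. set the first pixel's red component
    }
    // 4. Ask the context to create a new image.
    return context.makeImage()
}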
This is a partial answer.
I could not change the pixels array, but I created the pixelsCopy array, manipulated the RGB and alpha values in that array, and then created a CGImage from it, and from that a UIImage.
But sometimes the final image comes out distorted, and I do not know why at the moment.
To convert the pixelsCopy array to a CGImage, I used this code:
let numComponents = 4
let length = givenImageWidth * givenImageHeight * numComponents
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let colorspace = CGColorSpaceCreateDeviceRGB()
let rgbData = CFDataCreate(nil, pixelsCopy, length)!
let provider = CGDataProvider(data: rgbData)!
let rgbImageRef = CGImage(width: givenImageWidth,
                          height: givenImageHeight,
                          bitsPerComponent: 8,
                          bitsPerPixel: 8 * numComponents,
                          bytesPerRow: givenImageWidth * numComponents,
                          space: colorspace,
                          bitmapInfo: bitmapInfo,
                          provider: provider,
                          decode: nil,
                          shouldInterpolate: true,
                          intent: CGColorRenderingIntent.defaultIntent)!

let finalUIImage = UIImage(cgImage: rgbImageRef)
anImageView.image = finalUIImage
pixelsCopy.removeAll()
If you want to see the good and the distorted images, check out the R letters at the bottom of the images below.
The R on the left is from the given image; the one on the right is my finalUIImage.
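A guess at the distortion, since the thread leaves it open: if the source image pads its rows (cgImage!.bytesPerRow greater than widthOfImage * 4), then copying the provider's bytes as one tightly packed block shears every subsequent row. A quick check:

// If this assertion fails, copy the source row by row, skipping the
// padding bytes at the end of each row, instead of copying one block.
assert(cgImage!.bytesPerRow == widthOfImage * 4,
       "source rows are padded; a tightly packed copy will distort the image")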

Scaling an image OSX Swift

I'm currently trying to scale an image using Swift. This shouldn't be a difficult task, since I've implemented a scaling solution in C# in 30 minutes; however, I've been stuck for 2 days now.
I've tried googling/crawling through Stack Overflow posts, but to no avail. The two main solutions I have seen people use are:
A function written in Swift to resize an NSImage proportionately
and
resizeNSImage.swift
An Obj C Implementation of the above link
So I would prefer to use the most efficient/least CPU-intensive solution, which according to my research is option 2. Because option 2 uses NSImage.lockFocus() and NSImage.unlockFocus(), the image will scale fine on non-Retina Macs, but double the scaling size on Retina Macs. I know this is due to the pixel density of Retina Macs and is to be expected, but I need a scaling solution that ignores HiDPI specifications and just performs a normal scale operation.
This led me to do more research into option 1. It seems like a sound function; however, it literally doesn't scale the input image, and then doubles the file size when I save the returned image (presumably due to pixel density). I found another Stack Overflow post where someone has the exact same problem I do, using the exact same implementation (found here). Of the two suggested answers, the first one doesn't work, and the second is the other implementation I've been trying to use.
If people could post Swift-ified answers, as opposed to Obj-C, I'd appreciate it very much!
EDIT:
Here's a copy of my implementation of the first solution - I've divided it into 2 functions:
func getSizeProportions(oWidth: CGFloat, oHeight: CGFloat) -> NSSize {
    var ratio: Float = 0.0

    let imageWidth = Float(oWidth)
    let imageHeight = Float(oHeight)
    var maxWidth = Float(0)
    var maxHeight = Float(600)

    if maxWidth == 0 {
        maxWidth = imageWidth
    }
    if maxHeight == 0 {
        maxHeight = imageHeight
    }

    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }

    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio

    return NSMakeSize(CGFloat(newWidth), CGFloat(newHeight))
}
func resizeImage(image: NSImage) -> NSImage {
    print("original: ", image.size.width, image.size.height)

    // Cast the NSImage to a CGImage
    var imageRect: CGRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)

    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: CGFloat(450), height: CGFloat(600))
    //let newSize = getSizeProportions(oWidth: CGFloat(image.size.width), oHeight: CGFloat(image.size.height))

    // Create NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    print("scaled: ", imageWithNewSize.size.width, imageWithNewSize.size.height)

    return NSImage(data: imageWithNewSize.tiffRepresentation!)!
}
EDIT 2:
As pointed out by Zneak, I need to save the returned image to disk. Using both implementations, my save function writes the file to disk successfully. Although I don't think my save function could be interfering with my current resizing implementation, I've attached it anyway, just in case:
func saveAction(image: NSImage, url: URL) {
    if let tiffdata = image.tiffRepresentation,
        let bitmaprep = NSBitmapImageRep(data: tiffdata) {

        let props = [NSImageCompressionFactor: Appearance.imageCompressionFactor]
        if let bitmapData = NSBitmapImageRep.representationOfImageReps(in: [bitmaprep], using: .JPEG, properties: props) {
            let path: NSString = "~/Desktop/out.jpg"
            let resolvedPath = path.expandingTildeInPath

            try! bitmapData.write(to: URL(fileURLWithPath: resolvedPath), options: [])
            print("Your image has been saved to \(resolvedPath)")
        }
    }
}
To anyone else experiencing this problem: I spent countless hours trying to find a way to do this, and ended up just getting the scaling factor of the screen (1 for normal Macs, 2 for Retina)... The code looks like this:
func getScaleFactor() -> CGFloat {
    return NSScreen.main()!.backingScaleFactor
}
Then once you have the scale factor, you either scale normally or halve the dimensions for Retina:
if scaleFactor == 2 {
    // halve size proportions for saving on Retina Macs
    return NSMakeSize(CGFloat(oWidth * ratio) / 2, CGFloat(oHeight * ratio) / 2)
} else {
    return NSMakeSize(CGFloat(oWidth * ratio), CGFloat(oHeight * ratio))
}
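For completeness, another way to sidestep the backing scale factor entirely (my own sketch, not from the thread) is to draw into an NSBitmapImageRep whose pixel dimensions are set explicitly, so one point is always one pixel regardless of the screen:

import Cocoa

func resized(_ image: NSImage, to pixelSize: NSSize) -> NSImage? {
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(pixelSize.width),
                                     pixelsHigh: Int(pixelSize.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = pixelSize   // 1 point == 1 pixel, regardless of backing scale

    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
    image.draw(in: NSRect(origin: .zero, size: pixelSize),
               from: .zero, operation: .copy, fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()

    let result = NSImage(size: pixelSize)
    result.addRepresentation(rep)
    return result
}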

My pixel map image (NSImage) sometimes doesn't look like the RGB values I pass in, Swift Core Graphics Quartz

I have two-dimensional pixel maps ([Double]) that I encode (correctly) into a [UInt32], which is fed into a CGDataProvider, which is the source for a CGImage, converted to an NSImage, and finally displayed in a custom view with
class SingleStepMapView: NSView {
    @IBOutlet var dataSource: SingleStepViewController!

    override func drawRect(dirtyRect: NSRect) {
        let mapImage = TwoDPixMap(data: dataSource.mapData,
                                  width: dataSource.mapWidth,
                                  color: dataSource.mapColor).image
        mapImage.drawInRect(self.bounds, fromRect: NSZeroRect,
                            operation: NSCompositingOperation.CompositeSourceAtop,
                            fraction: 1.0)
        return
    }
}
The part where the NSImage is being constructed is in the image property of the TwoDPixMap instance...
var image: NSImage {
    var buffer = [UInt32]()
    let bufferScale = 1.0 / (mapMax - mapMin)
    for d in data {
        let scaled = bufferScale * (d - mapMin)
        buffer.append(color.rgba(scaled))
    }

    let bufferLength = 4 * height * width
    let dataProvider = CGDataProviderCreateWithData(nil, buffer, bufferLength, nil)!

    let bitsPerComponent = 8
    let bitsPerPixel = 32
    let bytesPerRow = 4 * width
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var bitMapInfo = CGBitmapInfo()
    bitMapInfo.insert(.ByteOrderDefault)
    // let bitMapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)

    let interpolate = false
    let renderingIntent = CGColorRenderingIntent.RenderingIntentDefault

    let theImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitMapInfo, dataProvider, nil, interpolate, renderingIntent)!

    let value = NSImage(CGImage: theImage, size: NSSize(width: width, height: height))
    return value
}
I have checked that the buffer values are always correct when fed into the CGDataProviderCreateWithData call. In particular, for the following example there are sixty-four Double values (from 0.0 to 63.0) that are mapped into sixty-four UInt32 values using a color bar I've constructed such that the first pixel is pure red (0xff0000ff) and the last pixel is pure white (0xffffffff). None of the values translate to black. When arrayed as a 64-by-1 map (it is essentially the color bar), it should look like this...
But sometimes, just re-invoking the drawRect method with identical data and an identically created buffer, it looks completely wrong, or sometimes almost right (one pixel the wrong color). I would post more examples, but I'm restricted to only two links.
The problem is that you've created a data provider with an unsafe pointer to buffer, filled that pixel buffer, and created the image using that provider, but then allowed buffer to fall out of scope and be released even though the data provider still had an unsafe reference to that memory address.
Once that memory is deallocated, it can be reused, producing weird behavior ranging from a single pixel that is off to wholesale alteration of that memory range as it's repurposed.
In an Objective-C environment, I'd malloc the memory when I created the provider, and then the last parameter to CGDataProviderCreateWithData would be a C function that I'd write to free that memory. (And that memory-freeing function may not be called until much later, not until the image is released.)
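In Swift, that malloc-plus-cleanup pattern might look something like the following sketch, using the modern CGDataProvider initializer whose release closure is called only once the image is done with the memory:

let byteCount = bytesPerRow * height
let bytes = UnsafeMutableRawPointer.allocate(byteCount: byteCount,
                                             alignment: MemoryLayout<UInt32>.alignment)
// ... fill `bytes` with pixel data ...
let provider = CGDataProvider(dataInfo: nil, data: bytes, size: byteCount) { _, data, _ in
    // Invoked when the provider (and any image built on it) no longer
    // needs the buffer, which may be long after this scope has exited.
    UnsafeMutableRawPointer(mutating: data).deallocate()
}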
Another approach I've used is to call CGBitmapContextCreate to create a context, and then use CGBitmapContextGetData to get the buffer it created for the image. I can then fill that buffer as I see fit. But because that buffer was created for me, the OS takes care of the memory management:
func createImageWithSize(size: NSSize) -> NSImage {
    let width = Int(size.width)
    let height = Int(size.height)
    let bitsPerComponent = 8
    let bytesPerRow = 4 * width
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue // this depends upon how your `rgba` method was written; use whatever `bitmapInfo` makes sense for your app
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)!
    let pixelBuffer = UnsafeMutablePointer<UInt32>(CGBitmapContextGetData(context))

    var currentPixel = pixelBuffer
    for row in 0 ..< height {
        for column in 0 ..< width {
            let red = ...
            let green = ...
            let blue = ...
            currentPixel.memory = rgba(red: red, green: green, blue: blue, alpha: 255)
            currentPixel++
        }
    }

    let cgImage = CGBitmapContextCreateImage(context)!
    return NSImage(CGImage: cgImage, size: size)
}
That yields the expected image.
One could argue that the first technique (creating the data provider, mallocing the memory yourself, and giving CGDataProviderCreateWithData a function parameter to free it) might be better, but then you're stuck writing a C function to do that cleanup, something I'd rather not do in a Swift environment. But it's up to you.