Memory issue when using CVPixelBufferPoolCreatePixelBuffer - Swift

I'm converting a CIImage to a CVPixelBuffer for use in a stream, so this conversion will happen 25-30 times per second and needs to be fast.
I heard that using a buffer pool improves performance, so here is the process I use:
private var bufferPool: CVPixelBufferPool?
private let context = CIContext(options: [.cacheIntermediates: false])
and then the conversion:
var mergedImageBuffer: CVPixelBuffer?
guard let bufferPool = bufferPool else {
    Logger.logError("Error retrieving final buffer pool.")
    return
}
CVPixelBufferPoolCreatePixelBuffer(nil, bufferPool, &mergedImageBuffer)
guard let validMergedImageBuffer = mergedImageBuffer else {
    Logger.logError("Error creating CVPixelBuffer for output image.")
    return
}
context.render(inputCIImage, to: validMergedImageBuffer)
But there seems to be a memory leak, and the app crashes after 30-40 seconds. The crash points to the CVPixelBufferPoolCreatePixelBuffer(nil, bufferPool, &mergedImageBuffer) line with an EXC_RESOURCE RESOURCE_TYPE_MEMORY error. I read that CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress might fix it, but I can't make it work. Could anyone help me with that? Thanks.
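For reference, the pool creation itself isn't shown above. A pool for this kind of per-frame rendering is typically created once, roughly like the sketch below; the width, height and BGRA format are assumptions borrowed from the update further down. The allocation-threshold variant is one way to notice when downstream code holds on to buffers faster than it releases them, which can end in exactly this kind of EXC_RESOURCE termination.

private func makeBufferPool(width: Int, height: Int) -> CVPixelBufferPool? {
    let poolAttributes: [CFString: Any] = [
        kCVPixelBufferPoolMinimumBufferCountKey: 3
    ]
    let pixelBufferAttributes: [CFString: Any] = [
        kCVPixelBufferWidthKey: width,
        kCVPixelBufferHeightKey: height,
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
        kCVPixelBufferIOSurfacePropertiesKey: [String: Any]()
    ]
    var pool: CVPixelBufferPool?
    CVPixelBufferPoolCreate(kCFAllocatorDefault,
                            poolAttributes as CFDictionary,
                            pixelBufferAttributes as CFDictionary,
                            &pool)
    return pool
}

// Per frame, an allocation threshold caps how many live buffers the pool will hand out:
let auxAttributes = [kCVPixelBufferPoolAllocationThresholdKey: 5] as CFDictionary
let status = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault,
                                                                 bufferPool,
                                                                 auxAttributes,
                                                                 &mergedImageBuffer)
if status == kCVReturnWouldExceedAllocationThreshold {
    // Too many buffers are still alive downstream; drop this frame instead of allocating more.
    return
}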
Update:
I changed the process:
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
let width: Int = Int(bufferPoolWidth)
let height: Int = Int(bufferPoolHeight)
CVPixelBufferCreate(kCFAllocatorDefault,
                    width,
                    height,
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &pixelBuffer)
guard let mergedPixelBuffer = pixelBuffer else { return }
CVPixelBufferLockBaseAddress(mergedPixelBuffer, .readOnly)
let context = CIContext()
context.render(inputCIImage, to: mergedPixelBuffer)
// ... Here I use the pixel buffer to send to the stream, and then
CVPixelBufferUnlockBaseAddress(mergedPixelBuffer, .readOnly)
It's now much, much better (no crashing), but there are still some large memory peaks. Is there any way to improve it?
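Two things that often help with per-frame peaks like this, sketched below rather than verified against the full project: create the CIContext once (as in the original pool-based version) instead of on every frame, and wrap the per-frame work in an autoreleasepool so Core Video and Core Image temporaries are drained between frames. The attrs, bufferPoolWidth and bufferPoolHeight names are the ones from the update above; the lock/unlock calls from the update are left out of this sketch because nothing in it reads the pixel memory on the CPU directly.

// Created once, e.g. as a property, and reused for every frame.
private let renderContext = CIContext(options: [.cacheIntermediates: false])

func renderFrame(_ inputCIImage: CIImage) {
    autoreleasepool {
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            Int(bufferPoolWidth),
                            Int(bufferPoolHeight),
                            kCVPixelFormatType_32BGRA,
                            attrs,
                            &pixelBuffer)
        guard let mergedPixelBuffer = pixelBuffer else { return }
        renderContext.render(inputCIImage, to: mergedPixelBuffer)
        // ... hand mergedPixelBuffer to the stream here ...
    }
}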

Related

Get RGB average of "CIAreaAverage" from CMSampleBuffer in Float precision in Swift

I am trying to get the average RGB value for my "AVCaptureVideoDataOutput" feed. I found the following solution on StackOverflow:
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

let filter = CIFilter(name: "CIAreaAverage")
filter!.setValue(cameraImage, forKey: kCIInputImageKey)
let outputImage = filter!.valueForKey(kCIOutputImageKey) as! CIImage!

let ctx = CIContext(options: nil)
let cgImage = ctx.createCGImage(outputImage, fromRect: outputImage.extent)

let rawData: NSData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage))!
let pixels = UnsafePointer<UInt8>(rawData.bytes)
let bytes = UnsafeBufferPointer<UInt8>(start: pixels, count: rawData.length)

var BGRA_index = 0
for pixel in UnsafeBufferPointer(start: bytes.baseAddress, count: bytes.count) {
    switch BGRA_index {
    case 0:
        bluemean = CGFloat(pixel)
    case 1:
        greenmean = CGFloat(pixel)
    case 2:
        redmean = CGFloat(pixel)
    case 3:
        break
    default:
        break
    }
    BGRA_index++
}
But this produces the average as an Int, and I need it in a Float format with the precision kept. The rounding is quite problematic in the problem domain I'm working with. Is there a way to get a Float average efficiently?
Thanks a lot!
May I recommend using our library CoreImageExtensions for reading the value? We added methods for reading pixel values from CIImages in different formats. For your case it would look like this:
import CoreImageExtensions
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
let filter = CIFilter(name: "CIAreaAverage")!
filter.setValue(cameraImage, forKey: kCIInputImageKey)
filter.setValue(CIVector(cgRect: cameraImage.extent), forKey: kCIInputExtentKey)
let outputImage = filter.outputImage!
let context = CIContext()
// get the value of a specific pixel as a `SIMD4<Float32>`
let average = context.readFloat32PixelValue(from: outputImage, at: CGPoint.zero)
Also keep in mind, if you want to compute the average regularly (not just once), to create only a single instance of CIContext and reuse it for every camera frame. Creating it is expensive, and reusing the same instance actually improves performance since it caches internal resources.
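If pulling in a library isn't an option, the same float-precision read can also be done with Core Image alone by rendering the 1x1 CIAreaAverage output into a small Float32 bitmap. A minimal sketch, assuming the outputImage and a reused context from above:

// Render the single average pixel into four Float32 components (RGBA).
var averagePixel = [Float32](repeating: 0, count: 4)
context.render(outputImage,
               toBitmap: &averagePixel,
               rowBytes: 4 * MemoryLayout<Float32>.size,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: .RGBAf,
               colorSpace: CGColorSpaceCreateDeviceRGB())

let redMean = averagePixel[0]
let greenMean = averagePixel[1]
let blueMean = averagePixel[2]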

Application performance issue with CGImageSourceCreateThumbnailAtIndex

I'm using CGImageSourceCreateThumbnailAtIndex to convert Data into a UIImage, but if I convert around 7-8 images with this method the application gets slow. If I use UIImage(data: imageData) instead, everything works fine. How do I fix this issue? I need to use CGImageSourceCreateThumbnailAtIndex to resize the image.
Below is the code I'm using.
convenience init?(data: Data, maxSize: CGSize) {
    let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, imageSourceOptions) else {
        return nil
    }
    let options = [
        // The size of the longest edge of the thumbnail
        kCGImageSourceThumbnailMaxPixelSize: max(maxSize.width, maxSize.height),
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
    ] as CFDictionary
    // Generate the thumbnail
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options) else {
        return nil
    }
    print("Generating Image....")
    self.init(cgImage: cgImage)
}
I had the same problem when batch processing images. Take a look at your RAM usage, it's off the charts. According to Apple, with CGImageSourceCreateWithData and CGImageSourceCreateWithURL, "You’re responsible for releasing this type using CFRelease."
Apple Docs
With Swift, you can do it using:
autoreleasepool {
    let img = CGImageSourceCreateWithURL ...
}
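Applied to the initializer from the question, the pool goes around each conversion so the image source and thumbnail temporaries are drained every iteration rather than at the end of the batch. A minimal sketch; imageDatas and the 300-point maximum size are made-up placeholders:

import UIKit

var thumbnails: [UIImage] = []
for imageData in imageDatas {
    autoreleasepool {
        // Temporary CGImageSource / CGImage allocations are released here,
        // at the end of each iteration, instead of piling up across the batch.
        if let thumbnail = UIImage(data: imageData, maxSize: CGSize(width: 300, height: 300)) {
            thumbnails.append(thumbnail)
        }
    }
}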

How to manually release CMSampleBuffer

This code leads to a memory leak and an app crash:
var outputSamples = [Float]()
assetReader.startReading()
while assetReader.status == .reading {
    let trackOutput = assetReader.outputs.first!
    if let sampleBuffer = trackOutput.copyNextSampleBuffer(),
       let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
        let blockBufferLength = CMBlockBufferGetDataLength(blockBuffer)
        let sampleLength = CMSampleBufferGetNumSamples(sampleBuffer) * channelCount(from: assetReader)
        var data = Data(capacity: blockBufferLength)
        data.withUnsafeMutableBytes { (blockSamples: UnsafeMutablePointer<Int16>) in
            CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0, dataLength: blockBufferLength, destination: blockSamples)
            CMSampleBufferInvalidate(sampleBuffer)
            let processedSamples = process(blockSamples,
                                           ofLength: sampleLength,
                                           from: assetReader,
                                           downsampledTo: targetSampleCount)
            outputSamples += processedSamples
        }
    }
}
var paddedSamples = [Float](repeating: silenceDbThreshold, count: targetSampleCount)
paddedSamples.replaceSubrange(0..<min(targetSampleCount, outputSamples.count), with: outputSamples)
This is due to copyNextSampleBuffer() and the Create Rule.
At the same time, we cannot use CFRelease() in Swift. Why the documentation links to an Objective-C-only rule is beyond my understanding.
Is there a way to release a CMSampleBuffer manually in Swift?
I recently solved a similar issue by using an autoreleasepool
Try wrapping the area where sampleBuffer is used in an autoreleasepool. Something like this:
var outputSamples = [Float]()
assetReader.startReading()
while assetReader.status == .reading {
    let trackOutput = assetReader.outputs.first!
    autoreleasepool {
        if let sampleBuffer = trackOutput.copyNextSampleBuffer(),
           let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
            let blockBufferLength = CMBlockBufferGetDataLength(blockBuffer)
            let sampleLength = CMSampleBufferGetNumSamples(sampleBuffer) * channelCount(from: assetReader)
            var data = Data(capacity: blockBufferLength)
            data.withUnsafeMutableBytes { (blockSamples: UnsafeMutablePointer<Int16>) in
                CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0, dataLength: blockBufferLength, destination: blockSamples)
                CMSampleBufferInvalidate(sampleBuffer)
                let processedSamples = process(blockSamples,
                                               ofLength: sampleLength,
                                               from: assetReader,
                                               downsampledTo: targetSampleCount)
                outputSamples += processedSamples
            }
        }
    }
}
var paddedSamples = [Float](repeating: silenceDbThreshold, count: targetSampleCount)
paddedSamples.replaceSubrange(0..<min(targetSampleCount, outputSamples.count), with: outputSamples)
If I understand correctly, once execution moves out of the autoreleasepool's scope, the sampleBuffer will be released.
This is not really a solution: it seems that releasing the memory manually is impossible, and using the while loop together with assetReader results in memory not being released while the unsafe mutable bytes are read.
The problem was solved with a workaround: converting the audio file into CAF format before exposing it to the while loop.
Downside: it takes a hot second; the longer the audio file, the more time it takes.
Upside: it used only a minuscule amount of memory, which was the problem in the first place.
Inspired by the answer from https://stackoverflow.com/users/2907715/carpsen90 in Extract meter levels from audio file.
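For reference, one way such a pre-conversion to CAF could be written is with AVAudioFile, which decodes the compressed file to PCM while writing. This is only a sketch of the general idea under assumed file URLs, not the exact code behind the workaround above:

import AVFoundation

// Decode a compressed audio file (e.g. .m4a) into a PCM .caf file.
func convertToCAF(inputURL: URL, outputURL: URL) throws {
    let inputFile = try AVAudioFile(forReading: inputURL)
    // The output file type is inferred from the .caf extension;
    // the settings reuse the reader's uncompressed processing format.
    let outputFile = try AVAudioFile(forWriting: outputURL,
                                     settings: inputFile.processingFormat.settings)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: inputFile.processingFormat,
                                        frameCapacity: 32_768) else { return }
    while inputFile.framePosition < inputFile.length {
        try inputFile.read(into: buffer)
        try outputFile.write(from: buffer)
    }
}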

Why is an iPhone XS getting worse CPU performance when using the camera live than an iPhone 6S Plus?

I'm using live camera output to update a CIImage on an MTKView. My main issue is that I have a large, negative performance difference where an older iPhone gets better CPU performance than a newer one, despite all the settings I've come across being the same.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below, I have my captureOutput function with two debug bools that I can turn on and off while running. I used this to try to determine the cause of my issue.
applyLiveFilter - bool whether or not to manipulate the CIImage with a CIFilter.
updateMetalView - bool whether or not to update the MTKView's CIImage.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    /*
     Create CIImage from camera.
     Here I save a few percent of CPU by using a function
     to convert a sampleBuffer to a Metal texture, but
     whether I use this or the commented out code
     (without captureOutputMTLOptions) does not have
     significant impact.
     */
    guard let texture: MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else {
        return
    }

    var cameraImage: CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!

    var transform: CGAffineTransform = .identity
    transform = transform.scaledBy(x: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
    cameraImage = cameraImage.transformed(by: transform)

    /*
    // old non-Metal way of getting the ciimage from the cvPixelBuffer
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    var cameraImage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)
    */

    var orientation = UIImage.Orientation.right
    if isFrontCamera {
        orientation = UIImage.Orientation.leftMirrored
    }

    // apply filter to camera image
    if debug_applyLiveFilter {
        cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes: currentCameraRes!)
    }

    DispatchQueue.main.async {
        if debug_updateMetalView {
            self.MTLCaptureView!.image = cameraImage
        }
    }
}
Below is a chart of results between both phones toggling the different combinations of bools discussed above:
Even without the Metal view's CIImage updating and no filters being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's, which isn't significant overhead, but it makes me suspect that the camera capture somehow differs between the devices.
My AVCaptureSession's preset is set identically between both phones (AVCaptureSession.Preset.hd1280x720).
The CIImage created from captureOutput is the same size (extent) between both phones.
Are there any settings I need to set manually in these two phones' AVCaptureDevice settings, including activeFormat properties, to make them the same between devices?
The settings I have now are:
if let captureDevice = AVCaptureDevice.default(for:AVMediaType.video) {
do {
try captureDevice.lockForConfiguration()
captureDevice.isSubjectAreaChangeMonitoringEnabled = true
captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
captureDevice.unlockForConfiguration()
} catch {
// Handle errors here
print("There was an error focusing the device's camera")
}
}
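As a side note (the answer further down doesn't confirm this as the cause), the activeFormat mentioned above can be logged, and the frame rate pinned explicitly, inside the same lockForConfiguration() block; that at least makes it possible to verify that both phones are capturing under identical conditions. A rough sketch:

// Sketch only: log the active format and pin the frame rate while the device is locked.
let format = captureDevice.activeFormat
let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
print("active format: \(dimensions.width)x\(dimensions.height)",
      "max FPS:", format.videoSupportedFrameRateRanges.map { $0.maxFrameRate })

// Force both devices to the same frame rate (e.g. 30 fps) so a higher
// default capture rate on one of them can be ruled out.
captureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
captureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)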
My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView
{
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue =
    {
        [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext =
    {
        [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?)
    {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())

        if super.device == nil
        {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }

    // The image to display
    var image: CIImage?
    {
        didSet
        {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect)
    {
        guard var image = image,
              let targetTexture: MTLTexture = currentDrawable?.texture else
        {
            return
        }

        let commandBuffer = commandQueue.makeCommandBuffer()

        let customDrawableSize: CGSize = drawableSize
        let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y

        let scaleX = customDrawableSize.width / image.extent.width
        let scaleY = customDrawableSize.height / image.extent.height
        let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)

        image = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(image,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)

        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are setup below:
func setupCameraAndMic() {
    let backCamera = AVCaptureDevice.default(for: AVMediaType.video)

    var error: NSError?
    var videoInput: AVCaptureDeviceInput!
    do {
        videoInput = try AVCaptureDeviceInput(device: backCamera!)
    } catch let error1 as NSError {
        error = error1
        videoInput = nil
        print(error!.localizedDescription)
    }

    if error == nil &&
        captureSession!.canAddInput(videoInput) {

        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
            print("Error: could not create a texture cache")
            return
        }

        captureSession!.addInput(videoInput)
        setDeviceFrameRateForCurrentFilter(device: backCamera)
        stillImageOutput = AVCapturePhotoOutput()

        if captureSession!.canAddOutput(stillImageOutput!) {
            captureSession!.addOutput(stillImageOutput!)

            let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
            videoOutput.setSampleBufferDelegate(self, queue: q)

            videoOutput.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String: NSNumber(value: kCVPixelFormatType_32BGRA),
                kCVPixelBufferMetalCompatibilityKey as String: true
            ]
            videoOutput.alwaysDiscardsLateVideoFrames = true

            if captureSession!.canAddOutput(videoOutput) {
                captureSession!.addOutput(videoOutput)
            }

            captureSession!.startRunning()
        }
    }
    setDefaultFocusAndExposure()
}
The video and mic are recorded on two separate streams. Details on the microphone and recording video have been left out since my focus is performance of live camera output.
UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head, you are not comparing apples with apples. Even setting aside the 2.49 GHz A12 versus the 1.85 GHz A9, the differences between the cameras are also huge. Even if you use them with the same parameters, there are several features of the XS's camera that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry I can't give sources; I tried to find metrics for the CPU cost of those features, but I couldn't find any. Unfortunately for your needs, that information isn't relevant for marketing when they are selling it as the best camera ever in a smartphone.
They are selling it as the best processor as well; we don't know what would happen using the XS camera with an A9 processor. It would probably crash, but we will never know...
PS... Are your metrics for the whole processor or for the cores in use? For the whole processor, you also need to consider the other tasks the device may be executing; for a single core, it's 21% of 200% against 39% of 600%.

CMSampleBuffer frame converted to vImage has wrong colors

I'm trying to convert a CMSampleBuffer from the camera output to a vImage and later apply some processing. Unfortunately, even without any further editing, the frame I get from the buffer has the wrong colors:
Implementation (memory management and errors are not considered in the question):
Configuring video output device:
videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: captureQueue)
videoConnection = videoDataOutput.connection(withMediaType: AVMediaTypeVideo)
captureSession.sessionPreset = AVCaptureSessionPreset1280x720
let videoDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else {
    return
}
Creating a vImage from the CMSampleBuffer received from the camera:
// Convert `CMSampleBuffer` to `CVImageBuffer`
guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

var buffer: vImage_Buffer = vImage_Buffer()
buffer.data = CVPixelBufferGetBaseAddress(pixelBuffer)
buffer.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
buffer.width = vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer))
buffer.height = vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer))

let vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer)
let bitmapInfo: CGBitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)

var cgFormat = vImage_CGImageFormat(bitsPerComponent: 8,
                                    bitsPerPixel: 32,
                                    colorSpace: nil,
                                    bitmapInfo: bitmapInfo,
                                    version: 0,
                                    decode: nil,
                                    renderingIntent: .defaultIntent)

// Create vImage
vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgFormat, pixelBuffer, vformat!.takeRetainedValue(), cgColor, vImage_Flags(kvImageNoFlags))
Converting the buffer to a UIImage:
For the sake of testing, the CVPixelBuffer is exported to a UIImage, but adding it to a video buffer has the same result.
var dstPixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreateWithBytes(nil, Int(buffer.width), Int(buffer.height),
                                          kCVPixelFormatType_32BGRA, buffer.data,
                                          Int(buffer.rowBytes), releaseCallback,
                                          nil, nil, &dstPixelBuffer)

let destCGImage = vImageCreateCGImageFromBuffer(&buffer, &cgFormat, nil, nil, numericCast(kvImageNoFlags), nil)?.takeRetainedValue()

// create a UIImage
let exportedImage = destCGImage.flatMap { UIImage(cgImage: $0, scale: 0.0, orientation: UIImageOrientation.right) }

DispatchQueue.main.async {
    self.previewView.image = exportedImage
}
Try setting the color space on your CV image format:
let vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer).takeRetainedValue()
vImageCVImageFormat_SetColorSpace(vformat,
                                  CGColorSpaceCreateDeviceRGB())
...and update your call to vImageBuffer_InitWithCVPixelBuffer to reflect the fact that vformat is now a managed reference:
let error = vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgFormat, pixelBuffer, vformat, nil, vImage_Flags(kvImageNoFlags))
Finally, you can remove the following lines; vImageBuffer_InitWithCVPixelBuffer is doing that work for you:
// buffer.data = CVPixelBufferGetBaseAddress(pixelBuffer)
// buffer.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer)
// buffer.width = vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer))
// buffer.height = vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer))
Note that you don't need to lock the Core Video pixel buffer; if you check the header doc, it says "It is not necessary to lock the CVPixelBuffer before calling this function".
The call to vImageBuffer_InitWithCVPixelBuffer is modifying your vImage_Buffer's and your CVPixelBuffer's contents, which is a bit naughty because in your (linked) code you promise not to modify the pixels when you say
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
The correct way to initialise the CGBitmapInfo for BGRA8888 is alpha first, 32-bit little endian, which is non-obvious but covered in the header file for vImage_CGImageFormat in vImage_Utilities.h:
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue | CGImageByteOrderInfo.order32Little.rawValue)
What I don't get is why vImageBuffer_InitWithCVPixelBuffer is modifying your buffer, since cgFormat (the desiredFormat) should match vformat. It is documented to modify the buffer, though, so maybe you should copy the data first.
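Putting those suggestions together, the conversion from the question might end up looking roughly like the sketch below. It only combines the changes already described above (explicit color space, alpha-first little-endian bitmap info, letting vImageBuffer_InitWithCVPixelBuffer fill the buffer), with error handling reduced to a guard:

guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

// Describe the pixel buffer and give it an explicit color space.
let vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer).takeRetainedValue()
vImageCVImageFormat_SetColorSpace(vformat, CGColorSpaceCreateDeviceRGB())

// BGRA8888 is alpha first, 32-bit little endian.
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue | CGImageByteOrderInfo.order32Little.rawValue)
var cgFormat = vImage_CGImageFormat(bitsPerComponent: 8,
                                    bitsPerPixel: 32,
                                    colorSpace: nil,
                                    bitmapInfo: bitmapInfo,
                                    version: 0,
                                    decode: nil,
                                    renderingIntent: .defaultIntent)

// vImageBuffer_InitWithCVPixelBuffer fills in data/rowBytes/width/height itself,
// so the buffer fields no longer need to be populated by hand.
var buffer = vImage_Buffer()
let error = vImageBuffer_InitWithCVPixelBuffer(&buffer, &cgFormat, pixelBuffer, vformat, nil, vImage_Flags(kvImageNoFlags))
guard error == kvImageNoError else { return }

// ... use the buffer, then free the memory the init call allocated for it:
free(buffer.data)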