Swift Array(bufferPointer) EXC_BAD_ACCESS crash

I am receiving audio buffers and converting them into a conventional array for ease of use. This code has always been reliable, but recently it began crashing quite frequently. I am using AirPods when it crashes, which may or may not be part of the problem. The mic object is an AKMicrophone object from AudioKit.
func tap() {
    let recordingFormat = mic.outputNode.inputFormat(forBus: 0)
    mic.outputNode.removeTap(onBus: 0)
    mic.outputNode.installTap(onBus: 0,
                              bufferSize: UInt32(recordingBufferSize),
                              format: recordingFormat) { (buffer, when) in
        let stereoDataUnsafePointer = buffer.floatChannelData!
        let monoPointer = stereoDataUnsafePointer.pointee
        let count = self.recordingBufferSize
        let bufferPointer = UnsafeBufferPointer(start: monoPointer, count: count)
        let array = Array(bufferPointer) // CRASHES HERE
    }
    mic.start()
}
When running on an iPhone 7 with AirPods, this crashes about 7 out of 10 times with one of two error messages:
EXC_BAD_ACCESS
Fatal error: UnsafeMutablePointer.initialize overlapping range
If the way I was converting the array were wrong, I would expect it to crash every time. I suspect the recording sample rate could be a factor.

I figured out the answer. I had hardcoded the buffer size to 10000 and, when installing the tap, requested a buffer size of 10000. However, the device ignored this and instead delivered buffers of 6400 frames. So when I tried to initialize an array of size 10000, I read off the end of the buffer. I modified the code to check the actual buffer size rather than the size I requested:
let stereoDataUnsafePointer = buffer.floatChannelData!
let monoPointer = stereoDataUnsafePointer.pointee
let count = buffer.frameLength // <-- Check the actual buffer size
let bufferPointer = UnsafeBufferPointer(start: monoPointer, count: Int(count))
let array = Array(bufferPointer)
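For completeness, a minimal sketch of the corrected tap closure (same names as above; error handling omitted):

mic.outputNode.installTap(onBus: 0,
                          bufferSize: UInt32(recordingBufferSize),
                          format: recordingFormat) { (buffer, when) in
    // frameLength is the number of frames actually delivered, which can be
    // smaller than the requested bufferSize (6400 vs. 10000 in my case)
    let monoPointer = buffer.floatChannelData!.pointee
    let bufferPointer = UnsafeBufferPointer(start: monoPointer,
                                            count: Int(buffer.frameLength))
    let array = Array(bufferPointer)
    // ... use `array` ...
}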

Related

UnsafeMutableAudioBufferListPointer allocation before calling AudioObjectGetPropertyData

I've been trying to use Apple's CoreAudio from Swift.
I found many examples of how to enumerate streams and channels on a device.
However, all of them seem to use an incorrect size when calling UnsafeMutablePointer<AudioBufferList>.allocate().
They first request the property data size, which returns a number of bytes.
Then they use this number of bytes to allocate an (unsafe) AudioBufferList of that size (using the number of bytes as the capacity of the list!).
Please see my comments inline below:
var address = AudioObjectPropertyAddress(
    mSelector: AudioObjectPropertySelector(kAudioDevicePropertyStreamConfiguration),
    mScope: AudioObjectPropertyScope(kAudioDevicePropertyScopeInput),
    mElement: 0)
var propsize = UInt32(0)
var result: OSStatus = AudioObjectGetPropertyDataSize(self.id, &address, 0, nil, &propsize)
if (result != 0) {
    return false
}
// ABOVE: propsize is set to the number of bytes the property data contains;
// typical numbers are 8 (no streams) or 24 (1 stream, 2 interleaved channels)
// BELOW: propsize is used as the AudioBufferList capacity (in number of buffers!)
let bufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: Int(propsize))
result = AudioObjectGetPropertyData(self.id, &address, 0, nil, &propsize, bufferList)
if (result != 0) {
    return false
}
let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
for bufferNum in 0..<buffers.count {
    if buffers[bufferNum].mNumberChannels > 0 {
        return true
    }
}
This works all of the time, because it allocates much more memory than needed for UnsafeMutablePointer<AudioBufferList>, but this is obviously wrong.
I've been searching for a way to correctly allocate UnsafeMutablePointer<AudioBufferList> from the number of bytes returned by AudioObjectGetPropertyDataSize(), but I haven't been able to find anything all day. Please help ;)
"to correctly allocate UnsafeMutablePointer<AudioBufferList> from the number of bytes that is returned by AudioObjectGetPropertyDataSize()"
You should not allocate UnsafeMutablePointer<AudioBufferList>, but allocate raw bytes of the exact size and cast it to UnsafeMutablePointer<AudioBufferList>.
Something like this:
let propData = UnsafeMutableRawPointer.allocate(byteCount: Int(propsize),
                                                alignment: MemoryLayout<AudioBufferList>.alignment)
result = AudioObjectGetPropertyData(self.id, &address, 0, nil, &propsize, propData)
if (result != 0) {
    return false
}
let bufferList = propData.assumingMemoryBound(to: AudioBufferList.self)
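Note that this raw allocation should eventually be paired with a deallocation (as the follow-up below also points out); a minimal pattern:

defer { propData.deallocate() } // release the raw bytes when this scope exits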
I fully agree with the accepted answer of using UnsafeMutableRawPointer.allocate(byteCount:alignment:) for getting the device stream configuration (though it should also be paired with a call to deallocate()), but I just wanted to share another option for completeness (this shouldn't be upvoted as the answer to this question).
If you truly need to calculate the number of buffers from the number of bytes (I'm not sure there is actually any such need), it can be done.
When first converting code to Swift, I used something like this:
let numBuffers = (Int(propsize) - MemoryLayout<AudioBufferList>.offset(of: \AudioBufferList.mBuffers)!) / MemoryLayout<AudioBuffer>.size
if numBuffers == 0 { // Avoid trying to allocate zero buffers
    return false
}
let bufferList = AudioBufferList.allocate(maximumBuffers: numBuffers)
defer { bufferList.unsafeMutablePointer.deallocate() }
err = AudioObjectGetPropertyData(id, &address, 0, nil, &propsize, bufferList.unsafeMutablePointer)
Again, I do NOT actually recommend this approach to get the stream configuration - it's unnecessarily complex IMO, and I've since adopted something like the accepted answer. So this may not have value other than as an academic exercise.

Problem accessing MTLBuffer via typed UnsafeMutablePointer

I have a function that is passed an optional MTLBuffer. My goal is to iteratively change values in that buffer through an index into a typed pointer to the same buffer. However, when I run the code, I get the error "Thread 1: EXC_BAD_ACCESS (code=2, address=0x1044f1000)".
Am I converting to the typed UnsafeMutablePointer correctly?
Would it be better to convert to a typed UnsafeMutableBufferPointer? If so, how would I convert from the MTLBuffer to a typed UnsafeMutableBufferPointer?
Any idea why I'm getting this error?
Note: I've removed most guard checks to keep this simple. I've confirmed that the MTLDevice (via device), the bufferA allocation, dataPtr, and floatPtr are all non-nil. floatPtr and dataPtr do point to the same memory address.
This is how I allocate the buffer:
bufferSize = 16384
bufferA = device?.makeBuffer(length: bufferSize, options: MTLResourceOptions.storageModeShared)
Here's my code operating on the buffer:
guard let dataPtr = bufferA?.contents() else {
    fatalError("error retrieving buffer?.contents() in generateRandomFloatData()")
}
let floatPtr = dataPtr.bindMemory(to: Float.self, capacity: bufferA!.length)
for index in 0...bufferSize - 1 {
    floatPtr[index] = 1.0 // Float.random(in: 0...Float.greatestFiniteMagnitude)
}
Thank you!
Am I converting to the Typed UnsafeMutablePointer correctly?
NO.
When you call makeBuffer(length:options:), you pass the length in bytes.
But a Float occupies 4 bytes in memory.
So you need to divide the byte count by the stride of Float wherever you work in numbers of elements:
let floatPtr = dataPtr.bindMemory(to: Float.self, capacity: bufferA!.length / MemoryLayout<Float>.stride)
for index in 0..<bufferSize / MemoryLayout<Float>.stride {
    floatPtr[index] = 1.0 // Float.random(in: 0...Float.greatestFiniteMagnitude)
}
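As for the follow-up question: a typed UnsafeMutableBufferPointer works too, and is often more convenient because it behaves like a collection (count, indices, iteration). A minimal sketch under the same assumptions as the question's code (bufferA non-nil, length in bytes):

let count = bufferA!.length / MemoryLayout<Float>.stride
let floatBuffer = UnsafeMutableBufferPointer(
    start: bufferA!.contents().bindMemory(to: Float.self, capacity: count),
    count: count)
for index in floatBuffer.indices {
    floatBuffer[index] = 1.0 // same fill as above, iterating via indices
}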

Swift 4: Detecting strongest frequency or presence of frequency in audio stream.

I am writing an application that needs to detect a frequency in the audio stream. I have read about a million articles and am having trouble crossing the finish line. My audio data comes to me in the function below via Apple's AVFoundation framework.
I am using Swift 4.2 and have tried playing with the FFT functions, but they are a little over my head at the current moment.
Any thoughts?
// Gets the data as a callback from the AVFoundation framework.
public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Prints the whole sample buffer, which tells us a lot about what's inside
    print(sampleBuffer)
    // Create a block buffer and use CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
    // to read the sample data out into an AudioBufferList
    var buffer: CMBlockBuffer? = nil
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: AudioBuffer(mNumberChannels: 1, mDataByteSize: 0, mData: nil))
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment), blockBufferOut: &buffer)
    let abl = UnsafeMutableAudioBufferListPointer(&audioBufferList)
    var sum: Int64 = 0
    var count: Int = 0
    var bufs: Int = 0
    var max: Int64 = 0
    var min: Int64 = 0
    // Loop through the samples, tracking the minimum and maximum values
    for buff in abl {
        let samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(OpaquePointer(buff.mData)),
                                                        count: Int(buff.mDataByteSize) / MemoryLayout<Int16>.size)
        for sample in samples {
            let s = Int64(sample)
            sum = (sum + s * s)
            count += 1
            if (s > max) {
                max = s
            }
            if (s < min) {
                min = s
            }
            print(sample)
        }
        bufs += 1
    }
    // Debug
    print("min - \(min), max = \(max)")
    // Update the interface
    DispatchQueue.main.async {
        self.frequencyDataOutLabel.text = "min - \(min), max = \(max)"
    }
    // Stop the capture session
    self.captureSession.stopRunning()
}
After much research I found that the answer is to use an FFT (Fast Fourier Transform). It takes the raw input from the code above and converts it into an array of values representing the magnitude of each frequency band.
Much props to the open-source code at https://github.com/jscalo/tempi-fft, which implements a visualizer that captures the data and displays it. From there, it was a matter of adapting it to my needs. In my case I was looking for frequencies well above human hearing (the 20 kHz range). By scanning the latter half of the array in the tempi-fft code, I was able to determine whether the frequencies I was looking for were present and loud enough.
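For reference, a minimal magnitude-spectrum sketch using Accelerate's vDSP (this is an illustration, not the tempi-fft code; it assumes samples is a [Float] whose count is a power of two):

import Accelerate

func magnitudes(of samples: [Float]) -> [Float] {
    let log2n = vDSP_Length(log2(Float(samples.count)))
    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    let halfCount = samples.count / 2
    var realp = [Float](repeating: 0, count: halfCount)
    var imagp = [Float](repeating: 0, count: halfCount)
    var output = [Float](repeating: 0, count: halfCount)

    realp.withUnsafeMutableBufferPointer { realPtr in
        imagp.withUnsafeMutableBufferPointer { imagPtr in
            var splitComplex = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the real samples into even/odd split-complex form
            samples.withUnsafeBufferPointer { samplesPtr in
                samplesPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: halfCount) {
                    vDSP_ctoz($0, 2, &splitComplex, 1, vDSP_Length(halfCount))
                }
            }
            // In-place forward real FFT, then squared magnitude per bin
            vDSP_fft_zrip(fftSetup, &splitComplex, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&splitComplex, 1, &output, 1, vDSP_Length(halfCount))
        }
    }
    // Bin i corresponds to frequency i * sampleRate / samples.count,
    // so high (ultrasonic) content shows up in the upper bins.
    return output
}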

AVAssetReader trouble getting pixel buffer from copyNextSampleBuffer(), Swift

I'm trying to read the image frames from a QuickTime movie file using AVFoundation and AVAssetReader on macOS. I want to display the frames via a texture map in Metal. There are many examples of using AVAssetReader online, but I cannot get it working for what I want.
I can read the basic frame data from the movie -- the time values, size, and durations in the printout look correct. However, when I try to get the pixelBuffer, CMSampleBufferGetImageBuffer returns NULL.
let track = asset.tracks(withMediaType: AVMediaType.video)[0]
let videoReaderSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil) // using videoReaderSettings causes it to no longer report frame data
guard let reader = try? AVAssetReader(asset: asset) else { exit(1) }
output.alwaysCopiesSampleData = true
reader.add(output)
reader.startReading()
while (reader.status == .reading) {
    if let sampleBuffer = output.copyNextSampleBuffer(), CMSampleBufferIsValid(sampleBuffer) {
        let frameTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer)
        if (frameTime.isValid) {
            print("frame: \(frameNumber), time: \(String(format: "%.3f", frameTime.seconds)), size: \(CMSampleBufferGetTotalSampleSize(sampleBuffer)), duration: \(CMSampleBufferGetOutputDuration(sampleBuffer).value)")
            if let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                getTextureFromCVBuffer(pixelBuffer)
                // break
            }
            frameNumber += 1
        }
    }
}
This problem was addressed here (Why does CMSampleBufferGetImageBuffer return NULL), where it is suggested that one must specify a video format in the settings argument instead of nil. So I tried replacing nil with videoReaderSettings above, with various values for the format: kCVPixelFormatType_32BGRA, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, and others.
The result is that the frame 'time' values are still correct, but the 'size' and 'duration' values are 0. However, CMSampleBufferGetImageBuffer DOES return something, where before it returned NULL. But garbage shows up onscreen.
Here is the function which converts the pixelBuffer to a Metal texture.
func getTextureFromCVBuffer(_ pixelBuffer: CVPixelBuffer) {
    // Get width and height for the pixel buffer
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // Convert the pixel buffer into a Metal texture
    var cvTextureOut: CVMetalTexture?
    if CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, self.textureCache!, pixelBuffer, nil, .bgra8Unorm, width, height, 0, &cvTextureOut) != kCVReturnSuccess {
        print("CVMetalTexture create failed!")
    }
    guard let cvTexture = cvTextureOut, let inputTexture = CVMetalTextureGetTexture(cvTexture) else {
        print("Failed to create metal texture")
        return
    }
    texture = inputTexture
}
When I'm able to pass a pixelBuffer to this function, it does report the correct size for the image. But as I said, what appears onscreen is garbage -- it's composed of chunks of recent Safari browser pages, actually. I'm not sure if the problem is in the first function or the second function. A nonzero return value from CMSampleBufferGetImageBuffer is encouraging, but the 0's for size and duration are not.
I found this thread (Buffer size of CMSampleBufferRef) which suggests that showing 0 for the size and duration may not be a problem, so maybe the issue is in the conversion to the Metal texture?
Any idea what I am doing wrong?
Thanks!
Pass videoReaderSettings to the AVAssetReaderTrackOutput:
let videoReaderSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
let output = AVAssetReaderTrackOutput(track: track, outputSettings: videoReaderSettings)

Chunk Rendering in Metal

I'm trying to create a procedural game using Metal, and I'm using an octree based chunk approach for a Level of Detail implementation.
The method I'm using involves the CPU creating the octree nodes for the terrain, which then has its mesh created on the GPU using a compute shader. This mesh is stored in a vertex buffer and index buffer in the chunk object for rendering.
All of this seems to work fairly well, however when it comes to rendering chunks I'm hitting performance issues early on. Currently I gather an array of chunks to draw, then submit that to my renderer, that will create an MTLParallelRenderCommandEncoder to then create an MTLRenderCommandEncoder for each chunk, which is then submitted to the GPU.
By the looks of it, around 50% of the CPU time is spent on creating the MTLRenderCommandEncoder for each chunk. Currently I'm just creating a simple 8-vertex cube mesh for each chunk, and with a 4x4x4 array of chunks I'm dropping to around 50 fps in these early stages. (In reality it seems there can only be up to 63 MTLRenderCommandEncoders in each MTLParallelRenderCommandEncoder, so it's not fully 4x4x4.)
I've read that the point of MTLParallelRenderCommandEncoder is to create each MTLRenderCommandEncoder on a separate thread, yet I've not had much luck getting this to work. Besides, multithreading wouldn't get around the cap of 63 chunks rendered at most.
I feel that somehow consolidating the vertex and index buffers for each chunk into one or two larger buffers for submission would help, but I'm not sure how to do this without copious memcpy() calls and whether or not this would even improve efficiency.
Here's my code that takes in the array of nodes and draws them:
func drawNodes(nodes: [OctreeNode], inView view: AHMetalView) {
    // For control of several rotating buffers
    dispatch_semaphore_wait(displaySemaphore, DISPATCH_TIME_FOREVER)
    makeDepthTexture()
    updateUniformsForView(view, duration: view.frameDuration)
    let commandBuffer = commandQueue.commandBuffer()
    let optDrawable = layer.nextDrawable()
    guard let drawable = optDrawable else {
        return
    }
    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = drawable.texture
    passDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.2, 0.2, 0.2, 1)
    passDescriptor.colorAttachments[0].storeAction = .Store
    passDescriptor.colorAttachments[0].loadAction = .Clear
    passDescriptor.depthAttachment.texture = depthTexture
    passDescriptor.depthAttachment.clearDepth = 1
    passDescriptor.depthAttachment.loadAction = .Clear
    passDescriptor.depthAttachment.storeAction = .Store
    let parallelRenderPass = commandBuffer.parallelRenderCommandEncoderWithDescriptor(passDescriptor)
    // Currently 63 nodes as a maximum
    for node in nodes {
        // This line is taking up around 50% of the CPU time
        let renderPass = parallelRenderPass.renderCommandEncoder()
        renderPass.setRenderPipelineState(renderPipelineState)
        renderPass.setDepthStencilState(depthStencilState)
        renderPass.setFrontFacingWinding(.CounterClockwise)
        renderPass.setCullMode(.Back)
        let uniformBufferOffset = sizeof(AHUniforms) * uniformBufferIndex
        renderPass.setVertexBuffer(node.vertexBuffer, offset: 0, atIndex: 0)
        renderPass.setVertexBuffer(uniformBuffer, offset: uniformBufferOffset, atIndex: 1)
        renderPass.setTriangleFillMode(.Lines)
        renderPass.drawIndexedPrimitives(.Triangle, indexCount: AHMaxIndicesPerChunk, indexType: AHIndexType, indexBuffer: node.indexBuffer, indexBufferOffset: 0)
        renderPass.endEncoding()
    }
    parallelRenderPass.endEncoding()
    commandBuffer.presentDrawable(drawable)
    commandBuffer.addCompletedHandler { (commandBuffer) -> Void in
        self.uniformBufferIndex = (self.uniformBufferIndex + 1) % AHInFlightBufferCount
        dispatch_semaphore_signal(self.displaySemaphore)
    }
    commandBuffer.commit()
}
You note:
I've read that the point of the MTLParallelRenderCommandEncoder is to create each MTLRenderCommandEncoder in a separate thread...
And you're correct. What you're doing is sequentially creating, encoding with, and ending command encoders — there's nothing parallel going on here, so MTLParallelRenderCommandEncoder is doing nothing for you. You'd have roughly the same performance if you eliminated the parallel encoder and just created encoders with renderCommandEncoderWithDescriptor(_:) on each pass through your for loop... which is to say, you'd still have the same performance problem due to the overhead of creating all those encoders.
So, if you're going to encode sequentially, just reuse the same encoder. Also, you should reuse as much of your other shared state as possible. Here's a quick pass at a possible refactoring (untested):
let passDescriptor = MTLRenderPassDescriptor()

// Call this once before your render loop
func setup() {
    makeDepthTexture()
    passDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.2, 0.2, 0.2, 1)
    passDescriptor.colorAttachments[0].storeAction = .Store
    passDescriptor.colorAttachments[0].loadAction = .Clear
    passDescriptor.depthAttachment.texture = depthTexture
    passDescriptor.depthAttachment.clearDepth = 1
    passDescriptor.depthAttachment.loadAction = .Clear
    passDescriptor.depthAttachment.storeAction = .Store
    // Set up render pipeline state and depth-stencil state
}

func drawNodes(nodes: [OctreeNode], inView view: AHMetalView) {
    updateUniformsForView(view, duration: view.frameDuration)
    // Set up the completed handler ahead of time
    let commandBuffer = commandQueue.commandBuffer()
    commandBuffer.addCompletedHandler { _ in // unused parameter
        self.uniformBufferIndex = (self.uniformBufferIndex + 1) % AHInFlightBufferCount
        dispatch_semaphore_signal(self.displaySemaphore)
    }
    // The semaphore should be tied to drawable acquisition
    dispatch_semaphore_wait(displaySemaphore, DISPATCH_TIME_FOREVER)
    guard let drawable = layer.nextDrawable()
        else { return }
    // Set up the one part of the pass descriptor that changes per frame
    passDescriptor.colorAttachments[0].texture = drawable.texture
    // Create one render command encoder and reuse it for every node
    let renderPass = commandBuffer.renderCommandEncoderWithDescriptor(passDescriptor)
    renderPass.setTriangleFillMode(.Lines)
    renderPass.setRenderPipelineState(renderPipelineState)
    renderPass.setDepthStencilState(depthStencilState)
    for node in nodes {
        // Update offsets and draw
        let uniformBufferOffset = sizeof(AHUniforms) * uniformBufferIndex
        renderPass.setVertexBuffer(node.vertexBuffer, offset: 0, atIndex: 0)
        renderPass.setVertexBuffer(uniformBuffer, offset: uniformBufferOffset, atIndex: 1)
        renderPass.drawIndexedPrimitives(.Triangle, indexCount: AHMaxIndicesPerChunk, indexType: AHIndexType, indexBuffer: node.indexBuffer, indexBufferOffset: 0)
    }
    renderPass.endEncoding()
    commandBuffer.presentDrawable(drawable)
    commandBuffer.commit()
}
Then, profile with Instruments to see what, if any, further performance issues you might have. There's a great WWDC 2015 session about that showing several of the common "gotchas", how to diagnose them in profiling, and how to fix them.