Adding a UV map to a Model I/O MDLMesh - Swift

I'm trying to generate a UV map for a mesh using Model I/O. The code runs on the simulator and generates a UV map for the input mesh, but when I run it on a device it crashes on
mdlMesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
with this error displayed multiple times on the console:
Can't choose for edge creation.
The fatal error that terminates the app is:
libc++abi: terminating with uncaught exception of type std::out_of_range: unordered_map::at: key not found
The code:
let asset = MDLAsset()
let allocator = MTKMeshBufferAllocator(device: device)
let zoneSize = MemoryLayout<Float>.size * 3 * mesh.vertices.count + MemoryLayout<UInt32>.size * indexCount
let zone = allocator.newZone(zoneSize)

let data = Data(bytes: vertexBuffer.contents(), count: MemoryLayout<Float>.size * 3 * mesh.vertices.count)
let vBuffer = allocator.newBuffer(from: zone, data: data, type: .vertex)!

let indexData = Data(bytes: indexBuffer.contents(), count: MemoryLayout<UInt32>.size * indexCount)
let iBuffer = allocator.newBuffer(from: zone, data: indexData, type: .index)!

let submesh = MDLSubmesh(indexBuffer: iBuffer,
                         indexCount: indexCount,
                         indexType: .uint32,
                         geometryType: .triangles,
                         material: nil)

let vDescriptor = MDLVertexDescriptor()
// Vertex positions
vDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                               format: .float3,
                                               offset: 0,
                                               bufferIndex: 0)
vDescriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<Float>.size * 3)

let mdlMesh = MDLMesh(vertexBuffer: vBuffer,
                      vertexCount: mesh.vertices.count,
                      descriptor: vDescriptor,
                      submeshes: [submesh])

mdlMesh.addAttribute(withName: MDLVertexAttributeTextureCoordinate, format: .float2)
mdlMesh.addUnwrappedTextureCoordinates(forAttributeNamed: MDLVertexAttributeTextureCoordinate)
asset.add(mdlMesh)
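For what it's worth, the std::out_of_range from an internal unordered_map suggests the unwrapper is choking on the mesh topology itself (out-of-range or degenerate indices) rather than on anything device-specific; the simulator may simply be more forgiving. A hedged sanity check one could run before the unwrap call, reusing indexBuffer, indexCount, and mesh from the code above:

// Hypothetical pre-flight check: confirm every index is in range and no
// triangle is degenerate before asking Model I/O to unwrap the mesh.
let indices = indexBuffer.contents().bindMemory(to: UInt32.self, capacity: indexCount)
let vertexCount = UInt32(mesh.vertices.count)
for base in stride(from: 0, to: indexCount, by: 3) {
    let (a, b, c) = (indices[base], indices[base + 1], indices[base + 2])
    // An out-of-range index would match the out_of_range exception.
    assert(a < vertexCount && b < vertexCount && c < vertexCount, "index out of range at \(base)")
    // Degenerate triangles (repeated vertices) can break edge creation.
    assert(a != b && b != c && a != c, "degenerate triangle at \(base)")
}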

Related

Converting AVAudioInputNode to S16LE PCM

I'm trying to convert the input node's format to S16LE. I've tried it with an AVAudioMixerNode.
First I create the audio session:
do {
    try audioSession.setCategory(.record)
    try audioSession.setActive(true)
} catch {
    ...
}

// Define formats
let inputNodeOutputFormat = audioEngine.inputNode.outputFormat(forBus: 0)
guard let wantedFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                       sampleRate: 16000,
                                       channels: 1,
                                       interleaved: false) else {
    return
}

// Create the mixer node and attach it to the engine
audioEngine.attach(mixerNode)

// Connect the input node to the mixer node, and the mixer node to mainMixerNode
audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputNodeOutputFormat)
audioEngine.connect(mixerNode, to: audioEngine.mainMixerNode, format: wantedFormat)

// Install the tap on the output of the mixerNode
mixerNode.installTap(onBus: 0, bufferSize: bufferSize, format: wantedFormat) { (buffer, time) in
    let theLength = Int(buffer.frameLength)
    var bufferData: [Int16] = []
    for i in 0 ..< theLength {
        let sample = Int16((buffer.int16ChannelData?.pointee[i])!)
        bufferData.append(sample)
    }
}
I get the following error.
Exception '[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"' was thrown
What part of the graph did I mess up?
You have to set the format of nodes to match the actual format of the data. Setting the node's format doesn't cause any conversions to happen, except that mixer nodes can convert sample rates (but not data formats). You'll need to use an AVAudioConverter in your tap to do the conversion.
As an example of what this code would look like, to handle arbitrary conversions:
let inputNode = audioEngine.inputNode
let inputFormat = inputNode.inputFormat(forBus: 0)
let outputFormat = AVAudioFormat(... define your format ...)

guard let converter = AVAudioConverter(from: inputFormat, to: outputFormat) else {
    throw ...some error...
}

inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { [weak self] (buffer, time) in
    let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
        outStatus.pointee = AVAudioConverterInputStatus.haveData
        return buffer
    }

    let targetFrameCapacity = AVAudioFrameCount(outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
    if let convertedBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: targetFrameCapacity) {
        var error: NSError?
        let status = converter.convert(to: convertedBuffer, error: &error, withInputFrom: inputBlock)
        assert(status != .error)

        let sampleCount = convertedBuffer.frameLength
        let rawData = convertedBuffer.int16ChannelData![0]
        // ... and here you have your data ...
    }
}
If you don't need to change the sample rate, and you're converting from uncompressed audio to uncompressed audio, you may be able to use the simpler convert(to:from:) method in your tap.
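A minimal sketch of that variant, assuming the same inputNode, inputFormat, outputFormat, and converter as above, and that both formats share a sample rate:

inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { (buffer, time) in
    // convert(to:from:) performs a one-shot uncompressed-to-uncompressed
    // conversion; it cannot resample.
    guard let convertedBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat,
                                                 frameCapacity: buffer.frameCapacity) else { return }
    do {
        try converter.convert(to: convertedBuffer, from: buffer)
        // convertedBuffer now holds the samples in the target format.
    } catch {
        print("Conversion failed: \(error)")
    }
}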
Since iOS 13, you can also do this with AVAudioSinkNode rather than a tap, which can be more convenient.
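A hedged sketch of the sink-node approach; the receiver block hands you a raw AudioBufferList in the input node's format, which you could wrap and convert just as in the tap:

// The receiver block runs on the realtime audio thread, so keep it light.
let sinkNode = AVAudioSinkNode { (timestamp, frameCount, audioBufferList) -> OSStatus in
    // Wrap the incoming AudioBufferList without copying.
    guard let inputBuffer = AVAudioPCMBuffer(pcmFormat: inputFormat,
                                             bufferListNoCopy: audioBufferList) else {
        return noErr
    }
    // ... convert or buffer inputBuffer as in the tap example ...
    return noErr
}
audioEngine.attach(sinkNode)
audioEngine.connect(audioEngine.inputNode, to: sinkNode, format: inputFormat)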

Get SCNGeometry from Model I/O

I am trying to import a mesh into an SCNGeometry. I want to manipulate the vertices individually on the CPU, so I am following this post: https://developer.apple.com/forums/thread/91618.
So far I have imported the mesh with the Model I/O framework and created an MTLBuffer.
let MDLPositionData = mesh?.vertexAttributeData(forAttributeNamed: "position", as: .float3)
let vertexBuffer1 = device.makeBuffer(bytes: MDLPositionData!.dataStart,
                                      length: MDLPositionData!.bufferSize,
                                      options: [.cpuCacheModeWriteCombined])

let vertexSource = SCNGeometrySource(buffer: vertexBuffer1!,
                                     vertexFormat: vertexFormat,
                                     semantic: SCNGeometrySource.Semantic.vertex,
                                     vertexCount: mesh!.vertexCount,
                                     dataOffset: 0,
                                     dataStride: MemoryLayout<vector_float3>.size)
The SCNGeometry needs index elements to properly show the mesh. Where do I get those?
I have tried to use the submeshes from Model I/O:
let submesh = mesh?.submeshes?[0]
let indexBuffer = (submesh as? MDLSubmesh)?.indexBuffer(asIndexType: .uInt32)
let indexBufferData = Data(bytes: indexBuffer!.map().bytes, count: indexBuffer!.length)

let indexElement = SCNGeometryElement(data: indexBufferData,
                                      primitiveType: SCNGeometryPrimitiveType.triangles,
                                      primitiveCount: indexBuffer!.length,
                                      bytesPerIndex: 32)

let geo = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])
But this throws the error
[SceneKit] Error: C3DMeshElementSetPrimitives invalid index buffer size
and renders the teapot with its vertices seemingly not connected properly.
How do I get the correct index data? Thank you!
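Two values in the SCNGeometryElement above look inconsistent with its documented units, which would explain the invalid-index-buffer-size error: bytesPerIndex is measured in bytes (4 for UInt32, not 32), and primitiveCount is the number of triangles, not the buffer length in bytes. A hedged correction, keeping the question's variable names:

// indexBuffer!.length is in bytes; each UInt32 index occupies 4 bytes,
// and each triangle consumes three indices.
let indexCount = indexBuffer!.length / MemoryLayout<UInt32>.size
let indexElement = SCNGeometryElement(data: indexBufferData,
                                      primitiveType: .triangles,
                                      primitiveCount: indexCount / 3,
                                      bytesPerIndex: MemoryLayout<UInt32>.size)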

Metal Command Buffer Internal Error: What is Internal Error (IOAF code 2067)?

Attempting to run a compute kernel results in the following message:
Execution of the command buffer was aborted due to an error during execution. Internal Error (IOAF code 2067)
To get more specific information I query the command encoder's user info and manage to extract more details. I followed instructions from this video to yield the following message:
[Metal Diagnostics] __message__: MTLCommandBuffer execution failed: The commands
associated with the encoder were affected by an error, which may or may not have been
caused by the commands themselves, and failed to execute in full __:::__
__delegate_identifier__: GPUToolsDiagnostics
The breakpoint triggered by API Validation and Shader Validation yields a regular stack frame rather than a GPU backtrace, and it does not reveal any information beyond the message above.
I cannot find any reference to the mentioned IOAF code in documentation. The additional information printed reveals nothing of assistance. The kernel is quite divergent and I am speculating that may be causing the GPU to take too much time to complete. That may be to blame but I have nothing supporting this apart from a gut feeling.
Here is the thread setup for the group:
let threadExecutionWidth = pipeline.threadExecutionWidth
let threadgroupsPerGrid = MTLSize(width: (Int(pixelCount) + threadExecutionWidth - 1) / threadExecutionWidth, height: 1, depth: 1)
let threadsPerThreadgroup = MTLSize(width: threadExecutionWidth, height: 1, depth: 1)
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
The GPU commands are being committed and waited upon for completion:
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
Here is my application-side code in its entirety:
import Metal
import Foundation
import simd

typealias Float4 = SIMD4<Float>

struct SimpleFileWriter {
    var fileHandle: FileHandle

    init(filePath: String, append: Bool = false) {
        if !FileManager.default.fileExists(atPath: filePath) {
            FileManager.default.createFile(atPath: filePath, contents: nil, attributes: nil)
        }
        fileHandle = FileHandle(forWritingAtPath: filePath)!
        if !append {
            fileHandle.truncateFile(atOffset: 0)
        }
    }

    func write(content: String) {
        fileHandle.seekToEndOfFile()
        guard let data = content.data(using: String.Encoding.ascii) else {
            fatalError("Could not convert \(content) to ascii data!")
        }
        fileHandle.write(data)
    }
}

var imageWidth = 480
var imageHeight = 270
var sampleCount = 16
var bounceCount = 3

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeDefaultLibrary(bundle: Bundle.module)
let primaryRayFunc = library.makeFunction(name: "ray_trace")!
let pipeline = try! device.makeComputePipelineState(function: primaryRayFunc)

var pixelData: [Float4] = (0..<(imageWidth * imageHeight)).map { _ in Float4(0, 0, 0, 0) }
var pixelCount = UInt(pixelData.count)

let pixelDataBuffer = device.makeBuffer(bytes: &pixelData, length: Int(pixelCount) * MemoryLayout<Float4>.stride, options: [])!
let pixelDataMirrorPointer = pixelDataBuffer.contents().bindMemory(to: Float4.self, capacity: Int(pixelCount))
let pixelDataMirrorBuffer = UnsafeBufferPointer(start: pixelDataMirrorPointer, count: Int(pixelCount))

let commandQueue = device.makeCommandQueue()!
let commandBufferDescriptor = MTLCommandBufferDescriptor()
commandBufferDescriptor.errorOptions = MTLCommandBufferErrorOption.encoderExecutionStatus
let commandBuffer = commandQueue.makeCommandBuffer(descriptor: commandBufferDescriptor)!

let commandEncoder = commandBuffer.makeComputeCommandEncoder()!
commandEncoder.setComputePipelineState(pipeline)
commandEncoder.setBuffer(pixelDataBuffer, offset: 0, index: 0)
commandEncoder.setBytes(&pixelCount, length: MemoryLayout<Int>.stride, index: 1)
commandEncoder.setBytes(&imageWidth, length: MemoryLayout<Int>.stride, index: 2)
commandEncoder.setBytes(&imageHeight, length: MemoryLayout<Int>.stride, index: 3)
commandEncoder.setBytes(&sampleCount, length: MemoryLayout<Int>.stride, index: 4)
commandEncoder.setBytes(&bounceCount, length: MemoryLayout<Int>.stride, index: 5)

// We have to calculate the sum `pixelCount` times
// => amount of threadgroups is `resultsCount` / `threadExecutionWidth` (rounded up)
// because each threadgroup will process `threadExecutionWidth` threads
let threadExecutionWidth = pipeline.threadExecutionWidth
let threadgroupsPerGrid = MTLSize(width: (Int(pixelCount) + threadExecutionWidth - 1) / threadExecutionWidth, height: 1, depth: 1)

// Here we set that each threadgroup should process `threadExecutionWidth` threads
// the only important thing for performance is that this number is a multiple of
// `threadExecutionWidth` (here 1 times)
let threadsPerThreadgroup = MTLSize(width: threadExecutionWidth, height: 1, depth: 1)
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
commandEncoder.endEncoding()

commandBuffer.commit()
commandBuffer.waitUntilCompleted()

if let error = commandBuffer.error as NSError? {
    if let encoderInfo = error.userInfo[MTLCommandBufferEncoderInfoErrorKey] as? [MTLCommandBufferEncoderInfo] {
        for info in encoderInfo {
            print(info.label + info.debugSignposts.joined())
        }
    }
}

let sfw = SimpleFileWriter(filePath: "/Users/pprovins/Desktop/render.ppm")
sfw.write(content: "P3\n")
sfw.write(content: "\(imageWidth) \(imageHeight)\n")
sfw.write(content: "255\n")

for pixel in pixelDataMirrorBuffer {
    sfw.write(content: "\(UInt8(pixel.x * 255)) \(UInt8(pixel.y * 255)) \(UInt8(pixel.z * 255)) ")
}
sfw.write(content: "\n")
Additionally, here is the shader being run. I have not included all function definitions for brevity's sake:
kernel void ray_trace(device float4 *result [[ buffer(0) ]],
                      const device uint& dataLength [[ buffer(1) ]],
                      const device int& imageWidth [[ buffer(2) ]],
                      const device int& imageHeight [[ buffer(3) ]],
                      const device int& samplesPerPixel [[ buffer(4) ]],
                      const device int& rayBounces [[ buffer(5) ]],
                      const uint index [[ thread_position_in_grid ]]) {
    if (index >= dataLength) {
        return;
    }

    const float3 origin = float3(0.0);
    const float aspect = float(imageWidth) / float(imageHeight);
    const float3 vph = float3(0.0, 2.0, 0.0);
    const float3 vpw = float3(2.0 * aspect, 0.0, 0.0);
    const float3 llc = float3(-(vph / 2.0) - (vpw / 2.0) - float3(0.0, 0.0, 1.0));

    float3 accumulatedColor = float3(0.0);
    thread float seed = getSeed(index, index % imageWidth, index / imageWidth);
    float row = float(index / imageWidth);
    float col = float(index % imageWidth);

    for (int aai = 0; aai < samplesPerPixel; ++aai) {
        float ranX = fract(rand(seed));
        float ranY = fract(rand(seed));
        float u = (col + ranX) / float(imageWidth - 1);
        float v = 1.0 - (row + ranY) / float(imageHeight - 1);

        Ray r(origin, llc + u * vpw + v * vph - origin);
        float3 color = float3(0.0);
        HitRecord hr = {0.0, 0.0, false};
        float attenuation = 1.0;

        for (int bounceIndex = 0; bounceIndex < rayBounces; ++bounceIndex) {
            testForHit(sceneDistance, r, hr);
            if (hr.h) {
                float3 target = hr.p + hr.n + random_f3_in_unit_sphere(seed);
                attenuation *= 0.5;
                r = Ray(hr.p, target - hr.p);
            } else {
                color = default_atmosphere_color(r) * attenuation;
                break;
            }
        }
        accumulatedColor += color / samplesPerPixel;
    }
    result[index] = float4(sqrt(accumulatedColor), 1.0);
}
Oddly enough, it occasionally runs. Setting the number of samples to 16 or above always results in the mentioned IOAF code; with fewer than 16 samples, the code runs roughly 25% of the time. The more samples, the more likely the error code.
Is there any way to get additional information on IOAF code 2067?
Determining the error code with Metal API + Shader Validation was not possible.
By testing individual portions of the kernel, the particular error was narrowed down to a while loop that caused the GPU to hang.
The problem can essentially be boiled down to code that looks like:
while (true) {
    // ad infinitum
}

or, in the case of the code above, in the call to random_f3_in_unit_sphere(seed):

while (randNum(seed) < threshold) {
    // the while loop is not "bounded"
    // in any sense. Whoops.
    ++seed;
}
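A hedged sketch of the kind of fix this implies, written in the same Metal Shading Language as the kernel above: cap the rejection-sampling loop so it terminates even when the random sequence misbehaves. The helper below is hypothetical, modeled on the random_f3_in_unit_sphere referenced in the question, and assumes its rand(seed) helper:

// Bounded variant of rejection sampling in the unit sphere: give up
// after a fixed number of attempts so the GPU cannot hang.
float3 bounded_random_f3_in_unit_sphere(thread float &seed) {
    for (int attempt = 0; attempt < 16; ++attempt) {
        float3 p = 2.0 * float3(rand(seed), rand(seed), rand(seed)) - float3(1.0);
        if (length_squared(p) < 1.0) {
            return p;
        }
    }
    // Fallback: a normalized direction instead of looping forever.
    return normalize(float3(rand(seed), rand(seed), rand(seed)) - float3(0.5));
}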

Save ARFaceGeometry to OBJ file

In an iOS ARKit app, I've been trying to save the ARFaceGeometry data to an OBJ file. I followed the explanation here: How to make a 3D model from AVDepthData?. However, the OBJ isn't created correctly. Here's what I have:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    currentFaceAnchor = faceAnchor

    // If this is the first time with this anchor, get the controller to create content.
    // Otherwise (switching content), will change content when setting `selectedVirtualContent`.
    if node.childNodes.isEmpty, let contentNode = selectedContentController.renderer(renderer, nodeFor: faceAnchor) {
        node.addChildNode(contentNode)
    }

    // https://stackoverflow.com/questions/52953590/how-to-make-a-3d-model-from-avdepthdata
    let geometry = faceAnchor.geometry
    let allocator = MDLMeshBufferDataAllocator()

    let vertices = allocator.newBuffer(with: Data(fromArray: geometry.vertices), type: .vertex)
    let textureCoordinates = allocator.newBuffer(with: Data(fromArray: geometry.textureCoordinates), type: .vertex)
    let triangleIndices = allocator.newBuffer(with: Data(fromArray: geometry.triangleIndices), type: .index)

    let submesh = MDLSubmesh(indexBuffer: triangleIndices,
                             indexCount: geometry.triangleIndices.count,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: MDLMaterial(name: "mat1", scatteringFunction: MDLPhysicallyPlausibleScatteringFunction()))

    let vertexDescriptor = MDLVertexDescriptor()
    // Attributes
    vertexDescriptor.addOrReplaceAttribute(MDLVertexAttribute(name: MDLVertexAttributePosition, format: .float3, offset: 0, bufferIndex: 0))
    vertexDescriptor.addOrReplaceAttribute(MDLVertexAttribute(name: MDLVertexAttributeNormal, format: .float3, offset: MemoryLayout<float3>.stride, bufferIndex: 0))
    vertexDescriptor.addOrReplaceAttribute(MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate, format: .float2, offset: MemoryLayout<float3>.stride + MemoryLayout<float3>.stride, bufferIndex: 0))
    // Layouts
    vertexDescriptor.layouts.add(MDLVertexBufferLayout(stride: MemoryLayout<float3>.stride + MemoryLayout<float3>.stride + MemoryLayout<float2>.stride))

    let mdlMesh = MDLMesh(vertexBuffers: [vertices, textureCoordinates],
                          vertexCount: geometry.vertices.count,
                          descriptor: vertexDescriptor,
                          submeshes: [submesh])
    mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)

    let asset = MDLAsset(bufferAllocator: allocator)
    asset.add(mdlMesh)

    let documentsPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    let exportUrl = documentsPath.appendingPathComponent("face.obj")
    try! asset.export(to: exportUrl)
}
The resulting OBJ file looks like this:
# Apple ModelIO OBJ File: face
mtllib face.mtl
g
v -0.000128156 -0.0277879 0.0575149
vn 0 0 0
vt -9.36008e-05 -0.0242016
usemtl material_1
f 1/1/1 1/1/1 1/1/1
f 1/1/1 1/1/1 1/1/1
f 1/1/1 1/1/1 1/1/1
... and many more lines
I would expect many more vertices, and the index values look wrong.
The core issue is that your vertex data isn't described correctly. When you provide a vertex descriptor to Model I/O while constructing a mesh, it represents the layout the data actually has, not your desired layout. You're supplying two vertex buffers, but your vertex descriptor describes an interleaved data layout with only one vertex buffer.
The easiest way to remedy this is to fix the vertex descriptor to reflect the data you're providing:
let vertexDescriptor = MDLVertexDescriptor()
// Attributes
vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                    format: .float3,
                                                    offset: 0,
                                                    bufferIndex: 0)
vertexDescriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                    format: .float2,
                                                    offset: 0,
                                                    bufferIndex: 1)
// Layouts
vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<float3>.stride)
vertexDescriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<float2>.stride)
When you later call addNormals(...), Model I/O will allocate the necessary space and update the vertex descriptor to reflect the new data. Since you're not rendering from the data and are instead immediately exporting it, the internal layout it chooses for the normals isn't important.

Get all sound frequencies of a WAV file using Swift and AVFoundation

I would like to capture all frequencies between given timespans in a WAV file. The intent is to do some audio analysis in a later step. For testing, I've used the application "sox" to generate a 1-second WAV file containing only a single tone at 13000 Hz. I want to read the file and find that frequency.
I'm using AVFoundation (which is important) to read the file. Since the input data is PCM, I need an FFT to get the actual frequencies, which I do using the Accelerate framework. However, I don't get the expected result (13000 Hz), but rather a lot of values I don't understand. I'm new to audio development, so any hint about where my code is failing is appreciated. The code includes a few comments where the issue occurs.
Thanks in advance!
Code:
import AVFoundation
import Accelerate

class Analyzer {
    // This function is implemented using the code from the following tutorial:
    // https://developer.apple.com/documentation/accelerate/vdsp/fast_fourier_transforms/finding_the_component_frequencies_in_a_composite_sine_wave
    func fftTransform(signal: [Float], n: vDSP_Length) -> [Int] {
        let observed: [DSPComplex] = stride(from: 0, to: Int(n), by: 2).map {
            return DSPComplex(real: signal[$0],
                              imag: signal[$0.advanced(by: 1)])
        }

        let halfN = Int(n / 2)
        var forwardInputReal = [Float](repeating: 0, count: halfN)
        var forwardInputImag = [Float](repeating: 0, count: halfN)
        var forwardInput = DSPSplitComplex(realp: &forwardInputReal,
                                           imagp: &forwardInputImag)

        vDSP_ctoz(observed, 2,
                  &forwardInput, 1,
                  vDSP_Length(halfN))

        let log2n = vDSP_Length(log2(Float(n)))
        guard let fftSetUp = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else {
            fatalError("Can't create FFT setup.")
        }
        defer {
            vDSP_destroy_fftsetup(fftSetUp)
        }

        var forwardOutputReal = [Float](repeating: 0, count: halfN)
        var forwardOutputImag = [Float](repeating: 0, count: halfN)
        var forwardOutput = DSPSplitComplex(realp: &forwardOutputReal,
                                            imagp: &forwardOutputImag)

        vDSP_fft_zrop(fftSetUp,
                      &forwardInput, 1,
                      &forwardOutput, 1,
                      log2n,
                      FFTDirection(kFFTDirection_Forward))

        let componentFrequencies = forwardOutputImag.enumerated().filter {
            $0.element < -1
        }.map {
            return $0.offset
        }
        return componentFrequencies
    }

    func run() {
        // The frequencies array is an array of frequencies which is then converted to points on sine curves (signal)
        let n = vDSP_Length(4 * 4096)
        let frequencies: [Float] = [1, 5, 25, 30, 75, 100, 300, 500, 512, 1023]

        let tau: Float = .pi * 2
        let signal: [Float] = (0 ... n).map { index in
            frequencies.reduce(0) { accumulator, frequency in
                let normalizedIndex = Float(index) / Float(n)
                return accumulator + sin(normalizedIndex * frequency * tau)
            }
        }

        // These signals are then restored using the fftTransform function above, giving the exact same values as in the "frequencies" variable
        let frequenciesRestored = fftTransform(signal: signal, n: n).map({ Float($0) })
        assert(frequenciesRestored == frequencies)

        // Now I want to do the same thing, but reading the frequencies from a file (which includes a constant tone at 13000 Hz)
        let file = { PATH TO A WAV-FILE WITH A SINGLE TONE AT 13000Hz RUNNING FOR 1 SECOND }
        let asset = AVURLAsset(url: URL(fileURLWithPath: file))
        let track = asset.tracks[0]

        do {
            let reader = try AVAssetReader(asset: asset)
            let sampleRate = 48000.0
            let outputSettingsDict: [String: Any] = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: Int(sampleRate),
                AVLinearPCMIsNonInterleaved: false,
                AVLinearPCMBitDepthKey: 16,
                AVLinearPCMIsFloatKey: false,
                AVLinearPCMIsBigEndianKey: false,
            ]

            let output = AVAssetReaderTrackOutput(track: track, outputSettings: outputSettingsDict)
            output.alwaysCopiesSampleData = false
            reader.add(output)
            reader.startReading()

            typealias audioBuffertType = Int16

            autoreleasepool {
                while reader.status == .reading {
                    if let sampleBuffer = output.copyNextSampleBuffer() {
                        var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
                        var blockBuffer: CMBlockBuffer?

                        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                            sampleBuffer,
                            bufferListSizeNeededOut: nil,
                            bufferListOut: &audioBufferList,
                            bufferListSize: MemoryLayout<AudioBufferList>.size,
                            blockBufferAllocator: nil,
                            blockBufferMemoryAllocator: nil,
                            flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                            blockBufferOut: &blockBuffer
                        )

                        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))

                        for buffer in buffers {
                            let samplesCount = Int(buffer.mDataByteSize) / MemoryLayout<audioBuffertType>.size
                            let samplesPointer = audioBufferList.mBuffers.mData!.bindMemory(to: audioBuffertType.self, capacity: samplesCount)
                            let samples = UnsafeMutableBufferPointer<audioBuffertType>(start: samplesPointer, count: samplesCount)

                            let myValues: [Float] = samples.map {
                                let value = Float($0)
                                return value
                            }

                            // Here I would expect my array to include multiple "13000" which is the frequency of the tone in my file
                            // I'm not sure what the variable 'n' does in this case, but changing it seems to change the result.
                            // The value should be twice as high as the highest measurable frequency (Nyquist frequency) (13000),
                            // but this crashes the application:
                            let mySignals = fftTransform(signal: myValues, n: vDSP_Length(2 * 13000))
                            assert(mySignals[0] == 13000)
                        }
                    }
                }
            }
        } catch {
            print("error!")
        }
    }
}
The test clip can be generated using:
sox -G -n -r 48000 ~/outputfile.wav synth 1.0 sine 13000
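A hedged note on why the synthetic test works but the file does not: fftTransform returns bin indices, and a bin index only equals a frequency in Hz when the n input samples span exactly one second. For file data at 48 kHz, bin k corresponds to k * sampleRate / n Hz; n must also be a power of two no larger than the available sample count, and 2 * 13000 is neither, hence the crash. A sketch of the conversion, reusing the question's myValues, sampleRate, and fftTransform:

// n must be a power of two and no larger than myValues.count.
let n = vDSP_Length(16384)
let binIndices = fftTransform(signal: Array(myValues.prefix(Int(n))), n: n)

// Each bin spans sampleRate / n Hz, so scale indices to frequencies.
let binWidth = Float(sampleRate) / Float(n)
let detectedFrequencies = binIndices.map { Float($0) * binWidth }
// A 13000 Hz tone should then peak near bin 13000 / binWidth ≈ 4437.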