How to use Spotify iOS SDK attemptToDeliverAudioFrames:ofCount:streamDescription function? - swift

I am trying to perform some processing on Spotify's audio stream, based on this. I have subclassed SPTCoreAudioController.
It seems the pointer Spotify passes into the overridden function points to 16-bit integer samples. I have tried to create an AVAudioPCMBuffer based on audioFrames and audioDescription and pass it to playerNode. The player node (a node in my AVAudioEngine) works properly if I schedule an audio file instead.
override func attempt(toDeliverAudioFrames audioFrames: UnsafeRawPointer!, ofCount frameCount: Int, streamDescription audioDescription: AudioStreamBasicDescription) -> Int {
    let ptr = audioFrames.bindMemory(to: Int16.self, capacity: frameCount)
    let framePtr = UnsafeBufferPointer(start: ptr, count: frameCount)
    let frames = Array(framePtr)
    var newAudioDescription = audioDescription
    let audioFormat = AVAudioFormat(streamDescription: &newAudioDescription)!
    let audioPCMBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: AVAudioFrameCount(frameCount))!
    audioPCMBuffer.frameLength = audioPCMBuffer.frameCapacity
    let channelCount = Int(audioDescription.mChannelsPerFrame)
    if let int16ChannelData = audioPCMBuffer.int16ChannelData {
        for channel in 0..<channelCount {
            for sampleIndex in 0..<frameCount {
                int16ChannelData[channel][sampleIndex] = frames[sampleIndex]
            }
        }
    }
    didReceive(pcmBuffer: audioPCMBuffer)
    return super.attempt(toDeliverAudioFrames: audioFrames, ofCount: frameCount, streamDescription: audioDescription)
}

func didReceive(pcmBuffer: AVAudioPCMBuffer) {
    playerNode.scheduleBuffer(pcmBuffer) {
    }
}
I get this error: AURemoteIO::IOThread (19): EXC_BAD_ACCESS (code=1, address=0x92e370f25cc0)
I think the underlying data is freed before I finish copying it into the PCM buffer.
I was wondering if someone knows the proper way of using the attemptToDeliverAudioFrames:ofCount:streamDescription: function?
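For what it's worth, two things can bite here: if the stream is interleaved stereo, frameCount frames correspond to frameCount * channelCount Int16 samples (so the copy above reads the wrong samples), and any buffer that still references Spotify's memory after the callback returns can crash exactly like this. A minimal sketch of a deep, de-interleaving copy (assuming the stream really is interleaved signed 16-bit PCM; check audioDescription.mFormatFlags first, and note makePCMBuffer is a hypothetical helper):

```swift
import AVFoundation

// Sketch: deep-copy interleaved Int16 samples into a de-interleaved
// AVAudioPCMBuffer before the callback returns. Assumes interleaved
// signed 16-bit PCM input.
func makePCMBuffer(from audioFrames: UnsafeRawPointer,
                   frameCount: Int,
                   audioDescription: AudioStreamBasicDescription) -> AVAudioPCMBuffer? {
    let channelCount = Int(audioDescription.mChannelsPerFrame)
    guard let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                     sampleRate: audioDescription.mSampleRate,
                                     channels: audioDescription.mChannelsPerFrame,
                                     interleaved: false),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(frameCount)),
          let channelData = buffer.int16ChannelData else { return nil }
    buffer.frameLength = AVAudioFrameCount(frameCount)
    let samples = audioFrames.bindMemory(to: Int16.self,
                                         capacity: frameCount * channelCount)
    for channel in 0..<channelCount {
        for frame in 0..<frameCount {
            // Interleaved layout is L R L R ..., so this channel's sample for
            // a given frame sits at frame * channelCount + channel.
            channelData[channel][frame] = samples[frame * channelCount + channel]
        }
    }
    return buffer // owns its own memory, so it is safe to schedule later
}
```

The returned buffer owns its own storage, so scheduling it on the player node after the delivery callback has returned should be safe.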

Related

How to route audio to default speakers in Swift for macOS?

I have a function playing audio for a macOS SwiftUI app, but I want it to play the sound through the default built-in speakers every single time. Does anyone know of a reliable method for this?
I've researched a lot but haven't found a solid method for macOS. This is what I've tried:
AVRoutePickerView
This is only available for iOS and Mac Catalyst, not macOS.
Getting Device ID in AVAudioEngine
I found this code snippet, but it assumes that the built-in speaker device ID stays the same, which it doesn't, so that doesn't help.
engine = AVAudioEngine()
let output = engine.outputNode
// get the low-level output audio unit from the engine:
let outputUnit = output.audioUnit!
// use a low-level Core Audio call to set the output device:
var outputDeviceID: AudioDeviceID = 51 // replace with actual, dynamic value
AudioUnitSetProperty(outputUnit,
                     kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global,
                     0,
                     &outputDeviceID,
                     UInt32(MemoryLayout<AudioDeviceID>.size))
Disabling Bluetooth so the audio only goes through the main speakers and not a Bluetooth speaker. This didn't seem like the best approach, so I haven't tested it.
The following is the code I have for playing sound:
func playTheSound() {
    let url = Bundle.main.url(forResource: "Blow", withExtension: "mp3")
    player = try! AVAudioPlayer(contentsOf: url!)
    player?.play()
    print("Sound was played")
}
So, any recommendations on how to route the audio to main speakers for macOS?
By "default built-in" I assume you actually just mean "built-in." The default speakers are the ones the audio will route to already.
The simplest solution to this that will probably always work is to route to the UID "BuiltInSpeakerDevice". For example, this does what you want:
let player = AVPlayer()

func playTheSound() {
    let url = URL(filePath: "/System/Library/Sounds/Blow.aiff")
    let item = AVPlayerItem(url: url)
    player.replaceCurrentItem(with: item)
    player.audioOutputDeviceUniqueID = "BuiltInSpeakerDevice"
    player.play()
}
Note the use of AVPlayer and audioOutputDeviceUniqueID here. I'm betting this will work in approximately 100% of cases. It should even "work" if there were no built-in speakers, in that this silently fails (without crashing) if the UID doesn't exist.
But...sigh...I can't find anywhere that this is documented or any system constant for this string. And I really hate magic, undocumented strings. So, let's do it right. Besides, if we do it right, it'll work with AVAudioEngine, too. So let's get there.
First, you should always take a look at the invaluable CoreAudio output device useful methods in Swift 4. I don't know if anyone has turned this into a real framework, but this is a treasure trove of examples. The following code is a modernized version of that.
struct AudioDevice {
    let id: AudioDeviceID

    static func getAll() -> [AudioDevice] {
        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDevices,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)

        // Get the size of the buffer needed for the device list
        var devicesBufferSize: UInt32 = 0
        AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject), &propertyAddress,
                                       0, nil,
                                       &devicesBufferSize)
        let devicesCount = Int(devicesBufferSize) / MemoryLayout<AudioDeviceID>.stride

        // Get the device list
        let devices = Array<AudioDeviceID>(unsafeUninitializedCapacity: devicesCount) { buffer, initializedCount in
            AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress,
                                       0, nil,
                                       &devicesBufferSize, buffer.baseAddress!)
            initializedCount = devicesCount
        }
        return devices.map(Self.init)
    }

    var hasOutputStreams: Bool {
        var propertySize: UInt32 = 256
        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyStreams,
            mScope: kAudioDevicePropertyScopeOutput,
            mElement: kAudioObjectPropertyElementMain)
        AudioObjectGetPropertyDataSize(id, &propertyAddress, 0, nil, &propertySize)
        return propertySize > 0
    }

    var isBuiltIn: Bool {
        transportType == kAudioDeviceTransportTypeBuiltIn
    }

    var transportType: AudioDevicePropertyID {
        var deviceTransportType = AudioDevicePropertyID()
        var propertySize = UInt32(MemoryLayout<AudioDevicePropertyID>.size)
        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyTransportType,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        AudioObjectGetPropertyData(id, &propertyAddress,
                                   0, nil, &propertySize,
                                   &deviceTransportType)
        return deviceTransportType
    }

    var uid: String {
        var propertySize = UInt32(MemoryLayout<CFString>.size)
        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyDeviceUID,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var result: CFString = "" as CFString
        AudioObjectGetPropertyData(id, &propertyAddress, 0, nil, &propertySize, &result)
        return result as String
    }
}
And with that in place, you can fetch the first built-in output device:
player.audioOutputDeviceUniqueID = AudioDevice.getAll()
    .first(where: { $0.hasOutputStreams && $0.isBuiltIn })?
    .uid
Or you can use your AVAudioEngine approach if you want more control (note difference between uid and id here):
let player = AVAudioPlayerNode()
let engine = AVAudioEngine()

func playTheSound() {
    let output = engine.outputNode
    let outputUnit = output.audioUnit!
    var outputDeviceID = AudioDevice.getAll()
        .first(where: { $0.hasOutputStreams && $0.isBuiltIn })!
        .id
    AudioUnitSetProperty(outputUnit,
                         kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global,
                         0,
                         &outputDeviceID,
                         UInt32(MemoryLayout<AudioDeviceID>.size))

    engine.attach(player)
    engine.connect(player, to: engine.outputNode, format: nil)
    try! engine.start()

    let url = URL(filePath: "/System/Library/Sounds/Blow.aiff")
    let file = try! AVAudioFile(forReading: url)
    player.scheduleFile(file, at: nil)
    player.play()
}

Export a video with dynamic text per frame in Swift AVFoundation

I fetch the timestamps from every frame and store them in an array using the showTimestamps function. I now want to "draw" each timestamp on each frame of the video, and export it.
func showTimestamps(videoFile: URL) -> [String] {
    let asset = AVAsset(url: videoFile)
    let track = asset.tracks(withMediaType: AVMediaType.video)[0]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    guard let reader = try? AVAssetReader(asset: asset) else { exit(1) }
    output.alwaysCopiesSampleData = false
    reader.add(output)
    reader.startReading()

    var times: [String] = []
    while reader.status == .reading {
        if let sampleBuffer = output.copyNextSampleBuffer(),
           CMSampleBufferIsValid(sampleBuffer) && CMSampleBufferGetTotalSampleSize(sampleBuffer) != 0 {
            let frameTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer)
            if frameTime.isValid {
                times.append(String(format: "%.3f", frameTime.seconds))
            }
        }
    }
    return times.sorted()
}
However, I cannot figure out how to export a new video with each frame showing its respective timestamp. i.e. how can I implement this code:
func generateNewVideoWithTimestamps(videoFile: URL, timestampsForFrames: [String]) {
    // TODO
}
I want to keep the framerate, video quality, etc., the same. The only thing that should differ is to add some text on the bottom.
To get this far, I used these guides and failed: Frames, Static Text, Watermark
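Not a full answer, but here is a sketch of one way to implement generateNewVideoWithTimestamps: AVMutableVideoComposition's per-frame Core Image handler can rasterize a timestamp string with the CITextImageGenerator filter and composite it over each frame. This derives the text from the request's compositionTime rather than the precomputed array; the outputURL parameter, font settings, and overlay position are assumptions to adjust.

```swift
import AVFoundation
import CoreImage

// Sketch: stamp each frame with its presentation time and export.
// The composition preserves the source framerate; the export preset
// governs the output quality.
func generateNewVideoWithTimestamps(videoFile: URL, outputURL: URL,
                                    completion: @escaping (Error?) -> Void) {
    let asset = AVAsset(url: videoFile)
    let composition = AVMutableVideoComposition(asset: asset) { request in
        // Render the timestamp text for this frame.
        let text = String(format: "%.3f", request.compositionTime.seconds)
        let textFilter = CIFilter(name: "CITextImageGenerator", parameters: [
            "inputText": text,
            "inputFontName": "Helvetica",
            "inputFontSize": 36,
            "inputScaleFactor": 1,
        ])!
        // Composite the rendered text near the bottom-left of the frame.
        let overlay = textFilter.outputImage!
            .transformed(by: CGAffineTransform(translationX: 20, y: 20))
        request.finish(with: overlay.composited(over: request.sourceImage), context: nil)
    }
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        return
    }
    export.videoComposition = composition
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously { completion(export.error) }
}
```

If you need the exact strings from your timestampsForFrames array instead, you would map compositionTime back to a frame index using the track's nominalFrameRate.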

Stream Microphone Audio from one device to another using Multipeer connectivy and EZAudio

[TLDR: Receiving an ASSERTION FAILURE on CABufferList.h (find error at the bottom) when trying to save streamed audio data]
I am having trouble saving microphone audio that is streamed between devices using Multipeer Connectivity. So far I have two devices connected to each other using Multipeer Connectivity and have them sending messages and streams to each other.
Finally I have the StreamDelegate method
func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
    // create a buffer for capturing the input stream data
    let bufferSize = 2048
    let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferSize)
    defer {
        buffer.deallocate()
    }
    var audioBuffer: AudioBuffer!
    var audioBufferList: AudioBufferList!
    switch eventCode {
    case .hasBytesAvailable:
        // the input stream has bytes available;
        // read returns the actual number of bytes placed in the buffer
        let read = self.inputStream.read(buffer, maxLength: bufferSize)
        if read < 0 {
            // stream error occurred
            print(self.inputStream.streamError!)
        } else if read == 0 {
            // EOF
            break
        }
        guard let mData = UnsafeMutableRawPointer(buffer) else { return }
        audioBuffer = AudioBuffer(mNumberChannels: 1, mDataByteSize: UInt32(read), mData: mData)
        audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: audioBuffer)
        let audioBufferListPointer = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: read)
        audioBufferListPointer.pointee = audioBufferList
        DispatchQueue.main.async {
            if self.ezRecorder == nil {
                self.recordAudio()
            }
            self.ezRecorder?.appendData(from: audioBufferListPointer, withBufferSize: UInt32(read))
        }
        print("hasBytesAvailable \(audioBuffer!)")
    case .endEncountered:
        print("endEncountered")
        if self.inputStream != nil {
            self.inputStream.delegate = nil
            self.inputStream.remove(from: .current, forMode: .default)
            self.inputStream.close()
            self.inputStream = nil
        }
    case .errorOccurred:
        print("errorOccurred")
    case .hasSpaceAvailable:
        print("hasSpaceAvailable")
    case .openCompleted:
        print("openCompleted")
    default:
        break
    }
}
I am getting the stream of data however when I try to save it as an audio file using EZRecorder, I get the following error message
[default] CABufferList.h:184 ASSERTION FAILURE [(nBytes <= buf->mDataByteSize) != 0 is false]:
I suspect the error could be arising when I create AudioStreamBasicDescription for EZRecorder.
I understand there may be other errors here and I appreciate any suggestions to solve the bug and improve the code. Thanks
EZAudio comes with TPCircularBuffer – use that.
Because writing the buffer to a file is an async operation, this is a great use case for a circular buffer, where we have one producer and one consumer.
Use the EZAudioUtilities where possible.
Update: EZRecorder's write expects bufferSize to be the number of frames to write, not bytes.
So something like this should work:
class StreamDelegateInstance: NSObject {
    private static let MaxReadSize = 2048
    private static let BufferSize = MaxReadSize * 4

    private var availableReadBytesPtr = UnsafeMutablePointer<Int32>.allocate(capacity: 1)
    private var availableWriteBytesPtr = UnsafeMutablePointer<Int32>.allocate(capacity: 1)
    private var ezRecorder: EZRecorder?
    private var buffer = UnsafeMutablePointer<TPCircularBuffer>.allocate(capacity: 1)
    private var inputStream: InputStream?

    init(inputStream: InputStream? = nil) {
        self.inputStream = inputStream
        super.init()
        EZAudioUtilities.circularBuffer(buffer, withSize: Int32(StreamDelegateInstance.BufferSize))
        ensureWriteStream()
    }

    deinit {
        EZAudioUtilities.freeCircularBuffer(buffer)
        buffer.deallocate()
        availableReadBytesPtr.deallocate()
        availableWriteBytesPtr.deallocate()
        self.ezRecorder?.closeAudioFile()
        self.ezRecorder = nil
    }

    private func ensureWriteStream() {
        guard self.ezRecorder == nil else { return }
        // stores audio in the temporary folder
        let audioOutputPath = NSTemporaryDirectory() + "audioOutput2.aiff"
        let audioOutputURL = URL(fileURLWithPath: audioOutputPath)
        print(audioOutputURL)
        // let audioStreamBasicDescription = AudioStreamBasicDescription(mSampleRate: 44100.0, mFormatID: kAudioFormatLinearPCM, mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked, mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4, mChannelsPerFrame: 1, mBitsPerChannel: 32, mReserved: 1081729024)
        // EZAudioUtilities.audioBufferList(withNumberOfFrames: <#T##UInt32#>,
        //                                 numberOfChannels: 1,
        //                                 interleaved: true)
        // if you don't need a custom format, consider using EZAudioUtilities.m4AFormat
        let format = EZAudioUtilities.aiffFormat(withNumberOfChannels: 1,
                                                sampleRate: 44800)
        self.ezRecorder = EZRecorder.init(url: audioOutputURL,
                                          clientFormat: format,
                                          fileType: .AIFF)
    }

    private func writeStream() {
        let ptr = TPCircularBufferTail(buffer, availableWriteBytesPtr)
        // ensure we have a non-zero number of bytes to write – which should
        // always be true here, but you may want to refactor things
        guard availableWriteBytesPtr.pointee > 0 else { return }
        let framesToWrite = availableWriteBytesPtr.pointee / 4 // sizeof(float)
        let audioBuffer = AudioBuffer(mNumberChannels: 1,
                                      mDataByteSize: UInt32(availableWriteBytesPtr.pointee),
                                      mData: ptr)
        var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: audioBuffer)
        self.ezRecorder?.appendData(from: &audioBufferList,
                                    withBufferSize: UInt32(framesToWrite))
        TPCircularBufferConsume(buffer, framesToWrite * 4)
    }
}
extension StreamDelegateInstance: StreamDelegate {
    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        switch eventCode {
        case .hasBytesAvailable:
            // the input stream has bytes available;
            // read returns the actual number of bytes placed in the buffer
            guard let ptr = TPCircularBufferHead(buffer, availableReadBytesPtr) else {
                print("couldn't get buffer ptr")
                break
            }
            let bytesToRead = min(Int(availableReadBytesPtr.pointee), StreamDelegateInstance.MaxReadSize)
            let mutablePtr = ptr.bindMemory(to: UInt8.self, capacity: bytesToRead)
            let bytesRead = self.inputStream?.read(mutablePtr,
                                                   maxLength: bytesToRead) ?? 0
            if bytesRead < 0 {
                // stream error occurred
                print(self.inputStream?.streamError! ?? "No bytes read")
                break
            } else if bytesRead == 0 {
                // EOF
                break
            }
            TPCircularBufferProduce(buffer, Int32(bytesRead))
            DispatchQueue.main.async { [weak self] in
                self?.writeStream()
            }
        case .endEncountered:
            print("endEncountered")
            if self.inputStream != nil {
                self.inputStream?.delegate = nil
                self.inputStream?.remove(from: .current, forMode: .default)
                self.inputStream?.close()
                self.inputStream = nil
            }
        case .errorOccurred:
            print("errorOccurred")
        case .hasSpaceAvailable:
            print("hasSpaceAvailable")
        case .openCompleted:
            print("openCompleted")
        default:
            break
        }
    }
}

My videos have a naturalSize of only (4.0, 3.0) pixels, which is also the extracted frame size

Context
I'm dealing with video files that are 1280x920, that's their actual pixel size when displayed in QuickTime, or even played in my AVPlayer.
I have a bunch of videos in a folder and I need to stick them together on a AVMutableComposition and play it.
I also need, for each video, to extract the last frame.
What I did so far was use AVAssetImageGenerator on each of my individual AVAssets, and it worked, whether I was using generateCGImagesAsynchronously or copyCGImage.
But I thought it would be more efficient to run generateCGImagesAsynchronously on my composition asset, so I make only one call instead of looping over each original track.
Instead of :
v-Get Frame
AVAsset1 |---------|
AVAsset2 |---------|
AVAsset3 |---------|
I want to do :
v----------v----------v- Get Frames
AVMutableComposition: |---------||---------||---------|
Problem
Here is the actual issue:
import AVKit
var video1URL = URL(fileReferenceLiteralResourceName: "video_bad.mp4") // One of my video file
let asset1 = AVAsset(url: video1URL)
let track1 = asset1.tracks(withMediaType: .video).first!
_ = track1.naturalSize // {w 4 h 3}
var video2URL = URL(fileReferenceLiteralResourceName: "video_ok.mp4") // Some mp4 I got from internet
let asset2 = AVAsset(url: video2URL)
let track2 = asset2.tracks(withMediaType: .video).first!
_ = track2.naturalSize // {w 1920 h 1080}
Here is the actual screenshot of the playground (that you can download here):
And here is something else :
Look at the "Current Scale" information in the QuickTime inspector. The video displays just fine, but it's shown as being heavily magnified (note that no pixel is blurry or anything; it has to do with some metadata)
The video file I'm working with in QuickTime:
The video file from internet:
Question
What is that metadata, and how do I deal with it?
Why is it different on the original track than when it's put in a composition?
How can I extract a frame from such videos?
So if you stumble across this post, you are probably trying to figure out Tesla's way of writing videos.
There is no easy solution to this issue, which is caused by Tesla software incorrectly setting metadata in .mov video files. I opened an incident with Apple and they were able to confirm this.
So I wrote some code to go and fix the video file by rewriting the bytes where it indicates the video track size.
Here we go; it's ugly, but for the sake of completeness I wanted to post a solution here, even if it isn't the best.
import Foundation

struct VideoFixer {
    var url: URL
    private var fh: FileHandle?

    static func fix(_ url: URL) {
        var fixer = VideoFixer(url)
        fixer.fix()
    }

    init(_ url: URL) {
        self.url = url
    }

    mutating func fix() {
        guard let fh = try? FileHandle(forUpdating: url) else {
            return
        }
        var atom = Atom(fh)
        atom.seekTo(AtomType.moov)
        atom.enter()
        if atom.atom_type != AtomType.trak {
            atom.seekTo(AtomType.trak)
        }
        atom.enter()
        if atom.atom_type != AtomType.tkhd {
            atom.seekTo(AtomType.tkhd)
        }
        atom.seekTo(AtomType.tkhd)
        let data = atom.data()
        let width = data?.withUnsafeBytes { $0.load(fromByteOffset: 76, as: UInt16.self).bigEndian }
        let height = data?.withUnsafeBytes { $0.load(fromByteOffset: 80, as: UInt16.self).bigEndian }
        if width == 4 && height == 3 {
            guard let offset = try? fh.offset() else {
                return
            }
            try? fh.seek(toOffset: offset + 76)
            // 1280x960
            var newWidth = UInt16(1280).byteSwapped
            var newHeight = UInt16(960).byteSwapped
            let dataWidth = Data(bytes: &newWidth, count: 2)
            let dataHeight = Data(bytes: &newHeight, count: 2)
            fh.write(dataWidth)
            try? fh.seek(toOffset: offset + 80)
            fh.write(dataHeight)
        }
        try? fh.close()
    }
}

typealias AtomType = UInt32

extension UInt32 {
    static var ftyp = UInt32(1718909296)
    static var mdat = UInt32(1835295092)
    static var free = UInt32(1718773093)
    static var moov = UInt32(1836019574)
    static var trak = UInt32(1953653099)
    static var tkhd = UInt32(1953196132)
}

struct Atom {
    var fh: FileHandle
    var atom_size: UInt32 = 0
    var atom_type: UInt32 = 0

    init(_ fh: FileHandle) {
        self.fh = fh
        self.read()
    }

    mutating func seekTo(_ type: AtomType) {
        while self.atom_type != type {
            self.next()
        }
    }

    mutating func next() {
        guard var offset = try? fh.offset() else {
            return
        }
        offset = offset - 8 + UInt64(atom_size)
        if (try? self.fh.seek(toOffset: UInt64(offset))) == nil {
            return
        }
        self.read()
    }

    mutating func read() {
        self.atom_size = fh.nextUInt32().bigEndian
        self.atom_type = fh.nextUInt32().bigEndian
    }

    mutating func enter() {
        self.atom_size = fh.nextUInt32().bigEndian
        self.atom_type = fh.nextUInt32().bigEndian
    }

    func data() -> Data? {
        guard let offset = try? fh.offset() else {
            return nil
        }
        let data = fh.readData(ofLength: Int(self.atom_size))
        try? fh.seek(toOffset: offset)
        return data
    }
}

extension FileHandle {
    func nextUInt32() -> UInt32 {
        let data = self.readData(ofLength: 4)
        return data.withUnsafeBytes { $0.load(as: UInt32.self) }
    }
}

AVFoundation route audio between two non-system input and ouputs

I've been trying to route audio from a virtual Soundflower device to a hardware speaker. The Soundflower virtual device is my system output. I want my AVAudioEngine to take the Soundflower input and output it to the hardware speaker.
However, having researched it, it seems AVAudioEngine only supports RIO devices. I've looked at AudioKit and its Output Splitter example, however I was getting crackling and unsatisfactory results. The bones of my code are as follows:
static func set(device: String, isInput: Bool, toUnit unit: AudioUnit) -> Int {
    let devs = (isInput ? EZAudioDevice.inputDevices() : EZAudioDevice.outputDevices()) as! [EZAudioDevice]
    let mic = devs.first(where: { $0.name == device })!
    var inputID = mic.deviceID // replace with actual, dynamic value
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0, &inputID, UInt32(MemoryLayout<AudioDeviceID>.size))
    return Int(inputID)
}
let outputRenderCallback: AURenderCallback = {
    (inRefCon: UnsafeMutableRawPointer,
     ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
     inTimeStamp: UnsafePointer<AudioTimeStamp>,
     inBusNumber: UInt32,
     inNumberFrames: UInt32,
     ioData: UnsafeMutablePointer<AudioBufferList>?) -> OSStatus in

    // Get refs
    let buffer = UnsafeMutableAudioBufferListPointer(ioData)
    let engine = Unmanaged<Engine>.fromOpaque(inRefCon).takeUnretainedValue()

    // If the engine hasn't saved any data yet, just output silence
    if engine.latestSampleTime == nil {
        //makeBufferSilent(buffer!)
        return noErr
    }

    // Read the latest available sample
    let sampleTime = engine.latestSampleTime
    if let err = checkErr(engine.ringBuffer.fetch(ioData!, framesToRead: inNumberFrames, startRead: sampleTime!).rawValue) {
        //makeBufferSilent(buffer!)
        return err
    }
    return noErr
}
private let trailEngine: AVAudioEngine
private let subEngine: AVAudioEngine

init() {
    subEngine = AVAudioEngine()
    let inputUnit = subEngine.inputNode.audioUnit!
    print(Engine.set(device: "Soundflower (2ch)", isInput: true, toUnit: inputUnit))

    trailEngine = AVAudioEngine()
    let outputUnit = trailEngine.outputNode.audioUnit!
    print(Engine.set(device: "Boom 3", isInput: false, toUnit: outputUnit))

    subEngine.inputNode.installTap(onBus: 0, bufferSize: 2048, format: nil) { [weak self] (buffer, time) in
        guard let self = self else { return }
        let sampleTime = time.sampleTime
        self.latestSampleTime = sampleTime
        // Write to the ring buffer
        if let _ = checkErr(self.ringBuffer.store(buffer.audioBufferList, framesToWrite: 2048, startWrite: sampleTime).rawValue) {
            //makeBufferSilent(UnsafeMutableAudioBufferListPointer(buffer.mutableAudioBufferList))
        }
    }

    var renderCallbackStruct = AURenderCallbackStruct(
        inputProc: outputRenderCallback,
        inputProcRefCon: UnsafeMutableRawPointer(Unmanaged<Engine>.passUnretained(self).toOpaque())
    )
    if let _ = checkErr(
        AudioUnitSetProperty(
            trailEngine.outputNode.audioUnit!,
            kAudioUnitProperty_SetRenderCallback,
            kAudioUnitScope_Global,
            0,
            &renderCallbackStruct,
            UInt32(MemoryLayout<AURenderCallbackStruct>.size)
        )
    ) {
        return
    }

    subEngine.prepare()
    trailEngine.prepare()

    ringBuffer = RingBuffer<Float>(numberOfChannels: 2, capacityFrames: UInt32(4800 * 20))

    do {
        try self.subEngine.start()
    } catch {
        print("Error starting the input engine: \(error)")
    }
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.01) {
        do {
            try self.trailEngine.start()
        } catch {
            print("Error starting the output engine: \(error)")
        }
    }
}
For reference the RingBuffer implementation is at:
https://github.com/vgorloff/CARingBuffer
and the AudioKit example
https://github.com/AudioKit/OutputSplitter/tree/master/OutputSplitter
I was using AudioKit 4 (however the example only uses AudioKit's device wrappers). The result of this code is super-crackly audio through the speakers, which suggests the signal is getting completely mangled in the transfer between the two engines. I am not too worried about latency between the two engines.
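One observation on the crackling: two separate AVAudioEngines run on two unsynchronized device clocks, so drift between them will mangle a naive handoff. A common way to avoid the problem entirely is to wrap both devices in a Core Audio aggregate device and run a single engine (one clock domain) on it. A rough sketch, where the device UIDs and the aggregate's name/UID are assumptions you would look up (e.g. via the AudioDevice helper from the earlier answer, or Audio MIDI Setup):

```swift
import CoreAudio

// Sketch: combine an input device and an output device into one aggregate
// device so a single AVAudioEngine can span both without clock drift.
func makeAggregateDevice(inputUID: String, outputUID: String) -> AudioDeviceID? {
    let description: [String: Any] = [
        kAudioAggregateDeviceNameKey: "Router",               // assumed name
        kAudioAggregateDeviceUIDKey: "com.example.router",    // assumed UID
        kAudioAggregateDeviceSubDeviceListKey: [
            [kAudioSubDeviceUIDKey: inputUID],
            [kAudioSubDeviceUIDKey: outputUID],
        ],
    ]
    var aggregateID = AudioObjectID(0)
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary,
                                                    &aggregateID)
    return status == noErr ? aggregateID : nil
}
```

With the aggregate device set as both the input and output device of one engine, a plain inputNode-to-outputNode connection (or a single tap) replaces the ring buffer entirely. Destroy the device with AudioHardwareDestroyAggregateDevice when done.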