AudioKit logging - 55: EXCEPTION (-1): "" - repeatedly - Swift

I updated to the latest version of AudioKit (4.5),
and my AudioKit class that listens to the microphone amplitude now prints 55: EXCEPTION (-1): "" endlessly to the console. The app doesn't crash or anything, but it keeps logging that.
My app is a video camera app that records using the GPUImage library.
For some reason, the logs appear only when I start recording.
In addition, my onAmplitudeUpdate callback method no longer outputs anything but 0.0 values. This didn't happen before updating AudioKit. Any ideas?
Here is my class:
// G8Audiokit.swift
// GenerateToolkit
//
// Created by Omar Juarez Ortiz on 2017-08-03.
// Copyright © 2017 All rights reserved.
//
import Foundation
import AudioKit
class G8Audiokit{
//Variables for Audio audioAnalysis
var microphone: AKMicrophone! // Device Microphone
var amplitudeTracker: AKAmplitudeTracker! // Tracks the amplitude of the microphone
var signalBooster: AKBooster! // boosts the signal
var audioAnalysisTimer: Timer? // Continuously calls audioAnalysis function
let amplitudeBuffSize = 10 // A smaller buffer yields more amplitude responsiveness (and instability); a higher value responds more slowly but is smoother
var amplitudeBuffer: [Double] // This stores a rolling window of amplitude values, used to get average amplitude
public var onAmplitudeUpdate: ((_ value: Float) -> ())?
static let sharedInstance = G8Audiokit()
private init(){ //private because that way the class can only be initialized by itself.
self.amplitudeBuffer = [Double](repeating: 0.0, count: amplitudeBuffSize)
startAudioAnalysis()
}
// public override init() {
// // Initialize the audio buffer with zeros
//
// }
/**
Set up AudioKit Processing Pipeline and start the audio analysis.
*/
func startAudioAnalysis(){
stopAudioAnalysis()
// Settings
AKSettings.bufferLength = .medium // Sets the audio signal buffer size
do {
try AKSettings.setSession(category: .playAndRecord)
} catch {
AKLog("Could not set session category.")
}
// ----------------
// Input + Pipeline
// Initialize the built-in Microphone
microphone = AKMicrophone()
// Pre-processing
signalBooster = AKBooster(microphone)
signalBooster.gain = 5.0 // When video recording starts, the signal gets boosted to the equivalent of 5.0, so we're setting it to 5.0 here and changing it to 1.0 when we start video recording.
// Filter out anything outside human voice range
let highPass = AKHighPassFilter(signalBooster, cutoffFrequency: 55) // Lowered this a bit to be more sensitive to bass-drums
let lowPass = AKLowPassFilter(highPass, cutoffFrequency: 255)
// At this point you don't have much signal left, so you balance it against the original signal!
let rebalanced = AKBalancer(lowPass, comparator: signalBooster)
// Track the amplitude of the rebalanced signal, we use this value for audio reactivity
amplitudeTracker = AKAmplitudeTracker(rebalanced)
// Mute the audio that gets routed to the device output, preventing feedback
let silence = AKBooster(amplitudeTracker, gain:0)
// We need to complete the chain, routing silenced audio to the output
AudioKit.output = silence
// Start the chain and timer callback
do { try AudioKit.start() }
catch { AKLog("AudioKit did not start: \(error)") }
audioAnalysisTimer = Timer.scheduledTimer(timeInterval: 0.01,
target: self,
selector: #selector(audioAnalysis),
userInfo: nil,
repeats: true)
// Put the timer on the main thread so UI updates don't interrupt
RunLoop.main.add(audioAnalysisTimer!, forMode: RunLoopMode.commonModes)
}
// Call this when closing the app or going to background
public func stopAudioAnalysis(){
audioAnalysisTimer?.invalidate()
AudioKit.disconnectAllInputs() // Disconnect all AudioKit components, so they can be relinked when we call startAudioAnalysis()
}
// This is called on the audioAnalysisTimer
@objc func audioAnalysis(){
writeToBuffer(val: amplitudeTracker.amplitude) // Write an amplitude value to the rolling buffer
let val = getBufferAverage()
onAmplitudeUpdate?(Float(val))
}
// Writes amplitude values to a rolling window buffer, writes to index 0 and pushes the previous values to the right, removes the last value to preserve buffer length.
func writeToBuffer(val: Double) {
for (index, _) in amplitudeBuffer.enumerated() {
if (index == 0) {
amplitudeBuffer.insert(val, at: 0)
_ = amplitudeBuffer.popLast()
}
else if (index < amplitudeBuffer.count-1) {
amplitudeBuffer.rearrange(from: index-1, to: index+1) // `rearrange` is a custom Array extension defined elsewhere in the project
}
}
}
// Returns the average of the amplitudeBuffer, resulting in a smoother audio reactivity signal
func getBufferAverage() -> Double {
var avg:Double = 0.0
for val in amplitudeBuffer {
avg = avg + val
}
avg = avg / Double(amplitudeBuffer.count)
return avg
}
}

Related

AudioKit v5 - What is the best way to program polyphony?

I am trying to use AudioKit v5 to build a simple synthesizer app that plays a certain frequency whenever a button is pressed.
I would like there to be 28 buttons. However, I do not know if I should use the DunneAudioKit Synth class or create a dictionary of 28 AudioKit DynamicOscillators.
If I use the Synth class, I currently have no way of changing the waveform of the synth. If I use the dictionary of DynamicOscillators, I will have to start 28 oscillators and keep them running throughout the lifetime of the app. Neither scenario seems that great. One option only allows for a certain sound while the other one is energy inefficient.
Is there a better way to allow for polyphony using AudioKit? A way that is efficient and also able to produce many different kinds of sound? AudioKit SynthOne is a great example of what I am trying to achieve.
I downloaded "AudioKit Synth One - The Ultimate Guide" by Francis Preve and from that I learned that SynthOne uses 2 Oscillators, a Sub-Oscillator, an FM Pair, and a Noise Generator to produce its sounds. However, the eBook does not explain how to actually code a polyphonic synthesizer using these 5 generators. I know that SynthOne's source code is online. I have downloaded it, but it is a little too advanced for me to understand. However, if someone can help explain how to use just those 5 objects to create a polyphonic synthesizer, that would be incredible.
Thanks in advance.
I'm not sure how they did things in AudioKit 4, which Synth One uses. I would speculate that it has an internal oscillator bank when polyphonic mode is enabled, so essentially one oscillator instance per voice.
The AudioKit 5 documentation says the Dunne Synth is the only polyphonic oscillator at this time, but I did add a WIP polyphonic oscillator example to the AudioKit Cookbook. I'm not sure how much of a resource hog it is. 28 instances seems excessive, so you might be able to get by with around 10 and change the frequencies for each voice with button presses.
The third option would be to use something like AppleSampler or DunneSampler and make instruments based on single-cycle wavetable audio files. This is more of a workaround and wouldn't give as much control over certain parameters, but it would be lighter on resources.
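To make the small-fixed-voice-pool idea concrete, here is a rough sketch (not from the Cookbook; it assumes AudioKit 5 with DynamicOscillator coming from SoundpipeAudioKit, and names like VoicePool are made up): a handful of oscillators run continuously into a Mixer, and each button press just claims a free voice and retunes it.
import AudioKit
import SoundpipeAudioKit // assumption: DynamicOscillator lives here in AudioKit 5
class VoicePool {
    let engine = AudioEngine()
    let mixer = Mixer()
    var voices: [DynamicOscillator] = []
    var busy: [Bool]
    init(voiceCount: Int = 10) {
        busy = Array(repeating: false, count: voiceCount)
        for _ in 0..<voiceCount {
            let osc = DynamicOscillator()
            osc.amplitude = 0 // silent until a note claims this voice
            voices.append(osc)
            mixer.addInput(osc)
        }
        engine.output = mixer
        try? engine.start()
        voices.forEach { $0.start() } // oscillators keep running; only frequency/amplitude change
    }
    // Claim the first free voice, retune it, and return its index (nil if all voices are busy).
    func noteOn(frequency: AUValue) -> Int? {
        guard let index = busy.firstIndex(of: false) else { return nil } // voice stealing could go here
        busy[index] = true
        voices[index].frequency = frequency
        voices[index].amplitude = 0.5
        return index
    }
    func noteOff(_ index: Int) {
        voices[index].amplitude = 0
        busy[index] = false
    }
}
With 28 buttons mapped to frequencies, this only ever runs voiceCount oscillators, which is the trade-off suggested above.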
I had a similar question and tried several ways of making a versatile polyphonic sampler.
It's true that the AppleSampler and the DunneSampler support polyphony; however, I needed a sampler that I could control with more precision on a note-by-note basis, i.e. playing each "voice" with unique playback parameters like playspeed, etc.
I found that building a sampler based on the AudioPlayer was the right path for me. I created a member variable inside my sampler "voice" that keeps track of when that voice is "busy": when a voice is assigned a note to play, it marks itself as busy, and when it's done, the AudioPlayer's completion callback sets the voice's "busy" variable back to false.
I then use a "conductor" to find the first available voice that is not "busy" to play a sound.
Here is a snippet:
import AudioKit
import AudioKitUI
import AVFoundation
import Keyboard
import Combine
import SwiftUI
import DunneAudioKit
class AudioPlayerVoice: ObservableObject, HasAudioEngine {
// For audio playback
let engine = AudioEngine()
let player = AudioPlayer()
let variSpeed: VariSpeed
var voiceNumber = 0
var busy : Bool
init() {
variSpeed = VariSpeed(player)
engine.output = variSpeed
do {
try engine.start()
} catch {
Log("AudioKit did not start!")
}
busy = false
variSpeed.rate = 1.0
player.isBuffered = true
player.completionHandler = donePlaying
}
func play(buffer: AVAudioPCMBuffer) {
// Set this voice to busy so that new incoming notes are not played here
busy = true
// Load buffer into player
player.load(buffer: buffer)
// Compare buffer and audioplayer formats
// print("Player format 1: ")
// print(player.outputFormat)
// print("Buffer format: ")
// print(buffer.format)
// Set AudioPlayer format to be the same as buffer format
player.playerNode.engine?.connect( player.playerNode, to: player.mixerNode, format: buffer.format)
// Compare buffer and audioplayer formats again to see if the above line changed anything
// print("Player format 2: ")
// print(player.outputFormat)
// Play sound with a completion callback
player.play(completionCallbackType: .dataPlayedBack)
}
func donePlaying() {
print("done!")
busy = false
}
}
class AudioPlayerConductor: ObservableObject {
// Mark Published so View updates label on changes
@Published private(set) var lastPlayed: String = "None"
let voiceCount = 16
var soundFileList: [String] = []
var buffers : [AVAudioPCMBuffer] = []
var players: [AudioPlayerVoice] = []
var sampleDict: [String: AVAudioPCMBuffer] = [:]
func loadAudioFiles() {
// Build audio file name list
let fileNameExtension = ".wav"
if let files = try? FileManager.default.contentsOfDirectory(atPath: Bundle.main.bundlePath + "/Samples" ){
// var counter = 0
///print("Files... " + files)
for file in files {
if file.hasSuffix(fileNameExtension) {
let name = file.prefix(file.count - fileNameExtension.count)
// add sound file name without extension to our soundFileList
soundFileList.append(String(name))
// get url for current sound
let url = Bundle.main.url(forResource: String(name), withExtension: "wav", subdirectory: "Samples")
// read audiofile into an AVAudioFile
let audioFile = try! AVAudioFile(forReading: url!)
// find the audio format and frame count
let audioFormat = audioFile.processingFormat
let audioFrameCount = UInt32(audioFile.length)
// create a new AVAudioPCMBuffer and read from the AVAudioFile into the AVAudioPCMBuffer
let audioFileBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: audioFrameCount)
try! audioFile.read(into: audioFileBuffer!)
// update the sampleDict dictionary with "name" / "buffer" key / value
sampleDict[String(name)] = audioFileBuffer
//print("loading... " + name)
//print(".......... " + url!.absoluteString)
}
}
}
print("Loaded Samples:")
print(soundFileList)
}
func initializeSamplerVoices() {
for i in 1...voiceCount {
let newAudioPlayerVoice = AudioPlayerVoice()
newAudioPlayerVoice.voiceNumber = i
players.append(newAudioPlayerVoice)
}
}
func playWithAvailableVoice (bufferToPlay: AVAudioPCMBuffer, playspeed: Float) {
for i in 0...(voiceCount-1) {
if (!players[i].busy) {
players[i].variSpeed.rate = playspeed
players[i].play(buffer: bufferToPlay)
break
}
}
}
func playXY(x: Double, y: Double) {
// AliSwift.scale and UIScreen.screenWidth / .screenHeight appear to be helper extensions not shown here
let playspeed = Float(AliSwift.scale(x, 0.0, UIScreen.screenWidth, 0.1, 3.0))
let soundNumber = Int(AliSwift.scale(y, 0.0, UIScreen.screenHeight, 0 , Double(soundFileList.count - 1)))
let soundBuffer = sampleDict[soundFileList[soundNumber]]
playWithAvailableVoice(bufferToPlay: soundBuffer!, playspeed: playspeed)
}
init() {
loadAudioFiles()
initializeSamplerVoices()
}
}
struct ContentViewAudioPlayer: View {
@StateObject var conductor = AudioPlayerConductor()
// @StateObject var samplerVoice = AudioPlayerVoice()
var body: some View {
ZStack {
VStack {
Rectangle()
.fill(.red)
.frame(maxWidth: .infinity)
.frame(maxHeight: .infinity)
.onTapGesture { location in
print("Tapped at \(location)")
let someSound = conductor.sampleDict.randomElement()!
let someSoundName = someSound.key
let someSoundBuffer = someSound.value
print("Playing: " + someSoundName)
conductor.playXY(x: location.x, y: location.y)
}
}
.onAppear {
// conductor.start()
}
.onDisappear {
// conductor.stop()
}
}
}
struct ContentViewAudioPlayer_Previews: PreviewProvider {
static var previews: some View {
ContentViewAudioPlayer()
}
}

Swift Charts delay for realtime data

I am using Swift 5 with Charts 3.6.0 (line chart, cubic lines) to plot real-time watchOS Core Motion data. The goal is to display watch movement as quickly as possible. Since my sample rate is high, I suspect there will be a bottleneck in updating the view, and as such would only like to display the N most recent items.
Here is the watch function that sends the data immediately, as mentioned in Apple docs, and numerous tutorials:
motion.deviceMotionUpdateInterval = 1.0 / 120.0
motion.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: queue) { [self] (deviceMotion: CMDeviceMotion?, _ : Error?) in
guard let motion = deviceMotion else { return }
self.sendDataToPhone(quaternion: motion.attitude.quaternion, time: Double(Date().timeIntervalSince1970))
}
private func sendDataToPhone(quaternion: CMQuaternion, time: Double) {
if WCSession.default.isReachable {
WCSession.default.sendMessageData(try! NSKeyedArchiver.archivedData(withRootObject: [quaternion.x, quaternion.y, quaternion.z, quaternion.w, time], requiringSecureCoding: false), replyHandler: nil, errorHandler: nil);
}
}
Once received, the packets are interpreted by the session() function on the iPhone:
override func viewDidLoad() {
super.viewDidLoad()
self.lineChartView.leftAxis.axisMinimum = -1;
self.lineChartView.leftAxis.axisMaximum = 1;
}
func session(_ session: WCSession, didReceiveMessageData messageData: Data) {
let record : [Double] = try! NSKeyedUnarchiver.unarchivedObject(ofClasses: [NSArray.self], from: messageData) as! [Double]
laggyFunction(quaternions: [simd_quatd.init(ix: record[0], iy: record[1], iz: record[2], r: record[3])], quaternionTimes: [record[4]])
}
private func laggyFunction(quaternions: [simd_quatd], quaternionTimes: [Double]) {
DispatchQueue.main.sync {
let dataset = self.lineChartView.data!.getDataSetByIndex(0)!
var x = dataset.entryCount + 1;
for quaternion in quaternions {
let _ = dataset.addEntry(ChartDataEntry(x: Double(x), y: quaternion.vector.w))
x += 1;
}
// limit the amount of points
while (dataset.entryCount > 5) {
let _ = dataset.removeFirst()
}
// - re index so entries start from 1
for startIdx in 1..<dataset.entryCount {
dataset.entryForIndex(startIdx - 1)!.x = Double(startIdx);
}
self.lineChartView.data!.notifyDataChanged()
self.lineChartView.notifyDataSetChanged()
}
}
Logic flow:
As packets come in, new entries are added to the initial dataset from the lineChartView. In the event there are more than 5, the oldest ones are removed so only 5 remain. Then, the x values are re-indexed so the chart keeps a sequential flow.
The problem:
The delay in updating the UI chart is very high. The elapsed time for each of the two functions to complete is plotted below. At the 73rd percentile, the laggy function is able to keep up with the sample rate of incoming packets (< 1/120 ≈ 0.008 s). The session function keeps up with the sample rate throughout. The CDF plot, in my opinion, does not do it justice: visually, the chart is very "sluggish". As an experiment, if I throw the watch against the wall, I can observe it hitting the concrete well before the chart is updated.
My goal is to update the Chart as quickly as possible to observe watch motion and discard new entries until the UI is updated. What is the correct way to do this with my chart choice?
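One way to get the "discard new entries until the UI is updated" behaviour described above is a pending-update flag, so that at most one chart refresh is ever queued on the main thread and samples arriving in the meantime are dropped. This is only a sketch against the same Charts 3.6 API used above; chartUpdatePending and updateChart(with:) are hypothetical additions to the view controller.
private var chartUpdatePending = false
private func updateChart(with quaternion: simd_quatd) {
    guard !chartUpdatePending else { return } // drop this sample; the chart is still redrawing
    chartUpdatePending = true
    DispatchQueue.main.async {
        let dataset = self.lineChartView.data!.getDataSetByIndex(0)!
        let _ = dataset.addEntry(ChartDataEntry(x: Double(dataset.entryCount + 1),
                                                y: quaternion.vector.w))
        // keep only the most recent points
        while dataset.entryCount > 5 {
            let _ = dataset.removeFirst()
        }
        self.lineChartView.data!.notifyDataChanged()
        self.lineChartView.notifyDataSetChanged()
        self.chartUpdatePending = false
    }
}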

Why do I get popping noises from my Core Audio program?

I am trying to figure out how to use Apple's Core Audio APIs to record and play back linear PCM audio without any file I/O. (The recording side seems to work just fine.)
The code I have is pretty short, and it works somewhat. However, I am having trouble with identifying the source of clicks and pops in the output. I've been beating my head against this for many days with no success.
I have posted a git repo here, with a command-line program that shows where I'm at: https://github.com/maxharris9/AudioRecorderPlayerSwift/tree/main/AudioRecorderPlayerSwift
I put in a couple of functions to prepopulate the recording. The tone generator (makeWave) and noise generator (makeNoise) are just in here as debugging aids. I'm ultimately trying to identify the source of the messed up output when you play back a recording in audioData:
// makeWave(duration: 30.0, frequency: 441.0) // appends to `audioData`
// makeNoise(frameCount: Int(44100.0 * 30)) // appends to `audioData`
_ = Recorder() // appends to `audioData`
_ = Player() // reads from `audioData`
Here's the player code:
var lastIndexRead: Int = 0
func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
print("missing user data in output callback")
return
}
let sliceStart = lastIndexRead
let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize - 1)
print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count)
if sliceEnd >= audioData.count {
player.pointee.running = false
print("found end of audio data")
return
}
let slice = Array(audioData[sliceStart ..< sliceEnd])
let sliceCount = slice.count
// doesn't fix it
// audioData[sliceStart ..< sliceEnd].withUnsafeBytes {
// inBuffer.pointee.mAudioData.copyMemory(from: $0.baseAddress!, byteCount: Int(sliceCount))
// }
memcpy(inBuffer.pointee.mAudioData, slice, sliceCount)
inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount)
lastIndexRead += sliceCount + 1
// enqueue the buffer, or re-enqueue it if it's a used one
check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
struct Player {
struct PlayingState {
var packetPosition: UInt32 = 0
var running: Bool = false
var start: Int = 0
var end: Int = Int(bufferByteSize)
}
init() {
var playingState: PlayingState = PlayingState()
var queue: AudioQueueRef?
// this doesn't help
// check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, CFRunLoopGetMain(), CFRunLoopMode.commonModes.rawValue, 0, &queue))
check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, nil, nil, 0, &queue))
var buffers: [AudioQueueBufferRef?] = Array<AudioQueueBufferRef?>.init(repeating: nil, count: BUFFER_COUNT)
print("Playing\n")
playingState.running = true
for i in 0 ..< BUFFER_COUNT {
check(AudioQueueAllocateBuffer(queue!, UInt32(bufferByteSize), &buffers[i]))
outputCallback(inUserData: &playingState, inAQ: queue!, inBuffer: buffers[i]!)
if !playingState.running {
break
}
}
check(AudioQueueStart(queue!, nil))
repeat {
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION, false)
} while playingState.running
// delay to ensure queue emits all buffered audio
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION * Double(BUFFER_COUNT + 1), false)
check(AudioQueueStop(queue!, true))
check(AudioQueueDispose(queue!, true))
}
}
I captured the audio with Audio Hijack, and noticed that the jumps are indeed correlated with the size of the buffer:
Why is this happening, and what can I do to fix it?
I believe you were beginning to zero in on, or at least suspect, the cause of the popping you are hearing: it's caused by discontinuities in your waveform.
My initial hunch was that you were generating the buffers independently (i.e. assuming that each buffer starts at time=0), but I checked out your code and it wasn't that. I suspect some of the calculations in makeWave were at fault. To check this theory I replaced your makeWave with the following:
func makeWave(offset: Double, numSamples: Int, sampleRate: Float64, frequency: Float64, numChannels: Int) -> [Int16] {
var data = [Int16]()
for sample in 0..<numSamples / numChannels {
// time in s
let t = offset + Double(sample) / sampleRate
let value = Double(Int16.max) * sin(2 * Double.pi * frequency * t)
for _ in 0..<numChannels {
data.append(Int16(value))
}
}
return data
}
This function removes the double loop in the original, accepts an offset so it knows which part of the wave is being generated, and makes some changes to the sampling of the sine wave.
When Player is modified to use this function, you get a lovely steady tone. I'll add the changes to Player soon; I can't in good conscience show the public the quick and dirty mess it is right now.
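For context, here is a hypothetical way the audio data could be prepared with that function (samplesPerBuffer, the loop count and the variable names are made-up, not from the repo): each buffer picks up where the previous one left off, so the sine wave stays continuous across buffer boundaries.
let sampleRate: Float64 = 44100
let frequency: Float64 = 441
let numChannels = 2
let samplesPerBuffer = 4096
var offset = 0.0
var generated = [Int16]()
for _ in 0..<10 {
    generated += makeWave(offset: offset, numSamples: samplesPerBuffer,
                          sampleRate: sampleRate, frequency: frequency, numChannels: numChannels)
    // advance the offset by the duration of the frames just generated
    offset += Double(samplesPerBuffer / numChannels) / sampleRate
}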
Based on your comments below I refocused on your player. The issue was that the audio buffers expect byte counts but the slice count and some other calculations were based on Int16 counts. The following version of outputCallback will fix it. Concentrate on the use of the new variable bytesPerChannel.
func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
print("missing user data in output callback")
return
}
let bytesPerChannel = MemoryLayout<Int16>.size
let sliceStart = lastIndexRead
let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize/bytesPerChannel)
if sliceEnd >= audioData.count {
player.pointee.running = false
print("found end of audio data")
return
}
let slice = Array(audioData[sliceStart ..< sliceEnd])
let sliceCount = slice.count
print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count, "slice count:", sliceCount)
// need to be careful to convert from counts of Ints to bytes
memcpy(inBuffer.pointee.mAudioData, slice, sliceCount*bytesPerChannel)
inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount*bytesPerChannel)
lastIndexRead += sliceCount
// enqueue the buffer, or re-enqueue it if it's a used one
check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
I did not look at the Recorder code, but you may want to check if the same sort of error crept in there.

How to prevent Timer slowing down in background

I am writing a macOS app in Swift and want to repeat a task every 0.5 s (more or less; high precision is not required). This app should run in the background while I use other applications.
I'm currently using a Timer:
Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true)
It starts fine, with updates roughly every 0.5 s, but after some time in the background the Timer slows down considerably to intervals of roughly 1 s or 2 s (it's very close to these values, so it seems that the timer skips ticks or slows down by a factor of 2 or 4).
I suspect it's because the app is given a low priority after a few seconds in the background. Is there a way to avoid this? It could be either in the app settings in Xcode, by asking to stay active all the time, or from the system when the app is run (or even by doing things differently without Timer, but I'd rather keep it simple if possible).
Here is a minimal working example: the app only has a ViewController with this code
import Cocoa
class ViewController: NSViewController {
var lastUpdate = Date().timeIntervalSince1970
override func viewDidLoad() {
super.viewDidLoad()
let timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) {
timer in
let now = Date().timeIntervalSince1970
print(now - self.lastUpdate)
self.lastUpdate = now
}
RunLoop.current.add(timer, forMode: .common)
}
}
Output at start is
0.5277011394500732
0.5008649826049805
0.5000109672546387
0.49898695945739746
0.5005381107330322
0.5005340576171875
0.5000457763671875
...
But after a few seconds in the background it becomes
0.49993896484375
0.49997520446777344
0.5000619888305664
1.5194149017333984
1.0009620189666748
0.9984869956970215
2.0002501010894775
2.001321792602539
1.9989290237426758
...
If I bring the app back to the foreground, the timer goes back to 0.5s increments.
Note: I'm running macOS 10.15.5 (Catalina) on an iMac.
This is because of App Nap. You can disable App Nap, but it is not recommended.
var activity: NSObjectProtocol?
activity = ProcessInfo().beginActivity(options: .userInitiatedAllowingIdleSystemSleep, reason: "Timer delay")
The default tolerance value of a Timer is zero, but the system reserves the right to apply a small amount of tolerance to certain timers regardless of the value of the tolerance property.
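Putting both points together, a minimal sketch (the option choice, the reason string and the 0.05 s tolerance are just illustrative): keep the activity token for as long as the timer has to stay on schedule, end it afterwards, and give the timer a small explicit tolerance.
import Foundation
// Hold on to the token; ending it (or letting it go away) re-enables App Nap.
let activity = ProcessInfo.processInfo.beginActivity(
    options: .userInitiatedAllowingIdleSystemSleep,
    reason: "Keep 0.5 s timer on schedule in the background"
)
let timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
    // periodic work
}
timer.tolerance = 0.05 // small explicit tolerance; the system may still nudge it slightly
// Later, when the periodic work is done:
// ProcessInfo.processInfo.endActivity(activity)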
As I stated in my comment below, if you want granularities lower than 1.0 s, you should not use Timer objects but rather GCD. I wrote a class MilliTimer you can use, which gives improved granularity down to a few milliseconds. Please try this in a Playground and then in your app. In this example, I set the granularity of the GCD-based timer to 50 milliseconds. To adjust the delay, pass the delay you want in milliseconds to the respective parameter of the initializer. In your case, you might be interested in 500 ms = 0.5 s.
import Cocoa
public class MilliTimer
{
static let µseconds = 1000000.0 // 1,000,000 ns per ms: converts uptimeNanoseconds to milliseconds
static var lastUpdate = DispatchTime.now()
var delay = 0
var doStop = false
var runs = 0
let maxRuns = 50
private class func timer(_ milliseconds:Int, closure: @escaping ()->())
{
let when = DispatchTime.now() + DispatchTimeInterval.milliseconds(milliseconds)
DispatchQueue.main.asyncAfter(deadline: when, execute: closure)
}
init(delay:Int) {
self.delay = delay
}
func delta() -> Double {
let now = DispatchTime.now()
let nowInMilliseconds = Double(now.uptimeNanoseconds) / MilliTimer.µseconds
let lastUpdateInMilliseconds = Double(MilliTimer.lastUpdate.uptimeNanoseconds) / MilliTimer.µseconds
let delta = nowInMilliseconds - lastUpdateInMilliseconds
MilliTimer.lastUpdate = now
return delta
}
func scheduleTimer()
{
MilliTimer.timer(delay) {
print(self.delta())
if self.doStop == false {
self.scheduleTimer()
self.runs += 1
if self.runs > self.maxRuns {
self.stop()
}
}
}
}
func stop() {
doStop = true
}
}
MilliTimer(delay: 50).scheduleTimer()
CFRunLoopRun()

Connecting AVAudioSourceNode to AVAudioSinkNode does not work

Context
I am writing a signal interpreter using AVAudioEngine which will analyse microphone input. During development, I want to use a default input buffer so I don't have to make noises for the microphone to test my changes.
I am developing using Catalyst.
Problem
I am using AVAudioSinkNode to get the sound buffer (the performance is allegedly better than using .installTap). I am using (a subclass of) AVAudioSourceNode to generate a sine wave. When I connect these two together, I expect the sink node's callback to be called, but it is not. Neither is the source node's render block called.
let engine = AVAudioEngine()
let output = engine.outputNode
let outputFormat = output.inputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)
let sineNode440 = AVSineWaveSourceNode(
frequency: 440,
amplitude: 1,
sampleRate: sampleRate
)
let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
print("[SINK] + \(frameCount) \(Date().timeIntervalSince1970)")
return noErr
}
engine.attach(sineNode440)
engine.attach(sink)
engine.connect(sineNode440, to: sink, format: nil)
try engine.start()
Additional tests
If I connect engine.inputNode to the sink (i.e., engine.connect(engine.inputNode, to: sink, format: nil)), the sink callback is called as expected.
When I connect sineNode440 to engine.outputNode, I can hear the sound and the render block is called as expected.
So both the source and the sink work individually when connected to device input/output, but not together.
AVSineWaveSourceNode
Not important to the question but relevant: AVSineWaveSourceNode is based on Apple sample code. This node produces the correct sound when connected to engine.outputNode.
class AVSineWaveSourceNode: AVAudioSourceNode {
/// We need this separate class to be able to inject the state in the render block.
class State {
let amplitude: Float
let phaseIncrement: Float
var phase: Float = 0
init(frequency: Float, amplitude: Float, sampleRate: Float) {
self.amplitude = amplitude
phaseIncrement = (2 * .pi / sampleRate) * frequency
}
}
let state: State
init(frequency: Float, amplitude: Float, sampleRate: Float) {
let state = State(
frequency: frequency,
amplitude: amplitude,
sampleRate: sampleRate
)
self.state = state
let format = AVAudioFormat(standardFormatWithSampleRate: Double(sampleRate), channels: 1)!
super.init(format: format, renderBlock: { isSilence, _, frameCount, audioBufferList -> OSStatus in
print("[SINE GENERATION \(frequency) - \(frameCount)]")
let tau = 2 * Float.pi
let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
for frame in 0..<Int(frameCount) {
// Get signal value for this frame at time.
let value = sin(state.phase) * amplitude
// Advance the phase for the next frame.
state.phase += state.phaseIncrement
if state.phase >= tau {
state.phase -= tau
}
if state.phase < 0.0 {
state.phase += tau
}
// Set the same value on all channels (due to the inputFormat we have only 1 channel though).
for buffer in ablPointer {
let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
buf[frame] = value
}
}
return noErr
})
for i in 0..<self.numberOfInputs {
print("[SINEWAVE \(frequency)] BUS \(i) input format: \(self.inputFormat(forBus: i))")
}
for i in 0..<self.numberOfOutputs {
print("[SINEWAVE \(frequency)] BUS \(i) output format: \(self.outputFormat(forBus: i))")
}
}
}
outputNode drives the audio processing graph when AVAudioEngine is configured normally ("online"). outputNode pulls audio from its input node, which pulls audio from its input node(s), etc. When you connect sineNode and sink to each other without making a connection to outputNode, there is nothing attached to an output bus of sink or an input bus of outputNode, and therefore when the hardware asks for audio from outputNode it has nowhere to get it.
If I understand correctly, I think you can accomplish what you'd like by getting rid of sink, connecting sineNode440 to outputNode, and running AVAudioEngine in manual rendering mode. In manual rendering mode you receive the rendered audio yourself (similar to AVAudioSinkNode) and drive the graph manually by calling renderOffline(_:to:).
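A minimal sketch of that suggestion, assuming the AVSineWaveSourceNode from the question (the 4096-frame maximum and the loop count are arbitrary): enable offline manual rendering before starting the engine, connect the source to the main mixer, and pull blocks through the graph with renderOffline(_:to:).
import AVFoundation
let engine = AVAudioEngine()
let sineNode440 = AVSineWaveSourceNode(frequency: 440, amplitude: 1, sampleRate: 44100)
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.mainMixerNode, format: nil)
do {
    // Must be enabled while the engine is stopped, before start().
    let renderFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)!
    try engine.enableManualRenderingMode(.offline, format: renderFormat, maximumFrameCount: 4096)
    try engine.start()
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!
    for _ in 0..<10 {
        let status = try engine.renderOffline(buffer.frameCapacity, to: buffer)
        if status == .success {
            // buffer.floatChannelData now holds the rendered sine wave;
            // analyse it here instead of in an AVAudioSinkNode callback.
        }
    }
    engine.stop()
} catch {
    print("Manual rendering failed: \(error)")
}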