I've hunted high and low and cannot find a solution to this problem. I am looking for a way to change the input and output devices that an AVAudioEngine will use on macOS.
When simply playing back an audio file, the following works as expected:
var outputDeviceID: AudioDeviceID = xxx  // the target device's ID
// Note: the size argument must match the property's data type (AudioDeviceID),
// not AudioObjectPropertyAddress
let result: OSStatus = AudioUnitSetProperty(outputUnit,
                                            kAudioOutputUnitProperty_CurrentDevice,
                                            kAudioUnitScope_Global,
                                            0,
                                            &outputDeviceID,
                                            UInt32(MemoryLayout<AudioDeviceID>.size))
if result != noErr {
    print("error setting output device \(result)")
    return
}
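(Here outputUnit is assumed to be the audio unit underlying the engine's output node, obtained roughly like this:)
// Assumption: outputUnit is the Core Audio unit behind the engine's output node
guard let outputUnit = engine.outputNode.audioUnit else { return }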
However, if I initialize the audio input (with let input = engine.inputNode), I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK: if I don't change the output device, I can hear both the microphone and the audio file, and if I change the output device but don't initialize the inputNode, the file plays to the specified destination.
In addition, I have been trying to change the input device. I understood from various places that the following should do this:
let result1: OSStatus = AudioUnitSetProperty(inputUnit,
                                             kAudioOutputUnitProperty_CurrentDevice,
                                             kAudioUnitScope_Output,
                                             0,
                                             &inputDeviceID,
                                             UInt32(MemoryLayout<AudioDeviceID>.size))
if result1 != noErr {
    print("failed with error \(result1)")
    return
}
However, this doesn't work. In most cases it fails with an error (10853), although it succeeds if I select a sound card that has both inputs and outputs; it appears that when I attempt to set the device on either the input or the output unit, it is actually being set for both.
This would suggest that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the whole story. In some solutions I have seen online, people simply change the system default input, but that isn't a particularly nice solution (a sketch of it follows below).
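For reference, that workaround looks something like the following sketch (untested here; inputDeviceID is the hypothetical AudioDeviceID you want as the default). It changes the system-wide default input for every app, which is exactly why it isn't nice:
import CoreAudio

// Sketch of the "change the system default input" workaround. This affects
// every application, not just this engine.
var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultInputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMaster)
var newDefault: AudioDeviceID = inputDeviceID  // hypothetical device ID
AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                           &address, 0, nil,
                           UInt32(MemoryLayout<AudioDeviceID>.size), &newDefault)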
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key, because, as I understand it, both the inputNode and the outputNode are classed as "output units" (they both emit sound somewhere).
Any ideas would be much appreciated, as this is very frustrating!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine can only be assigned to a single aggregate device (that is, a device with both input and output channels). The system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device has output capabilities (and you activate the inputNode), then that device has to be both the input and the output device, as otherwise the output appears not to work.
So the answer is that, I think, there is no answer.
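One possible workaround, given that constraint, is to build an aggregate device in code from the input and output devices you want and hand the engine that single device. This is only a sketch (untested; the name and UID strings are hypothetical), using CoreAudio's AudioHardwareCreateAggregateDevice:
import CoreAudio

// Sketch: combine an input device and an output device into one aggregate
// device, since AVAudioEngine appears to accept only a single device.
// inputUID / outputUID are the kAudioDevicePropertyDeviceUID strings of the
// devices you want.
func makeAggregateDevice(inputUID: String, outputUID: String) -> AudioDeviceID? {
    let description: [String: Any] = [
        kAudioAggregateDeviceNameKey: "Engine Aggregate",
        kAudioAggregateDeviceUIDKey: "com.example.engine-aggregate",
        kAudioAggregateDeviceSubDeviceListKey: [
            [kAudioSubDeviceUIDKey: inputUID],
            [kAudioSubDeviceUIDKey: outputUID],
        ],
        kAudioAggregateDeviceIsPrivateKey: 1,  // hide it from other apps
    ]
    var aggregateID = AudioObjectID(0)
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
    return status == noErr ? aggregateID : nil
}
The returned device ID could then be set on the engine's output unit with kAudioOutputUnitProperty_CurrentDevice, as in the question.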
I'm using the (very cool) AudioKit framework to process audio for a macOS music visualizer app. My audio source ("mic") is iTunes 12 via Rogue Amoeba Loopback.
In the Xcode debug window, I'm seeing the following error message each time I launch my app:
kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=513, mMaxFramesPerSlice=512
I've gathered from searches that this is probably related to sample rate, but I haven't found a clear description of what this error indicates (or if it even matters). My app is functioning normally, but I'm wondering if this could be affecting efficiency.
EDIT: The error message does not appear if I use Audio MIDI Setup to set the Loopback device output to 44.1kHz. (I set it initially to 48.0kHz to match my other audio devices, which I keep configured to the video standard.)
Keeping Loopback at 44.1kHz is an acceptable solution, but now my question would be: Is it possible to avoid this error even with a 48.0kHz input? (I tried AKSettings.sampleRate = 48000 but that made no difference.) Or can I just safely ignore the error in any case?
AudioKit is initialized thusly:
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
do {
    try mic.setDevice(AudioKit.inputDevices![inputDeviceNumber])
} catch {
    AKLog("Device not set")
}
amplitudeTracker = AKAmplitudeTracker(mic)
AudioKit.output = AKBooster(amplitudeTracker, gain: 0)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start")
}
mic.start()
amplitudeTracker?.start()
This line saved my app:
try? AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.02)
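If that doesn't help, the error message itself points at kAudioUnitProperty_MaximumFramesPerSlice: the unit is being asked for 513 frames but will only render 512 per slice. A possible approach is to raise that limit before starting the engine; this is only a sketch, assuming AudioKit 4 where AudioKit.engine exposes the underlying AVAudioEngine:
import AudioKit
import AudioToolbox

// Sketch: raise the output unit's maximum frames per slice so larger pulls
// (e.g. 513 frames at 48 kHz) fit. Run this before AudioKit.start().
if let outputUnit = AudioKit.engine.outputNode.audioUnit {
    var maxFrames: UInt32 = 4096
    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_MaximumFramesPerSlice,
                         kAudioUnitScope_Global, 0,
                         &maxFrames, UInt32(MemoryLayout<UInt32>.size))
}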
I am developing a small audio sequencer application using AudioKit. I only need to play back 4 channels of audio, but I need them to be perfectly synchronized down to the sample level. When I run a test using just two audio files, I can hear that they are not synchronized. The difference is only a few samples, but even a one-sample discrepancy would be a problem. I am currently using multiple AKClipPlayer objects routed to an AKMixer object, and I start them with a basic for loop like this:
private var clipPlayers: [AKClipPlayer] = []

func play() {
    for player in clipPlayers {
        player.play()
    }
}
Is sample-accurate playback timing of multiple audio files possible using AudioKit?
Yes, you need to schedule playback to start in the future with play(at:).
// Preparing can take longer than expected, so do it before choosing a future time
clipPlayers.forEach { $0.prepare(withFrameCount: 10_000) }

// AVAudioTime.now() and the + operator are AudioKit conveniences for
// building a start time a fixed number of seconds ahead
let nearFuture = AVAudioTime.now() + 0.2
clipPlayers.forEach { $0.play(at: nearFuture) }
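For reference, the same future start time can be built with plain AVFoundation, in case those AudioKit helpers aren't available (a sketch):
import AVFoundation

// Anchor playback 0.2 s ahead of the current host time without AudioKit's
// AVAudioTime helpers.
let delaySeconds = 0.2
let startHostTime = mach_absolute_time() + AVAudioTime.hostTime(forSeconds: delaySeconds)
let nearFuture = AVAudioTime(hostTime: startHostTime)
clipPlayers.forEach { $0.play(at: nearFuture) }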
I've been trying to build a car head unit using the Raspberry Pi and Android Things. In order to power the car audio I bought this amp, the Suptronics X400, but I haven't been able to use it as the default audio output, and I'm trying to integrate the Spotify SDK. I tried to create the driver, but most of the documentation here has been removed from the libraries. I'm a bit lost.
The audio user driver is no longer available in Android Things. The right way is to use the AudioTrack class and set the preferred output device, as is done in this sample project.
You may need to specify the audio bus you want to send sounds to:
private AudioDeviceInfo findAudioDevice(int deviceFlag, int deviceType) {
    AudioManager manager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
    AudioDeviceInfo[] adis = manager.getDevices(deviceFlag);
    for (AudioDeviceInfo adi : adis) {
        if (adi.getType() == deviceType) {
            return adi;
        }
    }
    return null;
}
Then find the I2S bus:
mAudioInputDevice = findAudioDevice(AudioManager.GET_DEVICES_INPUTS, AudioDeviceInfo.TYPE_BUS);
mAudioOutputDevice = findAudioDevice(AudioManager.GET_DEVICES_OUTPUTS, AudioDeviceInfo.TYPE_BUS);
Then you can run audioTrack.setPreferredDevice(mAudioOutputDevice);
Given the following code, if I use the first method, in the if branch, to obtain a MIDI destination, the code works correctly and MIDI data is sent. If I use the second method, from the else branch, no data is sent.
var client = MIDIClientRef()
var port = MIDIPortRef()
var dest = MIDIEndpointRef()

MIDIClientCreate("jveditor" as CFString, nil, nil, &client)
MIDIOutputPortCreate(client, "output" as CFString, &port)

if false {
    // First method: via the system's destination list
    dest = MIDIGetDestination(1)
} else {
    // Second method: via the external device's entity
    let device = MIDIGetExternalDevice(0)
    let entity = MIDIDeviceGetEntity(device, 0)
    dest = MIDIEntityGetDestination(entity, 0)
}

var name: Unmanaged<CFString>?
MIDIObjectGetStringProperty(dest, kMIDIPropertyDisplayName, &name)
print(name?.takeRetainedValue() as! String)  // property strings are returned retained

var gmOn: [UInt8] = [0xf0, 0x7e, 0x7f, 0x09, 0x01, 0xf7]  // GM System On sysex
var pktlist = MIDIPacketList()
var current = MIDIPacketListInit(&pktlist)
current = MIDIPacketListAdd(&pktlist, MemoryLayout<MIDIPacketList>.stride, current, 0, gmOn.count, &gmOn)
MIDISend(port, dest, &pktlist)
In both cases the printed device name is correct, and the status of every call is noErr.
I have noticed that if I ask for the manufacturer name property (kMIDIPropertyManufacturer) I get different results: specifically, with the first method I get Generic, from the USB MIDI interface to which the MIDI device is connected, and with the second method I get the value of Roland configured via the Audio MIDI Setup app.
The reason I want to use the second method is specifically so that I can filter out devices that don't have the desired manufacturer name, but as above I can't then get working output.
Can anyone explain the difference between these two methods, and why the latter doesn't work, and ideally offer a suggestion as to how I can work around that?
It sounds like you want to find only the MIDI destination endpoints to talk to a certain manufacturer's devices. Unfortunately that isn't really possible, since there is no protocol for discovering what MIDI devices exist, what their attributes are, and how they are connected to the computer.
(Remember that MIDI is primitive 1980s technology. It doesn't even require bidirectional communication. There are perfectly valid MIDI setups with MIDI devices that you can send data to, but can never receive data from, and vice versa.)
The computer knows what MIDI interfaces are connected to it (for instance, a USB-MIDI interface). CoreMIDI calls these "Devices". You can find out how many there are, how many ports each has, etc. But there is no way to find out anything about the physical MIDI devices like keyboards and synthesizers that are connected to them.
"External devices" are an attempt to get around the discovery problem. They are the things that appear in Audio MIDI Setup when you press the "Add Device" button. That's all!
Ideally your users would create an external device for each physical MIDI device in their setup, enter all the attributes of each one, and set up all the connections in a way that perfectly mirrors their physical MIDI cables.
Unfortunately, in reality:
There may not be any external devices. There is not much benefit to creating them in Audio MIDI Setup, and it's a lot of boring data entry, so most people don't bother.
If there are external devices, you can't trust any of the information that the users added. The manufacturer might not be right, or might be spelled wrong, for instance.
It's pretty unfriendly to force your users to set things up in Audio MIDI Setup before they can use your software. Therefore, no apps do that... and therefore nobody sets anything up in Audio MIDI Setup. It's a chicken-and-egg problem.
Even if there are external devices, your users might want to send MIDI to other endpoints (like virtual endpoints created by other apps) that are not apparently connected to external devices. You should let them do what they want.
The documentation for MIDIGetDevice() makes a good suggestion:
If a client iterates through the devices and entities in the system, it will not ever visit any virtual sources and destinations created by other clients. Also, a device iteration will return devices which are "offline" (were present in the past but are not currently present), while iterations through the system's sources and destinations will not include the endpoints of offline devices.
Thus clients should usually use MIDIGetNumberOfSources, MIDIGetSource, MIDIGetNumberOfDestinations and MIDIGetDestination, rather than iterating through devices and entities to locate endpoints.
In other words: use MIDIGetNumberOfDestinations and MIDIGetDestination to get the possible destinations, then let your users pick one of them. That's all.
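A minimal sketch of that approach (building a list of destination endpoints and display names for the user to pick from):
import CoreMIDI

// Collect every destination endpoint along with its display name.
var destinations: [(endpoint: MIDIEndpointRef, name: String)] = []
for index in 0..<MIDIGetNumberOfDestinations() {
    let endpoint = MIDIGetDestination(index)
    var cfName: Unmanaged<CFString>?
    if MIDIObjectGetStringProperty(endpoint, kMIDIPropertyDisplayName, &cfName) == noErr,
       let displayName = cfName?.takeRetainedValue() {
        destinations.append((endpoint, displayName as String))
    }
}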
If you really want to do more:
Given a destination endpoint, you can use MIDIEndpointGetEntity and MIDIEndpointGetDevice to get to the MIDI interface.
Given any MIDI object, you can find its connections to other objects. Use MIDIObjectGetDataProperty to get the value of property kMIDIPropertyConnectionUniqueID, which is an array of the unique IDs of connected objects. Then use MIDIObjectFindByUniqueID to get to the object. The outObjectType will tell you what kind of object it is.
But that's pretty awkward, and you're not guaranteed to find any useful information.
Based on a hint from Kurt Revis's answer, I've found the solution.
The destination that I needed to find is associated with the source of the external device, with the connection between them found using the kMIDIPropertyConnectionUniqueID property of that source.
Replacing the code in the if / else branch in the question with the code below works:
var external = MIDIGetExternalDevice(0)
var entity = MIDIDeviceGetEntity(external, 0)
var src = MIDIEntityGetSource(entity, 0)

var connID: MIDIUniqueID = 0
var dest = MIDIObjectRef()
var type = MIDIObjectType.other

// The external device's source is connected to the real destination endpoint;
// following the connection's unique ID leads back to it.
MIDIObjectGetIntegerProperty(src, kMIDIPropertyConnectionUniqueID, &connID)
MIDIObjectFindByUniqueID(connID, &dest, &type)
A property dump suggests that the connection unique ID property is really a data property (perhaps containing multiple IDs), but the resulting CFData appears to be in big-endian format, so reading it as an integer property instead seems to work fine.
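For completeness, here is a sketch of reading it as the data property it appears to be, decoding each big-endian ID (untested; assumes the layout described above):
// Read kMIDIPropertyConnectionUniqueID as raw data and decode the array of
// big-endian 32-bit unique IDs it may contain.
var unmanagedData: Unmanaged<CFData>?
if MIDIObjectGetDataProperty(src, kMIDIPropertyConnectionUniqueID, &unmanagedData) == noErr,
   let data = unmanagedData?.takeRetainedValue() as Data? {
    let idSize = MemoryLayout<MIDIUniqueID>.size
    let ids: [MIDIUniqueID] = stride(from: 0, to: data.count, by: idSize).map { offset in
        var raw: MIDIUniqueID = 0
        _ = withUnsafeMutableBytes(of: &raw) {
            data.copyBytes(to: $0, from: offset..<offset + idSize)
        }
        return MIDIUniqueID(bigEndian: raw)
    }
    // Each ID can then be resolved with MIDIObjectFindByUniqueID.
}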