I am trying to check if AirPods are connected to the iPhone. How can I check it programmatically?
For AirPods, the port.portType value is .builtInMic, which is not sufficient to check whether AirPods are connected to the iPhone:
import AVFoundation

class func isMicAvailable() -> Bool {
    // Only checks for the built-in microphone, so it says nothing
    // about whether AirPods are connected.
    let availableInputs: [AVAudioSessionPortDescription] = AVAudioSession.sharedInstance().availableInputs ?? []
    var micPresent = false
    for port in availableInputs where port.portType == .builtInMic {
        micPresent = true
    }
    return micPresent
}
One way that comes to my mind is that you could use the Core Bluetooth API to access the AirPods via Bluetooth. But this might be overkill when you can use AVAudioSession. I don't know why exactly you want to detect just AirPods and no other Bluetooth headphones. But I think builtInMic stands for the built-in microphone of the device itself, not the Bluetooth device :P If you take a look into the docs you can see it.
You didn't ask about other Bluetooth headsets, but as part of the answer I will provide this code; it should work for non-MFi headsets connected to the iPhone via Bluetooth.
Now to the AirPods part.
You probably want to use ExternalAccessory.framework to communicate with MFi Bluetooth devices such as AirPods.
I haven't worked with EAAccessory yet, but I believe you have to do something like this:
Create instance of EAAccessoryManager
Use that instance to get the connected devices
Find the AirPods via some ID
Figure out how to check if the accessory is connected, but this should be a piece of cake.
Also, a very important step is to add UISupportedExternalAccessoryProtocols to your Info.plist file.
I am a bit tired, so if you have any questions, ask; tomorrow I will write the implementation in here if no-one is faster.
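In the meantime, here is a minimal sketch of those steps with EAAccessoryManager. The filtering step is left open, since I don't know what ID AirPods would report, and (see the edit below) this turns out to be a dead end anyway:

import ExternalAccessory

// Enumerate connected MFi accessories. Note: as the edit below explains,
// AirPods are not MFi devices, so they will never show up in this list.
let accessories = EAAccessoryManager.shared().connectedAccessories
for accessory in accessories where accessory.isConnected {
    print(accessory.name, accessory.modelNumber, accessory.protocolStrings)
}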
Okay, so obviously my answer was totally wrong in the first place.
I have learned today that AirPods aren't listed among Apple's MFi devices, so the ExternalAccessoryManager obviously won't work. As stated in the answer mentioned in the footer, all you need to do is add a category to the AVAudioSession.
So the whole code is basically here :D
import AVFoundation

let session = AVAudioSession.sharedInstance()
// .allowBluetooth makes HFP devices (such as AirPods) show up as inputs.
try! session.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
guard let availableInputs = session.availableInputs else { return }
for input in availableInputs {
    if input.portType == .bluetoothHFP {
        // Do your stuff...
    }
}
Proof:
2019-01-04 02:32:13.462093+0100 Accessory games[24578:5411208] [avas] AVAudioSessionPortImpl.mm:56:ValidateRequiredFields: Unknown selected data source for Port Butcher’s AirPods (type: BluetoothHFP)
(lldb) po availableInputs
▿ 2 elements
- 0 : <AVAudioSessionPortDescription: 0x283b401b0, type = MicrophoneBuiltIn; name = iPhone Mikrofon; UID = Built-In Microphone; selectedDataSource = Vpředu>
- 1 : <AVAudioSessionPortDescription: 0x283b40250, type = BluetoothHFP; name = Butcher’s AirPods; UID = 10:94:BB:5D:5F:F7-tsco; selectedDataSource = (null)>
(lldb) po availableInputs[1].portName
"Butcher’s AirPods"
(lldb) po availableInputs[1].portType
▿ AVAudioSessionPort
- _rawValue : BluetoothHFP
(lldb)
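Since the question is specifically about AirPods rather than any Bluetooth headset, the only extra handle visible in the dump above is the port name, and users can rename their AirPods, so treat this as a fragile, hedged sketch rather than a reliable check:

import AVFoundation

// Fragile heuristic: requires the .allowBluetooth category from above to be
// set first, and matches on the port name, which the user can change.
func airPodsLikelyConnected() -> Bool {
    let session = AVAudioSession.sharedInstance()
    return session.availableInputs?.contains(where: { input in
        input.portType == .bluetoothHFP && input.portName.contains("AirPods")
    }) ?? false
}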
Sorry for the misunderstanding and for writing a totally off-topic answer at first. But hey, at least you know something about external accessories now :)
Also, you might want to take a look here.
Related
It’s quite easy to detect whether a Mac has an illuminated keyboard with ioreg at the command line:
ioreg -c IOResources -d 3 | grep '"KeyboardBacklight" =' | sed 's/^.*= //g'
But how can I programmatically get this IOKit boolean property using the latest Swift? I’m looking for some sample code.
I figured out the following with some trial and error:
Get the "IOResources" node from the IO registry.
Get the "KeyboardBacklight" property from that node.
(Conditionally) convert the property value to a boolean.
I have tested this on a MacBook Air (with keyboard backlight) and on an iMac (without keyboard backlight), and it produced the correct result in both cases.
import Foundation
import IOKit

func keyboardHasBacklight() -> Bool {
    let port: mach_port_t
    if #available(macOS 12.0, *) {
        port = kIOMainPortDefault // New name as of macOS 12
    } else {
        port = kIOMasterPortDefault // Old name up to macOS 11
    }
    let service = IOServiceGetMatchingService(port, IOServiceMatching(kIOResourcesClass))
    guard service != IO_OBJECT_NULL else {
        // Could not read IO registry node. You have to decide whether
        // to treat this as a fatal error or not.
        return false
    }
    defer { IOObjectRelease(service) } // balance IOServiceGetMatchingService
    guard let cfProp = IORegistryEntryCreateCFProperty(service, "KeyboardBacklight" as CFString,
                                                       kCFAllocatorDefault, 0)?.takeRetainedValue(),
          let hasBacklight = cfProp as? Bool
    else {
        // "KeyboardBacklight" property not present, or not a boolean.
        // This happens on Macs without keyboard backlight.
        return false
    }
    // Successfully read boolean "KeyboardBacklight" property:
    return hasBacklight
}
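Calling it is then straightforward:

// Example call site:
if keyboardHasBacklight() {
    print("This Mac has an illuminated keyboard.")
}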
As an addendum to Martin’s excellent answer, here are the notes I got from Apple’s Developer Technical Support:
Calling I/O Kit from Swift is somewhat challenging. You have
two strategies here:
You can wrap the I/O Kit API in a Swift-friendly wrapper and then use that to accomplish your task.
You can just go straight to your task, resulting in lots of ugly low-level Swift.
Pulling random properties out of the I/O Registry is not the best path
to long-term binary compatibility. Our general policy here is that we
only support properties that have symbolic constants defined in the
headers (most notably IOKitKeys.h, but there are a bunch of others).
KeyboardBacklight has no symbolic constant and thus isn’t supported.
Make sure you code defensively. This property might go away, change
meaning, change its type, and so on. Your code must behave reasonably
in all such scenarios.
Please make sure you file an enhancement request for a proper API to
get this info, making sure to include a high-level description of your
overall goal.
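To illustrate DTS's first strategy, here is a sketch of a Swift-friendly wrapper (my code, not Apple's; the helper name is made up), which also gives you the defensive nil-on-anything-unexpected behavior they recommend:

import Foundation
import IOKit

// Hypothetical generic helper: read an arbitrary property of an IORegistry
// node as a given Swift type. Returns nil if the node or the property is
// missing, or if the value has an unexpected type.
func ioRegistryProperty<T>(ofClass className: String, key: String) -> T? {
    // kIOMasterPortDefault was renamed kIOMainPortDefault in macOS 12;
    // see the availability check in the answer above.
    let service = IOServiceGetMatchingService(kIOMasterPortDefault,
                                              IOServiceMatching(className))
    guard service != IO_OBJECT_NULL else { return nil }
    defer { IOObjectRelease(service) }
    return IORegistryEntryCreateCFProperty(service, key as CFString,
                                           kCFAllocatorDefault, 0)?.takeRetainedValue() as? T
}

// keyboardHasBacklight() then reduces to:
let hasBacklight: Bool = ioRegistryProperty(ofClass: kIOResourcesClass,
                                            key: "KeyboardBacklight") ?? false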
I've hunted high and low and cannot find a solution to this problem. I am looking for a method to change the input/output devices which an AVAudioEngine will use on macOS.
When simply playing back an audio file the following works as expected:
import AudioToolbox

// outputUnit is the engine's output audio unit, e.g. engine.outputNode.audioUnit
var outputDeviceID: AudioDeviceID = xxx
let result: OSStatus = AudioUnitSetProperty(outputUnit,
                                            kAudioOutputUnitProperty_CurrentDevice,
                                            kAudioUnitScope_Global,
                                            0,
                                            &outputDeviceID,
                                            UInt32(MemoryLayout<AudioDeviceID>.size)) // size of the device ID
if result != 0 {
    print("error setting output device \(result)")
    return
}
However if I initialize the audio input (with let input = engine.inputNode) then I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK: if I avoid changing the output device, I can hear the microphone and the audio file; and if I change the output device but don't initialize the inputNode, the file plays to the specified destination.
In addition to this, I have been trying to change the input device. I understood from various places that the following should do it:
let result1: OSStatus = AudioUnitSetProperty(inputUnit,
                                             kAudioOutputUnitProperty_CurrentDevice,
                                             kAudioUnitScope_Output,
                                             0,
                                             &inputDeviceID,
                                             UInt32(MemoryLayout<AudioDeviceID>.size))
if result1 != 0 {
    print("failed with error \(result1)")
    return
}
However, this doesn't work: in most cases it throws an error (10853), although it succeeds if I select a sound card that has both inputs and outputs. It appears that when I attempt to set the device for either the output or the input node, it is actually set for both.
I would think this meant that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the issue. In some solutions I have seen online, people simply change the default input device, but this isn't a particularly nice solution.
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key because, as I understand it, both the inputNode and the outputNode are classed as "Output Units" (as they both emit sound somewhere).
Any ideas would be much appreciated as this is very very frustrating!!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine can only be assigned to a single aggregate device (that is, a device with both input and output channels): the system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device also has output capabilities (and you activate the inputNode), then that device has to be both the input and the output device, as otherwise the output appears not to work.
So the answer, I think, is that there is no answer...
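That said, since the engine is happy as long as it sees a single device with both input and output channels, one workaround is to create a private aggregate device programmatically and point the engine at that. A hedged sketch, with placeholder name/UID values (the dictionary keys come from AudioHardware.h):

import CoreAudio

// Sketch: build a private aggregate device from an input-device UID and an
// output-device UID, so AVAudioEngine sees one device with both directions.
func makeAggregateDevice(inputUID: String, outputUID: String) -> AudioDeviceID? {
    let description: [String: Any] = [
        kAudioAggregateDeviceNameKey: "My Private Aggregate", // placeholder
        kAudioAggregateDeviceUIDKey: "com.example.aggregate", // any unique string
        kAudioAggregateDeviceSubDeviceListKey: [
            [kAudioSubDeviceUIDKey: inputUID],
            [kAudioSubDeviceUIDKey: outputUID],
        ],
        kAudioAggregateDeviceIsPrivateKey: 1, // hide it from other apps
    ]
    var aggregateID = AudioDeviceID(0)
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
    return status == noErr ? aggregateID : nil
}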
My aim is to write an audio app for low latency realtime audio analysis on OSX. This will involve connecting to one or more USB interfaces and taking specific channels from these devices.
I started with the Learning Core Audio book, writing this in C. As I went down this path, it came to light that a lot of the old frameworks have been deprecated. It appears that the majority of what I would like to achieve can be written using AVAudioEngine and connected AVAudioUnits, digging down to the Core Audio level only for lower-level things like configuring the hardware devices.
I am confused here as to how to access two devices simultaneously. I do not want to create an aggregate device as I would like to treat the devices individually.
Using Core Audio I can list the audio device IDs for all devices and change the default system output device, as shown here (the input device works with similar methods). However, this only allows me one physical device, and it will always track the device selected in System Preferences.
static func setOutputDevice(newDeviceID: AudioDeviceID) {
    let propertySize = UInt32(MemoryLayout<UInt32>.size)
    var deviceID = newDeviceID
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDefaultOutputDevice),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
    AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                               &propertyAddress, 0, nil, propertySize, &deviceID)
}
I then found that kAudioUnitSubType_HALOutput is the way to go for binding to a specific, static device, which is only accessible through this subtype. I can create a component of this type using:
var outputHAL = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                          componentSubType: kAudioUnitSubType_HALOutput,
                                          componentManufacturer: kAudioUnitManufacturer_Apple,
                                          componentFlags: 0,
                                          componentFlagsMask: 0)
let component = AudioComponentFindNext(nil, &outputHAL)
guard component != nil else {
    print("Can't get input unit")
    exit(-1)
}
However, I am confused about how you create a description of this component and then find the device that matches the description. Is there a property where I can select the audio device ID and link the AUHAL to it?
I also cannot figure out how to assign an AUHAL to an AVAudioEngine. I can create a node for the HAL but cannot attach it to the engine. Finally, is it possible to create multiple kAudioUnitSubType_HALOutput components and feed these into the mixer?
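For what it's worth, the device-binding half appears to be done by setting kAudioOutputUnitProperty_CurrentDevice on the instantiated AUHAL unit. A rough sketch, with a placeholder device ID, building on the component found above:

import AudioToolbox

// Instantiate the AUHAL component found above, then bind a device to it.
var halUnit: AudioUnit?
var status = AudioComponentInstanceNew(component!, &halUnit)
var deviceID: AudioDeviceID = 0 // placeholder: a device ID from your enumeration
status = AudioUnitSetProperty(halUnit!,
                              kAudioOutputUnitProperty_CurrentDevice,
                              kAudioUnitScope_Global,
                              0,
                              &deviceID,
                              UInt32(MemoryLayout<AudioDeviceID>.size))
status = AudioUnitInitialize(halUnit!)
if status != noErr { print("AUHAL setup failed: \(status)") }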
I have been researching this for the last week but am nowhere closer to an answer. I have read up on channel mapping and everything I need to know further down the line, but getting at the audio at this lower level seems pretty undocumented, especially when using Swift.
Given the following code: if I use the first method, in the if branch, to obtain a MIDI destination, the code works correctly and MIDI data is sent. If I use the second method, from the else branch, no data is sent.
import CoreMIDI

var client = MIDIClientRef()
var port = MIDIPortRef()
var dest = MIDIEndpointRef()

MIDIClientCreate("jveditor" as CFString, nil, nil, &client)
MIDIOutputPortCreate(client, "output" as CFString, &port)

if false {
    // First method: works.
    dest = MIDIGetDestination(1)
} else {
    // Second method: no data is sent.
    let device = MIDIGetExternalDevice(0)
    let entity = MIDIDeviceGetEntity(device, 0)
    dest = MIDIEntityGetDestination(entity, 0)
}

var name: Unmanaged<CFString>?
MIDIObjectGetStringProperty(dest, kMIDIPropertyDisplayName, &name)
print(name?.takeRetainedValue() as! String) // retained: the caller owns this string

var gmOn: [UInt8] = [0xf0, 0x7e, 0x7f, 0x09, 0x01, 0xf7] // GM On sysex message
var pktlist = MIDIPacketList()
var current = MIDIPacketListInit(&pktlist)
current = MIDIPacketListAdd(&pktlist, MemoryLayout<MIDIPacketList>.stride, current, 0, gmOn.count, &gmOn)
MIDISend(port, dest, &pktlist)
In both cases the printed device name is correct, and the status of every call is noErr.
I have noticed that if I ask for the kMIDIPropertyManufacturer property I get different results: with the first method I get Generic, from the USB MIDI interface to which the MIDI device is connected, and with the second method I get the value of Roland that was configured via the Audio MIDI Setup app.
The reason I want to use the second method is specifically so that I can filter out devices that don't have the desired manufacturer name, but as above I can't then get working output.
Can anyone explain the difference between these two methods, and why the latter doesn't work, and ideally offer a suggestion as to how I can work around that?
It sounds like you want to find only the MIDI destination endpoints to talk to a certain manufacturer's devices. Unfortunately that isn't really possible, since there is no protocol for discovering what MIDI devices exist, what their attributes are, and how they are connected to the computer.
(Remember that MIDI is primitive 1980s technology. It doesn't even require bidirectional communication. There are perfectly valid MIDI setups with MIDI devices that you can send data to, but can never receive data from, and vice versa.)
The computer knows what MIDI interfaces are connected to it (for instance, a USB-MIDI interface). CoreMIDI calls these "Devices". You can find out how many there are, how many ports each has, etc. But there is no way to find out anything about the physical MIDI devices like keyboards and synthesizers that are connected to them.
"External devices" are an attempt to get around the discovery problem. They are the things that appear in Audio MIDI Setup when you press the "Add Device" button. That's all!
Ideally your users would create an external device for each physical MIDI device in their setup, enter all the attributes of each one, and set up all the connections in a way that perfectly mirrors their physical MIDI cables.
Unfortunately, in reality:
There may not be any external devices. There is not much benefit to creating them in Audio MIDI Setup, and it's a lot of boring data entry, so most people don't bother.
If there are external devices, you can't trust any of the information that the users added. The manufacturer might not be right, or might be spelled wrong, for instance.
It's pretty unfriendly to force your users to set things up in Audio MIDI Setup before they can use your software. Therefore, no apps do that... and therefore nobody sets anything up in Audio MIDI Setup. It's a chicken-and-egg problem.
Even if there are external devices, your users might want to send MIDI to other endpoints (like virtual endpoints created by other apps) that are not apparently connected to external devices. You should let them do what they want.
The documentation for MIDIGetDevice() makes a good suggestion:
If a client iterates through the devices and entities in the system, it will not ever visit any virtual sources and destinations created by other clients. Also, a device iteration will return devices which are "offline" (were present in the past but are not currently present), while iterations through the system's sources and destinations will not include the endpoints of offline devices.
Thus clients should usually use MIDIGetNumberOfSources, MIDIGetSource, MIDIGetNumberOfDestinations and MIDIGetDestination, rather than iterating through devices and entities to locate endpoints.
In other words: use MIDIGetNumberOfDestinations and MIDIGetDestination to get the possible destinations, then let your users pick one of them. That's all.
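A minimal sketch of that recommended approach: enumerate every destination, show its display name, and let the user choose.

import CoreMIDI

// List all MIDI destinations by display name.
for i in 0..<MIDIGetNumberOfDestinations() {
    let dest = MIDIGetDestination(i)
    var name: Unmanaged<CFString>?
    if MIDIObjectGetStringProperty(dest, kMIDIPropertyDisplayName, &name) == noErr,
       let displayName = name?.takeRetainedValue() as String? {
        print("Destination \(i): \(displayName)")
    }
}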
If you really want to do more:
Given a destination endpoint, you can use MIDIEndpointGetEntity and MIDIEndpointGetDevice to get to the MIDI interface.
Given any MIDI object, you can find its connections to other objects. Use MIDIObjectGetDataProperty to get the value of property kMIDIPropertyConnectionUniqueID, which is an array of the unique IDs of connected objects. Then use MIDIObjectFindByUniqueID to get to the object. The outObjectType will tell you what kind of object it is.
But that's pretty awkward, and you're not guaranteed to find any useful information.
Based on a hint from Kurt Revis's answer, I've found the solution.
The destination that I needed to find is associated with the source of the external device, with the connection between them found using the kMIDIPropertyConnectionUniqueID property of that source.
Replacing the code in the if / else branch in the question with the code below works:
var external = MIDIGetExternalDevice(0)
var entity = MIDIDeviceGetEntity(external, 0)
var src = MIDIEntityGetSource(entity, 0)
var connID : Int32 = 0
var dest = MIDIObjectRef()
var type = MIDIObjectType.other
MIDIObjectGetIntegerProperty(src, kMIDIPropertyConnectionUniqueID, &connID)
MIDIObjectFindByUniqueID(connID, &dest, &type)
A property dump suggests that the connection unique ID property is really a data property (perhaps containing multiple IDs), but the resulting CFData appears to be in big-endian format, so reading it as an integer property instead seems to work fine.
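If you'd rather honor the property's apparent data type, a hedged sketch of reading it as CFData and decoding big-endian 32-bit IDs (using the src variable from the code above) might look like this:

import CoreMIDI
import Foundation

// Read kMIDIPropertyConnectionUniqueID as data: an array of big-endian SInt32s.
var unmanagedData: Unmanaged<CFData>?
if MIDIObjectGetDataProperty(src, kMIDIPropertyConnectionUniqueID, &unmanagedData) == noErr,
   let data = unmanagedData?.takeRetainedValue() as Data? {
    let connectedIDs: [Int32] = stride(from: 0, to: data.count - 3, by: 4).map { offset in
        // Build each Int32 byte by byte to stay endian- and alignment-safe.
        data[offset..<offset + 4].reduce(Int32(0)) { ($0 << 8) | Int32($1) }
    }
    print("Connected unique IDs:", connectedIDs)
}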
So I'm stumped on this one.
In Mac OS X there is an easy way to get the "Me" card (the owner of the Mac/account) from the built-in address book API.
Has anyone found a way to find out which contact (if it exists) belongs to the owner of the iPhone?
You could use the undocumented user default:
[[NSUserDefaults standardUserDefaults] objectForKey:@"SBFormattedPhoneNumber"];
and then search the address book for the card with that phone number.
Keep in mind that since the User Default is undocumented, Apple could change at any time and you may have trouble getting into the App Store.
Another approach you could take, although it is much more fragile, is to look at the device name. If the user hasn't changed it from the default "User Name's iPhone" AND they used their real name, you could grab the user name from that. Again, not the best solution by any means, but it does give you something else to try.
The generally accepted answer to this question is to file a Radar with Apple for this feature and to prompt users to choose their card.
Contacts containers have a "me" identifier property on iOS that can be accessed using container.value(forKey: "meIdentifier").
if let containers = try? CNContactStore().containers(matching: nil) {
    containers.forEach { container in
        if let meIdentifier = container.value(forKey: "meIdentifier") as? String {
            print("Contacts:", "meIdentifier", meIdentifier)
        }
    }
}
The identifier is a legacy identifier used in the old AddressBook framework. You can still access it in CNContact:
let iOSLegacyIdentifier = contact.value(forKey: "iOSLegacyIdentifier")
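A very hedged sketch of combining the two undocumented keys above to find the "me" card. Both "meIdentifier" and "iOSLegacyIdentifier" are private KVC keys, so this can break, or be rejected in App Review, at any time:

import Contacts

// Try to find the contact whose legacy identifier matches a container's
// "me" identifier. Entirely based on undocumented keys; may return nil.
func findMeContact(in store: CNContactStore) -> CNContact? {
    guard let containers = try? store.containers(matching: nil) else { return nil }
    let meIdentifiers = containers.compactMap {
        $0.value(forKey: "meIdentifier") as? String
    }
    guard !meIdentifiers.isEmpty else { return nil }

    let request = CNContactFetchRequest(keysToFetch: [CNContactGivenNameKey as CNKeyDescriptor])
    var match: CNContact?
    try? store.enumerateContacts(with: request) { contact, stop in
        if let legacyID = contact.value(forKey: "iOSLegacyIdentifier"),
           meIdentifiers.contains("\(legacyID)") {
            match = contact
            stop.pointee = true
        }
    }
    return match
}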
There is no such API in the iPhone SDK 2.2.1 and earlier. Please file a request for it at: http://bugreport.apple.com
Edit: [Obsolete answer]
There's no API for getting the "me" card because there is no "me" card. The iPhone's contacts app has no way of marking a card as being "me", and the API reflects this.
I came up with a partial solution to this.
You can get the device name as follows:
NSString *ownerName = [[UIDevice currentDevice] name];
In English, a device is originally called, for example, 'Joe Blogg's iPhone'.
Then break out the name:
NSRange t = [ownerName rangeOfString:@"’s"];
if (t.location != NSNotFound) {
    ownerName = [ownerName substringToIndex:t.location];
}
You can then take that name and search the contacts:
CNContactStore *contactStore = [CNContactStore new];
NSPredicate *usersNamePredicate = [CNContact predicateForContactsMatchingName:usersName];
NSArray *keysToFetch = @[[CNContactFormatter descriptorForRequiredKeysForStyle:CNContactFormatterStyleFullName],
                         CNContactPhoneNumbersKey, CNContactEmailAddressesKey, CNContactSocialProfilesKey];
NSArray *matchingContacts = [contactStore unifiedContactsMatchingPredicate:usersNamePredicate keysToFetch:keysToFetch error:nil];
Of course, other languages differ in the device name string, e.g. 'iPhone von Johann Schmidt', so more parsing needs to be done for other languages, and it only works if the user hasn't changed the name of the device in iTunes to something like 'Joes phone'. But it gives you a starting point.
Well... it gives you an array of matching items :) So if there is more than one contact in that array, you just have to take pot luck and go with the first one, or work through multiple cards and take what you need from each.
I did say it's a partial solution; even though it won't work for all use cases, you might find it works for many of your users and reduces a little friction.