Microphone Device ID - MATLAB

I have three USB microphones plugged into my MacBook Air via a USB hub. In Audio MIDI Setup I aggregated the devices and selected the aggregate device as the input for the computer. However, audiorecorder is not picking up the device ID.
audiodevinfo
ans
{1x1 struct}
{1x1 struct}
and it's naming the internal microphone. Is there a way to set a device ID for each individual microphone?

You can determine if you have the correct name for a device using
mic1 = audiodevinfo(1, 0)
mic2 = audiodevinfo(1, 1)
mic3 = audiodevinfo(1, 2)
where the first argument selects output (0) or input (1) and the second argument is the device ID. Then you can address the mics separately, for example by using
audiorecorder(Fs, NBITS, NCHANS, ID)
and replacing ID with 1 for mic2.
If you plug a mic in or unplug one from the computer, you may need to restart MATLAB for it to be recognized.

Related

Add 2 I2C devices with the same address in dts

I have two different boards, each with an I2C device; the devices have the same address. I want to build one firmware image where I can control both devices, depending on which device is connected to the I2C bus:
device1@30 {
    compatible = "device1";
    reg = <0x30>;
    clock-mode = /bits/ 8 <1>;
    ....
};
device2@30 {
    compatible = "device2";
    reg = <0x30>;
    clock-mode = /bits/ 8 <1>;
    ....
};
I got an error when trying to read from the register of the second device, which says that device 2 cannot be bound. How can I resolve this problem?
NB : I have only 1 I2C Bus
I tried to add some detection function in the first and second device ( read ID register ) but it fails because I can't read the reg of the second device.
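Two enabled nodes with the same reg on one bus will clash at probe time, which matches the binding error described above. One common pattern, sketched here under the assumption that the base tree has an I2C bus labelled i2c1 (the label and node names are illustrative, not taken from the question), is to declare both nodes with status = "disabled" and enable only the one actually fitted, e.g. from a per-board overlay or a bootloader fixup:

```dts
&i2c1 {
    /* Both candidates share address 0x30 but start disabled,
     * so neither driver binds until one is switched to "okay". */
    device1: device1@30 {
        compatible = "device1";
        reg = <0x30>;
        status = "disabled";
    };
    device2: device2@30 {
        compatible = "device2";
        reg = <0x30>;
        status = "disabled";
    };
};
```

A per-board overlay (or a bootloader fdt command) then flips status = "okay" on exactly one node; dtc may still warn about the duplicate unit address, but with only one node enabled at a time the drivers no longer conflict.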

AVAudioEngine Input Node only seeing first device inputs from connected Aggregate Audio Device Swift macOS

I am trying to process the individual channels of a multichannel Soundcard, using AVAudioEngine.
In order to do that, I first discover the DeviceID of the Soundcard I want to connect to my AVAudioEngine. Then I assign the device to the auAudioUnit of the AVAudioEngine Input Node. Finally, I install a tap on the input node, where I can read the buffer.
I have some code that mostly works, but I have a bug where if I connect an Aggregate Audio Device, my Input Node only sees the input channels of the first audio device within the Aggregate Audio Device.
For example, if my Aggregate Audio Device comprises two sound cards, one with 6 inputs and the second with two inputs, my Input Node's AVAudioFormat would see 6 inputs, not 8.
I add the device like so:
try inputNode.auAudioUnit.setDeviceID(deviceID)
Then I get the buffer format like so:
let input = self.inputNode
let bus = 0
bufferFormat = input.inputFormat(forBus: bus)
Finally, I install the tap:
input.installTap(onBus: bus, bufferSize: 1024, format: bufferFormat) { [weak self] (buffer, time) in
self?.process(buffer: buffer, time: time)
}
With this method, my Input Node's auAudioUnit ends up with two input busses and two output busses. The second input bus format seems to have the correct channel count, but I'm not sure why those channels are not passed on to the associated input node.
Any help would be greatly appreciated!
Thanks,
Dan

Using multiple audio devices simultaneously on OS X

My aim is to write an audio app for low latency realtime audio analysis on OSX. This will involve connecting to one or more USB interfaces and taking specific channels from these devices.
I started with the Learning Core Audio book, writing this using C. As I went down this path, it came to light that a lot of the old frameworks have been deprecated. It appears that the majority of what I would like to achieve can be written using AVAudioEngine and connected AVAudioUnits, dropping down to the Core Audio level only for lower-level things like configuring the hardware devices.
I am confused here as to how to access two devices simultaneously. I do not want to create an aggregate device as I would like to treat the devices individually.
Using Core Audio I can list the audio device IDs for all devices and change the default system output device (and the input device using similar methods). However, this only allows me one physical device, and it will always track the device selected in System Preferences.
static func setOutputDevice(newDeviceID: AudioDeviceID) {
    let propertySize = UInt32(MemoryLayout<UInt32>.size)
    var deviceID = newDeviceID
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDefaultOutputDevice),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
    AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, propertySize, &deviceID)
}
I then found that kAudioUnitSubType_HALOutput is the way to go for specifying a static device, which is only accessible through this subtype. I can create a component of this type using:
var outputHAL = AudioComponentDescription(componentType: kAudioUnitType_Output, componentSubType: kAudioUnitSubType_HALOutput, componentManufacturer: kAudioUnitManufacturer_Apple, componentFlags: 0, componentFlagsMask: 0)
let component = AudioComponentFindNext(nil, &outputHAL)
guard component != nil else {
    print("Can't get input unit")
    exit(-1)
}
However I am confused about how you create a description of this component and then find the next device that matches the description. Is there a property where I can select the audio device ID and link the AUHAL to this?
I also cannot figure out how to assign an AUHAL to an AVAudioEngine. I can create a node for the HAL but cannot attach this to the engine. Finally is it possible to create multiple kAudioUnitSubType_HALOutput components and feed these into the mixer?
I have been trying to research this for the last week, but am nowhere closer to the answer. I have read up on channel mapping and everything I need to know further down the line, but at this level, getting at the audio seems pretty undocumented, especially when using Swift.

JTAG: How do I know the width of the Instruction Register?

Assume I have a JTAG chain with several devices from different manufacturers:
How does my software, which shall communicate with a specific device within that chain, know the length of the IR for all the other devices in the chain? I do have to know them to send a certain instruction to my device, right?
It is possible to detect the total length of all IR registers in your JTAG daisy-chain. It is also possible to detect the number of devices (or TAPs) in your chain. But you can't detect the individual IR length of a single TAP.
What you can do: you can read out the JTAG ID code register of all of your TAPs. The ID code register (in the DR path) is always 32 bits and gets selected by Test-Logic-Reset.
With the ID code you can identify the existing TAPs and look up the length of each individual IR register in its datasheet.
And yes: In general you do have to know the individual IR length of all the TAPs in your chain to communicate with one of them.
Try here: http://www.fpga4fun.com/JTAG3.html
When IR = '1...1', BYPASS is selected.
The idea is to send a lot of '1's so that, regardless of IR length, all devices will select BYPASS.
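The BYPASS trick lends itself to a small sketch. The simulation below is hardware-free (BypassChain is a stand-in for the real TDI/TDO lines, not a driver for any actual adapter): since each TAP in BYPASS contributes a single-bit shift register to the DR path, flushing the chain with 1s and then counting the clocks until a shifted-in 0 reappears at TDO yields the number of devices.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hardware-free simulation of a DR scan while every TAP is in BYPASS:
// each TAP contributes exactly one flip-flop between TDI and TDO.
struct BypassChain {
    std::vector<int> bits; // one bit per TAP; front() is next out on TDO
    explicit BypassChain(std::size_t numTaps) : bits(numTaps, 1) {}

    // Clock one bit in on TDI and return the bit that falls out on TDO.
    int shift(int tdi) {
        if (bits.empty()) return tdi; // empty chain: TDI wired to TDO
        int tdo = bits.front();
        bits.erase(bits.begin());
        bits.push_back(tdi);
        return tdo;
    }
};

// Count the TAPs: flush the chain with 1s, then shift 0s and count the
// clocks until the first 0 emerges on TDO.
std::size_t countTaps(BypassChain& chain, std::size_t maxTaps = 64) {
    for (std::size_t i = 0; i < maxTaps; ++i) chain.shift(1); // flush
    for (std::size_t n = 0; n < maxTaps; ++n)
        if (chain.shift(0) == 0) return n;
    return maxTaps; // chain longer than we were willing to probe
}
```

On real hardware the same loop runs in the Shift-DR state; the total IR length can be measured the same way by shifting through the IR path instead.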

How to calculate Voice and Data usage in BlackBerry 10 application

Hi, I am developing an application on the BlackBerry 10 platform which will calculate the data usage and voice usage of the user's device for a particular duration.
There are some feasibility checks that need to be done for this, which are as follows:
Does the BB10 API support data usage calculation? If yes, can I differentiate 3G/cellular data from WiFi data? If yes, how can I achieve this?
How can I calculate voice usage in a BB10 application? Voice usage is simply the duration of all calls that happened within a particular timespan.
Is there any API BB10 provides through which I can check if the device is currently roaming or not?
Please let me know if this can be done in a BB10 application.
Does the BB10 API support data usage calculation?
Yes, there are a few APIs for this.
Can I differentiate 3G/Cellular data from WiFi data?
Yep, you can.
1) Add the following line to your .pro file:
LIBS += -lbbdevice
2) Make sure you include:
#include <bb/device/NetworkDataUsage>
3) Getting data usage for the cellular network only:
bb::device::NetworkDataUsage *nduCell = new bb::device::NetworkDataUsage("cellular0");
nduCell->update();
quint64 bytesSent = nduCell->bytesSent();
quint64 bytesReceived = nduCell->bytesReceived();
4) Getting data usage for WiFi only:
bb::device::NetworkDataUsage *nduWifi = new bb::device::NetworkDataUsage("tiw_sta0");
nduWifi->update();
quint64 bytesSent = nduWifi->bytesSent();
quint64 bytesReceived = nduWifi->bytesReceived();
That will give you the data usage since the device started.
You will need to call ndu->update() regularly to get the most recent data usage statistics.
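Because the counters are cumulative since boot, usage over a particular window is the difference between two readings. A minimal sketch of that bookkeeping in plain C++ (the BB10 classes are deliberately left out so the logic stands alone; on the device you would feed it the bytesSent()/bytesReceived() values read after each ndu->update()):

```cpp
#include <cassert>
#include <cstdint>

// Tracks a cumulative byte counter (as returned by e.g. bytesSent())
// and reports the delta since the previous reading.
class UsageMeter {
public:
    // Feed the latest cumulative reading; returns bytes used since the
    // last call (0 on the first call, which only sets the baseline).
    std::uint64_t update(std::uint64_t cumulative) {
        std::uint64_t delta = hasBaseline_ ? cumulative - last_ : 0;
        last_ = cumulative;
        hasBaseline_ = true;
        return delta;
    }
private:
    std::uint64_t last_ = 0;
    bool hasBaseline_ = false;
};
```

Sampling the meter on a timer (say, once a minute) and summing the deltas that fall inside the user's chosen duration gives per-interval usage.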
Extra Info:
Changing the parameter passed to NetworkDataUsage changes the interface it monitors:
cellular0 == Cellular
tiw_sta0 == Wifi
ecm0 == USB
To find out which interfaces are available on your device:
1) Add the following line to your .pro file:
QT += network
2) Make sure you include:
#include <QNetworkInterface>
3) Displaying available interfaces
QList<QNetworkInterface> interfaces = QNetworkInterface::allInterfaces();
for (int i = 0; i < interfaces.size(); i++) {
qDebug() << QString::number(i) + ": " + interfaces.value(i).humanReadableName();
}
How can I calculate voice usage in a BB10 application? Voice usage is simply the duration of all calls that happened within a particular timespan.
This can be done using Phone class.
There is a signal, void callUpdated(const bb::system::phone::Call &call), through which we can get to know when an incoming call is received or an outgoing call is initiated.
By combining this with a timer class, we can calculate the voice usage of the device.
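A sketch of the duration bookkeeping, with plain integer timestamps standing in for the clock you would sample inside a callUpdated handler (the BB10 Phone/Call classes themselves are omitted, so this is only the accumulation logic, not a tested BB10 example):

```cpp
#include <cassert>
#include <cstdint>

// Accumulates total talk time from (start, end) timestamps in seconds,
// e.g. sampled when callUpdated reports a call connecting/disconnecting.
class VoiceUsage {
public:
    void callStarted(std::uint64_t t) { start_ = t; inCall_ = true; }
    void callEnded(std::uint64_t t) {
        if (inCall_ && t >= start_) total_ += t - start_;
        inCall_ = false;
    }
    std::uint64_t totalSeconds() const { return total_; }
private:
    std::uint64_t start_ = 0;
    std::uint64_t total_ = 0;
    bool inCall_ = false;
};
```

To restrict the total to a particular timespan you would store each (start, end) pair and sum only the portions overlapping the window, but the idea is the same.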