How to connect Remote IO mic to input of 2 mixer units? - iPhone

Using iPhone SDK 4.3, I am trying to connect the Remote IO mic connection to the inputs of 2 mixer units in an AUGraph. However, with the following code only the first connection works; the second fails with error code -10862 ("Audio processing graphs can only contain one output unit").
result = AUGraphConnectNodeInput (
    processingGraph,
    iONode,      // source node
    1,           // source node output bus number
    mixerNode1,  // destination node
    1            // destination node input bus number
);
result = AUGraphConnectNodeInput (
    processingGraph,
    iONode,      // source node
    1,           // source node output bus number
    mixerNode2,  // destination node
    1            // destination node input bus number
);
So how can I feed the mic input to the inputs of 2 mixers?

You cannot connect the same output to two separate inputs. The Core Audio model is a pull model: each node requests samples from the node it is connected to. If two mixers were requesting samples from one node, you would get samples 0-255 in one mixer and samples 256-511 in the other (if the buffer size was 256 samples). If you want a scenario like this to work, buffer the samples from the mic input and then give both mixers' render callbacks access to that buffer.
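A minimal sketch of that buffering idea, assuming mono 16-bit samples (names such as MicRing, MixerTap, and gRemoteIO are hypothetical, not from the question): the RemoteIO input callback pushes mic samples into a ring buffer, and a render callback attached to each mixer input bus via AUGraphSetNodeInputCallback reads from the ring with its own position, so neither mixer steals samples from the other.

#include <AudioToolbox/AudioToolbox.h>

#define kRingFrames 8192
#define kMaxSlice   1024

typedef struct {
    SInt16 ring[kRingFrames];  // mono 16-bit mic samples
    UInt32 writePos;           // advanced only by the mic callback
} MicRing;

typedef struct {
    MicRing *mic;
    UInt32   readPos;          // one independent read position per mixer
} MixerTap;

static MicRing   gMic;
static AudioUnit gRemoteIO;    // assumed to be configured elsewhere

// RemoteIO input callback: render the mic slice, push it into the ring.
static OSStatus MicInputCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    SInt16 tmp[kMaxSlice];
    if (inNumberFrames > kMaxSlice)
        return kAudioUnitErr_TooManyFramesToProcess;
    AudioBufferList abl = { 1, {{ 1, (UInt32)(inNumberFrames * sizeof(SInt16)), tmp }} };
    OSStatus err = AudioUnitRender(gRemoteIO, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, &abl);
    if (err != noErr) return err;
    for (UInt32 i = 0; i < inNumberFrames; i++)
        gMic.ring[(gMic.writePos + i) % kRingFrames] = tmp[i];
    gMic.writePos += inNumberFrames;
    return noErr;
}

// Attached to each mixer input bus with AUGraphSetNodeInputCallback;
// both taps read the same samples instead of splitting the stream.
static OSStatus MixerTapCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    MixerTap *tap = (MixerTap *)inRefCon;
    SInt16   *out = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++)
        out[i] = tap->mic->ring[(tap->readPos + i) % kRingFrames];
    tap->readPos += inNumberFrames;
    return noErr;
}

A real implementation would also have to handle ring overruns and the fact that all of these callbacks run on the audio thread.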

I know the question is really old, but I needed a solution for this as well, so this is what I came up with:
You can use kAudioUnitSubType_Splitter.
An audio unit with one input bus and two output buses. The audio unit duplicates the input signal to each of its two output buses.
Have a look at Apple's documentation.
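A rough sketch of wiring it in, reusing the node names from the question (and assuming the splitter unit is available on your OS version):

AudioComponentDescription splitterDesc = {
    .componentType         = kAudioUnitType_FormatConverter,
    .componentSubType      = kAudioUnitSubType_Splitter,
    .componentManufacturer = kAudioUnitManufacturer_Apple,
};
AUNode splitterNode;
AUGraphAddNode(processingGraph, &splitterDesc, &splitterNode);
// mic (RemoteIO output element 1) -> splitter input 0
AUGraphConnectNodeInput(processingGraph, iONode, 1, splitterNode, 0);
// splitter output 0 -> mixer 1, splitter output 1 -> mixer 2
AUGraphConnectNodeInput(processingGraph, splitterNode, 0, mixerNode1, 1);
AUGraphConnectNodeInput(processingGraph, splitterNode, 1, mixerNode2, 1);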

Related

How can I mix multiple stereo signals to one with WebAudio?

I'm writing a web app which needs to combine a number of stereo sounds into one stereo output, so I want an equivalent of gstreamer's audiomixer element, but there doesn't seem to be one in WebAudio. ChannelMerger doesn't do quite the same thing - it combines multiple mono signals into one multi-channel signal.
The documentation for AudioNode.connect says that you can connect an output to multiple inputs of other nodes and that attempts to connect the same output to the same input more than once are ignored. But it doesn't say what will happen if you try to connect multiple different outputs to the same input. Would that act as a simple mixer like I want? I suspect not, because what splitting/merging functionality WebAudio does provide (see ChannelMerger above) seems to mostly be based on converting between multiple mono signals and one multi-channel signal with a one channel to one mono signal mapping.
I could take an arbitrary node (I guess a GainNode would work, and I could take advantage of its gain functionality) and set its channelInterpretation mode to "speakers" to actually mix channels, but that only works for 1, 2, 4 or 6 inputs. I'm unlikely to need more than 6, but I will definitely need to be able to handle 3, and possibly 5. That could be done by using more than one mixer (e.g. for three channels, mix inputs 1 and 2 in one mixer, then mix its output with input 3 in a second mixer), but I think I would have to add more GainNodes to balance the mix correctly. A mixer presumably has to attenuate each input to prevent coincident peaks from clipping out of range, so with chained mixers and no compensation I'd end up with 1/4, 1/4, 1/2 instead of 1/3, 1/3, 1/3?
You almost got it right. Use a single GainNode and connect each source to the single input of the GainNode. This will sum all of the different connections and produce a single output. If you know all of the individual sources are stereo, you don't need to change anything about channelInterpretation, channelCountMode, or channelCount to get what you want.
You will probably have to adjust the gain value of the GainNode to reduce the output volume so that you don't overdrive the output device.
Other than that, this should all work.
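A minimal sketch (oscillators stand in for your real stereo sources, and the 1/3 gain is just one choice of headroom):

const ctx = new AudioContext();
// Stand-in sources; replace these with your actual stereo source nodes.
const sources = [220, 277, 330].map((freq) => {
  const osc = new OscillatorNode(ctx, { frequency: freq });
  osc.start();
  return osc;
});
const mix = new GainNode(ctx, { gain: 1 / 3 }); // headroom for 3 inputs
sources.forEach((src) => src.connect(mix));     // fan-in to one input is summed
mix.connect(ctx.destination);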

ESP8266 + MicroPython: Why do I keep getting the same value with periodic I2C reads?

I am writing some simple code using MicroPython running on a Digistump Oak, which is basically an ESP8266 breakout board. I'm trying to understand the behavior I see when performing periodic reads of the sensors via I2C.
The following code (which reads the value of the ACCEL_XOUT_H and ACCEL_XOUT_L registers) works just fine:
>>> from machine import Pin, I2C
>>> bus = I2C(scl=Pin(2), sda=Pin(0))
>>> while True:
...     h, l = bus.readfrom_mem(0x68, 0x3b, 2)
...     print(-((((h<<8)+l)^0xFFFF) + 1) if (h & (1<<7)) else (h<<8)+l)
(That print statement is just performing the conversion from two's complement.)
As expected, that prints out values from the accelerometer that change in approximately real time as I move the imu around.
But if I introduce a delay into the loop, such as...
>>> import time
>>> from machine import Pin, I2C
>>> bus = I2C(scl=Pin(2), sda=Pin(0))
>>> while True:
...     h, l = bus.readfrom_mem(0x68, 0x3b, 2)
...     print(-((((h<<8)+l)^0xFFFF) + 1) if (h & (1<<7)) else (h<<8)+l)
...     time.sleep(1)
...I see some very strange behavior. The values returned by the I2C read operation remain the same for many iterations after the IMU has changed orientation. I'm at a loss as to what is going on here: the I2C operations read from registers on the IMU which, according to the documentation, are updated at the sampling rate, which in the default configuration is 1 kHz. I don't see anything in the code or data path that could be latching or caching these values.
This is the documentation on the accelerometer registers, as found in the Register Map and Descriptions document:
These registers store the most recent accelerometer measurements. Accelerometer measurements are written to these registers at the Sample Rate as defined in Register 25.
The accelerometer measurement registers, along with the temperature measurement registers, gyroscope measurement registers, and external sensor data registers, are composed of two sets of registers: an internal register set and a user-facing read register set. The data within the accelerometer sensors’ internal register set is always updated at the Sample Rate. Meanwhile, the user-facing read register set duplicates the internal register set’s data values whenever the serial interface is idle.
Since I'm sleeping between read calls, I'm pretty sure the I2C serial interface is idle by any definition, and I don't see anything else that seems relevant to this behavior.
Do you have any suggestions as to what could be going on here?

DAQ Matlab toolbox: how to count trigger events without an edge counter channel and how to output different value at each successive trigger

I need your help with the session-based interface of the MATLAB DAQ toolbox. I have not been able to find much help in the MathWorks tutorials or examples. I am currently using a USB-6003 DAQ from NI.
So basically my system has 2 analog output channels (ch1 and ch2) and 1 analog input channel (ch3). What I am trying to do is drive the output voltage on ch1 from 0 V to 10 V in steps of 1 V with ch2 held constant, and then repeat the ch1 loop for a different voltage on ch2. As for the analog input ch3, I am triggering it some time after triggering ch1. My triggers are generated externally by a function generator.
What I have been struggling with is:
1) How to output a different value on ch1 at each successive trigger event.
2) How to change ch2's output after 11 triggers.
3) How to save the input to a different location between trigger events, so it does not get overwritten by the next event.
My main constraints are:
1) I cannot use an edge-counter channel to count the triggers, because I only have two PFI terminals and I need both: one to trigger ch1 and the other to trigger ch3 (I cannot use only one).
2) I cannot use wait or any other software timing function, because I need a high-speed acquisition system (it is for a laser microscope).
3) I need to have at least 2 sessions running in parallel, because my DAQ does not allow simultaneous tasks in the same session.
I have attached a timing diagram of the channels showing what I am trying to do. [Channels diagram]
Caution
"I need a high speed acquisition system"
USB might not be the right option. Using USB as the control/data transport mechanism is slow compared to other computer I/O, like PCIe or EtherCAT. If, after you get this working, you determine that you need lower latency and jitter, my recommendation is to try CompactRIO and LabVIEW Real-Time.
Compounding the performance problem is the on-demand nature of the USB-6003. While both analog input and analog output are controlled by electrical signals (the start trigger and sample clock) and have their data transferred automatically by the driver, the digital input and counter are only software-timed, which means that reading data isn't automatic and must be prompted by you, the user, with a read command.
Since the only way you can get digital data from a USB-6003 is on-demand, your only option is to wait for it; there is no way to be notified that a new edge has arrived. Other devices (like the PCIe-63xx X Series or cDAQ-940x devices) support digital input change detection, which causes a software event to be sent to the program. If you had one of these devices, then you wouldn't have to wait.
Suggestion
However, if you change your triggering and data strategies a little, I still think you can achieve the kind of I/O you want. You'll then be able to evaluate its speed and reliability to decide if you need to upgrade DAQ hardware.
New triggering and data strategy
The core idea is: instead of keeping the channels on their own "time base", unify them to a single time base and use that to coordinate the voltage updates. By doubling the frequency of your external trigger, all three channels can share the same timing. In other words, both the analog input task and the analog output task use the same external signal as their sample clock:
Double the frequency of the FGEN's trigger signal.
Repeat an analog output sample if the level doesn't need to change.
Throw an analog input sample away if it coincides with an output level change.
The analog output samples would be:
ch1   ch2
0.0   0.0
0.0   0.0
1.0   0.0
1.0   0.0
2.0   0.0
2.0   0.0
...
0.0   1.0
0.0   1.0
New program strategy
Now that both the analog input and analog output are using the FGEN as their sample clock, the MATLAB routine only needs to prepare the operation and then monitor/feed it. The hardware will be able to generate and acquire without any intervention from the PC, but the PC will need to periodically read analog input data and write more analog output data to keep the driver satisfied.
I don't know how much of the DAQmx API MATLAB exposes, but you can ask the driver how many samples are left in the device's buffer:
For analog input, use DAQmxGetReadAvailSampPerChan
For analog output, use DAQmxGetWriteSpaceAvail
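If MATLAB's session-based interface is enough for your rates, a minimal sketch of the shared-sample-clock setup might look like this (device and terminal names such as 'Dev1', 'PFI0', and 'PFI1' are assumptions, with the doubled FGEN signal wired to both terminals):

% Sketch only: two sessions, both clocked externally by the FGEN signal.
sOut = daq.createSession('ni');
addAnalogOutputChannel(sOut, 'Dev1', [0 1], 'Voltage');   % ch1, ch2
addClockConnection(sOut, 'External', 'Dev1/PFI0', 'ScanClock');

sIn = daq.createSession('ni');
addAnalogInputChannel(sIn, 'Dev1', 0, 'Voltage');         % ch3
addClockConnection(sIn, 'External', 'Dev1/PFI1', 'ScanClock');

ch1 = repelem(0:10, 2)';   % each level twice, because the clock is doubled
ch2 = zeros(size(ch1));
queueOutputData(sOut, [ch1 ch2]);

% Collect input as it arrives; per the strategy above, discard the
% samples that coincide with an output level change.
lh = addlistener(sIn, 'DataAvailable', @(~, evt) disp(evt.Data));

startBackground(sIn);
startBackground(sOut);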
Reference
NI USB-6003 Specifications
http://digital.ni.com/manuals.nsf/websearch/666A752FCC177B0186257CD8006C24C8

How to implement non interleaved mmap (direct) access mode in Alsa for live streaming from SRAM?

I have a buffer in SRAM of size 4096 bytes which gets updated with new raw audio data periodically:
---------------------------------------------
| 2048 bytes of left  | 2048 bytes of right |
---------------------------------------------
^                      ^
|A                     |B
NOTE: A and B are pointers to the start addresses.
As shown, the data is non-interleaved stereo (16-bit samples, 44100 Hz sampling rate), and since it is already in memory, I prefer to use MMAP'ed access instead of RW (as far as my understanding of ALSA goes, it should then not need a separate buffer to copy the data into).
The starting address of this buffer is fixed (say physical address 0x3f000000) and I am mmap'ing this buffer to get a virtual address pointer.
Now, how do I send the data to ALSA for playback, and what should my configuration be?
My current unsuccessful way is:
Resample ON
Rate 44100
SND_PCM_ACCESS_MMAP_NONINTERLEAVED
channels 2
format SND_PCM_FORMAT_S16_LE
period near 1024 frames
buffer near 2*1024 frames
void *ptr[2];
ptr[0] = A;  // mmap'ed virtual address of A
ptr[1] = B;  // mmap'ed virtual address of B
while (1)
{
    wait_for_new_data_in_buffer();
    snd_pcm_mmap_writen(handle, ptr, period_size);  // bufs is void**: pass ptr, not &ptr
}
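For reference, the configuration listed above could be applied with the hw_params API roughly as follows (the "default" device name is an assumption, and error checking is omitted):

#include <alsa/asoundlib.h>

static snd_pcm_t *open_playback(void)
{
    snd_pcm_t *handle;
    snd_pcm_hw_params_t *params;
    snd_pcm_uframes_t period_size = 1024, buffer_size = 2 * 1024;
    unsigned int rate = 44100;

    snd_pcm_open(&handle, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_hw_params_alloca(&params);
    snd_pcm_hw_params_any(handle, params);
    snd_pcm_hw_params_set_rate_resample(handle, params, 1);   /* resample ON */
    snd_pcm_hw_params_set_access(handle, params,
                                 SND_PCM_ACCESS_MMAP_NONINTERLEAVED);
    snd_pcm_hw_params_set_format(handle, params, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(handle, params, 2);
    snd_pcm_hw_params_set_rate_near(handle, params, &rate, 0);
    snd_pcm_hw_params_set_period_size_near(handle, params, &period_size, 0);
    snd_pcm_hw_params_set_buffer_size_near(handle, params, &buffer_size);
    snd_pcm_hw_params(handle, params);
    return handle;
}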
Extra info:
1. I am using an embedded board with ARM cores, running basic Linux.
2. This is a proprietary, work-related project, hence the vagueness of this question.
3. I already know that directly mmap'ing a physical address is not recommended, so do not waste your time commenting about it.
Thanks in advance.

Recording playback and mic on iPhone

In iPhone SDK 4.3, I would like to record what is being played out through the speaker via Remote IO and also record the mic input. I was wondering if the best way is to record each separately to a different channel in an audio file. If so, which APIs allow me to do this, and what audio format should I use? I was planning on using ExtAudioFileWrite to do the actual writing to the file.
Thanks
If both tracks that you have are mono, 16-bit integer, with the same sample rate:
format->mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
format->mBitsPerChannel = 16;
you can combine those tracks into 2-channel PCM by simply alternating a sample from one track with a sample from the other:
[short1_track1][short1_track2][short2_track1][short2_track2] and so on.
After that you can write these samples to the output file using ExtAudioFileWrite. That file should of course be 2-channel kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked.
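A minimal sketch of that interleaving step (the buffer names are hypothetical):

#include <stdint.h>
#include <stddef.h>

/* Interleave two mono 16-bit tracks into one stereo buffer. */
static void interleave_tracks(const int16_t *track1, const int16_t *track2,
                              int16_t *stereo_out, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        stereo_out[2 * i]     = track1[i];  /* left channel  <- track 1 */
        stereo_out[2 * i + 1] = track2[i];  /* right channel <- track 2 */
    }
}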
If one of the tracks is stereo (I don't think it is reasonable to record stereo from the iPhone mic), you can convert it to mono by taking the average of the 2 channels or by skipping every second sample.
You can separately save PCM data from the play and record callback buffers of the RemoteIO Audio Unit, then mix them using your own mixer code (DSP code) before writing the mixed result to a file.
You may or may not need to do your own echo cancellation (advanced DSP code) as well.
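A minimal sketch of the mixing step described above (names are hypothetical; the straight average is just the simplest gain staging that cannot clip):

#include <stdint.h>
#include <stddef.h>

/* Mix the play and record buffers into one mono track by averaging. */
static void mix_buffers(const int16_t *play, const int16_t *rec,
                        int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++)
        out[i] = (int16_t)(((int32_t)play[i] + (int32_t)rec[i]) / 2);
}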