What is the lowest input->output audio passthru latency possible with iPhone 4 / iOS 4.2? I just want to take input from the mic and play it over the headphones with the smallest delay. I'd like to know the theoretical minimum and minimum actually observed, if possible.
An app can usually configure Audio Unit RemoteIO input/record and output/play buffers of 256 frames at a 44.1 kHz sample rate. Thus 6 to 12 ms is probably a lower bound, just from the minimum iOS API buffer filling/handling delay (and not including OS, driver, DAC, amp, speaker, and speed-of-sound-in-air time-of-flight delays).
Related
I am working on an app that analyzes incoming audio from the built in microphone on iPhone/iPad using the iOS 6.0 SDK.
I have been struggling for some time with very low recorded levels at the lower frequencies (i.e. below 200 Hz), and I have found others on the web reporting the same problem without any answers.
Various companies working with audio tools for iOS state that (prior to iOS 6.0) there was a built-in low-frequency rolloff filter causing these weak signals at the lower frequencies, BUT those sources also state that starting with iOS 6.0 it should be possible to turn off this automatic low-frequency filtering of the input audio signals.
I have gone through the audio unit header files, the audio documentation in Xcode as well as audio-related sample code without success. I have played with the different parameters and properties of the AudioUnit (which mentions low-pass filters and such) without solving the problem.
Does anybody know how to turn off the automatic low-frequency rolloff filter for RemoteIO input in iOS 6.0?
Under iOS 6.0 you can set the current AVAudioSession's mode to AVAudioSessionModeMeasurement like this:
NSError *error = nil;
[[AVAudioSession sharedInstance] setMode:AVAudioSessionModeMeasurement error:&error];
This removes the low frequency filtering.
Link:
http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html
I hope this helps.
I'm not sure there is any way to accomplish this on these devices. Most microphones have difficulty with frequencies below 200 Hz (and above the 20 kHz range as well). In fact, a lot of speakers can barely play audio in that range either. To get a clean signal below 200 Hz, you would need good enough hardware, which I think is beyond the capabilities of the built-in microphones of the iPhone/iPad. That's probably why Apple has filtered out these low-frequency sounds: they cannot guarantee a good enough recording OR a good enough playback. Here's a link describing the situation better for the older devices (iPhone 4, iPhone 3GS, and iPad 1).
Apple is also very picky about what they will and won't let you play with. Even if you do find out where this filtering takes place, interrupting that code will most likely result in your app being rejected from the App Store. And due to the hardware limitations, you probably wouldn't be able to achieve what you want anyway.
Hope that helps!
I'm currently developing an app which plays an audio file (MP3, though I can switch to WAV to reduce decoding time) and records audio at the same time.
For synchronization purposes, I want to estimate the exact time when audio started playing.
Using AudioQueue to control each buffer, I can estimate the time when the first buffer was drained. My questions are:
What is the hardware delay between AudioQueue buffers being drained and them actually being played?
Is there a lower level API (specifically, AudioUnit), that has better performance (in hardware latency measures)?
Is it possible to place an upper limit on hardware latency using AudioQueue, with or without decoding the buffer? 5 ms is something I can work with; more than that will require a different approach.
Thanks!
The Audio Queue API runs on top of Audio Units, so the RemoteIO Audio Unit using raw uncompressed audio will allow a lower and more deterministic latency. The minimum RemoteIO buffer duration that can be set on some iOS devices (using the Audio Session API) is about 6 to 24 milliseconds, depending on application state. That may set a lower limit on both play and record latency, depending on what events you are using for your latency measurement points.
Decoding compressed audio can add roughly one to two orders of magnitude more latency, measured from the start of decoding.
My boss wants me to develop an app, using an iPhone, to recognize sound frequencies from 20-24 Hz that humans cannot hear. (iPhone frequency response: 20 Hz to 20 kHz.)
Is this possible? If yes, can anyone give me some advice? Where to start?
Before you start working on this you need to make sure that the iPhone hardware is physically capable of detecting such low frequencies. Most microphones have very poor sensitivity at low frequencies, and consumer analogue input stages typically have a high pass filter which attenuates frequencies below ~ 30 Hz. You need to try capturing some test sounds containing the signals of interest with an existing audio capture app on an iPhone and see whether the low frequency components get recorded. If not then your app is a non-starter.
What you're looking for is a fast Fourier transform (FFT). This is the main algorithm used for converting a time-domain signal to a frequency-domain one.
It seems the Accelerate framework has some FFT support, so I'd start looking at that, there are several posts about that already.
Apple has some sample OpenCL code for doing this on a Mac, but AFAIK OpenCL isn't on iOS yet.
You'd also want to check the frequency response of the microphone (I think there are some apps out there doing oscilloscope displays from the mic that would help here).
Your basic method would be to take a chunk of sound from the mic, filter it, and then maybe shift it down in frequency, depending on what you need to do with it.
I'm using the Audio Queue framework with a 44100 Hz sample rate to record data from the microphone on an iPhone.
Then I tested the frequency response of the iPhone 4 and the iPhone 4s.
The iPhone 4s mic is deaf to frequencies higher than 20 kHz, unlike the iPhone 4.
It seems the microphone is better on the previous model.
Is it a hardware limitation? A software limitation? Or some misconfiguration (noise compression enabled, or something else)?
No one can hear frequencies that high; for most people, the threshold is somewhere around 15 kHz.
So the 4s cuts out unnecessary frequencies, potentially making it better than the iPhone 4.
How many Multichannel Mixer audio units can you use simultaneously in an iOS 4.0 application? Is there a hard limit to this?
There's no hard limit per se, but you do approach the limits of the device eventually. I did some testing, and reported the results here.