I'm using Audio Queue Framework with sample rate 44100 Hz to record data from the microphone on iPhone.
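For context, a minimal sketch of this kind of Audio Queue input setup (the function and callback names are placeholders; buffer allocation and AudioQueueStart are omitted):

    #import <AudioToolbox/AudioToolbox.h>

    // Placeholder input callback: a real app would analyze
    // inBuffer->mAudioData here before re-enqueuing the buffer.
    static void MyInputCallback(void *inUserData, AudioQueueRef inAQ,
                                AudioQueueBufferRef inBuffer,
                                const AudioTimeStamp *inStartTime,
                                UInt32 inNumPackets,
                                const AudioStreamPacketDescription *inPacketDescs)
    {
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }

    // 44.1 kHz mono 16-bit linear PCM input queue; anything up to the
    // 22.05 kHz Nyquist limit can in principle appear in the capture.
    static AudioQueueRef CreateInputQueue(void)
    {
        AudioStreamBasicDescription format = {0};
        format.mSampleRate       = 44100.0;
        format.mFormatID         = kAudioFormatLinearPCM;
        format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        format.mChannelsPerFrame = 1;
        format.mBitsPerChannel   = 16;
        format.mBytesPerFrame    = 2;   // 1 channel * 2 bytes
        format.mBytesPerPacket   = 2;
        format.mFramesPerPacket  = 1;

        AudioQueueRef queue = NULL;
        AudioQueueNewInput(&format, MyInputCallback, NULL, NULL, NULL, 0, &queue);
        return queue;
    }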
Then I tested the frequency response on an iPhone 4 and an iPhone 4s.
The iPhone 4s mic is blind to frequencies higher than 20 kHz, unlike the iPhone 4.
It seems like the microphone is better on the previous model.
Is it a hardware limitation? A software limitation? Or some misconfiguration (noise compression enabled, or something else)?
No one can hear frequencies that high - for most people, the threshold is somewhere around 15 kHz.
So the 4s cuts out unnecessary frequencies - potentially making it better than the iPhone 4.
Related
I am working on an app that analyzes incoming audio from the built in microphone on iPhone/iPad using the iOS 6.0 SDK.
I have been struggling for some time with very low levels at the lower frequencies (i.e. below 200 Hz), and I have found others on the web reporting the same problem without any answers.
Various companies working with audio tools for iOS state that, prior to iOS 6.0, there was a built-in low-frequency rolloff filter that was attenuating these low-frequency signals, BUT those sources also state that starting with iOS 6.0 it should be possible to turn off this automatic low-frequency filtering of the input audio signals.
I have gone through the Audio Unit header files, the audio documentation in Xcode, and audio-related sample code without success. I have played with the different parameters and properties of the AudioUnit (which mention low-pass filters and such) without solving the problem.
Does anybody know how to turn off the automatic low-frequency rolloff filter for RemoteIO input in iOS 6.0?
Under iOS 6.0 it is possible to set the current AVAudioSession's mode to AVAudioSessionModeMeasurement like this:
[[AVAudioSession sharedInstance] setMode: AVAudioSessionModeMeasurement error:NULL];
This removes the low frequency filtering.
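A slightly fuller sketch, assuming the app also needs a record-capable category (the error handling here is illustrative):

    #import <AVFoundation/AVFoundation.h>

    // Configure the session for recording and switch to measurement mode,
    // which disables the input processing (including the low-frequency
    // rolloff) on iOS 6.0 and later.
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    BOOL ok = [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]
           && [session setMode:AVAudioSessionModeMeasurement error:&error]
           && [session setActive:YES error:&error];
    if (!ok) {
        NSLog(@"Audio session configuration failed: %@", error);
    }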
Link:
http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html
I hope this helps.
I'm not sure if there is any way to ever accomplish this on these devices. Most microphones have difficulty with frequencies below 200 Hz (and above the 20 kHz range as well). In fact, a lot of speakers can barely play audio in that range either. To get a clean signal below 200 Hz you would need good enough hardware, which I think is a bit beyond the capabilities of the built-in microphones of the iPhone/iPad. That's probably why Apple has filtered out these low-frequency sounds: they cannot guarantee a good enough recording OR a good enough playback. Here's a link describing the situation better for the older devices (iPhone 4, iPhone 3GS, and iPad 1).
Apple is also very picky about what they will and won't let you play with. Even if you do find out where this filtering is taking place, interfering with that code will most likely result in your app being rejected from the App Store. And due to hardware limitations, you probably wouldn't be able to achieve what you want anyway.
Hope that helps!
My boss wants me to develop an app, using the iPhone, to recognize sound frequencies from 20-24 Hz that humans cannot hear. (The iPhone's frequency response is 20 Hz to 20 kHz.)
Is this possible? If yes, can anyone give me some advice? Where to start?
Before you start working on this you need to make sure that the iPhone hardware is physically capable of detecting such low frequencies. Most microphones have very poor sensitivity at low frequencies, and consumer analogue input stages typically have a high pass filter which attenuates frequencies below ~ 30 Hz. You need to try capturing some test sounds containing the signals of interest with an existing audio capture app on an iPhone and see whether the low frequency components get recorded. If not then your app is a non-starter.
What you're looking for is a fast Fourier transform (FFT). This is the main algorithm used for converting a time-domain signal to a frequency-domain one.
It seems the Accelerate framework has some FFT support, so I'd start by looking at that; there are several posts about it already.
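For example, a minimal sketch using vDSP from the Accelerate framework, assuming a mono float buffer whose length n is a power of two (the function name is illustrative, and vDSP's 2x output scaling is ignored for brevity):

    #import <Accelerate/Accelerate.h>
    #include <math.h>
    #include <stdlib.h>

    // Fills magnitudes[0 .. n/2 - 1] with the magnitude spectrum of `samples`.
    // Note: bin 0 of the packed real FFT holds DC (real) and Nyquist (imag).
    static void ComputeMagnitudeSpectrum(const float *samples, int n, float *magnitudes)
    {
        vDSP_Length log2n = (vDSP_Length)log2f((float)n);
        FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        // Pack the real input into the split-complex form vDSP_fft_zrip expects.
        float *real = malloc(sizeof(float) * n / 2);
        float *imag = malloc(sizeof(float) * n / 2);
        DSPSplitComplex split = { real, imag };
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

        // In-place forward FFT, then magnitude of each bin.
        vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);
        vDSP_zvabs(&split, 1, magnitudes, 1, n / 2);

        free(real);
        free(imag);
        vDSP_destroy_fftsetup(setup);
    }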
Apple has some sample OpenCL code for doing this on a Mac, but AFAIK OpenCL isn't available on iOS yet.
You'd also want to check the frequency response of the microphone (I think there are some apps out there doing oscilloscope displays from the mic that would help here).
Your basic method would be to take a chunk of sound from the mic, filter it, and then maybe shift it down in frequency, depending on what you need to do with it.
What is the lowest input->output audio passthru latency possible with iPhone 4 / iOS 4.2? I just want to take input from the mic and play it over the headphones with the smallest delay. I'd like to know the theoretical minimum and minimum actually observed, if possible.
An app can usually configure Audio Unit RemoteIO input/record and output/play buffers of length 256 frames at a 44.1 kHz sample rate, i.e. about 5.8 ms per buffer. Thus 6 to 12 ms is probably the lower bound, just from the minimum iOS API buffer filling/handling delay (and not including OS, driver, DAC, amp, speaker, and speed-of-sound-in-air time-of-flight delays).
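For example, a sketch using the audio session C API that was current on iOS 4.x, assuming AudioSessionInitialize has already been called (the hardware may round the request up):

    #import <Foundation/Foundation.h>
    #include <AudioToolbox/AudioToolbox.h>

    // Ask for roughly a 256-frame I/O buffer at 44.1 kHz
    // (256 / 44100 ≈ 5.8 ms per buffer, so ~11.6 ms in + out before any
    // driver/hardware delays), then read back what was actually granted.
    static void RequestSmallIOBuffer(void)
    {
        Float32 preferred = 256.0f / 44100.0f;   // ≈ 0.0058 s
        AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                sizeof(preferred), &preferred);

        Float32 actual = 0;
        UInt32 size = sizeof(actual);
        AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                                &size, &actual);
        NSLog(@"Granted I/O buffer duration: %f s", actual);
    }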
I'm building an app that measures sound volume. I understand that the audio hardware in the iPhone is not as accurate as professional hardware, which is OK, but I need to know if there are any differences between the different iPhone models. For example, is it possible that the volume measured on an iPhone 3G will differ from the volume measured on an iPhone 4? Unfortunately I do not possess any models earlier than the 4, so I'm unable to test this myself.
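For context, a minimal metering sketch (recording to /dev/null so nothing is kept; the function name and settings are illustrative). AVAudioRecorder's meters report dBFS relative to the device's own input chain, which is part of why readings can differ between models without per-device calibration:

    #import <AVFoundation/AVFoundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    static AVAudioRecorder *StartMeteringRecorder(void)
    {
        NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatAppleIMA4),
                                    AVSampleRateKey       : @44100.0,
                                    AVNumberOfChannelsKey : @1 };
        NSError *error = nil;
        AVAudioRecorder *recorder =
            [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:@"/dev/null"]
                                        settings:settings
                                           error:&error];
        recorder.meteringEnabled = YES;
        [recorder record];
        return recorder;
    }

    // Poll from a timer:
    //   [recorder updateMeters];
    //   float dB = [recorder averagePowerForChannel:0];   // 0 dB = full scale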
The audio frameworks seem to be identical for identical iOS versions (except for the 2G). However the physical microphones (and acoustical environments) are different. People have published various test results, such as:
http://blog.faberacoustical.com/2009/iphone/iphone-microphone-frequency-response-comparison/
and
http://blog.faberacoustical.com/2010/iphone/iphone-4-audio-and-frequency-response-limitations/
But it's possible that the mic response may vary with manufacturing batches as well. YMMV.
I'd just like to add that no matter what I try, there seems to be no way of exporting audio through AVAssetExportSession using an iPhone 3G. It's working with 3GS, iPod Touches, 4, iPad and so on.
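A hedged runtime check along these lines (the function name is illustrative) can at least confirm whether an audio export preset is offered for a given asset on a given device before attempting the export:

    #import <AVFoundation/AVFoundation.h>

    static void CheckAudioExportSupport(NSURL *sourceURL)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];
        NSArray *presets = [AVAssetExportSession exportPresetsCompatibleWithAsset:asset];
        if (![presets containsObject:AVAssetExportPresetAppleM4A]) {
            NSLog(@"Audio-only export not offered for this asset/device: %@", presets);
        }
    }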
I'm trying to use HTTP Live Streaming in my app, and after weeks of re-encoding it now seems to validate without errors in the mediastream validator.
On my latest iPod Touch (iOS 4.0) with WiFi the videostream loads in 1sec and switches to the highest bandwidth stream.
On another test device, an iPhone 3G (iOS 3.0) with WiFi, it takes up to 30 seconds to load the stream - although I can see in my log files that it looks for the high-quality stream after 1 second. But I get a black screen with audio only for the first 30 seconds. Is this problem due to the better CPU in the latest iPod touch, or is it due to the iOS upgrade?
Also, I'm fearing another rejection by Apple, because the last time they checked my stream they only looked at each video stream for about 3 seconds and then rejected it because they didn't see any video.
Take a closer look at the segmented files. For example: can you play the first low-quality MPEG-TS segment in VLC? Is there video in it?
I've found iOS devices to be very picky about what they will and won't play. Make sure you are using lowest-common-denominator codec settings. I'm a big fan of The Complete Guide to iPod, Apple TV and iPhone Video Formats.