My boss wants me to develop an iPhone app that recognizes sound frequencies from 20-24 Hz, which humans cannot hear. (The iPhone's frequency response is 20 Hz to 20 kHz.)
Is this possible? If yes, can anyone give me some advice? Where to start?
Before you start working on this, you need to make sure that the iPhone hardware is physically capable of detecting such low frequencies. Most microphones have very poor sensitivity at low frequencies, and consumer analogue input stages typically have a high-pass filter which attenuates frequencies below ~30 Hz. Try capturing some test sounds containing the signals of interest with an existing audio capture app on an iPhone and see whether the low-frequency components get recorded. If not, then your app is a non-starter.
What you're looking for is a Fast Fourier Transform (FFT). This is the main algorithm used for converting a time-domain signal to a frequency-domain one.
It seems the Accelerate framework has some FFT support, so I'd start looking at that; there are several posts about it here already.
Apple has some sample OpenCL code for doing this on a Mac, but AFAIK OpenCL isn't on iOS yet.
You'd also want to check the frequency response of the microphone (I think there are some apps out there that show an oscilloscope display from the mic, which would help here).
Your basic method would be to take a chunk of sound from the mic, filter it, and then maybe shift it down in frequency, depending on what you need to do with it.
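A minimal sketch of the FFT step using the Accelerate framework's vDSP routines (the chunk size and function name are my own placeholders; in real code you would create the FFT setup once at startup, not per chunk):

#include <Accelerate/Accelerate.h>

enum { kNumSamples = 1024, kLog2N = 10 };   // one mic chunk; 1024 = 2^10

// Compute squared bin magnitudes for one chunk of mono float samples.
// 'magnitudes' must hold kNumSamples/2 entries covering 0..sampleRate/2.
void computeSpectrum(const float *samples, float *magnitudes) {
    FFTSetup setup = vDSP_create_fftsetup(kLog2N, kFFTRadix2);
    float real[kNumSamples / 2], imag[kNumSamples / 2];
    DSPSplitComplex split = { real, imag };

    // Pack the real input into split-complex form, run an in-place
    // real-to-complex FFT, then take squared magnitudes per bin.
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, kNumSamples / 2);
    vDSP_fft_zrip(setup, &split, 1, kLog2N, kFFTDirection_Forward);
    vDSP_zvmags(&split, 1, magnitudes, 1, kNumSamples / 2);

    vDSP_destroy_fftsetup(setup);
}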
Related
I'm working on an app where I want to detect sound frequency. How do I detect the frequency of a particular sound, like a dog bark? Does anybody have a tutorial or some sample code?
Detecting a single frequency, or even computing a single FFT, is not a reliable method for differentiating a dog bark from other common sounds of around the same volume.
What might work is sound fingerprint analysis using MFCCs, followed by statistical pattern matching against a large enough "dog" sound database. Some pointers to the type of signal processing required can be found here: Music Recognition and Signal Processing
This is non-trivial stuff more suited for multiple college textbook chapters than any short tutorial.
To detect the frequency, you can use a pitch detection algorithm; many of them are based on the FFT.
Learn more here: http://en.wikipedia.org/wiki/Pitch_detection_algorithm
You can look at this project for working iOS source code that uses an FFT to detect frequencies:
https://github.com/hollance/SimonSings
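The crude version of frequency detection is to take the FFT magnitude array, pick the loudest bin, and convert that bin index to Hz. A minimal sketch, assuming a real FFT whose numBins bins cover 0 to sampleRate/2 (as other answers on this page note, this breaks down for sounds with strong overtones):

// Convert the loudest FFT bin into a frequency in Hz.
float dominantFrequency(const float *magnitudes, int numBins, float sampleRate) {
    int peakBin = 0;
    for (int i = 1; i < numBins; i++) {
        if (magnitudes[i] > magnitudes[peakBin]) peakBin = i;
    }
    // For a real FFT of N samples, numBins = N/2, so each bin
    // spans sampleRate / (2 * numBins) Hz.
    return peakBin * sampleRate / (2.0f * numBins);
}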
I am working on an app that analyzes incoming audio from the built-in microphone on iPhone/iPad using the iOS 6.0 SDK.
I have been struggling for some time with very low levels at the lower frequencies (i.e. below 200 Hz), and I have found others on the web with the same problem but no answers.
Various companies working on audio tools for iOS state that (prior to iOS 6.0) a built-in low-frequency rolloff filter was causing these low levels at the lower frequencies, BUT those sources also state that starting with iOS 6.0 it should be possible to turn off this automatic low-frequency filtering of the input audio signals.
I have gone through the audio unit header files, the audio documentation in Xcode as well as audio-related sample code without success. I have played with the different parameters and properties of the AudioUnit (which mentions low-pass filters and such) without solving the problem.
Does anybody know how to turn off the automatic low-frequency rolloff filter for RemoteIO input in iOS 6.0?
Under iOS 6.0 you can set the current AVAudioSession to AVAudioSessionModeMeasurement like this:
[[AVAudioSession sharedInstance] setMode: AVAudioSessionModeMeasurement error:NULL];
This removes the low frequency filtering.
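In context, with the session category set, the session activated, and errors checked (a sketch; the right category depends on your app):

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Measurement mode disables system-supplied input processing,
// including the low-frequency rolloff on the microphone signal.
BOOL ok = [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]
       && [session setMode:AVAudioSessionModeMeasurement error:&error]
       && [session setActive:YES error:&error];
if (!ok) {
    NSLog(@"Audio session setup failed: %@", error);
}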
Link:
http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html
I hope this helps.
I'm not sure if there is any way to ever accomplish this on these devices. Most microphones have difficulty with frequencies below 200 Hz (and above the 20 kHz range as well). In fact, a lot of speakers can barely play audio in that range either. To get a clean signal below 200 Hz you would need good enough hardware, which I think is beyond the capabilities of the built-in microphones of the iPhone/iPad. That's probably why Apple has filtered out these low-frequency sounds: they cannot guarantee a good enough recording OR a good enough playback. Here's a link describing the situation better for the older devices (iPhone 4, iPhone 3GS, and iPad 1).
Apple is also very picky about what they will and won't let you play with. Even if you do find out where this filtering takes place, interrupting that code will most likely result in your app being rejected from the App Store. And due to the hardware limitations, you probably wouldn't be able to achieve what you want anyway.
Hope that helps!
I'm making an app on the iPhone and I need a way of detecting the pitch of the sounds coming in through the microphone (i.e. A#, G, C♭, etc.).
I assumed I'd use AVAudio, but I really don't know, and I can't find anything in the documentation.
Any help?
Musical notes are nothing more than specific frequencies of sound. You will need a way to analyze all of the frequencies in your input signal, and then find a way to isolate the individual notes.
Finding frequencies in an audio signal is done using the Fast Fourier Transform (FFT). There is plenty of source code available online to compute the FFT from an audio signal. In particular, oScope offers an open-source solution for the iPhone.
Edit: Pitch detection seems to be the technical name for what you are trying to do. The answers to a similar question here on SO may be of use.
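Once you have a pitch frequency estimate, mapping it to the nearest note name is just arithmetic. A minimal sketch, assuming equal temperament with A4 = 440 Hz (the function name is my own):

#include <math.h>

// Map a frequency in Hz to the nearest equal-tempered note name.
// Distance from A in semitones is 12 * log2(f / 440).
const char *noteName(float frequency) {
    static const char *names[12] = {
        "A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"
    };
    int semitones = (int)lroundf(12.0f * log2f(frequency / 440.0f));
    int index = ((semitones % 12) + 12) % 12;   // wrap into 0..11
    return names[index];
}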
There's nothing built-in to the iOS APIs for musical pitch estimation. You will have to code your own DSP function. The FFTs in the Accelerate framework will give you spectral frequency information from a PCM sampled waveform, but frequency is different from psycho-perceptual pitch.
There are a bunch of good and bad ways to estimate frequency and pitch. I have a long partial list of various estimation methods on my DSP resources web page.
You can look at Apple's aurioTouch sample app for an example of getting iOS device audio input and displaying its frequency spectrum.
Like @e.James said, you are looking to find the pitch of a note; it's called pitch detection. There are a ton of resources at CCRMA, Stanford University for what you are looking for. Just google for pitch detection and you will see a brilliant collection of algorithms. As far as finding the FFT of blocks of audio samples goes, you could use the built-in FFT functions of the Accelerate framework (see this and this) or use the MoMu toolkit. Using MoMu has the benefit that its functions decompose the audio stream into samples for you and make it easy to apply the FFT using its own functions.
I am working on a program that needs to capture the frequency of sound from a guitar. I have modified the aurioTouch example to output the frequency by using the frequency with the highest magnitude. It works OK for high notes but is very inaccurate on the lower strings; I believe this is due to overtones. I have researched ways to solve this problem, such as cepstrum analysis, but I am lost on how to implement them within the example code, which is unclear and hard to follow without comments. Any help would be greatly appreciated, thanks!
As you have discovered, musical pitch is not the same as peak frequency.
But trying to investigate algorithms while trying to work with real-time audio is not easy.
I suggest you separate the problems. Record some music sounds (guitar plucks, etc.) on your Mac into raw sound files. Try your chosen pitch estimation algorithms on these recorded sample sets. Then, after you get this working, figure out how to integrate your code into the iOS audio and Accelerate (for FFT) frameworks.
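To make the overtone problem concrete: a deliberately naive time-domain autocorrelation estimator like the sketch below (not cepstrum analysis, but similar in spirit) tends to find the fundamental period rather than the loudest harmonic, which is exactly where peak-bin FFT goes wrong on the low guitar strings. The period bounds are assumptions you choose for your instrument:

// Naive autocorrelation pitch estimate for one buffer of mono samples.
// minPeriod/maxPeriod (in samples) bound the search; e.g. for guitar
// at 44.1 kHz, roughly 44100/1000 = 44 up to 44100/60 = 735.
float estimatePitch(const float *x, int n, float sampleRate,
                    int minPeriod, int maxPeriod) {
    int bestLag = minPeriod;
    float bestSum = -1.0f;
    for (int lag = minPeriod; lag <= maxPeriod && lag < n; lag++) {
        float sum = 0.0f;
        for (int i = 0; i + lag < n; i++) {
            sum += x[i] * x[i + lag];   // correlate the buffer with itself at this lag
        }
        if (sum > bestSum) { bestSum = sum; bestLag = lag; }
    }
    return sampleRate / (float)bestLag;   // period in samples -> Hz
}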
I want to analyze mic audio on an ongoing basis (not just a snippet or prerecorded sample), display a frequency graph, and filter out certain aspects of the audio. Is the iPhone powerful enough for that? I suspect the answer is yes, given the Google and iPhone voice recognition, Shazam and other music recognition apps, and guitar tuner apps out there. However, I don't know what limitations I'll have to deal with.
Anyone play around with this area?
Apple's sample code aurioTouch has an FFT implementation.
The apps that I've seen do some sort of music/voice recognition need an internet connection, so it's highly likely that they just do some sort of feature calculation on the audio and send those features over HTTP, with the recognition happening on the server.
In any case, frequency graphs and filtering have been done before on lesser CPUs a dozen years ago. The iPhone should be no problem.
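For a sense of scale, basic filtering costs only a few operations per sample. A one-pole high-pass sketch (alpha is a placeholder coefficient you would derive from your cutoff frequency and sample rate):

// First-order high-pass: y[i] = alpha * (y[i-1] + x[i] - x[i-1]).
// alpha in (0,1); values closer to 1 give a lower cutoff.
void highPass(const float *in, float *out, int n, float alpha) {
    float prevIn = 0.0f, prevOut = 0.0f;
    for (int i = 0; i < n; i++) {
        out[i] = alpha * (prevOut + in[i] - prevIn);
        prevIn = in[i];
        prevOut = out[i];
    }
}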
"Fast enough" may be a function of your (or your customer's) expectations on how much frequency resolution you are looking for and your base sample rate.
An N-point FFT is on the order of N*log2(N) computations, so if you don't have enough MIPS, reducing N is a potential area of concession for you.
In many applications the sample rate is non-negotiable, but where it isn't, reducing it is another possibility.
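For example, at a 44.1 kHz sample rate a 4096-point FFT gives bins spaced 44100/4096 ≈ 10.8 Hz apart and costs on the order of 4096 * log2(4096) = 4096 * 12 ≈ 49k operations per frame; dropping to 2048 points more than halves the work but doubles the bin spacing to ≈ 21.5 Hz.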
I made an app that calculates the FFT live
http://www.itunes.com/apps/oscope
You can find my code for the FFT on GitHub (although it's a little rough)
http://github.com/alexbw/iPhoneFFT
Apple's new iPhone OS 4.0 SDK allows for built-in computation of the FFT with the "Accelerate" library, so I'd definitely start working with the new OS if it's a central part of your app's functionality.
You can't just port FFT code written in C into your app: the Thumb compiler option complicates floating-point arithmetic. You need to compile that code in ARM mode.