I am seeking a way to find the frequency of mic input.
I can record my voice or a sine tone to a temporary file.
For the recording I am using the AVFoundation framework.
I used this framework to find the peak of the signal.
What can I do to find the frequency of the signal?
There is quite a good example project from Apple. Here is the link to the aurioTouch2 sample app:
https://developer.apple.com/library/ios/samplecode/aurioTouch2/Introduction/Intro.html
I guess you need to start using Core Audio instead of AVFoundation. To get the frequency you probably need to use an FFT (Fast Fourier Transform). This is done in the example above; it even visualizes a spectrogram (frequency over time).
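If it helps, here is a rough sketch of my own (not Apple's sample code) of the core idea: run a block of mono samples through the Accelerate framework's real FFT and read off the strongest bin. The `samples` array and `sampleRate` are assumed to come from your recording code, and a serious implementation would apply a window function first.

    import Accelerate
    import Foundation

    // Rough sketch: return the frequency of the strongest FFT bin.
    // `samples` must have a power-of-two length. The peak bin is a crude
    // frequency estimate, not necessarily the perceived musical pitch.
    func peakFrequency(of samples: [Float], sampleRate: Float) -> Float? {
        let n = samples.count
        let log2n = vDSP_Length(log2(Float(n)))
        guard n > 1, n == 1 << log2n,
              let fft = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))
        else { return nil }
        defer { vDSP_destroy_fftsetup(fft) }

        var real = [Float](repeating: 0, count: n / 2)
        var imag = [Float](repeating: 0, count: n / 2)
        var magnitudes = [Float](repeating: 0, count: n / 2)

        real.withUnsafeMutableBufferPointer { rePtr in
            imag.withUnsafeMutableBufferPointer { imPtr in
                var split = DSPSplitComplex(realp: rePtr.baseAddress!,
                                            imagp: imPtr.baseAddress!)
                // Pack the real signal into split-complex form, then run
                // an in-place real-to-complex forward FFT.
                samples.withUnsafeBufferPointer { buf in
                    buf.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                       capacity: n / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                    }
                }
                vDSP_fft_zrip(fft, &split, 1, log2n, FFTDirection(FFT_FORWARD))
                vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
            }
        }

        // Strongest bin index -> frequency in Hz (bin width = sampleRate / n).
        var maxMag: Float = 0
        var maxIndex: vDSP_Length = 0
        vDSP_maxvi(magnitudes, 1, &maxMag, &maxIndex, vDSP_Length(n / 2))
        return Float(maxIndex) * sampleRate / Float(n)
    }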
Or maybe this Stack Overflow post could help you:
Get Frequency for Audio Input on iPhone
I'm developing an app that listens for a certain frequency to be detected by the microphone. As far as I have researched, I think the best way to do this is by performing a Fast Fourier Transform on the real-time audio signal (if anyone thinks there's a better way to do it, I would love to hear it). It works fine on iOS, but I can't find a way to make it work on watchOS.
I implemented the well-known tempi-fft with no problems on iOS with Swift 5; the problems come from libraries that are missing or limited in the watchOS SDK: I can't use AudioUnit or AURenderCallback to detect new data in the buffer, I can't adjust the preferred buffer duration and sample rate on the audio session on watchOS, and so on. I'm not an expert at all on the 'audio engineering' side, so I don't know how to approach this. I researched a lot but didn't find any watchOS implementations of this.
I only found the AccelerateWatch project, but that was uploaded years ago, when the Accelerate framework was not yet available on watchOS.
Any help on this would be greatly appreciated.
I have a chromatic tuner app for the Apple Watch in the iOS App Store that does exactly this, running completely inside the Apple Watch component.
Both the iOS app and the watchOS app component use AVAudioInputNode's installTap(onBus:) to acquire AVAudioPCMBuffer microphone sample buffers in near real-time. The apps then feed blocks of the audio sample data to either Goertzel routines or Accelerate framework vDSP FFT functions for further frequency analysis, and do further processing on the FFT frequency data to detect and estimate audio pitch (which is often very different from the FFT frequency magnitude peak). The watchOS display is then animated at frame rate using SpriteKit to show the results.
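In sketch form (mine, not the app's actual source), the capture side looks roughly like this, assuming microphone permission has already been granted:

    import AVFoundation

    // Minimal sketch of tap-based microphone capture: the input node
    // delivers AVAudioPCMBuffer chunks that can be handed to an FFT or
    // Goertzel routine via the `analyze` callback.
    func startListening(engine: AVAudioEngine,
                        analyze: @escaping ([Float]) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Deliver ~4096-frame buffers in near real-time.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            guard let channelData = buffer.floatChannelData?[0] else { return }
            let samples = Array(UnsafeBufferPointer(start: channelData,
                                                    count: Int(buffer.frameLength)))
            analyze(samples)   // hand the block to your frequency analysis
        }
        try engine.start()
    }

From there, each block of samples can go to Goertzel or vDSP FFT code as described above.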
I'm making an app on the iPhone and I need a way of detecting the notes of the sounds coming in through the microphone (i.e. A#, G, C♭, etc.).
I assumed I'd use AVAudio, but I really don't know, and I can't find anything in the documentation.
Any help?
Musical notes are nothing more than specific frequencies of sound. You will need a way to analyze all of the frequencies in your input signal, and then find a way to isolate the individual notes.
Finding frequencies in an audio signal is done using the Fast Fourier Transform (FFT). There is plenty of source code available online to compute the FFT from an audio signal. In particular, oScope offers an open-source solution for the iPhone.
Edit: Pitch detection seems to be the technical name for what you are trying to do. The answers to a similar question here on SO may be of use.
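To make the note-mapping step concrete: once a pitch detector hands you a fundamental frequency in Hz, converting it to the nearest note name is just arithmetic against the A4 = 440 Hz reference. A small sketch of my own (not from oScope):

    import Foundation

    // Map a frequency in Hz to the nearest note name, via the MIDI note
    // number: 69 is A4, and each semitone is a factor of 2^(1/12).
    func noteName(forFrequency hz: Double) -> String {
        let names = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]
        let midi = Int((69 + 12 * log2(hz / 440)).rounded())
        let octave = midi / 12 - 1
        return "\(names[midi % 12])\(octave)"
    }

    // noteName(forFrequency: 440.0)  -> "A4"
    // noteName(forFrequency: 261.63) -> "C4" (middle C)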
There's nothing built-in to the iOS APIs for musical pitch estimation. You will have to code your own DSP function. The FFTs in the Accelerate framework will give you spectral frequency information from a PCM sampled waveform, but frequency is different from psycho-perceptual pitch.
There are a bunch of good and bad ways to estimate frequency and pitch. I have a long partial list of various estimation methods on my DSP resources web page.
You can look at Apple's aurioTouch sample app for an example of getting iOS device audio input and displaying its frequency spectrum.
Like @e.James said, you are looking to find the pitch of a note; this is called pitch detection. There are a ton of resources at CCRMA, Stanford University for what you are looking for. Just google for pitch detection and you will see a brilliant collection of algorithms. As for taking the FFT of blocks of audio samples, you could use the built-in FFT functions of the Accelerate framework (see this and this) or use the MoMu toolkit. Using MoMu has the benefit that its functions decompose the audio stream into samples for you and make the FFT easy to apply.
I want to make an application similar to Talking Tom Cat, but I am not able to identify the approach for the sound.
How can I convert my voice into a cat sound?
Look up independent time-pitch stretching of audio; it's a digital signal processing technique. One method is the phase-vocoder technique of sound analysis/resynthesis in conjunction with resampling. There seem to be a couple of companies selling libraries that do suitable time-pitch modification.
The technique for converting a normal voice into a squeaky voice is called time-scale modification of speech. One way is to take the speech and shift its pitch by a certain amount (raising it gives the squeaky effect). Another approach is to stretch or compress the speech so that the frequencies in the voice are scaled by an appropriate amount. These are digital signal processing techniques.
Here is a great link to download sample code that provides everything you need; also refer here to gain more knowledge about your question.
In the init method of HelloWorldLayer.mm in the sample code, you can see three float values:

    time = 0.7;
    pitch = 0.8;
    formant = pow(2., 0./12.);

Just adjust the pitch value to 1.9 and you will get a really nice cat sound!
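If you would rather not depend on that sample code, a similar squeaky-voice effect can be sketched with Apple's built-in AVAudioUnitTimePitch. This is my own illustration, and `fileURL` is an assumed placeholder for your recorded voice file:

    import AVFoundation

    // Sketch: play a recorded voice file through a pitch-shift effect.
    // `fileURL` is an assumed placeholder; in real code, keep `engine`
    // and `player` alive (e.g. as properties) for the whole playback.
    func playAsCat(fileURL: URL) throws {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        let timePitch = AVAudioUnitTimePitch()
        timePitch.pitch = 800   // in cents; +800 pushes the voice into a squeaky register
        timePitch.rate = 1.3    // also speed playback up slightly

        engine.attach(player)
        engine.attach(timePitch)
        engine.connect(player, to: timePitch, format: nil)
        engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

        let file = try AVAudioFile(forReading: fileURL)
        try engine.start()
        player.scheduleFile(file, at: nil)
        player.play()
    }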
I am working on a program that needs to capture the frequency of sound from a guitar. I have modified the aurioTouch example to output the frequency with the highest magnitude. It works OK for high notes but is very inaccurate on the lower strings; I believe this is due to overtones. I researched ways to solve this problem, such as cepstrum analysis, but I am lost on how to implement it within the example code, which is unclear and hard to follow without comments. Any help would be greatly appreciated, thanks!
As you have discovered, musical pitch is not the same as peak frequency.
But investigating algorithms while also wrestling with real-time audio is not easy.
I suggest you separate the problems. Record some music sounds (guitar plucks, etc.) on your Mac into raw sound files. Try your chosen pitch estimation algorithms on these recorded sample sets. Then, after you get this working, figure out how to integrate your code into the iOS audio and Accelerate (for FFT) frameworks.
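For example, a naive time-domain autocorrelation estimator is easy to test offline on recorded guitar plucks before porting anything to the real-time code. This is only a sketch under those assumptions (`samples` and `sampleRate` are your recorded data), and it is O(n²), so keep the blocks short:

    // Naive autocorrelation pitch estimator: the lag with the strongest
    // self-similarity corresponds to the period of the fundamental.
    func estimatePitch(samples: [Float], sampleRate: Float,
                       minHz: Float = 60, maxHz: Float = 1000) -> Float? {
        let minLag = Int(sampleRate / maxHz)
        let maxLag = min(Int(sampleRate / minHz), samples.count - 1)
        guard minLag < maxLag else { return nil }

        var bestLag = 0
        var bestCorr: Float = 0
        for lag in minLag...maxLag {
            var corr: Float = 0
            for i in 0..<(samples.count - lag) {
                corr += samples[i] * samples[i + lag]
            }
            if corr > bestCorr {
                bestCorr = corr
                bestLag = lag
            }
        }
        return bestLag > 0 ? sampleRate / Float(bestLag) : nil
    }

Autocorrelation tends to lock onto the fundamental even when an overtone carries the most energy, which is exactly the failure mode you are seeing with the magnitude peak.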
What is the best way to accomplish real-time pitch shifting on the iPhone?
From what I have read so far, it seems you have to set up the RemoteIO audio unit (which is a pain in itself). Do you need to do an FFT? Are there any examples out there?
Can I just speed up / slow down playback?
Thanks
OpenAL lets you pitch-shift with the AL_PITCH source property. Maybe you could run your audio through OpenAL and use that.
I've never developed for the iPhone, but if you have sufficient control over the buffer sent to the audio device, then you could do the following:
Say you have a buffer read from an audio file. If you send only every other sample to the audio device (probably placed in some other buffer which is passed to some function), it will double the frequency and halve the time it takes to play the file.
If you want something in between, you need to compute in-between samples, i.e. resample the audio file by interpolating values between successive samples.
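To make that concrete, here is a sketch (my illustration, not production-quality resampling) that generalizes the every-other-sample trick to arbitrary rates with linear interpolation:

    // Resample a buffer by `rate`: rate = 2.0 keeps every other sample
    // (an octave up, half the duration); rate = 0.5 stretches it an
    // octave down. Linear interpolation is the simplest, lowest-quality choice.
    func resample(_ input: [Float], rate: Float) -> [Float] {
        guard rate > 0, input.count > 1 else { return input }
        let outCount = Int(Float(input.count) / rate)
        var output = [Float]()
        output.reserveCapacity(outCount)
        for i in 0..<outCount {
            let pos = Float(i) * rate              // fractional source index
            let idx = Int(pos)
            let frac = pos - Float(idx)
            let next = min(idx + 1, input.count - 1)
            // Blend the two surrounding samples.
            output.append(input[idx] * (1 - frac) + input[next] * frac)
        }
        return output
    }

Note that this changes pitch and duration together; keeping the duration constant requires the time-stretching techniques mentioned elsewhere on this page.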