Stereo convolution with AudioKit

I'm using AudioKit (version 4.10.1) to apply reverberation to a microphone signal. I just started experimenting with AKConvolution, using a stereo impulse response (.wav file) applied to the microphone signal. My code runs with no errors, but the resulting reverberation is mono rather than stereo. Applying AKZitaReverb to the same microphone signal produces nice stereo reverb, so I expected AKConvolution with a stereo impulse response to do the same. I think stereo (2-channel) convolution should be possible, but I am not super-familiar with AudioKit. I believe the microphone signal is stereo (I set AKSettings.channelCount = 2), and it seems like a stereo signal convolved with a stereo impulse response should produce stereo reverb.
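Here's a trimmed sketch of what I'm doing (the impulse response file name is a placeholder):

    import AudioKit

    // Trimmed setup: stereo session, mic routed through AKConvolution.
    // (AKManager is what AudioKit 4.10 calls the old AudioKit class.)
    AKSettings.channelCount = 2

    let mic = AKMicrophone()
    let irURL = Bundle.main.url(forResource: "stereoIR", withExtension: "wav")!
    let convolution = AKConvolution(mic, impulseResponseFileURL: irURL)

    AKManager.output = convolution
    do {
        try AKManager.start()
        convolution.start()
    } catch {
        print("AudioKit failed to start: \(error)")
    }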
I am stuck and would appreciate any ideas you might have to get me unstuck. Before posting this, I searched for other people who've encountered the same problem, but the similar posts are all a few years old, and the suggested solutions included links that are now dead and/or depended on classes and functions that are no longer part of recent versions of AudioKit.
Thanks!

Related

How to detect sound frequency for a particular sound on iPhone?

I'm doing an app where I want to detect sound frequency. How do I detect the frequency of a particular sound, like a dog bark? Does anybody have a tutorial or some sample code?
Detecting a single frequency, or even computing a single FFT, is not a reliable method for differentiating a dog bark from other common sounds of around the same volume.
What might work is sound fingerprint analysis using MFCCs, followed by statistical pattern matching against a large enough database of "dog" sounds. Some pointers to the type of signal processing required can be found here: Music Recognition and Signal Processing
This is non-trivial stuff more suited for multiple college textbook chapters than any short tutorial.
To detect the frequency, you can use a pitch detection algorithm, such as one based on the FFT.
Learn more here: http://en.wikipedia.org/wiki/Pitch_detection_algorithm
You can look at this project for working iOS source code that uses the FFT to detect frequencies:
https://github.com/hollance/SimonSings
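If you'd rather roll your own, here is a rough sketch of FFT-based peak-frequency detection using the Accelerate framework (illustration only, not code from the linked project; it assumes the buffer length is a power of two):

    import Accelerate
    import Foundation

    // Find the strongest frequency in a buffer of mono samples using
    // vDSP's real FFT. Assumes samples.count is a power of two.
    func dominantFrequency(in samples: [Float], sampleRate: Float) -> Float {
        let n = samples.count
        let log2n = vDSP_Length(log2(Float(n)))
        guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return 0 }
        defer { vDSP_destroy_fftsetup(setup) }

        var real = [Float](repeating: 0, count: n / 2)
        var imag = [Float](repeating: 0, count: n / 2)
        var magnitudes = [Float](repeating: 0, count: n / 2)

        real.withUnsafeMutableBufferPointer { realPtr in
            imag.withUnsafeMutableBufferPointer { imagPtr in
                var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                            imagp: imagPtr.baseAddress!)
                // Pack the real signal into split-complex form, run the FFT,
                // then take the squared magnitude of each bin.
                samples.withUnsafeBytes {
                    vDSP_ctoz($0.bindMemory(to: DSPComplex.self).baseAddress!, 2,
                              &split, 1, vDSP_Length(n / 2))
                }
                vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
                vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
            }
        }

        magnitudes[0] = 0  // bin 0 packs DC/Nyquist; exclude it from the peak search

        var maxValue: Float = 0
        var maxIndex: vDSP_Length = 0
        vDSP_maxvi(magnitudes, 1, &maxValue, &maxIndex, vDSP_Length(n / 2))
        return Float(maxIndex) * sampleRate / Float(n)  // bin index -> Hz
    }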

How to get samples from AudioQueue Services on iOS

I'm trying to get samples from an Audio Queue to show the spectrum of music (like in iTunes) on the iPhone.
I've read a lot of posts, but almost all of them ask about getting samples when recording, not playing :(
I'm using Audio Queue Services for streaming audio. Please help me understand the following points:
1/ Where can I get access to the samples (PCM, not MP3; I'm using an MP3 stream)?
2/ Should I collect samples in my own buffer to apply the FFT?
3/ Is it possible to get frequencies without an FFT?
4/ How can I synchronize the FFT window in my buffer with the currently playing samples?
thanks,
Update:
AudioQueueProcessingTapNew
For iOS 6+, this works fine for me. But what about iOS 5?
For playing audio, the idea is to get at the samples before you feed them to the Audio Queue callback. You may need to convert any compressed audio file format into raw PCM samples beforehand. This can be done using one of the AVFoundation converter or file reader services.
You can then copy frames of data from the same source used to feed the Audio Queue callback buffers, and apply your FFT or other DSP for visualization to them.
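For example, with the modern AVAudioFile API (iOS 8+, so not an option for the iOS 5 case above, where you would need Extended Audio File Services or AVAssetReader instead), the decode-to-PCM step is just a read loop; a rough sketch:

    import AVFoundation

    // Decode a compressed file (e.g. MP3) to float PCM buffers. The same
    // buffers can then feed both the playback path and the FFT.
    func readPCMFrames(from url: URL, process: (AVAudioPCMBuffer) -> Void) throws {
        let file = try AVAudioFile(forReading: url)  // decodes to PCM on read
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: 4096) else { return }
        while file.framePosition < file.length {
            try file.read(into: buffer)
            // buffer.floatChannelData now holds raw PCM samples.
            process(buffer)
        }
    }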
You can use either FFTs or a bank of band-pass filters to get frequency info, but the FFT is very efficient at this.
Synchronization needs to be done by trial and error, as Apple does not specify exact audio and graphics display latencies, which may differ between iOS devices and OS versions anyway. But short Audio Queue buffers or using the RemoteIO Audio Unit may give you better control of the audio latency, and OpenGL ES will give you better control of the graphics latency.

Use iPhone to recognize sound frequency in range 20-24 Hz

My boss wants me to develop an app that uses the iPhone to recognize sound frequencies from 20-24 Hz, which humans cannot hear. (iPhone frequency response: 20 Hz to 20 kHz)
Is this possible? If yes, can anyone give me some advice? Where to start?
Before you start working on this, you need to make sure that the iPhone hardware is physically capable of detecting such low frequencies. Most microphones have very poor sensitivity at low frequencies, and consumer analogue input stages typically have a high-pass filter which attenuates frequencies below ~30 Hz. You need to try capturing some test sounds containing the signals of interest with an existing audio capture app on an iPhone and see whether the low-frequency components get recorded. If not, then your app is a non-starter.
What you're looking for is a Fast Fourier Transform (FFT). This is the main algorithm used for converting a time-based signal into a frequency-based one.
It seems the Accelerate framework has some FFT support, so I'd start by looking at that; there are several posts about it already.
Apple has some sample OpenCL code for doing this on a Mac, but AFAIK OpenCL isn't on iOS yet.
You'd also want to check the frequency response of the microphone (I think there are some apps out there doing oscilloscope displays from the mic that would help here).
Your basic method would be to take a chunk of sound from the mic, filter it, and then maybe shift it down in frequency, depending on what you need to do with it.
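To make the low-frequency resolution issue concrete: a 20-24 Hz band spans only a couple of FFT bins unless the analysis window is long. A rough sketch, assuming you already have a magnitude spectrum from an FFT:

    // Sum the spectral energy between lowHz and highHz. With a 44.1 kHz
    // sample rate and a 16384-point FFT, the bin width is 44100/16384 ≈ 2.7 Hz,
    // so 20-24 Hz covers only a couple of bins; low frequencies need long windows.
    func bandEnergy(magnitudes: [Float], sampleRate: Float, fftLength: Int,
                    lowHz: Float, highHz: Float) -> Float {
        let binWidth = sampleRate / Float(fftLength)
        let lowBin = max(Int(lowHz / binWidth), 1)  // skip the DC bin
        let highBin = min(Int(highHz / binWidth), magnitudes.count - 1)
        guard lowBin <= highBin else { return 0 }
        return magnitudes[lowBin...highBin].reduce(0, +)
    }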

AVAudio detect note/pitch/etc. iPhone xcode objective-c

I'm making an app on the iPhone, and I need a way of detecting the pitch of the sounds coming in through the microphone (i.e. A#, G, C♭, etc.).
I assumed I'd use AVAudio, but I really don't know, and I can't find anything in the documentation...
Any help?
Musical notes are nothing more than specific frequencies of sound. You will need a way to analyze all of the frequencies in your input signal, and then find a way to isolate the individual notes.
Finding frequencies in an audio signal is done using the Fast Fourier Transform (FFT). There is plenty of source code available online to compute the FFT from an audio signal. In particular, oScope offers an open-source solution for the iPhone.
Edit: Pitch detection seems to be the technical name for what you are trying to do. The answers to a similar question here on SO may be of use.
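Once you have a frequency estimate, mapping it to the nearest note name is the easy part. A minimal sketch using the standard equal-temperament relation (A4 = 440 Hz, MIDI note 69):

    import Foundation

    // Map a frequency in Hz to the nearest note name, e.g. 261.63 -> "C4".
    func noteName(forFrequency f: Double) -> String {
        let names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
        let midi = Int((69 + 12 * log2(f / 440.0)).rounded())
        let octave = midi / 12 - 1
        return "\(names[midi % 12])\(octave)"
    }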
There's nothing built into the iOS APIs for musical pitch estimation. You will have to code your own DSP function. The FFTs in the Accelerate framework will give you spectral frequency information from a PCM sampled waveform, but frequency is different from psycho-perceptual pitch.
There are a bunch of good and bad ways to estimate frequency and pitch. I have a long partial list of various estimation methods on my DSP resources web page.
You can look at Apple's aurioTouch sample app for an example of getting iOS device audio input and displaying its frequency spectrum.
Like @e.James said, you are looking to find the pitch of a note; it's called pitch detection. There are a ton of resources at CCRMA, Stanford University for what you are looking for. Just google for pitch detection and you will see a brilliant collection of algorithms. As for computing the FFT of blocks of audio samples, you could use the built-in FFT functions of the Accelerate framework (see this and this) or use the MoMu toolkit. MoMu has the benefit of decomposing the audio stream into samples for you and making the FFT easy to apply with its own functions.

Calculating frequency with Apple's aurioTouch example

I am working on a program that needs to capture the frequency of sound from a guitar. I have modified the aurioTouch example to output the frequency with the highest magnitude. It works OK for high notes but is very inaccurate on the lower strings, which I believe is due to overtones. I researched ways to solve this problem, such as cepstrum analysis, but I am lost on how to implement them within the example code, which is unclear and hard to follow without comments. Any help would be greatly appreciated, thanks!
As you have discovered, musical pitch is not the same as peak frequency.
But investigating algorithms while also wrestling with real-time audio is not easy.
I suggest you separate the problems. Record some music sounds (guitar plucks, etc.) on your Mac into raw sound files. Try your chosen pitch estimation algorithms on these recorded sample sets. Then, after you get this working, figure out how to integrate your code into the iOS audio and Accelerate (for FFT) frameworks.
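As one concrete candidate to try on those recordings, here is a rough, unoptimized sketch of a time-domain autocorrelation pitch estimator; unlike picking the largest FFT bin, the autocorrelation peak tracks the fundamental even when an overtone carries more energy, which is exactly the low-string problem described above:

    // Naive autocorrelation pitch estimate over a plausible guitar range.
    // O(n * lags); vDSP_conv could accelerate the inner loop if needed.
    func estimatePitch(samples: [Float], sampleRate: Float,
                       minHz: Float = 60, maxHz: Float = 1000) -> Float {
        let minLag = Int(sampleRate / maxHz)
        let maxLag = min(Int(sampleRate / minHz), samples.count - 1)
        var bestLag = minLag
        var bestValue = -Float.greatestFiniteMagnitude
        for lag in minLag...maxLag {
            var sum: Float = 0
            for i in 0..<(samples.count - lag) {
                sum += samples[i] * samples[i + lag]
            }
            let score = sum / Float(samples.count - lag)  // normalize out lag bias
            if score > bestValue {
                bestValue = score
                bestLag = lag
            }
        }
        return sampleRate / Float(bestLag)
    }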