Playing multiple sine waves on iPhone with AudioUnit(s)

I'm currently working on a program that can output a sine wave of a set frequency through the speaker/headphones on an iPhone.
Now I want to output multiple sine waves, and I don't know which approach is better. Should I just sum all the sine waves and play them through one AudioUnit, or create a separate AudioUnit for each sine wave?
I'm currently leaning towards the first solution, but I can't say why; it's just my instinct. It would be great if someone could explain why the solution they'd choose is better :)
Thanks!

You will have more precise control of the timing of the mix (where each sine wave starts and ends), and of the quality of the mix, if you sum the waves in one DSP mixer and play the result through a single Audio Unit. There will also be a tiny bit less thread-switching overhead taking up CPU cycles than with one Audio Unit per wave.
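As a rough illustration of the single-callback approach, here is a minimal sketch of an AURenderCallback that sums a handful of sine oscillators into one output buffer. The SineBank struct, the three-wave setup, and the mono Float32 stream format are assumptions made for the example, not part of the question:

```c
#include <AudioUnit/AudioUnit.h>
#include <math.h>

#define NUM_WAVES   3
#define SAMPLE_RATE 44100.0

typedef struct {
    double phase[NUM_WAVES];      /* current phase of each oscillator */
    double frequency[NUM_WAVES];  /* Hz */
} SineBank;

static OSStatus RenderSines(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    SineBank *bank = (SineBank *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        double sample = 0.0;
        for (int i = 0; i < NUM_WAVES; i++) {
            sample += sin(bank->phase[i]);
            bank->phase[i] += 2.0 * M_PI * bank->frequency[i] / SAMPLE_RATE;
            if (bank->phase[i] > 2.0 * M_PI)
                bank->phase[i] -= 2.0 * M_PI;
        }
        /* Scale by the number of waves so the mix cannot clip. */
        out[frame] = (Float32)(sample / NUM_WAVES);
    }
    return noErr;
}
```

Because everything is summed in one callback, each wave can be started or stopped on an exact frame boundary, which is the timing control mentioned above.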

Related

How to play a row of numbers on an iPhone as audio?

I'm looking at the output of an electroencephalogram sensor. This data is displayed on screen in raw form at about 200 Hz. I read that in the old days it was possible to hook such output up to a speaker and hear the waveform instead of seeing it, so I'm interested in whether it is possible to replicate this experiment with a modern iPhone. How can I take a waveform that is displayed in graph form and package it in such a way that it can be played through an iPhone's speakers live? In other words, I'm looking to stream EEG data through some sort of audio player, and I need to know how to create audio packets from this data on the fly.
Here's the raw waveform; it is displayed at 200 data points per second (200 Hz).
After I clean up and process the waveform, I'm interested in how far it deviates from the average of the waveform. In this case, I think this could be played as the increasing/decreasing amplitude of a sine wave, which may be easier.
Thank you for your input
Here's a good tutorial on generating a sine tone for output through CoreAudio:
http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
The RenderProc is the bit of code you'll be twiddling with. In the example they use an NSSlider to change the frequency; you just need to feed it your signal data instead.
One idea I had for playing sound in response to changes in signal amplitude is to divide the amplitude into a set of discrete bands of values (for example 0-10, 10-20, 20-30, etc.) and assign a sound to each band. Then, using Audio Services or System Sound, it might be possible to loop a unique sound fragment for each band.
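To make the banding idea concrete, here is a hypothetical sketch; the band width, the number of bands, and the tone table are all invented for illustration:

```c
#include <math.h>

#define NUM_BANDS  8
#define BAND_WIDTH 10.0f  /* assumed amplitude units per band (0-10, 10-20, ...) */

/* One assumed tone per band; any mapping to sound fragments would work. */
static const float kBandToneHz[NUM_BANDS] = {
    220.0f, 247.0f, 262.0f, 294.0f, 330.0f, 349.0f, 392.0f, 440.0f
};

/* Map a sample's deviation from the running mean to a band's tone. */
float tone_for_sample(float sample, float running_mean)
{
    float deviation = fabsf(sample - running_mean);
    int band = (int)(deviation / BAND_WIDTH);
    if (band >= NUM_BANDS)
        band = NUM_BANDS - 1;
    return kBandToneHz[band];
}
```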

Audio processing with fft iphone app [duplicate]

I'm trying to write a simple tuner (no, not to make yet another tuner app), and am looking at the aurioTouch sample source (has anyone tried to comment this code??).
My worry is that aurioTouch doesn't seem to actually work very well when looking at the frequency-domain graph. I play a single note on an instrument and I don't see a nicely ordered, small set of frequencies with one strong peak at the appropriate frequency of the note.
Has anyone used aurioTouch enough to know whether the underlying code is functional, or whether it is just a crude sample?
My other options are to use FFTW or KISS FFT. Does anyone have experience with those?
Thanks.
You're expecting the wrong thing!!
Not the library's fault
Whether the library produces it properly or not, you're looking for a pattern that rarely exists in real-life sounds. Only a perfect, electronically generated sine wave will cause even a partway-discrete-looking 'spike' in the frequency graph. If you don't believe it, try firing up a 'spectrum analyzer' visualization in Winamp or Media Player; it's not so much the PC's fault.
Real sound waves are complicated animals
Picture a sawtooth or square wave in your mind's eye. Those sharp turnarounds, the corners or points on the wave, look like tons of higher harmonics to the FFT (or even to a true Fourier transform). And if you've ever seen a real square wave or sawtooth on a scope, or even a 'sine wave' produced by an instrument that is supposed to produce one, take a look at all the sharp nooks and crannies in just ONE note (if you don't have a scope, just zoom way in on the wave in Audacity; the more you zoom, the higher the frequencies you're looking at). Yep, those deviations all count as frequencies.
It's hard to tell the difference between one note and a whole orchestra sometimes in a spectrum analysis.
But I hear single notes!
So how does the ear do it? It considers the entire waveform. Then your lower brain lies to your upper brain about what the input is: one note, not a mess of overtones.
You can't do it as fully, but you can approximate it via 'training.'
Approximation: building some smarts
PLAY the note on the instrument and 'save' the frequency graph. Do this for notes in several frequency ranges, or better yet for all notes.
Then interpolate to fill in the gaps (by 1/2 or 1/4 steps) by shifting the saved graphs for that instrument by a factor of 2^(1/12) (or 2^(1/24) for 1/4 steps, etc.); see the sketch below.
Figure out how to store them in a quickly searchable data structure like a BST or trie, except that it would have to return a 'how close is this' score. It would also have to identify a match by the proportions of the frequencies, in case the note comes in at different volumes.
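A minimal sketch of the equal-temperament relation behind that interpolation (440 Hz is just the conventional A4 reference):

```c
#include <math.h>

/* Each half step scales frequency by 2^(1/12) (steps_per_octave = 12);
   use steps_per_octave = 24 for quarter steps. */
double note_frequency(double reference_hz, double steps, int steps_per_octave)
{
    return reference_hz * pow(2.0, steps / steps_per_octave);
}

/* Example: note_frequency(440.0, 1, 12) is about 466.16 Hz,
   a half step above A4. */
```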
Using the smarts
Next time you're looking for a note from that instrument, just take the 'heard' frequency graph and look it up in that data structure. You can record several instruments that make different waveforms and search for them too. If there are background sounds or multiple notes, take the closest match. Then, if you want to identify further notes, 'subtract' the found frequency pattern from the sampled one, and lather, rinse, repeat. One possible volume-insensitive score is sketched below.
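One hypothetical way to compute a 'how close is this' score that ignores volume, as the matching step requires, is a normalized correlation between the stored and heard spectra. This particular measure is my suggestion for illustration, not something taken from aurioTouch:

```c
#include <math.h>

/* Normalized correlation of two magnitude spectra, in [0, 1] for
   non-negative inputs. Scaling either spectrum leaves the score
   unchanged, so louder or quieter notes score the same. */
float spectrum_score(const float *stored, const float *heard, int bins)
{
    double dot = 0.0, ns = 0.0, nh = 0.0;
    for (int i = 0; i < bins; i++) {
        dot += (double)stored[i] * heard[i];
        ns  += (double)stored[i] * stored[i];
        nh  += (double)heard[i]  * heard[i];
    }
    if (ns == 0.0 || nh == 0.0)
        return 0.0f;
    return (float)(dot / (sqrt(ns) * sqrt(nh)));
}
```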
It won't work for your voice...
If you've ever tried to tune yourself by singing into a guitar tuner, you'll know that tuners aren't that clever. Of course, some instruments (voice especially) really float around the pitch and generate an ever-evolving waveform (even without somebody singing).
What are you trying to accomplish?
You wouldn't have to get this fancy for a 'simple' tuner app, but if you're not making just another tuner app, then I'm guessing you actually want to identify notes (e.g., maybe you want to auto-generate MIDI files from songs on the radio ;-)
Good luck. I hope you find a library that does all this junk instead of having to roll your own.
Edit 2017
Note this webpage: http://www.feilding.net/sfuad/musi3012-01/html/lectures/015_instruments_II.htm
Well down the page, there are spectrum analyses of various organ pipes. There are many, many overtones. These are possible to detect - with enough work - if you 'train' your app with them first (just like telling a kid, 'this is what a clarinet sounds like...')
aurioTouch looks weird because the frequency axis is on a linear scale. It's very difficult to interpret FFT output when the x-axis is anything other than a logarithmic scale (traditionally log2).
If you can't use aurioTouch's integer-FFT, check out my library:
http://github.com/alexbw/iPhoneFFT
It uses double-precision, has support for multiple window types, and implements Welch's method (which should give you more stable spectra when viewed over time).
@zaph, the FFT does compute a true Discrete Fourier Transform; it is simply an efficient algorithm for doing so, classically working on power-of-two sample counts (hence the radix-2, bit-reversal structure).
FFTs use frequency bins, and the bin width is the sample rate divided by the FFT size, so the FFT parameters set your resolution; see the sketch below. To capture a frequency at all, you need to sample at a rate at least twice the highest frequency present in the signal. Alternatively, in the time domain, you can find the time between cycles; if it is not a pure tone this will of course be harder.
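To make the bin arithmetic concrete, a small sketch (sample_rate, fft_size, and the magnitude array stand for whatever your FFT produced):

```c
/* Width of one FFT bin in Hz. */
float bin_width_hz(float sample_rate, int fft_size)
{
    return sample_rate / (float)fft_size;
}

/* Frequency of the strongest bin. With a 44100 Hz sample rate and a
   1024-point FFT, each bin is about 43 Hz wide, which bounds accuracy. */
float peak_frequency_hz(const float *magnitudes, int num_bins,
                        float sample_rate, int fft_size)
{
    int peak = 0;
    for (int i = 1; i < num_bins; i++)
        if (magnitudes[i] > magnitudes[peak])
            peak = i;
    return peak * bin_width_hz(sample_rate, fft_size);
}
```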
I am using the Ooura FFT to compute the FFT of accelerometer data, and I do not always obtain the correct spectrum. For some reason, Ooura FFT sometimes produces completely wrong results, with spectral magnitudes on the order of 10^200 across all frequencies.

Peak detection in Performous code

I was looking to implement voice pitch detection on the iPhone using the HPS method, but the detected tones are not very accurate. Performous does a decent job of pitch detection.
I looked through the code, but I did not fully get the theory behind the calculations.
They use an FFT and find the peaks, but the part where they use the phase of the FFT output got me confused. I figure they use some heuristics for voice frequencies.
So, could anyone please explain the algorithm used in Performous to detect pitch?
Performous extracts pitch from the microphone, and the code is open source. Here is a description of what the algorithm does, from the guy who coded it (Tronic on irc.freenode.net#performous):
1. PCM input (with buffering)
2. FFT (1024 samples at a time; remove 200 samples from the front of the buffer afterwards; see the sketch after this list)
3. Reassignment method (against the previous FFT, which was 200 samples earlier)
4. Filtering of peaks (this part could be done much better or even left out)
5. Combining peaks into sets of harmonics (we call the combination a tone)
6. Temporal filtering of tones (update the set of tones detected earlier instead of simply using the newly detected ones)
7. Pick the best vocal tone (frequency limits, weighting; it could use the harmonic array as well, but I don't think we do)
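For what it's worth, here is a minimal sketch of just the buffering in steps 1 and 2 (a 1024-sample window advanced by 200 samples per FFT). The on_window callback stands in for whatever FFT routine you plug in, and the names are invented for the example:

```c
#include <string.h>

#define WIN 1024  /* FFT window size (step 2) */
#define HOP 200   /* samples dropped from the front after each FFT */

typedef struct {
    float buf[WIN];
    int   filled;
} HopBuffer;

/* Feed incoming PCM; fires `on_window` once per full 1024-sample window,
   then slides the window forward by 200 samples. */
static void push_samples(HopBuffer *hb, const float *in, int n,
                         void (*on_window)(const float *window, int len))
{
    for (int i = 0; i < n; i++) {
        hb->buf[hb->filled++] = in[i];
        if (hb->filled == WIN) {
            on_window(hb->buf, WIN);
            memmove(hb->buf, hb->buf + HOP,
                    (WIN - HOP) * sizeof(float));
            hb->filled = WIN - HOP;
        }
    }
}
```

Successive windows overlap by 824 samples, which is what makes the reassignment step (3) against the previous FFT possible.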
I still wasn't able to figure it out and implement it from this information. If anyone manages it, please post your results here and comment on this answer so that SO notifies me.
The task would be to create a minimal C++ wrapper around this code.

How to determine the frequency of the input recorded voice in iphone?

I am new to iPhone development. I am doing research on voice recording on the iPhone, and I have downloaded the "SpeakHere" sample program from Apple. I want to determine the frequency of my voice as recorded on the iPhone. Please guide me. Thanks.
In the context of processing human speech, there's really no such thing as "the" frequency.
The signal will be a mix of many different frequencies, so it might be more fruitful to think in terms of a spectrum rather than a single frequency. Even for a sustained musical note with a fixed pitch, there will be plenty of overtones and harmonics present in addition to the fundamental frequency of the note. And for actual speech, the frequency spectrum changes drastically even within a short clip, due to the different tonal characteristics of vowels and consonants.
With that said, it does make some sense to consider the peak frequency of a voice recording. You could calculate the Fast Fourier Transform of your voice clip, then find the frequency bin with the largest response. You may also be interested in the concept of a spectrogram, which represents how the audio spectrum of a signal varies over time.
Use Audacity. Take a small recording of typical speech and cut it down to one wavelength, from one peak to the next. Subtract the two times, divide 1 by that number, and you'll get the frequency of your wave in Hz.
Example:
In my audio clip, my waveform runs from 0.0760 to 0.0803 seconds.
0.0803 - 0.0760 = 0.0043
1/0.0043 = 232.558 Hz, my typical speech frequency
This might give you a good basis for an analyzer: detect the peaks, measure the time between them, and average the results, as in the sketch below.
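A minimal sketch of that peak-to-peak averaging, assuming you already have the timestamps (in seconds) of successive waveform peaks:

```c
/* Average the peak-to-peak periods and invert to get a frequency in Hz.
   With two peaks at 0.0760 s and 0.0803 s this returns 1/0.0043,
   about 232.6 Hz, matching the Audacity example above. */
double frequency_from_peaks(const double *peak_times, int count)
{
    if (count < 2)
        return 0.0;  /* need at least one full period */
    double span = peak_times[count - 1] - peak_times[0];
    double avg_period = span / (count - 1);
    return 1.0 / avg_period;
}
```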
You'll need to use Apple's Accelerate framework to take an FFT of the relevant audio. The FFT will convert the audio in the time domain to the frequency domain. The Accelerate framework supports the FFT and will allow you to do frequency analysis in real-time.
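For reference, here is a minimal sketch of that approach using Accelerate's vDSP routines, assuming mono Float32 input whose length n is a power of two. Note that vDSP's real FFT packs DC and Nyquist together into bin 0 and scales the output by a factor of 2, which a real analyzer would account for:

```c
#include <Accelerate/Accelerate.h>
#include <math.h>

/* Fill `magnitudes` (n/2 bins) with the squared magnitudes of the
   spectrum. Bin i covers frequency i * sampleRate / n Hz. */
void magnitude_spectrum(const float *samples, int n, float *magnitudes)
{
    vDSP_Length log2n = (vDSP_Length)log2f((float)n);
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    /* Repack the real input into the split-complex layout vDSP expects. */
    float real[n / 2], imag[n / 2];
    DSPSplitComplex split = { real, imag };
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

    vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);
    vDSP_zvmags(&split, 1, magnitudes, 1, n / 2);

    vDSP_destroy_fftsetup(setup);
}
```

In a real-time app you would create the FFTSetup once and reuse it on every buffer rather than rebuilding it per call.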
