How to play a row of numbers on an iPhone as audio?

I'm looking at the output of an electroencephalogram (EEG) sensor. This data is displayed on screen in raw form at about 200Hz. I've read that in the old days it was possible to hook such an output up to a speaker and hear the waveform instead of seeing it, so I'm interested in whether this experiment can be replicated with a modern iPhone. How can I take a waveform that is displayed in graph form and package it in such a way that it can be played through an iPhone's speakers live? In other words, I'm looking to stream EEG data through some sort of audio player, and I need to know how to create audio packets from this data on the fly.
Here's the raw waveform; it is displayed at 200 data points per second (200Hz).
After I clean up and process the waveform, I'm interested in how far it deviates from its average. In this case, I think the deviation can be played as the increasing/decreasing amplitude of a sine wave, which may be easier.
Thank you for your input.

Here's a good tutorial on generating a sine tone for output through CoreAudio:
http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
The RenderProc is the bit of code you'll be twiddling with. In the example they use an NSSlider to change the frequency; you just need to feed it your signal data instead.
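Here's a minimal sketch of that idea in Swift, using AVAudioSourceNode (the modern counterpart of the tutorial's RenderProc) rather than a raw Audio Unit. The `latestAmplitude` property is a made-up name standing in for your own EEG pipeline; since the EEG only updates at 200Hz while audio runs at ~44.1kHz, simply reading the most recent value on the audio thread is good enough:

```swift
import AVFoundation

// Sketch: a sine tone whose loudness tracks the EEG signal's deviation.
// `latestAmplitude` is a placeholder; update it from your EEG stream.
final class EEGToneGenerator {
    private let engine = AVAudioEngine()
    private var phase: Float = 0

    /// Updated ~200 times a second from the EEG stream; expected range 0.0...1.0.
    var latestAmplitude: Float = 0

    func start(toneHz: Float = 440) throws {
        let sampleRate = Float(engine.outputNode.outputFormat(forBus: 0).sampleRate)
        let phaseStep = 2 * Float.pi * toneHz / sampleRate

        // The render block is the Swift analogue of the tutorial's RenderProc:
        // it runs on the audio thread and fills each output buffer on demand.
        let source = AVAudioSourceNode { [unowned self] _, _, frameCount, audioBufferList in
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            let amplitude = self.latestAmplitude   // most recent EEG-derived value
            for frame in 0..<Int(frameCount) {
                let sample = amplitude * sin(self.phase)
                self.phase += phaseStep
                if self.phase > 2 * Float.pi { self.phase -= 2 * Float.pi }
                for buffer in buffers {
                    buffer.mData!.assumingMemoryBound(to: Float.self)[frame] = sample
                }
            }
            return noErr
        }

        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: nil)
        try engine.start()
    }
}
```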

One idea I had for playing sound in response to changes in the signal amplitude is to divide the amplitude range into a set of discrete bands (for example 0-10, 10-20, 20-30, etc.) and assign a sound to each band. Then, using Audio Services or System Sound, it should be possible to loop a unique sound fragment for each band.
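A rough sketch of that banding idea with AVFoundation; the band width of 10 and the band0.caf...band4.caf file names are placeholders for whatever looping clips you bundle:

```swift
import AVFoundation

// One looping player per amplitude band; the clip names are hypothetical.
let players: [AVAudioPlayer] = (0..<5).map { band in
    let url = Bundle.main.url(forResource: "band\(band)", withExtension: "caf")!
    let player = try! AVAudioPlayer(contentsOf: url)
    player.numberOfLoops = -1   // loop the fragment until it is stopped
    return player
}

var currentBand = -1

/// Call this whenever a new amplitude value arrives from the signal.
func amplitudeChanged(to amplitude: Double) {
    // Quantise: 0-10 -> band 0, 10-20 -> band 1, and so on.
    let band = max(0, min(Int(amplitude / 10), players.count - 1))
    guard band != currentBand else { return }   // only react when the band changes
    if currentBand >= 0 { players[currentBand].stop() }
    players[band].play()
    currentBand = band
}
```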

Related

Real time speech transformation in MATLAB

Is it possible to transform speech (pitch/formant shift) in (near) real-time using MATLAB? How can it be done?
If not, what should I use to do that?
I need to get input from the microphone, visualise the sound wave, add a filter to it, see the oscilloscope again, and play back the modified sound.
The real-time visualization (spectrogram) can be created with the SparkNG package by Hideki Kawahara.
Sure. There's a demo application on the MATLAB Central File Exchange that does something similar. It reads a signal from the sound card in near real time (it requires the Data Acquisition Toolbox), applies an FFT (you could do something else here, such as applying a filter) and visualises the results live in 3D graphs. You could use it as a template and modify it to your needs, such as visualising in different ways (more of an oscilloscope style) or writing the sound to a .wav file for later playback.
If you need proper real-time performance, you might look into implementing this in Simulink rather than in base MATLAB.

Playing multiple sine waves on iPhone with AudioUnit(s)

I'm currently working on a program that can output a sine wave of a set frequency through the speaker/headphones on an iPhone.
Now I want to output multiple sine waves, and I don't know which approach is better. Should I just add all the sine waves together and play them using one AudioUnit, or create an AudioUnit for each sine wave?
I'm currently leaning towards the first solution, but I don't know why... it's just my instinct. It would be great if someone could explain why the solution they chose is better :)
Thanks!
You will have more precise control over the timing of the mix (where each sine wave starts and ends), and over its quality, if you create one DSP mixer and play the result through a single Audio Unit. There will also be slightly less thread-switching overhead taking up CPU cycles.
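For illustration, here's a sketch of the single-unit approach: the waves are summed in one render block and scaled by the number of waves so the mix can't clip. The frequencies are arbitrary examples:

```swift
import AVFoundation

// Sketch: mix several sine waves in a single AVAudioSourceNode.
let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
let frequencies: [Double] = [440, 550, 660]          // arbitrary example tones
var phases = [Double](repeating: 0, count: frequencies.count)

let mixer = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        // Sum the oscillators, then scale so the result stays within -1...1.
        var mixed = 0.0
        for i in frequencies.indices {
            mixed += sin(phases[i])
            phases[i] += 2 * .pi * frequencies[i] / sampleRate
            if phases[i] > 2 * .pi { phases[i] -= 2 * .pi }
        }
        let sample = Float(mixed / Double(frequencies.count))
        for buffer in buffers {
            buffer.mData!.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

engine.attach(mixer)
engine.connect(mixer, to: engine.mainMixerNode, format: nil)
try engine.start()
```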

How to convert level meter values to Hz on iPhone?

I'm using AudioQueue to get a recording level meter from the microphone. The problem is that what I get from it are floating-point numbers; I know they represent audio samples.
I need to convert them to Hz. My assignment is to take a digital audio recording from the microphone, convert it to Hz, and apply a simple formula to get a result relevant to that number.
Please help; I really appreciate it.
Thanks,
Quan
An audio recording from a microphone will not contain a single frequency that you can represent in Hz.
Instead, it will be a combination of a lot of different frequencies mixed together, which are represented by your samples.
To get the frequencies present in your sample and their amplitudes, you need to use a Fast Fourier Transform (FFT). From the results you can determine which frequencies are most prevalent in your recording, which I believe is what you are looking for.
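As a sketch of what that looks like on iOS, here's a dominant-frequency calculation using the Accelerate framework. It follows Apple's documented pattern of packing real samples into split-complex form and running a radix-2 forward FFT, and it assumes the buffer length is a power of two:

```swift
import Accelerate

/// Sketch: returns the dominant frequency (Hz) in `samples`.
/// Assumes `samples.count` is a power of two, e.g. 1024.
func dominantFrequency(of samples: [Float], sampleRate: Float) -> Float {
    let n = samples.count
    let halfN = n / 2
    let fft = vDSP.FFT(log2n: vDSP_Length(log2(Float(n))),
                       radix: .radix2,
                       ofType: DSPSplitComplex.self)!

    var inReal = [Float](repeating: 0, count: halfN)
    var inImag = [Float](repeating: 0, count: halfN)
    var outReal = [Float](repeating: 0, count: halfN)
    var outImag = [Float](repeating: 0, count: halfN)
    var magnitudes = [Float](repeating: 0, count: halfN)

    inReal.withUnsafeMutableBufferPointer { inRealPtr in
    inImag.withUnsafeMutableBufferPointer { inImagPtr in
    outReal.withUnsafeMutableBufferPointer { outRealPtr in
    outImag.withUnsafeMutableBufferPointer { outImagPtr in
        var input = DSPSplitComplex(realp: inRealPtr.baseAddress!,
                                    imagp: inImagPtr.baseAddress!)
        var output = DSPSplitComplex(realp: outRealPtr.baseAddress!,
                                     imagp: outImagPtr.baseAddress!)

        // Pack the real samples into split-complex form, as the real FFT expects.
        samples.withUnsafeBufferPointer {
            $0.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: halfN) {
                vDSP_ctoz($0, 2, &input, 1, vDSP_Length(halfN))
            }
        }

        fft.forward(input: input, output: &output)

        // Squared magnitude of every frequency bin.
        vDSP_zvmags(&output, 1, &magnitudes, 1, vDSP_Length(halfN))
    }}}}

    magnitudes[0] = 0   // ignore the DC component in bin 0
    let peakBin = magnitudes.firstIndex(of: magnitudes.max()!)!
    // Bin k corresponds to k * sampleRate / n Hz.
    return Float(peakBin) * sampleRate / Float(n)
}
```

Note that the frequency resolution is sampleRate / n, so a 1024-sample buffer at 44.1kHz only resolves steps of about 43Hz; longer buffers give finer resolution at the cost of latency.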

Dynamically calculate frequency value?

In my app, I want to calculate the audio frequency dynamically while I am recording, without needing to save or play anything back. I am trying to do that with the help of the aurioTouch sample code. In that sample, inside the FFTBufferManager class methods such as GrabAudioData and ComputeFFT, I am not able to find where the frequency value is calculated dynamically from the audio, and I have spent more than 5 days on this. Please help me.
You don't need an FFT if your audio signal is clean and pure. Just count the peaks (or count both peaks and valleys and halve the result). Take that number, divide it by the number of samples in your buffer, and multiply by the sample rate. That's your frequency value in Hz.
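A sketch of that calculation in Swift; note that naive peak counting only works on a clean, noise-free signal, and real input would need a threshold or smoothing first:

```swift
/// Sketch: estimate the frequency of a clean single-tone signal without an FFT.
func estimateFrequency(samples: [Float], sampleRate: Float) -> Float {
    guard samples.count > 2 else { return 0 }
    var peaks = 0
    for i in 1..<samples.count - 1 {
        // A peak is a sample larger than both of its neighbours.
        if samples[i] > samples[i - 1] && samples[i] > samples[i + 1] {
            peaks += 1
        }
    }
    // One peak per cycle: (peaks / samples) * (samples / second) = cycles per second.
    return Float(peaks) / Float(samples.count) * sampleRate
}
```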

How to determine the frequency of recorded voice on iPhone?

I am new to iPhone development. I am doing research on voice recording on the iPhone, and I have downloaded the "SpeakHere" sample program from Apple. I want to determine the frequency of my voice as recorded on the iPhone. Please guide me. Thanks.
In the context of processing human speech, there's really no such thing as "the" frequency. The signal will be a mix of many different frequencies, so it might be more fruitful to think in terms of a spectrum rather than a single frequency. Even for a sustained musical note with a fixed pitch, there will be plenty of overtones and harmonics present in addition to the fundamental frequency of the note. And for actual speech, the frequency spectrum will change drastically even within a short clip, due to the different tonal characteristics of vowels and consonants.
With that said, it does make some sense to consider the peak frequency of a voice recording. You could calculate the Fast Fourier Transform of your voice clip, then find the frequency bin with the largest response; bin k of an n-point FFT corresponds to k * sampleRate / n Hz. You may also be interested in the concept of a spectrogram, which represents how the audio spectrum of a signal varies over time.
Use Audacity. Take a small recording of typical speech and cut it down to one wavelength, from one peak to the next. Subtract the two times, then divide 1 by the difference, and you'll get the frequency of your wave in Hz.
Example:
In my audio clip, my waveform runs from 0.0760 to 0.0803 seconds.
0.0803-0.0760 = 0.0043
1/0.0043 = 232.558 Hz, my typical speech frequency
This might give you a good basis for building an analyzer: detect the peaks, measure the time between successive peaks, and average the results, as in the sketch below.
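Here's a sketch of such an analyzer, mirroring the 1/0.0043 arithmetic above; the 0.1 amplitude threshold is an arbitrary guess to skip small noise ripples:

```swift
/// Sketch: average the gaps between successive peaks and invert to get Hz.
func averagePeakFrequency(samples: [Float], sampleRate: Double) -> Double? {
    guard samples.count > 2 else { return nil }

    // Collect the indices of local maxima above a small threshold.
    var peakIndices: [Int] = []
    for i in 1..<samples.count - 1
        where samples[i] > 0.1
           && samples[i] > samples[i - 1]
           && samples[i] > samples[i + 1] {
        peakIndices.append(i)
    }
    guard peakIndices.count >= 2 else { return nil }

    // Average the sample gaps between consecutive peaks...
    var totalGap = 0
    for j in 1..<peakIndices.count {
        totalGap += peakIndices[j] - peakIndices[j - 1]
    }
    let averagePeriod = Double(totalGap) / Double(peakIndices.count - 1) / sampleRate

    // ...then invert the period, exactly like 1 / 0.0043 above.
    return 1 / averagePeriod
}
```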
You'll need to use Apple's Accelerate framework to take an FFT of the relevant audio. The FFT converts the audio from the time domain to the frequency domain, and Accelerate's FFT routines are fast enough to do this frequency analysis in real time.