Audio recording and waveform drawing on iOS (iPhone)

I'm recording audio using AVAudioRecorder, and after recording I want to draw a waveform of the recorded audio. I've found a nice article about waveform drawing, but first I need the frequencies at a certain sample rate as floats, right?
Do I need to run an FFT on the audio, and if so, how? Is AVAudioRecorder even the right API for this purpose, or do I need to use a lower-level API to record the audio?
Hope someone can help me.

AVAudioRecorder doesn't look like it's much use for this (although it may be possible). You need to look at recording with AudioQueue.
The 'waveform' of the audio isn't the frequencies. The waveform is the value of the samples that make up the audio (you can get these when recording with an AudioQueue). An FFT converts the audio samples from the time domain to the frequency domain; if you draw the output of the FFT you will have a spectrum (or a spectrogram, if drawn over time) instead of a waveform.
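To make the distinction concrete: drawing a waveform only needs the sample amplitudes, typically reduced to one peak value per pixel column. A minimal sketch in plain C (the helper name `waveform_peaks` is made up for illustration; it is not part of any Apple API):

```c
#include <stddef.h>

/* Reduce a buffer of 16-bit PCM samples to one peak value per "bin"
 * (e.g. one bin per pixel column of the waveform view), normalized to
 * 0..1. Illustrative helper, not an Apple API. */
void waveform_peaks(const short *samples, size_t count,
                    float *peaks, size_t bins)
{
    size_t per_bin = count / bins;          /* samples per column */
    for (size_t b = 0; b < bins; b++) {
        int peak = 0;
        for (size_t i = 0; i < per_bin; i++) {
            int s = samples[b * per_bin + i];
            if (s < 0) s = -s;              /* absolute value */
            if (s > peak) peak = s;
        }
        peaks[b] = (float)peak / 32768.0f;  /* normalize to 0..1 */
    }
}
```

Feed it the raw buffers you receive from the AudioQueue input callback and draw one vertical line per bin.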

Related

aurioTouch: how to take input from AVAudioPlayer rather than the mic

The aurioTouch sample takes its input from the mic, but I want to show frequency waves while playing sound files.
Is this possible with aurioTouch, and if so, how?

Generate paused/discontinuous beeps in iPhone App

I am creating a sound-based application where I need to generate different oscillations for a sound wave. One thing I need to achieve is a discontinuous, or paused, beep. I'm trying to do this by varying a sine wave using Core Audio, but I'm not getting the desired output.
Basically I need to generate variable sound oscillation patterns, like the Dog Whistler app.
Can anyone point me in the right direction?
Have a look at my other answer about how to loop background music. You can create an audio file with a 0.5 s beep and 1 s of silence and loop that file infinitely via AVAudioPlayer, and pause the beep sequence via [player pause];.
This will work if you have predefined beep intervals/oscillations, but not if you need 100% user customization. You can also look into sound-effect APIs to adjust the pitch and frequency.

How to programmatically crossfade between songs

I am working on a music player for the iPhone, and I would like to crossfade between songs. I know this is possible because My DJ does it. I can't find any documentation and don't even know where to begin. Can anyone give me a starting point or example code?
Thanks.
One option is to convert both songs into raw (linear PCM) samples. Then build your own mixer function or audio unit where the mix ratio of each sample comes from a ramp or custom mix curve function. Feed the output to the RemoteIO audio unit.
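The mixer function itself can be very simple. A linear-ramp sketch in plain C (illustrative only; in a real app this math would run inside the RemoteIO unit's render callback, and you might prefer an equal-power curve over a linear one):

```c
#include <stddef.h>

/* Linear crossfade over n samples: ramp track A from full to silent
 * while ramping track B from silent to full. Sketch only. */
void crossfade(const short *a, const short *b, short *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float t = (n > 1) ? (float)i / (float)(n - 1) : 1.0f; /* 0 -> 1 */
        float mixed = (1.0f - t) * (float)a[i] + t * (float)b[i];
        if (mixed > 32767.0f)  mixed = 32767.0f;   /* clip guard */
        if (mixed < -32768.0f) mixed = -32768.0f;
        out[i] = (short)mixed;
    }
}
```

Replacing the linear `t` with `sqrtf(t)` / `sqrtf(1 - t)` gives an equal-power fade, which usually sounds smoother.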

Rhythm (sound change) detection on iPhone

Sorry for my weak English.
I have some AIFF or MP3 tunes to play loudly on the iPhone, and I need to do some 'sound change' detection to drive visualisations (a jumping man or similar).
How do I do this 'easily' on the iPhone, or how do I do it 'well'?
Should I use an FFT or something different?
The sound visualisations I have seen so far don't seem very good (their reactions to changes in the music aren't very clear).
Thanks
What are you currently using for playback? I would use audio queues or audio units - both give you access to buffers of audio samples as they're passed through to the hardware. At that point it's a matter of detecting peaks in the sound above a certain threshold, which you can do in a variety of different ways.
An FFT will not help you because you're interested in the time-domain waveform of the sound (its amplitude over time) not its frequency-domain characteristics (the relative strengths of different frequencies in different time windows.)

FMOD on non-playing audio

Hey, is there any way to get the audio spectrum of a section of a song using FMOD if it is not playing?
Can I render a full song waveform using FMOD (+opengl/openframeworks/etc.) before the song is playing?
Yes.
Yes, but you will have to do your own spectrum analysis on the time-domain wave data.
You can get the wave data from the FMOD::Sound using Sound::lock. To do this you would have to create the sound as FMOD_SAMPLE, which means the entire song will be decompressed into memory. You can render the waveform from this data and also run your spectrum analysis on it. FMOD's built-in getSpectrum function only works on snapshots of the currently playing data in a Channel or ChannelGroup.