Sorry for my weak English.
I have some AIFF or MP3 tunes playing loud on the iPhone,
and I need to do some 'sound change' detection
that I would use for visualizations (a jumping man or similar).
How can I do this 'easily' on the iPhone, or how can I do it 'well'?
Should I use an FFT, or something different?
I've seen some sound visualizations before, but none of them
seemed very good (they didn't react clearly to changes
in the music).
Thanks
What are you currently using for playback? I would use Audio Queues or Audio Units - both give you access to buffers of audio samples as they're passed through to the hardware. At that point it's a matter of detecting peaks in the sound above a certain threshold, which you can do in a variety of ways.
An FFT won't help you here, because you're interested in the time-domain waveform of the sound (its amplitude over time), not its frequency-domain characteristics (the relative strengths of different frequencies in different time windows).
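The thresholding idea above is language-agnostic; here is a minimal Python sketch of it (the function name and the synthetic buffer are made up for illustration - on iOS the samples would arrive in Audio Queue or Audio Unit callback buffers):

```python
# Sketch of threshold-based peak/onset detection on time-domain samples.
def detect_peaks(samples, threshold):
    """Return indices where the absolute amplitude crosses the
    threshold upward (a very simple onset detector)."""
    peaks = []
    above = False
    for i, s in enumerate(samples):
        if abs(s) >= threshold and not above:
            peaks.append(i)   # rising edge: a new "hit"
            above = True
        elif abs(s) < threshold:
            above = False     # re-arm once the signal drops back down
    return peaks

# A quiet signal with two loud "hits":
buffer = [0.05, 0.02, 0.9, 0.8, 0.1, 0.03, -0.95, -0.2, 0.04]
print(detect_peaks(buffer, 0.5))  # -> [2, 6]
```

Driving a visualization from those onsets (make the jumping man jump on each detected index) tends to track the music much more clearly than drawing raw levels.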
Related
I am working on a "test your hearing application".
I know that half the amplitude is -6 dB.
I would like to know: when 1.0 is maximum volume in the AVAudioPlayer class, does setting the volume to 0.5 mean that it plays at 50% amplitude, i.e. a drop of 6 dB, assuming ideal earphones?
That is an excellent question.
I wouldn't be surprised if the range in AVAudioPlayer more closely matches that of the "standard leveling scale" for popular music.
Notice that 0 dB sits pretty far up the linear range on such a scale. This is based on the history of the VU meter, as described in Level Practices (Part 2).
So, I don't know for sure, but my guess would be that AVAudioPlayer more closely matches these ranges. You could always plug the headphone jack of your iOS device into an application like Audacity, Logic, or Pro Tools and actually measure the signal coming out of your app as you sweep through AVAudioPlayer's volume range.
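The amplitude-to-decibel relation the question assumes can at least be checked numerically; this small Python snippet (illustrative only, not AVAudioPlayer code) shows where the -6 dB figure comes from:

```python
import math

def amplitude_to_db(ratio):
    """Convert a linear amplitude ratio to decibels (20 * log10)."""
    return 20 * math.log10(ratio)

# Half the amplitude is approximately -6 dB:
print(round(amplitude_to_db(0.5), 2))  # -> -6.02
```

Whether AVAudioPlayer's `volume` of 0.5 actually corresponds to a linear amplitude ratio of 0.5 is exactly the open question above - measuring the output is the only way to be sure.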
I need to create an audio loudness (decibel) detector. To clarify, I am not trying to find the volume at which the iPhone is playing, but instead the volume of its surrounding in decibels. How can I do this?
It can be done using AVAudioRecorder: http://b2cloud.com.au/tutorial/obtaining-decibels-from-the-ios-microphone
Use one of the Audio Queue or Audio Unit APIs to record low-latency audio, run the samples through a DSP filter to weight the spectrum for the particular type or color of loudness you want to measure, and then calibrate the microphone on each iOS device model you want to support against calibrated sound sources, perhaps in an anechoic chamber.
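Before any weighting or calibration, the core computation is converting a buffer of samples to a level in dB relative to full scale. A minimal Python sketch of that step (the function name is made up; real mic input would come from the recording API):

```python
import math

def rms_dbfs(samples):
    """Level of a sample buffer in dB relative to full scale
    (0 dBFS = a constant full-scale signal of amplitude 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float('-inf')  # digital silence
    return 20 * math.log10(rms)

print(round(rms_dbfs([0.5, -0.5, 0.5, -0.5]), 1))  # -> -6.0
```

Note this gives dBFS, a purely digital scale; mapping it to real-world dB SPL is what the per-device calibration step is for.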
I'm recording audio using AVAudioRecorder, and after recording I want to draw a waveform of the recorded audio. I've found a nice article about waveform drawing, but first I need the frequencies at a certain sample rate as floats, right?
Do I need to run an FFT on the audio, and how do I do that? Is AVAudioRecorder even the right API for this purpose, or do I need to use some lower-level API to record the audio?
Hope someone can help me.
AVAudioRecorder doesn't look like it's much use for this (although it may be possible). You need to look at recording with AudioQueue.
The 'waveform' of the audio isn't the frequencies. The waveform is the sequence of sample values that make up the audio (you can get these when recording with an Audio Queue). An FFT converts audio samples from the time domain to the frequency domain; if you draw the output of an FFT you get a spectrogram, not a waveform.
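In practice you have far more samples than pixels, so waveform drawing usually reduces the samples to per-pixel (min, max) pairs first. A hedged Python sketch of that reduction (function name invented for illustration):

```python
def waveform_buckets(samples, n_buckets):
    """Reduce raw samples to (min, max) pairs, one per display bucket.
    Drawing a vertical line from min to max per bucket gives the
    familiar waveform shape."""
    size = max(1, len(samples) // n_buckets)
    buckets = []
    for i in range(0, len(samples), size):
        chunk = samples[i:i + size]
        buckets.append((min(chunk), max(chunk)))
    return buckets

samples = [0.0, 1.0, -1.0, 0.5, 0.2, -0.2, 0.0, 0.0]
print(waveform_buckets(samples, 2))  # -> [(-1.0, 1.0), (-0.2, 0.2)]
```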
Within my iPhone application, how would I recognize the noise that a clap makes?
If you're talking about recognizing the sound of a clap, SCListener is a great and easy-to-use class that gives you simple audio levels. Then it's just a question of measuring peaks or even just high values.
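Given a stream of level readings like SCListener provides, "measuring peaks" can be as simple as flagging a sudden jump from one reading to the next. A Python sketch of that idea (the function name and threshold are made up; a real detector would tune the jump size to the device):

```python
def detect_claps(levels, jump=0.4):
    """Flag indices where the level jumps sharply from the previous
    reading - the signature of a short, loud transient like a clap."""
    return [i for i in range(1, len(levels))
            if levels[i] - levels[i - 1] >= jump]

# Periodic level readings with two clap-like spikes:
readings = [0.1, 0.12, 0.11, 0.8, 0.3, 0.1, 0.75, 0.2]
print(detect_claps(readings))  # -> [3, 6]
```

Looking at the jump rather than the absolute level helps avoid false positives when the background is steadily loud.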
In the new iPhone 3GS commercial, Apple shows voice control with a cool blue waveform animation. Is this visual effect for rendering the waveforms (or maybe just volumes) available as an API call or source code somewhere? (Not the voice control part, just the audio visualization)
I think you could get the sound info from AVAudioPlayer's averagePowerForChannel: method, but how would you show the waves moving up and down?
Thanks!
John,
The code you found for drawing the sine wave is great. This sample code from Apple shows a sound meter with live audio recording. With those two resources, you should be on track to make the visual waveform.
The waveform in the Apple commercials is very clearly just a plain old sine wave whose amplitude is modulated by the volume of the input.
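That effect is cheap to reproduce: each animation frame, draw a sine wave scaled by the current input level. A Python sketch of generating one frame's points (names and parameters are illustrative, not an Apple API):

```python
import math

def waveform_points(volume, n_points=32, cycles=2.0):
    """One frame of the animation: a sine wave whose amplitude is the
    current input volume (0.0 to 1.0), sampled at n_points x-positions."""
    return [volume * math.sin(2 * math.pi * cycles * i / n_points)
            for i in range(n_points)]

# Loud input -> tall wave; silence -> flat line.
tall = waveform_points(1.0)
flat = waveform_points(0.0)
```

Feeding `volume` from a metering call (e.g. the average power reading mentioned above) and redrawing each frame gives the commercial's look.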