Metronome for iPhone using Audio Queue

I'm trying to develop a metronome for the iPhone. I've tried using an NSTimer and Apple's SoundEngine.h to play a tick, but it doesn't seem to be very accurate. I've seen several forum posts saying that Audio Queue can be used to create a more precise metronome. I haven't used Audio Queue before, but I've looked at the "Audio Queue Services Programming Guide" and the SpeakHere sample code. Still, I don't have a clue how to generate sound data on the fly (as opposed to reading it from a file) and play it using Audio Queue. Can somebody help me? I'd love to see a code sample, or at least some pseudocode describing the steps to do the timing and feed audio data from a short audio clip to the queues.
Thanks

Well, you looked at the wrong sample code from Apple; see this one!
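To make the approach concrete, here is a minimal sketch of generating the tick on the fly with Audio Queue Services. Everything here (the function names, buffer sizes, and the decaying-click synthesis) is illustrative rather than taken from Apple's sample; the key idea is that the tick position is computed from the running sample count, so the timing is sample-accurate and independent of NSTimer jitter:

    #include <AudioToolbox/AudioToolbox.h>

    #define SAMPLE_RATE     44100.0
    #define BPM             120.0
    #define FRAMES_PER_BUF  4096

    static UInt64 gFrameCount = 0;  // running playback position in frames

    // Called by the queue whenever it needs another buffer of audio.
    static void TickCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
    {
        SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
        UInt64 framesPerBeat = (UInt64)(SAMPLE_RATE * 60.0 / BPM);

        for (UInt32 i = 0; i < FRAMES_PER_BUF; i++, gFrameCount++) {
            UInt64 posInBeat = gFrameCount % framesPerBeat;
            // First ~10 ms of each beat: a decaying click; otherwise silence.
            if (posInBeat < 441) {
                samples[i] = (SInt16)(20000.0 * (1.0 - posInBeat / 441.0));
            } else {
                samples[i] = 0;
            }
        }
        inBuffer->mAudioDataByteSize = FRAMES_PER_BUF * sizeof(SInt16);
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }

    void StartMetronome(void)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = SAMPLE_RATE;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                                kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue;
        AudioQueueNewOutput(&fmt, TickCallback, NULL, NULL, NULL, 0, &queue);

        // Prime a few buffers so the queue never starves.
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, FRAMES_PER_BUF * sizeof(SInt16), &buf);
            TickCallback(NULL, queue, buf);
        }
        AudioQueueStart(queue, NULL);
    }

To change the tick sound, you would fill the first few hundred samples of each beat from your short audio clip instead of synthesizing the click.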

Related

How do I record sound programmatically, and how do I play back that recorded audio?

I am developing an application in which I want to record sound and then play back the recorded sound file. I know the frameworks for doing this, but how do I implement it programmatically using those frameworks?
You can refer to this question: How do I record audio on iPhone with AVAudioRecorder?
I have implemented this code in one of my apps and it works completely fine.
For playing the sound back, you can use AVAudioPlayer.
Hope this helps.
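For reference, here is a minimal hedged sketch of that record-then-play flow (the file name and settings are just placeholders):

    #import <AVFoundation/AVFoundation.h>

    NSURL *url = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.caf"]];

    // Record linear PCM to a file. Keep a strong reference to the
    // recorder for as long as it is recording.
    NSError *error = nil;
    AVAudioRecorder *recorder = [[AVAudioRecorder alloc]
        initWithURL:url
           settings:@{ AVFormatIDKey:         @(kAudioFormatLinearPCM),
                       AVSampleRateKey:       @44100.0,
                       AVNumberOfChannelsKey: @1 }
              error:&error];
    [recorder prepareToRecord];
    [recorder record];
    // ... some time later ...
    [recorder stop];

    // Play the recorded file back with AVAudioPlayer.
    AVAudioPlayer *player = [[AVAudioPlayer alloc]
        initWithContentsOfURL:url error:&error];
    [player play];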
The best way to do it - and I am talking from painful experience here - is with the RemoteIO audio unit. You can also do it with AudioQueue, but it has a higher latency, and the queue type approach becomes very problematic.
So, I think that they are really different tools for different jobs. Note that you won't play a sound file as such. You will play the contents of a buffer held in memory. As long as the buffer is not too large, this should not be an issue.
So, going with RemoteIO, you will find this blog and tutorial very useful. It includes code samples.
Using RemoteIO audio unit, by Michael Tyson
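As a rough, hedged sketch of what the RemoteIO setup looks like (the function names here are mine, and a real version would also set the stream format and check each OSStatus result):

    #include <AudioUnit/AudioUnit.h>
    #include <string.h>

    // Render callback: the system calls this every time it needs more
    // output audio. Copy the next slice of your in-memory buffer into
    // ioData here; this sketch just outputs silence.
    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0,
                   ioData->mBuffers[i].mDataByteSize);
        return noErr;
    }

    void StartRemoteIO(void)
    {
        AudioComponentDescription desc = {0};
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_RemoteIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponentInstance unit;
        AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &unit);

        // Attach the callback to the output bus (bus 0). A real version
        // would also set kAudioUnitProperty_StreamFormat here.
        AURenderCallbackStruct cb = { RenderCallback, NULL };
        AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));

        AudioUnitInitialize(unit);
        AudioOutputUnitStart(unit);
    }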

How to listen to mic input and analyse in real time?

Hi, unfortunately I've not been able to figure out audio on the iPhone. The closest I've come are the AVAudioRecorder/Player classes, and I know that they are no good for audio processing.
So I'm wondering if someone could explain to me how to "listen" to the iPhone's mic input in chunks of, say, 1024 samples, analyse the samples, and do stuff. And just keep going like that until my app terminates or tells it to stop. I'm not looking to save any data; all I want is to analyse the data in real time and do stuff in real time with it.
I've attempted to understand Apple's "aurioTouch" example, but it's just way too complicated for me to understand.
So can someone explain to me how I should go about this?
If you want to analyze audio input in real-time, it doesn't get a lot simpler than Apple's aurioTouch iOS sample app with source code (there is also a mirror site). You can google a bit more info on using the Audio Unit RemoteIO API for recording, but you'll still have to figure out the real-time analysis DSP portion.
The Audio Queue API is slightly simpler for getting input buffers of raw PCM audio data from the mic, but not by much, and it has higher latency.
Added later: There's also a version of aurioTouch converted to Swift here: https://github.com/ooper-shlab/aurioTouch2.0-Swift
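For the Audio Queue route just mentioned, the shape of the code is roughly the following sketch (the names and the 1024-sample buffer size are illustrative; RemoteIO follows the same fill-a-buffer pattern at lower latency):

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // Called each time the queue has filled a buffer with mic samples.
    static void InputCallback(void *inUserData, AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
    {
        SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
        UInt32 count = inBuffer->mAudioDataByteSize / sizeof(SInt16);

        // Example analysis: RMS level of this ~1024-sample chunk.
        double sum = 0.0;
        for (UInt32 i = 0; i < count; i++)
            sum += (double)samples[i] * samples[i];
        double rms = sqrt(sum / count);
        (void)rms; // ...do stuff with it in real time...

        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL); // recycle the buffer
    }

    void StartListening(void)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                                kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue;
        AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue);

        // Three 1024-sample buffers keep the callbacks coming continuously.
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, 1024 * sizeof(SInt16), &buf);
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }
        AudioQueueStart(queue, NULL);
    }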
The AVAudioPlayer/Recorder classes won't take you there if you want to do any real-time audio processing. The Audio Toolbox and Audio Unit frameworks are the way to go. Check here for Apple's audio programming guide to see which framework suits your needs. And believe me, this low-level stuff is not easy and is poorly documented. CocoaDev has some tutorials where you can find sample code. Also, there is an audio DSP library called DIRAC, which I recently discovered, for tempo and pitch manipulation. I haven't looked into it much, but you might find it useful.
If all you want is samples with a minimum amount of processing by the OS, you probably want the Audio Queue API; see Audio Queue Services Programming Guide.
AVAudioRecorder is designed for recording to a file, and AudioUnit is more for "pluggable" audio processing (and on the Mac side of things, AU Lab is actually pretty cool).

Audio on the iPhone

I'm looking to create an app that emulates a physical instrument. I've got audio samples but I want to be able to increase the pitch/frequency dynamically so I don't have to load from too many files.
Any idea which audio API will be able to do this? I reckon either OpenAL or Audio Queue Services but am not sure which is suitable. Any links to guides/sample code is also much appreciated.
Thanks in advance.
I went down this road in 2009, trying the Audio Toolbox, Audio Queue Services, OpenAL, and finally settling on the RemoteIO AudioUnit.
Audio Toolbox is fine for basic triggered sound effects, but it wasn't able to change frequencies or loop samples.
Audio Queue Services can loop samples, but the only way I could find to adjust the playback frequency of a sample was to re-read the data from the file -- very painful. Plus, the framework is tremendously cumbersome - I'd only use it if I was trying to stream something off the Internet.
OpenAL was a godsend - I was up and running with it in under an hour, after getting my hands on the no-longer-available-from-Apple "CrashLanding" iPhone sample app. I found OpenAL to be ideally suited to games or even a musical instrument - samples could be pre-loaded, adjusting the frequency was easy, and looping was no problem. The deal-breaker for me was that starting and stopping a looped sample would result in a nasty "pop" almost every time. Also, the built-in 3D positional audio mixer was a bit too CPU-intensive for my liking.
If your instrument does not use looped samples, I'd suggest trying the OpenAL route first - the learning curve is much less intimidating. Try to track down "SoundEngine.h", "CrashLanding" or "TouchFighter", or check out the following link:
http://benbritten.com/blog/2008/11/06/openal-sound-on-the-iphone/
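To give a flavor of how easy the frequency adjustment is in OpenAL, here is a hedged fragment (it assumes 'buffer' was already filled with decoded PCM via alBufferData):

    #include <OpenAL/al.h>

    // 'buffer' is an ALuint previously filled with alBufferData().
    ALuint source;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer);
    alSourcei(source, AL_LOOPING, AL_TRUE);  // loop the sample
    alSourcef(source, AL_PITCH, 1.5f);       // 3:2 ratio = up a perfect fifth
    alSourcePlay(source);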
Since looped samples were a requirement for me, I finally settled on AudioUnits (which, on the iPhone, means the "RemoteIO" unit if you want to do input or output). It was tremendously difficult to implement - very similar to Audio Queue Services, in that the core of your implementation will be inside a "buffer callback", called several times per second to fill a buffer of outbound audio with raw SInt16 values.
Ultimately, I got my instrument working beautifully with multi-note polyphony, looped samples, no popping, and minimal latency.
Unfortunately, RemoteIO is not well documented. Michael Tyson was one of the first in the field to write about RemoteIO at length, and his posts (and the comments) were very useful to me:
http://michael.tyson.id.au/2008/11/04/using-remoteio-audio-unit/
Good luck!
Edited years later: I've open-sourced the RemoteIO/AudioUnits code I alluded to above: https://github.com/glenn-barnett/hexaphone/blob/master/Classes/Instrument.m - apologies for the mess, I hope to get some time to clean up the code and comments.
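For anyone heading down the same road, the heart of that buffer callback can look something like the following sketch (a single looped mono voice; the names are illustrative and the sample is assumed to be loaded elsewhere):

    // Looped-sample playback inside a RemoteIO render callback.
    // Assumes the unit's stream format is mono 16-bit linear PCM.
    typedef struct {
        SInt16 *sample;    // the looped sample, loaded elsewhere
        UInt32  length;    // length in frames
        UInt32  position;  // current playback position
    } LoopState;

    static OSStatus LoopCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
    {
        LoopState *state = (LoopState *)inRefCon;
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

        for (UInt32 i = 0; i < inNumberFrames; i++) {
            out[i] = state->sample[state->position];
            state->position = (state->position + 1) % state->length; // wrap = loop
        }
        return noErr;
    }

Polyphony is then just a matter of summing several such voices into out[i], keeping clipping in mind.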
Try creating an Audio Unit. I'm doing something similar; an AU worked well for me.
Initially I used an audio queue, as it was simpler (higher level?) and synchronous; however, it was lacking in responsiveness, so I dumped it for the Audio Unit.
It sounds a bit like you're essentially recreating the wavetable synthesis method of playing MIDI files. You might be able to find a MIDI synthesizer for the iPhone that you can use, and then use your audio samples to build a wavetable set. Any time you want to play tones, you would simply send the MIDI events into the iPhone MIDI synth with your loaded wavetable set.
Another option now is AUSampler.
http://developer.apple.com/library/mac/#technotes/tn2283/_index.html

AudioQueueOfflineRender questions

I have a few questions about this after reading the iPhone documentation on it:
Does this take the audio being played and save it to a buffer so it can be written to a file?
If so, does the audio being played have to be played using a playback audio queue, or can it be played via a higher-level class such as AVAudioPlayer?
Can anyone point me in the direction of some sample code, or offer further help beyond the docs?
Thanks
Yes
Pretty sure you need to use a/the playback audio queue.
This Apple QA points to a file called aqrender.cpp which implements point 1.
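Based on what aqrender.cpp does, the core sequence looks roughly like this sketch (error handling omitted; the sizes and format are illustrative, and 'queue' is a playback queue already primed with buffers enqueued from the source file):

    #include <AudioToolbox/AudioToolbox.h>

    void RenderOffline(AudioQueueRef queue)
    {
        // Ask the queue to render into memory in this format instead of
        // playing to the hardware.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                                kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;
        AudioQueueSetOfflineRenderFormat(queue, &fmt, NULL);

        AudioQueueBufferRef renderBuf;
        AudioQueueAllocateBuffer(queue, 4096 * fmt.mBytesPerFrame, &renderBuf);
        AudioQueueStart(queue, NULL);

        AudioTimeStamp ts = {0};
        ts.mFlags      = kAudioTimeStampSampleTimeValid;
        ts.mSampleTime = 0;

        // Pull rendered audio block by block; between calls, write
        // renderBuf->mAudioData out to a file (e.g. AudioFileWritePackets).
        for (int block = 0; block < 10; block++) {
            AudioQueueOfflineRender(queue, &ts, renderBuf, 4096);
            ts.mSampleTime += 4096;
        }
    }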

How Do I do Real Time Sound/Signal Processing On The iPhone?

I may be doing an iPhone-based application doing near-real-time sound processing (filtering, etc.). I was wondering about the best way to get started. Would I want to create an audio queue for recording and processing sound, as described here?
Edit:
I should be clear: I am not asking how to do signal processing in general. I know some of that, and my team's expert will handle the rest. I'm asking what the "low level" interfaces to sound data on the iPhone are.
Edit2:
My iPhone development has been pushed back a week or two, so I don't have access to the dev kit right now. Once I have access to the kit, I'll mark one answer or another correct.
Sound processing is a big subject. AudioQueue will get you the raw data. Apple provides two samples that will get you started using AudioQueue: SpeakHere and AurioTouch.
I used SpeakHere as a starting point for some audio processing I wanted to do. It's relatively easy to understand, and has all the pieces to do input and output.
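As a taste of the processing step, here is a trivial hedged example of the kind of per-buffer DSP you might drop into SpeakHere's input callback: a one-pole low-pass filter applied in place to mono SInt16 samples (the function and its placement are mine, not part of the sample):

    #include <CoreFoundation/CoreFoundation.h>

    // One-pole low-pass filter, applied in place to a chunk of mono
    // SInt16 samples. 'alpha' sets the cutoff: smaller = more filtering.
    static float gPrev = 0.0f;

    void LowPassInPlace(SInt16 *samples, UInt32 count)
    {
        const float alpha = 0.1f;
        for (UInt32 i = 0; i < count; i++) {
            gPrev += alpha * ((float)samples[i] - gPrev);  // y += a * (x - y)
            samples[i] = (SInt16)gPrev;
        }
    }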