iOS Mixing audio, Audio Toolbox vs Audio Units - iphone

I've been searching for a while and can't come to a good conclusion.
I am trying to create an app that can "record" beats that a user makes on a 4x4 button array. Each button has a sound tied to it and after they hit record, I want to mix the audio that gets played and save it to a file so they can listen to it and play over it later.
What makes this even trickier is that there will be a metronome playing and I do not want to mix the metronome sound into the audio that is getting saved.
From what I have found, Audio Units seem to be the only way to get these features, but I am reluctant to go that route since it seems like overkill and somewhat complicated to learn. Can Audio Toolbox make this any easier?
Thanks!

In general, this is easy to implement with Audio Toolbox.
For more information, see the sample code below; it's a lot of help.
MixerHost
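To give a rough idea of what MixerHost demonstrates (a Multichannel Mixer Audio Unit feeding the RemoteIO output), here is a minimal sketch; the bus layout and function name are my own illustrative choices, not taken from the sample:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Minimal MixerHost-style graph: Multichannel Mixer -> RemoteIO -> speaker.
// Error handling omitted; bus count and names are illustrative only.
static AUGraph BuildMixerGraph(AudioUnit *outMixerUnit)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription mixerDesc = {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode ioNode, mixerNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphOpen(graph);

    AudioUnit mixerUnit;
    AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

    // One mixer input bus per sound source, e.g. buses 0..15 for the pads and
    // a dedicated bus for the metronome, so the metronome can be handled
    // separately when you record. Each source then feeds its bus via a render
    // callback, as MixerHost does.
    UInt32 busCount = 17;
    AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

    // Mixer output -> RemoteIO input (bus 0) -> speaker.
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);

    *outMixerUnit = mixerUnit;
    return graph;
}
```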

Related

How to record user generated sound output on iPhone

I have a series of sounds that a user will play, rearrange, and edit etc. while using my app. When the user is finished, I want them to be able to save their work and record it to an mp3.
I don't want to play it through speakers and record it with the mic since that will result in low sound quality and interference. I cannot think of any ways of doing this that doesn't require extra hardware and/or a computer.
How can I do this using just their device?
Well, I would say it can't be done with AVFoundation.
My suggestion is to use Audio Units and move all of your interactions into an audio graph. At some point you set a render notify on the RemoteIO unit, so every time it renders sound to the speakers you get a callback where you can write those frames/packets of data to a file.
I would suggest using AAC (.m4a) over MP3. I am not very fond of MP3, and to be honest, as far as I know the SDK does not provide MP3 encoding, probably due to licensing issues. I could be wrong, though. Check the sample code below; it is probably the best sample code on Audio Units you will find on the web.
AudioGraph by Tom Zic
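To make the render-notify idea a bit more concrete, here is a rough sketch of what that callback might look like, assuming you have already created an ExtAudioFileRef for an AAC (.m4a) file whose client data format matches the RemoteIO stream format; the names here are placeholders, not code from AudioGraph:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Illustrative only: a post-render notify on the RemoteIO unit that appends
// whatever was just rendered to an ExtAudioFile (e.g. an AAC .m4a file).
static OSStatus RecordingNotify(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    ExtAudioFileRef recordingFile = (ExtAudioFileRef)inRefCon;

    // Only write after the unit has rendered, and only for the output bus.
    if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && inBusNumber == 0) {
        // ExtAudioFileWriteAsync can be called from the render thread once it
        // has been primed with an initial (0, NULL) call on another thread.
        ExtAudioFileWriteAsync(recordingFile, inNumberFrames, ioData);
    }
    return noErr;
}

// Somewhere in your setup code, after the graph is built:
//   ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &aacFormat, NULL,
//                             kAudioFileFlags_EraseFile, &recordingFile);
//   ExtAudioFileSetProperty(recordingFile, kExtAudioFileProperty_ClientDataFormat,
//                           sizeof(pcmFormat), &pcmFormat);
//   ExtAudioFileWriteAsync(recordingFile, 0, NULL); // prime async writes
//   AudioUnitAddRenderNotify(remoteIOUnit, RecordingNotify, recordingFile);
```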

How to record sound programmatically and how to play that recorded audio?

I am developing an application in which I want to record sounds and then play the recorded sound file back. I know which frameworks to use for this, but how do I implement it programmatically using those frameworks?
You can refer to this link; I have implemented this code in one of my apps and it works completely fine:
How do I record audio on iPhone with AVAudioRecorder?
For playing the sound back, you can use AVAudioPlayer.
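As a rough sketch (the settings and function names below are placeholders, not code from the linked answer), recording with AVAudioRecorder and playing the result back with AVAudioPlayer looks something like this:

```objc
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Illustrative sketch only: record an AAC file with AVAudioRecorder,
// then play it back with AVAudioPlayer.
AVAudioRecorder *StartRecording(NSURL *fileURL)
{
    NSError *error = nil;

    // The session category must allow recording.
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                           error:&error];

    NSDictionary *settings = @{ AVFormatIDKey        : @(kAudioFormatMPEG4AAC),
                                AVSampleRateKey      : @44100.0,
                                AVNumberOfChannelsKey: @1 };

    AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:fileURL
                                                            settings:settings
                                                               error:&error];
    [recorder prepareToRecord];
    [recorder record];
    return recorder;   // call -stop on it when the user is done
}

AVAudioPlayer *PlayRecording(NSURL *fileURL)
{
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL
                                                                   error:&error];
    [player play];
    return player;     // keep a strong reference while it plays
}
```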
Hope this helps.
The best way to do it - and I am talking from painful experience here - is with the RemoteIO audio unit. You can also do it with AudioQueue, but it has a higher latency, and the queue type approach becomes very problematic.
So, I think that they are really different tools for different jobs. Note that you won't play a sound file as such. You will play the contents of a buffer held in memory. As long as the buffer is not too large, this should not be an issue.
So, going with RemoteIO, you will find this blog and tutorial very useful. It includes code samples.
Using RemoteIO audio unit, by Michael Tyson
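To give a flavour of the RemoteIO approach described above (this is my own sketch, not code from the tutorial), the setup boils down to creating the unit, telling it your stream format, and attaching a render callback that fills the output buffer from memory:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Illustrative setup of the RemoteIO unit with a render callback.
// renderCallback and userData are placeholders for your own code.
static void SetUpRemoteIO(AURenderCallback renderCallback, void *userData)
{
    AudioComponentDescription desc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);

    AudioUnit ioUnit;
    AudioComponentInstanceNew(comp, &ioUnit);

    // Mono 16-bit 44.1 kHz linear PCM is an assumption here; use whatever
    // format your in-memory buffers actually hold.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = 2;
    fmt.mBytesPerPacket   = 2;
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

    // The callback is invoked repeatedly to fill the outbound buffer.
    AURenderCallbackStruct cb = { renderCallback, userData };
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(ioUnit);
    AudioOutputUnitStart(ioUnit);
}
```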

Which audio playback technology do I need for this?

I have trouble choosing the right audio playback technology. There's a ton of technologies to use on the iPhone, it's so confusing.
What I need to do is this:
start playing short sounds ranging between 0.1 and 2 seconds
high quality playback, no crackle (I've heard some of the iPhone audio playback technologies produce a crackling sound at the start or end of playback, which is bad!)
ability to start playback of a sound, while there's already another one playing right now (two, three or more sounds at the same time)
What would you suggest here, and why? Thanks :-)
There are basically four options for playing audio on the iPhone:
Audio Toolbox. Easy, but only good for playing sound effects in applications (sample code).
Audio Queue Services. Very powerful, can do anything. C API, pretty messy to work with. Callbacks, buckets, pain.
AVAudioPlayer. About the easiest option. It can play compressed audio, and with a simple wrapper you can easily play multiple instances of the same sample at once (non-compressed audio only, as there is only one HW audio decoder); see the sketch after this list. Starting to play a sound with AVAudioPlayer seems to lag about 20 ms, which could be a problem.
OpenAL. Decent compromise between complexity and features. Sounds do not lag, you can play multiple sounds just fine, but you have to do a lot of the work yourself. I’ve written a sound engine called Finch that can help you.
Don’t know much about crackling, never experienced it. I think there were some issues with playing seamless compressed loops with AVAudioPlayer; these can be overcome by saving the loop without compression.
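To illustrate the AVAudioPlayer option from the list above, overlapping playback is mostly a matter of using a separate player instance per trigger. A hypothetical helper (the names are mine) might look like this:

```objc
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper: one AVAudioPlayer per trigger so short sounds can overlap.
// In real code you would remove finished players in audioPlayerDidFinishPlaying:
// rather than letting the array grow forever.
static NSMutableArray *activePlayers = nil;

void PlayShortSound(NSURL *soundURL)
{
    if (activePlayers == nil) {
        activePlayers = [[NSMutableArray alloc] init];
    }

    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL
                                                                   error:&error];
    [player prepareToPlay];
    [player play];

    // Keep a strong reference until the sound finishes, or the player will be
    // deallocated and playback will stop immediately.
    [activePlayers addObject:player];
}
```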

iphone - any way to manipulate recorded audio?

Specifically, I just want to record something, reverse it, and play it back. I've looked through the apple docs and couldn't find anything about editing audio. Is it possible?
Yes, it is definitely possible. Last I checked the Apple Core Audio docs were not very good, but it has been a few months since I've worked with it. Here are the steps that I would follow.
Record the audio sample.
Reverse the audio by looping through the first half of the array and swapping each value with the one equidistant from the end of the array (as sketched below).
Play the resulting audio clip.
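Step 2 is the easy part. Assuming the recording ends up as a plain array of 16-bit PCM samples in memory (the function and variable names below are placeholders), the in-place swap looks like this:

```objc
#import <Foundation/Foundation.h>

// Reverse a mono buffer of 16-bit PCM samples in place.
// (For stereo/interleaved audio you would swap whole frames instead.)
void ReverseSamples(SInt16 *samples, NSUInteger frameCount)
{
    if (frameCount < 2) return;
    for (NSUInteger i = 0, j = frameCount - 1; i < j; i++, j--) {
        SInt16 tmp = samples[i];
        samples[i] = samples[j];
        samples[j] = tmp;
    }
}
```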
Quite frankly, the first step is probably the hardest. Here is a decent article about doing audio on the iPhone, including recording. Make sure you look at all of the different parts of the article. Here is another article about recording sound on the iPhone, but using a different framework. There are really several ways to go about recording on the iPhone, though last I checked, if you want to play audio while you are recording, you have to use RemoteIO.
Edit:
If you would like to use RemoteIO(which I preferred), then this site is pretty helpful for getting started with it. Also, the aurioTouch sample program that Apple provides is immensely helpful (though more than you want).
If you don't need RemoteIO (because it can be a major pain though it is more low-level and thus more flexible), then try the SpeakHere sample program. It is made just to record and play back. However, I just looked at it and it writes the recording to a file rather than a buffer which isn't what you want. I would recommend going with RemoteIO for that reason (unless you can find a way to have it write to a buffer instead).

Audio on the iPhone

I'm looking to create an app that emulates a physical instrument. I've got audio samples but I want to be able to increase the pitch/frequency dynamically so I don't have to load from too many files.
Any idea which audio API will be able to do this? I reckon either OpenAL or Audio Queue Services but am not sure which is suitable. Any links to guides/sample code is also much appreciated.
Thanks in advance.
I went down this road in 2009, trying Audio Toolbox, Audio Queue Services, OpenAL, and finally settling on the RemoteIO AudioUnit.
Audio Toolbox is fine for basic triggered sound effects, but it wasn't able to change frequencies or loop samples.
Audio Queue Services can loop samples, but the only way I could find to adjust the playback frequency of a sample was to re-read the data from the file -- very painful. Plus, the framework is tremendously cumbersome - I'd only use it if I was trying to stream something off the Internet.
OpenAL was a godsend - was up and running with it in under an hour, after getting my hands on the no-longer-available-from-Apple "CrashLanding" iPhone sample app. I found OpenAL to be ideally suited to games or even a musical instrument -- samples could be pre-loaded, adjusting the frequency was easy, and looping was no problem. The deal-breaker for me was that starting and stopping a looped sample would result in a nasty "pop" almost every time. Also the builtin 3d positional audio mixer was a bit too CPU-intensive for my liking.
If your instrument does not use looped samples, I'd suggest trying the OpenAL route first - the learning curve is much less intimidating. Try to track down "SoundEngine.h", "CrashLanding" or "TouchFighter", or check out the following link:
http://benbritten.com/blog/2008/11/06/openal-sound-on-the-iphone/
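For what it's worth, the frequency and looping adjustments mentioned above are one-liners in OpenAL once a buffer has been loaded onto a source; the source and buffer in this sketch are assumed to have been created elsewhere:

```objc
#include <OpenAL/al.h>

// Illustrative: play a preloaded OpenAL buffer, looped, at an adjusted pitch.
// 'source' and 'buffer' are assumed to have been created with alGenSources /
// alGenBuffers and filled via alBufferData elsewhere.
void PlayLoopedAtPitch(ALuint source, ALuint buffer, float pitch)
{
    alSourcei(source, AL_BUFFER, buffer);
    alSourcef(source, AL_PITCH, pitch);      // 1.0 = original, 2.0 = one octave up
    alSourcei(source, AL_LOOPING, AL_TRUE);
    alSourcePlay(source);
}
```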
Since looped samples were a requirement for me, I finally settled on AudioUnits (which, on the iPhone, means "RemoteIO" if you want to do input or output). It was tremendously difficult to implement - very similar to Audio Queue Services, in that the core of your implementation will be inside a "buffer callback" that is called many times per second to fill a buffer of outbound audio with raw SInt16 values.
Ultimately, I got my instrument working beautifully with multi-note polyphony, looped samples, no popping, and minimal latency.
Unfortunately, RemoteIO is not well documented. Michael Tyson was one of the first in the field to write about RemoteIO at length, and his posts (and the comments) were very useful to me:
http://michael.tyson.id.au/2008/11/04/using-remoteio-audio-unit/
Good luck!
Edited years later: I've open-sourced the RemoteIO/AudioUnits code I alluded to above: https://github.com/glenn-barnett/hexaphone/blob/master/Classes/Instrument.m - apologies for the mess, I hope to get some time to clean up the code and comments.
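To make the "buffer callback filled with raw SInt16 values" idea a bit more concrete (this is my own illustration, not code from Instrument.m), a variable-rate read through an in-memory sample is enough to change the pitch:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical playback state: one mono 16-bit sample read at a variable rate.
typedef struct {
    SInt16 *sample;       // source sample data
    UInt32  sampleLength; // length in frames
    double  phase;        // fractional read position
    double  rate;         // 1.0 = original pitch, 2.0 = one octave up
} SamplerState;

static OSStatus RenderSample(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    SamplerState *state = (SamplerState *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        UInt32 idx = (UInt32)state->phase;
        if (idx + 1 >= state->sampleLength) {
            out[i] = 0;                       // past the end: output silence
            continue;
        }
        double frac = state->phase - idx;
        // Linear interpolation between neighbouring source frames.
        out[i] = (SInt16)((1.0 - frac) * state->sample[idx] +
                          frac * state->sample[idx + 1]);
        state->phase += state->rate;          // >1.0 raises pitch, <1.0 lowers it
    }
    return noErr;
}
```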
Try creating an Audio Unit. I'm doing something similar, and an AU worked well for me.
Initially I used an audio queue as it was simpler (higher level?) and synchronous; however, it was lacking in responsiveness, so I dumped it for the Audio Unit.
It sounds a bit like you're essentially recreating the wavetable synthesis method of playing MIDI files. You might be able to find a MIDI synthesizer for the iPhone that you can use, and then use your audio samples to build a wavetable set. Any time you want to play tones, you would simply send the MIDI event into the iPhone MIDI synth with your loaded wavetable set.
Another option now is AUSampler.
http://developer.apple.com/library/mac/#technotes/tn2283/_index.html
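As a rough sketch of what the AUSampler route in that tech note looks like (the graph setup and preset file are assumed to exist already, and error checking is omitted), you add a sampler node (kAudioUnitType_MusicDevice / kAudioUnitSubType_Sampler) to your graph, load an instrument into it, and drive it with MIDI note events:

```objc
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Illustrative: load a SoundFont preset into an AUSampler and play a note.
// 'graph' and 'samplerNode' are assumed to come from an existing AUGraph where
// the sampler is already connected to the output, as in TN2283.
void PlayNoteOnSampler(AUGraph graph, AUNode samplerNode, NSURL *soundFontURL)
{
    AudioUnit samplerUnit;
    AUGraphNodeInfo(graph, samplerNode, NULL, &samplerUnit);

    AUSamplerInstrumentData instrument = {
        .fileURL        = (__bridge CFURLRef)soundFontURL,
        .instrumentType = kInstrumentType_SF2Preset,
        .bankMSB        = kAUSamplerDefaultBankMSB,
        .bankLSB        = kAUSamplerDefaultBankLSB,
        .presetID       = 0
    };
    AudioUnitSetProperty(samplerUnit, kAUSamplerProperty_LoadInstrument,
                         kAudioUnitScope_Global, 0, &instrument, sizeof(instrument));

    // Note-on for middle C at velocity 100 (0x90 = note-on, channel 0).
    MusicDeviceMIDIEvent(samplerUnit, 0x90, 60, 100, 0);
}
```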