Using AUGraphs for mic input - iPhone

I am trying to create a streamlined audio application that takes the volume of the mic input from the iPhone and uses that to set the volume of a sound played from another audio unit source.
I would prefer to do all of this using graphs. I don't need any information from the mic other than the volume level, and as mentioned previously, I would like this to be a streamlined solution that has good performance and a minimum of code.
I am investigating a solution that would use an "output" audio unit type with RemoteIO as the subtype, which the documentation says can be used for input, output, or both.
I can't seem to find any way to accomplish this using graphs only. I've previously implemented it with AVAudioRecorder, but I'm not happy with that approach. I've looked at the aurioTouch and aurioTouch2 examples, but neither implements the graph approach. Apple's audio documentation states that this is the way to go.

Novocaine will probably be the easiest solution for you, as it'll give you data from the mic right away. You'll probably want to take the buffer it gives you and calculate its RMS. This'll give you a rough "how loud is the mic volume right now" answer.
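A rough sketch of that RMS step (buffer_rms and rms_to_db are hypothetical helper names; Novocaine hands you float buffers):

    #include <math.h>
    #include <stddef.h>

    /* Root-mean-square of a buffer of float samples: a rough
       "how loud is the mic right now" measure. */
    static float buffer_rms(const float *samples, size_t count)
    {
        if (count == 0) return 0.0f;
        double sum = 0.0;
        for (size_t i = 0; i < count; i++)
            sum += (double)samples[i] * (double)samples[i];
        return (float)sqrt(sum / (double)count);
    }

    /* Optional: express the level in dB full scale. */
    static float rms_to_db(float rms)
    {
        return 20.0f * log10f(fmaxf(rms, 1e-9f)); /* clamp to avoid log(0) */
    }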
To answer your question from an AUGraph perspective, the trick is that it's difficult to use the RemoteIO as an input to an AUGraph. Technically, a RemoteIO is an output unit, which confuses the AUGraph somewhat. Typically you would use a ring buffer to buffer audio from the RemoteIO's mic input, then feed that into an AUGraph later (i.e. when the head of the AUGraph issues a render callback to your app). Of course, if you get to this point you don't need an AUGraph at all and can just check the audio level at what would be the ring buffer stage.
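To make the ring-buffer stage concrete, here's a minimal single-producer/single-consumer sketch in plain C (names and sizes are illustrative, and overflow checks are omitted; in practice you'd likely reach for something like Michael Tyson's TPCircularBuffer):

    #include <stdint.h>

    #define RING_CAPACITY 8192  /* must be a power of two */

    typedef struct {
        float             data[RING_CAPACITY];
        volatile uint32_t head; /* write index (RemoteIO input callback) */
        volatile uint32_t tail; /* read index (graph-side render callback) */
    } RingBuffer;

    static void ring_write(RingBuffer *rb, const float *src, uint32_t n)
    {
        for (uint32_t i = 0; i < n; i++)
            rb->data[(rb->head + i) & (RING_CAPACITY - 1)] = src[i];
        rb->head += n;
    }

    static uint32_t ring_read(RingBuffer *rb, float *dst, uint32_t n)
    {
        uint32_t available = rb->head - rb->tail;
        if (n > available) n = available;
        for (uint32_t i = 0; i < n; i++)
            dst[i] = rb->data[(rb->tail + i) & (RING_CAPACITY - 1)];
        rb->tail += n;
        return n;
    }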
That said, let's say you have samples coming into your AUGraph. How do you tell how loud they are at a given point? You can use the Multichannel Mixer audio unit, which has a metering feature. Enable the kAudioUnitProperty_MeteringMode property on the input scope, then check the kMultiChannelMixerParam_PreAveragePower parameter to see the volume of the audio passing through the mixer at that point (values are floats from -120 to 0, representing -120 dB to 0 dB).
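A minimal sketch of that metering setup, assuming mixerUnit is the Multichannel Mixer in your graph and the mic audio arrives on the given input bus (in real code you'd enable metering once at setup rather than on every poll):

    #include <AudioUnit/AudioUnit.h>

    static float mixerInputLevelDB(AudioUnit mixerUnit, UInt32 bus)
    {
        UInt32 meteringOn = 1;
        AudioUnitSetProperty(mixerUnit,
                             kAudioUnitProperty_MeteringMode,
                             kAudioUnitScope_Input,
                             bus,
                             &meteringOn,
                             sizeof(meteringOn));

        AudioUnitParameterValue levelDB = -120.0f; /* -120 dB .. 0 dB */
        AudioUnitGetParameter(mixerUnit,
                              kMultiChannelMixerParam_PreAveragePower,
                              kAudioUnitScope_Input,
                              bus,
                              &levelDB);
        return levelDB;
    }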

Related

How to record user generated sound output on iPhone

I have a series of sounds that a user will play, rearrange, and edit etc. while using my app. When the user is finished, I want them to be able to save their work and record it to an mp3.
I don't want to play it through speakers and record it with the mic, since that will result in low sound quality and interference. I can't think of any way of doing this that doesn't require extra hardware and/or a computer.
How can I do this using just their device?
Well, I would say it can't be done with AVFoundation.
My suggestion is to use Audio Units and transform all your interactions into an audio graph. At some point you set a render notify on the RemoteIO, so every time it renders sound to the speakers you get a callback where you can write those frames/packets/data out to a file.
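A sketch of that render-notify tap, assuming gOutputFile was opened elsewhere with ExtAudioFileCreateWithURL (e.g. as AAC/m4a) and given a client format matching what the RemoteIO renders:

    #include <AudioToolbox/AudioToolbox.h>

    static ExtAudioFileRef gOutputFile; /* opened/configured elsewhere */

    static OSStatus renderNotify(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
    {
        /* Post-render, ioData holds the frames just sent to the speaker.
           ExtAudioFileWriteAsync is usable from the render thread
           (prime it once beforehand with 0 frames / NULL). */
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender)
            ExtAudioFileWriteAsync(gOutputFile, inNumberFrames, ioData);
        return noErr;
    }

    /* Installed once at setup time:
       AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, NULL); */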
I would suggest using AAC (m4a) over MP3. I am not very fond of MP3, and to be honest, as far as I know the SDK does not provide MP3 encoding, probably due to licensing issues. I could be wrong though. Check the sample code below; it's probably the best sample code you will ever find on Audio Units on the web.
AudioGraph by Tom Zic

How to listen to mic input and analyse in real time?

Hi, unfortunately I've not been able to figure out audio on the iPhone. The best I've come close to are the AVAudioRecorder/Player classes, and I know that they are no good for audio processing.
So I'm wondering if someone would be able to explain to me how to "listen" to the iPhone's mic input in chunks of, say, 1024 samples, analyse the samples, and do stuff, and just keep going like that until my app terminates or tells it to stop. I'm not looking to save any data; all I want is to analyse the data and act on it in real time.
I've attempted to understand Apple's "aurioTouch" example, but it's just way too complicated for me to understand.
So can someone explain to me how I should go about this?
If you want to analyze audio input in real time, it doesn't get a lot simpler than Apple's aurioTouch iOS sample app with source code (there is also a mirror site). You can google a bit more info on using the Audio Unit RemoteIO API for recording, but you'll still have to figure out the real-time analysis DSP portion.
The Audio Queue API is slightly simpler for getting input buffers of raw PCM audio data from the mic, but not much simpler, and it has higher latency.
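For completeness, a sketch of the Audio Queue route for grabbing mic buffers in roughly 1024-sample chunks (analyzeSamples is a placeholder for your own DSP; the format and buffer sizes are illustrative):

    #include <AudioToolbox/AudioToolbox.h>

    static void analyzeSamples(const SInt16 *samples, UInt32 count)
    {
        /* your real-time analysis here */
    }

    static void inputCallback(void *inUserData,
                              AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
    {
        analyzeSamples((const SInt16 *)inBuffer->mAudioData,
                       inBuffer->mAudioDataByteSize / sizeof(SInt16));
        AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL); /* recycle */
    }

    void startListening(void)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue;
        AudioQueueNewInput(&fmt, inputCallback, NULL, NULL, NULL, 0, &queue);

        /* Three buffers of 1024 samples each, kept in rotation. */
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, 1024 * sizeof(SInt16), &buf);
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }
        AudioQueueStart(queue, NULL);
    }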
Added later: There's also a version of aurioTouch converted to Swift here: https://github.com/ooper-shlab/aurioTouch2.0-Swift
The AVAudioPlayer/AVAudioRecorder classes won't take you there if you want to do any real-time audio processing. The Audio Toolbox and Audio Unit frameworks are the way to go. Check Apple's audio programming guide to see which framework suits your needs. And believe me, this low-level stuff is not easy and is poorly documented. CocoaDev has some tutorials where you can find sample code. Also, there is an audio DSP library, DIRAC, which I recently discovered, for tempo and pitch manipulation. I haven't looked into it much, but you might find it useful.
If all you want is samples with a minimum amount of processing by the OS, you probably want the Audio Queue API; see Audio Queue Services Programming Guide.
AVAudioRecorder is designed for recording to a file, and AudioUnit is more for "pluggable" audio processing (and on the Mac side of things, AU Lab is actually pretty cool).

Mixing sound files on an iPhone

I've got a couple of wav files and possibly an mp3 that I'd like to mix down to a single wav or mp3 file. I'm using C/C++/Obj-C (iPhone). I have really no experience with this sort of thing. If anyone could give me some pointers, I would be very grateful.
Basically what I want to do is similar things like for example Audacity can do, but programmatically. Isn't there a sound library where you can easily open audio files and "paste" them into a new one at defined positions? Where mixing is something you don't have to worry about?
Thanks.
Mixing two sound buffers of linear PCM is only a matter of adding the corresponding sample values together, making sure you don't overflow. Normally you would use floating-point values in the buffers, though, so the issue arises when you go back to the file. You should also have Core Audio available on the iPhone; it has all the means to open/read/write sound files in different formats. I think there is also a more high-level API available on the iPhone that isn't on the Mac; look it up in the Apple docs.
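A minimal sketch of that add-and-clamp mixing for 16-bit PCM (the function name is illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Mix two buffers of 16-bit linear PCM by summing samples,
       clamping so overflow doesn't wrap around. */
    static void mix_pcm16(const int16_t *a, const int16_t *b,
                          int16_t *out, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            int32_t sum = (int32_t)a[i] + (int32_t)b[i];
            if (sum >  32767) sum =  32767;
            if (sum < -32768) sum = -32768;
            out[i] = (int16_t)sum;
        }
    }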
If you are specifically looking for the features of Audacity, it uses PortAudio under the hood (looks like an MIT license). Perhaps you can just try to use that?
Read Multimedia Support as a starting point; it contains a lot of info. Here is an extract:
There are three ways to mix audio on the iPhone:
Audio Unit framework
Multichannel Mixer - "lets you mix multiple audio streams to a single stream"
3D Mixer unit - "lets you mix multiple audio streams, specify stereo output panning, manipulate sample rate"
OpenAL - used in games development
Also check the following sample out: iPhoneMultichannelMixerTest:
Two input busses are created, each with input volume controls. An overall mixer output volume control is also provided, and each bus may be enabled or disabled.
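A minimal sketch of that kind of graph, assuming Apple's Multichannel Mixer and RemoteIO units; the render callbacks or upstream sources feeding the two input busses are omitted here:

    #include <AudioToolbox/AudioToolbox.h>

    void buildMixerGraph(void)
    {
        AudioComponentDescription mixerDesc = {
            .componentType = kAudioUnitType_Mixer,
            .componentSubType = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioComponentDescription ioDesc = {
            .componentType = kAudioUnitType_Output,
            .componentSubType = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };

        AUGraph graph;
        AUNode mixerNode, ioNode;
        NewAUGraph(&graph);
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0); /* mixer -> speaker */
        AUGraphOpen(graph);

        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

        /* Per-bus input volumes and the overall mixer output volume. */
        AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Input, 0, 1.0f, 0);
        AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Input, 1, 0.5f, 0);
        AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Output, 0, 1.0f, 0);

        AUGraphInitialize(graph);
        AUGraphStart(graph);
    }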

How can I record the audio output of the iPhone? (like sounds of my app)

I want to record the sound of my iPhone app. So, like, someone is playing something on an iPhone instrument, and after that you can hear it back.
Is it possible without the mic?
Do you mean an app you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services; it can write the same AudioBufferList to a file that you would render to the RemoteIO audio unit when playing audio in your instrument app.)
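A sketch of the Extended Audio File Services side, assuming you want an AAC/m4a file and already know the PCM format you render (fileURL is yours; for a compressed destination format you may want to fill in the full ASBD via kAudioFormatProperty_FormatInfo):

    #include <AudioToolbox/AudioToolbox.h>

    ExtAudioFileRef openRecordingFile(CFURLRef fileURL,
                                      const AudioStreamBasicDescription *pcmFormat)
    {
        /* On-disk format: AAC in an .m4a container. */
        AudioStreamBasicDescription aac = {0};
        aac.mFormatID         = kAudioFormatMPEG4AAC;
        aac.mSampleRate       = pcmFormat->mSampleRate;
        aac.mChannelsPerFrame = pcmFormat->mChannelsPerFrame;

        ExtAudioFileRef file = NULL;
        ExtAudioFileCreateWithURL(fileURL, kAudioFileM4AType, &aac,
                                  NULL, kAudioFileFlags_EraseFile, &file);

        /* Client format: the PCM you actually hand it; the encoder converts. */
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(*pcmFormat), pcmFormat);
        return file;
    }

You then pass each rendered AudioBufferList to ExtAudioFileWrite (or ExtAudioFileWriteAsync from the render thread).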
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation, which you are currently using, you're always working at the level of sound files. Your code never sees the actual audio signal, so you can't 'grab' the signal that your app generates when it is used. AVAudioPlayer also does not provide any means of getting at the final signal; if you're using multiple instances of AVAudioPlayer to play multiple sounds at the same time, you wouldn't be able to get at the mixed signal either.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their times, that led to the audio being played? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
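A minimal sketch of that event-recording idea (types and names are illustrative):

    #include <stddef.h>

    /* Log each user action with a timestamp; replay the log later
       to reproduce the performance. */
    typedef struct {
        double time;    /* seconds since recording started */
        int    soundID; /* which sample/action was triggered */
    } PerformanceEvent;

    #define MAX_EVENTS 4096
    static PerformanceEvent gEvents[MAX_EVENTS];
    static size_t gEventCount = 0;

    void recordEvent(double now, int soundID)
    {
        if (gEventCount < MAX_EVENTS) {
            gEvents[gEventCount].time    = now;
            gEvents[gEventCount].soundID = soundID;
            gEventCount++;
        }
    }

    /* Playback: walk gEvents in order and trigger each soundID at its
       recorded offset (scheduling is up to your audio engine). */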

Audio on the iPhone

I'm looking to create an app that emulates a physical instrument. I've got audio samples but I want to be able to increase the pitch/frequency dynamically so I don't have to load from too many files.
Any idea which audio API will be able to do this? I reckon either OpenAL or Audio Queue Services, but am not sure which is suitable. Any links to guides/sample code are also much appreciated.
Thanks in advance.
I went down this road in 2009, trying the Audio Toolbox, Audio Queue Services, OpenAL, and finally settling on the RemoteIO AudioUnit.
Audio Toolbox is fine for basic triggered sound effects, but it wasn't able to change frequencies or loop samples.
Audio Queue Services can loop samples, but the only way I could find to adjust the playback frequency of a sample was to re-read the data from the file -- very painful. Plus, the framework is tremendously cumbersome - I'd only use it if I were trying to stream something off the Internet.
OpenAL was a godsend - I was up and running with it in under an hour, after getting my hands on the no-longer-available-from-Apple "CrashLanding" iPhone sample app. I found OpenAL to be ideally suited to games or even a musical instrument -- samples could be pre-loaded, adjusting the frequency was easy, and looping was no problem. The deal-breaker for me was that starting and stopping a looped sample would result in a nasty "pop" almost every time. Also, the built-in 3D positional audio mixer was a bit too CPU-intensive for my liking.
If your instrument does not use looped samples, I'd suggest trying the OpenAL route first - the learning curve is much less intimidating. Try to track down "SoundEngine.h", "CrashLanding" or "TouchFighter", or check out the following link:
http://benbritten.com/blog/2008/11/06/openal-sound-on-the-iphone/
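For reference, adjusting the frequency in OpenAL really is a one-liner once a sample is loaded; a minimal sketch (the buffer is assumed to have been filled elsewhere with alBufferData):

    #include <OpenAL/al.h>

    /* Play a preloaded OpenAL buffer at a new pitch, looping. */
    void playPitched(ALuint buffer, float pitch)
    {
        ALuint source;
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);   /* attach the sample */
        alSourcei(source, AL_LOOPING, AL_TRUE); /* loop it */
        alSourcef(source, AL_PITCH, pitch);     /* e.g. 1.5f = up a fifth */
        alSourcePlay(source);
    }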
Since looped samples were a requirement for me, I finally settled on Audio Units (which, on the iPhone, means the "RemoteIO" unit if you want to do input or output). It was tremendously difficult to implement - very similar to Audio Queue Services, in that the core of your implementation will be inside a "buffer callback", called several times per second to fill a buffer of outbound audio with raw SInt16 values.
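A skeleton of that buffer callback, with renderNextSample standing in for the actual synthesis (the stream format is assumed to have been set to mono interleaved SInt16 via kAudioUnitProperty_StreamFormat):

    #include <AudioUnit/AudioUnit.h>

    static SInt16 renderNextSample(void)
    {
        return 0; /* your voice mixing / synthesis here */
    }

    static OSStatus renderCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData)
    {
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
        for (UInt32 frame = 0; frame < inNumberFrames; frame++)
            out[frame] = renderNextSample(); /* fill outbound audio */
        return noErr;
    }

    /* Installed with an AURenderCallbackStruct via
       kAudioUnitProperty_SetRenderCallback on the RemoteIO unit, bus 0. */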
Ultimately, I got my instrument working beautifully with multi-note polyphony, looped samples, no popping, and minimal latency.
Unfortunately, RemoteIO is not well documented. Michael Tyson was one of the first in the field to write about RemoteIO at length, and his posts (and the comments) were very useful to me:
http://michael.tyson.id.au/2008/11/04/using-remoteio-audio-unit/
Good luck!
Edited years later: I've open-sourced the RemoteIO/AudioUnits code I alluded to above: https://github.com/glenn-barnett/hexaphone/blob/master/Classes/Instrument.m - apologies for the mess, I hope to get some time to clean up the code and comments.
Try creating an Audio Unit. I'm doing something similar, and an AU worked well for me.
Initially I used an audio queue, as it was simpler (higher level?) and synchronous; however, it was lacking in responsiveness, so I dumped it for the Audio Unit.
It sounds a bit like you're essentially recreating the wavetable-synthesis method of playing MIDI files. You might be able to find a MIDI synthesizer for the iPhone that you can use, then use your audio samples to build a wavetable set. Any time you want to play tones, you would simply send the MIDI event into the iPhone MIDI synth with your loaded wavetable set.
Another option now is AUSampler.
http://developer.apple.com/library/mac/#technotes/tn2283/_index.html