Is there another API that will enable real-time audio processing on iOS?

I was hoping to make an application that does real-time voice manipulation, like the T-Pain app. But AVAudioRecorder only enables post-processing forms of audio manipulation. Is there another API that will enable real-time audio processing?
thanks!

The iOS Audio Unit RemoteIO API allows for very low latency audio recording and playback, and the raw audio sample buffers are available for modification in between the audio record and play buffer callbacks.
See Apple's aurioTouch sample app for example source code.
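The heart of that approach is the RemoteIO render callback, where you can pull the latest microphone samples and modify them in place before they are played. Below is only a minimal sketch in C; it assumes the unit has already been configured elsewhere for 16-bit mono PCM with both input and output enabled, and that a pointer to the unit was stored as the callback's refCon.

```c
#include <AudioToolbox/AudioToolbox.h>

// Render callback for a RemoteIO unit: pull mic samples, alter them,
// and let the unit play the altered buffer. Unit setup (stream format,
// EnableIO, callback install) is assumed to have been done already.
static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit rioUnit = *(AudioUnit *)inRefCon;  // the RemoteIO unit itself

    // Pull the most recent microphone samples from input bus 1 directly
    // into the output buffers we are about to fill.
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err != noErr) return err;

    // Modify the raw samples in place before they go out on bus 0.
    // A real voice effect (pitch shifting, etc.) would go here; this
    // just halves the level as a placeholder.
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        samples[i] = (SInt16)(samples[i] / 2);   // placeholder "effect"
    }
    return noErr;
}
```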

Related

Playback and Recording simultaneously using Core Audio in iOS

I need to play and record simultaneously using Core Audio. I really do not want to use AVFoundation API (AVAudioPlayer + AVAudioRecorder) to do this as I am making a music app and cannot have any latency issues.
I've looked at the following source code from Apple:
aurioTouch
MixerHost
I've already looked into the following posts:
iOS: Sample code for simultaneous record and playback
Record and play audio Simultaneously
I am still not clear on how I can do playback and record the same thing simultaneously using Core Audio. Any pointers on how I can achieve this will be greatly appreciated. Any pointers to sample source code will also be of great help.
The RemoteIO Audio Unit can be used for simultaneous record and play. There are plenty of examples of recording using RemoteIO (aurioTouch) and of playing using RemoteIO. Just enable both the unit's input and output, and handle both buffer callbacks.
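To make that concrete, here is a rough C sketch of the setup, not a drop-in implementation: it creates a RemoteIO unit, enables recording on bus 1 (mic) and playback on bus 0 (speaker), applies one uncompressed PCM format to both paths, and installs a render callback like the one shown above. The helper's name and parameters are my own placeholders.

```c
#include <AudioToolbox/AudioToolbox.h>

// Configure one RemoteIO unit for simultaneous record and play.
// callback.inputProc should point at a render callback; format should be
// an uncompressed linear PCM AudioStreamBasicDescription.
static AudioUnit setUpRemoteIO(AURenderCallbackStruct callback,
                               AudioStreamBasicDescription format)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit rioUnit = NULL;
    AudioComponentInstanceNew(comp, &rioUnit);

    UInt32 one = 1;
    // Enable recording on the input scope of bus 1 (the microphone) ...
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));
    // ... and playback on the output scope of bus 0 (the speaker).
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Output, 0, &one, sizeof(one));

    // Use the same uncompressed PCM format on both sides.
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 1, &format, sizeof(format));
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &format, sizeof(format));

    // Install the render callback that fills bus 0 (and can pull bus 1).
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback, sizeof(callback));

    AudioUnitInitialize(rioUnit);
    AudioOutputUnitStart(rioUnit);
    return rioUnit;
}
```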

Recording audio output only from speaker of iphone excluding microphone

I am trying to record the sound from the iPhone speaker. I am able to do that, but I am unable to avoid mic input in the recorded output. I have tried sample code available on different websites with no luck.
The sample I used does the recording with Audio Units. I need to know if there is any Audio Unit property that sets the mic input volume to zero. Beyond that, I gathered from other posts that Audio Queue Services might do the job for me. Can anyone point me to sample code for an Audio Queue Services implementation? I also need to know whether there is a way of writing the data to a separate audio file before sending it as input to the speaker.
Thanks in advance
There is no public iOS API or property for recording generic audio sent to the iPhone speaker. Only mic input can be recorded.
But if you are playing audio in your app using only uncompressed samples with Audio Queues or the RemoteIO Audio Unit, you can just copy those samples to a file before you write them to the audio callback buffers. Those saved samples can be used to construct a recording.
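For example, if your playback path is a RemoteIO render callback, something along these lines will append the very samples you are about to play to a file via Extended Audio File Services. It is only a sketch: fillBuffersWithAppAudio is a hypothetical stand-in for however your app generates its audio, and the ExtAudioFileRef is assumed to have been created (and primed with one async write of zero frames) on the main thread before playback starts.

```c
#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef extFile;  // created and primed before playback starts

// Hypothetical helper: fills ioData with the app's own uncompressed samples.
extern void fillBuffersWithAppAudio(AudioBufferList *ioData, UInt32 frames);

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // Generate the audio the app is about to play.
    fillBuffersWithAppAudio(ioData, inNumberFrames);

    // Append a copy of exactly those samples to the recording file.
    // The async variant is designed to be callable from the render thread.
    ExtAudioFileWriteAsync(extFile, inNumberFrames, ioData);

    return noErr;
}
```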

Audio Recording on iOS

I've just started working on a project that requires me to do lots of audio related stuff on iOS.
This is the first time I'm working in the realm of audio, and have absolutely no idea how to go about it. So, I googled for documents, and was mostly relying on Apple docs. Firstly, I must mention that the documents are extremely confusing, and often, misleading.
Anyways, to test a recording, I used AVAudioSession and AVAudioRecorder. From what I understand, these are okay for simple recording and playback. So, here are a couple of questions I have regarding doing anything more complex:
If I wish to do any real-time processing with the audio, while recording is in progress, do I need to use Audio Queue services?
What other options do I have apart from Audio Queue Services?
What are Audio Units?
I actually got Apple's Audio Queue Services programming guide, and started writing an audio queue for recording. The "diagram" on their audio queue services guide (pg. 19 of the PDF) shows recording being done using an AAC codec. However, after some frustration and wasting a lot of time, I found out that AAC recording is not available on iOS - "Core Audio Essentials", section "Core Audio Plug-ins: Audio Units and Codecs".
Which brings me to another two questions:
What's a suitable format for recording, given Apple Lossless, iLBC, IMA/ADPCM, Linear PCM, and uLaw/aLaw? Is there a chart somewhere that someone could refer me to?
Also, if MPEG4AAC (.m4a) recording is not available using an audio queue, how is it that I can record an MPEG4AAC (.m4a) using AVAudioRecorder?!
Super thanks a ton in advance for helping me out on this. I'll super appreciate any links, directions and/or words of wisdom.
Thanks again and cheers!
For your first question: Audio Queue Services or the RemoteIO Audio Unit are the appropriate APIs for real-time audio processing. RemoteIO allows lower and more deterministic latency, but it imposes stricter real-time requirements on your callback code than Audio Queues do.
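With Audio Queue Services, the processing hook is the recording (input) callback: each filled buffer of linear PCM is handed to your code before you re-enqueue it, so you can analyze or alter the samples there. A hedged sketch, where processSamples is a hypothetical placeholder for your own effect or analysis routine:

```c
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical placeholder for whatever processing the app needs.
extern void processSamples(SInt16 *samples, UInt32 count);

// Input callback for a recording queue created with AudioQueueNewInput
// using a 16-bit linear PCM format.
static void inputCallback(void *inUserData,
                          AudioQueueRef inAQ,
                          AudioQueueBufferRef inBuffer,
                          const AudioTimeStamp *inStartTime,
                          UInt32 inNumberPacketDescriptions,
                          const AudioStreamPacketDescription *inPacketDescs)
{
    SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
    UInt32 count = inBuffer->mAudioDataByteSize / sizeof(SInt16);

    // Inspect or modify the freshly recorded samples here
    // (metering, effects, writing to a file, ...).
    processSamples(samples, count);

    // Hand the buffer back to the queue so recording continues.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```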
For creating AAC recordings, one possibility is to record raw linear PCM audio, then later use AV file services to convert the buffered raw audio into your desired compressed format.
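One way to do that conversion is with Extended Audio File Services: create the destination file with an AAC stream description, set the client data format to your linear PCM format, and each write is then encoded for you. A rough sketch, with the URL, formats, and buffer list standing in for whatever your recording step produced (error checking omitted):

```c
#include <AudioToolbox/AudioToolbox.h>

// Write a block of captured linear PCM into an AAC (.m4a) file.
// pcmFormat describes the captured samples; pcmBuffers holds frameCount frames.
static void writePCMToAAC(CFURLRef url,
                          AudioStreamBasicDescription pcmFormat,
                          AudioBufferList *pcmBuffers,
                          UInt32 frameCount)
{
    // Minimal AAC description; the encoder fills in the remaining fields.
    AudioStreamBasicDescription aacFormat = {0};
    aacFormat.mFormatID         = kAudioFormatMPEG4AAC;
    aacFormat.mSampleRate       = pcmFormat.mSampleRate;
    aacFormat.mChannelsPerFrame = pcmFormat.mChannelsPerFrame;

    ExtAudioFileRef file;
    ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &aacFormat,
                              NULL, kAudioFileFlags_EraseFile, &file);

    // Tell the file what we will hand it: raw linear PCM.
    // The built-in AAC encoder converts on write.
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(pcmFormat), &pcmFormat);

    ExtAudioFileWrite(file, frameCount, pcmBuffers);
    ExtAudioFileDispose(file);
}
```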

Is there a way to record device audio on the iPhone?

AVAudioRecorder allows the recording of external audio. However, I wish to record the audio made by my application (through numerous AVAudioPlayers); is this possible on the iPhone?
If you want to record the sounds your iOS app makes, you have to use a much lower-level API, such as the Audio Unit RemoteIO API, or Audio Queues with raw PCM audio samples.
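As an illustration of the Audio Queue route, the sketch below shows a playback callback that writes each buffer of raw PCM to an audio file just before enqueuing it for output. The pre-created AudioFileID and the buffer-filling helper are assumptions made for the sake of the example, not parts of any Apple sample.

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioFileID audioFile;      // created elsewhere with the queue's PCM format
static SInt64      packetsWritten; // running packet index into the file

// Hypothetical helper: fills the buffer with the app's own PCM output
// and returns the number of packets (= frames for linear PCM) it wrote.
extern UInt32 fillBufferWithAppAudio(AudioQueueBufferRef buffer);

static void outputCallback(void *inUserData,
                           AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer)
{
    // Fill the buffer with the sounds the app wants to play.
    UInt32 numPackets = fillBufferWithAppAudio(inBuffer);

    // Save a copy of exactly what will be played. For linear PCM,
    // no packet descriptions are needed.
    UInt32 ioPackets = numPackets;
    AudioFileWritePackets(audioFile, false, inBuffer->mAudioDataByteSize,
                          NULL, packetsWritten, &ioPackets,
                          inBuffer->mAudioData);
    packetsWritten += ioPackets;

    // Send the same samples on to the speaker.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```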

iPhone recording audio

I'm currently working on a project where it is necessary to record sound being played by the iPhone. By this, I mean recording sound being played in the background like a sound clip or whatever, NOT using the built-in microphone.
Can this be done? I am currently experimenting with the AVAudioRecorder but this only captures sound with the built-in microphone.
Any help would be appreciated!
This is possible only when your app plays audio using only the Audio Unit RemoteIO API, or only the Audio Queue API with uncompressed raw audio, and with no background audio mixed in. Then you have full access to the audio samples and can queue them up to be saved in a file.
It is not possible to record sound output of the device itself using any of the other public audio APIs.
Just to elaborate on hotpaw2's answer: if you are responsible for generating the sound, then you can retrieve it; if you are not, you cannot. You only have control over sounds generated in your own process. Yes, you can choose to stifle sounds coming from other processes, but you can't actually get the data for those sounds or process them in any way.