objective-c record audio session output - iphone

I am writing an app that generates music. I am using OpenAL to modify gain, modify pitch, mix audio, and play the resulting audio. I now need to record the audio as it is being played. I understand that OpenAL does not let you record the output audio. The other option I have found is to use Audio Units. However, because I need to mix/pitch/gain the audio and record it, it seems I would need to write all the audio processing myself so I can have access to the output buffer. Is this correct? Or is there a different iOS API I can use to do this? If not, is there a 3rd-party solution that lets me record the output (paid solutions are fine)?

You are correct.
Audio Units are the only public iOS API that allows an app to both process and record audio.
Trying to record the OpenAL output may well be a violation of Apple's rules against using non-public APIs.
The alternative may be to reimplement the portions of OpenAL you need (open-source code may exist for some of them) on top of the RemoteIO Audio Unit.

The best way to go is likely to be Core Audio, since it will give you as much flexibility as you need. Take a look into the Extended Audio File Services reference pages.
Using an Extended Audio File, you should be able to set up a file format and audio stream buffer to send the final mixed output to, and then use the ExtAudioFileWrite() function to write the samples to the file.
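ExtAudioFile hides the container details for you. For intuition only, here is a standalone sketch of roughly what a 16-bit PCM WAV writer does under the hood; it is plain C with no Core Audio dependency, all names are illustrative, and it assumes a little-endian host:

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal 16-bit PCM WAV writer -- an illustration of the kind of
 * work ExtAudioFileWrite() does for you. Assumes little-endian host. */
static int write_wav16(const char *path, const int16_t *samples,
                       uint32_t frame_count, uint16_t channels,
                       uint32_t sample_rate)
{
    uint32_t data_bytes  = frame_count * channels * (uint32_t)sizeof(int16_t);
    uint32_t byte_rate   = sample_rate * channels * (uint32_t)sizeof(int16_t);
    uint16_t block_align = (uint16_t)(channels * sizeof(int16_t));
    uint32_t riff_size   = 36 + data_bytes;
    uint32_t fmt_size    = 16;
    uint16_t pcm_format  = 1;   /* 1 = uncompressed PCM */
    uint16_t bits        = 16;

    FILE *f = fopen(path, "wb");
    if (!f) return -1;

    /* RIFF header */
    fwrite("RIFF", 1, 4, f);
    fwrite(&riff_size, 4, 1, f);
    fwrite("WAVE", 1, 4, f);

    /* fmt chunk: describes the sample layout */
    fwrite("fmt ", 1, 4, f);
    fwrite(&fmt_size, 4, 1, f);
    fwrite(&pcm_format, 2, 1, f);
    fwrite(&channels, 2, 1, f);
    fwrite(&sample_rate, 4, 1, f);
    fwrite(&byte_rate, 4, 1, f);
    fwrite(&block_align, 2, 1, f);
    fwrite(&bits, 2, 1, f);

    /* data chunk: the interleaved PCM frames */
    fwrite("data", 1, 4, f);
    fwrite(&data_bytes, 4, 1, f);
    fwrite(samples, 1, data_bytes, f);

    fclose(f);
    return 0;
}
```

In a real app you would let ExtAudioFile do this, since it also handles compressed destination formats and format conversion.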

Related

how to insert and overwrite audio file in iOS

I am developing an application that has an audio recorder. The user should be able to play an audio file and insert a recording into it, cut unwanted audio, and overwrite parts of the file.
I have seen "how to Insert, overwrite audio files - Audio Editing iPhone", but no one answered it...
At least suggest a way to implement this...
Thanks in advance...
What type of audio file are you talking about? You will almost certainly need to convert whatever you are using into PCM WAV data for this type of manipulation. Luckily, Core Audio, which others have pointed you towards, has some convenience methods for doing this.
Once you have the raw PCM data, you can insert by simply placing other PCM data at the desired point. You want to make sure you don't do something like write in the middle of a stereo frame, but beyond that, most simply formatted PCM data is pretty easy to manipulate. Think of it like a string: you can start with "Hello World" and change it to "Hello, Beautiful World" by simply inserting data in the middle.
Overwriting works on the same principle.
Once you are done with the edits, you'll need to transform the PCM data back into whatever format you had saved in before.
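The insert-at-a-frame-boundary idea above can be sketched in plain C; all names are illustrative, and 16-bit interleaved stereo is assumed:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CHANNELS 2   /* interleaved stereo: one frame = left + right sample */

/* Insert `ins_frames` frames into `src` at frame index `at`.
 * Working in whole frames keeps the channels aligned -- the
 * "don't write in the middle of a stereo frame" rule above.
 * Caller frees the returned buffer. */
static int16_t *insert_pcm(const int16_t *src, size_t src_frames,
                           const int16_t *ins, size_t ins_frames,
                           size_t at, size_t *out_frames)
{
    size_t total = src_frames + ins_frames;
    int16_t *dst = malloc(total * CHANNELS * sizeof(int16_t));
    if (!dst || at > src_frames) { free(dst); return NULL; }

    /* head of the original, then the inserted frames, then the tail */
    memcpy(dst, src, at * CHANNELS * sizeof(int16_t));
    memcpy(dst + at * CHANNELS, ins, ins_frames * CHANNELS * sizeof(int16_t));
    memcpy(dst + (at + ins_frames) * CHANNELS, src + at * CHANNELS,
           (src_frames - at) * CHANNELS * sizeof(int16_t));

    *out_frames = total;
    return dst;
}
```

Overwriting is even simpler: memcpy the new frames over the existing ones in place, with no need to grow the buffer.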
Have a look at Core Audio
Core Audio provides software interfaces for implementing audio features in applications you create for iOS and OS X. Under the hood, it handles all aspects of audio on each of these platforms. In iOS, Core Audio capabilities include recording, playback, sound effects, positioning, format conversion, and file stream parsing.

iOS Advanced Audio API for decompressing format

On iOS, is it possible to get the user's audio stream in a decompressed format? For example, the MP3 is returned as a WAV that can be used for audio analysis? I'm relatively new to the iOS platform, and I remember seeing that this wasn't possible in older iOS versions. I read that iOS 4 brought in some advanced APIs, but I'm not sure where I can find documentation/samples for these.
If you don't mind requiring iOS 4.1 and above, you could try using the AVAssetReader class and friends. In this similar question there is a full example of how to extract video frames. I would expect the same to work for audio, and the nice thing is that the reader deals with all the details of decompression. You can even do composition with AVComposition to merge several streams.
These classes are part of the AVFramework, which allows not only reading but also creating your own content.
Apple has an OpenAL example at http://developer.apple.com/library/mac/#samplecode/OpenALExample/Introduction/Intro.html where Scene.m should interest you.
The Apple documentation has this picture where the Core Audio framework clearly shows that it gives you MP3 out. It also states that you can access audio units in a more radical way if you so need.
The same Core Audio document gives also some information about using MIDI if it may help you.
Edit:
You're in luck today.
In this example an audio file is loaded and fed into an AudioUnit graph. You could fairly easily write an AudioUnit of your own to put into this graph and which analyzes the PCM stream as you see fit. You can even do it in the callback function, although that's probably not a good idea because callbacks are encouraged to be as simple as possible.

Playing notes or simple sounds on iOS?

I would like to play simple sounds that can be varied at runtime, for example being able to play sounds at different frequencies.
Basically, I would like to be able to produce a simple melody at runtime, and then play it. How do synthesizing apps do that? I'd imagine that there is a way to do it via CoreAudio.
Is there a way to do that using the SDK?
If you know how to create PCM samples of audio waveforms, you can create a waveform for your desired note duration at your desired frequency and volume, and feed that raw waveform data to either the Audio Queue API or the Audio Unit RemoteIO API.
Here's one slightly longer description of how to play a tone using these APIs.
http://atastypixel.com/blog/using-remoteio-audio-unit/
This is an excellent resource; it will get you up and running with Audio Units.
This is great too: http://cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
There are a number of ways to do that. The simplest would be to record all the notes you need, and then use a timer to create a sequence. Try AVAudioPlayer first, that's the easiest way. If you need to work with the audio data directly you can use Audio Queue Services or OpenAL.

iphone sdk: Core Audio How to continue recording to file after user stops recording by leaving the application and then re-opens it?

The iPhone's AVAudioRecorder class will not allow you to open an existing file to continue a recording. Instead, it overwrites it. I'd like to know an approach that would allow me to continue recording to an existing file using Core Audio APIs.
The best bet would be to take a look at the Audio Queue Services API. This is basically the next "deeper" level into the Core Audio stack provided by Apple. Unfortunately, the chasm between AVAudioRecorder and Audio Queue Services is vast. AQS is a C-based API and a fairly low level abstraction of the even more "raw" lowest levels of Core Audio. I would suggest reviewing the guide above, then taking a look at the example SpeakHere. It should easily be able to handle your current requirement.
No matter which API, you will have to handle the "intermediate" storage of your PCM data, probably temporarily storing it as a WAV or raw PCM, which you then reload and append with PCM data when continuing.
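A minimal sketch of that intermediate-storage idea, assuming raw 16-bit PCM as the scratch format (plain C; the function name is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Append raw PCM samples to an intermediate scratch file. Reopening in
 * append mode picks up exactly where the previous recording session
 * stopped; convert the finished raw file to its final format at the end.
 * Returns the total bytes now in the file, or -1 on error. */
static long append_pcm(const char *path, const int16_t *samples, size_t count)
{
    FILE *f = fopen(path, "ab");   /* "ab": create or append */
    if (!f) return -1;
    fwrite(samples, sizeof(int16_t), count, f);
    long total = ftell(f);         /* position == end == total size */
    fclose(f);
    return total;
}
```

Each time the user re-opens the app and resumes recording, you append the new PCM buffers; the scratch file grows seamlessly across sessions.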

How can I record the audio output of the iPhone? (like sounds of my app)

I want to record the sound of my iPhone app. For example, someone plays something on an iPhone instrument, and afterwards you can hear it back.
Is it possible without the microphone?
Do you mean an App you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (see: Extended Audio File Services, it can write the same AudioBufferList to a file that you would render to the RemoteIO Audio Unit when playing audio in your instrument app)
[Edit: removed comments on recording third-party app audio output ...]
With the AVFoundation classes you are currently using, you're always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates when it is used. AVAudioPlayer does not provide any means of getting at the final signal either, and if you're using multiple instances of AVAudioPlayer to play several sounds at the same time, you also wouldn't be able to get at the mixed signal.
You probably need to use Core Audio instead, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their timing, that led to the audio being played? Write this sequence of events to a file and read it back to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
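The record-the-actions idea can be sketched like this (plain C; the types, fields, and fixed capacity are all illustrative):

```c
#include <stddef.h>

/* Record what the user did, not the audio it produced: each event is
 * a timestamp plus whatever identifies the action (here a note id).
 * Replaying the list through the same synthesis path reproduces the
 * performance, exactly like a simple MIDI sequencer. */
typedef struct {
    double time_sec;   /* when the action happened, relative to start */
    int    note;       /* which key / sound was triggered */
} PerfEvent;

typedef struct {
    PerfEvent events[1024];
    size_t    count;
} Performance;

static void record_event(Performance *p, double t, int note)
{
    if (p->count < 1024) {
        p->events[p->count].time_sec = t;
        p->events[p->count].note     = note;
        p->count++;
    }
}

/* Replay by invoking `play` for each event in order; a real app would
 * schedule these against a clock instead of calling back-to-back. */
void replay(const Performance *p, void (*play)(double, int))
{
    for (size_t i = 0; i < p->count; i++)
        play(p->events[i].time_sec, p->events[i].note);
}
```

To "export" a performance as audio, you would replay the event list offline and render each note through your synthesis code into a file.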