iPhone SDK / Core Audio: How do I continue recording to a file after the user stops recording by leaving the application and then re-opening it?

The iPhone's AVAudioRecorder class will not allow you to open an existing file to continue a recording. Instead, it overwrites it. I'd like to know an approach that would allow me to continue recording to an existing file using Core Audio APIs.

The best bet would be to take a look at the Audio Queue Services API. This is basically the next "deeper" level into the Core Audio stack provided by Apple. Unfortunately, the chasm between AVAudioRecorder and Audio Queue Services is vast: AQS is a C-based API and a fairly low-level abstraction over the even more "raw" lowest levels of Core Audio. I would suggest reviewing the Audio Queue Services Programming Guide, then taking a look at the SpeakHere sample project. It should easily be able to handle your requirement.
Whichever API you choose, you will have to handle the "intermediate" storage of the PCM data yourself, probably by temporarily storing it as a WAV or raw PCM file, which you then reopen and append to when the recording continues.
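As a rough illustration of the append step, here is a minimal sketch using Audio File Services. It assumes the existing file is a CAF containing 16-bit mono linear PCM; the function name and parameters are placeholders for whatever your recorder actually produces:

    #import <AudioToolbox/AudioToolbox.h>

    // Append freshly captured PCM bytes to the end of an existing audio file.
    // Assumes 16-bit mono linear PCM in a CAF container (adjust to taste).
    static void AppendPCMToFile(CFURLRef fileURL, const void *pcmBytes, UInt32 byteCount)
    {
        AudioFileID file = NULL;
        if (AudioFileOpenURL(fileURL, kAudioFileReadWritePermission,
                             kAudioFileCAFType, &file) != noErr)
            return;

        // Ask how many packets (frames, for PCM) the file already holds, so
        // the new data lands after them instead of overwriting from packet 0.
        UInt64 existingPackets = 0;
        UInt32 propSize = sizeof(existingPackets);
        AudioFileGetProperty(file, kAudioFileProperty_AudioDataPacketCount,
                             &propSize, &existingPackets);

        UInt32 numPackets = byteCount / 2;   // 2 bytes per packet for 16-bit mono
        AudioFileWritePackets(file, false, byteCount, NULL,
                              (SInt64)existingPackets, &numPackets, pcmBytes);
        AudioFileClose(file);
    }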

Related

how to insert and overwrite audio file in iOS

I am developing an application that has an audio recorder. The user should be able to play the audio file and insert a recording into it, cut unwanted audio, and overwrite parts of the audio file.
I have seen "How to insert, overwrite audio files - Audio Editing iPhone?", but no one answered it.
At least suggest a way to implement this.
Thanks in advance.
What type of audio file are you talking about? You will almost certainly need to convert whatever you are using into PCM WAV data for this type of manipulation. Luckily, Core Audio, which others have pointed you towards, has some convenience methods for doing this.
Once you have the raw PCM data, you can insert by simply splicing other PCM data in at the desired point. You want to make sure you don't do something like write into the middle of a stereo frame, but beyond that, most simply formatted PCM data is pretty easy to manipulate. Think of it like a string: you can start with "Hello World" and change it to "Hello, Beautiful World" by simply inserting data in the middle.
Overwriting works on the same principle.
Once you are done with the edits, you'll need to transform the PCM data back into whatever format you had saved in before.
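To make the string analogy concrete, here is a small self-contained sketch of both operations on 16-bit stereo PCM held in an NSMutableData. The buffers and sample values are made up purely for illustration:

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            NSUInteger bytesPerFrame = 4;   // 16-bit stereo: 2 channels * 2 bytes
            // Placeholder buffers standing in for real decoded PCM data.
            NSMutableData *original = [NSMutableData dataWithLength:44100 * bytesPerFrame];
            NSData *clip = [NSData dataWithBytes:(SInt16[]){1000, -1000, 1000, -1000}
                                          length:8];   // two stereo frames

            NSUInteger insertFrame  = 22050;               // 0.5 s into a 44.1 kHz file
            NSUInteger insertOffset = insertFrame * bytesPerFrame;
            insertOffset -= insertOffset % bytesPerFrame;  // stay frame-aligned

            // Insertion: a zero-length range pushes the existing bytes apart.
            [original replaceBytesInRange:NSMakeRange(insertOffset, 0)
                                withBytes:clip.bytes
                                   length:clip.length];

            // Overwriting is the same call with a non-zero range length.
            [original replaceBytesInRange:NSMakeRange(insertOffset, clip.length)
                                withBytes:clip.bytes
                                   length:clip.length];

            NSLog(@"Edited buffer is now %lu bytes", (unsigned long)original.length);
        }
        return 0;
    }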
Have a look at Core Audio
Core Audio provides software interfaces for implementing audio features in applications you create for iOS and OS X. Under the hood, it handles all aspects of audio on each of these platforms. In iOS, Core Audio capabilities include recording, playback, sound effects, positioning, format conversion, and file stream parsing.

objective-c record audio session output

I am writing an app that generates music. I am using OpenAL to modify gain, modify pitch, mix audio, and play the resulting audio. I now need to record the audio as it is being played. I understand that OpenAL does not let you record the output audio. The other option I have found is to use Audio Units. However, because I need to mix/pitch/gain the audio and record it, it seems I would need to write all the audio processing myself so that I can have access to the output buffer. Is this correct? Or is there a different iOS API I can use to do this? If not, is there a third-party solution that already lets me record the output (paid solutions are fine)?
You are correct.
Audio Units are the only iOS public API that allows an app to both process and then record audio.
Trying to record the OpenAL output may well be a violation of Apple's rules against using non-public APIs.
The alternative may be to completely rewrite the portions of OpenAL you need (open source may exist for some portions) to run on top of the RemoteIO Audio Unit.
The best way to go is likely to be Core Audio, since it will give you as much flexibility as you need. Take a look at the Extended Audio File Services reference pages.
Using an extended audio file, you should be able to set up a file format and an audio stream buffer to send the final mixed output to, and then use the ExtAudioFileWrite() function to write the samples to the file.
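A minimal sketch of that setup, assuming 16-bit stereo PCM at 44.1 kHz going into a CAF file (both are assumptions; match them to your mixer's real output):

    #import <AudioToolbox/AudioToolbox.h>

    // Create a CAF file ready to receive 16-bit stereo PCM at 44.1 kHz.
    static ExtAudioFileRef CreateOutputFile(CFURLRef url)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 4;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 4;

        ExtAudioFileRef file = NULL;
        ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &fmt, NULL,
                                  kAudioFileFlags_EraseFile, &file);
        return file;
    }

    // Then, for each rendered buffer of 'frames' frames:
    //     ExtAudioFileWrite(file, frames, bufferList);
    // and when the mix is finished:
    //     ExtAudioFileDispose(file);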

Audio Recording on iOS

I've just started working on a project that requires me to do lots of audio related stuff on iOS.
This is the first time I'm working in the realm of audio, and I have absolutely no idea how to go about it. So, I googled for documents and was mostly relying on Apple docs. Firstly, I must mention that the documents are extremely confusing and often misleading.
Anyways, to test a recording, I used AVAudioSession and AVAudioRecorder. From what I understand, these are okay for simple recording and playback. So, here are a couple of questions I have regarding doing anything more complex:
If I wish to do any real-time processing with the audio, while recording is in progress, do I need to use Audio Queue services?
What other options do I have apart from Audio Queue Services?
What are Audio Units?
I actually got Apple's Audio Queue Services Programming Guide and started writing an audio queue for recording. The diagram in that guide (p. 19 of the PDF) shows recording being done using an AAC codec. However, after some frustration and a lot of wasted time, I found out that AAC recording is not available on iOS (see "Core Audio Essentials", section "Core Audio Plug-ins: Audio Units and Codecs").
Which brings me to another two questions:
What's a suitable format for recording, given the choices of Apple Lossless, iLBC, IMA/ADPCM, Linear PCM, and uLaw/aLaw? Is there a chart somewhere that someone can point me to?
Also, if MPEG4AAC (.m4a) recording is not available using an audio queue, how is it that I can record an MPEG4AAC (.m4a) using AVAudioRecorder?!
Thanks a ton in advance for helping me out on this. I'll really appreciate any links, directions, and/or words of wisdom.
Thanks again and cheers!
For your first question: Audio Queue Services or the RemoteIO Audio Unit are the appropriate APIs for real-time audio processing. RemoteIO allows lower and more deterministic latency, but it comes with stricter real-time requirements than Audio Queues.
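For the Audio Queue route, a bare-bones recording setup looks roughly like the sketch below. The mono 16-bit format, the buffer sizes, and the empty processing step are all placeholder choices:

    #import <AudioToolbox/AudioToolbox.h>

    // Called by the queue each time a capture buffer fills up.
    static void InputCallback(void *inUserData, AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
    {
        // Near-real-time processing of the captured samples goes here.
        SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
        UInt32 sampleCount = inBuffer->mAudioDataByteSize / sizeof(SInt16);
        (void)samples; (void)sampleCount;   // ... analyze or modify ...

        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);   // recycle the buffer
    }

    static AudioQueueRef StartRecordingQueue(void)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue = NULL;
        AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue);
        for (int i = 0; i < 3; i++) {            // keep a few buffers in flight
            AudioQueueBufferRef buf = NULL;
            AudioQueueAllocateBuffer(queue, 4096, &buf);
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }
        AudioQueueStart(queue, NULL);
        return queue;
    }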
For creating AAC recordings, one possibility is to record to raw linear PCM audio, then later convert the buffered raw audio into your desired compressed format.
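One way to do that after-the-fact conversion is with Extended Audio File Services; a hedged sketch, where the file URLs and the 44.1 kHz stereo client format are assumptions:

    #import <AudioToolbox/AudioToolbox.h>

    // Convert a finished linear PCM recording into an AAC .m4a file.
    static void ConvertPCMToAAC(CFURLRef srcURL, CFURLRef dstURL)
    {
        ExtAudioFileRef src = NULL, dst = NULL;
        ExtAudioFileOpenURL(srcURL, &src);

        // The intermediate (client) format both files agree on: plain PCM.
        AudioStreamBasicDescription client = {0};
        client.mSampleRate       = 44100.0;
        client.mFormatID         = kAudioFormatLinearPCM;
        client.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                                 | kLinearPCMFormatFlagIsPacked;
        client.mChannelsPerFrame = 2;
        client.mBitsPerChannel   = 16;
        client.mBytesPerFrame    = 4;
        client.mFramesPerPacket  = 1;
        client.mBytesPerPacket   = 4;

        // Destination file format: AAC. Leave the compressed fields at zero
        // and let Core Audio fill them in.
        AudioStreamBasicDescription aac = {0};
        aac.mSampleRate       = 44100.0;
        aac.mFormatID         = kAudioFormatMPEG4AAC;
        aac.mChannelsPerFrame = 2;

        ExtAudioFileCreateWithURL(dstURL, kAudioFileM4AType, &aac, NULL,
                                  kAudioFileFlags_EraseFile, &dst);
        ExtAudioFileSetProperty(src, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(client), &client);
        ExtAudioFileSetProperty(dst, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(client), &client);

        enum { kFramesPerChunk = 4096 };
        char buffer[kFramesPerChunk * 4];
        AudioBufferList list;
        list.mNumberBuffers = 1;
        list.mBuffers[0].mNumberChannels = 2;

        for (;;) {
            list.mBuffers[0].mDataByteSize = sizeof(buffer);
            list.mBuffers[0].mData = buffer;
            UInt32 frames = kFramesPerChunk;
            ExtAudioFileRead(src, &frames, &list);   // pull PCM from the source
            if (frames == 0) break;                  // end of file
            ExtAudioFileWrite(dst, frames, &list);   // encode to AAC on write
        }
        ExtAudioFileDispose(src);
        ExtAudioFileDispose(dst);
    }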

iPhone recording audio

I'm currently working on a project where it is necessary to record sound being played by the iPhone. By this, I mean recording sound being played in the background like a sound clip or whatever, NOT using the built-in microphone.
Can this be done? I am currently experimenting with the AVAudioRecorder but this only captures sound with the built-in microphone.
Any help would be appreciated!
This is possible only when your app plays its audio exclusively through the RemoteIO Audio Unit API or through the Audio Queue API with uncompressed raw audio, and with no background audio mixed in. Then you have full access to the audio samples and can queue them up to be saved in a file.
It is not possible to record sound output of the device itself using any of the other public audio APIs.
Just to elaborate on hotpaw2's answer: if you are responsible for generating the sound, then you can retrieve it, but if you are not, you cannot. You only have control over sounds in your own process. Yes, you can choose to stifle sounds coming from other processes, but you can't actually get the data for those sounds or process them in any way.

How can I record the audio output of the iPhone? (like sounds of my app)

I want to record the sound of my iPhone app. So, for example, someone plays something on an iPhone instrument, and afterwards you can hear it again.
Is this possible without the microphone?
Do you mean an app you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services; it can write the same AudioBufferList to a file that you would render to the RemoteIO Audio Unit when playing audio in your instrument app.)
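For instance, if your instrument renders through the RemoteIO unit, the render callback can hand the very same buffers to the file writer. A sketch, assuming an already-created ExtAudioFileRef is passed in via inRefCon:

    #import <AudioToolbox/AudioToolbox.h>

    static OSStatus RenderAndRecord(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
    {
        // 1. Fill ioData with the instrument's samples as usual.
        //    ... synthesis / mixing code here ...

        // 2. Hand the very same buffers to the file writer. The Async variant
        //    is safe on the render thread, provided you primed it once from
        //    the main thread with ExtAudioFileWriteAsync(file, 0, NULL).
        ExtAudioFileRef file = (ExtAudioFileRef)inRefCon;
        ExtAudioFileWriteAsync(file, inNumberFrames, ioData);
        return noErr;
    }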
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation, which you are currently using, you're always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates when it is used. AVAudioPlayer does not provide any means of getting at the final signal either, and if you're using multiple instances of AVAudioPlayer to play multiple sounds at the same time, you also wouldn't be able to get at the mixed signal.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their times, that lead to the audio being played? Write this sequence of events to a file and read it back in to reproduce the 'performance'; it's a bit like your own MIDI sequencer :)
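A minimal sketch of that idea; the NoteEvent fields and the playback block are placeholders for whatever actually identifies and triggers a sound in your instrument:

    #import <Foundation/Foundation.h>

    // One recorded user action.
    @interface NoteEvent : NSObject
    @property (nonatomic) NSTimeInterval time;   // seconds since recording began
    @property (nonatomic) NSInteger note;        // which key/string was hit
    @end
    @implementation NoteEvent
    @end

    @interface PerformanceRecorder : NSObject
    - (void)start;
    - (void)noteOn:(NSInteger)note;              // call from your touch handlers
    - (void)replayUsingBlock:(void (^)(NSInteger note))play;
    @end

    @implementation PerformanceRecorder {
        NSMutableArray *events;
        NSDate *startDate;
    }
    - (void)start {
        events = [NSMutableArray array];
        startDate = [NSDate date];
    }
    - (void)noteOn:(NSInteger)note {
        NoteEvent *e = [[NoteEvent alloc] init];
        e.time = -[startDate timeIntervalSinceNow];
        e.note = note;
        [events addObject:e];
    }
    - (void)replayUsingBlock:(void (^)(NSInteger))play {
        for (NoteEvent *e in events) {
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                                         (int64_t)(e.time * NSEC_PER_SEC)),
                           dispatch_get_main_queue(),
                           ^{ play(e.note); });  // re-trigger at the original offset
        }
    }
    @end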