Which iOS audio API is designed for streaming large local audio files from disk with low latency?

I need to play very large audio files from disk, and streaming them rather than loading them entirely into memory would be the most efficient approach.
Must I use OpenAL for this, or is there another option?

I would recommend using Audio Queues. They're simple to use, well-documented, and high-level. And I'd recommend using Extended Audio File Services to get the audio data out of the files.

On iOS, for uncompressed audio data, you can mmap huge files, and then just copy the samples from memory to the RemoteIO callback buffers. The RemoteIO Audio Unit can be configured for even lower latency than Audio Queues.
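For uncompressed data, the mmap approach can be sketched in portable POSIX C. The helper name and 16-bit mono format here are illustrative assumptions, not a Core Audio API; in a real render callback the `memcpy` destination would be `ioData->mBuffers[0].mData`:

```c
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: map an uncompressed PCM file and copy a window of
 * samples, the way a RemoteIO render callback would fill its buffers. */
static int fill_callback_buffer(const char *path, long frame_offset,
                                int16_t *dst, size_t frames)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return -1; }

    /* Map the whole file read-only; the kernel pages audio in on demand,
     * so even a huge file is never read fully into memory. */
    void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (base == MAP_FAILED) return -1;

    const int16_t *samples = (const int16_t *)base;
    /* In a real render callback this copy targets the AudioBufferList. */
    memcpy(dst, samples + frame_offset, frames * sizeof(int16_t));

    munmap(base, (size_t)st.st_size);
    return 0;
}
```

In practice you would keep the mapping alive for the lifetime of playback rather than remapping per callback, since `mmap`/`munmap` are not safe to call from a real-time audio thread.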

Definitely use an Audio Unit; it's Apple's lowest-level audio API, so it's well suited to low latency.
First initialize an Audio Unit with the RemoteIO component, then read buffers from your file in a loop and supply them to the Audio Unit from its render callback.

iOS Audio for a DirectSound programmer - what to use?

I'm a DirectSound programmer new to iOS. I want to implement the ability to play streaming multichannel audio, sometimes looping back to a specified point when the stream is finished playing (think of a song that has a little intro ditty that is played once, then the song loops indefinitely, skipping that intro).
With DirectSound and libvorbis, at least, I'd feed a chunk of the OGG data into the libvorbis decoder, it'd spit out some PCM, and I'd fill the buffer and queue it up to play right after the current sound buffer is finished, swapping between two buffers.
I'm probably looking at using some kind of hardware-supported format on iOS, like AAC. What programming APIs should I be using that will allow me to do multichannel audio and loop points? Any input is appreciated, thanks!
The iOS AVAssetReader class can be used to read compressed audio file data into a PCM buffer. Either the Audio Queue API (simpler) or the RemoteIO Audio Unit (lower latency) can be used to play buffers of PCM data.
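The asker's two-buffer swap carries over directly to either of those playback APIs. A hypothetical sketch in plain C, where `decode_chunk` stands in for the real decoder (libvorbis, AVAssetReader, etc.) and the struct/function names are made up:

```c
#include <stddef.h>
#include <stdint.h>

/* Two-buffer scheme: while one buffer plays, the decoder refills the
 * other, then the roles swap when playback of a buffer completes. */
#define FRAMES_PER_BUF 4

typedef struct {
    int16_t pcm[2][FRAMES_PER_BUF]; /* the pair of swap buffers */
    int     playing;                /* index of the buffer being played */
    long    next_frame;             /* decode position in the stream */
} double_buffer;

/* Stand-in decoder: writes a ramp so the data flow is observable. */
static void decode_chunk(int16_t *dst, size_t frames, long first_frame) {
    for (size_t i = 0; i < frames; i++)
        dst[i] = (int16_t)(first_frame + (long)i);
}

/* Called when the playing buffer finishes: swap roles, then refill the
 * buffer that just went idle so it is ready for the next swap. Returns
 * the buffer that should now be handed to the playback API. */
static int16_t *on_buffer_complete(double_buffer *db) {
    int idle = db->playing;   /* the buffer that just finished */
    db->playing ^= 1;         /* the other one starts playing */
    decode_chunk(db->pcm[idle], FRAMES_PER_BUF, db->next_frame);
    db->next_frame += FRAMES_PER_BUF;
    return db->pcm[db->playing];
}
```

Looping back to an intro-skip point is then just a matter of resetting `next_frame` to the loop start inside the refill step. In a real player you would prime both buffers before starting playback.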

Audio Recording on iOS

I've just started working on a project that requires me to do lots of audio related stuff on iOS.
This is the first time I'm working in the realm of audio, and have absolutely no idea how to go about it. So, I googled for documents, and was mostly relying on Apple docs. Firstly, I must mention that the documents are extremely confusing, and often, misleading.
Anyways, to test a recording, I used AVAudioSession and AVAudioRecorder. From what I understand, these are okay for simple recording and playback. So, here are a couple of questions I have regarding doing anything more complex:
If I wish to do any real-time processing with the audio, while recording is in progress, do I need to use Audio Queue services?
What other options do I have apart from Audio Queue Services?
What are Audio Units?
I actually got Apple's Audio Queue Services programming guide, and started writing an audio queue for recording. The "diagram" on their audio queue services guide (pg. 19 of the PDF) shows recording being done using an AAC codec. However, after some frustration and wasting a lot of time, I found out that AAC recording is not available on iOS - "Core Audio Essentials", section "Core Audio Plug-ins: Audio Units and Codecs".
Which brings me to another two questions:
What's a suitable format for recording, given Apple Lossless, iLBC, IMA/ADPCM, Linear PCM, and uLaw/aLaw? Is there a comparison chart somewhere that someone could point me to?
Also, if MPEG4AAC (.m4a) recording is not available using an audio queue, how is it that I can record an MPEG4AAC (.m4a) using AVAudioRecorder?!
Super thanks a ton in advance for helping me out on this. I'll super appreciate any links, directions and/or words of wisdom.
Thanks again and cheers!
For your first question, Audio Queue Services or the RemoteIO Audio Unit are the appropriate APIs for real-time audio processing, with RemoteIO allowing lower and more deterministic latency, but with stricter real-time requirements than Audio Queues.
For creating AAC recordings, one possibility is to record raw linear PCM audio, then later use the file conversion APIs (e.g. Extended Audio File Services) to convert the buffered raw audio into your desired compressed format.

Is it possible to access decoded audio data from Audio Queue Services?

I have an app in the App Store for streaming compressed music files over the network. I'm using Audio Queue Services to handle the playback of the files.
I would like to do some processing on the files as they are streamed. I tested the data in the audio buffers, but it is not decompressed audio. If I'm streaming an mp3 file, the buffers are just little slices of mp3 data.
My question is: is it ever possible to access the uncompressed PCM data when using Audio Queue Services? Am I forced to switch to using Audio Units?
You could use Audio Conversion Services to do your own MP3-to-PCM conversion, apply your effect, then put PCM into the queue instead of MP3. It's not going to be terribly easy, but you'd still be spared the threading challenges of doing this directly with Audio Units (which would require you to do your own conversion anyway, and then probably use a CARingBuffer to send samples between the download/convert thread and the real-time callback thread from the I/O unit).

Play and render stream using audio queues

I'm currently playing a stream in my iOS app, but one feature we'd like to add is visualization of the output waveform. I use an output audio queue to play the stream, but have found no way to read the output buffer. Can this be achieved using audio queues, or must it be done with a lower-level API?
To visualize, you presumably need PCM (uncompressed) data, so if you're pushing some compressed format into the queue like MP3 or AAC, then you never see the data you need. If you were working with PCM (maybe you're uncompressing it yourself with the Audio Conversion APIs), then you could visualize before putting samples into the queue. But then the problem would be latency - you want to visualize samples when they play, not when they go into the queue.
For latency reasons alone, you probably want to be using audio units.
It turns out this cannot be done with audio queues. In the end, I needed to use Audio Units to implement the streamer.

Playback MP3 using RemoteIO and AudioUnits on iPhone... possible?

I want to playback an mp3 rather than an uncompressed file using RemoteIO / AudioUnit.
Using uncompressed files obviously uses far too much disk space (30MB vs 3MB for mp3).
Is this even possible? If so, can you provide a little code headstart?
Thanks a million.
How low-level do you want to go? You could use the AudioHardware API:
err = AudioDeviceAddIOProc(deviceID, ioProc, self);
and in your ioProc fill the buffers yourself, listen for hardware changes, and deal with real-time threading and a lot of other low-level stuff.
I'm assuming you're already decoding the mp3 data with AudioConverter and know how to do that.
Yes, it is possible. But it requires multiple audio API use and multiple threads.
Due to the real-time constraints of the Audio Unit buffer callback thread, you will have to do the conversion of compressed files to raw PCM samples outside the Audio Unit callback. You could use Extended Audio File Services or AVAssetReader to do the conversion to uncompressed samples outside the Audio Unit callback. However, you don't need to uncompress the entire file at once. A short uncompressed buffer of a fraction of a second will likely do, as long as you keep filling it far enough ahead of the Audio Unit callback's buffer consumption rate. This can be done in a separate timer-driven thread that monitors buffer consumption and decompresses just enough audio accordingly, perhaps into a ring buffer or circular FIFO.
What you will end up with will be similar to a rewrite of the Audio Queue API layer, but with more customizability.
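The ring buffer mentioned above might look like this single-producer/single-consumer sketch in plain C. The capacity and names are made up, not a Core Audio API, and a production version would make `head`/`tail` C11 atomics since the decode thread and the render callback touch them concurrently:

```c
#include <stddef.h>
#include <stdint.h>

/* SPSC FIFO: the decode thread rb_push()es PCM it has uncompressed ahead
 * of time; the render callback rb_pop()s without blocking or allocating. */
#define RB_CAPACITY 1024  /* frames; power of two keeps the index math cheap */

typedef struct {
    int16_t data[RB_CAPACITY];
    size_t  head;  /* next write index (touched by producer only) */
    size_t  tail;  /* next read index (touched by consumer only)  */
} pcm_ring;

static size_t rb_available(const pcm_ring *rb) {   /* frames queued */
    return (rb->head - rb->tail) & (RB_CAPACITY - 1);
}

static size_t rb_push(pcm_ring *rb, const int16_t *src, size_t n) {
    size_t free_frames = RB_CAPACITY - 1 - rb_available(rb);
    if (n > free_frames) n = free_frames;      /* clamp to available space */
    for (size_t i = 0; i < n; i++)
        rb->data[(rb->head + i) & (RB_CAPACITY - 1)] = src[i];
    rb->head = (rb->head + n) & (RB_CAPACITY - 1);
    return n;  /* frames actually written */
}

static size_t rb_pop(pcm_ring *rb, int16_t *dst, size_t n) {
    size_t avail = rb_available(rb);
    if (n > avail) n = avail;  /* underrun: caller should output silence */
    for (size_t i = 0; i < n; i++)
        dst[i] = rb->data[(rb->tail + i) & (RB_CAPACITY - 1)];
    rb->tail = (rb->tail + n) & (RB_CAPACITY - 1);
    return n;  /* frames actually read */
}
```

The short-fall return value from `rb_pop` is what the render callback would use to decide how much silence to emit on an underrun, and `rb_available` is what the timer-driven decode thread would poll to decide how much more to decompress.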