What's the most suitable sound/audio framework for iPhone OpenGL-ES games?

I'm writing a game for iPhone/iPod touch.
My engine uses OpenGL ES, which means the game needs real performance.
(A realtime game, not a static, board-game-like title.)
I looked at the basic sound frameworks on the iPhone. There are several (Core Audio, Audio Toolbox, OpenAL, ...), but I cannot determine the differences between them in detail.
I suspect OpenAL will give the best performance, but that's just a guess with nothing to back it up. And since the iPhone/iPod touch is music-player hardware, I don't know its audio features in depth.
I'm new to all of these frameworks, so I'll have to study whichever one I pick. And now I'm choosing one.
The features I need are:
Delay-less playback. Sound effects need to give realtime feedback.
Streamed playback of long music with a very small memory footprint.
Volume control per playback of a sound effect.
Mixing. Multiple different sound effects playable at the same time (around 4 or more).
Other features required for games:
Hardware acceleration (if it exists)
Realtime filtering effects (reverb, echo, 3D, ...) if possible
...
Can you recommend a framework for my game? Some explanation of each framework would also be much appreciated.

You can do everything you want with OpenAL. It's what I'd recommend for a game.
Plus, it's the only one of these frameworks offering 3D positional audio, which often goes hand in hand with a 3D game.
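By way of illustration, here is a minimal OpenAL playback sketch in Swift. It is a sketch only: the framework is C-based (and deprecated since iOS 9), and the generated beep stands in for a real PCM sound effect you would normally load from a file.

```swift
import Foundation
import OpenAL

// Minimal OpenAL sketch: open the default device, create a context, upload one
// PCM buffer (a generated 440 Hz beep), and play it on a source.
func playBeep() {
    guard let device = alcOpenDevice(nil),
          let context = alcCreateContext(device, nil) else { return }
    alcMakeContextCurrent(context)

    // 0.2 s of 16-bit mono PCM at 22050 Hz.
    let rate: Int32 = 22050
    let samples = (0..<Int(rate) / 5).map { i in
        Int16(sin(2.0 * .pi * 440.0 * Double(i) / Double(rate)) * 8000.0)
    }

    var buffer: ALuint = 0
    alGenBuffers(1, &buffer)
    samples.withUnsafeBytes { raw in
        alBufferData(buffer, AL_FORMAT_MONO16, raw.baseAddress,
                     ALsizei(raw.count), rate)
    }

    var source: ALuint = 0
    alGenSources(1, &source)
    alSourcei(source, AL_BUFFER, ALint(bitPattern: buffer))
    alSourcef(source, AL_GAIN, 0.8)            // per-source volume control
    alSource3f(source, AL_POSITION, 1, 0, 0)   // 3D position relative to the listener
    alSourcePlay(source)                       // returns immediately; OpenAL mixes sources for you
}
```

Per-source gain, positional audio, and free mixing of simultaneous sources cover most of the wishlist above; streamed background music is the one item better served elsewhere.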

OpenAL, Core Audio, AudioToolbox, etc. are wrappers around the same thing: Apple's own audio processing stack. OpenAL is just a different interface with the same performance as Core Audio, since it sends commands to the same underlying machinery.
There are several other "audio engines" out there that are likewise just wrappers.
At the risk of tooting my own horn: Superpowered is the only audio SDK that outperforms Apple's Core Audio on mobile devices. It's specifically designed to outperform every one of those, with a lower memory footprint, CPU load, and battery usage. For example, the Superpowered reverb is 5x faster than Apple's. See http://superpowered.com/reverb/

Related

Using graphics hardware for audio processing in iphone app

We are developing an iPhone app that needs to process audio data in real time, but we are struggling with performance. The bottlenecks are the audio effects, which are in fact quite simple, but the performance hit is noticeable when several are added.
Most of the audio effects code is written in C.
We think there are two places we could use GPU hardware to speed things up: using OpenCL for effects and the hardware for interpolation/smoothing. We are fairly new to this and don't know where to begin.
You probably mean OpenGL, as OpenCL is only present on the desktop. Yes, you could use OpenGL ES 2.0 programmable shaders for this if you wanted to perform some very fast parallel processing, but that would be extremely complex to pull off.
You might first want to look at the Accelerate framework, which has hardware-accelerated functions for exactly the kinds of tasks needed in audio processing. A great place to start is Apple's WWDC 2010 session 202, "The Accelerate framework for iPhone OS", along with their "Taking Advantage of the Accelerate Framework" article.
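To make the Accelerate suggestion concrete, here is a small Swift sketch using the vDSP vector routines for a wet/dry mix. The `mix` function and its buffers are invented for illustration, and it assumes equal-length inputs:

```swift
import Accelerate

// Scale a "wet" effect signal and add it to the "dry" signal: the kind of
// per-sample loop that vDSP vectorizes on the CPU's SIMD hardware.
func mix(dry: [Float], wet: [Float], wetGain: Float) -> [Float] {
    precondition(dry.count == wet.count)
    var gain = wetGain

    var scaledWet = [Float](repeating: 0, count: wet.count)
    vDSP_vsmul(wet, 1, &gain, &scaledWet, 1, vDSP_Length(wet.count))   // scaledWet = wet * gain

    var out = [Float](repeating: 0, count: dry.count)
    vDSP_vadd(dry, 1, scaledWet, 1, &out, 1, vDSP_Length(dry.count))   // out = dry + scaledWet
    return out
}
```

Chaining a handful of vDSP calls per render buffer is usually far cheaper than shipping audio to and from the GPU.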
Also, don't dismiss Hans' suggestion that you profile your code first, because your performance bottleneck might be somewhere you don't expect.
You might get better DSP acceleration by coding for the ARM NEON SIMD unit. NEON is designed for DSP operations and can pipeline multiple single-precision floating-point operations per cycle, whereas getting audio data into and out of GPU memory may be possible but probably won't be fast.
But you might want to profile your code first to see whether something else is the bottleneck. The iPhone 4 CPU can easily keep up with multiple FFTs and IIR filters on a real-time audio stream.

OpenAL vs. AVAudioPlayer/AVAudioRecorder on iPhone

What is the difference between OpenAL and AVAudioPlayer on the iPhone? It seems that both can be used for playing/recording audio. When would you use either? What advantages/features does each provide?
OpenAL is an advanced audio playback engine. You have to get a lot more "down and dirty" with it than you do with AVAudioPlayer.
OpenAL is often used in games. It provides 3D sound positioning, which is where things like distance-related parameters come into play. It is a C-based API, while AVAudioPlayer is Objective-C. If you don't know medium-to-advanced C programming, you are going to struggle with OpenAL.
AVAudioPlayer is best if you just need basic playback and recording. If you need more control, then OpenAL or Audio Queues are the way to go (though Audio Queues are also a C-based API). Many people seem to prefer OpenAL over Audio Queues because it's a cross-platform library and works similarly to OpenGL, which game programmers are already quite familiar with.
In most cases outside of gaming or advanced audio-synchronization situations, AVAudioPlayer is the way to go and works great. Even in games, I often see people use a combination of OpenAL (for sound effects) and AVAudioPlayer (for music playback), as sketched below.
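For the music half of that combination, here is a minimal AVAudioPlayer sketch in Swift, assuming a hypothetical "theme.mp3" in the app bundle:

```swift
import AVFoundation

// Looping background music; AVAudioPlayer streams the compressed file
// rather than decoding it all into memory up front.
final class MusicPlayer {
    private var player: AVAudioPlayer?

    func playTheme() {
        guard let url = Bundle.main.url(forResource: "theme", withExtension: "mp3") else { return }
        player = try? AVAudioPlayer(contentsOf: url)
        player?.numberOfLoops = -1   // -1 means loop indefinitely
        player?.volume = 0.6
        player?.prepareToPlay()      // preload buffers to cut start-up latency
        player?.play()
    }
}
```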
OpenAL gives you more control if you want to change audio parameters such as distance-related attenuation or pattern-based playback. AVAudioPlayer is pretty basic, but you can achieve similar control with Audio Queues and Audio Units using the lower-level APIs. In a nutshell, OpenAL does a lot with a snap of the fingers, but whether it fits depends on what you're looking for. Hope this helps.

What's the difference between the Apple audio frameworks?

In the documentation I see several Apple frameworks for audio. All of them seem to be targeted at playing and recording audio. So I wonder what the big differences are between these?
Audio Toolbox
Audio Unit
AV Foundation
Core Audio
Did I miss a guide that gives a good overview of all these?
I made a brief graphical overview of Core Audio and its constituent frameworks (originally a diagram; summarized here in text):
The framework closest to the hardware is Audio Unit. On top of that sit OpenAL and the Audio Toolbox with Audio Queue. Above those are the Media Player and AVFoundation (audio & video) frameworks.
Now it depends on what you want to do. Just a small recording? Use AVFoundation, which is the easiest to use. (Media Player has no options for recording; it is, as the name says, just a media player.)
Do you want to do serious real-time signal processing? Use Audio Unit. But believe me, this is the hardest way. :-)
With iOS 8.0 Apple introduced AVAudioEngine, an Objective-C/Swift audio-graph system in AVFoundation that encapsulates some of the dirty C business of Audio Units. Given the complexity of Audio Units, it is well worth a look; a sketch follows.
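Here is a hedged Swift sketch of such an AVAudioEngine graph, routing a player node through a reverb into the output; "effect.caf" is a placeholder asset name:

```swift
import AVFoundation

// AVAudioEngine graph (iOS 8+): player -> reverb -> main mixer -> hardware.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()
reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 30   // percent wet

engine.attach(player)
engine.attach(reverb)
engine.connect(player, to: reverb, format: nil)
engine.connect(reverb, to: engine.mainMixerNode, format: nil)

if let url = Bundle.main.url(forResource: "effect", withExtension: "caf"),
   let file = try? AVAudioFile(forReading: url) {
    try? engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)  // queue the file on the node
    player.play()
}
```

A few lines buy you the mixing and realtime effects that previously required raw Audio Unit code.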
Further readings in the Apple Documentation:
Core Audio Overview: Introduction
Multimedia Programming Guide
Audio & Video Starting Point
Core Audio is the lowest-level of all the frameworks and also the oldest.
Audio Toolbox is just above Core Audio and provides many different APIs that make it easier to deal with sound but still gives you a lot of control. There's ExtAudioFile, AudioConverter, and several other useful APIs.
Audio Unit is a framework for working with audio processing chains for both sampled audio data and MIDI. It's where the mixer and the various filters and effects such as reverb live.
AV Foundation is a newer and fairly high-level API for recording and playing audio on iOS. All of these are available on both OS X and iOS, though AV Foundation requires OS X 10.7+.
Core Audio is not actually a framework but an infrastructure that contains many different frameworks. Any audio that comes out of your iOS device's speaker is, in fact, managed by Core Audio.
The lowest level of Core Audio you can reach is the Audio Units, which you work with through the AudioToolbox and AudioUnit frameworks.
The AudioToolbox framework also provides somewhat higher-level abstractions for playing/recording audio via Audio Queues, and for managing various audio formats via the Converter and File Services.
Finally, AV Foundation provides high-level access for playing a specific file, and Media Player gives you access to (and playback of) your iPod library.
This site has a short and excellent overview of the core features of the different APIs:
http://cocoawithlove.com/2011/03/history-of-ios-media-apis-iphone-os-20.html
Here you can find an overview of all iOS and OSX audio frameworks:
https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/CoreAudioOverview/WhatsinCoreAudio/WhatsinCoreAudio.html#//apple_ref/doc/uid/TP40003577-CH4-SW4

Sound effects in iPhone game

I'm making an OpenGL game for iPhone, and I'm about to start adding sound effects to the app. I wonder what the best framework is for this purpose.
Is AV Foundation my best option? Are there others I'm missing, like OpenAL perhaps?
General strength/weakness summary of the iPhone sound APIs from a game perspective:
AVFoundation: plays long compressed files. No low-level access, high latency. Good for a theme song or background music; bad for short-lived effects.
System sounds: play short (think 0-5 sec) sounds. Must be PCM or IMA4 in a .aif, .wav, or .caf container. Fire-and-forget (you can't stop a sound once it starts). C-based API. Appropriate for short sound effects (taps, clicks, bangs, crashes); see the sketch after this list.
OpenAL: 3D spatialized audio. The API resembles OpenGL and is a natural accompaniment to it. Easy to mix multiple sources. Audio needs to be PCM (probably loaded via Core Audio's Audio File Services). Pretty significant low-level access. Potentially very low latency.
Audio Queue: streams playback from a source you provide (reading from a file, from the network, software synthesis, etc.). C-based. Can be fairly low-latency. Not really ideal for many game tasks: background music is better suited to AVFoundation, shorter sounds to system sounds, and mixing to OpenAL or Audio Units. Can record from the mic.
Audio Units: the lowest public level of Core Audio. Extremely low latency (< 30 ms). C, and hard-core C at that. Everything must be PCM. The multi-channel mixer unit lets you mix sources. Can record.
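That fire-and-forget system-sound path is only a few lines; here's a Swift sketch, assuming a hypothetical short "tap.caf" in the bundle:

```swift
import Foundation
import AudioToolbox

// Register a short PCM/IMA4 sound once, then fire it on demand.
var tapSound: SystemSoundID = 0
if let url = Bundle.main.url(forResource: "tap", withExtension: "caf") {
    AudioServicesCreateSystemSoundID(url as CFURL, &tapSound)
}
AudioServicesPlaySystemSound(tapSound)   // no volume control; cannot be stopped once started
```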
Be sure to set up your audio session appropriately, meaning you declare a category that indicates how you interact with the rest of the audio on the device (allow/disallow iPod playback in the background, honor/ignore the ring/silent switch, etc.). AV Foundation has the Objective-C version of this, and Core Audio has somewhat more powerful equivalents; a sketch of the AVFoundation route follows.
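For example, a minimal Swift sketch of the AVFoundation session setup; the .ambient category here is just one plausible choice (it honors the ring/silent switch and mixes with iPod playback, common for games whose audio is not essential):

```swift
import AVFoundation

// Declare how the game's audio coexists with the rest of the device's audio.
do {
    try AVAudioSession.sharedInstance().setCategory(.ambient)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
```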
Kowalski is another game-oriented sound engine that runs on the iPhone/iPad (and OS X and Windows).
You might also want to check out Finch, an OpenAL sound-effect engine written exactly with games in mind.

Most performant audio layer?

I'm curious which of the audio layers available on the iPhone is the most performant. Currently I've used the SystemSoundID method and the AVAudioPlayer method, and I'm wondering whether it's worth investigating Audio Queues or OpenAL. Are there significant performance gains to be had?
Audio is a complex topic, and most of it is handled by hardware, so there are no performance gains to be had just by changing APIs.
The different APIs are for different tasks:
SystemSound is for short notification sounds (at most 30 seconds, per Apple's docs).
AudioQueue is for everything longer than a SystemSound.
AVAudioPlayer is just an Objective-C layer on top of AudioQueue, and you don't lose any performance to that layer. (So if AVAudioPlayer is working for you, stay with it!)
OpenAL is for sound effects.
What about FMOD for the iPhone? It's mostly used for game development and is available for various platforms.
I've been reading about very low-level, very low-latency audio using the RemoteIO audio unit. Take a look at this article and the subsequent (long) discussion: Using RemoteIO audio units. I wouldn't recommend going down this path unless the higher-level libraries completely fail for your application. The author found very distinct performance differences between the different approaches, some quite unexpected. YMMV. A skeleton of the RemoteIO setup follows.
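For a sense of what that path involves, here is a hedged Swift skeleton of the RemoteIO setup; the render callback below just writes silence where a real app would render its mixed samples:

```swift
import Foundation
import AudioToolbox

// Describe and instantiate the RemoteIO output unit.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

var unit: AudioUnit?
if let component = AudioComponentFindNext(nil, &desc) {
    AudioComponentInstanceNew(component, &unit)
}

// Render callback: invoked on the real-time audio thread to pull samples.
let render: AURenderCallback = { _, _, _, _, _, ioData in
    if let abl = ioData {
        for buffer in UnsafeMutableAudioBufferListPointer(abl) {
            memset(buffer.mData, 0, Int(buffer.mDataByteSize))  // silence placeholder
        }
    }
    return noErr
}

if let unit = unit {
    var callback = AURenderCallbackStruct(inputProc: render, inputProcRefCon: nil)
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))
    AudioUnitInitialize(unit)
    AudioOutputUnitStart(unit)   // the hardware now pulls audio from the callback
}
```

Everything inside the callback must be allocation-free and lock-free, which is exactly why this path is reserved for when the higher-level APIs fall short.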