What's the difference between the Apple audio frameworks? (iPhone)

In the documentation I see several Apple frameworks for audio. All of them seem to be targeted at playing and recording audio. So I wonder what the big differences are between these?
Audio Toolbox
Audio Unit
AV Foundation
Core Audio
Did I miss a guide that gives a good overview of all these?

I made a brief graphical overview of Core Audio and its constituent frameworks:
The framework closest to the hardware is Audio Unit. Built on top of that are OpenAL and Audio Toolbox with Audio Queue. At the top you find the Media Player and AV Foundation (audio and video) frameworks.
Now it depends on what you want to do: for just a simple recording, use AV Foundation, which is the easiest to use. (Media Player has no options for recording; it is, as the name says, just a media player.)
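For illustration, a minimal Swift sketch of the AV Foundation route; the file name and the AAC settings are placeholders chosen only for this example:

    import AVFoundation

    // A minimal recording sketch; the file name and settings are
    // illustrative, not required by the framework.
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("take.m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,   // AAC in an .m4a container
        AVSampleRateKey: 44_100.0,
        AVNumberOfChannelsKey: 1
    ]

    do {
        // The session must allow input before the recorder can capture anything.
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: [])
        try AVAudioSession.sharedInstance().setActive(true)

        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.record()        // start capturing; call recorder.stop() when done
    } catch {
        print("Recording setup failed: \(error)")
    }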
Do you want to do serious real-time signal processing? Use Audio Unit. But believe me, this is the hardest way. :-)
With iOS 8.0 Apple introduced AVAudioEngine, an Objective-C/Swift-based audio graph system in AV Foundation. It encapsulates some of the dirty C plumbing of Audio Units. Given the complexity of Audio Unit, it is worth a look.
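A small Swift sketch of what that looks like; the file name "loop.caf" is a placeholder:

    import AVFoundation

    // One player node routed through the main mixer -- the smallest useful graph.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    do {
        guard let fileURL = Bundle.main.url(forResource: "loop", withExtension: "caf") else {
            fatalError("Missing audio file")
        }
        let file = try AVAudioFile(forReading: fileURL)

        // The mixer node gives you per-input volume and pan without any C code.
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

        try engine.start()
        player.scheduleFile(file, at: nil, completionHandler: nil)  // play as soon as possible
        player.play()
    } catch {
        print("Engine setup failed: \(error)")
    }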
Further readings in the Apple Documentation:
Core Audio Overview: Introduction
Multimedia Programming Guide
Audio & Video Starting Point

Core Audio is the lowest-level of all the frameworks and also the oldest.
Audio Toolbox is just above Core Audio and provides many different APIs that make it easier to deal with sound but still gives you a lot of control. There's ExtAudioFile, AudioConverter, and several other useful APIs.
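For example, a short Swift sketch using ExtAudioFile to inspect a file's native format (the path here is a placeholder):

    import AudioToolbox

    let url = URL(fileURLWithPath: "/tmp/sound.caf") as CFURL

    var extFile: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url, &extFile) == noErr, let file = extFile else {
        fatalError("Could not open the file")
    }

    // Ask the file for its on-disk data format.
    var format = AudioStreamBasicDescription()
    var size = UInt32(MemoryLayout.size(ofValue: format))
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat, &size, &format)
    print("Sample rate: \(format.mSampleRate), channels: \(format.mChannelsPerFrame)")

    ExtAudioFileDispose(file)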
Audio Unit is a framework for working with audio processing chains for both sampled audio data and MIDI. It's where the mixer and the various filters and effects such as reverb live.
AV Foundation is a newer and fairly high-level API for recording and playing audio on iPhone OS. All of these frameworks are available on both OS X and iOS, though AV Foundation requires OS X 10.7+.

Core Audio is not actually a framework, but an infrastructure that contains many different frameworks. Any audio that comes out of your iOS device's speaker is, in fact, managed by Core Audio.
The lowest-level in Core Audio that you can get is by using Audio Units, which you can work with by using the AudioToolbox and the AudioUnit frameworks.
The AudioToolbox framework also provides somewhat higher-level abstractions for playing and recording audio using Audio Queues, and for managing various audio formats using the various Converter and File Services.
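As a rough Swift sketch of the Audio Queue route (the PCM format values here are just one plausible choice):

    import AudioToolbox

    // Called whenever the queue needs another buffer; a real app would fill
    // `buffer` with PCM samples before re-enqueueing it.
    let outputCallback: AudioQueueOutputCallback = { _, queue, buffer in
        AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
    }

    // 16-bit stereo interleaved PCM at 44.1 kHz.
    var format = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
        mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)

    var queue: AudioQueueRef?
    AudioQueueNewOutput(&format, outputCallback, nil, nil, nil, 0, &queue)
    // Next steps: AudioQueueAllocateBuffer, prime the buffers, AudioQueueStart.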
Finally, AV Foundation provides high level access to playing one specific file, and MediaPlayer gives you access (and playback) to your iPod library.

This site has a short and excellent overview of the core features of the different APIs:
http://cocoawithlove.com/2011/03/history-of-ios-media-apis-iphone-os-20.html

Here you can find an overview of all iOS and OSX audio frameworks:
https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/CoreAudioOverview/WhatsinCoreAudio/WhatsinCoreAudio.html#//apple_ref/doc/uid/TP40003577-CH4-SW4

Related

AudioToolbox framework capabilities

I want to know if AudioToolbox.framework will support an array of page view controllers changing in sync with audio, programmatically. From the Apple documentation I found that Audio Queue Services lets you record, play, pause, loop, and synchronize audio, but when they say synchronize audio, are they referring to page-change synchronization or something else?
I already have AVFoundation.framework in my app to play an audio file.
It's a 'Core' AV library. You'll have to write your own code to make it interact with or manipulate your UI; the library does not rely on either AppKit or UIKit.
Sync: consider it along the lines of accurate timing of audio playback.

Where should I start in making industry standard audio in my iPhone applications?

I am trying to get started with advanced audio in the iPhone SDK. I really want to make professional-level audio components. I know the basics (e.g. how to use AVAudioPlayer), but I don't know what to do for the more complicated sorts of audio (e.g. oscillation and audio cones). Does anyone know where to go for this? (I tried researching online, and all that came up were simplistic audio components.)
Core Audio. That's where you'll find an OpenAL implementation. You might also want to look at the Audio Processing Graph API (also part of core audio).
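A bare-bones Swift sketch of the Audio Processing Graph API mentioned there (AUGraph; note that Apple has since deprecated it in favor of AVAudioEngine):

    import AudioToolbox

    // A graph containing a single RemoteIO node, the unit that talks to the
    // iOS audio hardware. Mixers and effects would be added as further nodes.
    var graph: AUGraph?
    NewAUGraph(&graph)

    var ioDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)

    var ioNode: AUNode = 0
    if let graph = graph {
        AUGraphAddNode(graph, &ioDesc, &ioNode)
        AUGraphOpen(graph)         // instantiate the underlying audio units
        AUGraphInitialize(graph)
        AUGraphStart(graph)        // audio begins flowing via render callbacks
    }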

Audio frameworks in iPhone

I would like to know the following information about the iPhone audio system:
Hierarchy of the audio frameworks in iPhone OS.
I know that there are three main audio frameworks in iPhone OS, i.e.:
AVFoundation Framework
CoreAudio Framework
OpenAL Framework
What audio formats are supported by each of the above frameworks? I mean, do all the frameworks support all audio formats, or does each framework depend on the audio formats it supports?
Thank You
For AVFoundation and CoreAudio, please check this link -- LINK
And for OpenAL, this link might help you -- Link

What's the most suitable sound/audio framework for iPhone OpenGL-ES games?

I'm writing a game for iPhone/iPod.
My engine uses OpenGL ES, which means the game requires some performance.
(Real-time games, not static board-game-style games.)
I looked at the basic sound frameworks on the iPhone; there are several (Core Audio, Audio Toolbox, OpenAL, ...), but I cannot determine their differences in detail.
I think OpenAL will give the best performance, but that's just a guess with nothing to back it up. And since the iPhone/iPod is music-player hardware, I don't know its in-depth features.
I'm new to all of these frameworks, so I will have to study one of them, and I'm now choosing which.
The features I require are:
Delay-less playback. Sound effects should give real-time feedback.
Streamed long music playback with very small memory footprint.
Volume control per playback of sound effect.
Mixing. Multiple different sound effects can be played at the same time (around 4 or more).
Other features required for games.
Hardware acceleration (if exists)
Realtime filtering effect (reverb, echo, 3D, ...) if possible.
...
Can you recommend a framework for my game? Some explanation of each framework would also be much appreciated.
You can do everything you want with OpenAL. It's what I'd recommend for a game.
Plus, it's the only framework for 3D positional audio which often goes hand-in-hand with a 3D game.
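A minimal Swift sketch of OpenAL's positional model (buffer loading is omitted; you would typically decode PCM with Audio File Services first, and note that Apple has since deprecated OpenAL):

    import OpenAL

    // Open the default device and make a context current.
    let device = alcOpenDevice(nil)
    let context = alcCreateContext(device, nil)
    alcMakeContextCurrent(context)

    var source: ALuint = 0
    alGenSources(1, &source)

    // Place the source two units to the listener's right; OpenAL pans and
    // attenuates it automatically as the source or listener moves.
    alSource3f(source, AL_POSITION, 2.0, 0.0, 0.0)
    // alSourcei(source, AL_BUFFER, ALint(buffer))  // attach decoded PCM
    // alSourcePlay(source)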
OpenAL, Core Audio, AudioToolbox, etc. are wrappers around the same thing: namely, Apple's own audio processing features. OpenAL is just a different interface, but it has the same performance as Core Audio, as it sends commands to the same underlying system.
There are several other “audio engines” out there that are just wrappers.
At risk of tooting my own horn, Superpowered is the only audio SDK that outperforms Apple’s Core Audio on mobile devices. It’s specifically designed to outperform every single one of those, with lower memory footprint, CPU load and battery usage. For example, the Superpowered reverb is 5x faster than Apple’s. See http://superpowered.com/reverb/

Sound effects in iPhone game

I'm making an OpenGL game for iPhone, and I'm about to start adding sound effects to the app. I wonder what the best framework is for this purpose.
Is AV Foundation my best option? Are there any others I'm missing, like OpenAL perhaps?
General strength/weakness summary of iPhone sound APIs from a game perspective:
AVFoundation: plays long compressed files. No low-level access, high latency. Good for theme song or background music. Bad for short-lived effects.
System sounds: plays short (think 0-5 sec) sounds. Must be PCM or IMA4 in .aif, .wav, or .caf. Fire-and-forget (can't stop it once it starts). C-based API. Appropriate for short sound effects (taps, clicks, bangs, crashes); see the sketch after this list.
OpenAL: 3D spatialized audio. API resembles OpenGL and is a natural accompaniment to it. Easy to mix multiple sources. Audio needs to be PCM (probably loaded by Core Audio's "Audio File Services"). Pretty significant low-level access. Potentially very low latency.
Audio Queue: stream playback from a source you provide (reading from file, from network, software synthesis, etc.). C-based. Can be fairly low-latency. Not really ideal for a lot of game tasks: background music is better suited to AVFoundation, shorter sounds to system sounds, and mixing to OpenAL or Audio Units. Can record from mic.
Audio Units: lowest public level of Core Audio. Extremely low latency (< 30 ms). C, and hard-core C at that. Everything must be PCM. Multi-channel mixer unit lets you mix sources. Can record.
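The system-sounds entry above, sketched in Swift ("click.caf" is a placeholder for a short PCM file):

    import AudioToolbox

    // Fire-and-forget playback: once started, it cannot be paused or stopped.
    if let url = Bundle.main.url(forResource: "click", withExtension: "caf") {
        var soundID: SystemSoundID = 0
        AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
        AudioServicesPlaySystemSound(soundID)
        // When finished with it for good: AudioServicesDisposeSystemSoundID(soundID)
    }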
Be sure you set up your audio session appropriately, meaning you declare a category that indicates how you interact with the rest of audio on the device (allow/disallow iPod playback in the background, honor/ignore ring/silent switch, etc.). AV Foundation has the Obj-C version of this, and Core Audio has somewhat more powerful equivalents.
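For instance, a Swift sketch of declaring a category (.ambient honors the ring/silent switch and mixes with iPod playback; .playback does the opposite):

    import AVFoundation

    do {
        try AVAudioSession.sharedInstance().setCategory(.ambient, mode: .default, options: [])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }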
Kowalski is another game-oriented sound engine that runs on the iPhone/iPad (and OS X and Windows).
You might want to check out Finch, an OpenAL sound effect engine written exactly with games in mind.