Is AudioSession actually the same thing as OpenAL?
They are quite different in purpose.
OpenAL is a low level, cross-platform API for playing and controlling sounds.
AudioSession, as the documentation puts it, is a C interface for managing an application’s audio behavior in the context of other applications. You may want to take a look at AVAudioSession which is a convenient Objective-C alternative to AudioSession.
You would typically use Audio Sessions for getting sound hardware information, determining if other applications are playing sounds, specifying what happens to those sounds when your application also tries to play sounds, etc.
Audio Sessions are all about managing the environment in which your application plays sounds. Even sounds played using OpenAL are subject to the rules imposed by your application's audio session.
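As a very small illustration of that (a sketch only; AVAudioSessionCategoryAmbient is just an example category, pick whichever fits your app), configuring a session with the Objective-C AVAudioSession API looks roughly like this:

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: declare how this app's audio should coexist with other
// audio on the device. AVAudioSessionCategoryAmbient is only an example
// category -- it mixes with iPod playback and respects the silent switch.
static void ConfigureAudioSession(void)
{
    NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    if (![session setCategory:AVAudioSessionCategoryAmbient error:&error]) {
        NSLog(@"Could not set category: %@", error);
    }
    if (![session setActive:YES error:&error]) {
        NSLog(@"Could not activate the session: %@", error);
    }

    // Every sound played afterwards -- via AVAudioPlayer, OpenAL, or anything
    // else -- is governed by this session's category.
}
```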
You should really check out the docs. There is a lot to cover.
Related
I am developing an application in which I want to record sounds and then play back the recorded sound file. I know which frameworks to use for this, but how do I actually implement it programmatically with those frameworks?
You can refer to this link: How do I record audio on iPhone with AVAudioRecorder?
I have implemented this code in one of my apps and it works completely fine.
For playing the sound back, you can use AVAudioPlayer.
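As a rough sketch of how the two fit together (the class name, file location, and recorder settings below are just example values, not anything the linked answer prescribes):

```objc
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>   // for kAudioFormatAppleIMA4

// Hypothetical helper class; keep strong references to the recorder and
// player so ARC doesn't release them mid-recording / mid-playback.
@interface Recorder : NSObject
@property (nonatomic, strong) AVAudioRecorder *recorder;
@property (nonatomic, strong) AVAudioPlayer *player;
@end

@implementation Recorder

- (NSURL *)fileURL {
    return [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.caf"]];
}

- (void)startRecording {
    NSDictionary *settings = @{ AVFormatIDKey:         @(kAudioFormatAppleIMA4),
                                AVSampleRateKey:       @44100.0,
                                AVNumberOfChannelsKey: @1 };
    NSError *error = nil;
    self.recorder = [[AVAudioRecorder alloc] initWithURL:[self fileURL]
                                                settings:settings
                                                   error:&error];
    [self.recorder prepareToRecord];
    [self.recorder record];
}

- (void)stopAndPlayBack {
    [self.recorder stop];

    NSError *error = nil;
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:[self fileURL]
                                                         error:&error];
    [self.player prepareToPlay];
    [self.player play];
}

@end
```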
Hope this helps.
The best way to do it - and I am talking from painful experience here - is with the RemoteIO audio unit. You can also do it with AudioQueue, but it has a higher latency, and the queue type approach becomes very problematic.
So, I think that they are really different tools for different jobs. Note that you won't play a sound file as such. You will play the contents of a buffer held in memory. As long as the buffer is not too large, this should not be an issue.
So, going with RemoteIO, you will find this blog and tutorial very useful. It includes code samples.
Using RemoteIO audio unit, by Michael Tyson
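To give a feel for what working at the RemoteIO level involves, here is a bare-bones sketch (error checking and stream-format setup omitted; filling ioData from your in-memory buffer inside the render callback is the part you write yourself):

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>

// The system calls this whenever it needs more samples. Copying
// inNumberFrames frames of PCM from your buffer into ioData is your job.
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    // TODO: fill ioData with audio from your in-memory buffer
    return noErr;
}

static AudioUnit StartRemoteIOUnit(void)
{
    // Find and instantiate the RemoteIO output unit.
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(component, &unit);

    // Hook the render callback to the output element (bus 0).
    AURenderCallbackStruct callback = { RenderCallback, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback, sizeof(callback));

    // (In real code you would also set kAudioUnitProperty_StreamFormat here.)
    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    return unit;
}
```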
What is the difference between OpenAL and AVAudioPlayer on the iPhone? It seems that both can be used for playing/recording audio. When would you use either? What advantages/features does each provide?
Thanks!
-MT
OpenAL is an advanced audio playback engine. You have to get a lot more "down-and-dirty" with it than you do with AVAudioPlayer.
OpenAL is often used in games. It provides 3D sound positioning, which is where things like distance-related parameters come into play. It is a C-based API, while AVAudio is Objective-C. If you don't know medium-to-advanced C programming, then you are going to struggle with OpenAL.
AVAudioPlayer is best if you just need basic playback and recording. If you need more control, then OpenAL or Audio Queues are the way to go (though Audio Queues are also a C-based API). Many people seem to prefer OpenAL over Audio Queues because it's a cross-platform library and works similarly to OpenGL, which game programmers are already quite familiar with.
In most cases outside of gaming or advanced audio synchronization situations, AVAudio is the way to go and works great. Even in games, I often see people use a combination of OpenAL (for sound effects) and AVAudio (for music playback).
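To make the "more control" point concrete, here is the kind of per-source tweaking OpenAL exposes that AVAudioPlayer doesn't (a fragment only; it assumes the source was already created with alGenSources and has a PCM buffer attached, and the parameter values are arbitrary examples):

```objc
#include <OpenAL/al.h>

// Fragment: per-source parameters on an existing OpenAL source. The source
// is assumed to have been generated earlier with alGenSources() and to have
// a PCM buffer attached; the parameter values are arbitrary examples.
static void PlayPositionedEffect(ALuint source)
{
    alSourcef(source, AL_GAIN, 0.8f);                   // per-source volume
    alSourcef(source, AL_PITCH, 1.2f);                  // playback rate / pitch shift
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, -1.0f); // 3D position relative to the listener
    alSourcePlay(source);                               // mixes with any other playing sources
}
```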
OpenAL gives you more control if you want to adjust audio parameters such as distance-related attenuation or pattern-based playback. AVAudioPlayer is pretty basic, but you can achieve similar control with Audio Queues and Audio Units using the lower-level APIs. In a nutshell, OpenAL makes some of these things easy, but which tool fits depends on what you are looking for. Hope this helps.
I'm having trouble choosing the right audio playback technology. There are a ton of audio technologies on the iPhone, and it's confusing.
What I need to do is this:
start playing short sounds ranging between 0.1 and 2 seconds
high-quality playback, no crackle (I've heard some of the iPhone audio playback technologies produce a crackle at the start or end of a sound, which is bad!)
ability to start playing a sound while another is already playing (two, three, or more sounds at the same time)
What would you suggest here, and why? Thanks :-)
There are basically four options for playing audio on the iPhone:
Audio Toolbox. Easy, but only good for playing sound effects in applications (sample code).
Audio Queue Services. Very powerful, can do anything. C API, pretty messy to work with. Callbacks, buffers, pain.
AVAudioPlayer. About the easiest option. Can play compressed audio; with a simple wrapper you can easily play multiple instances of the same sample at once (uncompressed audio only, as there is only one hardware audio decoder — see the sketch after this answer). Starting to play a sound with AVAudioPlayer seems to lag by about 20 ms, which could be a problem.
OpenAL. Decent compromise between complexity and features. Sounds do not lag, you can play multiple sounds just fine, but you have to do a lot of the work yourself. I’ve written a sound engine called Finch that can help you.
Don’t know much about crackling, never experienced it. I think there were some issues with playing seamless compressed loops with AVAudioPlayer; that can be overcome by saving the loop without compression.
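The "simple wrapper" mentioned for AVAudioPlayer can be as small as this sketch (the class and method names are made up; the key idea is one player instance per triggered sound, held strongly until it finishes):

```objc
#import <AVFoundation/AVFoundation.h>

// Hypothetical wrapper: create a fresh AVAudioPlayer per trigger so the same
// (uncompressed) sample can overlap with itself and with other sounds.
@interface SoundPool : NSObject <AVAudioPlayerDelegate>
@property (nonatomic, strong) NSMutableSet *activePlayers;
@end

@implementation SoundPool

- (instancetype)init {
    if ((self = [super init])) {
        _activePlayers = [NSMutableSet set];
    }
    return self;
}

- (void)playEffectAtURL:(NSURL *)url {
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    if (!player) { NSLog(@"Failed to load %@: %@", url, error); return; }

    player.delegate = self;
    [self.activePlayers addObject:player];   // hold a strong reference while it plays
    [player prepareToPlay];
    [player play];
}

// Drop the reference once playback finishes so finished players don't pile up.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    [self.activePlayers removeObject:player];
}

@end
```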
I'm writing a game for iPhone/iPod.
My engine uses OpenGL ES, which means the game needs decent performance.
(A realtime game, not a static board-game-like one.)
I looked at the basic sound frameworks on the iPhone; there are several (Core Audio, Audio Toolbox, OpenAL, ...), but I can't determine the differences between them in detail.
I think OpenAL will give the best performance, but that's just a guess with nothing to back it up. And although the iPhone/iPod is music-player hardware, I don't know its audio features in depth.
I'm new to all of these frameworks, so I'll have to study one of them, and I'm now choosing which.
The features I require are:
Low-latency playback: sound effects should play as immediate, realtime feedback.
Streamed long music playback with a very small memory footprint.
Per-playback volume control for sound effects.
Mixing: multiple different sound effects can play at the same time (around 4 or more).
Other features required for games:
Hardware acceleration (if it exists)
Realtime filtering effects (reverb, echo, 3D, ...) if possible.
...
Can you recommend a framework for my game? Some explanation of each framework would also be much appreciated.
You can do everything you want with OpenAL. It's what I'd recommend for a game.
Plus, it's the only framework for 3D positional audio which often goes hand-in-hand with a 3D game.
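If you haven't used OpenAL before, the basic playback path looks roughly like this (a sketch only; error checking and PCM decoding are omitted, the format and sample rate are example values, and in practice you'd decode the file to PCM first, e.g. with Audio File Services):

```objc
#include <OpenAL/al.h>
#include <OpenAL/alc.h>

// Rough sketch: open the device, create a context, upload a PCM buffer,
// attach it to a source and play. Every source is mixed independently,
// which covers playing several simultaneous effects.
static ALuint SetUpAndPlay(const void *pcmData, ALsizei pcmSize)
{
    ALCdevice  *device  = alcOpenDevice(NULL);
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alGenSources(1, &source);

    // pcmData/pcmSize would come from decoding your sound file to
    // 16-bit mono PCM; 44100 Hz is just an example rate.
    alBufferData(buffer, AL_FORMAT_MONO16, pcmData, pcmSize, 44100);

    alSourcei(source, AL_BUFFER, buffer);
    alSourcef(source, AL_GAIN, 1.0f);   // per-playback volume control
    alSourcePlay(source);               // very low latency once the buffer is resident
    return source;
}
```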
OpenAL, Core Audio, Audio Toolbox, etc. are all front ends to the same underlying audio machinery: Apple’s own audio processing. OpenAL is just a different interface, with the same performance as Core Audio, because it ultimately sends commands to the same place.
There are several other “audio engines” out there that are just wrappers.
At risk of tooting my own horn, Superpowered is the only audio SDK that outperforms Apple’s Core Audio on mobile devices. It’s specifically designed to outperform every single one of those, with lower memory footprint, CPU load and battery usage. For example, the Superpowered reverb is 5x faster than Apple’s. See http://superpowered.com/reverb/
I'm making an OpenGL game for iPhone, and I'm about to start adding sound effects to the app. I wonder what the best framework for this purpose is.
Is AV Foundation my best option? Are there any others I'm missing, like OpenAL perhaps?
General strength/weakness summary of iPhone sound APIs from a game perspective:
AVFoundation: plays long compressed files. No low-level access, high latency. Good for theme song or background music. Bad for short-lived effects.
System sounds: plays short (think 0-5 sec) sounds. Must be PCM or IMA4 in .aif, .wav, or .caf. Fire-and-forget (can't stop it once it starts). C-based API. Appropriate for short sound effects (taps, clicks, bangs, crashes); see the sketch after this answer.
OpenAL: 3D spatialized audio. API resembles OpenGL and is a natural accompaniment to it. Easy to mix multiple sources. Audio needs to be PCM (probably loaded by Core Audio's "Audio File Services"). Pretty significant low-level access. Potentially very low latency.
Audio Queue: stream playback from a source you provide (reading from file, from network, software synthesis, etc.). C-based. Can be fairly low-latency. Not really ideal for a lot of game tasks: background music is better suited to AVFoundation, shorter sounds to system sounds, and mixing to OpenAL or Audio Units. Can record from mic.
Audio Units: lowest public level of Core Audio. Extremely low latency (< 30 ms). C, and hard-core C at that. Everything must be PCM. Multi-channel mixer unit lets you mix sources. Can record.
Be sure you set up your audio session appropriately, meaning you declare a category that indicates how you interact with the rest of audio on the device (allow/disallow iPod playback in the background, honor/ignore ring/silent switch, etc.). AV Foundation has the Obj-C version of this, and Core Audio has somewhat more powerful equivalents.
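As a quick illustration of the system-sound route from the list above (a sketch; "tap.caf" is a placeholder file name, and the sound must be a short PCM or IMA4 .caf/.wav/.aif):

```objc
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Sketch: fire-and-forget playback of a short bundled sound effect with
// System Sound Services. "tap.caf" is a placeholder file name.
static void PlayTapSound(void)
{
    static SystemSoundID soundID = 0;
    if (soundID == 0) {
        NSURL *url = [[NSBundle mainBundle] URLForResource:@"tap" withExtension:@"caf"];
        AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &soundID);
    }
    // No way to stop it once started, and no per-sound volume control.
    AudioServicesPlaySystemSound(soundID);
}
```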
Kowalski is another game-oriented sound engine that runs on the iPhone/iPad (and OS X and Windows).
You might want to check out Finch, an OpenAL sound-effect engine written exactly with games in mind.