Sound Engine with good memory management (iPhone/iPad)

I'm looking for a simple sound engine without advanced effects but with good buffer memory management. At a minimum it must track all playing sounds, unload unused sound buffers (while keeping everything currently playing), and adjust gain for groups of sounds. Support for streaming input and compressed formats would be an advantage.
PS: FMOD and BASS are good engines but are too expensive for such modest requirements.
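To make the requirement concrete, here is a rough sketch of the kind of buffer bookkeeping I mean, written over OpenAL. All names (SoundBuffer, sound_cache_purge, ...) are hypothetical:

/* Hypothetical sketch: reference-count OpenAL buffers so unused ones
   can be purged while anything attached to a playing source stays
   resident. Real code would also handle loading and per-group gain. */
#include <OpenAL/al.h>
#include <stdbool.h>

#define MAX_BUFFERS 64

typedef struct {
    ALuint buffer;    /* OpenAL buffer handle */
    int    refCount;  /* number of sources currently playing it */
    bool   loaded;    /* is the PCM data resident in memory? */
} SoundBuffer;

static SoundBuffer gCache[MAX_BUFFERS];

/* Called when a source starts playing a buffer. */
void sound_cache_retain(SoundBuffer *sb) { sb->refCount++; }

/* Called when a source finishes; the buffer becomes a purge candidate. */
void sound_cache_release(SoundBuffer *sb) { if (sb->refCount > 0) sb->refCount--; }

/* Free every buffer that no source is playing right now. */
void sound_cache_purge(void) {
    for (int i = 0; i < MAX_BUFFERS; i++) {
        if (gCache[i].loaded && gCache[i].refCount == 0) {
            alDeleteBuffers(1, &gCache[i].buffer);
            gCache[i].loaded = false;
        }
    }
}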

I have a couple of references that should be helpful to you:
The Kowalski Engine is a real time audio engine written in C, based on a hierarchical mix bus system.
http://kowalski.sourceforge.net/
The CLUNK C++ library provides support for real-time 3D (binaural) sound generation. It puts virtually no limitations on the developer.
http://sourceforge.net/projects/clunk/
Easy Objective-C interface to OpenAL, AVAudioPlayer, and audio session management.
https://github.com/kstenerud/ObjectAL-for-iPhone
I believe these should help you get a solution with good memory management in place.

I've found CocosDenshion (part of Cocos2d) to be easy to use and to have simple memory management.

Have you tried STK? It can interface with Core Audio on iOS. I strongly recommend it: it has all the important building blocks for sound synthesis without any additional baggage (unlike, say, CLAM), is very lightweight, and is highly portable.

I'm using SoundMaster engine.
It's super simple and has good memory management.

Related

Sound Effect Library/Extension for OpenAL (running on iOS)?

I want to do some DSP effect processing and create effects like flanger, echo, etc.
Can this be done via OpenAL, or should I use an entirely different framework/library?
Since iOS 5.0, some DSP effects are natively supported by OpenAL.
For example, reverb is supported with emulation of more than 10 different spaces (Small/Medium/Large Room, Medium/Large Hall, Plate, Medium/Large Chamber, Cathedral, and several variations).
You can find a good reference implementation in the ObjectAL wrapper. The repository is available at https://github.com/kstenerud/ObjectAL-for-iPhone
Grab the source from this repository, load "ObjectAL.xcodeproj" and run the ObjectALDemo target on any iOS 5.0 device (should also work on the simulator). This will give you a good starting point and feeling of what the reverb effect is capable of. I personally recommend taking advantage of the ObjectAL library instead of working with OpenAL directly.
Good luck with your project!
Just write your own audio library. iOS devices don't have hardware acceleration for OpenAL. It isn't particularly difficult to do, and then you can also use Apple's Audio Units (some of which are hardware accelerated).

C++ audio library for cross-platform use (iPhone, Android...)

I'm trying to make a C++ engine that reads an MP3 file and applies some image zoom/translation based on the playback position of the sound file. I think I could use OpenGL ES to render what I want, calling OpenGL ES instructions from my C++ files and initializing the drawing context in Objective-C/Java. I want to do the same for the sound, but I don't really know what to use, or whether I can even do it in C++.
I searched for libraries and found BASS and FMOD (which is not free for commercial use). They are said to be cross-platform (Windows, Unix, Mac OS), but I don't understand whether that extends to mobile, and whether I can really use them. Has anyone been through this? Can you recommend another free library?
Thanks, and I apologize for my poor English,
Arnaud
Have a look at libpd (Pure Data for embedded applications)
http://download.puredata.info/libpd (the library has been released very recently, but the code is very mature indeed)
http://createdigitalmusic.com/2010/10/libpd-put-pure-data-in-your-app-on-an-iphone-or-android-and-everywhere-free/
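A minimal libpd host comes down to a handful of calls. A sketch (it assumes a patch file named test.pd; on iOS the audio callback itself would still come from RemoteIO/Core Audio):

/* Minimal libpd host sketch; error handling omitted for brevity. */
#include "z_libpd.h"

float inbuf[64 * 2], outbuf[64 * 2];   /* one 64-frame tick of stereo I/O */

int main(void) {
    libpd_init();
    libpd_init_audio(2, 2, 44100);      /* 2 in, 2 out, 44.1 kHz */

    /* the equivalent of [; pd dsp 1( -- turn audio processing on */
    libpd_start_message(1);
    libpd_add_float(1.0f);
    libpd_finish_message("pd", "dsp");

    libpd_openfile("test.pd", ".");     /* load the patch */

    /* in a real app this call runs inside the platform audio callback */
    libpd_process_float(1, inbuf, outbuf);
    return 0;
}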
Audio is often problematic and it is pretty much always a good idea to write your own high-level API that does exactly what you want to do (and nothing else) and to assume you will then be writing a thin layer between it and whatever audio library you are using underneath. If you're lucky and there's a library available that does things the way you would do them then it's trivial. If not, at least it's still possible. In either case, your app code is not tied to an external sound API.
I have used FMOD on multiple different commercial projects over the years for PC, Mac and iPhone, and have always liked it - but it's not free. OpenAL has always seemed sorta, I dunno: clunky? But you only have to deal with it when writing your API layer, and your app code never has to see it.
It's easy for me to say "write your own API" since I've been writing commercial games for 20 years and so know what I think an audio API should look like. If you don't have your own idea how you think it should be, then I suggest you look at a 3rd-party library that makes sense to you, take the functions from it that you will be using, and write your own API as a set of functions that do nothing but call those.
Since you have both OpenAL and FMOD available to you free for development you can then make your API work with both, and chances are it's then going to work with anything else you might come across.
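To illustrate, such an app-facing API can be a handful of functions, with each backend implementing them in its own file. All names here are made up:

/* Hypothetical app-facing audio API: the game only ever sees these
   functions; audio_fmod.c, audio_openal.c, etc. each implement them,
   and no backend header is ever included by app code. */
#ifndef GAME_AUDIO_H
#define GAME_AUDIO_H

typedef int SoundHandle;                  /* opaque to app code */

void        AudioInit(void);
void        AudioShutdown(void);
SoundHandle AudioLoad(const char *path);  /* returns -1 on failure */
void        AudioPlay(SoundHandle s, float gain);
void        AudioStopAll(void);

#endif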
OpenAL may be your best bet. Look at this answer: Android OpenAL?
It seems possible to compile OpenAL on Android.
If you are looking for just PCM audio input/output (and not MP3 decoding), try PortAudio or RtAudio:
RtAudio or PortAudio, which one to use?
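To give a feel of the PortAudio API, here is a bare-bones pass-through sketch (error checking omitted; the stream parameters are illustrative):

/* PortAudio pass-through: copies the input callback buffer straight
   to the output. PortAudio gives you raw PCM callbacks, nothing more. */
#include <string.h>
#include "portaudio.h"

static int passthrough(const void *input, void *output,
                       unsigned long frames,
                       const PaStreamCallbackTimeInfo *timeInfo,
                       PaStreamCallbackFlags status, void *user) {
    memcpy(output, input, frames * 2 * sizeof(float));  /* 2 ch float32 */
    return paContinue;
}

int main(void) {
    PaStream *stream;
    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 2, 2, paFloat32, 44100, 256,
                         passthrough, NULL);
    Pa_StartStream(stream);
    Pa_Sleep(5000);                 /* run for five seconds */
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}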

How to develop an iPhone app with reverb functionality?

I am developing an iPhone application (for audio processing) and have to apply some effects to the audio.
If it were a desktop app there would be many options; you can find good examples and full projects like Audacity. But I want to develop for iPhone.
I found an app with a reverb option (take a look at the following link). I only watched the video; I did not test the application on my iPhone.
http://www.appstorehq.com/reverb-iphone-89870/app
My question is: how can I develop an app with reverb functionality? Is there any documentation for that? If so, please share it with us.
NOTE: We can use an Audio Unit to develop the app with reverb functionality (I am not clear on this).
EDIT: I would prefer not to use any third-party library.
If anybody has knowledge about this, please share with us.
Thanks.
If you're targeting iOS 5 you can just use the audio unit subtype kAudioUnitSubType_Reverb2 of the effect audio unit.
Reverb unit:
// describe the reverb effect unit (iOS 5+)
AudioComponentDescription auEffectUnitDescription;
auEffectUnitDescription.componentType = kAudioUnitType_Effect;
auEffectUnitDescription.componentSubType = kAudioUnitSubType_Reverb2;
auEffectUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
auEffectUnitDescription.componentFlags = 0;
auEffectUnitDescription.componentFlagsMask = 0;

// add it as a node to an existing AUGraph
AUNode auEffectNode;
AUGraphAddNode(processingGraph,
               &auEffectUnitDescription,
               &auEffectNode);
Failing that, you could write your own reverb code in the RemoteIO callback. A simple delay might be easier to do and would sound similar.
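For a feel of it, here is such a delay as a plain C function you could call from the RemoteIO render callback (a sketch; it assumes mono float samples, and the buffer length and gains are illustrative):

/* Minimal feedback delay ("echo") processed in place. */
#define DELAY_FRAMES 22050              /* ~0.5 s at 44.1 kHz */

static float delayLine[DELAY_FRAMES];
static int   delayPos = 0;

void process_delay(float *samples, int frameCount) {
    for (int i = 0; i < frameCount; i++) {
        float delayed = delayLine[delayPos];
        /* store input plus feedback, mix the delayed signal back in */
        delayLine[delayPos] = samples[i] + delayed * 0.4f;
        samples[i] += delayed * 0.5f;
        if (++delayPos >= DELAY_FRAMES) delayPos = 0;
    }
}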
iOS 5.0 brings native OpenAL reverb support, so it is now much easier - you don't have to code the algorithm yourself. It also brings support for a variety of reverb spaces:
Small Room
Medium Room
Large Room (2 configurations)
Medium Hall (3 configurations)
Large Hall (2 configurations)
Plate
Medium Chamber
Large Chamber
Cathedral
I suggest that you try the ObjectAL wrapper, which already has great support for the reverb effect:
https://github.com/kstenerud/ObjectAL-for-iPhone
Grab the source from this repository, load "ObjectAL.xcodeproj" and run the ObjectALDemo target on any iOS 5.0 device (should also work on the simulator). This will give you a good starting point and feeling of what the reverb effect is capable of.
If you still don't want to use any 3rd-party library, you can just grab the relevant pieces from ObjectAL. Look for the reverb-related code in the following source files (and their corresponding headers); a sketch of the underlying extension calls follows the list:
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALListener.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALSource.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALWrapper.m
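Underneath, those files call Apple's "ASA" OpenAL extension (alcASASetListener / alcASASetSource). A sketch of the raw calls; the names below are recalled from Apple's MacOSX_OALExtensions.h header, so verify them against the SDK before relying on them:

/* Turn on Apple's OpenAL reverb and set a per-source send level. */
#include <OpenAL/al.h>
#include <OpenAL/alc.h>
#include <OpenAL/MacOSX_OALExtensions.h>

void enable_reverb(ALuint source) {
    /* the ASA entry points are fetched at runtime */
    alcASASetListenerProcPtr setListener =
        (alcASASetListenerProcPtr)alcGetProcAddress(NULL, "alcASASetListener");
    alcASASetSourceProcPtr setSource =
        (alcASASetSourceProcPtr)alcGetProcAddress(NULL, "alcASASetSource");

    ALuint  on    = 1;
    ALuint  room  = ALC_ASA_REVERB_ROOM_TYPE_LargeHall;
    ALfloat level = 1.0f, send = 0.5f;

    setListener(ALC_ASA_REVERB_ON, &on, sizeof(on));
    setListener(ALC_ASA_REVERB_ROOM_TYPE, &room, sizeof(room));
    setListener(ALC_ASA_REVERB_GLOBAL_LEVEL, &level, sizeof(level));
    setSource(ALC_ASA_REVERB_SEND_LEVEL, source, &send, sizeof(send));
}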
Good luck with your project!
AUs are a good place to start.
Write your own reverb AU containing a reverb implementation. There are tons of ways to implement a reverb. A medium/long convolution reverb is too much to ask of a phone, but something such as an FDN (feedback delay network) will not require much memory or CPU.
Either is straightforward to implement if you're familiar with audio programming and optimization; the tough part is making one that sounds good and performs well.
If you're unable to write optimal low-level code, or you do not (presently) understand basic audio signal processing, then you'll have a few obstacles to overcome - it may be a long road in that case.
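To make the FDN idea concrete, here is a toy single-channel sketch. The delay lengths, feedback gain, and Householder feedback mix are illustrative; a usable reverb would also add damping filters in the feedback path:

/* Toy 4-delay-line feedback delay network, one sample at a time. */
#define NLINES 4
static const int lens[NLINES] = { 1031, 1327, 1523, 1871 }; /* mutually prime */
static float     lines[NLINES][2048];
static int       pos[NLINES];

float fdn_process(float in) {
    float out = 0.0f, read[NLINES];

    for (int i = 0; i < NLINES; i++) {
        read[i] = lines[i][pos[i]];
        out += read[i];
    }

    /* Householder feedback matrix (I - (2/N)*ones): orthogonal, so the
       0.7 gain alone controls decay and keeps the loop stable. */
    float sum = read[0] + read[1] + read[2] + read[3];
    for (int i = 0; i < NLINES; i++) {
        float fb = read[i] - 0.5f * sum;
        lines[i][pos[i]] = in + 0.7f * fb;
        if (++pos[i] >= lens[i]) pos[i] = 0;
    }
    return 0.25f * out;
}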
Searching the iOS documentation for "reverb" produces a link to the Core Audio Overview, which references reverb as an "effect unit." Perhaps that's worth further study?
No good; I have attempted the Audio Unit approach, and even though it is in the documentation, it is not actually implemented yet by the Apple engineers. Each time you call the function to set the reverb property you get only a failure status code. You would have to implement your own reverb effect. Try reading a DSP book and you might find a clue.
You need to learn some DSP-level coding; the DSP cookbook is okay and there are others out there. Basically you need to be comfortable handling audio signals in the frequency domain and with things such as FFTs. Once you have that, implementing a reverb filter should be straightforward.
This is an answer I've given before, but I believe it is relevant here. I am going to agree with the others and say that you are going to have to become a bit more familiar with core-audio if you want to do this properly.
I highly recommend this core-audio book. It will teach what you need to do this right and will save you a lot of frustration.
The chapter on audio effects has not been published yet, but if it is anything like the rest of the book it's worth the wait.
EDIT
You will most likely need to do this with an audio effect (which is a form of an audio unit).

Using graphics hardware for audio processing in an iPhone app

We are developing an iPhone app that needs to process audio data in real time, but we are struggling with performance. The bottlenecks are the audio effects, which are in fact quite simple, but the performance hit is noticeable when several are added.
Most of the audio effects code is written in C.
We think there are two places we could use GPU hardware to speed things up: using OpenCL for the effects, and the hardware for interpolation/smoothing. We are fairly new to this and don't know where to begin.
You probably mean OpenGL, as OpenCL is only present on the desktop. Yes, you could use OpenGL ES 2.0 programmable shaders for this, if you wanted to perform some very fast parallel processing, but that will be extremely complex to pull off.
You might first want to look at the Accelerate framework, which has hardware-accelerated functions for doing just the kind of tasks needed for audio processing. A great place to start is Apple's WWDC 2010 session 202 - "The Accelerate framework for iPhone OS", along with their "Taking Advantage of the Accelerate Framework" article.
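As a taste of what Accelerate gives you, a single vDSP call replaces an entire scalar loop. A trivial example, applying a gain in place:

/* buffer[i] = buffer[i] * gain, vectorized by the vDSP part of Accelerate */
#include <Accelerate/Accelerate.h>

void apply_gain(float *buffer, int frameCount, float gain) {
    vDSP_vsmul(buffer, 1, &gain, buffer, 1, (vDSP_Length)frameCount);
}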
Also, don't dismiss Hans' suggestion that you profile your code first, because your performance bottleneck might be somewhere you don't expect.
You might get better DSP acceleration coding for the ARM NEON SIMD unit. NEON is designed for DSP operations and can pipeline multiple single-precision floating-point operations per cycle. Getting audio data in and out of GPU memory may be possible, but may not be that fast.
But you might want to profile your code first to see if something else is the bottleneck. The iPhone 4 CPU can easily keep up with multiple FFTs and IIR filters on a real-time audio stream.
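For comparison, the same gain loop as the vDSP example above, written with NEON intrinsics to show the flavor (a sketch assuming frameCount is a multiple of 4):

/* process four float samples per iteration with NEON intrinsics */
#include <arm_neon.h>

void apply_gain_neon(float *buffer, int frameCount, float gain) {
    float32x4_t g = vdupq_n_f32(gain);            /* splat gain into a vector */
    for (int i = 0; i < frameCount; i += 4) {
        float32x4_t v = vld1q_f32(buffer + i);    /* load 4 samples */
        vst1q_f32(buffer + i, vmulq_f32(v, g));   /* multiply and store */
    }
}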

Most performant audio layer?

I'm curious which of the audio layers available on the iPhone is the most performant. So far I've used the SystemSoundID method and the AVAudioPlayer method, and I'm wondering whether it's worth investigating AudioQueue or OpenAL... are there significant performance gains to be had?
Thanks!
Audio is a complex issue, and most of the work is done by hardware, so there are no performance gains to be had just by changing APIs.
The different APIs are for different tasks:
SystemSound is for short notification sounds (max 10 sec)
AudioQueue is for everything longer than a SystemSound
AVAudioPlayer is just an Objective-C layer on top of AudioQueue, and you don't lose any performance to this layer. (So if AVAudioPlayer is working for you, stay with it!)
OpenAL is for sound effects.
What about FMOD for the iPhone? It's mostly used for game development and is available for various platforms.
I've been reading about very low-level and very low-latency audio using RemoteIO. Take a look at this article and the subsequent (long) discussion: Using RemoteIO audio units. I wouldn't recommend going down this path unless the higher-level libraries completely fail for your application. The author found very distinct performance differences between the different approaches - some quite unexpected. YMMV