I need to mix two simultaneous looping m4a sounds for my application, and the only 100% reliable loop method I have come across is using AudioQueue with this method: http://developer.apple.com/mac/library/qa/qa2009/qa1636.html
However, when I initialize two instances of AudioQueue, I can only seem to get one instance playing. I know that the SDK used to only support playing one compressed audio file at a time, but that changed with 3.0, so I wonder if there is something I am missing?
On current devices, there is hardware support for decoding only one compressed audio file at a time, and I'm not sure whether m4a can be decompressed in software in real time (only certain formats are supported by the software decoder). You might be able to decompress the second sound ahead of time, before playing your mixed audio loops.
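One way to do that pre-decompression, sketched here with the later AVAudioFile API rather than the AudioQueue code from QA1636 (so treat it as an illustration under that assumption; the file name is a placeholder):

    import AVFoundation

    // Decode one of the two .m4a loops into an in-memory PCM buffer up front,
    // so only one sound still needs the compressed-audio decoder at playback time.
    func loadDecodedPCM(named name: String) throws -> AVAudioPCMBuffer? {
        guard let url = Bundle.main.url(forResource: name, withExtension: "m4a") else {
            return nil
        }
        let file = try AVAudioFile(forReading: url)        // decodes to PCM as you read
        // Fine for short loops; file.length is the frame count of the whole file.
        let frameCount = AVAudioFrameCount(file.length)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: frameCount) else {
            return nil
        }
        try file.read(into: buffer)                        // whole file, decompressed
        return buffer
    }

The decoded buffer can then be looped through an uncompressed playback path (for example an AVAudioPlayerNode, or your own queue fed with PCM), leaving the compressed-audio decoder free for the other m4a.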
What I need to do is mix 4 channels of audio (not from a live source, just prerecorded audio files in the app bundle) and change their volumes individually, in real time, preferably with MP3s. What's the best/correct route to take among the various sound APIs for the iPhone?
Thanks!
Storm Sim does this with AVAudioPlayer, which is certainly the simplest method. You can call prepareToPlay on each of the player objects and then kick them off with play later so there won't be any delay. I also keep a silent one-second audio player on an eternal loop so that deviceCurrentTime keeps ticking, which lets you use playAtTime: to give a specific deviceCurrentTime in the future so all the samples play in sync, or offset relative to each other (deviceCurrentTime only advances while some sort of audio is playing). The AVAudioPlayerDelegate has interrupted/resumed callbacks and a finished-playing callback, so you can get notified of what is happening.
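A rough sketch of that approach, assuming a few short CAF files in the bundle (the names and the 0.1 s offset are only for illustration):

    import AVFoundation

    // Prepare every player up front, then schedule them against a shared
    // deviceCurrentTime so they all start in sync.
    func playInSync(resourceNames: [String]) throws -> [AVAudioPlayer] {
        var players: [AVAudioPlayer] = []
        for name in resourceNames {
            guard let url = Bundle.main.url(forResource: name, withExtension: "caf") else { continue }
            let player = try AVAudioPlayer(contentsOf: url)
            player.prepareToPlay()            // preload buffers so starting is fast
            players.append(player)
        }
        // deviceCurrentTime only advances while some audio is active,
        // hence the trick with a silent looping player described above.
        guard let now = players.first?.deviceCurrentTime else { return players }
        let startTime = now + 0.1             // a little headroom in the future
        for player in players {
            player.play(atTime: startTime)    // every player gets the same start time
        }
        return players
    }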
However, there is only one hardware MP3/AAC decoder, so the other three sounds will use up CPU (and thus battery) doing the decoding. If you want to maximize battery life, use CAF files with IMA4 encoding at 44.1 kHz. They are about 1/4 the size of the raw WAV files, so the compression isn't as good as MP3, but the performance is much better, especially if you are using a lot of small audio tracks. If you are using voice you can get away with much less fidelity and compress the files even further. afconvert in Terminal can help you get your source files into CAF format (you should use CAF files no matter what the encoding).
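For the afconvert step, a typical Terminal invocation looks something like this (the file names are placeholders; adjust to your own sources):

    afconvert -f caff -d ima4 input.wav output.caf

That writes a CAF container with IMA4-encoded audio, which avoids the single hardware MP3/AAC decoder entirely.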
I have trouble choosing the right audio playback technology. There are a ton of technologies to use on the iPhone, and it's so confusing.
What I need to do is this:
start playing short sounds ranging between 0.1 and 2 seconds
high quality playback, no crackle (I heard some of the iPhone audio playback technologies produce a crackling sound on start or end, which is bad!)
ability to start playback of a sound, while there's already another one playing right now (two, three or more sounds at the same time)
What would you suggest here, and why? Thanks :-)
There are basically four options for playing audio on the iPhone:
Audio Toolbox. Easy, but only good for playing sound effects in applications (sample code); a short sketch follows this answer.
Audio Queue Services. Very powerful, can do anything. C API, pretty messy to work with. Callbacks, buffers, pain.
AVAudioPlayer. About the easiest option. Can play compressed audio, and with a simple wrapper you can easily play multiple instances of the same sample at once (uncompressed audio only, as there is only one hardware audio decoder). Starting to play a sound with AVAudioPlayer seems to lag by about 20 ms, which could be a problem.
OpenAL. Decent compromise between complexity and features. Sounds do not lag, you can play multiple sounds just fine, but you have to do a lot of the work yourself. I’ve written a sound engine called Finch that can help you.
Don’t know much about crackling, I’ve never experienced it. I think there were some issues with playing seamless compressed loops with AVAudioPlayer; they can be overcome by saving the loop without compression.
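For the Audio Toolbox option above (item 1), a minimal System Sound Services sketch looks like this; the "tap.caf" file is a placeholder, and keep in mind system sounds must be short and either uncompressed or IMA4:

    import AudioToolbox
    import Foundation

    // Fire-and-forget UI sound: no volume control and no compressed formats,
    // but extremely simple.
    func playTapSound() {
        guard let url = Bundle.main.url(forResource: "tap", withExtension: "caf") else { return }
        var soundID: SystemSoundID = 0
        AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
        AudioServicesPlaySystemSound(soundID)
        // In a real app, cache soundID and call
        // AudioServicesDisposeSystemSoundID(soundID) when you no longer need it.
    }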
I want to record the sound output of my iPhone app - so, for example, someone plays something on an iPhone instrument and afterwards you can hear it back.
Is this possible without using the microphone?
Do you mean an app you build yourself? If so, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write to a file the same AudioBufferList that you would render to the RemoteIO Audio Unit when playing audio in your instrument app; a sketch follows this answer.)
[Edit: removed comments on recording third-party app audio output ...]
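A minimal sketch of that Extended Audio File Services idea, assuming your render code produces 16-bit interleaved stereo PCM (the format, file handling, and names here are illustrative, not from the original answer):

    import AudioToolbox
    import Foundation

    var captureFile: ExtAudioFileRef?

    // Create a CAF file matching the PCM you are rendering.
    func openCaptureFile(url: URL, sampleRate: Double) -> Bool {
        var format = AudioStreamBasicDescription(
            mSampleRate: sampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
            mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
            mChannelsPerFrame: 2, mBitsPerChannel: 16, mReserved: 0)
        let status = ExtAudioFileCreateWithURL(url as CFURL, kAudioFileCAFType,
                                               &format, nil,
                                               AudioFileFlags.eraseFile.rawValue,
                                               &captureFile)
        // If your render format differs, set kExtAudioFileProperty_ClientDataFormat
        // so ExtAudioFile converts for you.
        if status == noErr, let file = captureFile {
            ExtAudioFileWriteAsync(file, 0, nil)   // prime the async writer once
            return true
        }
        return false
    }

    // Call this with the same AudioBufferList you just rendered to the RemoteIO unit.
    func appendRenderedAudio(_ bufferList: UnsafePointer<AudioBufferList>, frameCount: UInt32) {
        guard let file = captureFile else { return }
        ExtAudioFileWriteAsync(file, frameCount, bufferList)   // safe from the render thread
    }

    func closeCaptureFile() {
        if let file = captureFile {
            ExtAudioFileDispose(file)
            captureFile = nil
        }
    }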
With AVFoundation, which you are currently using, you are always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates when it is used. AVAudioPlayer also does not provide any means of getting at the final signal, and if you use multiple AVAudioPlayer instances to play several sounds at the same time, you can't get at the mixed signal either.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their timestamps, that leads to the audio being played? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
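A sketch of that event-recording idea (the event type and JSON storage are just one possible representation):

    import Foundation

    struct NoteEvent: Codable {
        let soundName: String
        let offset: TimeInterval    // seconds from the start of the performance
    }

    final class PerformanceRecorder {
        private var events: [NoteEvent] = []
        private var startDate = Date()

        func beginRecording() {
            events.removeAll()
            startDate = Date()
        }

        // Call this from the same place that triggers the instrument sound.
        func noteTriggered(_ soundName: String) {
            events.append(NoteEvent(soundName: soundName,
                                    offset: Date().timeIntervalSince(startDate)))
        }

        func save(to url: URL) throws {
            try JSONEncoder().encode(events).write(to: url)
        }

        // Replay by re-triggering the normal playback path for each event.
        func replay(from url: URL, play: @escaping (String) -> Void) throws {
            let data = try Data(contentsOf: url)
            let recorded = try JSONDecoder().decode([NoteEvent].self, from: data)
            for event in recorded {
                DispatchQueue.main.asyncAfter(deadline: .now() + event.offset) {
                    play(event.soundName)
                }
            }
        }
    }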
What kind of audio files are you using in your iPhone games/apps?
I have a game with 30MB of sounds in .wav format and I'm thinking of maybe converting to .mp3 to reduce the app size... Is there a major difference in performance? Any other issues?
Keep in mind that certain codecs run in hardware and others in software, so not every compression format allows simultaneous playback of more than one sound. For example, if you already have a sound playing, a UI sound like a beep may not play if both are trying to use the same codec. For more info, see:
http://developer.apple.com/iphone/library/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/AudioandVideoTechnologies/AudioandVideoTechnologies.html#//apple_ref/doc/uid/TP40007072-CH19-SW6
iPhone Audio Hardware Codecs
iPhone OS applications can use a wide range of audio data formats. Starting in iPhone OS 3.0, most of these formats can use software-based encoding and decoding. You can simultaneously play multiple sounds in all formats, although for performance reasons you should consider which format is best in a given scenario. Hardware decoding generally entails less of a performance impact than software decoding.
The following iPhone OS audio formats can employ hardware decoding for playback:
AAC
ALAC (Apple Lossless)
MP3
The device can play only a single instance of one of these formats at a time through hardware. For example, if you are playing a stereo MP3 sound, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC sound in the background, your application plays AAC, ALAC, and MP3 audio using software decoding.
To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
To learn how to check which hardware and software codecs are available on a device, read the discussion for the kAudioFormatProperty_HardwareCodecCapabilities constant in Audio Format Services Reference.
Both AAC and CAF formats work fine and offer decent file sizes. For certain background looping tracks I found MP3 files got too big, but YMMV. Experimenting with a decent sound-editing app is the only way to find the right balance between size and quality. I've had pretty good luck with Audacity and Amadeus Pro.
I suggest listening to the output with a pair of really good noise-isolating headphones on the device itself. Most people won't be listening to your stuff that way, but as you decrease sound quality to shrink file sizes you'll start hearing static and hum artifacts. It's just a matter of balancing size vs. quality and deciding what you're willing to live with.
I use a combination of WAV files (for sound effects) and MP3 (for music), which seems to work fine. You can have trouble if you try to play multiple MP3 files at the same time - dropouts or performance degradation, depending on your AudioSession settings (see the sketch after this answer).
If I had to compress my sound effects, I'm not sure which codec has the least decoding overhead. Something like Apple Lossless would likely work well, and would cut the size roughly in half.
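As a side note on the AudioSession point above, a minimal session-setup sketch with the modern AVAudioSession API (the category choice here is just an example, not a recommendation from the answer):

    import AVFoundation

    func configureAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            // .ambient mixes with iPod/background audio and obeys the silent switch;
            // use .playback instead if your audio should keep going regardless.
            try session.setCategory(.ambient, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Audio session setup failed: \(error)")
        }
    }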
I find MP3 fine, but keep in mind that decoding on the iPhone / iPod touch 2G is only about 2.5x real-time speed.
Can the iPhone mix two sound files or build a custom equalizer?
I have studied this problem for weeks, and it seems impossible to use the iPhone SDK to mix two or more sound files or to build a custom equalizer.
Does anyone have experience doing this?
Yes, you can. AVAudioPlayer can play multiple sounds at once, and you can control the volume of each. Or you can use Audio Units and get more control over the audio data.
aurioTouch is a good sample app for what you are thinking of.
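A rough sketch of that AVAudioPlayer mixing approach (file names are placeholders; error handling kept minimal):

    import AVFoundation

    // Two looping tracks playing at once, each with its own volume -
    // enough for a simple mixer, though not an equalizer.
    final class TwoTrackMixer {
        private var players: [AVAudioPlayer] = []

        func start() throws {
            for name in ["drums", "bass"] {
                guard let url = Bundle.main.url(forResource: name, withExtension: "caf") else { continue }
                let player = try AVAudioPlayer(contentsOf: url)
                player.numberOfLoops = -1       // loop indefinitely
                player.prepareToPlay()
                players.append(player)
            }
            players.forEach { $0.play() }
        }

        // volume is 0.0 ... 1.0, applied per track in real time.
        func setVolume(_ volume: Float, forTrack index: Int) {
            guard players.indices.contains(index) else { return }
            players[index].volume = volume
        }
    }

For an actual equalizer you need per-sample access to the signal, which is where Audio Units (as demonstrated in aurioTouch) come in.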
For simple playback of sound files you can use the AVAudioPlayer class introduced in the 2.2 SDK. It provides playback and volume controls for playing any audio file. As far as I am aware, there is no restriction on the number of sound files you can play on the iPhone. The only restriction is that you may only play one AAC- or MP3-compressed file at a time; the rest of the files must be either uncompressed or in the IMA4 format.
If your needs are more low-level (if you need to do DSP, say), you might want to look at Audio Queue Services or Audio Units - two Mac OS X audio-processing APIs that are also available on the iPhone.