iPhone SDK Audio Mixer - iphone

What I need to do is be able to mix 4 channels of audio (not from a live source, just prerecorded audio files in the app bundle), and change their volumes individually, in real time, preferably with MP3s. What's the best/correct road for me to take, regarding all the various sound APIs for the iPhone?
Thanks!

Storm Sim does this with AVAudioPlayer, which is certainly the simplest method. You can call prepareToPlay on each of the player objects, then kick them off with play later so there won't be any delay. I also use a blank 1-second audio player on eternal loop to keep the deviceTime ticking, so you can use playAtTime: to give a specific deviceTime in the future and make all the samples play in sync or offset relative to each other (deviceTime only advances if there is some sort of audio playing). The AVAudioPlayerDelegate has interrupted/resumed events and finishedPlaying, so you can get notification of what is happening.
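A minimal Swift sketch of that setup (the original answer predates Swift, and the file names here are placeholders, but the AVAudioPlayer calls are the same):

```swift
import AVFoundation

// Hypothetical file names; any four bundled audio files work the same way.
let names = ["channel1", "channel2", "channel3", "channel4"]

let players: [AVAudioPlayer] = names.compactMap { name in
    guard let url = Bundle.main.url(forResource: name, withExtension: "caf"),
          let player = try? AVAudioPlayer(contentsOf: url) else { return nil }
    player.numberOfLoops = -1   // loop forever
    player.prepareToPlay()      // preload so playback starts without delay
    return player
}

// Schedule every player against the same device clock so they start in
// sync. deviceCurrentTime only advances while some audio is playing, hence
// the silent looping player trick described above.
if let clock = players.first {
    let startTime = clock.deviceCurrentTime + 0.1
    for player in players {
        player.play(atTime: startTime)
    }
}

// Each channel's volume can then be changed individually, in real time.
players.first?.volume = 0.5
```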
However, there is only one hardware MP3/AAC decoder, so the other three players will use up CPU (and thus battery) doing the decoding in software. If you want to maximize battery life, use CAF files in IMA4 at 44100 Hz. IMA4 is about 1/4 the size of the raw WAV files, so it isn't as compact as MP3, but the performance is much better, especially if you are using a lot of small audio tracks. If you are using voice you can get away with much less fidelity and shrink the files even more. afconvert in Terminal can help you get your source files into the CAF format (you should use CAF files no matter what the encoding).
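For reference, a typical afconvert invocation for that conversion might look like this (file names are placeholders):

```
afconvert -f caff -d ima4@44100 input.wav output.caf
```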

Related

How can I keep multiple audio streams in an iPhone app from creating distortion?

I am using several instances of AVAudioPlayer to play overlapping sounds, and getting harsh distortion as a result. Here is my situation... I have an app with several piano keys. Upon touching a key, it plays a note. If I touch 6-7 keys in rapid succession, my app plays a 2-second .mp3 clip for each key. Since I am using separate audio streams, the sounds overlap (which they should), but the result is lots of distortion, pops, or buzzing sounds!
How can I make the overlapping audio crisp and clean? I recorded the piano sounds myself and they are very nice, clean, noise-free recordings, and I don't understand why the overlapping streams sound so bad. Even at low volume or through headphones, the quality is just very degraded.
Any suggestions are appreciated!
Couple of things:
Clipping
The "buzzing" you describe is almost assuredly clipping—the result of adding two or more waveforms together and the resulting, combined waveform having its peaks cut off—clipped—at unity.
When you're designing virtual synthesizers with polyphony, you have to take into consideration how many voices will likely play at once and provide headroom, typically by attenuating each voice.
In practice, you can achieve this with AVAudioPlayer by setting each instance's volume property to 0.316 for 10 dB of headroom (enough for 8 simultaneous voices).
The obvious problem here is that when the user plays a single voice, it may seem too quiet. You'll want to experiment with various headroom values against typical user behavior and adjust to taste (it's also signal-dependent: your piano samples may clip more or less easily than other waveforms, depending on their recorded amplitude).
Depending on your app's intended user, you might consider making this headroom parameter available to them.
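A minimal sketch of applying such a headroom setting in Swift (the helper name and the player array are hypothetical; 0.316 ≈ 10^(-10/20)):

```swift
import AVFoundation

/// Attenuate every voice to leave `dB` of headroom below full scale.
/// 10 dB corresponds to a linear gain of 10^(-10/20) ≈ 0.316.
func applyHeadroom(_ dB: Float, to players: [AVAudioPlayer]) {
    let gain = powf(10, -dB / 20)
    for player in players {
        player.volume = gain
    }
}

// applyHeadroom(10, to: pianoKeyPlayers)   // hypothetical player array
```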
Discontinuities/Performance
The pops and clicks you're hearing may not be a result of clipping, but rather a side effect of the fact that you're using mp3 as your audio file format. This is a Bad Idea™. iOS devices only have one hardware stereo mp3 decoder, so as soon as you spin up a second, third, etc. voice, iOS has to decode the mp3 audio data on the CPU. Depending on the device, you can only decode a couple of audio streams this way before suffering from underflow discontinuities (cut that in half for stereo files, obviously)... the CPU simply can't decode enough samples for the output audio stream in time, so you hear nasty pops and clicks.
For sample playback, you want to use an LPCM audio encoding (like wav or aiff) or something extremely efficient to decode, like ima4. One strategy that I've used in every app I've shipped that has these types of audio samples is to ship samples in mp3 or aac format, but decode them once to an LPCM file in the app's sandbox the first time the app is launched. This way you get the benefit of a smaller app bundle and low CPU utilization/higher polyphony at runtime when decoding the samples. (With a small hit to the first-time user experience while the user waits for the samples to be decoded.)
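A sketch of that decode-once step (using AVAudioFile, a later AVFoundation API than this answer assumes, so treat it as a modern illustration; the function name and file locations are placeholders):

```swift
import AVFoundation

// One-time decode of a bundled compressed sample (mp3/aac) to LPCM in the
// app's sandbox, so runtime playback never touches the mp3 decoder.
func decodeToLPCM(from source: URL, to destination: URL) throws {
    let input = try AVAudioFile(forReading: source)   // decodes mp3/aac
    let format = input.processingFormat              // float32 LPCM

    let output = try AVAudioFile(forWriting: destination,
                                 settings: format.settings,
                                 commonFormat: .pcmFormatFloat32,
                                 interleaved: false)

    let chunk: AVAudioFrameCount = 4096
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: chunk) else { return }
    while true {
        try input.read(into: buffer, frameCount: chunk)
        if buffer.frameLength == 0 { break }          // reached end of file
        try output.write(from: buffer)
    }
}
```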
My understanding is that AVAudioPlayer isn't meant to be used like that. In general, when combining lots of sounds into a single output like that, you want to open a single stream and mix the sounds yourself.
What you are encountering is clipping — it's occurring because the combined volumes of the sounds you're playing are exceeding the maximum possible volume. You need to decrease the volume of these sounds when there's more than one playing at a time.
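To make "mix the sounds yourself" concrete, here is a toy sketch (plain Float arrays rather than a real audio callback; the 10 dB headroom figure echoes the answer above):

```swift
import Foundation

// Toy mixer: attenuate each voice, sum, then hard-clip anything that still
// exceeds full scale. Samples are Floats in [-1, 1]; no real audio I/O.
func mix(_ voices: [[Float]], headroomDB: Float = 10) -> [Float] {
    let gain = powf(10, -headroomDB / 20)        // 10 dB -> ~0.316
    let length = voices.map { $0.count }.max() ?? 0
    var out = [Float](repeating: 0, count: length)
    for voice in voices {
        for (i, sample) in voice.enumerated() {
            out[i] += sample * gain              // attenuate, then sum
        }
    }
    return out.map { max(-1, min(1, $0)) }       // last-resort clipping
}
```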

create video from images [duplicate]

I have come across some sample code where a set of images is added to make a QTMovie.
I am targeting the OS X platform without any QT frameworks.
I have a vague idea of creating a file with the right extension, embedding the appropriate metadata, and finding a way to insert images and audio in the required format, so that once the file is created it can simply be played.
I am not sure which format/extension is best.
Pointers are much appreciated.
Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
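For completeness, Apple's AVFoundation framework (distinct from QuickTime, and newer than this exchange) can perform the encode-and-mux pipeline described above. A condensed, hypothetical sketch with AVAssetWriter, video only and with error handling omitted:

```swift
import AVFoundation
import Foundation

// Encode a sequence of CVPixelBuffers as H.264 into an MPEG-4 container.
// Audio, error handling, and pixel-buffer creation are omitted.
func writeMovie(to url: URL, frames: [CVPixelBuffer], fps: Int32 = 30) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640,   // placeholder dimensions
        AVVideoHeightKey: 480,
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: nil)
    writer.add(input)

    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    for (i, frame) in frames.enumerated() {
        while !input.isReadyForMoreMediaData { usleep(1000) }  // crude backoff
        _ = adaptor.append(frame,
                           withPresentationTime: CMTime(value: CMTimeValue(i),
                                                        timescale: fps))
    }
    input.markAsFinished()
    writer.finishWriting { }   // completion handler; ignored in this sketch
}
```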
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution: store your image sequence as a set of JPEG files (though that requires libjpeg; use raw RGB or PPM if you must), store the audio track as raw WAV data, and then have another file (a text file of your own design) that holds the timing information. You would simply stream out the audio and show each image at the frame number stored with its corresponding timecode/sample offset. That is a very simple solution that could be made to work without too much effort.
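For illustration, the timing file could be as simple as this (a made-up format; 1470 samples per frame corresponds to 44100 Hz audio at 30 fps):

```
# image         audio sample offset
frame0001.jpg   0
frame0002.jpg   1470
frame0003.jpg   2940
```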
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.
If you want to write a program to do this, you could use Xuggler in Java to do it. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, QuickTime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art

Playing two simultaneous m4a sounds with audioqueue

I need to mix two simultaneous looping m4a sounds for my application, and the only 100% reliable loop method I have come across is using AudioQueue with this method: http://developer.apple.com/mac/library/qa/qa2009/qa1636.html
However, when I initialize two instances of AudioQueue, I can only seem to get one instance playing. I know that the SDK used to only support playing one compressed audio file at a time, but that changed with 3.0, so I wonder if there is something I am missing?
On current devices, there seems to be hardware support for decoding only one compressed audio file at a time, and I'm not sure m4a can be decompressed in software in real time (only specific formats are supported by the software decoder). You might be able to decompress the second sound ahead of time, before playing your mixed audio loops.
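One sketch of that workaround (URLs are placeholders, and whether AVAudioPlayer's looping is seamless enough for your case would need testing):

```swift
import AVFoundation

// Leave one loop as m4a (hardware decoder) and pre-decode the other to
// LPCM (e.g. with a helper like the decodeToLPCM sketch earlier in this
// page), so the two loops never compete for the single hardware decoder.
let m4aURL = Bundle.main.url(forResource: "loopA", withExtension: "m4a")!
let cafURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("loopB-decoded.caf")

let loopA = try AVAudioPlayer(contentsOf: m4aURL)
let loopB = try AVAudioPlayer(contentsOf: cafURL)
for player in [loopA, loopB] {
    player.numberOfLoops = -1   // loop indefinitely
    player.prepareToPlay()
}
// Start both against the same device clock so the loops stay aligned.
let start = loopA.deviceCurrentTime + 0.1
loopA.play(atTime: start)
loopB.play(atTime: start)
```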

How can I record the audio output of the iPhone? (like sounds of my app)

I want to record the sound of my iPhone app. So if someone plays something on an iPhone instrument, you can hear it afterwards.
Is it possible without the microphone?
Do you mean an app you build yourself? If yes, you could just save the rendered waveform (perhaps encoded/compressed to save space) for later playback. (See Extended Audio File Services; it can write the same AudioBufferList to a file that you would render to the RemoteIO audio unit when playing audio in your instrument app.)
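As a modern illustration of that idea (AVAudioEngine postdates this answer; treat it as an equivalent approach rather than the Extended Audio File Services code itself):

```swift
import AVFoundation

// Tap the engine's final mix and append every rendered buffer to a file,
// so the recording is exactly what the app plays. Assumes the instrument
// renders through an AVAudioEngine; the output path is a placeholder.
let engine = AVAudioEngine()
let mixer = engine.mainMixerNode
let format = mixer.outputFormat(forBus: 0)

let fileURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("performance.caf")
let file = try AVAudioFile(forWriting: fileURL, settings: format.settings)

mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    try? file.write(from: buffer)   // same buffers that reach the speaker
}
// ... attach/connect your instrument nodes, try engine.start(), perform ...
// mixer.removeTap(onBus: 0) and stop the engine when finished.
```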
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation as you are currently using it, you're always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates while it is used. AVAudioPlayer does not provide any means of getting at the final signal either, and if you're using multiple instances of AVAudioPlayer to play multiple sounds at the same time, you also wouldn't be able to get at the mixed signal.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their timestamps, that led to the audio being played? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
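A minimal sketch of that event-recording idea (all types and names invented for illustration):

```swift
import Foundation

// Record which note was triggered and when, instead of the rendered audio.
struct NoteEvent: Codable {
    let time: TimeInterval   // seconds since the recording began
    let noteID: Int          // identifies the key/sample that was played
}

final class PerformanceRecorder {
    private var events: [NoteEvent] = []
    private var startDate = Date()

    func begin() {
        events = []
        startDate = Date()
    }

    func record(noteID: Int) {
        events.append(NoteEvent(time: Date().timeIntervalSince(startDate),
                                noteID: noteID))
    }

    func save(to url: URL) throws {
        try JSONEncoder().encode(events).write(to: url)
    }
}

// Playback: decode the events and re-trigger each note at its timestamp,
// e.g. DispatchQueue.main.asyncAfter(deadline: .now() + event.time) { ... }.
```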
