I want to know whether AudioToolbox.framework can be used to change an array of page view controllers in sync with audio playback, programmatically. From the Apple documentation I found that Audio Queue Services lets you record, play, pause, loop, and synchronize audio, but when they say "synchronize audio", are they referring to page-change synchronization or something else?
I already have AVFoundation.framework in my app to play an audio file.
It's a 'Core' AV library. You'll have to write your own code to interact with or manipulate your UI; the framework does not depend on either AppKit or UIKit.
Sync: consider it along the lines of accurate timing of audio playback.
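To make that concrete, here is a minimal sketch (not from the original answer) of driving a UIPageViewController from an AVAudioPlayer's playback position. The `pages` and `pageTimes` names and the polling interval are illustrative assumptions only:

```swift
import AVFoundation
import UIKit

// Hypothetical sketch: flip pages at predetermined playback times by polling
// the player's clock. "pages" and "pageTimes" are illustrative names only.
final class NarratedPageTurner {
    private let player: AVAudioPlayer
    private let pageViewController: UIPageViewController
    private let pages: [UIViewController]
    private let pageTimes: [TimeInterval]   // playback time at which each page should appear
    private var currentPage = 0
    private var timer: Timer?

    init(player: AVAudioPlayer, pageViewController: UIPageViewController,
         pages: [UIViewController], pageTimes: [TimeInterval]) {
        self.player = player
        self.pageViewController = pageViewController
        self.pages = pages
        self.pageTimes = pageTimes
    }

    func start() {
        player.play()
        // AVFoundation won't call back into your UI; you poll its clock yourself.
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            self?.turnPageIfNeeded()
        }
    }

    private func turnPageIfNeeded() {
        let next = currentPage + 1
        guard next < pages.count, player.currentTime >= pageTimes[next] else { return }
        currentPage = next
        pageViewController.setViewControllers([pages[next]], direction: .forward,
                                              animated: true, completion: nil)
    }
}
```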
I have an audio app in which all of the sound-generating work is done by Pure Data (using libpd).
I've coded a special sequencer in Swift which controls the start/stop playback of multiple sequences, played by the synth engines in Pure Data.
Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and they both seem to require C or Objective C coding, which I know nearly nothing about.
However, I've been told in a previous Q&A on here that I need to use Core Audio or AVFoundation to get accurate timing. I've tried everything else, and without one of those frameworks the timing is totally messed up (laggy, jittery).
All of the tutorials and books on Core Audio seem overwhelmingly broad and deep to me. If all I need from one of these frameworks is accurate timing for my sequencer, how do you suggest I achieve this as someone who is a total novice to Core Audio and Objective-C, but otherwise has a 95% finished audio app?
If your sequencer is Swift code that depends on being called just-in-time to push audio, it won't work with good timing accuracy; i.e. you can't get the timing you need.
Core Audio uses a real-time pull-model (which excludes Swift code of any interesting complexity). AVFoundation likely requires you to create your audio ahead of time, and schedule buffers. An iOS app needs to be designed nearly from the ground up for one of these two solutions.
Added: if your existing code can generate audio samples a bit ahead of time (enough to statistically cover the jitter of an OS timer), you can schedule this pre-generated output to be played a few milliseconds later, i.e. when pulled at the correct sample time.
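A minimal sketch of that idea using AVAudioEngine and AVAudioPlayerNode (the buffer size and lead time are arbitrary, illustrative values; error handling is omitted):

```swift
import AVFoundation

// Sketch: fill a PCM buffer ahead of time, then schedule it a few milliseconds
// in the future so a jittery timer only has to finish early, not land exactly.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try? engine.start()   // error handling omitted in this sketch
player.play()

let frameCount: AVAudioFrameCount = 4_410            // ~100 ms at 44.1 kHz
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
buffer.frameLength = frameCount
// ... copy your pre-generated samples into buffer.floatChannelData here ...

if let nodeTime = player.lastRenderTime,
   let playerTime = player.playerTime(forNodeTime: nodeTime) {
    // Schedule ~50 ms ahead of the player's current sample time.
    let startTime = AVAudioTime(sampleTime: playerTime.sampleTime + 2_205,
                                atRate: format.sampleRate)
    player.scheduleBuffer(buffer, at: startTime, options: [], completionHandler: nil)
}
```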
AudioKit is an open source audio framework that provides Swift access to Core Audio services. It includes a Core Audio based sequencer, and there is plenty of sample code available in the form of Swift Playgrounds.
The AudioKit AKSequencer class has the transport controls you need. You can add MIDI events to your sequencer instance programmatically, or read them from a file. You could then connect your sequencer to an AKCallbackInstrument which can execute code upon receiving MIDI noteOn and noteOff commands, which might be one way to trigger your generated audio.
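A rough sketch of that setup, assuming the AudioKit 4-era API (class and method names have shifted between AudioKit releases, so treat the exact calls here as assumptions rather than a definitive implementation):

```swift
import AudioKit

// Assumes AudioKit 4-era names (AKSequencer was later renamed AKAppleSequencer).
let sequencer = AKSequencer()
let callbackInst = AKCallbackInstrument()

// Create a track and route its MIDI output to the callback instrument.
let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInst.midiIn)

// Add a note programmatically: note 60, velocity 100, at beat 0, lasting 1 beat.
track?.add(noteNumber: 60, velocity: 100,
           position: AKDuration(beats: 0), duration: AKDuration(beats: 1))

// React to noteOn/noteOff here, e.g. by sending messages to your libpd patch.
callbackInst.callback = { status, noteNumber, velocity in
    // Check for noteOn (the status parameter is a raw MIDI byte in some
    // AudioKit versions and an enum in others) and trigger your synth.
}

sequencer.setTempo(120)
sequencer.enableLooping()
sequencer.play()
```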
I want to use the functionality of MTAudioProcessingTap, but instead of using an AVPlayer like Chris does in his tutorial, I want to use the iPhone's microphone.
Can this be done and / or is it documented anywhere?
The way I would proceed with this is to set the session to AVAudioSessionCategoryPlayAndRecord.
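For example, using the modern Swift API (error handling kept minimal for the sketch):

```swift
import AVFoundation

// Configure the shared session for simultaneous input (microphone) and output.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
```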
The tutorial says that you can apply the MTAudioProcessingTap to modify any file on your phone. If you follow the tutorial, it shows you how to record a file using AVFoundation and then play it back.
Right now AV Foundation is not set up to do real-time audio processing as you record audio. It can only modify the audio in real time while it is being played back or it can do offline audio processing as detailed in the Audio Session Programming Guide.
I also do not recommend applying destructive processing to a sound as it is being recorded. Best practice for audio creation is to leave the master untouched and to change the sound after you capture it.
As of the beginning of 2014 there is a great deal of information about AV Foundation that is not yet documented. There is a new audio session category that has not been included in the Audio Session Programming Guide. In a few months a whole book on AV Foundation will be published, and hopefully that book will provide more solutions to some of these questions.
In the documentation I see several Apple frameworks for audio. All of them seem to be targeted at playing and recording audio. So I wonder what the big differences are between these?
Audio Toolbox
Audio Unit
AV Foundation
Core Audio
Did I miss a guide that gives a good overview of all these?
I made a brief graphical overview of Core Audio and the frameworks it contains; in outline:
The framework closest to the hardware is Audio Unit. On top of that sit OpenAL and Audio Toolbox with Audio Queue. At the highest level you find the Media Player and AVFoundation (audio and video) frameworks.
Now it depends on what you want to do: for just a simple recording, use AVFoundation, which is the easiest one to use. (Media Player has no options for recording; as the name says, it is just a media player.)
Do you want to do serious real-time signal processing? Use Audio Unit. But believe me, this is the hardest way. :-)
With iOS 8.0 Apple introduced AVAudioEngine, an Objective-C/Swift based audio graph system in AV Foundation. It encapsulates some of the dirty C stuff from Audio Units. Given the complexity of Audio Units, it may be worth a look.
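For a sense of what that looks like, here is a minimal AVAudioEngine sketch; the file name "loop.caf" is a placeholder for a sound bundled with your app:

```swift
import AVFoundation

// Minimal AVAudioEngine graph: player -> reverb -> main mixer -> output.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()
reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 40

engine.attach(player)
engine.attach(reverb)

do {
    // "loop.caf" is a hypothetical file in the app bundle.
    let fileURL = Bundle.main.url(forResource: "loop", withExtension: "caf")!
    let file = try AVAudioFile(forReading: fileURL)

    engine.connect(player, to: reverb, format: file.processingFormat)
    engine.connect(reverb, to: engine.mainMixerNode, format: file.processingFormat)

    player.scheduleFile(file, at: nil, completionHandler: nil)
    try engine.start()
    player.play()
} catch {
    print("Audio setup failed: \(error)")
}
```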
Further readings in the Apple Documentation:
Core Audio Overview: Introduction
Multimedia Programming Guide
Audio & Video Starting Point
Core Audio is the lowest-level of all the frameworks and also the oldest.
Audio Toolbox sits just above Core Audio and provides many different APIs that make it easier to deal with sound while still giving you a lot of control. There's ExtAudioFile, AudioConverter, and several other useful APIs.
Audio Unit is a framework for working with audio processing chains for both sampled audio data and MIDI. It's where the mixer and the various filters and effects such as reverb live.
AV Foundation is a new and fairly high-level API for recording and playing audio on the iPhone OS. All of them are available on both OS X and iOS, though AV Foundation requires OS X 10.8+.
Core Audio is not actually a framework, but an infrastructure that contains many different frameworks. Any audio that comes out of your iOS device's speaker is, in fact, managed by Core Audio.
The lowest level of Core Audio you can reach is Audio Units, which you work with through the AudioToolbox and AudioUnit frameworks.
The AudioToolbox framework also provides somewhat higher-level abstractions for playing and recording audio via Audio Queues, and for managing various audio formats via its converter and file services.
Finally, AV Foundation provides high-level access for playing a specific file, and MediaPlayer gives you access to (and playback of) your iPod library.
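For example, a minimal MediaPlayer sketch (this assumes the app has the user's media library permission):

```swift
import MediaPlayer

// Queue all songs from the user's library and start playback.
let musicPlayer = MPMusicPlayerController.applicationMusicPlayer
musicPlayer.setQueue(with: MPMediaQuery.songs())
musicPlayer.play()
```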
This site has a short and excellent overview of the core features of the different APIs:
http://cocoawithlove.com/2011/03/history-of-ios-media-apis-iphone-os-20.html
Here you can find an overview of all iOS and OSX audio frameworks:
https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/CoreAudioOverview/WhatsinCoreAudio/WhatsinCoreAudio.html#//apple_ref/doc/uid/TP40003577-CH4-SW4
I'm making an OpenGL game for iPhone, and I'm about to start adding sound effects to the app. I wonder what the best framework is for this purpose.
Is AV Foundation my best option? Any others I'm missing, like OpenAL perhaps?
General strength/weakness summary of iPhone sound APIs from a game perspective:
AVFoundation: plays long compressed files. No low-level access, high latency. Good for theme song or background music. Bad for short-lived effects.
System sounds: plays short (think 0-5 sec) sounds. Must be PCM or IMA4 in .aif, .wav, or .caf. Fire-and-forget (can't stop it once it starts). C-based API. Appropriate for short sound effects like taps, clicks, bangs, and crashes (see the sketch after this list).
OpenAL: 3D spatialized audio. API resembles OpenGL and is a natural accompaniment to it. Easy to mix multiple sources. Audio needs to be PCM (probably loaded by Core Audio's "Audio File Services"). Pretty significant low-level access. Potentially very low latency.
Audio Queue: stream playback from a source you provide (reading from file, from network, software synthesis, etc.). C-based. Can be fairly low-latency. Not really ideal for a lot of game tasks: background music is better suited to AVFoundation, shorter sounds to system sounds, and mixing to OpenAL or Audio Units. Can record from mic.
Audio Units: lowest public level of Core Audio. Extremely low latency (< 30 ms). C, and hard-core C at that. Everything must be PCM. Multi-channel mixer unit lets you mix sources. Can record.
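To make the System Sounds entry above concrete, here is a minimal sketch; "click.caf" is a placeholder for a short, uncompressed sound in your bundle:

```swift
import AudioToolbox

// Fire-and-forget playback with System Sound Services.
// "click.caf" is a hypothetical short PCM/IMA4 sound in the app bundle.
var soundID: SystemSoundID = 0
if let url = Bundle.main.url(forResource: "click", withExtension: "caf") {
    AudioServicesCreateSystemSoundID(url as CFURL, &soundID)
    AudioServicesPlaySystemSound(soundID)   // no stop, no volume control
}
```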
Be sure you set up your audio session appropriately, meaning you declare a category that indicates how you interact with the rest of audio on the device (allow/disallow iPod playback in the background, honor/ignore ring/silent switch, etc.). AV Foundation has the Obj-C version of this, and Core Audio has somewhat more powerful equivalents.
Kowalski is another game-oriented sound engine that runs on the iPhone/iPad (as well as OS X and Windows).
You might want to check out Finch, an OpenAL sound-effect engine written exactly with games in mind.
Can the iPhone mix two sound files or build a custom equalizer?
I have studied this problem for weeks, and it seems impossible to use the iPhone SDK to mix two or more sound files or to build a custom equalizer.
Does anyone have experience doing this?
Yes you can. AVAudioPlayer can play multiple sounds and you can control the volume for each. Or you can use Audio Units and have more control over the audio data.
aurioTouch is a good sample app for what you are thinking of.
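A minimal sketch of the AVAudioPlayer route, with hypothetical file names and per-player volumes:

```swift
import AVFoundation

// Two players mixed together, each with its own volume.
// "drums.caf" and "bass.caf" are hypothetical bundled files.
do {
    let drumsURL = Bundle.main.url(forResource: "drums", withExtension: "caf")!
    let bassURL  = Bundle.main.url(forResource: "bass",  withExtension: "caf")!

    let drums = try AVAudioPlayer(contentsOf: drumsURL)
    let bass  = try AVAudioPlayer(contentsOf: bassURL)

    drums.volume = 0.8
    bass.volume  = 0.5

    // Start both against the same device-time deadline so they stay in sync.
    // Keep strong references to the players in real code, or playback stops.
    let startTime = drums.deviceCurrentTime + 0.1
    drums.play(atTime: startTime)
    bass.play(atTime: startTime)
} catch {
    print("Could not load sounds: \(error)")
}
```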
For simple playback of sound files you can use the AVAudioPlayer class introduced in the 2.2 SDK. It provides playback and volume controls for playing any audio file. As far as I am aware, there is no restriction on the number of sound files you can play on the iPhone. The only restriction is that you may only play one AAC or MP3 compressed file at a time; the rest of the files must be either uncompressed or in IMA4 format.
If your needs are more low-level (if you need to do DSP), you might want to look at Audio Queue Services or Audio Units, two Mac OS X audio processing APIs that are also available on the iPhone.