I know that MPMediaPickerController can be used to access the iPod library; however, I'm not sure it can play two songs at the same time. AVFoundation would allow this, but then how do you get AVFoundation to access the library? Is there an easier way of doing this? Any help appreciated, thanks.
One way - though not exactly an easy way - to play more than one audio file at the same time is to use an audio processing graph. "MixerHost", the example app by Apple, is a good starting point. "AudioGraph" (https://github.com/tkzic/audiograph) has more functionality than you need, but its manual is helpful for understanding the basics of audio processing graphs.
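For orientation, here is a minimal sketch of the kind of graph MixerHost builds: a multichannel mixer feeding the RemoteIO output unit. The songCount parameter is my own invention, the render callbacks that feed the mixer are omitted, and so is error checking - treat it as a skeleton, not a finished implementation:

    #import <AudioToolbox/AudioToolbox.h>

    // Sketch: the mixer -> RemoteIO graph that MixerHost sets up (no error handling).
    static AUGraph makeMixerGraph(UInt32 songCount) {
        AUGraph graph;
        NewAUGraph(&graph);

        AudioComponentDescription mixerDesc = {
            .componentType = kAudioUnitType_Mixer,
            .componentSubType = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioComponentDescription ioDesc = {
            .componentType = kAudioUnitType_Output,
            .componentSubType = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };

        AUNode mixerNode, ioNode;
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);

        // One mixer input bus per simultaneously playing song.
        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &songCount, sizeof(songCount));

        // Mixer output -> RemoteIO input bus 0 (the speaker path).
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        // Render callbacks supplying each song's PCM go on the mixer's input
        // buses before this point (MixerHost shows how).
        AUGraphInitialize(graph);
        return graph;  // call AUGraphStart(graph) when you're ready to play
    }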
MPMediaPickerController will allow you to select more than one item from the iTunes library. You will have to get the list of URLs from the media picker and connect them to the graph yourself.
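A rough sketch of that hand-off, assuming iOS 4+, where MPMediaItemPropertyAssetURL exposes a URL for each picked item (it comes back nil for DRM-protected tracks):

    #import <MediaPlayer/MediaPlayer.h>

    // In a view controller that adopts MPMediaPickerControllerDelegate:
    - (void)showPicker {
        MPMediaPickerController *picker =
            [[MPMediaPickerController alloc] initWithMediaTypes:MPMediaTypeMusic];
        picker.allowsPickingMultipleItems = YES;  // let the user pick several songs
        picker.delegate = self;
        [self presentModalViewController:picker animated:YES];
        [picker release];
    }

    - (void)mediaPicker:(MPMediaPickerController *)mediaPicker
      didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection {
        NSMutableArray *urls = [NSMutableArray array];
        for (MPMediaItem *item in mediaItemCollection.items) {
            NSURL *url = [item valueForProperty:MPMediaItemPropertyAssetURL];
            if (url) [urls addObject:url];  // nil for DRM-protected tracks
        }
        [self dismissModalViewControllerAnimated:YES];
        // Hand `urls` to the graph code from here.
    }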
Related
I am trying to get started in advanced audio with the iPhone SDK. I really want to make professional-level audio components. I know the basics (e.g. how to use AVAudioPlayer), but I don't know what to do for the more complicated sorts of audio (e.g. oscillation and audio cones). Does anyone know where to go for this? (I tried researching online, and all that came up were the simplistic sorts of audio components.)
Core Audio. That's where you'll find an OpenAL implementation. You might also want to look at the Audio Processing Graph API (also part of Core Audio).
On iOS, is it possible to get the user's audio stream in a decompressed format? For example, an MP3 returned as WAV data that can be used for audio analysis? I'm relatively new to the iOS platform, and I remember seeing that this wasn't possible in older iOS versions. I read that iOS 4 brought in some advanced APIs, but I'm not sure where I can find documentation/samples for these.
If you don't mind requiring iOS 4.1 or above, you could try the AVAssetReader class and friends. In this similar question there is a full example of how to extract video frames. I would expect the same to work for audio, and the nice thing is that the reader deals with all the details of decompression. You can even do composition with AVComposition to merge several streams.
These classes are part of the AV Foundation framework, which allows not only reading but also creating your own content.
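To make the audio case concrete, here is a hedged sketch (iOS 4.1+). The 16-bit interleaved LPCM output settings are my own choice; adjust them to whatever your analysis needs:

    #import <AVFoundation/AVFoundation.h>
    #import <AudioToolbox/AudioToolbox.h>
    #import <CoreMedia/CoreMedia.h>

    // Sketch: decode an asset's first audio track to raw PCM with AVAssetReader.
    - (void)readPCMFromAssetURL:(NSURL *)assetURL {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
        NSError *error = nil;
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

        NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
            [NSNumber numberWithInt:16],  AVLinearPCMBitDepthKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
            [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
            nil];
        AVAssetTrack *track =
            [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:settings];
        [reader addOutput:output];
        [reader startReading];

        CMSampleBufferRef sampleBuffer;
        while ((sampleBuffer = [output copyNextSampleBuffer])) {
            CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
            size_t length;
            char *data;
            CMBlockBufferGetDataPointer(block, 0, NULL, &length, &data);
            // `data` now points at `length` bytes of decompressed PCM - analyze away.
            CFRelease(sampleBuffer);
        }
    }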
Apple has an OpenAL example at http://developer.apple.com/library/mac/#samplecode/OpenALExample/Introduction/Intro.html where Scene.m should interest you.
The Apple documentation has a diagram showing clearly that the Core Audio framework can hand you decoded MP3. It also states that you can work with audio units directly at a lower level if you need to.
The same Core Audio document also gives some information about using MIDI, if that may help you.
Edit:
You're in luck today.
In this example, an audio file is loaded and fed into an AudioUnit graph. You could fairly easily write an AudioUnit of your own to insert into this graph and analyze the PCM stream as you see fit. You can even do it in the render callback, although that's probably not a good idea, because callbacks are encouraged to be as lightweight as possible.
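For a taste of the callback route, here is a sketch of a render-notify callback that just measures the peak level of each rendered buffer. It assumes interleaved 16-bit integer samples (the classic canonical format); how you obtain the output unit of your particular graph is up to you:

    #import <AudioToolbox/AudioToolbox.h>

    // Sketch: a render-notify callback that measures the peak level of each
    // rendered buffer. Assumes interleaved 16-bit integer samples.
    static OSStatus peakMeter(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData) {
        // Only look at the buffer after the unit has rendered into it.
        if (!(*ioActionFlags & kAudioUnitRenderAction_PostRender)) return noErr;

        SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
        UInt32 count = ioData->mBuffers[0].mDataByteSize / sizeof(SInt16);
        SInt32 peak = 0;
        for (UInt32 i = 0; i < count; i++) {
            SInt32 s = samples[i];
            if (s < 0) s = -s;
            if (s > peak) peak = s;
        }
        // Keep it lean: stash `peak` in a shared variable; never block,
        // allocate, or call Objective-C from a render callback.
        return noErr;
    }

    // Installed on the graph's output unit with, e.g.:
    //   AudioUnitAddRenderNotify(outputUnit, peakMeter, NULL);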
I need some guidance on how to make an audio streaming app for multiple audio files, so the user can choose from the list and listen to an item. Do you know any good tutorials, or somewhere I can learn how to do this by myself? I'm familiar with the concept of how it would look; I need something on how to pause and resume, and how to go to next and previous. Are there some classes that can help me do this? Can someone help me?
Look into AVQueuePlayer if you can require your users to run iOS 4.1+. It can play a sequential playlist of items for you. AVPlayer is another option, which only requires iOS 4.0+, but it handles only one item at a time, so you'd have to write your own code to manage the playlist.
The AV Foundation Programming Guide does a pretty good job of explaining how to use these classes.
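A minimal sketch of the AVQueuePlayer route (iOS 4.1+); `urls` stands in for whatever track list your UI produces:

    #import <AVFoundation/AVFoundation.h>

    // Sketch: a sequential playlist with AVQueuePlayer (iOS 4.1+).
    - (AVQueuePlayer *)queuePlayerWithURLs:(NSArray *)urls {
        NSMutableArray *items = [NSMutableArray array];
        for (NSURL *url in urls) {
            [items addObject:[AVPlayerItem playerItemWithURL:url]];
        }
        return [[[AVQueuePlayer alloc] initWithItems:items] autorelease];
    }

    // Elsewhere, wired to your transport buttons:
    //   [player play];                // play / resume
    //   [player pause];               // pause
    //   [player advanceToNextItem];   // next
    // "Previous" isn't built in: keep your own index into `urls` and
    // re-queue items to step backwards.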
Well, I will try my best not to make this an 'I just want the code' question...
I'm currently working on a project which requires some audio signal processing of local music files (e.g. from the iTunes library). The whole task includes:
Get the PCM data of an audio file (normally from the iTunes library); <-- AudioQueue (?)
Write the PCM data to a new file (it seems that Apple does not allow direct modification of music tracks); <-- Core Audio (?)
Do some processing and modification, like filters, manipulators, etc. <-- will be developed in C++
Play the processed track. <-- RemoteIO
The problem is, after going through some blogs and discussions:
http://lists.apple.com/archives/coreaudio-api/2009/Aug/msg00100.html, http://atastypixel.com/blog/using-remoteio-audio-unit/
http://osdir.com/ml/coreaudio-api/2009-08/msg00093.html
as well as the official sample code, I get the feeling that the Core Audio SDK only lets us apply audio processing to voice demos recorded from the mic.
My questions are:
Can I get raw data from iTunes library tracks instead of mic input?
If the answer to the first question is 'no', is there a way to 'fool' the SDK into thinking it is getting data from the mic rather than from iTunes? (I have done some similar 'hacking' in C# before XD)
If the whole approach just doesn't work, can anyone suggest some alternative ideas?
Any help will be appreciated. Thank you very much :-)
Just found something really cool yesterday.
From iPhone Media Library to PCM Samples in Dozens of Confounding, Potentially Lossy Steps
(http://www.subfurther.com/blog/?p=1103)
And also an MIT-licensed class library:
TSLibraryImport: Objective-C class + sample code for importing files from user's iPod Library in iOS4.
(http://bitbucket.org/artgillespie/tslibraryimport/changeset/a81838f8c78a)
Hope they help!
Cheers,
Manca
1) No. Apple does not allow direct access to the PCM data of songs; otherwise you could create music-sharing apps, which is not in Apple's interest.
2) No. Hacking around this and getting approved is impossible, thanks to Apple's app review process.
3) The only alternative I can think of is to do the processing on a PC/Mac and then transfer the result to the iPhone. Alternatively, you could store the files in your own application's folder; you should be able to load and process those via Core Audio.
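For that last option, here is a hedged sketch of pulling PCM out of a file in your app's own folder with ExtAudioFile (part of Core Audio). The 44.1 kHz, 16-bit, stereo client format is my assumption; error handling is omitted:

    #import <AudioToolbox/AudioToolbox.h>

    // Sketch: decode a file from the app's folder into 16-bit interleaved PCM.
    static void readFilePCM(NSURL *fileURL) {
        ExtAudioFileRef file = NULL;
        ExtAudioFileOpenURL((CFURLRef)fileURL, &file);

        // Ask Core Audio to convert to 44.1 kHz, 16-bit, stereo LPCM.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 4;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 4;
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(fmt), &fmt);

        SInt16 samples[4096];
        AudioBufferList buffers;
        buffers.mNumberBuffers = 1;
        buffers.mBuffers[0].mNumberChannels = 2;
        buffers.mBuffers[0].mDataByteSize = sizeof(samples);
        buffers.mBuffers[0].mData = samples;

        UInt32 frames = sizeof(samples) / fmt.mBytesPerFrame;
        while (ExtAudioFileRead(file, &frames, &buffers) == noErr && frames > 0) {
            // `samples` holds `frames` decoded frames - filter/manipulate here.
            frames = sizeof(samples) / fmt.mBytesPerFrame;
            buffers.mBuffers[0].mDataByteSize = sizeof(samples);
        }
        ExtAudioFileClose(file);
    }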
I know this thread is old but... did this work for you, Manca? And did this app get approved?
EDIT: just discovered the AVAssetReader class, introduced in iOS 4.1, which should help.
I am trying to make a small music app on the iPhone. I want to have one octave of a piano that will respond to touches and play the key or keys that the user touches. How would I be able to get two or more sounds to play at the same time so it sounds like a chord? I tried using AVFoundation, but the two sounds just play one after the other.
You would have to use Audio Queue Services. The docs are here:
Apple.com - Audio Queue Services Reference
Essentially you would have to write some code to open up multiple output queues, prime each queue, and hold them all back until everything is primed and ready; then start them together with AudioQueueStart, which takes an optional start time you can share across the queues so they begin in sync.
AVAudioPlayer isn't really good enough for this sort of thing, unfortunately.
If you create multiple AVAudioPlayers, you can play them at the same time, and they will mix subject to the restrictions outlined in the documentation (not all file types can be played simultaneously).
If you allocate and prepare them first, then trigger them in order, they will play at virtually the same time.
If you need absolutely perfect timing, use Audio Queue Services as described above, or use OpenAL.
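For completeness, a sketch of the pre-allocated AVAudioPlayer approach. playAtTime: (iOS 4+) schedules every player against the same device clock, which keeps the chord much tighter than calling -play in a loop. The note file names and the chordPlayers ivar are placeholders of mine:

    #import <AVFoundation/AVFoundation.h>

    // Sketch: play three pre-loaded notes as a chord. Assumes an
    // `NSArray *chordPlayers;` ivar to keep the players alive while they sound.
    - (void)playChord {
        NSArray *names = [NSArray arrayWithObjects:@"noteC", @"noteE", @"noteG", nil];
        NSMutableArray *players = [NSMutableArray array];

        for (NSString *name in names) {
            NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"caf"];
            AVAudioPlayer *p = [[[AVAudioPlayer alloc] initWithContentsOfURL:url
                                                                       error:NULL] autorelease];
            [p prepareToPlay];   // allocate buffers up front, before the touch
            [players addObject:p];
        }
        [chordPlayers release];
        chordPlayers = [players retain];   // keep them alive while they play

        // Schedule all players slightly in the future on the shared device
        // clock so they start together instead of one after another.
        NSTimeInterval when = [[players objectAtIndex:0] deviceCurrentTime] + 0.05;
        for (AVAudioPlayer *p in players) {
            [p playAtTime:when];
        }
    }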