I need to play MP3 audio data continuously on the iPhone. I am receiving a continuous stream of MP3 data from a server and need to play it as it arrives. I cannot simply play from a URL, because the stream uses the MMS protocol. What is the best method for playing this kind of data? Can anyone help me out with this?
Thanks in advance.
You have a few options sanctioned by or directly from Apple. You should look into:
Core Audio and the Audio Toolbox
The AVFoundation Framework
The OpenAL Framework
Also, you can try Matt Gallagher's AudioStreamer class. It should be able to do the job, or at least help you figure out how to do so, if you look at the code.
Among those four options, there should be something that helps.
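Since the data arrives as raw MP3 bytes rather than a playable URL, the Audio Toolbox route usually means Audio File Stream Services plus an Audio Queue: you feed each incoming chunk to the parser and enqueue the packets it hands back. A minimal sketch of the parsing side only (the Audio Queue creation and enqueueing are deliberately left out, and the function and variable names here are my own):

#import <AudioToolbox/AudioToolbox.h>

// Fires once the parser has worked out stream properties such as the data format.
static void MyPropertyListener(void *inClientData, AudioFileStreamID inStream,
                               AudioFileStreamPropertyID inPropertyID,
                               AudioFileStreamParseFlags *ioFlags)
{
    if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
        AudioStreamBasicDescription format;
        UInt32 size = sizeof(format);
        AudioFileStreamGetProperty(inStream, kAudioFileStreamProperty_DataFormat, &size, &format);
        // Create an output AudioQueue with this format (AudioQueueNewOutput, not shown).
    }
}

// Fires whenever complete audio packets have been parsed out of the raw bytes.
static void MyPacketsProc(void *inClientData, UInt32 inNumberBytes, UInt32 inNumberPackets,
                          const void *inInputData, AudioStreamPacketDescription *inPacketDescriptions)
{
    // Copy the packets into an AudioQueue buffer and enqueue it (not shown).
}

static AudioFileStreamID streamParser;

// Call once before the first chunk arrives.
static void SetUpParser(void)
{
    AudioFileStreamOpen(NULL, MyPropertyListener, MyPacketsProc,
                        kAudioFileMP3Type, &streamParser);
}

// Call for every chunk of MP3 data received from the server.
static void FeedChunk(NSData *chunk)
{
    AudioFileStreamParseBytes(streamParser, (UInt32)chunk.length, chunk.bytes, 0);
}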
NOTE:
After writing this, I did a bit of Googling and found this thread that discusses the possibility of streaming MMS media to the iPhone. It appears that it is not possible, due to the cost of a license from Microsoft. In theory, the above-mentioned frameworks should do everything you need, but in practice the licensing issue gets in the way.
Good luck!
audioPlayer = [[AVAudioPlayer alloc] initWithData:<#(NSData *)data#> error:<#(NSError **)outError#>];
Try this; otherwise, tell me more clearly what you want to do.
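For reference, a slightly fuller version of that snippet; the function name is my own, and it assumes the audio has already been downloaded in full into an NSData (AVAudioPlayer needs the whole data up front, so it is not a good fit for a continuous stream):

#import <AVFoundation/AVFoundation.h>

// Plays audio that has already been fully downloaded into `mp3Data`.
// Keep a strong reference to the returned player for as long as playback should run.
static AVAudioPlayer *PlayData(NSData *mp3Data)
{
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:mp3Data error:&error];
    if (player == nil) {
        NSLog(@"Could not create player: %@", error);
        return nil;
    }
    [player prepareToPlay];   // preload buffers so playback starts promptly
    [player play];
    return player;
}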
Related
I would like to stream audio from my iPhone to a remote server, but I don't really know what my best bet is.
I tried some code here for sending small chunks, but I get audio gaps between the chunks.
So I am thinking about FFmpeg or, as suggested here, writing my own AAC parser.
Any code sample or advice would be appreciated, because I am having a hard time getting started.
Another Core Audio based audio player: https://github.com/douban/DOUAudioStreamer
Just look at the included examples to see how to use it.
In my opinion, this one is better designed than Matt Gallagher's.
Another alternative is to use my Audjustable audio player, here: https://github.com/tumtumtum/audjustable
Some good audio stream players on GitHub:
mattgallagher/AudioStreamer: https://github.com/mattgallagher/AudioStreamer
tumtumtum/StreamingKit: https://github.com/tumtumtum/StreamingKit
You can also search GitHub for:
1) muhku/FreeStreamer
2) nicklockwood/SoundManager
3) AFSoundManager
4) GVMusicPlayerController
For live audio streaming, Matt Gallagher's AudioStreamer is the best choice, and it is easy to use.
See the AudioStreamer link above.
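If you go with Matt Gallagher's AudioStreamer, basic usage looks roughly like this (the stream URL below is a placeholder, and you should keep a strong reference to the streamer while it plays):

#import "AudioStreamer.h"   // from Matt Gallagher's repository

// Placeholder URL for an HTTP audio stream; keep `streamer` in a strong property/ivar.
NSURL *streamURL = [NSURL URLWithString:@"http://example.com/stream.mp3"];
AudioStreamer *streamer = [[AudioStreamer alloc] initWithURL:streamURL];
[streamer start];    // begins buffering and playback

// Later, when finished:
[streamer stop];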
On iOS, is it possible to get the user's audio in a decompressed format? For example, an MP3 returned as WAV data that can be used for audio analysis? I'm relatively new to the iOS platform, and I remember seeing that this wasn't possible in older iOS versions. I read that iOS 4 brought in some advanced APIs, but I'm not sure where I can find documentation/samples for these.
If you don't mind using APIs available in iOS 4.1 and above, you could try the AVAssetReader class and friends. In this similar question you have a full example of how to extract video frames. I would expect the same approach to work for audio, and the nice thing is that the reader deals with all the details of decompression. You can even do composition with AVComposition to merge several streams.
These classes are part of the AVFoundation framework, which allows not only reading but also creating your own content.
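A minimal sketch of that approach; the helper name and the fixed 16-bit output format are my own choices. It decodes the asset's first audio track to linear PCM and collects the raw samples in memory:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Decodes the first audio track of the asset at `assetURL` into 16-bit interleaved
// linear PCM and returns the raw samples as NSData. Error handling kept minimal.
static NSData *DecodeToPCM(NSURL *assetURL)
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    if (reader == nil) return nil;

    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    NSDictionary *settings = @{ AVFormatIDKey              : @(kAudioFormatLinearPCM),
                                AVLinearPCMBitDepthKey     : @16,
                                AVLinearPCMIsFloatKey      : @NO,
                                AVLinearPCMIsBigEndianKey  : @NO,
                                AVLinearPCMIsNonInterleaved: @NO };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    NSMutableData *pcm = [NSMutableData data];
    CMSampleBufferRef sampleBuffer;
    while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length = CMBlockBufferGetDataLength(blockBuffer);
        NSMutableData *chunk = [NSMutableData dataWithLength:length];
        CMBlockBufferCopyDataBytes(blockBuffer, 0, length, chunk.mutableBytes);
        [pcm appendData:chunk];
        CFRelease(sampleBuffer);
    }
    return pcm;
}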
Apple has an OpenAL example at http://developer.apple.com/library/mac/#samplecode/OpenALExample/Introduction/Intro.html where Scene.m should interest you.
The Apple documentation has a picture in which the Core Audio framework clearly shows that it gives you MP3 output. It also states that you can work with audio units in a more low-level way if you need to.
The same Core Audio document also gives some information about using MIDI, if that helps you.
Edit:
You're in luck today.
In this example an audio file is loaded and fed into an AudioUnit graph. You could fairly easily write an AudioUnit of your own to put into this graph that analyzes the PCM stream as you see fit. You can even do the analysis in the render callback, although that's probably not a good idea because callbacks are encouraged to stay as lightweight as possible.
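As a rough illustration of the callback idea (the names are mine, and the sample-format assumption is noted in the comments): you can attach a render notification to a unit in the graph and look at the PCM after it renders. Here the "analysis" is just a peak measurement, which is cheap enough to do in place:

#import <AudioToolbox/AudioToolbox.h>

// Render-notify callback: invoked before and after the unit renders each buffer.
static OSStatus AnalysisNotify(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // Assumes 16-bit signed integer samples; adjust to the stream format actually in use.
        SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
        UInt32 count = ioData->mBuffers[0].mDataByteSize / sizeof(SInt16);
        SInt16 peak = 0;
        for (UInt32 i = 0; i < count; i++) {
            SInt16 s = (samples[i] >= 0) ? samples[i] : -samples[i];
            if (s > peak) peak = s;
        }
        // Do something cheap with `peak`; keep heavy work off this real-time thread.
    }
    return noErr;
}

// Attach it to a unit you already obtained from the AUGraph, e.g. the Remote I/O unit:
// AudioUnitAddRenderNotify(remoteIOUnit, AnalysisNotify, NULL);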
Well, I will try my best not to make this an 'I just want the code' question...
I'm recently working on a project which requires some audio signal processing from local music files (e.g. iTunes Library). The whole work includes:
Get the PCM data of an audio file (normally from iTunes library); <--AudioQueue (?)
Write the PCM data to a new file (it seems that Apple does not allow direct modification of music tracks); <--CoreAudio(?)
Do some processing and modification, like filters, manipulators, etc. <-- Will be developed in C++
Play the processed track. <--RemoteIO
The problem is, after going through some blogs and discussions:
http://lists.apple.com/archives/coreaudio-api/2009/Aug/msg00100.html, http://atastypixel.com/blog/using-remoteio-audio-unit/
http://osdir.com/ml/coreaudio-api/2009-08/msg00093.html
as well as the official sample code, I got the feeling that the Core Audio SDK only lets us apply audio processing to voice recordings captured from the mic.
My question is that:
Can I get raw data from iTunes library tracks instead of Mic input?
If the answer to the first question is 'No', is there a way to 'fool' the SDK into thinking it is getting data from the mic input rather than from iTunes? (I have done some similar 'hacking' in C# before XD)
If the whole processing just doesn't work, can anyone provide some alternative ideas?
Any help will be appreciated. Thank you very much :-)
Just found something really cool yesterday.
From iPhone Media Library to PCM Samples in Dozens of Confounding, Potentially Lossy Steps
(http://www.subfurther.com/blog/?p=1103)
And also an MIT-licensed class library:
TSLibraryImport: Objective-C class + sample code for importing files from the user's iPod Library in iOS 4.
(http://bitbucket.org/artgillespie/tslibraryimport/changeset/a81838f8c78a)
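For reference, the very first step both of those links deal with is getting an AVFoundation-readable URL for the library track; from there AVAssetReader (see the example earlier on this page) can decode it. A rough sketch, where `item` stands for an MPMediaItem you already obtained from MPMediaPickerController or an MPMediaQuery:

#import <MediaPlayer/MediaPlayer.h>
#import <AVFoundation/AVFoundation.h>

// `item` is an MPMediaItem chosen by the user (e.g. via MPMediaPickerController).
NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
if (assetURL == nil) {
    // DRM-protected or cloud-only items have no asset URL and cannot be read this way.
} else {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
    // Hand `asset` to AVAssetReader (or AVAssetExportSession) to get at the audio data.
}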
Hope they help!
Cheers,
Manca
1) No. Apple does not allow direct access to PCM data of songs. Otherwise you could create music-sharing apps, which is not in Apple's interests.
2) No. Hacking and getting approved is impossible due to Apple's code approval mechanism.
3) The only alternative I can think of is that you do the processing part on a PC/Mac and then transfer the result to the iPhone. Or you could store the files in your own application's folder; you should be able to load and process these via Core Audio (see the sketch below).
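To illustrate that last point, here is a minimal sketch of reading a file from your own app's folder into PCM with Extended Audio File Services (Core Audio does the decoding for you); the function name and the 16-bit stereo client format are my own choices, and error handling is kept to a minimum:

#import <AudioToolbox/AudioToolbox.h>

// Opens the audio file at `url`, asks Core Audio to decode it to 16-bit stereo PCM,
// and reads it chunk by chunk. What you do with each chunk is up to you.
static void ReadFileAsPCM(NSURL *url)
{
    ExtAudioFileRef file = NULL;
    if (ExtAudioFileOpenURL((__bridge CFURLRef)url, &file) != noErr) return;

    // Client format: the format we want *out* of the built-in converter.
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = 44100.0;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcm.mChannelsPerFrame = 2;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = pcm.mChannelsPerFrame * (pcm.mBitsPerChannel / 8);
    pcm.mBytesPerPacket   = pcm.mBytesPerFrame;
    pcm.mFramesPerPacket  = 1;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);

    enum { kFramesPerRead = 4096 };
    SInt16 samples[kFramesPerRead * 2];            // interleaved stereo
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mDataByteSize   = sizeof(samples);
    bufferList.mBuffers[0].mData           = samples;

    UInt32 frames = kFramesPerRead;
    while (ExtAudioFileRead(file, &frames, &bufferList) == noErr && frames > 0) {
        // `samples` now holds `frames` frames of decoded PCM -- process them here.
        frames = kFramesPerRead;
        bufferList.mBuffers[0].mDataByteSize = sizeof(samples);
    }
    ExtAudioFileClose(file);
}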
I know this thread is old but... did this work for you, Manca? And did this app get approved?
EDIT: I just discovered that the AVAssetReader class, introduced in iOS 4.1, should help.
My real objective is to be able to use one audio file, create X different pitches from it, and then play them in the app, using some code to handle the timing.
TIA for any helpful insight
You can read the Core Audio documentation.
See this answer, which recommends using the AVFoundation framework.
Core Audio is supposed to be fairly low level. Great if you need more flexibility/control, but AVFoundation may be more appropriate for your app.
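If you can target a newer AVFoundation API than existed when this was asked, AVAudioEngine (iOS 8 and later) makes the "one file, several pitches" idea fairly direct: one AVAudioPlayerNode routed through its own AVAudioUnitTimePitch per pitch. A sketch for a single pitched playback; the function name and `fileURL` are placeholders:

#import <AVFoundation/AVFoundation.h>

// Plays the file at `fileURL` shifted by `cents` (100 cents = one semitone).
// Keep a strong reference to the returned engine for as long as playback should run.
static AVAudioEngine *PlayPitched(NSURL *fileURL, float cents)
{
    NSError *error = nil;
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:fileURL error:&error];
    if (file == nil) return nil;

    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    AVAudioUnitTimePitch *pitch = [[AVAudioUnitTimePitch alloc] init];
    pitch.pitch = cents;

    [engine attachNode:player];
    [engine attachNode:pitch];
    [engine connect:player to:pitch format:file.processingFormat];
    [engine connect:pitch to:engine.mainMixerNode format:file.processingFormat];

    if (![engine startAndReturnError:&error]) return nil;
    [player scheduleFile:file atTime:nil completionHandler:nil];
    [player play];
    return engine;
}

To get X different pitches, attach X player/pitch node pairs to the same engine and schedule the same AVAudioFile on each of them.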
I see that iPhone Core Audio does not include AudioDevice objects for rendering audio input directly into RAM. I hear people talking about using files to do this (like the SpeakHere sample), but I am thinking there must be another way. Your thoughts would be appreciated.
Check out the aurioTouch sample on the iPhone developer site.
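aurioTouch shows the complete picture, but the core of "input straight into RAM" is a Remote I/O audio unit with input enabled and an input callback that renders into a buffer you own. A condensed sketch under my own naming, with error checks omitted; you would also set kAudioUnitProperty_StreamFormat on the output scope of the input bus (bus 1) to match the buffer used below:

#import <AudioToolbox/AudioToolbox.h>

static AudioUnit ioUnit;

// Input callback: pull the freshly captured samples straight into our own buffer list.
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    // Caller-owned storage: 16-bit interleaved stereo, sized for typical render counts.
    static SInt16 samples[4096 * 2];
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * 2 * sizeof(SInt16);
    bufferList.mBuffers[0].mData           = samples;

    // Render the input (bus 1) into our buffer -- the audio is now in RAM, no file involved.
    OSStatus err = AudioUnitRender(ioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, &bufferList);
    // Hand `samples` to your own ring buffer or analysis code here.
    return err;
}

static void SetUpRemoteIO(void)
{
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstanceNew(comp, &ioUnit);

    // Enable input on bus 1 (it is off by default).
    UInt32 on = 1;
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &on, sizeof(on));

    // Install the input callback.
    AURenderCallbackStruct cb = { InputCallback, NULL };
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &cb, sizeof(cb));

    AudioUnitInitialize(ioUnit);
    AudioOutputUnitStart(ioUnit);
}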