Convert MP3 from iPhone library to PCM? - iphone

I need to read an audio file from the iPhone/iPod library and then pass it to a function that does some detection.
I need to decode from MP3 to PCM in order to pass chunks of data to that function. My question is: what's the best way to do this? I know it can be done by using AVAsset to read one file and write another, or by using ExtAudioFile conversion (as seen in Apple's example). Which is the most efficient?
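For what it's worth, here is a minimal sketch of the ExtAudioFile route, assuming you already have a file URL that Core Audio can open directly (tracks straight from the iPod library may first need to be exported or pulled out with AVAssetReader). The 4096-frame chunk size and the detectSomething() callback are illustrative assumptions, not part of any API:

    #import <AudioToolbox/AudioToolbox.h>

    // Open a compressed file (MP3, AAC, ...) and pull out LPCM chunks via
    // Extended Audio File Services. Error handling trimmed for brevity.
    void readPCMChunks(CFURLRef fileURL) {
        ExtAudioFileRef audioFile = NULL;
        if (ExtAudioFileOpenURL(fileURL, &audioFile) != noErr) return;

        // Ask the built-in converter for 16-bit interleaved stereo PCM at 44.1 kHz.
        AudioStreamBasicDescription pcm = {0};
        pcm.mSampleRate       = 44100.0;
        pcm.mFormatID         = kAudioFormatLinearPCM;
        pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        pcm.mChannelsPerFrame = 2;
        pcm.mBitsPerChannel   = 16;
        pcm.mBytesPerFrame    = 4;
        pcm.mFramesPerPacket  = 1;
        pcm.mBytesPerPacket   = 4;
        ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(pcm), &pcm);

        SInt16 samples[4096 * 2];                       // one chunk: 4096 stereo frames
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 2;
        bufferList.mBuffers[0].mDataByteSize   = (UInt32)sizeof(samples);
        bufferList.mBuffers[0].mData           = samples;

        UInt32 frames = 4096;
        while (ExtAudioFileRead(audioFile, &frames, &bufferList) == noErr && frames > 0) {
            // detectSomething(samples, frames);        // hypothetical detection callback
            frames = 4096;                              // reset for the next read
            bufferList.mBuffers[0].mDataByteSize = (UInt32)sizeof(samples);
        }
        ExtAudioFileDispose(audioFile);
    }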

Related

how to insert and overwrite audio file in iOS

I am developing an application which has an audio recorder. The user should be able to play the audio file, insert a recording into it, cut unwanted audio, and overwrite parts of the audio file.
Have you seen how to Insert, overwrite audio files - Audio Editing iphone? But no one answered that...
At least suggest a way to implement this...
Thanks in advance...
What type of audio file are you talking about? You will almost certainly need to convert whatever you are using into PCM WAV data for this type of manipulation. Luckily, Core Audio, which others have pointed you towards, has some convenience methods for doing this.
Once you have the raw PCM data, you can insert by simply placing other PCM data at the desired point in the data. You want to make sure you don't do something like write into the middle of a stereo frame, but besides that, most simply-formatted PCM data is pretty easy to manipulate. Think of it like a string: you can start with "Hello World" and change it to "Hello, Beautiful World" by simply inserting data in the middle.
Overwriting works on the same principle.
Once you are done with the edits, you'll need to transform the PCM data back into whatever format you had saved in before.
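To make the string analogy concrete, here is a hedged sketch of the insert step for 16-bit interleaved stereo PCM held in memory; keeping the offset frame-aligned is what prevents writing into the middle of a stereo frame. The function name and the fixed format are assumptions for illustration:

    #import <Foundation/Foundation.h>

    // Insert one raw PCM buffer into another at a given frame index, assuming
    // both buffers share the same format (16-bit interleaved stereo here).
    NSMutableData *insertPCM(NSData *original, NSData *insert, NSUInteger frameIndex) {
        const NSUInteger bytesPerFrame = 2 /* channels */ * sizeof(SInt16);
        NSUInteger offset = MIN(frameIndex * bytesPerFrame, original.length);
        NSMutableData *result = [NSMutableData dataWithCapacity:original.length + insert.length];
        [result appendBytes:original.bytes length:offset];        // "Hello "
        [result appendData:insert];                               // "Beautiful "
        [result appendBytes:(const char *)original.bytes + offset
                     length:original.length - offset];            // "World"
        return result;
    }

    // Overwriting is the same idea with -replaceBytesInRange:withBytes: instead.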
Have a look at Core Audio
Core Audio provides software interfaces for implementing audio features in applications you create for iOS and OS X. Under the hood, it handles all aspects of audio on each of these platforms. In iOS, Core Audio capabilities include recording, playback, sound effects, positioning, format conversion, and file stream parsing.

objective-c record audio session output

I am writing an app that generates music. I am using OpenAL to modify gain, modify pitch, mix audio, and play the resulting audio. I now need to record the audio as it is being played. I understand that OpenAL does not let you record the output audio. The other option I have found is to use Audio Units. However, because I need to mix/pitch/gain the audio and record it, it seems I need to write all the audio processing myself so I can have access to the output buffer. Is this correct? Or is there a different iOS API I can use to do this? If not, is there a 3rd-party solution already that lets me record the output (paid solutions are fine)?
You are correct.
Audio Units are the only iOS public API that allows an app to both process and then record audio.
Trying to record the OpenAL output may well be a violation of Apple's rules against using non-public APIs.
The alternative may be to completely rewrite the portions of OpenAL you need (there may be open source for some portions) so that they run on top of the RemoteIO Audio Unit.
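As a hedged sketch of what that looks like: once everything renders through RemoteIO, a render-notify callback can capture each output buffer on its way to the hardware. gOutputFile is a hypothetical ExtAudioFileRef created elsewhere (see the write-side sketch after the next answer):

    #import <AudioToolbox/AudioToolbox.h>

    // Hypothetical output file, set up elsewhere with ExtAudioFileCreateWithURL().
    static ExtAudioFileRef gOutputFile;

    // Tap installed on the RemoteIO unit; fires before and after each render.
    static OSStatus renderTap(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData) {
        // In the post-render phase, ioData holds the final mixed output samples.
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // The async variant is safe on the render thread; prime it once with
            // ExtAudioFileWriteAsync(gOutputFile, 0, NULL) before rendering starts.
            ExtAudioFileWriteAsync(gOutputFile, inNumberFrames, ioData);
        }
        return noErr;
    }

    // Installation, once the RemoteIO unit ioUnit exists:
    //     AudioUnitAddRenderNotify(ioUnit, renderTap, NULL);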
The best way to go is likely to be Core Audio, since it will give you as much flexibility as you need. Take a look at the Extended Audio File Services reference pages.
Using an extended audio file, you should be able to set up a file format and an audio stream buffer to send the final mixed output to, and then use the ExtAudioFileWrite() function to write the samples to the file.
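A minimal sketch of that write side, assuming 16-bit stereo PCM and a CAF container (both arbitrary choices for illustration; error handling omitted):

    #import <AudioToolbox/AudioToolbox.h>

    // Create a CAF file ready to receive interleaved 16-bit stereo PCM.
    ExtAudioFileRef createOutputFile(CFURLRef url, Float64 sampleRate) {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = sampleRate;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 4;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 4;

        ExtAudioFileRef file = NULL;
        ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &fmt, NULL,
                                  kAudioFileFlags_EraseFile, &file);
        return file;
    }

    // Append one rendered buffer; from a real-time thread, prefer the async variant.
    void appendBuffer(ExtAudioFileRef file, AudioBufferList *buffers, UInt32 frames) {
        ExtAudioFileWrite(file, frames, buffers);
    }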

create video from images [duplicate]

I have come across some sample code where a set of images is combined to make a QTMovie.
I am targeting the OS X platform without any QuickTime frameworks.
I have a vague idea of creating a file with the right extension, embedding the appropriate metadata, and finding a way to insert images and audio in the required format, so that once the file is created it can simply be played.
I am not sure which format/extension is better.
Pointers are much appreciated.
Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution whereby you store your image sequence as a set of JPEG files (but then you would require libjpeg; use raw RGB or PPM if you must) and the audio track as raw WAV data, and then have another file (a text file you define) that stores timing information, so you would simply stream out the audio and have the frame numbers of the images stored with their corresponding timecode/sample offset. That is a very simple solution that could be made to work without too much effort.
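A hedged sketch of the timing file for that do-it-yourself container; the layout (one line per JPEG frame with its presentation time and audio sample offset) is an assumption you would define yourself:

    #import <Foundation/Foundation.h>

    // Write a plain-text index mapping each JPEG frame to its timecode and
    // matching sample offset into the raw WAV audio track.
    void writeTimingIndex(NSString *path, NSUInteger frameCount,
                          double fps, double sampleRate) {
        NSMutableString *index = [NSMutableString string];
        for (NSUInteger i = 0; i < frameCount; i++) {
            double seconds = i / fps;
            long long sampleOffset = (long long)(seconds * sampleRate);
            [index appendFormat:@"frame%05lu.jpg\t%.3f\t%lld\n",
                                (unsigned long)i, seconds, sampleOffset];
        }
        [index writeToFile:path atomically:YES
                  encoding:NSUTF8StringEncoding error:NULL];
    }

    // A player then streams the WAV audio and shows frame N once playback
    // reaches the stored sample offset.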
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.
If you want to write a program to do this, you could use Xuggler in Java to do it. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, QuickTime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art

Is it possible at all to record audio in AAC/MP4 format on the iPhone?

This link says it is possible:
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/MultimediaPG/UsingAudio/UsingAudio.html
However, I have searched the web for an example that works and can only find people complaining that they can't get it to work. I have a working AudioQueue example for PCM, but the moment I switch it to AAC, initialization fails. The SpeakHere example also only uses PCM.
Has anyone ever managed to make this work or has a link to a code snippet that works?
Basically, the iPhone only records PCM. To get an AAC-encoded file, the original PCM stream should first be stored in an audio file, and then that file should be converted to AAC.
There's no way to record and convert on the fly on the iPhone unless you include a third-party library that will do this for you.
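For the offline conversion step the answer describes, here is a hedged sketch using Extended Audio File Services; the sample rate, channel count, and .m4a container are assumptions, error handling is omitted, and encoder availability depends on the device and OS version:

    #import <AudioToolbox/AudioToolbox.h>

    // Re-encode a finished PCM recording to AAC in an .m4a container.
    void convertPCMToAAC(CFURLRef pcmURL, CFURLRef m4aURL) {
        ExtAudioFileRef src = NULL, dst = NULL;
        ExtAudioFileOpenURL(pcmURL, &src);

        // Destination format: let AudioFormat fill in the AAC details.
        AudioStreamBasicDescription aac = {0};
        aac.mSampleRate       = 44100.0;
        aac.mFormatID         = kAudioFormatMPEG4AAC;
        aac.mChannelsPerFrame = 2;
        UInt32 size = sizeof(aac);
        AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aac);
        ExtAudioFileCreateWithURL(m4aURL, kAudioFileM4AType, &aac, NULL,
                                  kAudioFileFlags_EraseFile, &dst);

        // Both sides exchange 16-bit stereo PCM; ExtAudioFile runs the encoder.
        AudioStreamBasicDescription pcm = {0};
        pcm.mSampleRate       = 44100.0;
        pcm.mFormatID         = kAudioFormatLinearPCM;
        pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        pcm.mChannelsPerFrame = 2;
        pcm.mBitsPerChannel   = 16;
        pcm.mBytesPerFrame    = 4;
        pcm.mFramesPerPacket  = 1;
        pcm.mBytesPerPacket   = 4;
        ExtAudioFileSetProperty(src, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);
        ExtAudioFileSetProperty(dst, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);

        SInt16 buf[4096 * 2];
        AudioBufferList list = { 1, {{ 2, (UInt32)sizeof(buf), buf }} };
        UInt32 frames = 4096;
        while (ExtAudioFileRead(src, &frames, &list) == noErr && frames > 0) {
            ExtAudioFileWrite(dst, frames, &list);
            frames = 4096;
            list.mBuffers[0].mDataByteSize = (UInt32)sizeof(buf);
        }
        ExtAudioFileDispose(src);
        ExtAudioFileDispose(dst);
    }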
