iPhone: Mix two audio files programmatically?

I want to take two audio files, mix them, and play the result programmatically. While the first audio file is playing, at some (dynamic) time I need to mix a second, short audio file into it somewhere in the middle of the first file, and finally save the result as a single audio file on the device. Playing that file should include the second sound mixed in at the chosen point.
I have gone through many forums, but couldn't figure out exactly how to achieve this.
Could someone please clarify my doubts below?
In this case, what audio file format should I use? Can I use .avi files?
How do I add the second audio at a dynamically chosen time in the first audio file programmatically? For example, if the first audio's total time is 2 minutes, I might need to mix the second audio file (3 seconds of audio) in at 1 minute, 1.5 minutes, or 55 seconds into the first file. It's dynamic.
How do I save the final output audio file on the device? If I save the audio file programmatically somewhere, can I play it back again?
I don't know how to achieve this. Please suggest your thoughts!

Open each audio file
Read the header info
Get raw uncompressed audio into memory as an array of ints for each file
Starting at the point in file 1's array where you want to mix in file 2, loop through, adding file 2's sample value to file 1's, being sure to 'clip' any values above or below the maximum (this is how you mix audio ... yes, it's that simple; see the sketch below). If file 2 extends past the end of file 1, you'll have to make the first array long enough to hold the remainder of file 2 completely.
Write new header info and then the audio from the array to which you added file2.
If there is compression involved or the files won't fit in memory, you may have to implement a more complex buffering scheme.
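A minimal sketch of that mixing-with-clipping step, assuming both files have already been decoded to 16-bit PCM buffers with the same sample rate and channel layout (buffer names and lengths here are illustrative):
#include <stddef.h>
#include <stdint.h>

// Mix srcLen samples of src into dst starting at dstOffset, clipping the sum
// to the 16-bit range. Assumes dst has at least dstOffset + srcLen samples.
static void MixPCM16(int16_t *dst, size_t dstOffset, const int16_t *src, size_t srcLen)
{
    for (size_t i = 0; i < srcLen; i++) {
        int32_t sum = (int32_t)dst[dstOffset + i] + (int32_t)src[i];
        if (sum > INT16_MAX) sum = INT16_MAX;   // clip positive overflow
        if (sum < INT16_MIN) sum = INT16_MIN;   // clip negative overflow
        dst[dstOffset + i] = (int16_t)sum;
    }
}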

In this case, what audio file format should I use? Can I use .avi files?
You can choose a compressed or non-compressed format. Common non-compressed formats include WAV and AIFF. CAF can contain both compressed and non-compressed data. .avi is not an option (it is not offered by the OS).
If the files are large and storage space (on disk) is a concern, you may consider the AAC format saved in a CAF (or simply .m4a). For most applications, 16-bit samples will be enough, and you can also save space, memory, and CPU by saving these files at an appropriate sample rate (for reference: CDs are 44.1 kHz).
Since the ExtAudioFile interface abstracts the conversion process, you should not have to change your program to compare the size and speed differences of compressed and non-compressed formats for your distribution (AAC in a CAF would be fine for normal applications).
Non-compressed CD-quality audio consumes about 5.3 MB per minute, per channel (44.1 kHz at 16 bits, i.e. 44,100 samples × 2 bytes × 60 s). So two stereo audio files, each 3 minutes long, plus a 3-minute stereo destination buffer would come to roughly 95 MB if held in memory at once (about 50 MB if the files are mono).
Since you have 'minutes' of audio, you may need to avoid loading all the audio data into memory at once. In order to read, manipulate, and combine audio, you need a non-compressed representation to work with in memory, so compressed formats do not help here. As well, converting a compressed representation to PCM takes a good amount of resources; reading a compressed file, although it involves fewer bytes, can take more (or less) time.
How do I add the second audio at a dynamically chosen time in the first audio file programmatically? For example, if the first audio's total time is 2 minutes, I might need to mix the second audio file (3 seconds of audio) in at 1 minute, 1.5 minutes, or 55 seconds into the first file. It's dynamic.
To read the files and convert them to the format you want to work with, use the ExtAudioFile APIs - they will convert to your destination sample format for you. Common PCM sample representations in memory include SInt32, SInt16, and float, but that can vary wildly based on the application and the hardware (beyond iOS). The ExtAudioFile APIs will also convert compressed formats to PCM, if needed.
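As a sketch, opening a source file with ExtAudioFile and asking it to deliver 16-bit integer PCM might look roughly like this (the helper name, sample rate, and channel count are assumptions, and error handling is reduced to asserts):
#import <AudioToolbox/AudioToolbox.h>

// Open url for reading and ask ExtAudioFile to deliver packed, interleaved,
// signed 16-bit PCM at the given sample rate / channel count.
static ExtAudioFileRef OpenPCM16Reader(NSURL *url, Float64 sampleRate, UInt32 channels)
{
    ExtAudioFileRef file = NULL;
    OSStatus err = ExtAudioFileOpenURL((CFURLRef)url, &file); // use __bridge under ARC
    NSCAssert(err == noErr, @"ExtAudioFileOpenURL failed: %d", (int)err);

    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mBitsPerChannel   = 16;
    fmt.mChannelsPerFrame = channels;
    fmt.mBytesPerFrame    = channels * sizeof(SInt16);
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

    err = ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat, sizeof(fmt), &fmt);
    NSCAssert(err == noErr, @"Setting the client data format failed: %d", (int)err);
    return file;
}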
Your input audio files should have the same sample rate. If not, you will have to resample the audio - a complex process which also takes a lot of resources (if done correctly/accurately). If you need to support resampling, double the time you've allocated to completing this task (the process isn't detailed here).
To add the sounds, you would request PCM samples from the files, process, and write to the output file (or buffer in memory).
To determine when to add the other sounds, you will need to get the sample rates of the input files (via ExtAudioFileGetProperty). If you want to write the second sound into the destination buffer at 55 seconds, then you would start adding the sounds at sample number SampleRate * 55, where SampleRate is the sample rate of the files you are reading.
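For example, a small sketch of that offset calculation (the 55-second figure is just the example from the question; the sample rate would come from the file's data format):
Float64 sampleRate = 44100.0;            // assumed here; query kExtAudioFileProperty_FileDataFormat in practice
NSTimeInterval insertAtSeconds = 55.0;   // dynamic in the real application
SInt64 startFrame = (SInt64)(insertAtSeconds * sampleRate);   // frame index where file 2 begins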
To mix audio, you will just use this form (pseudocode):
mixed[i] = fileA[i] + fileB[i];
but you have to be sure you avoid overflow/underflow and other arithmetic errors. Typically you will perform this process with integer samples, because floating-point calculations can take longer (when there are this many of them). For some applications you can simply shift and add with no worry of overflow - this effectively halves each input before adding, so the amplitude of the result is also halved. If you have control over the files' content (e.g. they are all bundled as resources), you could simply ensure that no peak sample in the files exceeds one half of full scale (about -6 dBFS). Of course, working in float would solve the overflow issue at the expense of higher CPU, memory, and file I/O demands.
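The shift-and-add variant looks roughly like this (a sketch reusing the names from the pseudocode above; frameCount is assumed):
// Halve each 16-bit input before summing; the sum then always fits in 16 bits.
for (size_t i = 0; i < frameCount; i++) {
    mixed[i] = (SInt16)((fileA[i] >> 1) + (fileB[i] >> 1));
}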
At this point you'd have two files open for reading and one open for writing, plus a few small temporary buffers for processing and mixing the inputs before writing to the output file. You should perform these requests in blocks for efficiency (e.g. read 1024 samples from each file, process them, write 1024 samples). The APIs don't guarantee much regarding caching and buffering for efficiency.
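A rough sketch of that block-oriented loop, assuming readerA, readerB, and writer are ExtAudioFileRef handles already configured with a mono 16-bit client format (as in the earlier sketch) and an output file created with ExtAudioFileCreateWithURL. For simplicity it mixes the two inputs from frame 0; in the real application you would only start pulling from the second file once startFrame frames of the first have been written:
enum { kBlockFrames = 1024 };
SInt16 bufA[kBlockFrames], bufB[kBlockFrames], mixed[kBlockFrames];

while (1) {
    memset(bufA, 0, sizeof(bufA));   // unread tail stays silent
    memset(bufB, 0, sizeof(bufB));

    AudioBufferList ablA = { 1, {{ 1, sizeof(bufA), bufA }} };
    AudioBufferList ablB = { 1, {{ 1, sizeof(bufB), bufB }} };
    UInt32 framesA = kBlockFrames, framesB = kBlockFrames;

    ExtAudioFileRead(readerA, &framesA, &ablA);   // up to kBlockFrames frames from file 1
    ExtAudioFileRead(readerB, &framesB, &ablB);   // returns 0 frames once file 2 is exhausted
    UInt32 frames = (framesA > framesB) ? framesA : framesB;
    if (frames == 0) break;                       // both inputs fully consumed

    for (UInt32 i = 0; i < frames; i++) {         // mix with clipping, as described above
        SInt32 sum = (SInt32)bufA[i] + (SInt32)bufB[i];
        if (sum > 32767) sum = 32767;
        if (sum < -32768) sum = -32768;
        mixed[i] = (SInt16)sum;
    }

    AudioBufferList ablOut = { 1, {{ 1, frames * (UInt32)sizeof(SInt16), mixed }} };
    ExtAudioFileWrite(writer, frames, &ablOut);   // append the mixed block to the output file
}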
How do I save the final output audio file on the device? If I save the audio file programmatically somewhere, can I play it back again?
The ExtAudioFile APIs will cover your reading and writing needs. Yes, you can read/play it back later.

Hello, you can do this by using AV Foundation:
- (BOOL)combineVoices1
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];

    // Create an AVMutableComposition. It will hold our multiple AVMutableCompositionTracks.
    AVMutableComposition *composition = [[AVMutableComposition alloc] init];

    // First track.
    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    [compositionAudioTrack setPreferredVolume:0.8];
    NSString *soundOne = [[NSBundle mainBundle] pathForResource:@"test1" ofType:@"caf"];
    NSURL *url = [NSURL fileURLWithPath:soundOne];
    AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
    AVAssetTrack *clipAudioTrack = [[avAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset.duration) ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];

    // Second track.
    AVMutableCompositionTrack *compositionAudioTrack1 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    [compositionAudioTrack1 setPreferredVolume:0.3];
    NSString *soundOne1 = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"caf"];
    NSURL *url1 = [NSURL fileURLWithPath:soundOne1];
    AVAsset *avAsset1 = [AVURLAsset URLAssetWithURL:url1 options:nil];
    AVAssetTrack *clipAudioTrack1 = [[avAsset1 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    [compositionAudioTrack1 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset1.duration) ofTrack:clipAudioTrack1 atTime:kCMTimeZero error:nil];

    // Third track.
    AVMutableCompositionTrack *compositionAudioTrack2 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    [compositionAudioTrack2 setPreferredVolume:1.0];
    NSString *soundOne2 = [[NSBundle mainBundle] pathForResource:@"song" ofType:@"caf"];
    NSURL *url2 = [NSURL fileURLWithPath:soundOne2];
    AVAsset *avAsset2 = [AVURLAsset URLAssetWithURL:url2 options:nil];
    AVAssetTrack *clipAudioTrack2 = [[avAsset2 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    [compositionAudioTrack2 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset2.duration) ofTrack:clipAudioTrack2 atTime:kCMTimeZero error:nil];

    // Export the mixed composition as an M4A file.
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition
                                                                            presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil) return NO;

    NSString *soundOneNew = [documentsDirectory stringByAppendingPathComponent:@"combined10.m4a"];
    //NSLog(@"Output file path - %@", soundOneNew);

    // Configure the export session output with all our parameters.
    exportSession.outputURL = [NSURL fileURLWithPath:soundOneNew]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A;             // output file type

    // Perform the export.
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusCompleted");
        } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            // A failure may happen because of an event out of your control,
            // for example an interruption like an incoming phone call.
            // Make sure to handle this case appropriately.
            NSLog(@"AVAssetExportSessionStatusFailed");
        } else {
            NSLog(@"Export Session Status: %d", (int)exportSession.status);
        }
    }];
    return YES;
}
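Note that this snippet inserts every clip at kCMTimeZero. To start the second sound at a dynamic offset (for instance the 55 seconds from the question), pass that time to atTime: instead; a sketch, reusing the names from the code above:
// Insert the second clip 55 seconds into the composition (a timescale of 600 is conventional).
CMTime insertAt = CMTimeMakeWithSeconds(55.0, 600);
[compositionAudioTrack1 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset1.duration)
                                ofTrack:clipAudioTrack1
                                 atTime:insertAt
                                  error:nil];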

If you are going to play multiple sounds at once, definitely use the *.caf format; Apple recommends it for playing multiple sounds at once. In terms of mixing them programmatically, I am assuming you just want them to play at the same time. While one sound is playing, just tell the other sound to play at whatever time you would like. To set a specific time, use NSTimer (see the NSTimer class reference) and create a method that plays the sound when the timer fires.
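A minimal sketch of that timer approach with AVAudioPlayer (the file name, the 55-second delay, and the overlayPlayer property are placeholders, and the background sound is assumed to be playing already):
// Assumes an AVAudioPlayer property named overlayPlayer and AVFoundation linked in.
- (void)scheduleOverlaySound {
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"test" withExtension:@"caf"];
    self.overlayPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
    [self.overlayPlayer prepareToPlay];

    // Fire once, 55 seconds from now (the offset would be dynamic in the real app).
    [NSTimer scheduledTimerWithTimeInterval:55.0
                                     target:self
                                   selector:@selector(playOverlay:)
                                   userInfo:nil
                                    repeats:NO];
}

- (void)playOverlay:(NSTimer *)timer {
    [self.overlayPlayer play];   // plays on top of the sound that is already playing
}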

Related

How to store a video while streaming it? [duplicate]

So far I know how to stream a video, and how to download it and afterwards stream it, but here's the tricky bit: streaming it once, storing it on the device, and in the future playing it from the device.
Is that possible?
Not quite sure how you get your stream, but look into AVAssetWriter, AVAssetWriterInput and AVAssetWriterInputPixelBufferAdaptor; as soon as you receive data you should be able to append it to the pixel buffer adaptor using:
appendPixelBuffer:withPresentationTime:
Not sure it will work for you, but with some fiddling you should be able to adapt your input to match this method. There is lots of example code for setting up the writer.
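A rough sketch of that writer setup, assuming the incoming frames arrive as CVPixelBufferRefs (the output URL, dimensions, and frame timing are placeholders):
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:640], AVVideoWidthKey,
                               [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                     outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                      sourcePixelBufferAttributes:nil];
[writer addInput:writerInput];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// Then, for each frame you receive and decode from the stream:
//     [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
// and when the stream ends:
//     [writerInput markAsFinished];
//     [writer finishWriting];   // or finishWritingWithCompletionHandler: on newer SDKs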
It's quite easy to save the video. Do something similar to this:
//Saving the movie
NSMutableData *data = [[NSMutableData alloc] init];
NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:data];
[archiver encodeObject:*MovieObject* forKey:@"MovieObjectDataKey"];
[archiver finishEncoding];
[[NSUserDefaults standardUserDefaults] setObject:data forKey:@"MovieObjectDefaultsDataKey"];
[archiver release];
[data release];

//Retrieving the movie
NSData *savedMovieData = [[NSUserDefaults standardUserDefaults] objectForKey:@"MovieObjectDefaultsDataKey"];
if (savedMovieData != nil) {
    NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:savedMovieData];
    *MovieObject* = [[unarchiver decodeObjectForKey:@"MovieObjectDataKey"] retain];
    [unarchiver finishDecoding];
    [unarchiver release];
} else {
    //Download the stream of your movie
}
The only thing you really have to change there is * MovieObject *, once in each step.
I know what you want to achieve, but I only have a workaround. I had to implement the same behaviour and ended up streaming the video from the server while downloading it alongside the stream. The next time the user tries to stream the video, determine whether it was downloaded to disk; otherwise stream it again. In the normal case the video was downloaded properly and can be reviewed offline.
BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:somePath];
and
fileURLWithPath:isDirectory:
Initializes and returns a newly created NSURL object as a file URL with a specified path.
+ (id)fileURLWithPath:(NSString *)path isDirectory:(BOOL)isDir
Parameters
path
The path that the NSURL object will represent. path should be a valid system path. If path begins with a tilde, it must first be expanded with stringByExpandingTildeInPath. If path is a relative path, it is treated as being relative to the current working directory.
Passing nil for this parameter produces an exception.
isDir
A Boolean value that specifies whether path is treated as a directory path when resolving against relative path components. Pass YES if the path indicates a directory, NO otherwise.
Return Value
An NSURL object initialized with path.
Availability
Available in iOS 2.0 and later.
You can't stream it and save it at the same time, especially with large video files, as the Apple docs say you must use a transport stream for HTTP Live Streaming.
ASIHTTPRequest might make your life easier.
ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
[request setDownloadDestinationPath:@"video.m4v"]; // use [NSBundle mainBundle] to find a better place
From your delegate, handle this:
- (void)request:(ASIHTTPRequest *)request didReceiveData:(NSData *)data;
Do whatever data transcoding with data as you get it and push it off to your AVAssetWriter or movie player layer in real time, whatever you are using. When you're done, the asset should still be saved so you can get it later.

Recording all sounds in use as one track (iOS)

I work with Cocos2d. CocosDenshion does not have the feature that I need.
Does anybody know a way to record and save to a file all of the sounds used in the current stage on iOS? Something like a DJ app - they take music from different pieces and layer them on top of each other.
For example, background music plays during gameplay. When the player jumps, CocosDenshion plays a jumping sound. Can I create a new track that combines the background music and the other sound effects in real time and save it?
As I understand it, AVAudioRecorder is only for recording from the microphone.
What can I use for this - AudioToolbox? OpenAL? CoreAudio? Other frameworks?
I thought about using FMOD, but the license is $500 and I'm not sure my project will make that much money, so I'm looking for a free method or framework.
Thanks.
I found a solution: I use the BASS iOS library.
One downside - it's paid. A second downside - it's written in C++,
and there are not many examples for iOS. But there is good help on the forum.
// BASS initialisation (use BASS_Free to release resources later).
BASS_Init(-1, 44100, 0, NULL, NULL);

// Create the mixer.
BASS_GetInfo(&info); // get output device info (needed for the sample rate)
mixer = BASS_Mixer_StreamCreate(info.freq, 2, 0); // create a stereo mixer with the same sample rate

// Create a decoding channel for the short effect sound.
NSString *shortSound = [[NSBundle mainBundle] pathForResource:@"piano2" ofType:@"wav"];
chan2 = BASS_StreamCreateFile(FALSE, [shortSound cStringUsingEncoding:NSUTF8StringEncoding], 0, 0, BASS_STREAM_DECODE);

// Create a decoding channel for the background music from a file (use an absolute path/URL).
NSString *soundFileName = [[NSBundle mainBundle] pathForResource:@"ff13" ofType:@"mp3"];
chan = BASS_StreamCreateFile(FALSE, [soundFileName cStringUsingEncoding:NSUTF8StringEncoding], 0, 0, BASS_STREAM_DECODE | BASS_SAMPLE_FLOAT | BASS_SAMPLE_LOOP);

// Add the background channel to the mixer (the effect channel can be added later, e.g. on a game event).
BASS_Mixer_StreamAddChannel(mixer, chan, 0);

// Choose the directory and file name for the file that will receive the mixer output.
NSString *documentDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *filename = [documentDir stringByAppendingString:@"/file.m4a"]; // ALAC in an m4a container, so ".m4a" rather than ".mp3"

// Start encoding the mixer output to the file (here: Apple Lossless in an m4a container).
BASS_Encode_StartCAFile(mixer, 'm4af', 'alac', 0, 0, [filename cStringUsingEncoding:NSUTF8StringEncoding]);

// The mixer adds some delay when playing an added channel; this option avoids that.
BASS_ChannelSetAttribute(mixer, BASS_ATTRIB_NOBUFFER, 1);

BASS_ChannelPlay(mixer, 0); // start the mixer
Here the mixer is used to record all the sounds and the microphone voice.

Adding metadata to AAC M4A via AVAssetExportSession

I am creating and storing an AAC-encoded .m4a file using AVAudioRecorder. This produces a playable .m4a file just fine. I then want to use AVAssetExportSession to process the file in order to add metadata to it. The code below produces a .m4a file of a similar size (1 KB less than the source), but when it plays back, there is just silence.
NSURL *url = [NSURL fileURLWithPath:self.m4aPath];
AVURLAsset *asset = [AVAsset assetWithURL:url];

AVMutableMetadataItem *t = [AVMutableMetadataItem metadataItem];
t.key = AVMetadataCommonKeyTitle;
t.keySpace = AVMetadataKeySpaceCommon;
t.value = @"Unit Test";
NSArray *metadata = [NSArray arrayWithObject:t];

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetAppleM4A];
exportSession.outputURL = [NSURL fileURLWithPath:[[NSFileManager rawRecordingsDirectory] stringByAppendingPathComponent:@"test.m4a"]];
exportSession.outputFileType = AVFileTypeAppleM4A;
exportSession.metadata = metadata;
[exportSession exportAsynchronouslyWithCompletionHandler:^{....}];
One more piece of info: When I look at the source and exported file in the Finder, the source file has the black iTunes icon, while the exported file has the white iTunes icon. Not sure what this means in practice, but hoping it might be helpful. Moreover, double-clicking source adds it to iTunes and starts playback, while double-clicking the exported opens iTunes but does nothing.
I had a similar issue where my output m4a file had the white icon (instead of black) and wouldn't play. Though that was when I was creating the original source file from raw sample data, not when adding metadata to it.
My issue was that I wasn't closing the exported file in my code (I was just terminating the app before calling the close function). Once I called the close function, it started working. You might want to check that.
Also, I found "open with->Quicktime" useful as that gives an error when the file is corrupt, and plays it fine when it isn't. More useful than iTunes silently ignoring the error.

Using AVAssetReader to read (stream) from a remote asset

My main goal is to stream a video from a server, and cut it frame by frame while streaming (so that it can be used by OpenGL). For that, I've used this code that I found everywhere on the Internet (as I recall it was from Apple's GLVideoFrame sample code):
NSArray *tracks = [asset tracks];
NSLog(@"%d", tracks.count);

for (AVAssetTrack *track in tracks) {
    NSLog(@"type: %@", [track mediaType]);
    initialFPS = track.nominalFrameRate;
    width  = (GLuint)track.naturalSize.width;
    height = (GLuint)track.naturalSize.height;

    NSError *error = nil;

    // _movieReader is a member variable
    @try {
        self._movieReader = [[[AVAssetReader alloc] initWithAsset:asset error:&error] autorelease];
    }
    @catch (NSException *exception) {
        NSLog(@"%@ -- %@", [exception name], [exception reason]);
        NSLog(@"skipping track");
        continue;
    }

    if (error) {
        NSLog(@"CODE:%d\nDOMAIN:%@\nDESCRIPTION:%@\nFAILURE_REASON:%@",
              [error code], [error domain], error.localizedDescription, [error localizedFailureReason]);
        continue;
    }

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

    [_movieReader addOutput:[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                                        outputSettings:videoSettings]];
    [_movieReader startReading];
    [self performSelectorOnMainThread:@selector(frameStarter) withObject:nil waitUntilDone:NO];
}
But I always get this exception at [[AVAssetReader alloc] initWithAsset:error:].
NSInvalidArgumentException -- *** -[AVAssetReader initWithAsset:error:] Cannot initialize an instance of AVAssetReader with an asset at non-local URL 'http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8'
So my two questions are:
Is the exception really telling me that AVAssetReader must have a local URL? Can it be used for streaming (just like the rest of the AVFoundation classes)?
If the AVFoundation approach won't work, what are other suggestions to stream the video and split its frames at the same time?
Thanks a lot for your help.
AVFoundation does not seem to distinguish so much between local and non-local files as between the KIND of file or protocol used. There is a VERY clear distinction between using mp4/mov files versus using the HTTP Live Streaming protocol via m3u8's, but the differences between a local and a remote mp4 are a little fuzzier.
To expand on the above:
a) If your 'remote' asset is an M3U8 (that is, you are using HTTP 'live' streaming), then no chance whatsoever. No matter if the M3U8 is in your local filesystem or on a remote server, for a multitude of reasons AVAssetReader and all AVAsset-associated functionality just does NOT work. However, AVPlayer, AVPlayerItem etc would work just fine.
b) If it is an MP4/MOV, a little further investigation is due. Local MP4/MOV's work flawlessly. While in case of remote MP4/MOV's, I'm able to create (or retrieve from an AVPlayerItem or AVPlayer or AVAssetTracks) an AVURLAsset with which I'm sometimes able to initialize an AVAssetReader successfully (I'll expand on the 'sometimes' as well, shortly). HOWEVER, copyNextSampleBuffer always returns nil in case of remote MP4's. Since several things UPTO the point of invoking copyNextSampleBuffer work, I'm not 100% sure if:
i) copyNextSampleBuffer not working for remote mp4's, after all the other steps having been successful, is intended/expected functionality.
ii) That the 'other steps' seem to work at all for remote MP4's is an accident of Apple's implementation, and this incompatibility is simply coming to the fore when we hit copyNextSampleBuffer..............what these 'other steps' are, I'll detail shortly.
iii) I'm doing something wrong when trying to invoke copyNextSampleBuffer for remote MP4's.
So @Paula, you could try to investigate a little further with remote MOV/MP4's.
For reference, here are the approaches I tried for capturing a frame from videos (a rough code sketch of approach (a) follows the results summary below):
a)
Create an AVURLAsset directly from the video URL.
Retrieve the video track using [asset tracksWithMediaType:AVMediaTypeVideo]
Prepare an AVAssetReaderTrackOutput using the video track as the source.
Create an AVAssetReader using the AVURLAsset.
Add AVAssetReaderTrackOutput to the AVAssetReader and startReading.
Retrieve images using copyNextSampleBuffer.
b)
Create an AVPlayerItem from the video URL, and then an AVPlayer from it (or create the AVPlayer directly from the URL).
Retrieve the AVPlayer's 'asset' property and load its 'tracks' using "loadValuesAsynchronouslyForKeys:".
Separate the tracks of type AVMediaTypeVideo (or simply call tracksWithMediaType: on the asset once the tracks are loaded), and create your AVAssetReaderTrackOutput using the video track.
Create AVAssetReader using the AVPlayer's 'asset', 'startReading' and then retrieve images using copyNextSampleBuffer.
c)
Create an AVPlayerItem+AVPlayer or AVPlayer directly from the video URL.
KVO the AVPlayerItem's 'tracks' property, and once the tracks are loaded, separate the AVAssetTracks of type AVMediaTypeVideo.
Retrieve the AVAsset from AVPlayerItem/AVPlayer/AVAssetTrack's 'asset' property.
Remaining steps are similar to approach (b).
d)
Create an AVPlayerItem+AVPlayer or AVPlayer directly from the video URL.
KVO the AVPlayerItem's 'tracks' property, and once the tracks are loaded, separate the ones of type AVMediaTypeVideo.
Create an AVMutableComposition, and initialize an associated AVMutableCompositionTrack of type AVMediaTypeVideo.
Insert the appropriate CMTimeRange from video track retrieved earlier, into this AVMutableCompositionTrack.
Similar to (b) and (c), now create your AVAssetReader and AVAssetReaderTrackOutput, but with the difference that you use the AVMutableComposition as the base AVAsset for initializing your AVAssetReader, and AVMutableCompositionTrack as the base AVAssetTrack for your AVAssetReaderTrackOutput.
'startReading' and use copyNextSampleBuffer to get frames from the AVAssetReader.
P.S: I tried approach (d) here to get around the fact that the AVAsset retrieved directly from AVPlayerItem or AVPlayer was not behaving. So I wanted to create a new AVAsset from the AVAssetTracks I already had in hand. Admittedly hacky, and perhaps pointless (where else would the track information be ultimately retrieved from if not the original AVAsset!) but it was worth a desperate try anyway.
Here's a summary of the results for different types of files:
1) Local MOV/MP4's - All 4 approaches work flawlessly.
2) Remote MOV/MP4's - The asset and tracks are retrieved correctly in approaches (b) through (d), and the AVAssetReader is initialized as well but copyNextSampleBuffer always returns nil. In case of (a), creation of the AVAssetReader itself fails with an 'Unknown Error' NSOSStatusErrorDomain -12407.
3) Local M3U8's (accessed through an in-app/local HTTP server) - Approaches (a), (b) and (c) fail miserably, as trying to get an AVURLAsset/AVAsset in any shape or form for files streamed via M3U8's is a fool's errand.
In case of (a), the asset is not created at all, and the initWithURL: call on AVURLAsset fails with an 'Unknown Error' AVFoundationErrorDomain -11800.
In case of (b) and (c), retrieving the AVURLAsset from the AVPlayer/AVPlayerItem or AVAssetTracks returns SOME object, but accessing the 'tracks' property on it always returns an empty array.
In case of (d), I'm able to retrieve and isolate the video tracks successfully, but while trying to create the AVMutableCompositionTrack, it fails when trying to insert the CMTimeRange from the source track into the AVMutableCompositionTrack, with an 'Unknown Error' NSOSStatusErrorDomain -12780.
4) Remote M3U8's, behave exactly the same as local M3U8's.
I'm not entirely educated on why these differences exist, or could not have been mitigated by Apple. But there you go.
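For reference, a rough sketch of approach (a) against a local MP4 (variable names are placeholders and error handling is minimal):
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:localMovieURL options:nil];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

NSDictionary *settings = [NSDictionary dictionaryWithObject:
                          [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                          forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:settings];

[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sample = NULL;
while ((sample = [output copyNextSampleBuffer])) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
    // ... hand the pixel buffer to OpenGL here ...
    CFRelease(sample);   // copyNextSampleBuffer returns a retained (+1) buffer
}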
You can get a remote file into an AVMutableCompositionTrack:
AVURLAsset *soundTrackAsset = [[AVURLAsset alloc] initWithURL:[NSURL URLWithString:@"http://www.yoururl.com/yourfile.mp3"] options:nil];
AVMutableCompositionTrack *compositionAudioSoundTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioSoundTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, soundTrackAsset.duration)
                                    ofTrack:[[soundTrackAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                                     atTime:kCMTimeZero error:nil];
However, this approach does not work very well with files that use heavier compression, like MP4s.

How to set the pitch of an audio file or recorded audio file in the iPhone SDK?

I am recording a file, or I have an audio file; I want to change the pitch and then play the audio file. How can I set the pitch in an iPhone program using Objective-C?
Please help me out of this.
Thank you,
Madan Mohan.
The naive approach is to play it using a different sampling rate as compared to the sampling rate used to record the file. For example, if the file was recorded with Fs=44100Hz, then playing it with Fs=22050Hz, will give you half the original pitch.
Of course this naive approach also changes the duration of the file and introduces other sound artifacts. If you need something less naive, you will have to implement a pitch-shifting algorithm yourself, and that's a huge topic --- I suggest you start by searching for "pitch shift" on Google.
You can use the SoundTouch open source project to change the pitch.
Here is the link : http://www.surina.net/soundtouch/
Once you add soundtouch to your project, you have to give the input sound file path, output sound file path and pitch change as the input.
Since it takes more time to process your sound, it's better to modify SoundTouch so that as you record the voice, the data is handed over directly for processing. It will make your application better.
References
Real-time Pitch Shifting on the iPhone
Create pitch changing code?
You can play sound with a changed pitch using AVAudioEngine and AVAudioPlayerNode. Sample tested code is given below.
@interface ViewController () {
    AVAudioEngine *engine;
}
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self playAudio];
}

- (void)playAudio {
    engine = [[AVAudioEngine alloc] init];

    NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"mp3"];
    NSURL *soundUrl = [NSURL fileURLWithPath:path];
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:soundUrl error:nil];

    AVAudioPlayerNode *node = [AVAudioPlayerNode new];
    [node stop];
    [engine stop];
    [engine reset];
    [engine attachNode:node];

    AVAudioUnitTimePitch *changeAudioUnitTime = [AVAudioUnitTimePitch new];
    // Change this rate value to play faster or slower (1.0 = normal speed).
    changeAudioUnitTime.rate = 1;
    // Change the pitch value (in cents) according to your requirement.
    changeAudioUnitTime.pitch = 100;
    [engine attachNode:changeAudioUnitTime];

    [engine connect:node to:changeAudioUnitTime format:nil];
    [engine connect:changeAudioUnitTime to:engine.outputNode format:nil];

    [node scheduleFile:file atTime:nil completionHandler:nil];
    [engine startAndReturnError:nil];
    [node play];
}

@end
Make sure the path variable has a valid file path.
Note: this code was tested on an actual device and works fine in my project. Don't forget to add CoreAudio.framework and AVFoundation.framework.