I work with Cocos2d, and CocosDenshion does not have the capability I need.
Does anybody know a way to record and save to a file all of the sound used in the current scene on iOS? Think of a DJ app: it takes music from different pieces and layers them on top of each other.
For example, the game plays background music, and when the player jumps, CocosDenshion plays a jumping sound. Can I create a new track that combines the background music and the other sound effects in real time and save it?
As I understand it, AVAudioRecorder only records from the microphone.
What can I use for this: AudioToolbox? OpenAL? Core Audio? Another framework?
I considered FMOD, but its license costs $500 and I'm not sure my project will ever earn that much, so I'm looking for a free method or framework.
Thanks.
I found a solution: the BASS library for iOS.
One downside is that it's paid, and another is that it's written in C++, with hardly any iOS examples. But the help on its forum is good.
//BASS initialisation. Call BASS_Free to release resources when done.
BASS_Init(-1, 44100, 0, NULL, NULL);
//create mixer
BASS_GetInfo(&info); // get output device info; needed for the output freq
mixer = BASS_Mixer_StreamCreate(info.freq, 2, 0); // create a stereo mixer with the same sample rate
NSString *shortSound = [[NSBundle mainBundle] pathForResource:@"piano2" ofType:@"wav"];
//create channel
chan2 = BASS_StreamCreateFile(FALSE, [shortSound cStringUsingEncoding:NSUTF8StringEncoding], 0, 0, BASS_STREAM_DECODE);
//create a channel from a file, using the absolute path
NSString *soundFileName = [[NSBundle mainBundle] pathForResource:@"ff13" ofType:@"mp3"];
chan = BASS_StreamCreateFile(FALSE, [soundFileName cStringUsingEncoding:NSUTF8StringEncoding], 0, 0, BASS_STREAM_DECODE|BASS_SAMPLE_FLOAT|BASS_SAMPLE_LOOP);
//add the channel to the mixer
BASS_Mixer_StreamAddChannel(mixer, chan, 0);
//set the directory and file name for the new file that will hold the mixed output
NSString *documentDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *filename = [documentDir stringByAppendingString:@"/file.mp3"];
//start encoding the mixer output to the file (e.g. when the view loads), with the chosen encoder parameters
BASS_Encode_StartCAFile(mixer, 'm4af', 'alac', 0, 0, [filename cStringUsingEncoding:NSUTF8StringEncoding]);
//the mixer adds some playback latency for added channels; avoid it with this option
BASS_ChannelSetAttribute(mixer, BASS_ATTRIB_NOBUFFER, 1);
BASS_ChannelPlay(mixer, 0); // start the mixer
This mixer setup can be used to record all of the sounds, plus the microphone input.
I want to take two audio files, mix them, and play the result programmatically. While the first audio file is playing, after some (dynamic) amount of time I need to mix a second, short audio file into it somewhere in the middle, and then finally save the result as one audio file on the device. When played back, that file should contain the first audio with the second one mixed in.
I have gone through many forums but couldn't figure out exactly how to achieve this.
Could someone please clarify the following questions?
In this case, what audio file format should I use? Can I use .avi files?
How do I add the second audio at a dynamically chosen time in the first audio file programmatically? For example: if the first audio's total length is 2 minutes, I might need to mix in the second audio file (3 seconds of audio) at 1 minute, 1.5 minutes, or 55 seconds into the first file. It's dynamic.
How do I save the final output audio file on the device? If I save the audio file programmatically somewhere, can I play it back again?
I don't know how to achieve this. Please suggest your thoughts!
Open each audio file
Read the header info
Get raw uncompressed audio into memory as an array of ints for each file
Starting at the point in file 1's array where you want to mix in file 2, loop through, adding file 2's sample value to file 1's, being sure to 'clip' any values above or below the maximum (this is how you mix audio ... yes, it's that simple; see the sketch after these steps). If file 2 is longer, you'll have to make the first array long enough to hold the remainder of file 2 completely.
Write new header info and then the audio from the array to which you added file2.
If there is compression involved or the files won't fit in memory, you may have to implement a more complex buffering scheme.
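Roughly, the mixing step (step 4) might look like this in C for 16-bit signed samples; the names and buffer handling here are just an illustration of the idea, not a drop-in implementation:
#include <stdint.h>
#include <stddef.h>
#include <limits.h>

/* Clamp a widened sample back into the 16-bit range ("clipping"). */
static inline int16_t clip16(int32_t sample) {
    if (sample > INT16_MAX) return INT16_MAX;
    if (sample < INT16_MIN) return INT16_MIN;
    return (int16_t)sample;
}

/* Mix file2's samples into file1 starting at sample index 'offset'.
   file1 must already be long enough to hold offset + file2Count samples. */
void mixInto(int16_t *file1, size_t offset, const int16_t *file2, size_t file2Count) {
    for (size_t i = 0; i < file2Count; i++) {
        int32_t mixed = (int32_t)file1[offset + i] + (int32_t)file2[i]; /* widen before adding */
        file1[offset + i] = clip16(mixed);
    }
}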
In this case, what audio file format should I use? Can I use .avi files?
You can choose a compressed or non-compressed format. Common non-compressed formats include WAV and AIFF. CAF can hold both compressed and non-compressed data. .avi is not an option (it isn't offered by the OS).
If the files are large and storage space (on disk) is a concern, you may consider the AAC format saved in a CAF (or simply .m4a). For most applications, 16-bit samples will be enough, and you can also save space, memory, and CPU by saving these files at an appropriate sample rate (for reference: CDs are 44.1 kHz).
Since the ExtAudioFile interface abstracts the conversion process, you should not have to change your program to compare the size and speed differences of compressed and non-compressed formats for your distribution (AAC in a CAF would be fine for normal applications).
Noncompressed CD quality audio will consume about 5.3 MB per minute, per channel. So if you have 2 stereo audio files, each 3 minutes long, and a 3 minute destination buffer, your memory requirement would be around 50 MB.
Since you have 'minutes' of audio, you may need to avoid loading all of the audio data into memory at once. In order to read, manipulate, and combine audio, you will need a non-compressed representation to work with in memory, so compression formats would not help here. As well, converting a compressed representation to PCM takes a good amount of resources; reading a compressed file, although fewer bytes, can take more (or less) time.
How do I add the second audio at a dynamically chosen time in the first audio file programmatically? For example: if the first audio's total length is 2 minutes, I might need to mix in the second audio file (3 seconds of audio) at 1 minute, 1.5 minutes, or 55 seconds into the first file. It's dynamic.
To read the files and convert them to the format you want to use, use ExtAudioFile APIs - this will convert to your destination sample format for you. Common PCM sample representations in memory include SInt32, SInt16, and float, but that can vary wildly based on the application and the hardware (beyond iOS). ExtAudioFile APIs would also convert compressed formats to PCM, if needed.
Your input audio files should have the same sample rate. If not, you will have to resample the audio, a complex process which also takes a lot of resources (if done correctly/accurately). If you need to support resampling, double the time you've allocated to completing this task (not detailing the process here).
To add the sounds, you would request PCM samples from the files, process, and write to the output file (or buffer in memory).
To determine when to add the other sounds, you will need to get the sample rates for the input files (via ExtAudioFileGetProperty). If you want to write the second sound to the destination buffer at 55s, then you would start adding the sounds at sample number SampleRate * 55, where SampleRate is the sample rate of the files you are reading.
To mix audio, you will just use this form (pseudocode):
mixed[i] = fileA[i] + fileB[i];
but you have to be sure you avoid over/underflow and other arithmetic errors. Typically, you will perform this process using some integer value, because floating point calculations can take a long time (when there are so many). For some applications, you could just shift and add with no worry of overflow - this would effectively reduce each input by one half before adding them. The amplitude of the result would be one half. If you have control over the files' content (e.g. they are all bundled as resources) then you could simply ensure no peak sample in the files exceeded one half of the full scale value (about -6dBFS). Of course, saving as float would solve this issue at the expense of introducing higher CPU, memory, and file i/o demands.
At this point, you'd have 2 files open for reading, and one open for writing, then a few small temporary buffers for processing and mixing the inputs before writing to the output file. You should perform these requests in blocks for efficiency (e.g. read 1024 samples from each file, process the samples, write 1024 samples). The APIs don't guarantee much regarding caching and buffering for efficiency.
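As a rough sketch of that loop: assume readerA, readerB and writer are ExtAudioFileRefs that have already been opened (ExtAudioFileOpenURL / ExtAudioFileCreateWithURL) and given the same 16-bit interleaved stereo client format via kExtAudioFileProperty_ClientDataFormat, that startSample has been computed from the sample rate (e.g. sampleRate * 55), and that clip16 is a clipping helper like the one sketched earlier. Error handling and exact block alignment at the mix-in point are glossed over.
enum { kFramesPerBlock = 1024, kChannels = 2 };
SInt16 bufA[kFramesPerBlock * kChannels];
SInt16 bufB[kFramesPerBlock * kChannels];
SInt64 framesWritten = 0;

while (1) {
    AudioBufferList listA;
    listA.mNumberBuffers = 1;
    listA.mBuffers[0].mNumberChannels = kChannels;
    listA.mBuffers[0].mDataByteSize = sizeof(bufA);
    listA.mBuffers[0].mData = bufA;

    UInt32 framesA = kFramesPerBlock;
    ExtAudioFileRead(readerA, &framesA, &listA); // converts/decodes into the client format
    if (framesA == 0) break;                     // end of file A

    if (framesWritten + framesA > startSample) { // we've reached the mix-in point
        AudioBufferList listB;
        listB.mNumberBuffers = 1;
        listB.mBuffers[0].mNumberChannels = kChannels;
        listB.mBuffers[0].mDataByteSize = sizeof(bufB);
        listB.mBuffers[0].mData = bufB;

        UInt32 framesB = framesA;
        ExtAudioFileRead(readerB, &framesB, &listB); // returns 0 frames once B is exhausted
        for (UInt32 i = 0; i < framesB * kChannels; i++) {
            bufA[i] = clip16((SInt32)bufA[i] + (SInt32)bufB[i]); // add and clip
        }
    }

    listA.mBuffers[0].mDataByteSize = framesA * kChannels * sizeof(SInt16);
    ExtAudioFileWrite(writer, framesA, &listA);
    framesWritten += framesA;
}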
How do I save the final output audio file on the device? If I save the audio file programmatically somewhere, can I play it back again?
ExtAudioFile APIs would work for your read and writing needs. Yes, you can read/play it later.
You can do this using AV Foundation:
- (BOOL) combineVoices1
{
NSError *error = nil;
BOOL ok = NO;
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
CMTime nextClipStartTime = kCMTimeZero;
//Create AVMutableComposition Object.This object will hold our multiple AVMutableCompositionTrack.
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack setPreferredVolume:0.8];
NSString *soundOne = [[NSBundle mainBundle] pathForResource:@"test1" ofType:@"caf"];
NSURL *url = [NSURL fileURLWithPath:soundOne];
AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *clipAudioTrack = [[avAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset.duration) ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *compositionAudioTrack1 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack1 setPreferredVolume:0.3];
NSString *soundOne1 = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"caf"];
NSURL *url1 = [NSURL fileURLWithPath:soundOne1];
AVAsset *avAsset1 = [AVURLAsset URLAssetWithURL:url1 options:nil];
NSArray *tracks1 = [avAsset1 tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *clipAudioTrack1 = [[avAsset1 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[compositionAudioTrack1 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset1.duration) ofTrack:clipAudioTrack1 atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *compositionAudioTrack2 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack2 setPreferredVolume:1.0];
NSString *soundOne2 = [[NSBundle mainBundle] pathForResource:@"song" ofType:@"caf"];
NSURL *url2 = [NSURL fileURLWithPath:soundOne2];
AVAsset *avAsset2 = [AVURLAsset URLAssetWithURL:url2 options:nil];
NSArray *tracks2 = [avAsset2 tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *clipAudioTrack2 = [[avAsset2 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[compositionAudioTrack2 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset2.duration) ofTrack:clipAudioTrack2 atTime:kCMTimeZero error:nil];
AVAssetExportSession *exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession) return NO;
NSString *soundOneNew = [documentsDirectory stringByAppendingPathComponent:@"combined10.m4a"];
//NSLog(@"Output file path - %@", soundOneNew);
// configure export session output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:soundOneNew]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type
// perform the export
[exportSession exportAsynchronouslyWithCompletionHandler:^{
if (AVAssetExportSessionStatusCompleted == exportSession.status) {
NSLog(#"AVAssetExportSessionStatusCompleted");
} else if (AVAssetExportSessionStatusFailed == exportSession.status) {
// a failure may happen because of an event out of your control
// for example, an interruption like a phone call comming in
// make sure and handle this case appropriately
NSLog(#"AVAssetExportSessionStatusFailed");
} else {
NSLog(#"Export Session Status: %d", exportSession.status);
}
}];
return YES;
}
If you are going to play multiple sounds at once, definitely use the *.caf format. Apple recommends it for playing multiple sounds at once. In terms of mixing them programmatically, I am assuming you just want them to play at the same time. While one sound is playing, just tell the other sound to play at whatever time you would like. To set a specific time, use NSTimer (NSTimer Class Reference) and create a method to have the sound play when the timer fires.
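For the timing part, a minimal sketch of that NSTimer idea might look like this (the delay value, the playSecondSound: selector, and the secondPlayer property are illustrative names, not from the answer above):
// Schedule the second sound to start at a dynamically computed offset.
NSTimeInterval delay = 55.0; // whatever time you calculated
[NSTimer scheduledTimerWithTimeInterval:delay target:self selector:@selector(playSecondSound:) userInfo:nil repeats:NO];

// Called when the timer fires.
- (void)playSecondSound:(NSTimer *)timer {
    [self.secondPlayer play]; // an AVAudioPlayer prepared earlier with prepareToPlay
}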
One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed, but rather, they follow a pattern of shorter and longer intervals. It seems as if the iOS has a lower resolution for timing of sounds, and is rounding each sound event to the nearest available point, rounding up or down as needed to keep on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
context = alcCreateContext(device, NULL); // use the device to make a context
alcMakeContextCurrent(context); // set the context to the currently active one
}
// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(#"cannot open file %#: %ld", soundFilePath, result);
// get the size of the file's audio data
UInt64 audioDataSize = 0;
UInt32 propSize = sizeof(audioDataSize);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &audioDataSize);
if (result != 0) DLog(@"cannot find file size: %ld", (long)result);
UInt32 fileSize = (UInt32)audioDataSize; // AudioFileReadBytes and alBufferData take 32-bit sizes
DLog(@"file size: %lu", (unsigned long)fileSize);
// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(#"cannot load data: %ld", result);
AudioFileClose(fileID);
alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;
// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.
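For what it's worth, on later iOS versions the same buffer-duration preference can be requested through AVAudioSession instead of the now-deprecated AudioSessionSetProperty call; something along these lines (treat it as a hint, since the hardware may still pick a different actual duration):
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredIOBufferDuration:0.001 error:&error]; // ask for a ~1 ms IO buffer
[session setActive:YES error:&error];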
I'm currently using OpenAL to play game music. It works fine, except that it doesn't handle anything other than raw WAV files, which means I end up with a ~9 MB soundtrack.
I'm new to OpenAL, and I'm using code directly from Apple's example (https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9) to get the buffer data.
Question: Is there any way to modify this function so it reads compressed audio and decodes it on the fly?
I'm not so worried about the audio file format, just as long as it can be played and is compressed (like mp3, aac, caf). The only reason I want to do this (obviously) is to reduce file size.
Edit: It seems that the problem is not so much in OpenAL as the method I'm using to get the buffer. The function at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9 uses AudioFileOpenURL and AudioFileReadBytes. Is there any way to get the framework to decode the audio for me using ExtAudioFileOpenURL and ExtAudioFileRead?
I have tried the code here: https://devforums.apple.com/message/10678#10678, but I don't know what to make of it. The function I use to get the buffer is at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9, and I haven't really modified it, so that's what I need to build on.
I've started a bounty because I really need this, hopefully someone can point me in the right direction.
You'll need to use audio services to load other formats. Bear in mind that OpenAL ONLY supports uncompressed PCM data, so any data you load needs to be uncompressed during load.
Here's some code that will load any format supported by iOS: https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/Support/OALAudioFile.m
If you want to stream compressed soundtrack-type audio, use AVAudioPlayer since it plays compressed audio straight from disk.
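If you'd rather adapt the loading function yourself instead of pulling in ObjectAL, the key change is to read through ExtAudioFile with a 16-bit PCM client format so the framework decodes compressed data for you. Here is a rough sketch under those assumptions (error handling omitted; the function name and variables are made up for illustration, not taken from Apple's MyOpenALSupport code):
#import <AudioToolbox/AudioToolbox.h>
#import <OpenAL/al.h>

// Decode any iOS-supported audio file (mp3, aac, compressed caf, ...) into a
// malloc'd buffer of 16-bit PCM suitable for alBufferData(). Caller frees the buffer.
// Assumes a mono or stereo source; error handling omitted for brevity.
void *LoadDecodedPCM(CFURLRef fileURL, ALsizei *outSize, ALenum *outFormat, ALsizei *outSampleRate)
{
    ExtAudioFileRef extFile = NULL;
    ExtAudioFileOpenURL(fileURL, &extFile);

    // Find out what's in the file, then ask for 16-bit interleaved PCM at the
    // file's own sample rate and channel count as the "client" (output) format.
    AudioStreamBasicDescription fileFormat;
    UInt32 propSize = sizeof(fileFormat);
    ExtAudioFileGetProperty(extFile, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);

    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = fileFormat.mSampleRate;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mBytesPerFrame    = clientFormat.mChannelsPerFrame * sizeof(SInt16);
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;
    ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // Total length in frames, then read (and implicitly decode) everything at once.
    SInt64 lengthFrames = 0;
    propSize = sizeof(lengthFrames);
    ExtAudioFileGetProperty(extFile, kExtAudioFileProperty_FileLengthFrames, &propSize, &lengthFrames);

    UInt32 frames = (UInt32)lengthFrames;
    UInt32 dataSize = frames * clientFormat.mBytesPerFrame;
    void *pcmData = malloc(dataSize);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
    bufferList.mBuffers[0].mDataByteSize = dataSize;
    bufferList.mBuffers[0].mData = pcmData;
    ExtAudioFileRead(extFile, &frames, &bufferList);
    ExtAudioFileDispose(extFile);

    *outSize = (ALsizei)(frames * clientFormat.mBytesPerFrame);
    *outFormat = (clientFormat.mChannelsPerFrame == 2) ? AL_FORMAT_STEREO16 : AL_FORMAT_MONO16;
    *outSampleRate = (ALsizei)clientFormat.mSampleRate;
    return pcmData;
}
You would then pass the returned buffer, format, size, and sample rate straight to alBufferData().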
You don't need any third-party library to open compressed files. With a little help from the AudioToolbox framework (AudioToolbox/AudioToolbox.h) you can open and read the data of a .caf file, which is a very good choice by the way (better than mp3 or ogg) in terms of performance (minimal CPU impact during decompression). So when the data gets to OpenAL it is already PCM, ready to fill the buffers. Here is some sample code showing how you can achieve this:
-(void) prepareFiles:(NSString *) filePath{
// get the full path of the file
NSString* fileName = [[NSBundle mainBundle] pathForResource:filePath ofType:@"caf"];
// open the file using the custom created methods (see below)
AudioFileID fileID = [self openAudioFile:fileName];
preparedAudioFileSize = [self audioFileSize:fileID];
if (preparedAudioFile){
free(preparedAudioFile);
preparedAudioFile = nil;
}
preparedAudioFile = malloc(preparedAudioFileSize);
//read the file's audio data into preparedAudioFile
AudioFileReadBytes(fileID, false, 0, &preparedAudioFileSize, preparedAudioFile);
//close the file
AudioFileClose(fileID);
}
-(AudioFileID)openAudioFile:(NSString*)filePath
{
AudioFileID fileID;
NSURL * url = [NSURL fileURLWithPath:filePath];
OSStatus result = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, 0, &fileID);
if (result != noErr) {
NSLog(@"failed to open: %@", filePath);
}
return fileID;
}
-(UInt32)audioFileSize:(AudioFileID)fileDescriptor
{
UInt64 outDataSize = 0;
UInt32 thePropSize = sizeof(UInt64);
OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
if(result != 0) NSLog(#"cannot find file size");
return (UInt32)outDataSize;
}
Based on Karl's reply above, I made a minimal, single C++ function which opens a file and gives you back a buffer of PCM audio (suitable for OpenAL) and all the info you need to create an OpenAL sound (format, sample rate, buffer size, etc.).
the two files you need are here:
https://gist.github.com/ofTheo/5171369
hope it helps!
theo
See if this works: http://kcat.strangesoft.net/openal-tutorial.html
You might try using a third-party library to decode an mp3 or ogg file into a char* buffer, and then hand that buffer to OpenAL. That would solve the file-size problem.
For ogg, you should find the libraries on their website.
For mp3, I honestly don't know where to find a lightweight library that can do that, but one should exist.
I am recording a file, or I already have an audio file, and I want to change its pitch and play it. How can I set the pitch in an iPhone program using Objective-C?
Please help me out with this.
Thank you,
Madan Mohan.
The naive approach is to play it using a different sampling rate as compared to the sampling rate used to record the file. For example, if the file was recorded with Fs=44100Hz, then playing it with Fs=22050Hz, will give you half the original pitch.
Of course this naive approach involves changing the duration of the file, along with other sound-related artifacts. If you need something less naive, you will have to implement a pitch-shifting algorithm yourself, and that's a huge topic; I suggest you start by searching for "pitch shift" on Google.
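For instance, if you are already playing through OpenAL (as in other answers on this page), the naive approach maps directly onto the source's AL_PITCH parameter, which resamples on playback and therefore changes the duration too:
// Naive pitch change by resampled playback: 0.5 = one octave down (and twice as long),
// 2.0 = one octave up (and half as long). 'source' is an existing OpenAL source with a buffer attached.
alSourcef(source, AL_PITCH, 0.5f);
alSourcePlay(source);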
You can use the SoundTouch open-source project to change the pitch.
Here is the link: http://www.surina.net/soundtouch/
Once you add SoundTouch to your project, you have to give it the input sound file path, the output sound file path, and the pitch change as input.
Since processing the sound takes time, it's better to modify SoundTouch so that while you record the voice you feed the data directly in for processing. It will make your application feel better.
References
Real-time Pitch Shifting on the iPhone
Create pitch changing code?
You can play a sound with a changed pitch using AVAudioEngine and AVAudioPlayerNode. Sample tested code is given below.
@interface ViewController () {
AVAudioEngine *engine;
}
@end
- (void)viewDidLoad {
[super viewDidLoad];
[self playAudio];
}
-(void)playAudio{
engine = [[AVAudioEngine alloc] init];
NSString* path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"mp3"];
NSURL *soundUrl = [NSURL fileURLWithPath:path];
AVAudioFile* File = [[AVAudioFile alloc] initForReading: soundUrl error: nil];
AVAudioPlayerNode* node = [AVAudioPlayerNode new];
[node stop];
[engine stop];
[engine reset];
[engine attachNode: node];
AVAudioUnitTimePitch* changeAudioUnitTime = [AVAudioUnitTimePitch new];
//change the rate to play faster or slower (1 = normal speed)
changeAudioUnitTime.rate = 1;
//change the pitch as required; the value is in cents (100 cents = one semitone)
changeAudioUnitTime.pitch = 100;
[engine attachNode: changeAudioUnitTime];
[engine connect:node to: changeAudioUnitTime format: nil];
[engine connect:changeAudioUnitTime to: engine.outputNode format:nil];
[node scheduleFile:File atTime: nil completionHandler: nil];
[engine startAndReturnError: nil];
[node play];
}
Make sure the path variable holds a valid file path.
Note: this code was tested on an actual device and works fine in my project. Don't forget to add CoreAudio.framework and AVFoundation.framework.
I need to build a visual graph that represents voice levels (dB) in a recorded file. I tried to do it this way:
NSError *error = nil;
AVAudioPlayer *meterPlayer = [[AVAudioPlayer alloc]initWithContentsOfURL:[NSURL fileURLWithPath:self.recording.fileName] error:&error];
if (error) {
_lcl_logger(lcl_cEditRecording, lcl_vError, @"Cannot initialize AVAudioPlayer with file %@ due to: %@ (%@)", self.recording.fileName, error, error.userInfo);
} else {
[meterPlayer prepareToPlay];
meterPlayer.meteringEnabled = YES;
for (NSTimeInterval i = 0; i <= meterPlayer.duration; ++i) {
meterPlayer.currentTime = i;
[meterPlayer updateMeters];
float averagePower = [meterPlayer averagePowerForChannel:0];
_lcl_logger(lcl_cEditRecording, lcl_vTrace, @"Second: %f, Level: %f dB", i, averagePower);
}
}
[meterPlayer release];
It would be cool if it worked, but it didn't: I always get -160 dB. Any other ideas on how to implement this?
UPDATE: Here is what I got in the end (waveform image): http://img22.imageshack.us/img22/5778/waveform.png
I just want to help others who run into this same question and spend a lot of time searching. To save your time, I'm posting my answer. I dislike that some people here treat this as some kind of secret...
After searching through articles about Extended Audio File Services, Audio Queue, and AVFoundation, I realised that I should use AVFoundation. The reason is simple: it is the newest framework and it is Objective-C rather than C++ in style.
So the steps to do it are not complicated:
Create an AVAsset from the audio file
Create an AVAssetReader from the AVAsset
Get the AVAssetTrack from the AVAsset
Create an AVAssetReaderTrackOutput from the AVAssetTrack
Add the AVAssetReaderTrackOutput to the AVAssetReader to start reading out the audio data
From the AVAssetReaderTrackOutput you can copyNextSampleBuffer one by one (loop until all the data is read out).
Each copyNextSampleBuffer gives you a CMSampleBufferRef, which can be used to get an AudioBufferList via CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. An AudioBufferList is an array of AudioBuffers, and an AudioBuffer is a chunk of audio data stored in its mData member.
You can implement the above with Extended Audio File Services as well, but I think the AVFoundation approach is easier.
So, the next question: what do you do with the mData? Note that when you create the AVAssetReaderTrackOutput, you can specify its output format, so we specify LPCM for the output.
Then the mData you finally get is actually an array of float amplitude values.
Easy, right? Though it took me a lot of time to piece this together from bits here and there.
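A condensed sketch of those steps (error handling omitted; audioFileURL is a placeholder, and the LPCM settings shown are one reasonable choice rather than the only one):
#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:audioFileURL options:nil];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:nil];

// Ask for LPCM output: 32-bit float, interleaved, native endian.
NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM),
                            AVLinearPCMBitDepthKey : @32,
                            AVLinearPCMIsFloatKey : @YES,
                            AVLinearPCMIsBigEndianKey : @NO,
                            AVLinearPCMIsNonInterleaved : @NO };
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    AudioBufferList bufferList;
    CMBlockBufferRef blockBuffer = NULL;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL, 0, &blockBuffer);

    // mData is now an array of float amplitude values.
    float *samples = (float *)bufferList.mBuffers[0].mData;
    NSUInteger count = bufferList.mBuffers[0].mDataByteSize / sizeof(float);
    float peak = 0.0f;
    for (NSUInteger i = 0; i < count; i++) {
        peak = MAX(peak, fabsf(samples[i])); // e.g. track the peak of each buffer for a waveform graph
    }

    CFRelease(blockBuffer);
    CFRelease(sampleBuffer);
}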
Two useful resources to share:
Read this article to know basic terms and conceptions: https://www.mikeash.com/pyblog/friday-qa-2012-10-12-obtaining-and-interpreting-audio-data.html
Sample code: https://github.com/iluvcapra/JHWaveform
You can copy most of the above-mentioned code from this sample directly and use it for your own purposes.
I haven't used it myself, but Apple's avTouch iPhone sample has bar graphs powered by AVAudioPlayer, and you can easily check to see how they do it.
I don't think you can use AVAudioPlayer based on your constraints. Even if you could get it to "start" without actually playing the sound file, it would only help you build a graph as fast as the audio file would stream. What you're talking about is doing static analysis of the sound, which will require a much different approach. You'll need to read in the file yourself and parse it manually. I don't think there's a quick solution using anything in the SDK.
OK guys, it seems I'm going to answer my own question again: http://www.supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/ Not a lot of specifics, but at least you will know which Apple docs to read.