In my iOS app, I am playing several short WAV files, and ultimately I want to export everything that was played to a single audio file, such as a WAV or CAF file. I have managed to do the playback using AUFilePlayer. How do I save the audio played via AUFilePlayer to a WAV or CAF file?
You'll probably want to look into the ExtAudioFile API. This exposes a function called ExtAudioFileWrite which is designed to tie in nicely with the data your Audio Units are passing around. ExtAudioFileWrite's signature is as follows:
OSStatus ExtAudioFileWrite (
    ExtAudioFileRef        inExtAudioFile,
    UInt32                 inNumberFrames,
    const AudioBufferList  *ioData
);
Which coincides nicely with an Audio Unit render callback, which looks like this:
OSStatus (*AURenderCallback)(
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
);
Notice the shared UInt32 inNumberFrames and AudioBufferList * ioData args.
So, your workflow could be:
1. Get an ExtAudioFile set up for writing
2. Get your AUGraph set up to render audio
3. Capture the AudioBufferLists that your AudioUnits are passing around
Step 3 requires a bit more knowledge of how your app is set up, so I can't really help you out too much there. If you want your audio to go to the speakers as well as being written to a file, you'll probably want to make use of AUGraphAddRenderNotify, which will let you know whenever a render happens (and lets you hook in your own AURenderCallback to write to your ExtAudioFile).
If you're doing things this way (i.e., live in a render callback), make sure to use ExtAudioFileWriteAsync so you don't block the render thread.
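To make that concrete, here is a rough sketch of the ExtAudioFile side; the file path, the clientFormat argument, and the function names are placeholders of mine rather than anything from your project:

#import <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef gExtAudioFile = NULL;

// Render-notify callback hooked into the AUGraph; writes each rendered
// buffer to the file after the output unit has produced it.
static OSStatus renderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // Async variant so the render thread never blocks on disk I/O.
        ExtAudioFileWriteAsync(gExtAudioFile, inNumberFrames, ioData);
    }
    return noErr;
}

static void startRecording(AUGraph graph, AudioStreamBasicDescription clientFormat)
{
    // clientFormat is assumed to be the stream format your graph renders in;
    // the path is a placeholder for wherever you want the file to go.
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/output.caf"),
                                                 kCFURLPOSIXPathStyle, false);
    ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &clientFormat, NULL,
                              kAudioFileFlags_EraseFile, &gExtAudioFile);
    CFRelease(url);

    // Tell ExtAudioFile what format the incoming buffers are in.
    ExtAudioFileSetProperty(gExtAudioFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    // Priming call: lets ExtAudioFile allocate its async buffers before the
    // real-time thread starts calling it.
    ExtAudioFileWriteAsync(gExtAudioFile, 0, NULL);

    AUGraphAddRenderNotify(graph, renderNotify, NULL);
}

When you're done recording, remove the notify with AUGraphRemoveRenderNotify and call ExtAudioFileDispose to flush and close the file.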
I've been banging my head against this problem all morning.
I have set up a connection to a data source which returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, like opening a stream to a radio station.)
I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue up a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.
I've been searching all morning and found various examples, but none were really laid out.
In the
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
method, the "data" parameter is the audio packet. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at Matt Gallagher's AudioStreamer, but I was still unable to get it working.
Can anyone here help, preferably with some working examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on format, this could be more or less tricky, but in general you should use Audio Converter Services for this step.
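Just to sketch the shape of that step (the source-format fields below are illustrative; fill them in from whatever your server actually sends):

#import <AudioToolbox/AudioToolbox.h>

static AudioConverterRef makeConverter(void) {
    AudioStreamBasicDescription srcFormat = {0};         // the compressed format coming in
    srcFormat.mFormatID         = kAudioFormatMPEGLayer3; // placeholder; use your real format
    srcFormat.mSampleRate       = 44100;
    srcFormat.mChannelsPerFrame = 2;
    srcFormat.mFramesPerPacket  = 1152;                   // 1152 frames per MPEG Layer 3 packet

    AudioStreamBasicDescription dstFormat = {0};          // interleaved 16-bit PCM for playback
    dstFormat.mFormatID         = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mSampleRate       = 44100;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel   = 16;
    dstFormat.mBytesPerFrame    = 4;
    dstFormat.mFramesPerPacket  = 1;
    dstFormat.mBytesPerPacket   = 4;

    AudioConverterRef converter = NULL;
    AudioConverterNew(&srcFormat, &dstFormat, &converter);
    // The actual conversion is then driven by AudioConverterFillComplexBuffer(),
    // which pulls compressed packets from a callback you supply.
    return converter;
}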
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
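In other words, the delegate method just accumulates bytes. A minimal sketch, where streamBuffer is a hypothetical NSMutableData (or, better, a ring buffer) owned by your class:

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    @synchronized (self.streamBuffer) {
        [self.streamBuffer appendData:data];   // just accumulate; no playback logic here
    }
}

The render callback then drains that buffer from the audio thread; for real code you'd want something lock-free rather than @synchronized, since blocking the render thread causes glitches.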
Now, to play the data you 'stored' in memory using the buffer, you need to use the RemoteIO audio unit. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        // inNumberFrames * 2 assumes 2 bytes per frame (16-bit mono PCM);
        // adjust the multiplier to match your stream format.
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write yourself: it
            // returns the next byte available in your stream buffer.
            frameBuffer[j] = getNextPacket();
        }
    }
    return noErr;
}
Basically, what it does is fill the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play, so the player goes silent when there isn't enough data in the stream buffer.
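Inside the callback above, that underrun handling can be as simple as something like this (setting kAudioUnitRenderAction_OutputIsSilence is optional, but it tells downstream units the buffer is silent):

    // If the stream buffer has run dry, output silence instead of stale bytes.
    memset(buffer.mData, 0, buffer.mDataByteSize);
    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;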
Also, you can achieve the same thing with OpenAL using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
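A very rough sketch of that OpenAL pattern follows; fillPCM() is a hypothetical helper you'd write to pull the next PCM chunk out of your stream buffer, and source/freq stand for your existing OpenAL source and sample rate:

#include <OpenAL/al.h>
#include <stddef.h>

// Hypothetical helper you implement: copies up to len bytes of the next PCM
// chunk from your stream buffer into dst.
void fillPCM(void *dst, size_t len);

static void serviceOpenALQueue(ALuint source, ALsizei freq) {
    static short pcmChunk[8192];                       // scratch space for one chunk
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buf;
        alSourceUnqueueBuffers(source, 1, &buf);       // reclaim a buffer that finished playing
        fillPCM(pcmChunk, sizeof(pcmChunk));           // refill it with fresh PCM
        alBufferData(buf, AL_FORMAT_STEREO16, pcmChunk, sizeof(pcmChunk), freq);
        alSourceQueueBuffers(source, 1, &buf);         // put it back at the end of the queue
    }
    ALint state;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING) alSourcePlay(source);     // restart if the source ran out of buffers
}

You'd call something like this periodically (e.g. from a timer) so the source never starves.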
That's it. Happy coding!
In my application, I am receiving audio data in linear PCM format, which I need to play.
I am following the iOS SpeakHere example. However, I cannot figure out how and where I should provide a buffer to the AudioQueue.
Can anyone provide me with a working example of playing an audio buffer on iOS via AudioQueue?
In the SpeakHere example playback is achieved using AudioQueue.
In the set up of AudioQueue, a function is specified that will be called when the queue wants more data.
You can see that in this method:
void AQPlayer::SetupNewQueue()
Here's the line that specifies the callback function:
XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");
If you take a look at AQPlayer::AQBufferCallback, you'll see where it gets the data from. In this example, the data has been written out to a file on disk. That's a good solution if you want to save memory, or if there's a possibility the audio file could be quite large.
Anyway, looking at AQPlayer::AQBufferCallback, you'll see a call to a function AudioFileReadPackets. That's what reads in the audio packets from the file on disk. It reads them straight into the buffer that AudioQueue will use:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes, inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
inCompleteAQBuffer->mAudioData);
That buffer is inCompleteAQBuffer->mAudioData.
Finally, the callback function must enqueue the buffer as follows:
if (nPackets > 0) {
    inCompleteAQBuffer->mAudioDataByteSize = numBytes;
    inCompleteAQBuffer->mPacketDescriptionCount = nPackets;
    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
}
Note first that it has to check that we have some packets to play. It also has to specify how many bytes are in the buffer.
Then, this line here:
THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
That keeps track of where we are overall in the audio file. In other words, as more data is copied in from the file, we need to move mCurrentPacket forward so that the next read puts data in the correct place.
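If your linear PCM is arriving over the network and sitting in memory rather than in a file, the same pattern applies: the queue's output callback just copies from your own buffer instead of calling AudioFileReadPackets. Here is a rough sketch, assuming constant-bitrate LPCM; the PCMSource struct and the function names are placeholders of mine, not part of SpeakHere:

#import <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical container for the LPCM bytes you have received so far.
typedef struct {
    const char *bytes;    // pointer to your PCM data
    size_t      length;   // total bytes available
    size_t      offset;   // how far playback has got
} PCMSource;

// Called whenever the queue needs another buffer filled.
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    PCMSource *src = (PCMSource *)inUserData;
    size_t remaining = src->length - src->offset;
    size_t toCopy = remaining < inBuffer->mAudioDataBytesCapacity
                  ? remaining : inBuffer->mAudioDataBytesCapacity;
    if (toCopy == 0) { AudioQueueStop(inAQ, false); return; }   // nothing left to play

    memcpy(inBuffer->mAudioData, src->bytes + src->offset, toCopy);
    inBuffer->mAudioDataByteSize = (UInt32)toCopy;   // tell the queue how many bytes are valid
    src->offset += toCopy;

    // No packet descriptions are needed for constant-bitrate LPCM.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static void startPlayback(PCMSource *src, AudioStreamBasicDescription format) {
    AudioQueueRef queue;
    AudioQueueNewOutput(&format, MyAQOutputCallback, src,
                        CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &queue);

    // Prime a few buffers by hand; after that the callback keeps them fed.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 16 * 1024, &buffer);
        MyAQOutputCallback(src, queue, buffer);
    }
    AudioQueueStart(queue, NULL);
}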
I need to play linear PCM data live on an iPhone.
I get a live data stream via RTSP; I can currently read it on the iPhone, save it into a file, and play it on a desktop audio player that supports PCM, so I think the transport is okay.
Now I'm stuck: I have completely no idea what to do with my NSData object containing the data.
I did a bit of research and ended up with Audio Units, but I just cannot assign my NSData to the audio buffer, or rather, I have no clue how.
For my instance, I assigned the callback:
AURenderCallbackStruct input;
input.inputProc = makeSound;
input.inputProcRefCon = self;
and I have the function 'makeSound':
OSStatus makeSound(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    // so what to do here?
    // ioData->mBuffers[0].mData = [mySound bytes]; does not work, nor does
    // ioData->mBuffers = [mySound bytes];
    return noErr;
}
Is my approach wrong in general?
What do I need to know/learn/implement? I am a complete audio newbie, so my assumption was that I don't need several buffers: when I get a new sound package from RTSP, the old one has ended, since it's a live stream. (I base this on my recordings, which just appended the bytes without looking up presentation timestamps, since I don't receive any anyway.)
Cheers
I don't know if this is exactly what you are looking for, but some of Matt Gallagher's AudioStreamer code might be helpful to you. In particular, check out how he handles the audio buffering.
http://cocoawithlove.com/2010/03/streaming-mp3aac-audio-again.html
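To give a flavour of what the render callback ends up doing once you have that buffering in place: you copy bytes into the buffer Core Audio hands you, rather than reassigning ioData's pointers. A hedged sketch, where ringBufferRead() is a hypothetical helper that copies your buffered RTSP payload into the destination and returns how many bytes it actually copied:

#import <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical helper you implement: copies up to len bytes of buffered RTSP
// payload into dst and returns how many bytes were actually copied.
UInt32 ringBufferRead(void *dst, UInt32 len);

static OSStatus makeSound(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buf = &ioData->mBuffers[i];
        // Copy whatever PCM you have buffered into the unit's buffer.
        UInt32 copied = ringBufferRead(buf->mData, buf->mDataByteSize);
        if (copied < buf->mDataByteSize) {
            // Not enough data yet: pad with silence so you don't play garbage
            // while the next RTSP packet is still in flight.
            memset((char *)buf->mData + copied, 0, buf->mDataByteSize - copied);
        }
    }
    return noErr;
}

On the receiving side, your RTSP handler just appends [data bytes] / [data length] from each NSData into that same ring buffer, so the network code and the callback never touch ioData at the same time.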
While there are plenty of tutorials on how to use AVCaptureSession to grab camera data, I can find no information (even on Apple's developer site itself) on how to properly handle microphone data.
I have implemented AVCaptureAudioDataOutputSampleBufferDelegate, and I'm getting calls to my delegate, but I have no idea how the contents of the CMSampleBufferRef I get are formatted. Are the contents of the buffer one discrete sample? What are its properties? Where can these properties be set?
Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call for AVCaptureAudioDataOutput (no setAudioSettings or anything similar).
They are formatted as LPCM! You can verify this by getting the AudioStreamBasicDescription like so:
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
and then checking the stream description's mFormatID.
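For example (a sketch assuming the standard AVCaptureAudioDataOutputSampleBufferDelegate callback signature), you can log the ASBD fields and the per-buffer sample count to see exactly what you're being handed:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(fmt);

    if (asbd->mFormatID == kAudioFormatLinearPCM) {
        NSLog(@"LPCM: %.0f Hz, %u channels, %u bits per channel, format flags 0x%x",
              asbd->mSampleRate,
              (unsigned)asbd->mChannelsPerFrame,
              (unsigned)asbd->mBitsPerChannel,
              (unsigned)asbd->mFormatFlags);
    }

    // Each CMSampleBuffer holds a run of samples, not just one:
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
    NSLog(@"%ld samples in this buffer", (long)numSamples);
}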
I want to use OpenAL to play music in an iOS game. The music files are stored in MP3 format and I want to stream them using a buffer queue. I load audio data into the buffers using AudioFileReadPacketData(). However, playing the buffers only gives me noise. It works perfectly for CAF files, but not for MP3s. Did I miss some vital step in decoding the file?
Code I use to open the sound file:
- (void) openFile:(NSString*)fileName {
    NSBundle *bundle = [NSBundle mainBundle];
    CFURLRef url = (CFURLRef)[[NSURL fileURLWithPath:[bundle pathForResource:fileName ofType:@"mp3"]] retain];
    AudioFileOpenURL(url, kAudioFileReadPermission, 0, &audioFile);
    AudioStreamBasicDescription theFormat;
    UInt32 formatSize = sizeof(theFormat);
    AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &formatSize, &theFormat);
    freq = (ALsizei)theFormat.mSampleRate;
    CFRelease(url);
}
Code I use to fill in buffers:
- (void) loadOneChunkIntoBuffer:(ALuint)buffer {
    char data[STREAM_BUFFER_SIZE];
    UInt32 loadSize = STREAM_BUFFER_SIZE;
    AudioStreamPacketDescription packetDesc[STREAM_PACKETS];
    UInt32 numPackets = STREAM_PACKETS;
    AudioFileReadPacketData(audioFile, NO, &loadSize, packetDesc, packetsLoaded, &numPackets, data);
    alBufferData(buffer, AL_FORMAT_STEREO16, data, loadSize, freq);
    packetsLoaded += numPackets;
}
Because you're reading bytes of MP3 data and treating them as PCM data.
You almost certainly want AudioFileReadPacketData(). EDIT: Except that still gives you MP3 data; it just gives it in packets and (possibly) parses packet headers.
If you don't require OpenAL, AVAudioPlayer is probably the better way to go (according to the Multimedia Programming Guide, there's also Audio Queue Services if you want more control).
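For reference, the AVAudioPlayer route is only a few lines (a sketch; the resource name is a placeholder):

#import <AVFoundation/AVFoundation.h>

NSURL *url = [[NSBundle mainBundle] URLForResource:@"music" withExtension:@"mp3"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
player.numberOfLoops = -1;    // loop indefinitely; remove for one-shot playback
[player prepareToPlay];
[player play];
// Keep a strong reference to player (e.g. in a property), otherwise it gets
// deallocated and playback stops.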
If you really need to use OpenAL, according to TN2199 you'll need to convert it to PCM in the native byte order. See oalTouch/Classes/MyOpenALSupport.c for an example of using Extended Audio File Services to do this. Note that TN2199 says the format "must ... not use hardware decompression" — according to the Multimedia Programming Guide, software decoding is supported for everything except HE-AAC since OS 3.0. Also note that software MP3 decoding can use a significant amount of CPU time.
Alternatively, explicitly convert the audio using AudioConverter or (possibly) AudioUnit with kAudioUnitSubType_AUConverter. If you do this, it might be worthwhile decompressing everything once and keeping it in memory to minimize overhead.
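To make the Extended Audio File Services route concrete, here is a rough sketch reusing the names from the question (extAudioFile would replace the audioFile ivar, and freq / STREAM_BUFFER_SIZE are as before). Treat it as an outline rather than tested code, and ideally match pcmFormat.mSampleRate to the file's actual rate:

- (void) openFileForDecoding:(NSString*)fileName {
    NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:@"mp3"];
    CFURLRef url = (CFURLRef)[NSURL fileURLWithPath:path];
    ExtAudioFileOpenURL(url, &extAudioFile);    // extAudioFile is an ExtAudioFileRef ivar

    // Ask ExtAudioFile to hand back 16-bit interleaved PCM regardless of what
    // the underlying file contains; it performs the MP3 decode for us.
    AudioStreamBasicDescription pcmFormat = {0};
    pcmFormat.mSampleRate       = 44100;        // better: read the file's format and copy its rate
    pcmFormat.mFormatID         = kAudioFormatLinearPCM;
    pcmFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcmFormat.mChannelsPerFrame = 2;
    pcmFormat.mBitsPerChannel   = 16;
    pcmFormat.mBytesPerFrame    = 4;
    pcmFormat.mFramesPerPacket  = 1;
    pcmFormat.mBytesPerPacket   = 4;
    ExtAudioFileSetProperty(extAudioFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(pcmFormat), &pcmFormat);
    freq = (ALsizei)pcmFormat.mSampleRate;
}

- (void) loadOneChunkIntoBuffer:(ALuint)buffer {
    char data[STREAM_BUFFER_SIZE];
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mDataByteSize   = STREAM_BUFFER_SIZE;
    bufferList.mBuffers[0].mData           = data;

    UInt32 frames = STREAM_BUFFER_SIZE / 4;                  // 4 bytes per stereo 16-bit frame
    ExtAudioFileRead(extAudioFile, &frames, &bufferList);    // decoded PCM lands in data

    alBufferData(buffer, AL_FORMAT_STEREO16, data, frames * 4, freq);
}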