Been banging my head against this problem all morning.
I have set up a connection to a data source which returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, like opening a stream to a radio station.)
I have managed to receive all the packets of data in my code; now I just need to play it. I want to play the data as it comes in, so I do not want to queue a few minutes' worth or anything; I want to use the data I am receiving at that exact moment and play it.
Now I have been searching all morning and found different examples, but none of them were really laid out.
In the
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
method, the "data" parameter is the audio packet. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at Matt Gallagher's AudioStreamer, but was still unable to get it working.
Can anyone here help, preferably with some working examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course almost never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on the format, this could be more or less tricky, but in general you should use Audio Converter Services for this step.
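For illustration only, here is a minimal sketch of creating such a converter; the AAC source format and the 16-bit stereo PCM destination are assumptions, so adjust them to whatever your server actually sends:

#include <AudioToolbox/AudioToolbox.h>

// Source format: assumed here to be AAC; describe whatever the server really sends.
AudioStreamBasicDescription srcFormat = {0};
srcFormat.mFormatID         = kAudioFormatMPEG4AAC;
srcFormat.mSampleRate       = 44100.0;
srcFormat.mChannelsPerFrame = 2;

// Destination format: 16-bit interleaved linear PCM, ready for a render callback.
AudioStreamBasicDescription dstFormat = {0};
dstFormat.mFormatID         = kAudioFormatLinearPCM;
dstFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
dstFormat.mSampleRate       = 44100.0;
dstFormat.mChannelsPerFrame = 2;
dstFormat.mBitsPerChannel   = 16;
dstFormat.mBytesPerFrame    = 4;   // 2 channels * 2 bytes
dstFormat.mFramesPerPacket  = 1;
dstFormat.mBytesPerPacket   = 4;

AudioConverterRef converter = NULL;
OSStatus status = AudioConverterNew(&srcFormat, &dstFormat, &converter);
// You would then pull converted PCM with AudioConverterFillComplexBuffer,
// feeding it the compressed packets you buffered in didReceiveData:.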
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
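For example, a minimal sketch of that buffering step (the _streamBuffer mutable data object and _bufferLock are hypothetical names for your own buffer):

// Hypothetical ivars: NSMutableData *_streamBuffer; NSLock *_bufferLock;
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [_bufferLock lock];
    [_streamBuffer appendData:data];   // just store the bytes; the audio callback drains them later
    [_bufferLock unlock];
}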
Now, to play the data you 'stored' in memory using the buffer you need to use RemoteIO and audio units. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        // *2 because this example assumes 16-bit samples, i.e. 2 bytes per frame.
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write yourself; it returns
            // the next byte available in your stream buffer.
            frameBuffer[j] = getNextPacket();
        }
    }
    return noErr;
}
Basically, what it does is fill the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play, so the player outputs silence when there is not enough data in the stream buffer.
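For instance, when your stream buffer runs dry, a sketch of the silence case (inside the per-buffer loop of playbackCallback) could be as simple as:

// When your own buffering code reports no data available:
memset(buffer.mData, 0, buffer.mDataByteSize);              // fill the output with silence
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;   // optional hint to Core Audio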
Also, you can achieve the same thing with OpenAL using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
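If you go the OpenAL route, the usual streaming pattern is to poll for processed buffers, refill them, and queue them again. A rough sketch, where source is your OpenAL source and fillWithNextPCMChunk() is a hypothetical helper that copies PCM from your stream buffer:

static unsigned char pcm[16384];   // scratch buffer; the size here is arbitrary
ALint processed = 0;
alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
while (processed-- > 0) {
    ALuint buf;
    alSourceUnqueueBuffers(source, 1, &buf);      // reclaim a buffer the source finished playing
    ALsizei size = fillWithNextPCMChunk(pcm, sizeof(pcm));   // hypothetical: pull PCM from your stream buffer
    alBufferData(buf, AL_FORMAT_STEREO16, pcm, size, 44100);
    alSourceQueueBuffers(source, 1, &buf);        // hand it back to the source with fresh data
}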
That's it. Happy coding!
In my iOS app, I am playing some short wave files, and at the end I am trying to export everything that I played to a single audio file, such as a WAV or CAF file. I have managed to do the playback using AUFilePlayer. How do I save the audio played via AUFilePlayer to a WAV or CAF file?
You'll probably want to look into the ExtAudioFile API. This exposes a function called ExtAudioFileWrite which is designed to tie in nicely with the data your Audio Units are passing around. ExtAudioFileWrite's signature is as follows:
OSStatus ExtAudioFileWrite (
    ExtAudioFileRef        inExtAudioFile,
    UInt32                 inNumberFrames,
    const AudioBufferList  *ioData
);
Which coincides nicely with an Audio Unit render callback, which looks like this:
OSStatus (*AURenderCallback)(
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
);
Notice the shared UInt32 inNumberFrames and AudioBufferList * ioData args.
So, your workflow could be:
1. Get an ExtAudioFile set up for writing
2. Get your AUGraph set up to render audio
3. Capture the AudioBufferLists that your AudioUnits are passing around
Step 3 requires a bit more knowledge of how your app is set up, so I can't really help you out too much there. If you want to have your audio going to the speakers as well as being written to a file, you'll probably want to make use of AUGraphAddRenderNotify, which will let you know whenever a render happens (and lets you hook in your own AURenderCallback to write to your ExtAudioFile).
If you're doing things this way (i.e. live in a render callback) make sure to use ExtAudioFileWriteAsync so you don't block the render thread.
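Putting that together, here's a rough sketch of a render-notify callback that writes post-render audio asynchronously; the way the ExtAudioFileRef is passed in via inRefCon is an assumption, and error handling is omitted:

static OSStatus renderNotifyWriteToFile(void                       *inRefCon,
                                        AudioUnitRenderActionFlags *ioActionFlags,
                                        const AudioTimeStamp       *inTimeStamp,
                                        UInt32                      inBusNumber,
                                        UInt32                      inNumberFrames,
                                        AudioBufferList            *ioData) {
    // Only write after the unit has actually rendered into ioData.
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileRef extFile = (ExtAudioFileRef)inRefCon;         // assumed: passed in below
        ExtAudioFileWriteAsync(extFile, inNumberFrames, ioData);     // async, so the render thread isn't blocked
    }
    return noErr;
}

// In your setup, after creating the file with ExtAudioFileCreateWithURL:
// AUGraphAddRenderNotify(graph, renderNotifyWriteToFile, extAudioFile);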
I'm using Matt Gallagher's AudioStreamer to play an MP3 audio stream. Now I want to do an FFT in real time and visualize the frequencies using OpenGL ES on the iPhone.
I'm wondering where to catch the audio data and pass it to my "Super-Fancy-FFT-Computing-3D-Visualization-Method". Matt is using the AudioQueue framework, and there is a callback function that is set with:
err = AudioQueueNewOutput(&asbd, ASAudioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue);
The Callback looks like this:
static void ASAudioQueueOutputCallback(void                *inClientData,
                                       AudioQueueRef        inAQ,
                                       AudioQueueBufferRef  inBuffer) { ... }
At the moment I'm passing the data from the AudioQueueBufferRef, and the result looks very weird. But with FFT and visualizations there are so many points where you can screw up that I wanted to be sure I'm at least passing the right data to the FFT. I'm reading the data from the buffer this way, ignoring every second value because I only want to analyze one channel:
SInt32 *buffPointer = (SInt32 *)inBuffer->mAudioData;
int count = 0;
for (int i = 0; i < inBuffer->mAudioDataByteSize / 2; i++) {
    myBuffer[i] = buffPointer[count];
    count += 2;
}
Then the FFT is computed, with myBuffer containing 512 values.
Instead of sending the data you receive from the audio file stream callback directly to the audio queue, you could convert it to PCM, run your analysis, and then feed it to the audio queue (as PCM) if you still need to play it. To do the conversion, you could use Audio Converter Services (which will be a screaming nightmare without end), or an offline audio queue.
Option 3: look into the new Audio Queue "tap" on iOS 6, which lets you look at data inside a queue. I still need to check this out… it looks cool (and I'm giving a talk on it in three weeks at CocoaConf, so, yeah…)
(repost from: http://lists.apple.com/archives/coreaudio-api/2012/Oct/msg00034.html )
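If you do try the iOS 6 tap, the entry point is AudioQueueProcessingTapNew. A bare-bones sketch; the callback body, the flag choice, and where you hook in your FFT are all assumptions:

// Install a processing tap on the existing output queue so the PCM it is about
// to play can be inspected (and handed to an FFT).
static void tapCallback(void                          *inClientData,
                        AudioQueueProcessingTapRef     inAQTap,
                        UInt32                         inNumberFrames,
                        AudioTimeStamp                *ioTimeStamp,
                        AudioQueueProcessingTapFlags  *ioFlags,
                        UInt32                        *outNumberFrames,
                        AudioBufferList               *ioData) {
    // Pull the queue's audio into ioData...
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                          ioFlags, outNumberFrames, ioData);
    // ...then pass it to your analysis, e.g. runFFT(ioData); (hypothetical hook)
}

// Installation, somewhere after the queue is created:
UInt32 maxFrames = 0;
AudioStreamBasicDescription tapFormat;
AudioQueueProcessingTapRef tap = NULL;
AudioQueueProcessingTapNew(audioQueue, tapCallback, NULL,
                           kAudioQueueProcessingTap_PreEffects,
                           &maxFrames, &tapFormat, &tap);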
In my application, I am receiving audio data in linear PCM format, which I need to play.
I am following the iOS SpeakHere example, but I cannot work out how and where I should provide a buffer to the AudioQueue.
Can anyone provide a working example of playing an audio buffer on iOS via AudioQueue?
In the SpeakHere example playback is achieved using AudioQueue.
In the setup of the AudioQueue, a function is specified that will be called when the queue wants more data.
You can see that in this method:
void AQPlayer::SetupNewQueue()
Here's the line that specifies the callback function:
XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");
If you take a look at AQPlayer::AQBufferCallback, you'll see where it gets the data from. In this example, the data has been written out to a file on disk. That's a good solution if you want to save memory, or if there's a possibility the audio file could be quite large.
Anyway, looking at AQPlayer::AQBufferCallback, you'll see a call to a function AudioFileReadPackets. That's what reads in the audio packets from the file on disk. It reads them straight into the buffer that AudioQueue will use:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes,
                                       inCompleteAQBuffer->mPacketDescriptions,
                                       THIS->GetCurrentPacket(), &nPackets,
                                       inCompleteAQBuffer->mAudioData);
That buffer is inCompleteAQBuffer->mAudioData.
Finally, the callback function must enqueue the buffer as follows:
if (nPackets > 0) {
    inCompleteAQBuffer->mAudioDataByteSize = numBytes;
    inCompleteAQBuffer->mPacketDescriptionCount = nPackets;
    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
}
Note first that it has to check that we have some packets to play. It also has to specify how many bytes are in the buffer.
Then, this line here:
THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
That keeps track of where we are overall in our audio data. In other words, as more data is copied in from the file, we need to move mCurrentPacket forward so that the next copy puts data in the correct place.
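Since you are receiving linear PCM rather than reading from a file, your callback would copy from your own in-memory buffer instead of calling AudioFileReadPackets. A rough sketch; the MyStream struct and its fields are assumptions standing in for however you store the received data:

// Hypothetical stream state that you fill elsewhere as network data arrives.
typedef struct {
    const uint8_t *pcmBytes;   // the linear PCM you received
    size_t         length;     // total bytes available so far
    size_t         readPos;    // how far playback has consumed
} MyStream;

static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    MyStream *stream = (MyStream *)inUserData;
    size_t bytesLeft = stream->length - stream->readPos;
    size_t toCopy = MIN(bytesLeft, inBuffer->mAudioDataBytesCapacity);
    if (toCopy == 0) {
        // Nothing buffered yet: enqueue silence so the queue keeps running.
        toCopy = inBuffer->mAudioDataBytesCapacity;
        memset(inBuffer->mAudioData, 0, toCopy);
    } else {
        memcpy(inBuffer->mAudioData, stream->pcmBytes + stream->readPos, toCopy);
        stream->readPos += toCopy;
    }
    inBuffer->mAudioDataByteSize = (UInt32)toCopy;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);   // no packet descriptions needed for PCM
}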
I need to play linear PCM data live on an iPhone.
I get a live data stream via RTSP, and I can currently read it out on the iPhone, save it into a file, and play it in a desktop audio player that supports PCM, so I think the transport is okay.
Now I'm stuck: I have absolutely no idea what to do with my NSData object containing the data.
I did a bit of research and ended up with Audio Units, but I just cannot assign my NSData to the audio buffer, or rather, I have no clue how.
For my instance, I assigned the callback:
AURenderCallbackStruct input;
input.inputProc = makeSound;
input.inputProcRefCon = self;
and I have the function makeSound:
OSStatus makeSound(void                       *inRefCon,
                   AudioUnitRenderActionFlags *ioActionFlags,
                   const AudioTimeStamp       *inTimeStamp,
                   UInt32                      inBusNumber,
                   UInt32                      inNumberFrames,
                   AudioBufferList            *ioData)
{
    // so what to do here?
    // ioData->mBuffers[0].mData = [mySound bytes]; does not work, nor does
    // ioData->mBuffers = [mySound bytes];
    return noErr;
}
Is my approach wrong in general?
What do I need to know/learn/implement? I am a complete audio newbie, so my assumption was that I don't need several buffers, since when I get a new sound package from RTSP the old one has ended, because it's a live stream (I base this on my recordings, which just appended the bytes without looking at presentation timestamps, since I don't receive any anyway).
Cheers
I don't know if this is exactly what you are looking for, but some of Matt Gallagher's AudioStreamer code might be helpful to you. In particular, check out how he handles the audio buffering.
http://cocoawithlove.com/2010/03/streaming-mp3aac-audio-again.html
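As a rough illustration of the general direction (this is not AudioStreamer's code, just a sketch; myStreamBuffer and popBytes() are hypothetical names), the render callback has to copy bytes into the buffers Core Audio hands you rather than re-pointing mData at your NSData:

OSStatus makeSound(void                       *inRefCon,
                   AudioUnitRenderActionFlags *ioActionFlags,
                   const AudioTimeStamp       *inTimeStamp,
                   UInt32                      inBusNumber,
                   UInt32                      inNumberFrames,
                   AudioBufferList            *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *buf = &ioData->mBuffers[i];
        // popBytes() and myStreamBuffer are hypothetical: popBytes() copies up to
        // buf->mDataByteSize bytes of your received PCM into buf->mData and
        // returns how many bytes it actually copied.
        UInt32 copied = popBytes(myStreamBuffer, buf->mData, buf->mDataByteSize);
        if (copied < buf->mDataByteSize) {
            // Not enough live data yet: pad the rest with silence.
            memset((char *)buf->mData + copied, 0, buf->mDataByteSize - copied);
        }
    }
    return noErr;
}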
I want to use OpenAL to play music in an iOS game. The music files are stored in MP3 format, and I want to stream them using a buffer queue. I load audio data into the buffers using AudioFileReadPacketData(). However, playing the buffers only gives me noise. It works perfectly for CAF files, but not for MP3s. Did I miss some vital step in decoding the file?
Code I use to open the sound file:
- (void)openFile:(NSString *)fileName {
    NSBundle *bundle = [NSBundle mainBundle];
    CFURLRef url = (CFURLRef)[[NSURL fileURLWithPath:[bundle pathForResource:fileName ofType:@"mp3"]] retain];
    AudioFileOpenURL(url, kAudioFileReadPermission, 0, &audioFile);

    AudioStreamBasicDescription theFormat;
    UInt32 formatSize = sizeof(theFormat);
    AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &formatSize, &theFormat);
    freq = (ALsizei)theFormat.mSampleRate;

    CFRelease(url);
}
Code I use to fill in buffers:
- (void)loadOneChunkIntoBuffer:(ALuint)buffer {
    char data[STREAM_BUFFER_SIZE];
    UInt32 loadSize = STREAM_BUFFER_SIZE;
    AudioStreamPacketDescription packetDesc[STREAM_PACKETS];
    UInt32 numPackets = STREAM_PACKETS;
    AudioFileReadPacketData(audioFile, NO, &loadSize, packetDesc, packetsLoaded, &numPackets, data);
    alBufferData(buffer, AL_FORMAT_STEREO16, data, loadSize, freq);
    packetsLoaded += numPackets;
}
Because you're reading bytes of MP3 data and treating them as PCM data.
You almost certainly want AudioFileReadPacketData(). EDIT: Except that still gives you MP3 data; it just gives it in packets and (possibly) parses packet headers.
If you don't require OpenAL, AVAudioPlayer is probably the better way to go (according to the Multimedia Programming Guide, there's also Audio Queue services if you want more control).
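For the simple case, a minimal AVAudioPlayer setup is just a few lines (sketch; the path lookup mirrors your openFile: code):

#import <AVFoundation/AVFoundation.h>

NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:fileName ofType:@"mp3"]];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
[player prepareToPlay];
[player play];   // AVAudioPlayer handles the MP3 decoding for you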
If you really need to use OpenAL, according to TN2199 you'll need to convert it to PCM in the native byte order. See oalTouch/Classes/MyOpenALSupport.c for an example of using Extended Audio File Services to do this. Note that TN2199 says the format "must ... not use hardware decompression" — according to the Multimedia Programming Guide, software decoding is supported for everything except HE-AAC since OS 3.0. Also note that software MP3 decoding can use a significant amount of CPU time.
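A sketch of the Extended Audio File Services approach, roughly what MyOpenALSupport.c does; the 16-bit 44.1 kHz stereo client format is an assumption, the url, buffer, and STREAM_BUFFER_SIZE names are reused from your code, and error checking is omitted:

ExtAudioFileRef extFile = NULL;
ExtAudioFileOpenURL(url, &extFile);                       // url: CFURLRef to your mp3 file

// Ask Extended Audio File Services to hand back decoded 16-bit native-endian PCM.
AudioStreamBasicDescription clientFormat = {0};
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mSampleRate       = 44100.0;
clientFormat.mChannelsPerFrame = 2;
clientFormat.mBitsPerChannel   = 16;
clientFormat.mBytesPerFrame    = 4;
clientFormat.mFramesPerPacket  = 1;
clientFormat.mBytesPerPacket   = 4;
ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat,
                        sizeof(clientFormat), &clientFormat);

// Read decoded PCM frames; the result can go straight into alBufferData().
char pcm[STREAM_BUFFER_SIZE];
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mDataByteSize   = sizeof(pcm);
bufferList.mBuffers[0].mData           = pcm;
UInt32 frames = sizeof(pcm) / clientFormat.mBytesPerFrame;
ExtAudioFileRead(extFile, &frames, &bufferList);
alBufferData(buffer, AL_FORMAT_STEREO16, pcm, frames * clientFormat.mBytesPerFrame, 44100);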
Alternatively, explicitly convert the audio using AudioConverter or (possibly) AudioUnit with kAudioUnitSubType_AUConverter. If you do this, it might be worthwhile decompressing everything once and keeping it in memory to minimize overhead.