In my application, I am receiving audio data in LinearPCM format, which I need to play.
I am following the iOS SpeakHere example. However, I cannot figure out how and where I should provide a buffer to the AudioQueue.
Can anyone provide me a working example of playing an audio buffer in iOS via AudioQueue?
In the SpeakHere example playback is achieved using AudioQueue.
In the set up of AudioQueue, a function is specified that will be called when the queue wants more data.
You can see that in this method:
void AQPlayer::SetupNewQueue()
Here's the line that specifies the callback function:
XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");
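As for where the buffers come from in the first place: SpeakHere allocates them right after creating the queue, and primes the queue by invoking the callback by hand once per buffer before starting it. Here is a condensed sketch of that step against the plain C API (the names queue, userData, and MyOutputCallback are placeholders for your own queue, state pointer, and callback, and the buffer size is illustrative):

#include <AudioToolbox/AudioToolbox.h>

enum { kNumberBuffers = 3 };   // SpeakHere also uses three buffers

// Your output callback, i.e. the role AQPlayer::AQBufferCallback plays above.
static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer);

static void primeAndStartQueue(AudioQueueRef queue, void *userData)
{
    static AudioQueueBufferRef buffers[kNumberBuffers];
    const UInt32 bufferByteSize = 0x10000;   // illustrative; a few hundred ms of PCM

    for (int i = 0; i < kNumberBuffers; ++i) {
        AudioQueueAllocateBuffer(queue, bufferByteSize, &buffers[i]);
        // Priming: invoke the callback by hand so every buffer is filled
        // and enqueued before the queue starts running.
        MyOutputCallback(userData, queue, buffers[i]);
    }
    AudioQueueStart(queue, NULL);
}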
If you take a look at AQPlayer::AQBufferCallback, you'll see where it gets the data from. In this example, the data has been written out to a file on disk. That's a good solution if you want to save memory, or if there's a possibility the audio file could be quite large.
Anyway, looking at AQPlayer::AQBufferCallback, you'll see a call to the function AudioFileReadPackets. That's what reads the audio packets in from the file on disk. It reads them straight into the buffer that the AudioQueue will use:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes, inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
inCompleteAQBuffer->mAudioData);
That buffer is inCompleteAQBuffer->mAudioData.
Finally, the callback function must enqueue the buffer as follows:
if (nPackets > 0) {
    inCompleteAQBuffer->mAudioDataByteSize = numBytes;
    inCompleteAQBuffer->mPacketDescriptionCount = nPackets;
    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
}
Note first that it has to check that we have some packets to play. It also has to specify how many bytes are in the buffer.
Then, this line here:
THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
That keeps track of where we are overall in the audio data. In other words, as more data is copied in from the file, mCurrentPacket has to move forward so that the next copy puts data in the correct place.
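Since you already have LinearPCM in memory, your own callback can skip AudioFileReadPackets entirely and just memcpy out of your buffer. A minimal sketch, assuming a hypothetical PlayerState struct holding your received bytes and a read offset (constant-bitrate PCM needs no packet descriptions):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical state: the LinearPCM you received, plus a read offset.
typedef struct {
    const UInt8 *pcmData;
    size_t       totalBytes;
    size_t       readOffset;
} PlayerState;

static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    PlayerState *state = (PlayerState *)inUserData;
    size_t bytesLeft = state->totalBytes - state->readOffset;
    size_t toCopy = bytesLeft < inBuffer->mAudioDataBytesCapacity
                        ? bytesLeft : inBuffer->mAudioDataBytesCapacity;

    if (toCopy == 0) {
        AudioQueueStop(inAQ, false);   // out of data; let queued buffers finish
        return;
    }

    memcpy(inBuffer->mAudioData, state->pcmData + state->readOffset, toCopy);
    inBuffer->mAudioDataByteSize = (UInt32)toCopy;
    state->readOffset += toCopy;

    // Constant-bitrate PCM needs no packet descriptions, hence 0 and NULL.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}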
Hi all.
I have a project where I need to interface with an A/V receiver via an X-Fi Sound Blaster card. The A/V receiver is connected to a 7.1 speaker system. I would like to know, start to finish, how to access each of the 7.1 channels individually so that I can direct aircraft cockpit information in a simulator. I am using OpenAL and am writing this code in C. I have developed some code that I thought should do the trick, but I am getting audio bleed-through on the other six speakers. Below is a sample of the code I have already written. I hope that someone can help me here.
Thanks, Vincent.
{
    ALuint NorthWestSource;
    ALint PlayStatus;

    switch (event)
    {
        case EVENT_COMMIT:
            // Load the user-selected .wav file into the buffer that is initialized here, "InitBuf".
            LoadDotWavFile();

            // Generate a source, attach the buffer to the source, set the source position, and play the sound.
            alGenSources(NumOfSources, &NorthWestSource);
            ErrorCheck();

            // Attach the buffer that contains the .wav file's data to the source.
            alSourcei(NorthWestSource, AL_BUFFER, WavFileDataBuffer);
            ErrorCheck();

            // Set the source's position, velocity, and orientation/direction.
            alSourcefv(NorthWestSource, AL_POSITION, SourcePosition);
            ErrorCheck();
            alSourcefv(NorthWestSource, AL_VELOCITY, SourceVelocity);
            ErrorCheck();
            alSourcefv(NorthWestSource, AL_DIRECTION, SourceDirectionNorthWest);
            ErrorCheck();
            alSourcei(NorthWestSource, AL_SOURCE_RELATIVE, AL_TRUE);
            ErrorCheck();
            alSourcei(NorthWestSource, AL_CONE_INNER_ANGLE, 180);
            ErrorCheck();
            alSourcei(NorthWestSource, AL_CONE_OUTER_ANGLE, 270);
            ErrorCheck();
            SetCtrlVal(panelHandle, PANEL_SOURCEISSET, 1);

            // Play the user-selected file by playing the source.
            alSourcePlay(NorthWestSource);
            ErrorCheck();

            // Check that the .wav file has finished playing, and if so clean things up.
            do
            {
                alGetSourcei(NorthWestSource, AL_SOURCE_STATE, &PlayStatus);
                if (PlayStatus != AL_PLAYING)
                {
                    printf("File done playing. \n");
                }
            }
            while (PlayStatus == AL_PLAYING);

            // Clean things up more before exiting out of this audio projection.
            alDeleteSources(NumOfSources, &NorthWestSource);
            ErrorCheck();
            alDeleteBuffers(NumOfBuffers, &WavFileDataBuffer);
            ErrorCheck();
            SetCtrlVal(panelHandle, PANEL_SOURCEISSET, 0);
            //alDeleteBuffers(NumOfBuffers,
            break;
    }
    return 0;
}
I am confronted with the same problem. I want to play a tone to either the left or the right ear. The only way I have found so far is to produce a stereo buffer (a 7.1 buffer, in your case) with the sound, overwrite the information on the other channel (the other seven channels, for you) with zeros, and then play it back from a source positioned in front of the listener.
This is my workaround. I know that it is clumsy, but I haven't found anything better if you want to stay in OpenAL and avoid programming against ALSA directly (on Linux) or Core Audio (on the Mac).
To answer your question more directly: No, there does not seem to be a direct way of saying (as I had wished for): "Speaker #3 say 'Hello World'! All other speakers remain silent."
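To make the workaround concrete, here is a minimal sketch that plays a 440 Hz tone on the left channel only, by zeroing the right channel of an interleaved AL_FORMAT_STEREO16 buffer. For 7.1 you would do the same with eight interleaved samples per frame, using the AL_FORMAT_71CHN16 format from the AL_EXT_MCFORMATS extension, assuming your implementation exposes it:

#include <AL/al.h>
#include <math.h>

void playLeftOnlyTone(void)   /* assumes an AL context is already current */
{
    enum { SAMPLE_RATE = 44100, NUM_FRAMES = 44100 };   /* one second */
    static ALshort pcm[NUM_FRAMES * 2];                 /* interleaved L,R frames */

    for (int i = 0; i < NUM_FRAMES; ++i) {
        /* Left channel: 440 Hz sine. Right channel: silence. */
        pcm[i * 2]     = (ALshort)(32760.0 * sin(2.0 * 3.14159265 * 440.0 * i / SAMPLE_RATE));
        pcm[i * 2 + 1] = 0;
    }

    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_STEREO16, pcm, sizeof(pcm), SAMPLE_RATE);
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer);
    /* Multichannel buffers bypass OpenAL's 3D panning, so the zeroed
       channel stays silent regardless of source position. */
    alSourcePlay(source);
}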
Cheers,
farid
I'm using Matt Gallagher's AudioStreamer to play an MP3 audio stream. Now I want to do an FFT in realtime and visualize the frequencies using OpenGL ES on the iPhone.
I'm wondering where to catch the audio data and pass it to my "Super-Fancy-FFT-Computing-3D-Visualization-Method". Matt is using the AudioQueue framework, and there is a callback function that is set with:
err = AudioQueueNewOutput(&asbd, ASAudioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue);
The callback looks like this:
static void ASAudioQueueOutputCallback(void *inClientData,
                                       AudioQueueRef inAQ,
                                       AudioQueueBufferRef inBuffer) {...}
At the moment I'm passing the data from the AudioQueueBufferRef, and the result looks very weird. But with FFTs and visualizations there are so many places where you can screw up that I wanted to be sure I'm at least passing the right data to the FFT. I'm reading the data from the buffer this way, ignoring every second value because I only want to analyze one channel:
SInt32 *buffPointer = (SInt32 *)inBuffer->mAudioData;
int count = 0;
for (int i = 0; i < inBuffer->mAudioDataByteSize/2; i++) {
    myBuffer[i] = buffPointer[count];
    count += 2;
}
Then follows the FFT computation, with myBuffer containing 512 values.
Instead of sending the data you receive from the audio file stream callback directly to the audio queue, you could convert it to PCM, run your analysis, and then feed it to the audio queue (as PCM) if you still need to play it. To do the conversion, you could use Audio Converter Services (which will be a screaming nightmare without end), or an offline audio queue.
Option 3: look into the new Audio Queue "tap" on iOS 6, which lets you look at data inside a queue. I still need to check this out… it looks cool (and I'm giving a talk on it in three weeks at CocoaConf, so, yeah…)
(repost from: http://lists.apple.com/archives/coreaudio-api/2012/Oct/msg00034.html )
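If you go the Audio Converter Services route, the setup looks roughly like this. This is a sketch only; the real work happens in the input proc you pass to AudioConverterFillComplexBuffer, which must hand the converter your enqueued compressed packets on demand:

#include <AudioToolbox/AudioToolbox.h>

// streamFormat: the compressed ASBD delivered by the audio file stream callback.
AudioConverterRef makePCMConverter(const AudioStreamBasicDescription *streamFormat)
{
    // Destination: 16-bit interleaved LPCM your FFT can digest.
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = streamFormat->mSampleRate;   // keep the stream's rate
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    pcm.mChannelsPerFrame = streamFormat->mChannelsPerFrame;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = 2 * pcm.mChannelsPerFrame;
    pcm.mFramesPerPacket  = 1;
    pcm.mBytesPerPacket   = pcm.mBytesPerFrame;

    AudioConverterRef converter = NULL;
    OSStatus err = AudioConverterNew(streamFormat, &pcm, &converter);
    // Decoding then happens through AudioConverterFillComplexBuffer(), whose
    // input proc feeds the converter your compressed packets.
    return err == noErr ? converter : NULL;
}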
Been banging my head against this problem all morning.
I have set up a connection to a data source which returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, as if you had opened a stream to a radio.)
I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.
Now, I've been searching all morning and finding different examples, but none were really laid out.
In the
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
method, the "data" parameter is the audio package. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at mattgallagher's AudioStreamer, but still was unable to achieve it.
Can anyone here help, preferably with some working examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course almost never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on format, this could be more or less tricky, but in general you should use Audio Converter Services for this step.
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
Now, to play the data you have 'stored' in memory using the buffer, you need to use RemoteIO and Audio Units. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial, as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write yourself that
            // returns the next chunk of bytes available in the stream buffer.
            frameBuffer[j] = getNextPacket();
        }
    }
    return noErr;
}
Basically what it does is fill up the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player is silenced if there is not enough data in the stream buffer).
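The zeroing itself is a one-liner; here bytesAvailable stands in for however your stream buffer reports pending data:

if (bytesAvailable == 0) {
    // Underrun: output silence rather than replaying stale bytes.
    memset(buffer.mData, 0, buffer.mDataByteSize);
}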
Also, you can achieve the same thing with OpenAL, using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other; a sketch follows.
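A sketch of that OpenAL approach, assuming a hypothetical fillFromStream() that copies the next pending bytes out of your network buffer and zero-pads on underrun:

#include <AL/al.h>

/* Hypothetical: copies up to maxBytes of pending stream data into dst,
   zero-padding if the stream has run dry; returns the byte count. */
extern ALsizei fillFromStream(ALshort *dst, ALsizei maxBytes);

/* Call this regularly, e.g. from a timer. */
void pumpSource(ALuint source)
{
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buf;
        ALshort pcm[4096];
        alSourceUnqueueBuffers(source, 1, &buf);        /* reclaim a played buffer */
        ALsizei got = fillFromStream(pcm, sizeof(pcm));
        alBufferData(buf, AL_FORMAT_STEREO16, pcm, got, 44100);
        alSourceQueueBuffers(source, 1, &buf);          /* hand it back to the source */
    }

    /* Restart playback if the source starved and stopped. */
    ALint state;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
        alSourcePlay(source);
}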
That's it. Happy coding!
I'm currently using OpenAL to play game music. It works fine, except that it doesn't handle anything other than raw WAV files. This means that I end up with a ~9 MB soundtrack.
I'm new to OpenAL, and I'm using code directly from Apple's example (https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9) to get the buffer data.
Question: Is there any way to modify this function so it reads compressed audio and decodes it on the fly?
I'm not so worried about the audio file format, just as long as it can be played and is compressed (like MP3, AAC, or CAF). The only reason I want to do this (obviously) is to reduce file size.
Edit: It seems that the problem is not so much in OpenAL as in the method I'm using to get the buffer. The function at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9 uses AudioFileOpenURL and AudioFileReadBytes. Is there any way to get the framework to decode the audio for me, using ExtAudioFileOpenURL and ExtAudioFileRead?
I have tried the code here: https://devforums.apple.com/message/10678#10678, but I don't know what to make of it. The function I use to get the buffer is at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9, and I haven't really modified it, so that's what I need to build on.
I've started a bounty because I really need this, hopefully someone can point me in the right direction.
You'll need to use audio services to load other formats. Bear in mind that OpenAL ONLY supports uncompressed PCM data, so any data you load needs to be decompressed during loading.
Here's some code that will load any format supported by iOS: https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/Support/OALAudioFile.m
If you want to stream compressed soundtrack-type audio, use AVAudioPlayer since it plays compressed audio straight from disk.
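For reference, here is a minimal sketch of the ExtAudioFile route the question asks about: set a 16-bit PCM client format so Core Audio decodes for you, then read the whole file through the decoder. A sketch under those assumptions, not the ObjectAL code linked above, and with error handling trimmed:

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

/* Decode any iOS-supported compressed file into 16-bit interleaved PCM
   suitable for alBufferData(). Returns a malloc'd buffer the caller frees. */
void *decodeToPCM(CFURLRef url, UInt32 *outByteCount, Float64 *outSampleRate, UInt32 *outChannels)
{
    ExtAudioFileRef f = NULL;
    if (ExtAudioFileOpenURL(url, &f) != noErr) return NULL;

    AudioStreamBasicDescription fileFmt;
    UInt32 sz = sizeof(fileFmt);
    ExtAudioFileGetProperty(f, kExtAudioFileProperty_FileDataFormat, &sz, &fileFmt);

    // Ask Core Audio to decode to PCM at the file's own rate (no resampling,
    // so the frame count below stays valid).
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = fileFmt.mSampleRate;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    pcm.mChannelsPerFrame = fileFmt.mChannelsPerFrame;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = 2 * pcm.mChannelsPerFrame;
    pcm.mFramesPerPacket  = 1;
    pcm.mBytesPerPacket   = pcm.mBytesPerFrame;
    ExtAudioFileSetProperty(f, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);

    SInt64 frames = 0;
    sz = sizeof(frames);
    ExtAudioFileGetProperty(f, kExtAudioFileProperty_FileLengthFrames, &sz, &frames);

    UInt32 byteCount = (UInt32)frames * pcm.mBytesPerFrame;
    void *data = malloc(byteCount);

    AudioBufferList abl;
    abl.mNumberBuffers = 1;
    abl.mBuffers[0].mNumberChannels = pcm.mChannelsPerFrame;
    abl.mBuffers[0].mDataByteSize   = byteCount;
    abl.mBuffers[0].mData           = data;

    UInt32 ioFrames = (UInt32)frames;
    ExtAudioFileRead(f, &ioFrames, &abl);   // a production loader would loop until 0 frames come back
    ExtAudioFileClose(f);

    *outByteCount  = ioFrames * pcm.mBytesPerFrame;
    *outSampleRate = pcm.mSampleRate;
    *outChannels   = pcm.mChannelsPerFrame;
    return data;
}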
You don't need any third-party library to open compressed files. With a little help from the AudioToolbox/AudioToolbox.h framework you can open and read the data of a .caf file, which by the way is a very good choice (better than mp3 or ogg) in terms of performance (minimal CPU impact during decompression). So, when the data gets to OpenAL it is already PCM, ready to fill the buffers. Here is some sample code showing how you can achieve this:
- (void)prepareFiles:(NSString *)filePath
{
    // Get the full path of the file.
    NSString *fileName = [[NSBundle mainBundle] pathForResource:filePath ofType:@"caf"];

    // Open the file using the custom methods created below.
    AudioFileID fileID = [self openAudioFile:fileName];
    preparedAudioFileSize = [self audioFileSize:fileID];
    if (preparedAudioFile) {
        free(preparedAudioFile);
        preparedAudioFile = nil;
    }
    preparedAudioFile = malloc(preparedAudioFileSize);

    // Read the data from the file into the preparedAudioFile buffer.
    AudioFileReadBytes(fileID, false, 0, &preparedAudioFileSize, preparedAudioFile);

    // Close the file.
    AudioFileClose(fileID);
}

- (AudioFileID)openAudioFile:(NSString *)filePath
{
    AudioFileID fileID;
    NSURL *url = [NSURL fileURLWithPath:filePath];
    OSStatus result = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, 0, &fileID);
    if (result != noErr) {
        NSLog(@"failed to open: %@", filePath);
    }
    return fileID;
}

- (UInt32)audioFileSize:(AudioFileID)fileDescriptor
{
    UInt64 outDataSize = 0;
    UInt32 thePropSize = sizeof(UInt64);
    OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount,
                                           &thePropSize, &outDataSize);
    if (result != noErr) NSLog(@"cannot find file size");
    return (UInt32)outDataSize;
}
Based on Karl's reply above, I made a minimal single C++ function which opens a file and gives you back a buffer of PCM audio (suitable for OpenAL) and all the info you need to create an OpenAL sound (format, sample rate, buffer size, etc.).
the two files you need are here:
https://gist.github.com/ofTheo/5171369
hope it helps!
theo
See if this works: http://kcat.strangesoft.net/openal-tutorial.html
You might try using a third-party library to load an mp3 or ogg file into a char* buffer, and then hand that buffer to OpenAL. That would solve the file-size problem.
For ogg, you should find the libraries on their website.
For mp3, I honestly don't know where to find a lightweight library that can do that, but one should exist.
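As one concrete option for the ogg route (my suggestion, not something the answer above names): the single-file stb_vorbis decoder can inflate an ogg straight to interleaved 16-bit PCM that you can hand to alBufferData. A sketch, assuming stb_vorbis.c is compiled into your project:

#include "stb_vorbis.c"   /* single-file ogg decoder */
#include <AL/al.h>
#include <stdlib.h>

ALuint loadOgg(const char *path)   /* returns 0 on failure */
{
    int channels = 0, rate = 0;
    short *pcm = NULL;
    int frames = stb_vorbis_decode_filename(path, &channels, &rate, &pcm);
    if (frames <= 0) return 0;

    ALuint buffer;
    alGenBuffers(1, &buffer);
    alBufferData(buffer,
                 channels == 2 ? AL_FORMAT_STEREO16 : AL_FORMAT_MONO16,
                 pcm, frames * channels * (int)sizeof(short), rate);
    free(pcm);   /* OpenAL copies the data, so this is safe */
    return buffer;
}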
Hey fellows,
I am trying to build an application for realtime voice changing.
As a first step, I managed to record audio data to a specified file and to play it back after recording.
Now I am trying to change the code so that it plays back the audio buffers right after recording them, in a loop.
My question is: how is it possible to read the audio data directly from the recording AudioQueue, and not (as shown in the documentation) from a file?
I am thankful for any ideas, and I can show code parts if needed.
Thanks in advance,
Lukas (from Germany)
Have a look at the SpeakHere example. This line sources the audio data:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes, inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
inCompleteAQBuffer->mAudioData);
So, rather than calling AudioFileReadPackets, you can just use a memcpy to copy over the recorded data buffer. Or, alternatively, supply the playback AudioQueue with a pointer to the audio data buffer. As playback continues, advance an mCurrentPacket pointer through the buffer.
To record, you'll do something very similar. Rather than writing out to a file, you'll write out to a buffer in memory, which you'll first need to allocate with malloc. Then, as your input AudioQueue captures recorded data, you copy that data into the buffer. As more data is copied, you advance the recording head, mCurrentPacket, to the new position. A sketch of that input side follows.
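A minimal sketch of that input callback, assuming a hypothetical RecordBuffer struct that owns the malloc'd memory and tracks the recording head (the playback side then mirrors the memcpy callback shown in the first answer above):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

typedef struct {
    UInt8  *data;          /* malloc'd with enough room for the take */
    size_t  capacity;
    size_t  writeOffset;   /* the "recording head" */
} RecordBuffer;

static void MyInputCallback(void *inUserData, AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer,
                            const AudioTimeStamp *inStartTime,
                            UInt32 inNumPackets,
                            const AudioStreamPacketDescription *inPacketDescs)
{
    RecordBuffer *rec = (RecordBuffer *)inUserData;
    size_t room = rec->capacity - rec->writeOffset;
    size_t n = inBuffer->mAudioDataByteSize < room ? inBuffer->mAudioDataByteSize : room;

    /* Instead of AudioFileWritePackets, copy into memory and advance the head. */
    memcpy(rec->data + rec->writeOffset, inBuffer->mAudioData, n);
    rec->writeOffset += n;

    /* Hand the buffer back so the queue can keep capturing. */
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}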