AudioQueue Output Callback only fires 3 times (nBuffers times) - iPhone

When I start an audio output process with AudioQueueStart(out.queue, nil), the output callback only fires 3 times (which is the number of allocated buffers).
Here is my Output callback code:
static void AQOutputCallback(void* aqr,
                             AudioQueueRef outQ,
                             AudioQueueBufferRef outQB)
{
    AQCallbackStruct *aqc = (AQCallbackStruct *) aqr;
    NSLog(@"Out");
    // Check if AudioQueue is stopped
    if (!aqc->run) {
        NSLog(@"Stopped");
        return;
    }
    // Processing data
    // Check enqueue error
    int err = AudioQueueEnqueueBuffer(outQ, outQB, 0, NULL);
    if (err != noErr) NSLog(@"OutputCallback AudioQueueEnqueueBuffer() %d", err);
    NSLog(@"Enqueued");
}
I think it's due to the lack of buffers, but my output is:
Out
Enqueued
Out
Enqueued
Out
Enqueued
So the first buffer is enqueued before the AudioQueue starts to fill the third one; it should not run out of buffers.
What is happening here?
Edit: Setup code
#define AUDIO_BUFFERS 3

typedef struct AQCallbackStruct {
    AudioStreamBasicDescription mDataFormat;
    AudioQueueRef queue;
    AudioQueueBufferRef mBuffers[AUDIO_BUFFERS];
    unsigned long frameSize;
    BOOL *run;
} AQCallbackStruct;
// In some method
AQCallbackStruct out;
out.mDataFormat.mFormatID = kAudioFormatLinearPCM;
out.mDataFormat.mSampleRate = 44100.0;
out.mDataFormat.mChannelsPerFrame = 2;
out.mDataFormat.mBitsPerChannel = 16;
out.mDataFormat.mBytesPerPacket =
    out.mDataFormat.mBytesPerFrame =
    out.mDataFormat.mChannelsPerFrame * sizeof(short int);
out.mDataFormat.mFramesPerPacket = 1;
out.mDataFormat.mFormatFlags =
    kLinearPCMFormatFlagIsBigEndian
    | kLinearPCMFormatFlagIsSignedInteger
    | kLinearPCMFormatFlagIsPacked;
out.frameSize = 735; // note: used below as a byte count for AudioQueueAllocateBuffer
int err;
err = AudioQueueNewOutput(&out.mDataFormat,
                          AQOutputCallback,
                          &out,
                          CFRunLoopGetCurrent(),
                          kCFRunLoopCommonModes,
                          0,
                          &out.queue);
if (err != noErr) NSLog(@"AudioQueueNewOutput() error: %d", err);
for (int i = 0; i < AUDIO_BUFFERS; i++) {
    err = AudioQueueAllocateBuffer(out.queue, out.frameSize, &out.mBuffers[i]);
    if (err != noErr) NSLog(@"Output AudioQueueAllocateBuffer() error: %d", err);
    out.mBuffers[i]->mAudioDataByteSize = out.frameSize;
    err = AudioQueueEnqueueBuffer(out.queue, out.mBuffers[i], 0, NULL);
    if (err != noErr) NSLog(@"Output AudioQueueEnqueueBuffer() error: %d", err);
}
AudioQueueStart(out.queue, nil);

Please have a look at this page: where to start with audio synthesis on iPhone.
The BleepMachine sample code that's in there is what got me started with this.

I finally found out where the problem was: AudioQueue callback in simulator but not on device
However, the output callback fired properly in the simulator but not on my device, and I still don't know exactly why the AudioSession settings differ between the two.
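For anyone hitting the same wall: the linked question centers on configuring the audio session before starting the queue. A minimal sketch of that kind of setup, using the old C AudioSession API (the PlayAndRecord category is my assumption about which setting mattered; adjust it for your own use case):
AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 category = kAudioSessionCategory_PlayAndRecord; // assumed category, not verified for every setup
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
AudioSessionSetActive(true);
// ...then create the queue and call AudioQueueStart() as above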


Dispatch source is only called when I do an NSLog() first

I am trying to use Grand Central Dispatch in conjunction with BSD sockets to send an ICMP ping. I add DISPATCH_SOURCE_TYPE_WRITE and DISPATCH_SOURCE_TYPE_READ as dispatch sources to read and write asynchronously.
So this is the method where I create the BSD socket and install the dispatch sources:
- (void)start
{
    int err;
    const struct sockaddr *addrPtr;
    assert(self.hostAddress != nil);
    // Open the socket.
    addrPtr = (const struct sockaddr *) [self.hostAddress bytes];
    fd = -1;
    err = 0;
    switch (addrPtr->sa_family) {
        case AF_INET: {
            fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
            if (fd < 0) {
                err = errno;
            }
        } break;
        case AF_INET6:
            assert(NO);
            // fall through
        default: {
            err = EPROTONOSUPPORT;
        } break;
    }
    if (err != 0) {
        [self didFailWithError:[NSError errorWithDomain:NSPOSIXErrorDomain code:err userInfo:nil]];
    } else {
        dispatch_source_t writeSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE, fd, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
        dispatch_source_set_event_handler(writeSource, ^{
            abort(); // testing
            // call method here to send a ping
        });
        dispatch_resume(writeSource);
        //NSLog(@"testout");
        dispatch_source_t readSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
        dispatch_source_set_event_handler(readSource, ^{
            unsigned long bytesAvail = dispatch_source_get_data(readSource);
            NSLog(@"bytes available: %lu", bytesAvail);
        });
        dispatch_resume(readSource);
    }
}
You see the //NSLog(@"testout");? The funny thing is that the write block is only called when the //NSLog(@"testout"); is NOT commented out. This is very odd. I didn't test the read callback; the sending needs to be working first.
So what is going on here?
There are several things missing here. I'm not sure exactly which one is causing the weird behavior, but when I do all of the missing things, it seems to work as expected and my write event handler is called reliably and repeatedly. In general, there are a few things you need to do when setting up a socket like this before passing it off to GCD. They are:
Create the socket
Bind it to a local address (missing in your code)
Set it to non-blocking (missing in your code)
Here is a little example I was able to put together in which the write handler gets called repeatedly, as expected:
int DoStuff()
{
    int fd = -1;
    // Create
    if ((fd = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("cannot create socket");
        return 0;
    }
    // Bind
    struct sockaddr_in *localAddressPtr = (struct sockaddr_in *)malloc(sizeof(struct sockaddr_in));
    memset((char *)localAddressPtr, 0, sizeof(*localAddressPtr));
    localAddressPtr->sin_family = AF_INET;
    localAddressPtr->sin_addr.s_addr = htonl(INADDR_ANY);
    localAddressPtr->sin_port = htons(0);
    if (bind(fd, (struct sockaddr *)localAddressPtr, sizeof(*localAddressPtr)) < 0) {
        perror("bind failed");
        return 0;
    }
    // Set non-blocking
    int flags;
    if (-1 == (flags = fcntl(fd, F_GETFL, 0)))
        flags = 0;
    if (-1 == fcntl(fd, F_SETFL, flags | O_NONBLOCK)) {
        perror("Couldn't set non-blocking");
        return 0;
    }
    // Do a DNS lookup...
    struct hostent *hp;
    struct sockaddr_in *remoteAddressPtr = malloc(sizeof(struct sockaddr_in));
    // Fill in the server's address and data
    memset((char *)remoteAddressPtr, 0, sizeof(*remoteAddressPtr));
    remoteAddressPtr->sin_family = AF_INET;
    remoteAddressPtr->sin_port = htons(12345);
    // Look up the address of the server by name
    const char *host = "www.google.com";
    hp = gethostbyname(host);
    if (!hp) {
        fprintf(stderr, "could not obtain address of %s\n", host);
        return 0;
    }
    // Copy the host's address into the remote address structure
    memcpy((void *)&remoteAddressPtr->sin_addr, hp->h_addr_list[0], hp->h_length);
    dispatch_source_t writeSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE, fd, 0, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
    dispatch_source_set_event_handler(writeSource, ^{
        // Send message
        const char *my_message = "the only thing we have to fear is fear itself.";
        unsigned long len = strlen(my_message);
        if (sendto(fd, my_message, len, 0, (struct sockaddr *)remoteAddressPtr, sizeof(*remoteAddressPtr)) != len) {
            perror("sendto failed");
            dispatch_source_cancel(writeSource);
        }
    });
    dispatch_source_set_cancel_handler(writeSource, ^{
        close(fd);
        free(localAddressPtr);
        free(remoteAddressPtr);
    });
    dispatch_resume(writeSource);
    return 1;
}
NB: There's no way to dispose of the writeSource in my example without there being an error in a send operation. It's a trivial example...
My general theory on why NSLog triggers the handler to fire in your case, is that it keeps execution at or below that stack frame long enough for the background thread to come around and call the handler, but without that NSLog, your function returns, and something has a chance to die before the handler can get called. In fact, if you're using ARC it's probably the writeSource itself that is getting deallocated, since I don't see you making a strong reference to it anywhere outside the scope of this function. (My example captures a strong reference to it in the block, thus keeping it alive.) You could test this in your code by stashing a strong reference to writeSource.
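If it helps, here is a sketch of what I mean by stashing a strong reference (the property name is hypothetical, and this assumes an SDK where dispatch objects are ARC-managed; on older SDKs you would use dispatch_retain/dispatch_release instead):
// In the class that owns -start:
@property (nonatomic, strong) dispatch_source_t writeSource;

// In -start, after creating the source:
self.writeSource = writeSource; // strong reference keeps the source alive after -start returns
dispatch_resume(self.writeSource);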
I found the error:
In newer SDKs, dispatch sources are subject to automatic reference counting even though they are not Objective-C objects.
So when the start method returns, ARC disposes of the dispatch sources and they never get called.
NSLog delays the end of the start method just long enough for the dispatch source to fire before it gets disposed.

General param error retrieving record format with AudioQueueGetProperty

I am getting a -50 (general param error) from a call to AudioQueueGetProperty. Please help me, as it has been several months since I've touched Xcode or any iPhone work. This is likely a simple goof on my part, but I cannot resolve it. My code leading to the -50:
// Setup format
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mChannelsPerFrame = 2;
CCGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
UInt32 propSize = sizeof(recordFormat);
AQ(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL,
                          &propSize, &recordFormat),
   "AudioFormatGetProperty throws unexpected errors.");

// Setup Queue
// listing 4.8-4.9
AudioQueueRef theQueue = {0};
self->queue = theQueue;
AQ(AudioQueueNewInput(&recordFormat, CCAudioRecordingCallback,
                      self, NULL, NULL, 0, &self->queue),
   "AudioQueueNewInput throws unexpected errors.");

UInt32 size = sizeof(recordFormat);
AQ(AudioQueueGetProperty(self->queue,
                         kAudioConverterCurrentOutputStreamDescription,
                         &recordFormat,
                         &size),
   "Getting audio property kAudioConverterCurrentOutputStreamDescription throws unexpected errors.");
I have verified that I have a valid queue just as I make the call to AudioQueueGetProperty. I've tried both ways of passing the queue "self->queue", and "self.queue" and they both result in the same error. The queue is defined as follows:
@interface CCAudioRecorder()
//...
@property (nonatomic, assign) AudioQueueRef queue;
//...
@end

@implementation CCAudioRecorder
@synthesize queue;
AQ is a #define:
#define AQ(expr, msg) if (nil != CheckError((expr), msg)) [NSException raise:@"AudioException" format:@"Unexpected exception occurred."];
Which resolves to the following error checking function:
static NSString* CheckError(OSStatus error, const char* operation)
{
    if (noErr == error) return nil;
    NSString *errorMessage = nil;
    char errorString[20];
    // See if it appears to be a 4-char code
    *(UInt32 *)(errorString+1) = CFSwapInt32HostToBig(error);
    if (isprint(errorString[1]) && isprint(errorString[2])
        && isprint(errorString[3]) && isprint(errorString[4]))
    {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        sprintf(errorString, "%d", (int)error);
    }
    errorMessage = [NSString stringWithFormat:
                    @"Audio Error: %@ (%@)\n",
                    [NSString stringWithUTF8String:operation],
                    [NSString stringWithUTF8String:errorString]];
    NSLog(@"%@", errorMessage);
    return errorMessage;
}
I've also tried calling AudioQueueGetProperty on a queue created with a local variable, avoiding the instance-level ivar, and I still get the same error:
AudioQueueRef theQueue = {0};
AQ(AudioQueueNewInput(&recordFormat, CCAudioRecordingCallback,
                      self, NULL, NULL, 0, &theQueue),
   "AudioQueueNewInput throws unexpected errors.");

UInt32 size = sizeof(recordFormat);
AQ(AudioQueueGetProperty(theQueue,
                         kAudioConverterCurrentOutputStreamDescription,
                         &recordFormat,
                         &size),
   "Getting audio property kAudioConverterCurrentOutputStreamDescription throws unexpected errors.");
** Update **
I have the following code, which works on the simulator but not on the device. (I have not cross-referenced it with what I posted earlier, but I believe it's either similar or exactly the same.)
AudioQueueRef theQueue = {0};
self->queue = theQueue;
AQ(AudioQueueNewInput(&recordFormat, CCAudioRecordingCallback,
                      self, NULL, NULL, 0, &self->queue),
   "AudioQueueNewInput throws unexpected errors.");

UInt32 size = sizeof(recordFormat);
AQ(AudioQueueGetProperty(self->queue,
                         kAudioConverterCurrentOutputStreamDescription,
                         &recordFormat,
                         &size),
   "Getting audio property kAudioConverterCurrentOutputStreamDescription throws unexpected errors.");
Running it on the device, I get a crash in the same spot with error -50 (general param error). My device is an iPhone 4S running iOS 6, and I'm working with Xcode 4.5.
You are following the code from the Learning Core Audio book, right? Chapter 4, about implementing a recorder? I noticed a difference between your code and the book's code: in the book they simply initialize the queue and use it as is:
AudioQueueRef queue = {0};
UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue, kAudioConverterCurrentOutputStreamDescription,
                                 &recordFormat, &size), "couldn't get queue's format");
I'm not sure why you're throwing self into the mix, but that's definitely what's causing your bug. If all else fails, simply download the complete code for that chapter here and see where you can identify your mistake.
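For comparison, here is a minimal sketch of the whole sequence with no self in the picture (it reuses your CheckError and CCAudioRecordingCallback; passing NULL as the user-data pointer is just for illustration):
AudioStreamBasicDescription recordFormat = {0};
recordFormat.mFormatID = kAudioFormatMPEG4AAC;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mSampleRate = 44100.0; // or query the hardware sample rate as in the book
UInt32 propSize = sizeof(recordFormat);
CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL,
                                  &propSize, &recordFormat),
           "AudioFormatGetProperty failed");

AudioQueueRef queue = NULL;
CheckError(AudioQueueNewInput(&recordFormat, CCAudioRecordingCallback,
                              NULL, NULL, NULL, 0, &queue),
           "AudioQueueNewInput failed");

UInt32 size = sizeof(recordFormat);
CheckError(AudioQueueGetProperty(queue, kAudioConverterCurrentOutputStreamDescription,
                                 &recordFormat, &size),
           "couldn't get queue's format");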

Writing buffer of audio samples to aac file using ExtAudioFileWrite for iOS

UPDATE: I have figured this out and posted my solution as an answer to my own question (below)
I am trying to write a simple buffer of audio samples to a file using ExtAudioFileWrite in AAC format.
I have achieved this with the code below, which writes a mono buffer to a .wav file - however, I cannot do this for stereo or for AAC files, which is what I want to do.
Here is what I have so far...
CFStringRef fPath;
fPath = CFStringCreateWithCString(kCFAllocatorDefault,
                                  "/path/to/my/audiofile/audiofile.wav",
                                  kCFStringEncodingMacRoman);
OSStatus err;
int mChannels = 1;
UInt32 totalFramesInFile = 100000;
Float32 *outputBuffer = (Float32 *)malloc(sizeof(Float32) * (totalFramesInFile*mChannels));

////////////// Set up Audio Buffer List ////////////
AudioBufferList outputData;
outputData.mNumberBuffers = 1;
outputData.mBuffers[0].mNumberChannels = mChannels;
outputData.mBuffers[0].mDataByteSize = 4 * totalFramesInFile * mChannels;
outputData.mBuffers[0].mData = outputBuffer;

Float32 audioFile[totalFramesInFile*mChannels];
for (int i = 0; i < totalFramesInFile*mChannels; i++)
{
    audioFile[i] = ((Float32)(rand() % 100))/100.0;
    audioFile[i] = audioFile[i]*0.2;
}
outputData.mBuffers[0].mData = &audioFile;

CFURLRef fileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, fPath, kCFURLPOSIXPathStyle, false);
ExtAudioFileRef audiofileRef;

// WAVE FILES
AudioFileTypeID fileType = kAudioFileWAVEType;
AudioStreamBasicDescription clientFormat;
clientFormat.mSampleRate = 44100.0;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = 12; // kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
clientFormat.mBitsPerChannel = 16;
clientFormat.mChannelsPerFrame = mChannels;
clientFormat.mBytesPerFrame = 2*clientFormat.mChannelsPerFrame;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerPacket = 2*clientFormat.mChannelsPerFrame;

// open the file for writing
err = ExtAudioFileCreateWithURL((CFURLRef)fileURL, fileType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &audiofileRef);
if (err != noErr)
{
    cout << "Problem when creating audio file: " << err << "\n";
}
// tell the ExtAudioFile API what format we'll be sending samples in
err = ExtAudioFileSetProperty(audiofileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);
if (err != noErr)
{
    cout << "Problem setting audio format: " << err << "\n";
}
UInt32 rFrames = (UInt32)totalFramesInFile;
// write the data
err = ExtAudioFileWrite(audiofileRef, rFrames, &outputData);
if (err != noErr)
{
    cout << "Problem writing audio file: " << err << "\n";
}
// close the file
ExtAudioFileDispose(audiofileRef);
NSLog(@"Done!");
My specific questions are:
How do I set up the AudioStreamBasicDescription for AAC?
Why can't I get stereo to work properly here? If I set the number of channels ('mChannels') to 2 then I get the left channel correctly and distortion in the right channel.
I'd very much appreciate any help - I think I've read almost every page I can find on this and am none the wiser. While there are similar questions, they usually derive the AudioStreamBasicDescription parameters from some input audio file, which I cannot see the result of. The Apple documentation is no help either.
Many thanks in advance,
Adam
Ok, after some exploration I have figured it out. I have wrapped it as a function that writes random noise to a file. Specifically, it can:
write .wav or .m4a files
write mono or stereo in either format
write the file to a specified path
The function arguments are:
path to audio file to be created
number of channels (max 2)
boolean: compress with m4a (if false, use pcm)
For a stereo M4A file, the function should be called as:
writeNoiseToAudioFile("/path/to/my/audiofile.m4a",2,true);
The source of the function follows. I have tried to comment it as much as possible - I hope it is correct, it certainly works for me, but please say "Adam, you've done this a bit wrong" if there is something I've missed. Good luck! Here is the code:
void writeNoiseToAudioFile(char *fName, int mChannels, bool compress_with_m4a)
{
    OSStatus err; // to record errors from ExtAudioFile API functions

    // create file path as CFStringRef
    CFStringRef fPath;
    fPath = CFStringCreateWithCString(kCFAllocatorDefault,
                                      fName,
                                      kCFStringEncodingMacRoman);

    // specify total number of samples per channel
    UInt32 totalFramesInFile = 100000;

    /////////////////////////////////////////////////////////////////////////////
    ////////////// Set up Audio Buffer List For Interleaved Audio ///////////////
    /////////////////////////////////////////////////////////////////////////////

    AudioBufferList outputData;
    outputData.mNumberBuffers = 1;
    outputData.mBuffers[0].mNumberChannels = mChannels;
    outputData.mBuffers[0].mDataByteSize = sizeof(AudioUnitSampleType)*totalFramesInFile*mChannels;

    /////////////////////////////////////////////////////////////////////////////
    //////// Synthesise Noise and Put It In The AudioBufferList /////////////////
    /////////////////////////////////////////////////////////////////////////////

    // create an array to hold our audio
    AudioUnitSampleType audioFile[totalFramesInFile*mChannels];

    // fill the array with random numbers (white noise)
    for (int i = 0; i < totalFramesInFile*mChannels; i++)
    {
        audioFile[i] = ((AudioUnitSampleType)(rand() % 100))/100.0;
        audioFile[i] = audioFile[i]*0.2;
        // (yes, I know this noise has a DC offset, bad)
    }

    // set the AudioBuffer to point to the array containing the noise
    outputData.mBuffers[0].mData = &audioFile;

    /////////////////////////////////////////////////////////////////////////////
    ////////////////// Specify The Output Audio File Format /////////////////////
    /////////////////////////////////////////////////////////////////////////////

    // the client format will describe the output audio file
    AudioStreamBasicDescription clientFormat;

    // the file type identifier tells the ExtAudioFile API what kind of file we want created
    AudioFileTypeID fileType;

    // if compress_with_m4a is true then set up for m4a file format
    if (compress_with_m4a)
    {
        // the file type identifier tells the ExtAudioFile API what kind of file we want created
        // this creates a m4a file type
        fileType = kAudioFileM4AType;

        // Here we specify the M4A format
        clientFormat.mSampleRate = 44100.0;
        clientFormat.mFormatID = kAudioFormatMPEG4AAC;
        clientFormat.mFormatFlags = kMPEG4Object_AAC_Main;
        clientFormat.mChannelsPerFrame = mChannels;
        clientFormat.mBytesPerPacket = 0;
        clientFormat.mBytesPerFrame = 0;
        clientFormat.mFramesPerPacket = 1024;
        clientFormat.mBitsPerChannel = 0;
        clientFormat.mReserved = 0;
    }
    else // else encode as PCM
    {
        // this creates a wav file type
        fileType = kAudioFileWAVEType;

        // This function automatically generates the audio format according to certain arguments
        FillOutASBDForLPCM(clientFormat, 44100.0, mChannels, 32, 32, true, false, false);
    }

    /////////////////////////////////////////////////////////////////////////////
    ///////////////// Specify The Format of Our Audio Samples ///////////////////
    /////////////////////////////////////////////////////////////////////////////

    // the local format describes the format of the samples we will give to the ExtAudioFile API
    AudioStreamBasicDescription localFormat;
    FillOutASBDForLPCM(localFormat, 44100.0, mChannels, 32, 32, true, false, false);

    /////////////////////////////////////////////////////////////////////////////
    ///////////////// Create the Audio File and Open It /////////////////////////
    /////////////////////////////////////////////////////////////////////////////

    // create the audio file reference
    ExtAudioFileRef audiofileRef;

    // create a fileURL from our path
    CFURLRef fileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, fPath, kCFURLPOSIXPathStyle, false);

    // open the file for writing
    err = ExtAudioFileCreateWithURL((CFURLRef)fileURL, fileType, &clientFormat, NULL, kAudioFileFlags_EraseFile, &audiofileRef);
    if (err != noErr)
    {
        cout << "Problem when creating audio file: " << err << "\n";
    }

    /////////////////////////////////////////////////////////////////////////////
    ///// Tell the ExtAudioFile API what format we'll be sending samples in /////
    /////////////////////////////////////////////////////////////////////////////

    err = ExtAudioFileSetProperty(audiofileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(localFormat), &localFormat);
    if (err != noErr)
    {
        cout << "Problem setting audio format: " << err << "\n";
    }

    /////////////////////////////////////////////////////////////////////////////
    ///////// Write the Contents of the AudioBufferList to the AudioFile ////////
    /////////////////////////////////////////////////////////////////////////////

    UInt32 rFrames = (UInt32)totalFramesInFile;

    // write the data
    err = ExtAudioFileWrite(audiofileRef, rFrames, &outputData);
    if (err != noErr)
    {
        cout << "Problem writing audio file: " << err << "\n";
    }

    /////////////////////////////////////////////////////////////////////////////
    ////////////// Close the Audio File and Get Rid Of The Reference ////////////
    /////////////////////////////////////////////////////////////////////////////

    // close the file
    ExtAudioFileDispose(audiofileRef);

    NSLog(@"Done!");
}
Don't forget to import the AudioToolbox Framework and to include the header file:
#import <AudioToolbox/AudioToolbox.h>

OpenAL making glitch when looping sound

I'm playing sounds for my game with OpenAL, and I have a problem: sometimes a small glitch is played while looping. Also, without looping, I sometimes (but not always) get a small pop.
I think it has something to do with the buffer being a little too long, so there is some undefined data at the end. I just can't figure out how to change this. I'm loading a caf file with this function:
void* MyGetOpenALAudioData(CFURLRef inFileURL, ALsizei *outDataSize, ALenum *outDataFormat, ALsizei *outSampleRate, ALdouble *duration) {
    OSStatus err = noErr;
    SInt64 theFileLengthInFrames = 0;
    AudioStreamBasicDescription theFileFormat;
    UInt32 thePropertySize = sizeof(theFileFormat);
    ExtAudioFileRef extRef = NULL;
    void* theData = NULL;
    AudioStreamBasicDescription theOutputFormat;

    // Open a file with ExtAudioFileOpen()
    err = ExtAudioFileOpenURL(inFileURL, &extRef);
    if(err) { printf("MyGetOpenALAudioData: ExtAudioFileOpenURL FAILED, Error = %ld\n", err); goto Exit; }

    // Get the audio data format
    err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_FileDataFormat, &thePropertySize, &theFileFormat);
    if(err) { printf("MyGetOpenALAudioData: ExtAudioFileGetProperty(kExtAudioFileProperty_FileDataFormat) FAILED, Error = %ld\n", err); goto Exit; }
    if (theFileFormat.mChannelsPerFrame > 2) { printf("MyGetOpenALAudioData - Unsupported Format, channel count is greater than stereo\n"); goto Exit; }

    // Set the client format to 16 bit signed integer (native-endian) data
    // Maintain the channel count and sample rate of the original source format
    theOutputFormat.mSampleRate = theFileFormat.mSampleRate;
    theOutputFormat.mChannelsPerFrame = theFileFormat.mChannelsPerFrame;
    theOutputFormat.mFormatID = kAudioFormatLinearPCM;
    theOutputFormat.mBytesPerPacket = 2 * theOutputFormat.mChannelsPerFrame;
    theOutputFormat.mFramesPerPacket = 1;
    theOutputFormat.mBytesPerFrame = 2 * theOutputFormat.mChannelsPerFrame;
    theOutputFormat.mBitsPerChannel = 16;
    theOutputFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;

    // Set the desired client (output) data format
    err = ExtAudioFileSetProperty(extRef, kExtAudioFileProperty_ClientDataFormat, sizeof(theOutputFormat), &theOutputFormat);
    if(err) { printf("MyGetOpenALAudioData: ExtAudioFileSetProperty(kExtAudioFileProperty_ClientDataFormat) FAILED, Error = %ld\n", err); goto Exit; }

    // Get the total frame count
    thePropertySize = sizeof(theFileLengthInFrames);
    err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_FileLengthFrames, &thePropertySize, &theFileLengthInFrames);
    if(err) { printf("MyGetOpenALAudioData: ExtAudioFileGetProperty(kExtAudioFileProperty_FileLengthFrames) FAILED, Error = %ld\n", err); goto Exit; }

    // Read all the data into memory
    UInt32 dataSize = theFileLengthInFrames * theOutputFormat.mBytesPerFrame;
    theData = malloc(dataSize);
    if (theData)
    {
        AudioBufferList theDataBuffer;
        theDataBuffer.mNumberBuffers = 1;
        theDataBuffer.mBuffers[0].mDataByteSize = dataSize;
        theDataBuffer.mBuffers[0].mNumberChannels = theOutputFormat.mChannelsPerFrame;
        theDataBuffer.mBuffers[0].mData = theData;

        // Read the data into an AudioBufferList
        err = ExtAudioFileRead(extRef, (UInt32*)&theFileLengthInFrames, &theDataBuffer);
        if(err == noErr)
        {
            // success
            *outDataSize = (ALsizei)dataSize;
            *outDataFormat = (theOutputFormat.mChannelsPerFrame > 1) ? AL_FORMAT_STEREO16 : AL_FORMAT_MONO16;
            *outSampleRate = (ALsizei)theOutputFormat.mSampleRate;
        }
        else
        {
            // failure
            free(theData);
            theData = NULL; // make sure to return NULL
            printf("MyGetOpenALAudioData: ExtAudioFileRead FAILED, Error = %ld\n", err); goto Exit;
        }
    }

    // Alex(Colombiamug): get the file duration...
    // first, get the audioID for the file...
    AudioFileID audioID;
    UInt32 audioIDSize = sizeof(audioID);
    err = ExtAudioFileGetProperty(extRef, kExtAudioFileProperty_AudioFile, &audioIDSize, &audioID);
    if(err) { printf("MyGetOpenALAudioData: ExtAudioFileGetProperty(kExtAudioFileProperty_AudioFile) FAILED, Error = %ld\n", err); goto Exit; }

    // now the duration...
    double soundDuration;
    UInt32 durationSize = sizeof(soundDuration);
    err = AudioFileGetProperty(audioID, kAudioFilePropertyEstimatedDuration, &durationSize, &soundDuration);
    if(err) { printf("MyGetOpenALAudioData: AudioFileGetProperty(kAudioFilePropertyEstimatedDuration) FAILED, Error = %ld\n", err); goto Exit; }

    *duration = soundDuration;
    //printf("Audio duration:%f secs.\n", soundDuration);

Exit:
    // Dispose the ExtAudioFileRef, it is no longer needed
    if (extRef) ExtAudioFileDispose(extRef);
    return theData;
}
It is part of this soundengine: SoundEngine
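For reference, this is roughly how the returned buffer ends up in OpenAL for looping (a sketch; the al* calls are standard OpenAL, but the variable names are mine):
ALsizei size, freq;
ALenum format;
ALdouble duration;
void *data = MyGetOpenALAudioData(fileURL, &size, &format, &freq, &duration);

ALuint buffer, source;
alGenBuffers(1, &buffer);
alBufferData(buffer, format, data, size, freq); // OpenAL copies the data...
free(data);                                     // ...so the malloc'd block can be freed here

alGenSources(1, &source);
alSourcei(source, AL_BUFFER, buffer);
alSourcei(source, AL_LOOPING, AL_TRUE);         // looping is where the glitch shows up
alSourcePlay(source);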
I have tried to put my caf file directly into the sample code, and it produces the same small glitch. (This caf file was doing fine with the old Apple SoundEngine.cpp, but I had other issues with that, so I decided to change.)
Answering my own question ;)
By pure luck, I must admit, I tried to remove the kAudioFormatFlagIsPacked flag from this line:
theOutputFormat.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
and that fixed it.
If anybody can tell me why, it would be nice to know; or if there are problems with removing that flag, I would also like to hear about it.

Unable to Write On CFWriteStreamWrite

I am having trouble writing data to a CFStream.
// I am getting the CFSocketRef and then from it getting the native handle.
CFSocketNativeHandle sock = CFSocketGetNative([appDelegate getSocketRef]);
Does the above code return the same handle as the created socket? Will whatever I write onto the stream be written to the created socket?
// and then wrote
CFStreamCreatePairWithSocket(kCFAllocatorDefault, sock,
                             &readStream, &writeStream);
if (!readStream || !writeStream) {
    // close([appDelegate TCPClient]);
    // close(sock);
    fprintf(stderr, "CFStreamCreatePairWithSocket() failed\n");
    return;
}
The above works fine; it does not give me the failed message.
// does not give an error; the else portion is executed
if (!CFWriteStreamOpen(writeStream)) {
    CFStreamError myErr = CFWriteStreamGetError(writeStream);
    // An error has occurred.
    if (myErr.domain == kCFStreamErrorDomainPOSIX) {
        // Interpret myErr.error as a UNIX errno.
        NSLog(@"kCFStreamErrorDomainPOSIX");
    } else if (myErr.domain == kCFStreamErrorDomainMacOSStatus) {
        // Interpret myErr.error as a MacOS error code.
        OSStatus macError = (OSStatus)myErr.error;
        // Check other error domains.
        NSLog(@"kCFStreamErrorDomainMacOSStatus");
    }
} else
/* Send the connect call to stream */
// while (send_len < (originalLength + 1))
{
    // if (CFWriteStreamCanAcceptBytes(writeStream))
    {
        //UInt8 buf[] = "Hello, world"; //(unsigned char *) "connectStream"
        //CFIndex bufLen = (CFIndex)strlen(buf);
        bytes = CFWriteStreamWrite(writeStream,
                                   (unsigned char *) connectStream,
                                   originalLength);
        NSLog(@"%@", [[NSString alloc] initWithData:connectStream encoding:NSASCIIStringEncoding]);
        if (bytes < 0) {
            fprintf(stderr, "CFWriteStreamWrite() failed\n");
            // close(sock);
            return;
        }
        send_len += bytes;
    }
    // close(sock);
    CFReadStreamClose(readStream);
    CFWriteStreamClose(writeStream);
    return;
}
CFWriteStreamCanAcceptBytes always returns false, so I commented it out and wrote the bytes directly; the call then blocks and never returns, and no bytes are written to the stream.
Can anyone please guide me in this regard?
Is there any other way of doing this?
Regards,
Aamir