OS X / iOS - Sample rate conversion for a buffer using AudioConverterFillComplexBuffer - iphone

I'm writing a CoreAudio backend for an audio library called XAL. Input buffers can have various sample rates. I'm using a single audio unit for output. The idea is to convert the buffers and mix them before sending them to the audio unit.
Everything works as long as the input buffer has the same properties (sample rate, channel count, etc) as the output audio unit. Hence, the mixing part works.
However, I'm stuck on sample rate and channel count conversion. From what I've figured out, this is easiest to do with the Audio Converter Services API. I've managed to construct a converter; the idea is that the output format is the same as the output unit's format, but possibly adjusted for the purposes of the converter.
The audio converter is constructed successfully, but calling AudioConverterFillComplexBuffer() returns status error -50.
I'd love another set of eyeballs on this code. The problem is probably somewhere below AudioConverterNew(). The variable stream contains the incoming (and outgoing) buffer data, and streamSize contains the byte size of the incoming (and outgoing) buffer data.
What did I do wrong?
void CoreAudio_AudioManager::_convertStream(Buffer* buffer, unsigned char** stream, int *streamSize)
{
if (buffer->getBitsPerSample() != unitDescription.mBitsPerChannel ||
buffer->getChannels() != unitDescription.mChannelsPerFrame ||
buffer->getSamplingRate() != unitDescription.mSampleRate)
{
printf("INPUT STREAM SIZE: %d\n", *streamSize);
// describe the input format's description
AudioStreamBasicDescription inputDescription;
memset(&inputDescription, 0, sizeof(inputDescription));
inputDescription.mFormatID = kAudioFormatLinearPCM;
inputDescription.mFormatFlags = kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
inputDescription.mChannelsPerFrame = buffer->getChannels();
inputDescription.mSampleRate = buffer->getSamplingRate();
inputDescription.mBitsPerChannel = buffer->getBitsPerSample();
inputDescription.mBytesPerFrame = (inputDescription.mBitsPerChannel * inputDescription.mChannelsPerFrame) / 8;
inputDescription.mFramesPerPacket = 1; //*streamSize / inputDescription.mBytesPerFrame;
inputDescription.mBytesPerPacket = inputDescription.mBytesPerFrame * inputDescription.mFramesPerPacket;
printf("INPUT : %lu bytes per packet for sample rate %g, channels %d\n", inputDescription.mBytesPerPacket, inputDescription.mSampleRate, inputDescription.mChannelsPerFrame);
// copy conversion output format's description from the
// output audio unit's description.
// then adjust framesPerPacket to match the input we'll be passing.
// framecount of our input stream is based on the input bytecount.
// output stream will have same number of frames, but different
// number of bytes.
AudioStreamBasicDescription outputDescription = unitDescription;
outputDescription.mFramesPerPacket = 1; //inputDescription.mFramesPerPacket;
outputDescription.mBytesPerPacket = outputDescription.mBytesPerFrame * outputDescription.mFramesPerPacket;
printf("OUTPUT : %lu bytes per packet for sample rate %g, channels %d\n", outputDescription.mBytesPerPacket, outputDescription.mSampleRate, outputDescription.mChannelsPerFrame);
// create an audio converter
AudioConverterRef audioConverter;
OSStatus acCreationResult = AudioConverterNew(&inputDescription, &outputDescription, &audioConverter);
printf("Created audio converter %p (status: %d)\n", audioConverter, acCreationResult);
if(!audioConverter)
{
// bail out
free(*stream);
*streamSize = 0;
*stream = (unsigned char*)malloc(0);
return;
}
// calculate number of bytes required for output of input stream.
// allocate buffer of adequate size.
UInt32 outputBytes = outputDescription.mBytesPerPacket * (*streamSize / inputDescription.mBytesPerFrame); // outputDescription.mFramesPerPacket * outputDescription.mBytesPerFrame;
unsigned char *outputBuffer = (unsigned char*)malloc(outputBytes);
memset(outputBuffer, 0, outputBytes);
printf("OUTPUT BYTES : %d\n", outputBytes);
// describe input data we'll pass into converter
AudioBuffer inputBuffer;
inputBuffer.mNumberChannels = inputDescription.mChannelsPerFrame;
inputBuffer.mDataByteSize = *streamSize;
inputBuffer.mData = *stream;
// describe output data buffers into which we can receive data.
AudioBufferList outputBufferList;
outputBufferList.mNumberBuffers = 1;
outputBufferList.mBuffers[0].mNumberChannels = outputDescription.mChannelsPerFrame;
outputBufferList.mBuffers[0].mDataByteSize = outputBytes;
outputBufferList.mBuffers[0].mData = outputBuffer;
// set output data packet size
UInt32 outputDataPacketSize = outputDescription.mBytesPerPacket;
// convert
OSStatus result = AudioConverterFillComplexBuffer(audioConverter, /* AudioConverterRef inAudioConverter */
CoreAudio_AudioManager::_converterComplexInputDataProc, /* AudioConverterComplexInputDataProc inInputDataProc */
&inputBuffer, /* void *inInputDataProcUserData */
&outputDataPacketSize, /* UInt32 *ioOutputDataPacketSize */
&outputBufferList, /* AudioBufferList *outOutputData */
NULL /* AudioStreamPacketDescription *outPacketDescription */
);
printf("Result: %d wheee\n", result);
// change "stream" to describe our output buffer.
// even if error occured, we'd rather have silence than unconverted audio.
free(*stream);
*stream = outputBuffer;
*streamSize = outputBytes;
// dispose of the audio converter
AudioConverterDispose(audioConverter);
}
}
OSStatus CoreAudio_AudioManager::_converterComplexInputDataProc(AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription** ioDataPacketDescription,
void* inUserData)
{
printf("Converter\n");
if(*ioNumberDataPackets != 1)
{
xal::log("_converterComplexInputDataProc cannot provide input data; invalid number of packets requested");
*ioNumberDataPackets = 0;
ioData->mNumberBuffers = 0;
return -50;
}
*ioNumberDataPackets = 1;
ioData->mNumberBuffers = 1;
ioData->mBuffers[0] = *(AudioBuffer*)inUserData;
*ioDataPacketDescription = NULL;
return 0;
}

Working code for Core Audio sample rate conversion and channel count conversion, using Audio Converter Services (now available as a part of the BSD-licensed XAL audio library):
void CoreAudio_AudioManager::_convertStream(Buffer* buffer, unsigned char** stream, int *streamSize)
{
if (buffer->getBitsPerSample() != unitDescription.mBitsPerChannel ||
buffer->getChannels() != unitDescription.mChannelsPerFrame ||
buffer->getSamplingRate() != unitDescription.mSampleRate)
{
// describe the input format's description
AudioStreamBasicDescription inputDescription;
memset(&inputDescription, 0, sizeof(inputDescription));
inputDescription.mFormatID = kAudioFormatLinearPCM;
inputDescription.mFormatFlags = kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
inputDescription.mChannelsPerFrame = buffer->getChannels();
inputDescription.mSampleRate = buffer->getSamplingRate();
inputDescription.mBitsPerChannel = buffer->getBitsPerSample();
inputDescription.mBytesPerFrame = (inputDescription.mBitsPerChannel * inputDescription.mChannelsPerFrame) / 8;
inputDescription.mFramesPerPacket = 1; //*streamSize / inputDescription.mBytesPerFrame;
inputDescription.mBytesPerPacket = inputDescription.mBytesPerFrame * inputDescription.mFramesPerPacket;
// copy conversion output format's description from the
// output audio unit's description.
// then adjust framesPerPacket to match the input we'll be passing.
// framecount of our input stream is based on the input bytecount.
// output stream will have same number of frames, but different
// number of bytes.
AudioStreamBasicDescription outputDescription = unitDescription;
outputDescription.mFramesPerPacket = 1; //inputDescription.mFramesPerPacket;
outputDescription.mBytesPerPacket = outputDescription.mBytesPerFrame * outputDescription.mFramesPerPacket;
// create an audio converter
AudioConverterRef audioConverter;
OSStatus acCreationResult = AudioConverterNew(&inputDescription, &outputDescription, &audioConverter);
if(!audioConverter)
{
// bail out
free(*stream);
*streamSize = 0;
*stream = (unsigned char*)malloc(0);
return;
}
// calculate number of bytes required for output of input stream.
// allocate buffer of adequate size.
UInt32 outputBytes = outputDescription.mBytesPerPacket * (*streamSize / inputDescription.mBytesPerPacket); // outputDescription.mFramesPerPacket * outputDescription.mBytesPerFrame;
unsigned char *outputBuffer = (unsigned char*)malloc(outputBytes);
memset(outputBuffer, 0, outputBytes);
// describe input data we'll pass into converter
AudioBuffer inputBuffer;
inputBuffer.mNumberChannels = inputDescription.mChannelsPerFrame;
inputBuffer.mDataByteSize = *streamSize;
inputBuffer.mData = *stream;
// describe output data buffers into which we can receive data.
AudioBufferList outputBufferList;
outputBufferList.mNumberBuffers = 1;
outputBufferList.mBuffers[0].mNumberChannels = outputDescription.mChannelsPerFrame;
outputBufferList.mBuffers[0].mDataByteSize = outputBytes;
outputBufferList.mBuffers[0].mData = outputBuffer;
// set output data packet size
UInt32 outputDataPacketSize = outputBytes / outputDescription.mBytesPerPacket;
// fill class members with data that we'll pass into
// the InputDataProc
_converter_currentBuffer = &inputBuffer;
_converter_currentInputDescription = inputDescription;
// convert
OSStatus result = AudioConverterFillComplexBuffer(audioConverter, /* AudioConverterRef inAudioConverter */
CoreAudio_AudioManager::_converterComplexInputDataProc, /* AudioConverterComplexInputDataProc inInputDataProc */
this, /* void *inInputDataProcUserData */
&outputDataPacketSize, /* UInt32 *ioOutputDataPacketSize */
&outputBufferList, /* AudioBufferList *outOutputData */
NULL /* AudioStreamPacketDescription *outPacketDescription */
);
// change "stream" to describe our output buffer.
// even if error occured, we'd rather have silence than unconverted audio.
free(*stream);
*stream = outputBuffer;
*streamSize = outputBytes;
// dispose of the audio converter
AudioConverterDispose(audioConverter);
}
}
OSStatus CoreAudio_AudioManager::_converterComplexInputDataProc(AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription** ioDataPacketDescription,
void* inUserData)
{
if(ioDataPacketDescription)
{
xal::log("_converterComplexInputDataProc cannot provide input data; it doesn't know how to provide packet descriptions");
*ioDataPacketDescription = NULL;
*ioNumberDataPackets = 0;
ioData->mNumberBuffers = 0;
return 501;
}
CoreAudio_AudioManager *self = (CoreAudio_AudioManager*)inUserData;
ioData->mNumberBuffers = 1;
ioData->mBuffers[0] = *(self->_converter_currentBuffer);
*ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize / self->_converter_currentInputDescription.mBytesPerPacket;
return 0;
}
In the header, as part of the CoreAudio_AudioManager class, here are relevant instance variables:
AudioStreamBasicDescription unitDescription;
AudioBuffer *_converter_currentBuffer;
AudioStreamBasicDescription _converter_currentInputDescription;
A few months later, looking at this again, I realized that I never documented the changes.
If you are interested in what the changes were:
- look at the callback function CoreAudio_AudioManager::_converterComplexInputDataProc
- the number of packets being supplied has to be properly reported through ioNumberDataPackets
- this required introducing new instance variables to hold both the buffer (the previous inUserData) and the input description (used to calculate the number of packets to be fed into Core Audio's converter)
- the number of packets fed into the converter is calculated from the amount of data our callback received and the number of bytes per packet in the input format (see the sketch below)
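To make that packet arithmetic concrete, here is a minimal sketch of the calculation (the 4096-byte figure is just an assumed example, not a value from XAL):
// Assume the callback was handed 4096 bytes of packed 16-bit stereo PCM.
// For packed linear PCM, mFramesPerPacket == 1, so:
//   mBytesPerFrame  = (mBitsPerChannel * mChannelsPerFrame) / 8 = (16 * 2) / 8 = 4
//   mBytesPerPacket = mBytesPerFrame * mFramesPerPacket        = 4
UInt32 bytesReceived   = 4096;  // ioData->mBuffers[0].mDataByteSize
UInt32 bytesPerPacket  = 4;     // _converter_currentInputDescription.mBytesPerPacket
UInt32 packetsProvided = bytesReceived / bytesPerPacket;  // 1024 packets (== frames here)
// packetsProvided is the value the input data proc stores into *ioNumberDataPackets.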
Hopefully this edit will help a future reader (myself included)!

Related

Chunked Encoding using Flac on iOS

I found a library that helps convert WAV files to FLAC:
https://github.com/jhurt/wav_to_flac
I also succeeded in compiling FLAC for the platform, and it works fine.
I've been using this library to convert the audio to FLAC after capturing it in WAV format, and then sending it to my server.
The problem is that the audio file can be long, so precious time is wasted.
The thing is, I want to encode the audio as FLAC and send it to the server at the same time as capturing, not after capturing stops. So I need some help on how to do that (encode FLAC directly from the audio so I can send it to my server)...
In my library called libsprec, you can see an example of both recording a WAV file (here) and converting it to FLAC (here). (Credits: the audio recording part heavily relies on Erica Sadun's work, for the record.)
Now if you want to do this in one step, you can do that as well. The trick is that you have to do the initialization of both the Audio Queues and the FLAC library first, then "interleave" the calls to them, i.e., when you get some audio data in the callback function of the Audio Queue, you immediately FLAC-encode it.
I don't think, however, that this would be much faster than recording and encoding in two separate steps. The heavy part of the processing is the recording and the maths in the encoding itself, so re-reading the same buffer (or I dare you, even a file!) won't add much to the processing time.
That said, you may want to do something like this:
// First, we initialize the Audio Queue
AudioStreamBasicDescription desc;
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
desc.mReserved = 0;
desc.mSampleRate = SAMPLE_RATE;
desc.mChannelsPerFrame = 2; // stereo (?)
desc.mBitsPerChannel = BITS_PER_SAMPLE;
desc.mBytesPerFrame = BYTES_PER_FRAME;
desc.mFramesPerPacket = 1;
desc.mBytesPerPacket = desc.mFramesPerPacket * desc.mBytesPerFrame;
OSStatus status;
UInt32 i;
AudioQueueRef queue;
status = AudioQueueNewInput(
&desc,
audio_queue_callback, // our custom callback function
NULL,
NULL,
NULL,
0,
&queue
);
if (status)
return status;
AudioQueueBufferRef buffers[NUM_BUFFERS];
for (i = 0; i < NUM_BUFFERS; i++) {
status = AudioQueueAllocateBuffer(
queue,
0x5000, // max buffer size
&buffers[i]
);
if (status)
return status;
status = AudioQueueEnqueueBuffer(
queue,
buffers[i],
0,
NULL
);
if (status)
return status;
}
// Then, we initialize the FLAC encoder:
FLAC__StreamEncoder *encoder;
FLAC__StreamEncoderInitStatus init_status;
FILE *infile;
const char *dataloc;
uint32_t rate; /* sample rate */
uint32_t total; /* number of samples in file */
uint32_t channels; /* number of channels */
uint32_t bps; /* bits per sample */
uint32_t dataoff; /* offset of PCM data within the file */
int err;
/*
* BUFSIZE samples * 2 bytes per sample * 2 channels
*/
FLAC__byte buffer[BUFSIZE * 2 * 2];
/*
* BUFSIZE samples * 2 channels
*/
FLAC__int32 pcm[BUFSIZE * 2];
/*
* Create and initialize the FLAC encoder
*/
encoder = FLAC__stream_encoder_new();
if (!encoder)
return -1;
FLAC__stream_encoder_set_verify(encoder, true);
FLAC__stream_encoder_set_compression_level(encoder, 5);
FLAC__stream_encoder_set_channels(encoder, NUM_CHANNELS); // 2 for stereo
FLAC__stream_encoder_set_bits_per_sample(encoder, BITS_PER_SAMPLE); // 16 for 16-bit samples (per channel)
FLAC__stream_encoder_set_sample_rate(encoder, SAMPLE_RATE);
init_status = FLAC__stream_encoder_init_stream(encoder, flac_callback, NULL, NULL, NULL, NULL);
if (init_status != FLAC__STREAM_ENCODER_INIT_STATUS_OK)
return -1;
// We now start the Audio Queue...
status = AudioQueueStart(queue, NULL);
// And when it's finished, we clean up the FLAC encoder...
FLAC__stream_encoder_finish(encoder);
FLAC__stream_encoder_delete(encoder);
// and the audio queue and its belongings too
AudioQueueFlush(queue);
AudioQueueStop(queue, false);
for (i = 0; i < NUM_BUFFERS; i++)
AudioQueueFreeBuffer(queue, buffers[i]);
AudioQueueDispose(queue, true);
// In the audio queue callback function, we do the encoding:
void audio_queue_callback(
void *data,
AudioQueueRef inAQ,
AudioQueueBufferRef buffer,
const AudioTimeStamp *start_time,
UInt32 num_packets,
const AudioStreamPacketDescription *desc
)
{
unsigned char *buf = buffer->mAudioData;
for (size_t i = 0; i < num_packets * channels; i++) {
uint16_t lsb = *(uint8_t *)(buf + i * 2);
uint16_t msb = *(uint8_t *)(buf + i * 2 + 1);
uint16_t usample = (msb << 8) | lsb;
union {
uint16_t usample;
int16_t ssample;
} u;
u.usample = usample;
pcm[i] = u.ssample;
}
FLAC__bool succ = FLAC__stream_encoder_process_interleaved(encoder, pcm, num_packets);
if (!succ) {
// handle_error();
}
}
// Finally, in the FLAC stream encoder callback:
FLAC__StreamEncoderWriteStatus flac_callback(
const FLAC__StreamEncoder *encoder,
const FLAC__byte buffer[],
size_t bytes,
unsigned samples,
unsigned current_frame,
void *client_data
)
{
// Here process `buffer' and stuff,
// then:
return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
}
You are welcome.
Your question is not very specific, but you need to use Audio Recording Services, which will let you get access to the audio data in chunks, and then move the data you get from there into the streaming interface of the FLAC encoder. You cannot use the WAV-to-FLAC program you linked to; you have to tap into the FLAC library yourself. API docs here.
Example of how to use a callback here.
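As a rough sketch of that streaming interface (send_to_server() is a hypothetical placeholder for your own networking code, not part of libFLAC):
#include <FLAC/stream_encoder.h>

// Hypothetical uploader; replace with your own networking/chunked-upload code.
extern void send_to_server(const void *data, size_t length);

// libFLAC calls this whenever a chunk of encoded FLAC data is ready.
static FLAC__StreamEncoderWriteStatus encoder_write_cb(
    const FLAC__StreamEncoder *encoder,
    const FLAC__byte buffer[],
    size_t bytes,
    unsigned samples,
    unsigned current_frame,
    void *client_data)
{
    // Stream each encoded chunk out immediately instead of writing it to a file.
    send_to_server(buffer, bytes);
    return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
}

// Registered when initializing the encoder:
// FLAC__stream_encoder_init_stream(encoder, encoder_write_cb, NULL, NULL, NULL, NULL);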
Can't you record your audio as WAV using Audio Queue Services and process the output packets with your library?
Edit, from the Apple developer documentation:
"Applications writing AIFF and WAV files must either update the data header’s size field at the end of recording—which can result in an unusable file if recording is interrupted before the header is finalized—or they must update the size field after recording each packet of data, which is inefficient."
Apparently it is quite hard to encode a WAV file on the fly.

How to play audio backwards?

Some people have suggested reading the audio data from end to start, creating a copy written from start to end, and then simply playing that reversed audio data.
Are there existing examples for iOS of how this is done?
I found an example project called MixerHost, which at some point uses an
AudioUnitSampleType to hold the audio data that has been read from file, and assigns it to a buffer.
This is defined as:
typedef SInt32 AudioUnitSampleType;
#define kAudioUnitSampleFractionBits 24
And according to Apple:
The canonical audio sample type for audio units and other audio
processing in iPhone OS is noninterleaved linear PCM with 8.24-bit
fixed-point samples.
So in other words it holds noninterleaved linear PCM audio data.
But I can't figure out where this data is being read in, and where it is stored. Here's the code that loads the audio data and buffers it:
- (void) readAudioFilesIntoMemory {
for (int audioFile = 0; audioFile < NUM_FILES; ++audioFile) {
NSLog (#"readAudioFilesIntoMemory - file %i", audioFile);
// Instantiate an extended audio file object.
ExtAudioFileRef audioFileObject = 0;
// Open an audio file and associate it with the extended audio file object.
OSStatus result = ExtAudioFileOpenURL (sourceURLArray[audioFile], &audioFileObject);
if (noErr != result || NULL == audioFileObject) {[self printErrorMessage: @"ExtAudioFileOpenURL" withStatus: result]; return;}
// Get the audio file's length in frames.
UInt64 totalFramesInFile = 0;
UInt32 frameLengthPropertySize = sizeof (totalFramesInFile);
result = ExtAudioFileGetProperty (
audioFileObject,
kExtAudioFileProperty_FileLengthFrames,
&frameLengthPropertySize,
&totalFramesInFile
);
if (noErr != result) {[self printErrorMessage: #"ExtAudioFileGetProperty (audio file length in frames)" withStatus: result]; return;}
// Assign the frame count to the soundStructArray instance variable
soundStructArray[audioFile].frameCount = totalFramesInFile;
// Get the audio file's number of channels.
AudioStreamBasicDescription fileAudioFormat = {0};
UInt32 formatPropertySize = sizeof (fileAudioFormat);
result = ExtAudioFileGetProperty (
audioFileObject,
kExtAudioFileProperty_FileDataFormat,
&formatPropertySize,
&fileAudioFormat
);
if (noErr != result) {[self printErrorMessage: #"ExtAudioFileGetProperty (file audio format)" withStatus: result]; return;}
UInt32 channelCount = fileAudioFormat.mChannelsPerFrame;
// Allocate memory in the soundStructArray instance variable to hold the left channel,
// or mono, audio data
soundStructArray[audioFile].audioDataLeft =
(AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
AudioStreamBasicDescription importFormat = {0};
if (2 == channelCount) {
soundStructArray[audioFile].isStereo = YES;
// Sound is stereo, so allocate memory in the soundStructArray instance variable to
// hold the right channel audio data
soundStructArray[audioFile].audioDataRight =
(AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
importFormat = stereoStreamFormat;
} else if (1 == channelCount) {
soundStructArray[audioFile].isStereo = NO;
importFormat = monoStreamFormat;
} else {
NSLog (#"*** WARNING: File format not supported - wrong number of channels");
ExtAudioFileDispose (audioFileObject);
return;
}
// Assign the appropriate mixer input bus stream data format to the extended audio
// file object. This is the format used for the audio data placed into the audio
// buffer in the SoundStruct data structure, which is in turn used in the
// inputRenderCallback callback function.
result = ExtAudioFileSetProperty (
audioFileObject,
kExtAudioFileProperty_ClientDataFormat,
sizeof (importFormat),
&importFormat
);
if (noErr != result) {[self printErrorMessage: #"ExtAudioFileSetProperty (client data format)" withStatus: result]; return;}
// Set up an AudioBufferList struct, which has two roles:
//
// 1. It gives the ExtAudioFileRead function the configuration it
// needs to correctly provide the data to the buffer.
//
// 2. It points to the soundStructArray[audioFile].audioDataLeft buffer, so
// that audio data obtained from disk using the ExtAudioFileRead function
// goes to that buffer
// Allocate memory for the buffer list struct according to the number of
// channels it represents.
AudioBufferList *bufferList;
bufferList = (AudioBufferList *) malloc (
sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
);
if (NULL == bufferList) {NSLog (@"*** malloc failure for allocating bufferList memory"); return;}
// initialize the mNumberBuffers member
bufferList->mNumberBuffers = channelCount;
// initialize the mBuffers member to 0
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
bufferList->mBuffers[arrayIndex] = emptyBuffer;
}
// set up the AudioBuffer structs in the buffer list
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;
if (2 == channelCount) {
bufferList->mBuffers[1].mNumberChannels = 1;
bufferList->mBuffers[1].mDataByteSize = totalFramesInFile * sizeof (AudioUnitSampleType);
bufferList->mBuffers[1].mData = soundStructArray[audioFile].audioDataRight;
}
// Perform a synchronous, sequential read of the audio data out of the file and
// into the soundStructArray[audioFile].audioDataLeft and (if stereo) .audioDataRight members.
UInt32 numberOfPacketsToRead = (UInt32) totalFramesInFile;
result = ExtAudioFileRead (
audioFileObject,
&numberOfPacketsToRead,
bufferList
);
free (bufferList);
if (noErr != result) {
[self printErrorMessage: #"ExtAudioFileRead failure - " withStatus: result];
// If reading from the file failed, then free the memory for the sound buffer.
free (soundStructArray[audioFile].audioDataLeft);
soundStructArray[audioFile].audioDataLeft = 0;
if (2 == channelCount) {
free (soundStructArray[audioFile].audioDataRight);
soundStructArray[audioFile].audioDataRight = 0;
}
ExtAudioFileDispose (audioFileObject);
return;
}
NSLog (#"Finished reading file %i into memory", audioFile);
// Set the sample index to zero, so that playback starts at the
// beginning of the sound.
soundStructArray[audioFile].sampleNumber = 0;
// Dispose of the extended audio file object, which also
// closes the associated file.
ExtAudioFileDispose (audioFileObject);
}
}
Which part contains the array of audio samples which have to be reversed? Is it the AudioUnitSampleType?
bufferList->mBuffers[0].mData = soundStructArray[audioFile].audioDataLeft;
Note: audioDataLeft is defined as an AudioUnitSampleType, which is an SInt32 but not an array.
I found a clue in a Core Audio Mailing list:
Well, nothing to do with iPh*n* as far as I know (unless some audio
API has been omitted -- I am not a member of that program). AFAIR,
AudioFile.h and ExtendedAudioFile.h should provide you with what you
need to read or write a caf and access its streams/channels.
Basically, you want to read each channel/stream backwards, so, if you
don't need properties of the audio file it is pretty straightforward
once you have a handle on that channel's data, assuming it is not in a
compressed format. Considering the number of formats a caf can
represent, this could take a few more lines of code than you're
thinking. Once you have a handle on uncompressed data, it should be
about as easy as reversing a string. Then you would of course replace
the file's data with the reversed data, or you could just feed the
audio output (or wherever you're sending the reversed signal) reading
whatever stream you have backwards.
This is what I tried, but when I assign my reversed buffer to the mData of both channels, I hear nothing:
AudioUnitSampleType *leftData = soundStructArray[audioFile].audioDataLeft;
AudioUnitSampleType *reversedData = (AudioUnitSampleType *) calloc (totalFramesInFile, sizeof (AudioUnitSampleType));
UInt64 j = 0;
for (UInt64 i = (totalFramesInFile - 1); i > -1; i--) {
reversedData[j] = leftData[i];
j++;
}
I have worked on a sample app which records what the user says and plays it backwards. I used Core Audio to achieve this. Link to the app code.
/*
Each sample is 16 bits (2 bytes) in size (mono channel).
You can load one sample at a time by copying it into a different buffer, starting at the end of the recording and reading backwards.
When you get to the start of the data, you have reversed the data, and playback will be reversed.
*/
// set up output file
AudioFileID outputAudioFile;
AudioStreamBasicDescription myPCMFormat;
myPCMFormat.mSampleRate = 16000.00;
myPCMFormat.mFormatID = kAudioFormatLinearPCM ;
myPCMFormat.mFormatFlags = kAudioFormatFlagsCanonical;
myPCMFormat.mChannelsPerFrame = 1;
myPCMFormat.mFramesPerPacket = 1;
myPCMFormat.mBitsPerChannel = 16;
myPCMFormat.mBytesPerPacket = 2;
myPCMFormat.mBytesPerFrame = 2;
AudioFileCreateWithURL((__bridge CFURLRef)self.flippedAudioUrl,
kAudioFileCAFType,
&myPCMFormat,
kAudioFileFlags_EraseFile,
&outputAudioFile);
// set up input file
AudioFileID inputAudioFile;
OSStatus theErr = noErr;
UInt64 fileDataSize = 0;
AudioStreamBasicDescription theFileFormat;
UInt32 thePropertySize = sizeof(theFileFormat);
theErr = AudioFileOpenURL((__bridge CFURLRef)self.recordedAudioUrl, kAudioFileReadPermission, 0, &inputAudioFile);
thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(inputAudioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);
UInt32 dataSize = fileDataSize;
void* theData = malloc(dataSize);
//Read data into buffer
UInt32 readPoint = dataSize;
UInt32 writePoint = 0;
while( readPoint > 0 )
{
UInt32 bytesToRead = 2;
// step back one 16-bit sample, read it from the input, and append it to the output
readPoint -= 2;
AudioFileReadBytes( inputAudioFile, false, readPoint, &bytesToRead, theData );
AudioFileWriteBytes( outputAudioFile, false, writePoint, &bytesToRead, theData );
writePoint += 2;
}
free(theData);
AudioFileClose(inputAudioFile);
AudioFileClose(outputAudioFile);
Hope this helps.
Typically, when an ASBD is being used, the fields describe the complete layout of the sample data in the buffers that are represented by this description - where typically those buffers are represented by an AudioBuffer that is contained in an AudioBufferList.
However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the AudioBufferList has a different structure and semantic. In this case, the ASBD fields will describe the format of ONE of the AudioBuffers that are contained in the list, AND each AudioBuffer in the list is determined to have a single (mono) channel of audio data. Then, the ASBD's mChannelsPerFrame will indicate the total number of AudioBuffers that are contained within the AudioBufferList - where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representation of this list - and won't be found in the AudioHardware usage of this structure.
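As an illustration, here is a minimal sketch of a noninterleaved canonical ASBD and the matching AudioBufferList (the sample rate, channel count, and frame count are just assumed example values):
// Noninterleaved 8.24 fixed-point PCM, the canonical audio unit format of that era.
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical; // includes kAudioFormatFlagIsNonInterleaved
asbd.mChannelsPerFrame = 2;                                   // total channels == number of AudioBuffers in the list
asbd.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
asbd.mBytesPerFrame    = sizeof(AudioUnitSampleType);         // describes ONE mono buffer
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = asbd.mBytesPerFrame;

// The matching buffer list holds one single-channel AudioBuffer per channel.
UInt32 frames = 1024;
AudioBufferList *abl = (AudioBufferList *)malloc(sizeof(AudioBufferList)
                        + sizeof(AudioBuffer) * (asbd.mChannelsPerFrame - 1));
abl->mNumberBuffers = asbd.mChannelsPerFrame;
for (UInt32 ch = 0; ch < asbd.mChannelsPerFrame; ch++) {
    abl->mBuffers[ch].mNumberChannels = 1;
    abl->mBuffers[ch].mDataByteSize   = frames * sizeof(AudioUnitSampleType);
    abl->mBuffers[ch].mData           = calloc(frames, sizeof(AudioUnitSampleType));
}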
You do not have to allocate a separate buffer to store the reversed data; that can take a fair bit of CPU time, depending on the length of the sound. To play a sound backwards, just make the sampleNumber counter start at totalFramesInFile - 1.
You can modify MixerHost like this, to achieve the desired effect.
Replace soundStructArray[audioFile].sampleNumber = 0; with
soundStructArray[audioFile].sampleNumber = totalFramesInFile - 1;
Make sampleNumber SInt32 instead of UInt32.
Replace the loop in which you write the samples out with this:
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
outSamplesChannelLeft[frameNumber] = dataInLeft[sampleNumber];
if (isStereo) outSamplesChannelRight[frameNumber] = dataInRight[sampleNumber];
if (--sampleNumber < 0) sampleNumber = frameTotalForSound - 1;
}
This effectively makes it play backwards. Mmmm. It's been a while since I've heard the MixerHost music. I must admit I find it to be quite pleasing.

iOS - AudioUnitRender returned error -10876 on device, but running fine in simulator

I encountered a problem which made me unable to capture the input signal from the microphone on the device (iPhone 4). However, the code runs fine in the simulator.
The code was originally adapted from Apple's MixerHostAudio class from the MixerHost sample code. It ran fine both on the device and in the simulator before I started adding code for capturing mic input.
Wondering if somebody could help me out. Thanks in advance!
Here is my inputRenderCallback function, which feeds the signal into the mixer input:
static OSStatus inputRenderCallback (
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
recorderStructPtr recorderStructPointer = (recorderStructPtr) inRefCon;
// ....
AudioUnitRenderActionFlags renderActionFlags;
err = AudioUnitRender(recorderStructPointer->iOUnit,
&renderActionFlags,
inTimeStamp,
1, // bus number for input
inNumberFrames,
recorderStructPointer->fInputAudioBuffer
);
// error returned is -10876
// ....
}
Here is my related initialization code:
Right now I keep only one input in the mixer, so the mixer seems redundant, but it worked fine before I added the input capture code.
// Convenience function to allocate our audio buffers
- (AudioBufferList *) allocateAudioBufferListByNumChannels:(UInt32)numChannels withSize:(UInt32)size {
AudioBufferList* list;
UInt32 i;
list = (AudioBufferList*)calloc(1, sizeof(AudioBufferList) + numChannels * sizeof(AudioBuffer));
if(list == NULL)
return nil;
list->mNumberBuffers = numChannels;
for(i = 0; i < numChannels; ++i) {
list->mBuffers[i].mNumberChannels = 1;
list->mBuffers[i].mDataByteSize = size;
list->mBuffers[i].mData = malloc(size);
if(list->mBuffers[i].mData == NULL) {
[self destroyAudioBufferList:list];
return nil;
}
}
return list;
}
// initialize audio buffer list for input capture
recorderStructInstance.fInputAudioBuffer = [self allocateAudioBufferListByNumChannels:1 withSize:4096];
// I/O unit description
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
// Multichannel mixer unit description
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;
AUNode iONode; // node for I/O unit
AUNode mixerNode; // node for Multichannel Mixer unit
// Add the nodes to the audio processing graph
result = AUGraphAddNode (
processingGraph,
&iOUnitDescription,
&iONode);
result = AUGraphAddNode (
processingGraph,
&MixerUnitDescription,
&mixerNode
);
result = AUGraphOpen (processingGraph);
// fetch mixer AudioUnit instance
result = AUGraphNodeInfo (
processingGraph,
mixerNode,
NULL,
&mixerUnit
);
// fetch RemoteIO AudioUnit instance
result = AUGraphNodeInfo (
processingGraph,
iONode,
NULL,
&(recorderStructInstance.iOUnit)
);
// enable input of RemoteIO unit
UInt32 enableInput = 1;
AudioUnitElement inputBus = 1;
result = AudioUnitSetProperty(recorderStructInstance.iOUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
inputBus,
&enableInput,
sizeof(enableInput)
);
// setup mixer inputs
UInt32 busCount = 1;
result = AudioUnitSetProperty (
mixerUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&busCount,
sizeof (busCount)
);
UInt32 maximumFramesPerSlice = 4096;
result = AudioUnitSetProperty (
mixerUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
0,
&maximumFramesPerSlice,
sizeof (maximumFramesPerSlice)
);
for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
// set up input callback
AURenderCallbackStruct inputCallbackStruct;
inputCallbackStruct.inputProc = &inputRenderCallback;
inputCallbackStruct.inputProcRefCon = &recorderStructInstance;
result = AUGraphSetNodeInputCallback (
processingGraph,
mixerNode,
busNumber,
&inputCallbackStruct
);
// set up stream format
AudioStreamBasicDescription mixerBusStreamFormat;
size_t bytesPerSample = sizeof (AudioUnitSampleType);
mixerBusStreamFormat.mFormatID = kAudioFormatLinearPCM;
mixerBusStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
mixerBusStreamFormat.mBytesPerPacket = bytesPerSample;
mixerBusStreamFormat.mFramesPerPacket = 1;
mixerBusStreamFormat.mBytesPerFrame = bytesPerSample;
mixerBusStreamFormat.mChannelsPerFrame = 2;
mixerBusStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
mixerBusStreamFormat.mSampleRate = graphSampleRate;
result = AudioUnitSetProperty (
mixerUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
busNumber,
&mixerBusStreamFormat,
sizeof (mixerBusStreamFormat)
);
}
// set sample rate of mixer output
result = AudioUnitSetProperty (
mixerUnit,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&graphSampleRate,
sizeof (graphSampleRate)
);
// connect mixer output to RemoteIO
result = AUGraphConnectNodeInput (
processingGraph,
mixerNode, // source node
0, // source node output bus number
iONode, // destination node
0 // destination node input bus number
);
// initialize AudioGraph
result = AUGraphInitialize (processingGraph);
// start AudioGraph
result = AUGraphStart (processingGraph);
// enable mixer input
result = AudioUnitSetParameter (
mixerUnit,
kMultiChannelMixerParam_Enable,
kAudioUnitScope_Input,
0, // bus number
1, // on
0
);
First, it should be noted that the error code -10876 corresponds to the symbol named kAudioUnitErr_NoConnection. You can usually find these by googling the error code number along with the term CoreAudio. That should be a hint that you are asking the system to render to an AudioUnit which isn't properly connected.
Within your render callback, you are casting the void* user data to a recorderStructPtr. I'm going to assume that when you debugged this code, the cast returned a non-null structure which has your actual audio unit's address in it. However, you should be rendering into the AudioBufferList which is passed in to your render callback (i.e., the inputRenderCallback function). That contains the list of samples from the system which you need to process.
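A minimal sketch of that idea, assuming the recorderStructPtr layout shown in the question (only the iOUnit field is used here):
static OSStatus inputRenderCallback(void                       *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp       *inTimeStamp,
                                    UInt32                      inBusNumber,
                                    UInt32                      inNumberFrames,
                                    AudioBufferList            *ioData)
{
    recorderStructPtr recorder = (recorderStructPtr)inRefCon;
    // Pull the microphone samples from bus 1 of the RemoteIO unit directly into
    // the buffer list supplied by the caller, rather than a privately allocated one.
    OSStatus err = AudioUnitRender(recorder->iOUnit,
                                   ioActionFlags,   // reuse the flags passed to the callback
                                   inTimeStamp,
                                   1,               // RemoteIO input bus
                                   inNumberFrames,
                                   ioData);
    return err;
}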
I solved the issue on my own. It was due to a bug in my code causing the -10876 error on AudioUnitRender().
I had set the category of my AudioSession to AVAudioSessionCategoryPlayback instead of AVAudioSessionCategoryPlayAndRecord. When I fixed the category to AVAudioSessionCategoryPlayAndRecord, I could finally capture microphone input successfully by calling AudioUnitRender() on the device.
In the simulator, using AVAudioSessionCategoryPlayback doesn't result in any error when calling AudioUnitRender() to capture microphone input, and it works well there. I think this is an issue with the iOS simulator (though not critical).
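For reference, setting the session category before starting the graph looks roughly like this (a minimal sketch using the AVAudioSession API, with error handling kept short):
#import <AVFoundation/AVFoundation.h>

NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// PlayAndRecord is required for AudioUnitRender() to deliver microphone input on the device.
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
[session setActive:YES error:&sessionError];
if (sessionError != nil) {
    NSLog(@"Audio session setup failed: %@", sessionError);
}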
I have also seen this issue occur when the values in the I/O Unit's stream format property are inconsistent. Make sure that your AudioStreamBasicDescription's bits per channel, channels per frame, bytes per frame, frames per packet, and bytes per packet all make sense.
Specifically I got the NoConnection error when I changed a stream format from stereo to mono by changing the channels per frame, but forgot to change the bytes per frame and bytes per packet to match the fact that there is half as much data in a mono frame as a stereo frame.
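As a sanity check, a self-consistent 16-bit mono interleaved PCM description looks something like this (44100 Hz is just an assumed example rate):
AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
monoFormat.mChannelsPerFrame = 1;   // mono
monoFormat.mBitsPerChannel   = 16;
monoFormat.mBytesPerFrame    = (monoFormat.mBitsPerChannel / 8) * monoFormat.mChannelsPerFrame; // 2
monoFormat.mFramesPerPacket  = 1;
monoFormat.mBytesPerPacket   = monoFormat.mBytesPerFrame * monoFormat.mFramesPerPacket;         // 2
// Halving mChannelsPerFrame without also halving mBytesPerFrame and mBytesPerPacket
// is exactly the kind of inconsistency that produces the NoConnection error described above.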
If you initialise an AudioUnit and don't set its kAudioUnitProperty_SetRenderCallback property, you'll get this error if you call AudioUnitRender on it.
Call AudioUnitProcess on it instead.

Data format from recording using Audio Queue framework

I'm writing an iPhone app which should record the users voice, and feed the audio data into a library for modifications such as changing tempo and pitch. I started off with the SpeakHere example code from Apple:
http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html
That project lays the groundwork for recording the user's voice and playing it back. It works well.
Now I'm diving into the code and I need to figure out how to feed the audio data into the SoundTouch library (http://www.surina.net/soundtouch/) to change the pitch. I became familiar with the Audio Queue framework while going through the code, and I found the place where I receive the audio data from the recording.
Essentially, you call AudioQueueNewInput to create a new input queue. You pass a callback function which is called every time a chunk of audio data is available. It is within this callback that I need to pass the chunks of data into SoundTouch.
I have it all set up, but the sound I play back from the SoundTouch library is very staticky (it barely resembles the original). If I don't pass it through SoundTouch and play the original audio, it works fine.
Basically, I'm missing something about what the actual data I'm getting represents. I was assuming that I am getting a stream of shorts which are samples, one sample for each channel. That's how SoundTouch expects it, so something must not be right.
Here is the code which sets up the audio queue, so you can see how it is configured:
void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
memset(&mRecordFormat, 0, sizeof(mRecordFormat));
UInt32 size = sizeof(mRecordFormat.mSampleRate);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
&size,
&mRecordFormat.mSampleRate), "couldn't get hardware sample rate");
size = sizeof(mRecordFormat.mChannelsPerFrame);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
&size,
&mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");
mRecordFormat.mFormatID = inFormatID;
if (inFormatID == kAudioFormatLinearPCM)
{
// if we want pcm, default to signed 16-bit little-endian
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket = 1;
}
}
And here's part of the code which actually sets it up:
SetupAudioFormat(kAudioFormatLinearPCM);
// create the queue
XThrowIfError(AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */, NULL /* run loop mode */,
0 /* flags */, &mQueue), "AudioQueueNewInput failed");
And finally, here is the callback which handles new audio data:
void AQRecorder::MyInputBufferHandler(void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc) {
AQRecorder *aqr = (AQRecorder *)inUserData;
try {
if (inNumPackets > 0) {
CAStreamBasicDescription queueFormat = aqr->DataFormat();
SoundTouch *soundTouch = aqr->getSoundTouch();
soundTouch->putSamples((const SAMPLETYPE *)inBuffer->mAudioData,
inBuffer->mAudioDataByteSize / 2 / queueFormat.NumberChannels());
SAMPLETYPE *samples = (SAMPLETYPE *)malloc(sizeof(SAMPLETYPE) * 10000 * queueFormat.NumberChannels());
UInt32 numSamples;
while((numSamples = soundTouch->receiveSamples((SAMPLETYPE *)samples, 10000))) {
// write packets to file
XThrowIfError(AudioFileWritePackets(aqr->mRecordFile,
FALSE,
numSamples * 2 * queueFormat.NumberChannels(),
NULL,
aqr->mRecordPacket,
&numSamples,
samples),
"AudioFileWritePackets failed");
aqr->mRecordPacket += numSamples;
}
free(samples);
}
// if we're not stopping, re-enqueue the buffer so that it gets filled again
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
}
You can see that I'm passing the data in inBuffer->mAudioData to SoundTouch. In my callback, what exactly do the bytes represent, i.e., how do I extract samples from mAudioData?
The default endianness of the Audio Queue may be the opposite of what you expect. You may have to swap the upper and lower bytes of each 16-bit audio sample after recording and before playing:
sample_le = (0xff00 & (sample_be << 8)) | (0x00ff & (sample_be >> 8)) ;
You have to check that the endianness, signedness, etc. of what you are getting match what the library expects. Use the mFormatFlags of the AudioStreamBasicDescription to determine the source format. Then you might have to convert the samples (e.g. newSample = sample + 0x8000).
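A small sketch of that kind of check, assuming 16-bit linear PCM and reusing the mRecordFormat / inBuffer names from the code above (the loop itself is illustrative, not part of SpeakHere):
const AudioStreamBasicDescription *fmt = &mRecordFormat;   // the queue's recording format
BOOL isBigEndian = (fmt->mFormatFlags & kAudioFormatFlagIsBigEndian) != 0;
BOOL isSigned    = (fmt->mFormatFlags & kAudioFormatFlagIsSignedInteger) != 0;

// Walk the buffer as raw 16-bit samples and normalize them to signed, native-endian shorts.
UInt32 sampleCount = inBuffer->mAudioDataByteSize / sizeof(SInt16);
SInt16 *samples    = (SInt16 *)inBuffer->mAudioData;
for (UInt32 i = 0; i < sampleCount; i++) {
    UInt16 raw = (UInt16)samples[i];
    if (isBigEndian)
        raw = CFSwapInt16BigToHost(raw);      // bring into host byte order
    if (!isSigned)
        raw = (UInt16)(raw + 0x8000);         // offset-binary (unsigned) -> two's complement
    samples[i] = (SInt16)raw;
}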

AudioFileWriteBytes fails with error code -40

I'm trying to write raw audio bytes to a file using AudioFileWriteBytes(). Here's what I'm doing:
void writeSingleChannelRingBufferDataToFileAsSInt16(AudioFileID audioFileID, AudioConverterRef audioConverter, ringBuffer *rb, SInt16 *holdingBuffer) {
// First, figure out which bits of audio we'll be
// writing to file from the ring buffer
UInt32 lastFreshSample = rb->lastWrittenIndex;
OSStatus status;
int numSamplesToWrite;
UInt32 numBytesToWrite;
if (lastFreshSample < rb->lastReadIndex) {
numSamplesToWrite = kNumPointsInWave + lastFreshSample - rb->lastReadIndex - 1;
}
else {
numSamplesToWrite = lastFreshSample - rb->lastReadIndex;
}
numBytesToWrite = numSamplesToWrite*sizeof(SInt16);
Then we copy the audio data (stored as floats) to a holding buffer (SInt16) that will be written directly to the file. The copying looks funky because it's from a ring buffer.
UInt32 buffLen = rb->sizeOfBuffer - 1;
for (int i=0; i < numSamplesToWrite; ++i) {
holdingBuffer[i] = rb->data[(i + rb->lastReadIndex) & buffLen];
}
Okay, now we actually try to write the audio from the SInt16 buffer "holdingBuffer" to the audio file. The NSLog will spit out an error -40, but also claims that it's writing bytes. No data is written to file.
status = AudioFileWriteBytes(audioFileID, NO, 0, &numBytesToWrite, &holdingBuffer);
rb->lastReadIndex = lastFreshSample;
NSLog(#"Error = %d, wrote %d bytes", status, numBytesToWrite);
return;
What is this error -40? By the way, everything works fine if I write straight from the ringBuffer to the file. Of course it sounds like junk, because I'm writing floats, not SInt16s, but AudioFileWriteBytes doesn't complain.
The key is to explicitly change the endianness of the incoming data to big endian. All I had to do was wrap CFSwapInt16HostToBig around my data to get:
float audioVal = rb->data[(i + rb->lastReadIndex) & buffLen];
holdingBuffer[i] = CFSwapInt16HostToBig((SInt16) audioVal );