How to use an Audio Unit on the iPhone

I'm looking for a way to change the pitch of recorded audio as it is saved to disk, or played back (in real time). I understand Audio Units can be used for this. The iPhone offers limited support for Audio Units (for example it's not possible to create/use custom audio units, as far as I can tell), but several out-of-the-box audio units are available, one of which is AUPitch.
How exactly would I use an audio unit (specifically AUPitch)? Do you hook it into an audio queue somehow? Is it possible to chain audio units together (for example, to simultaneously add an echo effect and a change in pitch)?
EDIT: After inspecting the iPhone SDK headers (I think AudioUnit.h, I'm not in front of a Mac at the moment), I noticed that AUPitch is commented out. So it doesn't look like AUPitch is available on the iPhone after all. weep weep
Apple seems to have better organized their iPhone SDK documentation at developer.apple.com of late; it's now more difficult to find references to AUPitch, etc.
That said, I'm still interested in quality answers on using Audio Units (in general) on the iPhone.

There are some very good resources here (http://michael.tyson.id.au/2008/11/04/using-remoteio-audio-unit/) for using the RemoteIO Audio Unit. In my experience working with Audio Units on the iPhone, I've found that I can implement a transformation manually in the render callback; you might find that solves your problem.
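A minimal sketch of where such a transformation would go, assuming a mono, 16-bit RemoteIO setup like the one in that tutorial (the RemoteIO unit is assumed to be passed in via inRefCon, and gGainScale is a made-up stand-in for whatever DSP you actually want to run):

// Hedged sketch: a RemoteIO render callback that pulls the recorded samples
// from the input bus (bus 1) and transforms them in place before output.
// Assumes 16-bit signed integer, mono samples.
static float gGainScale = 0.5f;   // hypothetical parameter, not a Core Audio name

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;   // assumed to be the RemoteIO unit
    // Pull the mic samples for this slice into ioData.
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1 /* input bus */, inNumberFrames, ioData);
    if (err) return err;

    // Transform each sample in place; a real pitch shifter would do its DSP
    // here instead of this simple gain change.
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        samples[i] = (SInt16)(samples[i] * gGainScale);
    }
    return noErr;
}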

Regarding changing pitch on the iPhone, OpenAL is the way to go. Check out the SoundManager class available from www.71squared.com for a great example of an OpenAL sound engine that supports pitch.
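Under the hood this comes down to OpenAL's per-source AL_PITCH parameter; a minimal sketch (soundSource is a hypothetical, already-created ALuint source, not part of the SoundManager API):

// Hedged sketch of the underlying OpenAL call: once a buffer is attached to a
// source (as SoundManager does internally), pitch is a per-source parameter.
// 1.0 is unchanged, 2.0 is an octave up, 0.5 is an octave down.
#import <OpenAL/al.h>

void SetSourcePitch(ALuint soundSource, float pitch)
{
    alSourcef(soundSource, AL_PITCH, pitch);   // e.g. 1.5f raises the pitch
    alSourcePlay(soundSource);                 // start (or restart) playback
}

Note that OpenAL changes pitch by resampling, so the playback speed changes along with the pitch.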

- (void)modifySpeedOf:(CFURLRef)inputURL byFactor:(float)factor andWriteTo:(CFURLRef)outputURL {
    ExtAudioFileRef inputFile = NULL;
    ExtAudioFileRef outputFile = NULL;

    AudioStreamBasicDescription destFormat;
    destFormat.mFormatID = kAudioFormatLinearPCM;
    destFormat.mFormatFlags = kAudioFormatFlagsCanonical;
    destFormat.mSampleRate = 44100 * factor;
    destFormat.mBytesPerPacket = 2;
    destFormat.mFramesPerPacket = 1;
    destFormat.mBytesPerFrame = 2;
    destFormat.mChannelsPerFrame = 1;
    destFormat.mBitsPerChannel = 16;
    destFormat.mReserved = 0;

    ExtAudioFileCreateWithURL(outputURL, kAudioFileCAFType,
                              &destFormat, NULL, kAudioFileFlags_EraseFile, &outputFile);
    ExtAudioFileOpenURL(inputURL, &inputFile);

    // find out how many frames long this file is
    SInt64 length = 0;
    UInt32 dataSize2 = (UInt32)sizeof(length);
    ExtAudioFileGetProperty(inputFile,
                            kExtAudioFileProperty_FileLengthFrames, &dataSize2, &length);

    // kBufferSize (frames per chunk) is assumed to be defined elsewhere, e.g. 4096
    SInt16 *buffer = (SInt16 *)malloc(kBufferSize * sizeof(SInt16));
    UInt32 totalFramecount = 0;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mData = buffer; // pointer to buffer of audio data
    bufferList.mBuffers[0].mDataByteSize = kBufferSize * sizeof(SInt16); // number of bytes in the buffer

    while (true) {
        UInt32 frameCount = kBufferSize * sizeof(SInt16) / 2; // = kBufferSize frames for 16-bit mono
        // Read a chunk of input
        ExtAudioFileRead(inputFile, &frameCount, &bufferList);
        totalFramecount += frameCount;
        if (!frameCount || totalFramecount >= length) {
            // termination condition
            break;
        }
        ExtAudioFileWrite(outputFile, frameCount, &bufferList);
    }

    free(buffer);
    ExtAudioFileDispose(inputFile);
    ExtAudioFileDispose(outputFile);
}
This will change the pitch (and the playback speed) based on the factor: the samples are copied unchanged into a file whose declared sample rate is scaled by the factor, so pitch and tempo shift together on playback.
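A hypothetical call from the same class might look like this (the paths are made up; kBufferSize, used by the method above, is assumed to be defined, e.g. #define kBufferSize 4096):

// Hedged usage sketch: shift a CAF file up an octave and twice as fast.
CFURLRef inURL  = CFURLCreateWithFileSystemPath(NULL, CFSTR("/path/to/input.caf"),
                                                kCFURLPOSIXPathStyle, false);
CFURLRef outURL = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/shifted.caf"),
                                                kCFURLPOSIXPathStyle, false);
// factor 2.0 = twice as fast and an octave higher; 0.5 does the opposite
[self modifySpeedOf:inURL byFactor:2.0f andWriteTo:outURL];
CFRelease(inURL);
CFRelease(outURL);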

I've used the NewTimePitch audio unit for this before; the AudioComponentDescription for that is:
var newTimePitchDesc = AudioComponentDescription(componentType: kAudioUnitType_FormatConverter,
componentSubType: kAudioUnitSubType_NewTimePitch,
componentManufacturer: kAudioUnitManufacturer_Apple,
componentFlags: 0,
componentFlagsMask: 0)
Then you can change the pitch parameter with an AudioUnitSetParameter call. For example, this changes the pitch by -1000 cents:
err = AudioUnitSetParameter(newTimePitchAudioUnit,
kNewTimePitchParam_Pitch,
kAudioUnitScope_Global,
0,
-1000,
0)
The parameters for this audio unit are as follows
// Parameters for AUNewTimePitch
enum {
    // Global, rate, 1/32 -> 32.0, 1.0
    kNewTimePitchParam_Rate = 0,
    // Global, Cents, -2400 -> 2400, 1.0
    kNewTimePitchParam_Pitch = 1,
    // Global, generic, 3.0 -> 32.0, 8.0
    kNewTimePitchParam_Overlap = 4,
    // Global, Boolean, 0->1, 1
    kNewTimePitchParam_EnablePeakLocking = 6
};
but you'll only need to change the pitch parameter for your purposes. For a guide on how to implement this, refer to Justin's answer.
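If you are working in C/Objective-C rather than Swift, a rough sketch of instantiating the unit from that description and setting the pitch looks like this (wiring its input and output into your graph is left out):

// Hedged sketch (C API): find and instantiate the NewTimePitch audio unit,
// then drop the pitch by 1000 cents.
AudioComponentDescription desc = {
    .componentType         = kAudioUnitType_FormatConverter,
    .componentSubType      = kAudioUnitSubType_NewTimePitch,
    .componentManufacturer = kAudioUnitManufacturer_Apple,
    .componentFlags        = 0,
    .componentFlagsMask    = 0
};
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit newTimePitchUnit = NULL;
OSStatus err = AudioComponentInstanceNew(comp, &newTimePitchUnit);
if (err == noErr) {
    err = AudioUnitInitialize(newTimePitchUnit);
}
if (err == noErr) {
    err = AudioUnitSetParameter(newTimePitchUnit,
                                kNewTimePitchParam_Pitch,
                                kAudioUnitScope_Global,
                                0,       // element
                                -1000,   // cents
                                0);      // offset frames
}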

Related

Stereo playback gets converted to mono (on iPad only) even when using stereo headphones

I'm developing an audio processing app using Core Audio that records sounds through the headset mic and plays them back through the headphones.
I've added a feature for the balance, i.e. to shift the playback onto one ear only.
This works perfectly on the iPods and iPhones I've tested it on. But not on the iPad. On the iPad the location of the sound doesn't change at all.
This is the code used to render the audio output:
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    // Get a pointer to the dataBuffer of the AudioBufferList
    AudioBuffer firstBuffer = ioData->mBuffers[0];
    AudioSampleType *outA = (AudioSampleType *)firstBuffer.mData;
    int numChannels = firstBuffer.mNumberChannels;

    NSLog(@"numChannels = %d, left = %d, right = %d", numChannels, leftBalVolume, rightBalVolume);

    // Loop through the callback buffer, generating samples
    for (UInt32 i = 0; i < inNumberFrames * numChannels; i += numChannels) {
        int outSignal = getFilteredSampleData(sampleDataTail);
        outA[i] = (outSignal * leftBalVolume) / 32768;
        if (numChannels > 1) {
            outA[i + 1] = (outSignal * rightBalVolume) / 32768;
        }
        sampleDataTail = (sampleDataTail + 1) % sampleDataLen;
    }
    return noErr;
}
The output from the NSLog is as follows:
numChannels = 2, left = 16557, right = 32767
...telling me that it is basically working in stereo mode; I should hear the audio slightly to the right. But even if I put it 100% to the right, I still hear the audio in the middle, with the same volume on both earphones.
Obviously, the iPad 2 mixes the audio signal down to mono and then plays that on both earphones. I thought that it might have to do with the fact that the iPad has only one speaker and thus would usually mix to mono... but why does it do that even when stereo headphones are connected?
Any help is greatly appreciated!
Found the culprit:
I've called
desc.SetAUCanonical(1, true);
on the StreamFormat descriptor of the mixer's output. Now I'm just setting values for every property, and it works on the iPad as well...
desc.mSampleRate = kGraphSampleRate;
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
desc.mFramesPerPacket = 1;
desc.mChannelsPerFrame = 2;
desc.mBitsPerChannel = 16;
desc.mBytesPerPacket = 4;
desc.mBytesPerFrame = 4;
It seems that SetAUCanonical does different things on the iPad vs. the iPod touch and iPhone.

How to get mono file to play in stereo in iPhone app using audio queue services

I'm writing an iPhone app in which I'm playing some mono mp3 files using Audio Queue Services. When playing, I only hear sound on one channel. I've been searching for an example of how to get the files to play on both channels, with no luck. What I'm doing is pretty simple right now. I'm setting up my audio queue like this:
AudioStreamBasicDescription queueASBD;
AudioQueueRef audioQueue;
queueASBD.mSampleRate = 44100.0;
queueASBD.mFormatID = kAudioFormatLinearPCM;
queueASBD.mFormatFlags = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
queueASBD.mBytesPerPacket = 4;
queueASBD.mFramesPerPacket = 1;
queueASBD.mBytesPerFrame = 4;
queueASBD.mChannelsPerFrame = 2;
queueASBD.mBitsPerChannel = 16;
queueASBD.mReserved = 0;
AudioQueueNewOutput(&queueASBD, AudioQueueCallback, NULL, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &audioQueue);
I open the mp3 file like this (error checking and such removed for brevity):
ExtAudioFileRef audioFile;
ExtAudioFileOpenURL(url, &audioFile);
ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(queueASBD), &queueASBD);
And to queue a buffer, I do something like this:
AudioQueueBufferRef buffer; // previously allocated
AudioBufferList abl;
UInt32 length = (UInt32)queueASBD.mSampleRate / BUFFERS_PER_SECOND;
abl.mNumberBuffers = 1;
abl.mBuffers[0].mDataByteSize = (UInt32)(queueASBD.mSampleRate * queueASBD.mBytesPerPacket / BUFFERS_PER_SECOND);
abl.mBuffers[0].mNumberChannels = queueASBD.mChannelsPerFrame;
abl.mBuffers[0].mData = buffer->mAudioData;
ExtAudioFileRead(audioFile, &length, &abl);
UInt32 byte_length = length * (UInt32)queueASBD.mBytesPerPacket;
buffer->mAudioDataByteSize = byte_length;
AudioQueueEnqueueBuffer(audioQueue, buffer, 0, NULL);
Is there a way to get the file to play in stereo without totally re-coding (such as by using the Audio Unit APIs)? Could an Audio Converter help here? Is there some other way? Thanks for any help.
Try opening the Audio Queue with only one channel per frame (e.g. mono), and the matching number of bytes per packet and per frame (probably 2).
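A sketch of what that mono ASBD might look like, assuming the same 44.1 kHz, 16-bit PCM setup as in the question:

// Hedged sketch: the same queue format as above, but mono. ExtAudioFile will
// then hand back a single channel, and a mono queue is normally rendered
// centered (to both sides) on a stereo output route.
AudioStreamBasicDescription monoASBD = {0};
monoASBD.mSampleRate       = 44100.0;
monoASBD.mFormatID         = kAudioFormatLinearPCM;
monoASBD.mFormatFlags      = kAudioFormatFlagsNativeEndian |
                             kAudioFormatFlagIsPacked |
                             kAudioFormatFlagIsSignedInteger;
monoASBD.mChannelsPerFrame = 1;
monoASBD.mBitsPerChannel   = 16;
monoASBD.mBytesPerFrame    = 2;   // 1 channel * 2 bytes
monoASBD.mFramesPerPacket  = 1;
monoASBD.mBytesPerPacket   = 2;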

Audio Processing: Playing with volume level

I want to read a sound file from the application bundle, copy it, play with its maximum volume level (gain value or peak power; I'm not sure about the technical name), and then write it as another file to the bundle again.
I did the copying and writing part. The resulting file is identical to the input file. I use the AudioFileReadBytes() and AudioFileWriteBytes() functions of Audio File Services in the AudioToolbox framework to do that.
So, I have the input file's bytes and also its audio data format (via AudioFileGetProperty() with kAudioFilePropertyDataFormat), but I can't find a variable in these to play with the original file's maximum volume level.
To clarify my purpose, I'm trying to produce another sound file of which volume level is increased or decreased relative to the original one, so I don't care about the system's volume level which is set by the user or iOS.
Is that possible to do with the framework I mentioned? If not, are there any alternative suggestions?
Thanks
edit:
Walking through Sam's answer regarding some audio basics, I decided to expand the question with another alternative.
Can I use Audio Queue Services to record an existing sound file (which is in the bundle) to another file, and play with the volume level (with the help of the framework) during the recording phase?
update:
Here's how I'm reading the input file and writing the output. The code below lowers the sound level for "some" of the amplitude values, but with lots of noise. Interestingly, if I choose 0.5 as the amplitude value it increases the sound level instead of lowering it, but when I use 0.1 as the amplitude value it lowers the sound. Both cases involve disturbing noise. I think that's why Art is talking about normalization, but I have no idea about normalization.
AudioFileID inFileID;
CFURLRef inURL = [self inSoundURL];
AudioFileOpenURL(inURL, kAudioFileReadPermission, kAudioFileWAVEType, &inFileID);
UInt32 fileSize = [self audioFileSize:inFileID];
Float32 *inData = malloc(fileSize * sizeof(Float32)); //I used Float32 type with jv42's suggestion
AudioFileReadBytes(inFileID, false, 0, &fileSize, inData);
Float32 *outData = malloc(fileSize * sizeof(Float32));
//Art's suggestion, if I've correctly understood him
float ampScale = 0.5f; //this will reduce the 'volume' by -6db
for (int i = 0; i < fileSize; i++) {
    outData[i] = (Float32)(inData[i] * ampScale);
}
AudioStreamBasicDescription outDataFormat = {0};
[self audioDataFormat:inFileID];
AudioFileID outFileID;
CFURLRef outURL = [self outSoundURL];
AudioFileCreateWithURL(outURL, kAudioFileWAVEType, &outDataFormat, kAudioFileFlags_EraseFile, &outFileID);
AudioFileWriteBytes(outFileID, false, 0, &fileSize, outData);
AudioFileClose(outFileID);
AudioFileClose(inFileID);
You won't find amplitude scaling operations in (Ext)AudioFile, because it's about the simplest DSP you can do.
Let's assume you use ExtAudioFile to convert whatever you read into 32-bit floats. To change the amplitude, you simply multiply:
float ampScale = 0.5f; //this will reduce the 'volume' by -6db
for (int ii = 0; ii < numSamples; ++ii) {
    *sampOut = *sampIn * ampScale;
    sampOut++; sampIn++;
}
To increase the gain, you simply use a scale > 1.f. For example, an ampScale of 2.f would give you +6dB of gain.
If you want to normalize, you have to make two passes over the audio: one to determine the sample with the greatest amplitude, and another to actually apply the computed gain.
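A rough sketch of those two passes over a buffer of float samples (the samples, numSamples, and targetPeak names are assumptions, not part of any API):

// Hedged sketch of two-pass normalization over float samples.
// targetPeak is the peak you want after scaling, e.g. 1.0f for full scale.
#include <math.h>

void NormalizeBuffer(float *samples, int numSamples, float targetPeak)
{
    // Pass 1: find the largest absolute sample value.
    float peak = 0.f;
    for (int i = 0; i < numSamples; ++i) {
        float a = fabsf(samples[i]);
        if (a > peak) peak = a;
    }
    if (peak <= 0.f) return;          // silence: nothing to scale

    // Pass 2: apply the computed gain.
    float gain = targetPeak / peak;
    for (int i = 0; i < numSamples; ++i) {
        samples[i] *= gain;
    }
}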
Using AudioQueue services just to get access to the volume property is serious, serious overkill.
UPDATE:
In your updated code, you're multiplying each byte by 0.5 instead of each sample. Here's a quick-and-dirty fix for your code, but see my notes below. I wouldn't do what you're doing.
...
// create short pointers to our byte data
int16_t *inDataShort = (int16_t *)inData;
int16_t *outDataShort = (int16_t *)outData;   // point at outData (not inData) so the scaled samples are what gets written
int16_t ampScale = 2;
for (int i = 0; i < fileSize / sizeof(int16_t); i++) {   // fileSize is in bytes; step through 16-bit samples
    outDataShort[i] = inDataShort[i] / ampScale;
}
...
Of course, this isn't the best way to do things: it assumes your file is little-endian 16-bit signed linear PCM. (Most WAV files are, but not AIFF, m4a, mp3, etc.) I'd use the ExtAudioFile API instead of the AudioFile API, as it will convert any format you're reading into whatever format you want to work with in code. Usually the simplest thing to do is read your samples in as 32-bit floats. Here's an example of your code using the ExtAudioFile API to handle any input file format, including stereo vs. mono:
void ScaleAudioFileAmplitude(NSURL *theURL, float ampScale) {
    OSStatus err = noErr;

    ExtAudioFileRef audiofile;
    ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
    assert(audiofile);

    // get some info about the file's format.
    AudioStreamBasicDescription fileFormat;
    UInt32 size = sizeof(fileFormat);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

    // we'll need to know what type of file it is later when we write
    AudioFileID aFile;
    size = sizeof(aFile);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
    AudioFileTypeID fileType;
    size = sizeof(fileType);
    err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);

    // tell the ExtAudioFile API what format we want samples back in
    AudioStreamBasicDescription clientFormat;
    bzero(&clientFormat, sizeof(clientFormat));
    clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    clientFormat.mBytesPerFrame = 4;
    clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBitsPerChannel = 32;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mSampleRate = fileFormat.mSampleRate;
    clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // find out how many frames we need to read
    SInt64 numFrames = 0;
    size = sizeof(numFrames);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

    // create the buffers for reading in data
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
    bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
    for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
        bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
        bufferList->mBuffers[ii].mNumberChannels = 1;
        bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
    }

    // read in the data
    UInt32 rFrames = (UInt32)numFrames;
    err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

    // close the file
    err = ExtAudioFileDispose(audiofile);

    // process the audio
    for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
        float *fBuf = (float *)bufferList->mBuffers[ii].mData;
        for (int jj = 0; jj < rFrames; ++jj) {
            *fBuf = *fBuf * ampScale;
            fBuf++;
        }
    }

    // open the file for writing
    err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

    // tell the ExtAudioFile API what format we'll be sending samples in
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // write the data
    err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

    // close the file
    ExtAudioFileDispose(audiofile);

    // destroy the buffers
    for (int ii = 0; ii < bufferList->mNumberBuffers; ++ii) {
        free(bufferList->mBuffers[ii].mData);
    }
    free(bufferList);
    bufferList = NULL;
}
I think you should avoid working with 8-bit unsigned chars for audio, if you can.
Try to get the data as 16-bit or 32-bit samples; that would avoid some noise/bad-quality issues.
For most common audio file formats there isn't a single master volume variable. Instead you will need to take (or convert to) the PCM sound samples and perform at least some minimal digital signal processing (multiply, saturate/limit/AGC, quantization noise shaping, etc.) on each sample.
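For example, the multiply-and-saturate step for 16-bit samples might look roughly like this (a sketch, not a library call):

// Hedged sketch: scale a 16-bit sample in a wider type, then clamp to the
// SInt16 range so large gains saturate instead of wrapping around.
static inline SInt16 ScaleAndClipSample(SInt16 sample, float gain)
{
    float scaled = sample * gain;
    if (scaled >  32767.f) scaled =  32767.f;
    if (scaled < -32768.f) scaled = -32768.f;
    return (SInt16)scaled;
}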
If the sound file is normalized, there's nothing you can do to make the file louder. Except in the case of poorly encoded audio, volume is almost entirely the realm of the playback engine.
http://en.wikipedia.org/wiki/Audio_bit_depth
Properly stored audio files will have peak volume at or near the maximum value available for the file's bit depth. If you attempt to 'decrease the volume' of a sound file, you'll essentially just be degrading the sound quality.

Is kAudioFormatFlagIsFloat supported on iPhoneOS?

I am writing an iPhone app that records and plays audio simultaneously using the I/O audio unit as per Apple's recommendations.
I want to apply some sound effects (reverb, etc) on the recorded audio before playing it back. For these effects to work well, I need the samples to be floating point numbers, rather than integers. It seems this should be possible, by creating an AudioStreamBasicDescription with kAudioFormatFlagIsFloat set on mFormatFlags. This is what my code looks like:
AudioStreamBasicDescription streamDescription;
streamDescription.mSampleRate = 44100.0;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat;
streamDescription.mBitsPerChannel = 32;
streamDescription.mBytesPerFrame = 4;
streamDescription.mBytesPerPacket = 4;
streamDescription.mChannelsPerFrame = 1;
streamDescription.mFramesPerPacket = 1;
streamDescription.mReserved = 0;
OSStatus status;
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
if (status != noErr)
fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned status %ld\n", status);
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription));
if (status != noErr)
fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned status %ld\n", status);
However, when I run this (on an iPhone 3GS running iPhoneOS 3.1.3), I get this:
AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned error -10868
AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned error -10868
(-10868 is the value of kAudioUnitErr_FormatNotSupported)
I didn't find anything of value in Apple's documentation, apart from a recommendation to stick to 16 bit little-endian integers. However, the aurioTouch example project contains at least some support code related to kAudioFormatFlagIsFloat.
So, is my stream description incorrect, or is kAudioFormatFlagIsFloat simply not supported on iPhoneOS?
It's not supported, as far as I know. You can pretty easily convert to floats, though, using AudioConverter. I do this conversion (both ways) in real time to use the Accelerate framework with iOS audio. (Note: this code is copied and pasted from more modular code, so there may be some minor typos.)
First, you'll need the AudioStreamBasicDescription from the input. Say
AudioStreamBasicDescription aBasicDescription = {0};
aBasicDescription.mSampleRate = self.samplerate;
aBasicDescription.mFormatID = kAudioFormatLinearPCM;
aBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
aBasicDescription.mFramesPerPacket = 1;
aBasicDescription.mChannelsPerFrame = 1;
aBasicDescription.mBitsPerChannel = 8 * sizeof(SInt16);
aBasicDescription.mBytesPerPacket = sizeof(SInt16) * aBasicDescription.mFramesPerPacket;
aBasicDescription.mBytesPerFrame = sizeof(SInt16) * aBasicDescription.mChannelsPerFrame;
Then, generate a corresponding AudioStreamBasicDescription for float.
AudioStreamBasicDescription floatDesc = {0};
floatDesc.mFormatID = kAudioFormatLinearPCM;
floatDesc.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
floatDesc.mBitsPerChannel = 8 * sizeof(float);
floatDesc.mFramesPerPacket = 1;
floatDesc.mChannelsPerFrame = 1;
floatDesc.mBytesPerPacket = sizeof(float) * floatDesc.mFramesPerPacket;
floatDesc.mBytesPerFrame = sizeof(float) * floatDesc.mChannelsPerFrame;
floatDesc.mSampleRate = [controller samplerate];
Make some buffers.
UInt32 intSize = inNumberFrames * sizeof(SInt16);
UInt32 floatSize = inNumberFrames * sizeof(float);
float *dataBuffer = (float *)calloc(inNumberFrames, sizeof(float));
Then convert. (ioData is your AudioBufferList containing the int audio)
AudioConverterRef converter;
OSStatus err = noErr;
err = AudioConverterNew(&aBasicDescription, &floatDesc, &converter);
//check for error here in "real" code
err = AudioConverterConvertBuffer(converter, intSize, ioData->mBuffers[0].mData, &floatSize, dataBuffer);
//check for error here in "real" code
//do stuff to dataBuffer, which now contains floats
//convert the floats back by running the conversion the other way
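The reverse direction is symmetrical; a rough sketch, reusing the two descriptions above (the second converter and the cleanup calls are my additions, not part of the original snippet):

// Hedged sketch: a second converter going float -> SInt16, writing the
// processed samples back into the AudioBufferList.
AudioConverterRef backConverter;
err = AudioConverterNew(&floatDesc, &aBasicDescription, &backConverter);
//check for error here in "real" code
UInt32 outIntSize = intSize;
err = AudioConverterConvertBuffer(backConverter, floatSize, dataBuffer,
                                  &outIntSize, ioData->mBuffers[0].mData);
//check for error here in "real" code
AudioConverterDispose(converter);
AudioConverterDispose(backConverter);
free(dataBuffer);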
I'm doing something unrelated to AudioUnits but I am using AudioStreamBasicDescription on iOS. I was able to use float samples by specifying:
dstFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;
The book Learning Core Audio: A Hands-on Guide to Audio Programming for Mac and iOS was helpful for this.
It is supported.
The problem is you must also set kAudioFormatFlagIsNonInterleaved on mFormatFlags. If you don't do this when setting kAudioFormatFlagIsFloat, you will get a format error.
So, you want to do something like this when preparing your AudioStreamBasicDescription:
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat |
kAudioFormatFlagIsNonInterleaved;
As for why iOS requires this, I'm not sure - I only stumbled across it via trial and error.
From the Core Audio docs:
kAudioFormatFlagIsFloat
Set for floating point, clear for integer.
Available in iPhone OS 2.0 and later.
Declared in CoreAudioTypes.h.
I don't know enough about your stream to comment on its [in]correctness.
You can obtain an interleaved float RemoteIO with the following ASBD setup:
// STEREO_CHANNEL = 2, defaultSampleRate = 44100
AudioStreamBasicDescription const audioDescription = {
.mSampleRate = defaultSampleRate,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsFloat,
.mBytesPerPacket = STEREO_CHANNEL * sizeof(float),
.mFramesPerPacket = 1,
.mBytesPerFrame = STEREO_CHANNEL * sizeof(float),
.mChannelsPerFrame = STEREO_CHANNEL,
.mBitsPerChannel = 8 * sizeof(float),
.mReserved = 0
};
This worked for me.

ffmpeg audio and the iPhone

Has anyone been able to make ffmpeg work with Audio Queues? I get an error when I try to create the queue.
ret = avcodec_open(enc, codec);
if (ret < 0) {
    NSLog(@"Error: Could not open video decoder: %d", ret);
    av_close_input_file(avfContext);
    return;
}
if (audio_index >= 0) {
    AudioStreamBasicDescription audioFormat;
    audioFormat.mFormatID = -1;
    audioFormat.mSampleRate = avfContext->streams[audio_index]->codec->sample_rate;
    audioFormat.mFormatFlags = 0;
    switch (avfContext->streams[audio_index]->codec->codec_id) {
        case CODEC_ID_MP3:
            audioFormat.mFormatID = kAudioFormatMPEGLayer3;
            break;
        case CODEC_ID_AAC:
            audioFormat.mFormatID = kAudioFormatMPEG4AAC;
            audioFormat.mFormatFlags = kMPEG4Object_AAC_Main;
            break;
        case CODEC_ID_AC3:
            audioFormat.mFormatID = kAudioFormatAC3;
            break;
        default:
            break;
    }
    if (audioFormat.mFormatID != -1) {
        audioFormat.mBytesPerPacket = 0;
        audioFormat.mFramesPerPacket = avfContext->streams[audio_index]->codec->frame_size;
        audioFormat.mBytesPerFrame = 0;
        audioFormat.mChannelsPerFrame = avfContext->streams[audio_index]->codec->channels;
        audioFormat.mBitsPerChannel = 0;
        if (ret = AudioQueueNewOutput(&audioFormat, audioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue)) {
            NSLog(@"Error creating audio output queue: %d", ret);
        }
The issue is only with the audio; the video is perfect, if only I can figure out how to get Audio Queues to work.
http://web.me.com/cannonwc/Site/Photos_6.html
I thought of RemoteIO but there isn't much documentation on that.
I will share the code for the complete class with anyone that helps me get it to work.
The idea is to have a single view controller that plays any streaming video passed to it, similar to ffplay on the iPhone but without the SDL overhead.
You could very well be missing some important specifications in the AudioStreamBasicDescription structure: I don't know about ffmpeg, but specifying zero bytes per frame and zero bytes per packet won't work ;)
Here is how I would fill the structure, given the sample rate, the audio format, the number of channels, and the bits per sample:
iAqc.mDataFormat.mSampleRate = iSampleRate;
iAqc.mDataFormat.mFormatID = kAudioFormatLinearPCM;
iAqc.mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iAqc.mDataFormat.mBytesPerPacket = (iBitsPerSample >> 3) * iNumChannels;
iAqc.mDataFormat.mFramesPerPacket = 1;
iAqc.mDataFormat.mBytesPerFrame = (iBitsPerSample >> 3) * iNumChannels;
iAqc.mDataFormat.mChannelsPerFrame = iNumChannels;
iAqc.mDataFormat.mBitsPerChannel = iBitsPerSample;
I assume here you are writing PCM samples to the audio device.
As long as you know the audio format you are working with, there should be no problem adapting it: the important thing is to remember what all this stuff means.
Here I'm working with one sample frame per packet, so the number of bytes per packet coincides with the number of bytes per sample frame.
Most of the problems arise because terms such as "sample" and "sample frame" get used in the wrong contexts: a sample frame can be thought of as the atomic unit of audio data that embraces all the available channels, while a sample refers to a single sub-unit of data composing the sample frame.
For example, if you have an audio stream of 2 channels with a resolution of 16 bits per sample, a sample will be 2 bytes big (16 bps / 8, or 16 >> 3); the sample frame also takes the number of channels into account, so it will be 4 bytes big (2 bytes x 2 channels).
IMPORTANT
The theory behind this doesn't apply only to the iPhone, but to audio coding in general!
It just happens that Audio Queues ask you for well-defined specifications about your audio stream, and that's good; but you could be asked for bytes instead, so expressing audio data sizes as audio frames is always a good habit: you can always convert your data sizes and be sure about them.
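A trivial sketch of that frames-to-bytes conversion for an interleaved PCM stream described by an AudioStreamBasicDescription:

// Hedged sketch: converting between frames and bytes using the ASBD's
// mBytesPerFrame (bytes per sample frame = bytes per sample * channels).
static inline UInt32 FramesToBytes(const AudioStreamBasicDescription *asbd, UInt32 frames)
{
    return frames * asbd->mBytesPerFrame;
}

static inline UInt32 BytesToFrames(const AudioStreamBasicDescription *asbd, UInt32 bytes)
{
    return bytes / asbd->mBytesPerFrame;
}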