aurioTouch2 recording issue: I need to add data from one AudioBufferList to another - iPhone

I am investigating the aurioTouch2 sample code, but I want to record everything to a file, and aurioTouch doesn't provide this capability. I tried to record data using the following code in void FFTBufferManager::GrabAudioData(AudioBufferList *inBL) in FFTBufferManager.cpp:
ExtAudioFileRef cafFile;
AudioStreamBasicDescription cafDesc;
cafDesc.mBitsPerChannel = 16;
cafDesc.mBytesPerFrame = 4;
cafDesc.mBytesPerPacket = 4;
cafDesc.mChannelsPerFrame = 2;
cafDesc.mFormatFlags = 0;
cafDesc.mFormatID = 'ima4';
cafDesc.mFramesPerPacket = 1;
cafDesc.mReserved = 0;
cafDesc.mSampleRate = 44100;
CFStringRef refH;
refH = CFStringCreateWithCString(kCFAllocatorDefault, "/var/mobile/Applications/BD596ECF-A6F2-41EB-B4CE-3A9644B1C26A/Documents/voice2.caff", kCFStringEncodingUTF8);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
refH,
kCFURLPOSIXPathStyle,
false);
OSStatus status = ExtAudioFileCreateWithURL(
destinationURL, // inURL
'caff', // inFileType
&cafDesc, // inStreamDesc
NULL, // inChannelLayout
kAudioFileFlags_EraseFile, // inFlags
&cafFile // outExtAudioFile
); // returns 0xFFFFFFCE (-50, kAudio_ParamError)
ExtAudioFileWrite(cafFile, mNumberFrames, inBL);
And this works well, but I use AudioBufferList *inBL, which holds only a small part of the audio data (about one second). The function is called roughly every second to analyze new audio data from the microphone. So it would be great if I could append the data from one AudioBufferList to another.
Or maybe somebody knows another approach.
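To illustrate what I mean by appending, here is a minimal sketch of one possible approach: memcpy each incoming buffer onto the end of a growing accumulation buffer. The SampleAccumulator type and AppendBufferList function are illustrative only, assuming one interleaved buffer per AudioBufferList:
// Illustrative accumulator: grows as needed and appends the raw bytes
// of each incoming AudioBufferList (assumes mNumberBuffers == 1).
typedef struct {
    Byte   *data;      // accumulated sample bytes
    UInt32  size;      // bytes currently stored
    UInt32  capacity;  // bytes allocated
} SampleAccumulator;

static void AppendBufferList(SampleAccumulator *acc, const AudioBufferList *inBL)
{
    const AudioBuffer *src = &inBL->mBuffers[0];
    if (acc->size + src->mDataByteSize > acc->capacity) {
        // Double the allocation so repeated appends stay cheap.
        acc->capacity = (acc->size + src->mDataByteSize) * 2;
        acc->data = (Byte *)realloc(acc->data, acc->capacity);
    }
    memcpy(acc->data + acc->size, src->mData, src->mDataByteSize);
    acc->size += src->mDataByteSize;
}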

You should set up a new AudioUnit to record the audio (with its own callback function).
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &mAudioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(mAudioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(mAudioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription audioFormat={0};
audioFormat.mSampleRate = kSampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(mAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
status = AudioUnitSetProperty(mAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;
status = AudioUnitSetProperty(mAudioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(mAudioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat:@"%@/output.caf", documentsDirectory];
NSLog(@">>> %@\n", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (__bridge CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileCAFType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &mAudioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(mAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
NSAssert(setupErr == noErr, @"Couldn't set the client data format");
setupErr = ExtAudioFileWriteAsync(mAudioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
CheckError(AudioUnitInitialize(mAudioUnit), "AudioUnitInitialize");
CheckError(AudioOutputUnitStart(mAudioUnit), "AudioOutputUnitStart");
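For completeness, the input callback then renders the new samples into its own buffer and hands them to ExtAudioFileWriteAsync, which is safe to call from the render thread and copies the data before returning. A minimal sketch, assuming the mono 16-bit format above (MyRecorder is an illustrative class name; mAudioUnit and mAudioFileRef are the ivars used earlier):
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    MyRecorder *THIS = (__bridge MyRecorder *)inRefCon; // passed via inputProcRefCon
    // One mono 16-bit buffer; we disabled the unit's own allocation above,
    // so supply our own (a real app would preallocate rather than malloc
    // on the render thread).
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = malloc(inNumberFrames * sizeof(SInt16));
    OSStatus status = AudioUnitRender(THIS->mAudioUnit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, &bufferList);
    if (status == noErr) {
        // ExtAudioFileWriteAsync copies the buffer, so freeing below is safe.
        ExtAudioFileWriteAsync(THIS->mAudioFileRef, inNumberFrames, &bufferList);
    }
    free(bufferList.mBuffers[0].mData);
    return status;
}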

Related

iOS: Audio Unit RemoteIO not working on iPhone

I'm trying to create my own custom sound-effects Audio Unit based on the input from the mic. The application allows simultaneous input/output from the microphone to the speaker. I can apply effects and it works in the simulator, but when I test on the iPhone I can't hear anything. Here is my code, in case anyone can help:
- (id) init{
self = [super init];
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status);
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status);
// Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per frame, thus 2 bytes per frame).
// In practice the buffers contain 512 frames; if this changes, it is handled in processAudio.
tempBuffer.mNumberChannels = 1;
tempBuffer.mDataByteSize = 512 * 2;
tempBuffer.mData = malloc( 512 * 2 );
// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);
return self;
}
This callback is called when new audio data from the microphone is available, but it is never entered when I test on the iPhone:
static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) {
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Then:
// Obtain recorded samples
OSStatus status;
status = AudioUnitRender([iosAudio audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
checkStatus(status);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
I solved my problem. I simply needed to initialize the AudioSession before playing/recording. I did so with the following code:
OSStatus status;
AudioSessionInitialize(NULL, NULL, NULL, self);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
sizeof (sessionCategory),
&sessionCategory);
if (status != kAudioSessionNoError)
{
if (status == kAudioServicesUnsupportedPropertyError) {
NSLog(@"AudioSessionInitialize failed: unsupportedPropertyError");
} else if (status == kAudioServicesBadPropertySizeError) {
NSLog(@"AudioSessionInitialize failed: badPropertySizeError");
} else if (status == kAudioServicesBadSpecifierSizeError) {
NSLog(@"AudioSessionInitialize failed: badSpecifierSizeError");
} else if (status == kAudioServicesSystemSoundUnspecifiedError) {
NSLog(@"AudioSessionInitialize failed: systemSoundUnspecifiedError");
} else if (status == kAudioServicesSystemSoundClientTimedOutError) {
NSLog(@"AudioSessionInitialize failed: systemSoundClientTimedOutError");
} else {
NSLog(@"AudioSessionInitialize failed! %ld", status);
}
}
AudioSessionSetActive(TRUE);
...

AudioUnit Input Samples

So I am having some trouble with my AudioUnit taking in data from the microphone/line-in on iOS. I am able to set everything up to what I think is okay, and it calls my recordingCallback, but the data I get out of the buffer is not correct. It always returns exactly the same thing: mostly zeros and random large numbers. Does anyone know what could be causing this? My code is as follows.
Setting up Audio Unit
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBusNumber,
&flag,
sizeof(flag));
// Disable playback IO
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBusNumber,
&flag,
sizeof(flag));
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked |kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 32;
audioFormat.mBytesPerPacket = 4;
audioFormat.mBytesPerFrame = 4;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBusNumber,
&audioFormat,
sizeof(audioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBusNumber,
&callbackStruct,
sizeof(callbackStruct));
status = AudioUnitInitialize(audioUnit);
Input Callback
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mDataByteSize = 4;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mData = malloc(sizeof(float)*inNumberFrames);
InputAudio *input = (__bridge InputAudio*)inRefCon;
OSStatus status;
status = AudioUnitRender([input audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
float* result = (float*)&bufferList.mBuffers[0].mData;
if (input->counter == 5) {
for (int i = 0;i<inNumberFrames;i++) {
printf("%f ",result[i]);
}
}
input->counter++;
return noErr;
}
Has anyone ever encountered a similar problem, or does anyone see something clearly wrong in my code? Thanks in advance for any help!
I am basing all of it off of Michael Tyson's Core Audio RemoteIO code.
If I remember correctly, the samples you get from the audio buffer in the callback aren't floats, they're SInt16. Try casting the samples like this:
SInt16 *sn16AudioData = (SInt16 *)(bufferList.mBuffers[0].mData);
And these should be the max and min values:
#define sn16_MAX_SAMPLE_VALUE 32767
#define sn16_MIN_SAMPLE_VALUE -32768
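If you need floating-point values for analysis, you can normalize the integers yourself inside your recordingCallback; a small sketch using the defines above:
SInt16 *sn16AudioData = (SInt16 *)(bufferList.mBuffers[0].mData);
for (UInt32 i = 0; i < inNumberFrames; i++) {
    // Map the 16-bit range onto roughly [-1.0, 1.0].
    float sample = (float)sn16AudioData[i] / (float)sn16_MAX_SAMPLE_VALUE;
    // ... analyze sample ...
}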
I was basically trying to do the same thing with very similar code but using an AUGraph. I had the same problem (zeros in my output data from the mic) and could not get it working until I added the line
status = AUGraphConnectNodeInput(graph, ioNode, 1, ioNode, 0);
As you are not using a graph, you will need to call AudioUnitSetProperty() with kAudioUnitProperty_MakeConnection and pass a complete AudioUnitConnection structure.
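Something along these lines should work (a sketch; RemoteIO uses element 1 for input and element 0 for output):
AudioUnitConnection connection;
connection.sourceAudioUnit    = audioUnit; // the same RemoteIO unit
connection.sourceOutputNumber = 1;         // output of the input element (mic side)
connection.destInputNumber    = 0;         // input of the output element (speaker side)
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_MakeConnection,
                              kAudioUnitScope_Input,
                              0, // destination element
                              &connection,
                              sizeof(connection));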

How to Encode AAC data from PCM data in iPhone SDK? (iphone dev/Audio)

I guess "AudioConverterFillComplexBuffer" is the solution.
But I don't know this way is right.
+1. AudioUnit
initialize AudioUnit : "recordingCallback" is callback method.
the output format is PCM.
record to file.( I played the recorded file).
+2. AudioConverter
add "AudioConverterFillComplexBuffer"
I don't know about it well. added,
+3. problem
"audioConverterComplexInputDataProc" method called only one time.
How can I use AudioConverter api?
Attached my code
#import "AACAudioRecorder.h"
#define kOutputBus 0
#define kInputBus 1
@implementation AACAudioRecorder
This is AudioConverterFillComplexBuffer's callback method.
static OSStatus audioConverterComplexInputDataProc( AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription** outDataPacketDescription,
void* inUserData){
ioData = (AudioBufferList*)inUserData;
return 0;
}
This is AudioUnit's callback.
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
@autoreleasepool {
AudioBufferList *bufferList;
AACAudioRecorder *THIS = (AACAudioRecorder *)inRefCon;
OSStatus err = AudioUnitRender(THIS->m_audioUnit,
ioActionFlags,
inTimeStamp, 1, inNumberFrames, bufferList);
if (err) { NSLog(@"%s AudioUnitRender error %d\n", __FUNCTION__, (int)err); return err; }
NSString *recordFile =
[NSTemporaryDirectory() stringByAppendingPathComponent:@"auioBuffer.pcm"];
FILE *fp;
fp = fopen([recordFile UTF8String], "a+");
fwrite(bufferList->mBuffers[0].mData, sizeof(Byte),
bufferList->mBuffers[0].mDataByteSize, fp);
fclose(fp);
[THIS convert:bufferList ioOutputDataPacketSize:&inNumberFrames];
if (err) { NSLog(@"%s : AudioFormat Convert error %d\n", __FUNCTION__, (int)err); }
}
}
return noErr;
}
status check method
static void checkStatus(OSStatus status, const char* str){
if (status != noErr) {
NSLog(#"%s %s error : %ld ",__FUNCTION__, str, status);
}
}
convert method : PCM -> AAC
- (void)convert:(AudioBufferList*)input_bufferList ioOutputDataPacketSize:(UInt32*)packetSizeRef
{
UInt32 size = sizeof(UInt32);
UInt32 maxOutputSize;
AudioConverterGetProperty(m_audioConverterRef,
kAudioConverterPropertyMaximumOutputPacketSize,
&size,
&maxOutputSize);
AudioBufferList *output_bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
output_bufferList->mNumberBuffers = 1;
output_bufferList->mBuffers[0].mNumberChannels = 1;
output_bufferList->mBuffers[0].mDataByteSize = *packetSizeRef * 2;
output_bufferList->mBuffers[0].mData = (AudioUnitSampleType *)malloc(*packetSizeRef * 2);
OSStatus err;
err = AudioConverterFillComplexBuffer(
m_audioConverterRef,
audioConverterComplexInputDataProc,
input_bufferList,
packetSizeRef,
output_bufferList,
NULL
);
if (err) {NSLog(#"%s : AudioFormat Convert error %d\n",__FUNCTION__, (int)err); }
}
This is the initialize method.
- (void)initialize
{
// ...
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &m_audioUnit);
checkStatus(status,"AudioComponentInstanceNew");
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
checkStatus(status,"Enable IO for recording");
// Enable IO for playback
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
checkStatus(status,"Enable IO for playback");
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status,"Apply format1");
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status,"Apply format2");
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
checkStatus(status,"Set input callback");
// Initialise
status = AudioUnitInitialize(m_audioUnit);
checkStatus(status,"AudioUnitInitialize");
// Set ASBD For converting Output Stream
AudioStreamBasicDescription outputFormat;
memset(&outputFormat, 0, sizeof(outputFormat));
outputFormat.mSampleRate = 44100.00;
outputFormat.mFormatID = kAudioFormatMPEG4AAC;
outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
outputFormat.mFramesPerPacket = 1024;
outputFormat.mChannelsPerFrame = 1;
outputFormat.mBitsPerChannel = 0;
outputFormat.mBytesPerFrame = 0;
outputFormat.mBytesPerPacket = 0;
//Create An Audio Converter
status = AudioConverterNew( &audioFormat, &outputFormat, &m_audioConverterRef );
checkStatus(status,"Create An Audio Converter");
if (m_audioConverterRef) NSLog(@"m_audioConverterRef is created");
}
AudioOutputUnitStart
- (void)StartRecord
{
OSStatus status = AudioOutputUnitStart(m_audioUnit);
checkStatus(status,"AudioOutputUnitStart");
}
AudioOutputUnitStop
- (void)StopRecord
{
OSStatus status = AudioOutputUnitStop(m_audioUnit);
checkStatus(status,"AudioOutputUnitStop");
}
finish
- (void)finish
{
AudioUnitUninitialize(m_audioUnit);
}
@end
It took me a long time to understand AudioConverterFillComplexBuffer, and especially how to use it to convert audio in real-time. I've posted my approach here: How do I use CoreAudio's AudioConverter to encode AAC in real-time?
Reference https://developer.apple.com/library/ios/samplecode/iPhoneACFileConvertTest/Introduction/Intro.html
It demonstrates using the Audio Converter APIs to convert from a PCM audio format to a compressed format including AAC.
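The crucial detail is that the input proc must point ioData at your source PCM and report how many packets it is supplying; reassigning the ioData parameter itself (as in the code above) has no effect on the converter, which is likely why the proc appears to run only once. A minimal sketch (the FillContext struct is illustrative):
// Illustrative context passed to AudioConverterFillComplexBuffer via inUserData.
typedef struct {
    AudioBufferList *sourceBufferList; // PCM captured by the render callback
    UInt32           packetsAvailable; // for PCM, one frame per packet
} FillContext;

static OSStatus audioConverterComplexInputDataProc(AudioConverterRef inAudioConverter,
                                                   UInt32 *ioNumberDataPackets,
                                                   AudioBufferList *ioData,
                                                   AudioStreamPacketDescription **outDataPacketDescription,
                                                   void *inUserData)
{
    FillContext *ctx = (FillContext *)inUserData;
    if (ctx->packetsAvailable == 0) {
        // Nothing left to supply: report zero packets so the converter stops.
        *ioNumberDataPackets = 0;
        return noErr;
    }
    // Hand the converter our PCM buffer; do not reassign ioData itself.
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = ctx->sourceBufferList->mBuffers[0];
    *ioNumberDataPackets = ctx->packetsAvailable;
    ctx->packetsAvailable = 0; // consumed in one call
    return noErr;
}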

iPhone Core Audio data from microphone is NaN

When I receive data from the microphone via Core Audio, sometimes the buffers contain only one sample and sometimes they contain 20 samples. Some of the time the sample values are 0.00000, sometimes they are NaN, and some of the time they are what you would expect.
What is the problem?
Here is my code:
-(void)startListeningWithFrequency:(float)frequency;
{
OSStatus status;
//AudioComponentInstance audioUnit;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew( inputComponent, &audioUnit);
checkStatus(status);
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,kInputBus, &flag, sizeof(flag));
checkStatus(status);
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
checkStatus(status);
//status = AudioUnitSetProperty(audioUnit,
// kAudioUnitProperty_StreamFormat,
// kAudioUnitScope_Input,
// kOutputBus,
// &audioFormat,
// sizeof(audioFormat));
checkStatus(status);
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
kInputBus, &callbackStruct, sizeof(callbackStruct));
checkStatus(status);
/* UInt32 shouldAllocateBuffer = 1;
AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ShouldAllocateBuffer, kAudioUnitScope_Global, 1, &shouldAllocateBuffer, sizeof(shouldAllocateBuffer));
*/
//float bufferLength = 0.005;
//AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferLength), &bufferLength);
status = AudioOutputUnitStart(audioUnit);
}
and the callback:
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
NSLog(#"%d",inNumberFrames);
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
OSStatus status;
status = AudioUnitRender(audioUnit,
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
checkStatus(status);
double *q = (double *)(&bufferList)->mBuffers[0].mData;
for(int i=0; i < strlen((const char *)(&bufferList)->mBuffers[0].mData); i++)
{
//i sometimes doesn't get past 0, sometimes goes into 20s
NSLog(#"%f",q[i]);//returns NaN, 0.00, or some times actual data
}
}
Any help would be appreciated,
Thank you,
nonono
Since you are passing the kAudioFormatFlagIsSignedInteger flag for the stream format, your samples are just that: 16-bit signed integers (SInt16), not floats. You either need to treat the samples that way or use the kAudioFormatFlagIsFloat flag instead (and then use float rather than double as the datatype, AFAIK).
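Concretely, with your current format the read loop in the callback would look something like this (a sketch; note that inNumberFrames, not strlen(), gives the sample count, since audio data is not NUL-terminated text):
SInt16 *samples = (SInt16 *)bufferList.mBuffers[0].mData;
for (UInt32 i = 0; i < inNumberFrames; i++) {
    NSLog(@"%d", samples[i]); // signed 16-bit values in [-32768, 32767]
}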

Play iPhone audio through the bottom ("iPod") speaker

It only plays through the ear speaker (receiver)!
I use Remote IO for playback:
OSStatus status;
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
kInputBus,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
kOutputBus,
&flag,
sizeof(flag));
// Describe format
audioFormat.mSampleRate = 44100;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// Apply format
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
kInputBus,
&audioFormat,
sizeof(audioFormat));
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
kOutputBus,
&audioFormat,
sizeof(audioFormat));
// Set output callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_SetRenderCallback,
//kAudioUnitScope_Global,
kAudioUnitScope_Output,
kOutputBus,
&callbackStruct,
sizeof(callbackStruct));
// Set input callback
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
//kAudioUnitScope_Global,
kAudioUnitScope_Input,
kInputBus,
&callbackStruct,
sizeof(callbackStruct));
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
kInputBus,
&flag,
sizeof(flag));
/* // TODO: Allocate our own buffers if we want */
// Initialise
status = AudioUnitInitialize(audioUnit);
AudioUnitSetParameter(audioUnit, kHALOutputParam_Volume,
kAudioUnitScope_Input, kInputBus,
1, 0);
Before playing the audio file, set the AVAudioSession category to AVAudioSessionCategoryPlayback:
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback error:&error];
// Activate the session
[audioSession setActive:YES error:&error];