How to mute output in Apple's aurioTouch (audio session example) application? - iPhone

I have written a breath detector based on Apple's aurioTouch example
application, but I cannot figure out how to set up the audio unit or audio
session so that sound from the audio input is not played back. Right now,
when I blow into the mic, I can hear the breath from the iPhone speaker. How can I prevent this?
Here is Apple's audio session init code:
XThrowIfError(AudioSessionInitialize(NULL, NULL, rioInterruptionListener, self), "couldn't initialize audio session");
XThrowIfError(AudioSessionSetActive(true), "couldn't set audio session active\n");
UInt32 audioCategory = kAudioSessionCategory_RecordAudio;
XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory), "couldn't set audio category");
XThrowIfError(AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self), "couldn't set property listener");
Float32 preferredBufferSize = .005;
XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize), "couldn't set i/o buffer duration");
UInt32 size = sizeof(hwSampleRate);
XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &hwSampleRate), "couldn't get hw sample rate");
XThrowIfError(SetupRemoteIO(rioUnit, inputProc, thruFormat), "couldn't setup remote i/o unit");
dcFilter = new DCRejectionFilter[thruFormat.NumberChannels()];
UInt32 maxFPS;
size = sizeof(maxFPS);
XThrowIfError(AudioUnitGetProperty(rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFPS, &size), "couldn't get the remote I/O unit's max frames per slice");
fftBufferManager = new FFTBufferManager(maxFPS);
l_fftData = new int32_t[maxFPS/2];
XThrowIfError(AudioOutputUnitStart(rioUnit), "couldn't start remote i/o unit");
size = sizeof(thruFormat);
XThrowIfError(AudioUnitGetProperty(rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &thruFormat, &size), "couldn't get the remote I/O unit's output client format");

Many thanks to Tim Bolstad.
His answer:
aurioTouch uses a single callback to pass audio from input to output, so the simplest thing would be to zero
out the ioData structure after you've done your breath detection.
Something like:
for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
Or you could look at the CAPlayThrough example, which has separate callbacks for input and output.
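If you zero the buffers this way, you can also set the render-action flag that advertises silence, so downstream units can skip processing. A minimal sketch, assuming the standard AURenderCallback signature (where ioActionFlags is the second parameter):

// After zeroing ioData's buffers, advertise the silence
// (an optional optimization; not part of the original answer).
*ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;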

Related

How to edit the default instrument of an AUGraph?

I'm working with the MusicPlayer API. I understand that when you load a .mid file in as a sequence, the API creates a default AUGraph for you that includes an AUSampler. This AUSampler uses a simple sine-wave-based instrument to synthesize the notes in the .mid file.
My question is: how does one change the default instrument in the AUSampler? I understand that you can use SoundFont2 (.sf2) files and load them using the AudioUnitSetProperty method. But how does one access this default AUGraph? Do you have to open the graph before you can edit the AudioUnit, or is opening a graph only for editing connections between nodes?
Thanks :)
I've written a tutorial on this, but here's an outline of the process:
Function to load a Sound Font file (taken from the Apple documentation):
- (OSStatus) loadFromDLSOrSoundFont: (NSURL *)bankURL withPatch: (int)presetNumber {
    OSStatus result = noErr;
    // fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL  = (__bridge CFURLRef) bankURL;
    bpdata.bankMSB  = kAUSampler_DefaultMelodicBankMSB;
    bpdata.bankLSB  = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8) presetNumber;
    // set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty([pointer to your AUSampler unit here],
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));
    // check for errors
    NSCAssert (result == noErr,
               @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
               (int) result,
               (const char *)&result);
    return result;
}
Then you need to load the Sound Font from your Resources folder:
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Name of sound font" ofType:@"sf2"]];
// Initialise the sound font
[self loadFromDLSOrSoundFont: (NSURL *)presetURL withPatch: (int)10];
Hope this helps!
You might take a look at the Audiograph example. It doesn't use SoundFonts, but it should give you an idea of how to set up a graph.
When I use the MusicPlayer I always generate the MIDI note data from code/GUI and create the AUGraph (with a mixer) from scratch. There are ways to derive/extract the default generated AUGraph & AUSampler that result from loading a MIDI file (example code below), but I never had success setting a new SoundFont this way. On the other hand, creating the AUGraph from scratch and then loading an .sf2 file works great.
AUGraph graph;
OSStatus result = MusicSequenceGetAUGraph (sequence, &graph);
MusicTrack firstTrack;
result = MusicSequenceGetIndTrack (sequence, 0, &firstTrack);
AUNode myNode;
result = MusicTrackGetDestNode (firstTrack, &myNode);
AudioUnit mySamplerUnit;
result = AUGraphNodeInfo (graph, myNode, 0, &mySamplerUnit);
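On the question of whether you have to open the graph: yes. AUGraphNodeInfo only hands back a usable AudioUnit once the graph has been opened, because AUGraphOpen is what actually instantiates the units. A minimal sketch of guarding the calls above (an assumption: the sequence's graph may or may not already be open):

// Make sure the graph's AudioUnits are instantiated before asking for one.
Boolean isOpen = false;
AUGraphIsOpen (graph, &isOpen);
if (!isOpen)
    AUGraphOpen (graph);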

Write Audio To Disk From IO Unit

Rewriting this question to be a little more succinct.
My problem is that I can't successfully write an audio file to disk from a remote IO unit.
The steps I took were to:
Open an MP3 file and extract its audio into buffers.
Set up an ASBD to use with my graph, based on the properties of the graph.
Set up and run my graph, looping the extracted audio, and sound successfully comes out of the speaker!
What I'm having difficulty with is taking the audio samples from the remote IO callback and writing them to an audio file on disk, which I am using ExtAudioFileWriteAsync for.
The audio file does get written and bears some audible resemblance to the original MP3, but it sounds very distorted.
I'm not sure if the problem is:
A) ExtAudioFileWriteAsync can't write the samples as fast as the IO unit callback provides them, or
B) I have set up the ASBD for the ExtAudioFile reference incorrectly. I wanted to begin by saving a WAV file, and I'm not sure if I have described this properly in the ASBD below.
Secondly, I am uncertain what value to pass for the inChannelLayout property when creating the audio file.
And finally, I am very uncertain about what ASBD to use for kExtAudioFileProperty_ClientDataFormat.
I had been using my stereo stream format, but a closer look at the docs says this must be PCM. Should this be the same format as the output of the remote IO? And if so, was I wrong to set the output format of the remote IO to stereoStreamFormat?
I realize there's an awful lot in this question, but I have a lot of uncertainties that I can't seem to clear up on my own.
setup of the stereo stream format:
- (void) setupStereoStreamFormat
{
    size_t bytesPerSample = sizeof (AudioUnitSampleType);
    stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
    stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
    stereoStreamFormat.mFramesPerPacket  = 1;
    stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
    stereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
    stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
    stereoStreamFormat.mSampleRate       = engineDescribtion.samplerate;
    NSLog (@"The stereo stream format:");
}
setup of the remote IO callback using the stereo stream format:
AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output,
                     masterChannelMixerUnitloop,
                     &stereoStreamFormat,
                     sizeof(stereoStreamFormat));
AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     masterChannelMixerUnitloop,
                     &stereoStreamFormat,
                     sizeof(stereoStreamFormat));
static OSStatus masterChannelMixerUnitCallback(void *inRefCon,
                                               AudioUnitRenderActionFlags *ioActionFlags,
                                               const AudioTimeStamp *inTimeStamp,
                                               UInt32 inBusNumber,
                                               UInt32 inNumberFrames,
                                               AudioBufferList *ioData)
{
    // ref.equnit;
    //AudioUnitRender(engineDescribtion.channelMixers[inBusNumber], ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
    Engine *engine = (Engine *) inRefCon;
    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
    if (engine->isrecording)
    {
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }
    return 0;
}
**the recording setup**
-(void)startrecording
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    destinationFilePath = [[NSString alloc] initWithFormat: @"%@/testrecording.wav", documentsDirectory];
    destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
    OSStatus status;
    // prepare a 16-bit int file format, sample channel count and sample rate
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate       = 44100.0;
    dstFormat.mFormatID         = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags      = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket   = 4;
    dstFormat.mBytesPerFrame    = 4;
    dstFormat.mFramesPerPacket  = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel   = 16;
    dstFormat.mReserved         = 0;
    // create the capture file
    status = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingfileref);
    CheckError(status, "couldn't create audio file");
    // set the capture file's client format to be the canonical format from the graph
    status = ExtAudioFileSetProperty(recordingfileref, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
    CheckError(status, "couldn't set input format");
    ExtAudioFileSeek(recordingfileref, 0);
    isrecording = YES;
    // [documentsDirectory release];
}
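One detail from Apple's ExtendedAudioFile header worth adding here: ExtAudioFileWriteAsync should be primed once, off the real-time thread, before the first call from a render callback. A minimal sketch, assuming it goes at the end of startrecording above:

// Prime the async writer (per ExtendedAudioFile.h): an initial call with
// 0 frames and NULL data allocates its internal buffers, so later calls
// from the render thread never block.
ExtAudioFileWriteAsync(recordingfileref, 0, NULL);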
edit 1
I'm really stabbing in the dark here now, but do I need to use an audio converter, or does kExtAudioFileProperty_ClientDataFormat take care of that?
edit 2
I'm attaching two samples of audio. The first is the original audio that I'm looping and trying to copy. The second is the recorded audio of that loop. Hopefully it might give somebody a clue as to what's going wrong.
Original mp3
Problem recording of mp3
After a couple of days of tears and hair-pulling, I have a solution.
In my code, and in other examples I have seen, ExtAudioFileWriteAsync was called in the callback for the remote IO unit, like so:
**remote IO unit callback**
static OSStatus masterChannelMixerUnitCallback(void *inRefCon,
                                               AudioUnitRenderActionFlags *ioActionFlags,
                                               const AudioTimeStamp *inTimeStamp,
                                               UInt32 inBusNumber,
                                               UInt32 inNumberFrames,
                                               AudioBufferList *ioData)
{
    Engine *engine = (Engine *) inRefCon;
    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
    if (engine->isrecording)
    {
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }
    return 0;
}
In this callback I'm pulling audio data from another audio unit that applies EQs and mixes audio.
I moved the ExtAudioFileWriteAsync call from the remote IO callback to this other callback, which the remote IO unit pulls from, and the file writes successfully!
**the EQ unit's callback function**
static OSStatus outputCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnitRender(engineDescribtion.masterChannelMixerUnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
    // process audio here
    Engine *engine = (Engine *) inRefCon;
    OSStatus s;
    if (engine->isrecording)
    {
        s = ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }
    return noErr;
}
In the interest of fully understanding why my solution worked, could somebody explain to me why writing data to the file from the ioData buffer list of the remote IO unit causes distorted audio, while writing the data one step further down the chain results in perfect audio?

Recording Mono on iPhone in IMA4 format

I'm using the SpeakHere sample app from Apple's developer site to create an audio recording app. I'm attempting to record directly to IMA4 format using the kAudioFormatAppleIMA4 system constant. This is listed as one of the usable formats, but every time I set up my audio format variable and pass it in, I get a 'fmt?' error. Here is the code I use to set up the audio format variable:
#define kAudioRecordingFormat kAudioFormatAppleIMA4
#define kAudioRecordingType kAudioFileCAFType
#define kAudioRecordingSampleRate 16000.00
#define kAudioRecordingChannelsPerFrame 1
#define kAudioRecordingFramesPerPacket 1
#define kAudioRecordingBitsPerChannel 16
#define kAudioRecordingBytesPerPacket 2
#define kAudioRecordingBytesPerFrame 2
- (void) setupAudioFormat: (UInt32) formatID {
    // Obtains the hardware sample rate for use in the recording
    // audio format. Each time the audio route changes, the sample rate
    // needs to get updated.
    UInt32 propertySize = sizeof (self.hardwareSampleRate);
    OSStatus err = AudioSessionGetProperty (
                       kAudioSessionProperty_CurrentHardwareSampleRate,
                       &propertySize,
                       &hardwareSampleRate
                   );
    if (err != 0) {
        NSLog(@"AudioRecorder::setupAudioFormat - error getting audio session property");
    }
    audioFormat.mSampleRate = kAudioRecordingSampleRate;
    NSLog (@"Hardware sample rate = %f", self.audioFormat.mSampleRate);
    audioFormat.mFormatID         = formatID;
    audioFormat.mChannelsPerFrame = kAudioRecordingChannelsPerFrame;
    audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket  = kAudioRecordingFramesPerPacket;
    audioFormat.mBitsPerChannel   = kAudioRecordingBitsPerChannel;
    audioFormat.mBytesPerPacket   = kAudioRecordingBytesPerPacket;
    audioFormat.mBytesPerFrame    = kAudioRecordingBytesPerFrame;
}
And here is where I use that function:
- (id) initWithURL: fileURL {
    NSLog (@"initializing a recorder object.");
    self = [super init];
    if (self != nil) {
        // Specify the recording format. Options are:
        //
        // kAudioFormatLinearPCM
        // kAudioFormatAppleLossless
        // kAudioFormatAppleIMA4
        // kAudioFormatiLBC
        // kAudioFormatULaw
        // kAudioFormatALaw
        //
        // When targeting the Simulator, SpeakHere uses linear PCM regardless of the format
        // specified here. See the setupAudioFormat: method in this file.
        [self setupAudioFormat: kAudioRecordingFormat];
        OSStatus result = AudioQueueNewInput (
                              &audioFormat,
                              recordingCallback,
                              self,   // userData
                              NULL,   // run loop
                              NULL,   // run loop mode
                              0,      // flags
                              &queueObject
                          );
        NSLog (@"Attempted to create new recording audio queue object. Result: %ld", (long) result);
        // get the recording format back from the audio queue's audio converter --
        // the file may require a more specific stream description than was
        // necessary to create the encoder.
        UInt32 sizeOfRecordingFormatASBDStruct = sizeof (audioFormat);
        AudioQueueGetProperty (
            queueObject,
            kAudioQueueProperty_StreamDescription, // this constant is only available in iPhone OS
            &audioFormat,
            &sizeOfRecordingFormatASBDStruct
        );
        AudioQueueAddPropertyListener (
            [self queueObject],
            kAudioQueueProperty_IsRunning,
            audioQueuePropertyListenerCallback,
            self
        );
        [self setAudioFileURL: (CFURLRef) fileURL];
        [self enableLevelMetering];
    }
    return self;
}
Thanks for the help!
-Matt
I'm not sure that all the format flags you're passing are correct; IMA4 (which, IIRC, stands for IMA ADPCM 4:1) is 4-bit (4:1 compression from 16 bits) with some headers.
According to the docs for AudioStreamBasicDescription:
mBytesPerFrame should be 0, since the format is compressed.
mBitsPerChannel should be 0, since the format is compressed.
mFormatFlags should probably be 0, since there is nothing to choose.
According to afconvert -f caff -t ima4 -c 1 blah.aiff blah.caf followed by afinfo blah.caf:
mBytesPerPacket should be 34, and
mFramesPerPacket should be 64. You might be able to set these to 0 instead.
The reference algorithm in the original IMA spec is not that helpful (it's an OCR of scans; the site also has the scans). A consolidated ASBD built from these values is sketched below.
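A minimal sketch combining those values into one ASBD (assumptions: mono at 16 kHz, matching the question's constants; the packet sizes come from the afinfo output above):

AudioStreamBasicDescription ima4Format = {0};
ima4Format.mSampleRate       = 16000.0;               // assumed from the question
ima4Format.mFormatID         = kAudioFormatAppleIMA4;
ima4Format.mFormatFlags      = 0;                     // compressed: nothing to choose
ima4Format.mBytesPerPacket   = 34;                    // per the afinfo output
ima4Format.mFramesPerPacket  = 64;                    // per the afinfo output
ima4Format.mBytesPerFrame    = 0;                     // 0 for compressed formats
ima4Format.mChannelsPerFrame = 1;                     // mono
ima4Format.mBitsPerChannel   = 0;                     // 0 for compressed formats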
On top of what @tc. has already said, it's easier to automatically populate your descriptions based on the format ID using this:
AudioStreamBasicDescription streamDescription;
UInt32 streamDesSize = sizeof(streamDescription);
memset(&streamDescription, 0, streamDesSize);
streamDescription.mFormatID = kAudioFormatiLBC;
OSStatus status;
status = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &streamDesSize, &streamDescription);
assert(status==noErr);
This way you don't need to bother with guessing the characteristics of certain formats. Be warned: although in this example kAudioFormatiLBC didn't need any other additional info, other formats do (usually the number of channels and the sample rate), as sketched below.
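For IMA4, that presumably means pre-filling those two fields before the query; a hypothetical sketch:

AudioStreamBasicDescription ima4Description;
UInt32 descSize = sizeof(ima4Description);
memset(&ima4Description, 0, descSize);
// Pre-fill what IMA4 needs, then let Core Audio complete the rest.
ima4Description.mFormatID         = kAudioFormatAppleIMA4;
ima4Description.mSampleRate       = 16000.0;  // assumed, from the question's constants
ima4Description.mChannelsPerFrame = 1;        // mono, as in the question
OSStatus st = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &descSize, &ima4Description);
assert(st == noErr);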

AAC header and other info on iPhone

I'm building an iPhone application that records sound. I make use of Audio Queue Services, and everything works great for the recording.
The thing is, I'm using AudioFileWritePackets for file writing, and I'm trying to send the same "AAC + ADTS" packets over a network socket.
The resulting stream differs from the file, since some headers (the ADTS headers) appear to be missing. I am searching for ideas on how to write the ADTS and/or AAC header. Could the community assist me with this, or refer me to a guide that demonstrates how to do it?
I currently have my Buffer Handler method:
void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc)
{
    AQRecorder *aqr = (AQRecorder *)inUserData;
    try {
        if (inNumPackets > 0) {
            // write packets to file
            XThrowIfError(AudioFileWritePackets(aqr->mRecordFile,
                                                FALSE,
                                                inBuffer->mAudioDataByteSize,
                                                inPacketDesc,
                                                aqr->mRecordPacket,
                                                &inNumPackets,
                                                inBuffer->mAudioData),
                          "AudioFileWritePackets failed");
            fprintf(stderr, "Writing.");
            // We write the net buffer.
            [aqr->socket_if writeData :(void *)(inBuffer->mAudioData)
                                      :inBuffer->mAudioDataByteSize];
            aqr->mRecordPacket += inNumPackets;
        }
        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (aqr->IsRunning()) {
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL),
                          "AudioQueueEnqueueBuffer failed");
        }
    }
    catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}
I've found the solution for this. I implemented the callback:
XThrowIfError(
    AudioFileInitializeWithCallbacks(
        this,
        nil,
        BufferFilled_callback,
        nil,
        nil,
        //kAudioFileCAFType,
        kAudioFileAAC_ADTSType,
        &mRecordFormat,
        kAudioFileFlags_EraseFile,
        &mRecordFile),
    "InitializeWithCallbacks failed");
... And voilà! The real callback you have to implement is BufferFilled_callback. Here is my implementation:
OSStatus AQRecorder::BufferFilled_callback(void *inUserData,
                                           SInt64 inPosition,
                                           UInt32 requestCount,
                                           const void *buffer,
                                           UInt32 *actualCount)
{
    AQRecorder *aqr = (AQRecorder *)inUserData;
    // You can write these bytes anywhere; you could build a streaming server.
    // Report the bytes as consumed so the audio file keeps writing.
    *actualCount = requestCount;
    return noErr;
}
If you want to see more about Audio Queue Services, you can get some ideas from Flipzu for iPhone (an ex-app for live audio broadcasting; we had to shut it down because we could not raise money).
https://github.com/lucaslain/Flipzu_iPhone
Best,
Lucas.
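For the network side of the original question, each raw AAC packet needs a 7-byte ADTS header prepended before it goes out the socket; kAudioFileAAC_ADTSType adds these for the file, but the socket write above sends bare packets. A minimal sketch of building one header per packet (assumptions: AAC LC, no CRC; the sample-rate index and channel configuration must match your recording format):

#include <stdint.h>
#include <stddef.h>

// Build a 7-byte ADTS header (MPEG-4, AAC LC, no CRC) for one AAC packet.
// sampleRateIndex: 4 = 44.1 kHz, 8 = 16 kHz, etc. (per the ADTS table).
// channelConfig:   1 = mono, 2 = stereo.
// packetLength:    size of the raw AAC packet in bytes (header excluded).
static void MakeADTSHeader(uint8_t header[7],
                           unsigned sampleRateIndex,
                           unsigned channelConfig,
                           size_t packetLength)
{
    size_t frameLength = packetLength + 7;      // length field covers the header too
    header[0] = 0xFF;                           // syncword, high 8 bits
    header[1] = 0xF1;                           // syncword low, MPEG-4, layer 0, no CRC
    header[2] = (uint8_t)((1 << 6)              // profile: AAC LC (object type 2, minus 1)
              | ((sampleRateIndex & 0x0F) << 2)
              | ((channelConfig >> 2) & 0x01));
    header[3] = (uint8_t)(((channelConfig & 0x03) << 6)
              | ((frameLength >> 11) & 0x03));
    header[4] = (uint8_t)((frameLength >> 3) & 0xFF);
    header[5] = (uint8_t)(((frameLength & 0x07) << 5) | 0x1F); // buffer fullness (VBR)
    header[6] = 0xFC;                           // buffer fullness low bits + 1 raw data block
}

Writing one such header before each packet's bytes in MyInputBufferHandler should, in principle, make the socket stream match what an ADTS file contains; when the packets are variable-length, inPacketDesc gives you each packet's byte size.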
I recently encountered this issue with the iLBC codec, and arrived at a solution as follows:
Record the audio data you want and just write it to a file. Then take that file and do an octal dump of it. You can use the -c flag to see ASCII characters.
Then create a separate file that you know doesn't contain the header; this is just your data from the buffers on the audio queue. Octal dump that, and compare.
From this, you should have the header and enough info on how to proceed. Hope this helps.

How to get the uncompressed file size of an MP3 file using CoreAudio API

Using CoreAudio, I am able to get the sample rate (frames per second) and the file size, but in order to get the total duration of the song, I need to know the real (uncompressed) size of that compressed MP3.
AudioStreamBasicDescription asbd;
UInt32 asbdSize = sizeof(asbd);
// get the stream format.
err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &asbdSize, &asbd);
if (err)
{
    [self failWithErrorCode:AS_FILE_STREAM_GET_PROPERTY_FAILED];
    return;
}
sampleRate = asbd.mSampleRate;
Is there any way I can know the real size of the song using Objective-C?
Thanks in advance.
See the answer to this question.
There's a property you can ask for with AudioFileGetProperty, called kAudioFilePropertyEstimatedDuration, that should do the trick; a sketch follows.
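A minimal sketch of querying it (assuming fileURL is a CFURLRef to your MP3; error handling trimmed):

AudioFileID audioFile;
OSStatus err = AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);
if (err == noErr) {
    Float64 estimatedDuration = 0;
    UInt32 propSize = sizeof(estimatedDuration);
    err = AudioFileGetProperty(audioFile,
                               kAudioFilePropertyEstimatedDuration,
                               &propSize,
                               &estimatedDuration);
    if (err == noErr)
        NSLog(@"Estimated duration: %f seconds", estimatedDuration);
    AudioFileClose(audioFile);
}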