How to listen to that property? - iphone

kAudioSessionProperty_AudioInputAvailable
A UInt32 with a value other than zero when audio input is available.
Use this property, rather than the device model, to determine if audio input is available.
A listener will notify you when audio input becomes available. For instance, when a headset is attached
to the second generation iPod Touch, audio input becomes available via the wired microphone.
So, if I wanted to get notified about kAudioSessionProperty_AudioInputAvailable, how would I do that?

You set up the listener like this:
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, myCallback, NULL);
You have to define a callback function which gets called whenever the value changes:
void myCallback(void *inClientData, AudioSessionPropertyID inID, UInt32 inDataSize, const void *inData)
{
    printf("value changed\n");
}

Related

OpenSL ES can not play audio on Android emulator

I decode AMR-NB to PCM, then enqueue the resulting PCM buffer (I'm sure the PCM data is correct), but no sound is heard. While feeding buffers, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong when I use the OpenSL ES?
Is it true that OpenSL ES only works on the real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while (1)
    {
        // decode AMR to PCM
        /* Convert to little endian and write to wav */
        // write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}

void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    // do nothing
}

void AudioBufferWrite(const void *buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}
// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
                                   SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                   SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
                                                 3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL /*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct in that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on any device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback to use JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
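Independent of the emulator question, note that the snippet above discards every SLresult. A minimal check like the sketch below (CheckResult is just an illustrative helper, not part of OpenSL ES) would at least show which setup step fails:

#include <SLES/OpenSLES.h>
#include <android/log.h>

// Hypothetical helper: log and bail out when an OpenSL ES call fails.
static int CheckResult(SLresult result, const char *what)
{
    if (result != SL_RESULT_SUCCESS) {
        __android_log_print(ANDROID_LOG_ERROR, "SlesPlayer", "%s failed: %u",
                            what, (unsigned)result);
        return 0;
    }
    return 1;
}

// Usage after each call, e.g.:
//   result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);
//   if (!CheckResult(result, "Realize")) return;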

Audio recorded using Audio Queue Services to data

I want to transmit voice from one iPhone to another. I have established a connection between two iPhones using TCP, and I have managed to record voice on the iPhone and play it using Audio Queue Services. I have also managed to send data between the two iPhones by sending NSData packets.
My next step is to send the audio data to the other iPhone as it is being recorded. I believe I should do this in the AudioInputCallback. My AudioQueueBufferRef is called inBuffer, and it seems that I want to convert inBuffer->mAudioData to NSData, send that NSData to the other device, and then unpack it there.
Does anyone know if this is the way to do it, and how I can convert inBuffer->mAudioData to NSData? Other approaches are also welcome.
This is my callback method in which I believe I should "grab" the data and send it to the other iPhone:
void AudioInputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime, UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
        return;

    OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                            false,
                                            inBuffer->mAudioDataByteSize,
                                            inPacketDescs,
                                            recordState->currentPacket,
                                            &inNumberPacketDescriptions,
                                            inBuffer->mAudioData);
    if (status == 0)
    {
        recordState->currentPacket += inNumberPacketDescriptions;
    }
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
You might want to consider saving the audio data (your example shows the audio sample pointer and the byte count) from the audio callback to another queue or FIFO, then having a separate networking thread create NSData from the audio bytes and sending it.
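For the conversion part of the question, a minimal sketch (assuming the same RecordState as above; pushToSendFIFO is a hypothetical hand-off to the networking thread suggested above, not a real API):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

void AudioInputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime, UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
        return;

    // Copy the samples out of the queue-owned buffer; it is about to be reused.
    NSData *packet = [NSData dataWithBytes:inBuffer->mAudioData
                                    length:inBuffer->mAudioDataByteSize];

    pushToSendFIFO(packet);   // hypothetical: enqueue for the networking thread to send

    // Hand the buffer back to the queue so recording can continue.
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}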

Why might my AudioQueueOutputCallback not be called?

I'm using the Audio Queue Services API to play audio streamed from a server over a TCP socket connection on an iPhone. I can play the buffers that were filled from the socket connection; I just cannot seem to make my AudioQueue call my AudioQueueOutputCallback function, and I'm out of ideas.
High level design
Data is passed to the player from the socket connection, and written
immediately into circular buffers in memory.
As AudioQueueBuffers become available, data is copied from the circular buffers into the
available AudioQueueBuffer, which is immediately re-queued. (Or would be, if my callback happened)
What happens
The buffers are all filled and enqueued successfully, and I hear the audio stream clearly. For testing, I use a large number of buffers (15) and all of them play through seamlessly, but the AudioQueueOutputCallback is never called, so I never re-queue any of those buffers, despite the fact that everything seems to be working perfectly. If I don't wait for my callback, assuming it will never be called, and instead drive the enqueueing of buffers based on the data as it is written, I can play the audio stream indefinitely, reusing and re-enqueueing buffers as if they had been explicitly returned to me by the callback. That fact, that I can play the stream perfectly while reusing buffers as needed, is what confuses me the most. Why isn't the callback being called?
Possibly Relevant Code
The format of the stream is 16-bit linear PCM, 8 kHz, mono:
_streamDescription.mSampleRate = 8000.0f;
_streamDescription.mFormatID = kAudioFormatLinearPCM;
_streamDescription.mBytesPerPacket = 2;
_streamDescription.mFramesPerPacket = 1;
_streamDescription.mBytesPerFrame = sizeof(AudioSampleType);
_streamDescription.mChannelsPerFrame = 1;
_streamDescription.mBitsPerChannel = 8 * sizeof(AudioSampleType);
_streamDescription.mReserved = 0;
_streamDescription.mFormatFlags = (kLinearPCMFormatFlagIsBigEndian |
                                   kLinearPCMFormatFlagIsPacked);
My prototype and implementation of the callback are as follows. Nothing fancy, and pretty much identical to every example I've seen so far:
// Prototype, declared above the class's @implementation
void AQBufferCallback(void *inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inAudioQueueBuffer);

// Definition at the bottom of the file.
void AQBufferCallback(void *inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inAudioQueueBuffer) {
    printf("callback\n");
    [(MyAudioPlayer *)inUserData audioQueue:inAudioQueue didAquireBufferForReuse:inAudioQueueBuffer];
}
I create the AudioQueue like this:
OSStatus status = 0;
status = AudioQueueNewOutput(&_streamDescription,
                             AQBufferCallback,   // <-- Doesn't work...
                             self,
                             CFRunLoopGetCurrent(),
                             kCFRunLoopCommonModes,
                             0,
                             &_audioQueue);
if (status) {
    // This is not called...
    NSLog(@"Error creating new audio output queue: %@", [MyAudioPlayer stringForOSStatus:status]);
    return;
}
And I enqueue buffers like this. At this point, it is known that the local buffer contains the correct amount of data for copying:
memcpy(aqBuffer->mAudioData, localBuffer, kAQBufferSize);
aqBuffer->mAudioDataByteSize = kAQBufferSize;
OSStatus status = AudioQueueEnqueueBuffer(_audioQueue, aqBuffer, 0, NULL);
if (status) {
    // This is also not called.
    NSLog(@"Error enqueueing buffer %@", [MyAudioPlayer stringForOSStatus:status]);
}
Please save me.
Is this executed on the main thread or a background thread? It is probably not good if CFRunLoopGetCurrent() returns the run loop of a thread that could disappear (a thread pool, etc.) or a run loop that doesn't run in kCFRunLoopCommonModes.
Try changing CFRunLoopGetCurrent() to CFRunLoopGetMain(), or make sure AudioQueueNewOutput() and CFRunLoopGetCurrent() are executed on the main thread or on a thread you control that has a proper run loop.
Also try changing self to (void*)self, like this:
status = AudioQueueNewOutput(&_streamDescription,
                             AQBufferCallback,
                             (void*)self,
                             CFRunLoopGetCurrent(),
                             kCFRunLoopCommonModes,
                             0,
                             &_audioQueue);
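Combining both suggestions would look roughly like this (a sketch; CFRunLoopGetMain() makes the callback fire on the main run loop regardless of which thread creates the queue):

status = AudioQueueNewOutput(&_streamDescription,
                             AQBufferCallback,
                             (void*)self,
                             CFRunLoopGetMain(),    // main run loop instead of the current thread's
                             kCFRunLoopCommonModes,
                             0,
                             &_audioQueue);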

iPhone streaming and playing audio problem

I am trying to make an app that plays an audio stream using ffmpeg and libmms.
I can open the mms server, get the stream, and decode audio frames to raw frames using a suitable codec.
However, I don't know what to do next.
I think I must use AudioToolbox/AudioToolbox.h and make an audio queue.
But when I hand the audio queue buffer the decode buffer's memory and play, only white noise is heard.
Here is my code.
What am I missing?
Any comments and hints are very much appreciated.
Thanks very much.
while (av_read_frame(pFormatCtx, &pkt) >= 0)
{
    int pkt_decoded_len = 0;
    int frame_decoded_len;
    int decode_buff_remain = AVCODEC_MAX_AUDIO_FRAME_SIZE * 5;
    if (pkt.stream_index == audiostream)
    {
        frame_decoded_len = decode_buff_remain;
        int16_t *decode_buff_ptr = decode_buffer;
        int decoded_tot_len = 0;
        pkt_decoded_len = avcodec_decode_audio2(pCodecCtx, decode_buff_ptr, &frame_decoded_len,
                                                pkt.data, pkt.size);
        if (pkt_decoded_len < 0) break;
        AudioQueueAllocateBuffer(audioQueue, kBufferSize, &buffers[i]);
        AQOutputCallback(self, audioQueue, buffers[i], pkt_decoded_len);
        if (i == 1) {
            AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);
            AudioQueueStart(audioQueue, NULL);
        }
        i++;
    }
}

void AQOutputCallback(void *inData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, int copySize)
{
    mmsDemoViewController *staticApp = (mmsDemoViewController *)inData;
    [staticApp handleBufferCompleteForQueue:inAQ buffer:inBuffer size:copySize];
}

- (void)handleBufferCompleteForQueue:(AudioQueueRef)inAQ
                              buffer:(AudioQueueBufferRef)inBuffer
                                size:(int)copySize
{
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
    memcpy((char *)inBuffer->mAudioData, (const char *)decode_buffer, copySize);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
You are calling AQOutputCallback incorrectly. You don't need to call that method yourself; it is called automatically when the audio queue has finished with a buffer.
The prototype of AQOutputCallback is also wrong. With your current prototype, I don't think it will ever be called automatically.
The callback type you have to match is:
typedef void (*AudioQueueOutputCallback) (
    void *inUserData,
    AudioQueueRef inAQ,
    AudioQueueBufferRef inBuffer
);
so declare your callback like this:
void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);
You should also set up the Audio Session when your app starts. The important references are here.
By the way, what format is the audio you are trying to decode? AudioStreamPacketDescription matters if the audio has a variable number of frames per packet; if it is one frame per packet, AudioStreamPacketDescription is not significant.
What you should do next is: set the audio session, get raw audio frames from the decoder, and put those frames into the audio buffers.
Then let the system fill the empty buffers, instead of filling them yourself.
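Putting that advice together, a playback callback with the correct three-argument signature would look roughly like this (a sketch only; FillFromDecoder is a hypothetical function that copies the next decoded PCM chunk into the buffer and returns the number of bytes written):

void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    // Refill the buffer the queue has just finished playing.
    UInt32 bytes = FillFromDecoder(inBuffer->mAudioData,
                                   inBuffer->mAudioDataBytesCapacity); // hypothetical
    if (bytes == 0)
        return;   // nothing decoded yet; handle end-of-stream or pause here

    inBuffer->mAudioDataByteSize = bytes;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}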

Audio data streaming having latency issue in iPhone

I have written a voice streaming application for iPhone using AudioQueue. When audio recording starts, I initiate the network connection and pass the NSOutputStream instance to AudioInputCallback using the inUserData reference.
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs) {
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording) {
        NSLog(@"Record ending...");
    }
    else {
        [recordState->soStream write:inBuffer->mAudioData maxLength:inBuffer->mAudioDataByteSize];
        NSLog(@"Count:%d Size:%d", sentCnt++, inBuffer->mAudioDataByteSize);
    }
    recordState->currentPacket += inNumberPacketDescriptions;
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
According to the init parameters of the AudioQueue, the length of inBuffer is 16000 bytes. Over WiFi the application works without any problem, but over a 3G network the client-server communication is not stable.
Has anybody had the same experience, or can someone suggest a tip to solve this?
One way to fix this would be to push each audio buffer (16000 bytes) onto a queue and have another thread dequeue the buffers over time and send them to the server, as sketched below.
But can anybody tell me how to synchronize one queue between two threads?
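One common pattern (a sketch only, assuming ARC; the names are illustrative and not from the original post) is an NSMutableArray guarded by an NSCondition: the audio callback appends a copy of each 16000-byte buffer, and a dedicated sender thread blocks until data is available and then writes it to the stream.

#import <Foundation/Foundation.h>

static NSMutableArray *fifo;        // shared packet queue
static NSCondition *fifoCondition;  // guards access to 'fifo'

void SetupSendFIFO(void)
{
    fifo = [[NSMutableArray alloc] init];
    fifoCondition = [[NSCondition alloc] init];
}

// Producer side, called from the audio input callback.
void EnqueuePacket(NSData *packet)
{
    [fifoCondition lock];
    [fifo addObject:packet];
    [fifoCondition signal];          // wake the sender thread
    [fifoCondition unlock];
}

// Consumer side, running on its own thread.
void SenderLoop(NSOutputStream *stream)
{
    for (;;) {
        [fifoCondition lock];
        while ([fifo count] == 0)
            [fifoCondition wait];    // releases the lock while waiting
        NSData *packet = [fifo objectAtIndex:0];
        [fifo removeObjectAtIndex:0];
        [fifoCondition unlock];

        [stream write:(const uint8_t *)[packet bytes] maxLength:[packet length]];
    }
}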