Audio recorded using Audio Queue Services to data - iPhone

I want to transmit voice from one iPhone to another. I have established a connection between the two iPhones over TCP, and I have managed to record voice on the iPhone and play it back using Audio Queue Services. I have also managed to send data between the two iPhones by sending NSData packages.
My next step is to send the audio data to the other iPhone as it is being recorded. I believe I should do this in the AudioInputCallback. My AudioQueueBufferRef is called inBuffer, and it seems that I want to convert inBuffer->mAudioData to NSData, send the NSData to the other device, and then unpack it there.
Does anyone know if this is the right way to do it, and how I can convert inBuffer->mAudioData to NSData? Other approaches are also welcome.
This is my callback method in which I believe I should "grab" the data and send it to the other iPhone:
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
        return;

    OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                            false,
                                            inBuffer->mAudioDataByteSize,
                                            inPacketDescs,
                                            recordState->currentPacket,
                                            &inNumberPacketDescriptions,
                                            inBuffer->mAudioData);
    if (status == 0)
    {
        recordState->currentPacket += inNumberPacketDescriptions;
    }

    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}

You might want to consider copying the audio data (your callback already has the sample pointer and the byte count) from the audio callback into another queue or FIFO, then having a separate networking thread create NSData from the audio bytes and send it. That keeps slow network writes out of the time-sensitive audio callback.
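As for the conversion itself, a minimal sketch, assuming the raw buffer bytes are exactly what you want to transmit:

    // Inside AudioInputCallback: wrap the queue buffer's bytes in an NSData.
    // dataWithBytes:length: copies the bytes, so the queue buffer can be
    // re-enqueued immediately afterwards.
    NSData *audioData = [NSData dataWithBytes:inBuffer->mAudioData
                                       length:inBuffer->mAudioDataByteSize];

On the receiving side, [audioData bytes] and [audioData length] give you back the raw bytes to copy into a playback queue buffer.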

Related

OpenSL ES cannot play audio on the Android emulator

I decode AMR-NB to PCM, then enqueue the PCM buffer (I'm sure the PCM data is correct), but no sound is heard. While feeding buffers, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong with how I use OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while (1)
    {
        // decode AMR to PCM
        /* Convert to little endian and write to wav */
        // write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}

void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    // do nothing
}

void AudioBufferWrite(const void *buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}

// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
                                   SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                   SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
                                                 3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL /*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on every device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback that uses JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack path when the emulator is detected.
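If it helps, one way to detect the emulator from native code is to check the ro.kernel.qemu system property, which the qemu-based emulator sets to 1 (a sketch; how you branch on it is up to your app):

#include <sys/system_properties.h>

// Returns nonzero when running on the qemu-based Android emulator.
static int runningOnEmulator(void)
{
    char value[PROP_VALUE_MAX] = "";
    __system_property_get("ro.kernel.qemu", value);
    return value[0] == '1';
}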

iPhone streaming and playing audio problem

I am trying to make an app that plays an audio stream using ffmpeg and libmms.
I can open the mms server, get the stream, and decode audio frames to raw frames using the suitable codec.
However, I don't know what to do next.
I think I must use AudioToolbox/AudioToolbox.h and create an audio queue,
but when I hand the decode buffer's memory to an audio queue buffer and play it, only white noise comes out.
Here is my code.
What am I missing?
Any comments and hints are very much appreciated.
Thanks very much.
while (av_read_frame(pFormatCtx, &pkt) >= 0)
{
    int pkt_decoded_len = 0;
    int frame_decoded_len;
    int decode_buff_remain = AVCODEC_MAX_AUDIO_FRAME_SIZE * 5;

    if (pkt.stream_index == audiostream)
    {
        frame_decoded_len = decode_buff_remain;
        int16_t *decode_buff_ptr = decode_buffer;
        int decoded_tot_len = 0;
        pkt_decoded_len = avcodec_decode_audio2(pCodecCtx, decode_buff_ptr, &frame_decoded_len,
                                                pkt.data, pkt.size);
        if (pkt_decoded_len < 0) break;

        AudioQueueAllocateBuffer(audioQueue, kBufferSize, &buffers[i]);
        AQOutputCallback(self, audioQueue, buffers[i], pkt_decoded_len);
        if (i == 1) {
            AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);
            AudioQueueStart(audioQueue, NULL);
        }
        i++;
    }
}

void AQOutputCallback(void *inData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, int copySize)
{
    mmsDemoViewController *staticApp = (mmsDemoViewController *)inData;
    [staticApp handleBufferCompleteForQueue:inAQ buffer:inBuffer size:copySize];
}

- (void)handleBufferCompleteForQueue:(AudioQueueRef)inAQ
                              buffer:(AudioQueueBufferRef)inBuffer
                                size:(int)copySize
{
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
    memcpy((char *)inBuffer->mAudioData, (const char *)decode_buffer, copySize);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
You're calling AQOutputCallback incorrectly; you don't need to call that method yourself.
It is called automatically when the audio queue has finished with a buffer.
The prototype of your AQOutputCallback is also wrong; as written, I don't think the queue will ever call it.
An output callback must match AudioQueueOutputCallback:

typedef void (*AudioQueueOutputCallback) (
    void *inUserData,
    AudioQueueRef inAQ,
    AudioQueueBufferRef inBuffer
);

so declare yours like this:

void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);

You should also set up the audio session when your app starts.
The important references are here.
By the way, what format is the audio you are trying to decode?
AudioStreamPacketDescription is important if the audio has a variable number of frames per packet;
otherwise, with one frame per packet, AudioStreamPacketDescription is not significant.
What you should do next is: set up the audio session, get raw audio frames from the decoder, and put those frames into the audio queue's buffers.
Instead of pushing buffers yourself, let the system ask you to fill each empty buffer.
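For the audio session setup mentioned above, a minimal sketch using the AudioSession C API of that era (category choice and error handling kept to a bare minimum):

#include <AudioToolbox/AudioToolbox.h>

// Call once at app startup, before creating the audio queue.
static void SetUpAudioSession(void)
{
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    UInt32 category = kAudioSessionCategory_MediaPlayback;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);
    AudioSessionSetActive(true);
}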

AudioQueue screws up output after modification

I am currently working on an audio DSP app. The project requires direct access to and modification of audio data. Right now I can successfully access and modify the raw audio data using AudioQueue, but I run into errors during playback: the output audio after any modification turns out to be noise.
In short, the code is something like this:
(Modified from the SpeakHere sample code; the rest remains unchanged.)
void AQPlayer::AQBufferCallback(void *inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inCompleteAQBuffer)
{
    AQPlayer *THIS = (AQPlayer *)inUserData;
    if (THIS->mIsDone) return;

    UInt32 numBytes;
    UInt32 nPackets = THIS->GetNumPacketsToRead();
    OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(),
                                           false,
                                           &numBytes,
                                           inCompleteAQBuffer->mPacketDescriptions,
                                           THIS->GetCurrentPacket(),
                                           &nPackets,
                                           inCompleteAQBuffer->mAudioData);
    if (result)
        printf("AudioFileReadPackets failed: %d", (int)result);
    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        inCompleteAQBuffer->mPacketDescriptionCount = nPackets;

        // My modification starts from here
        // Modifying audio data
        SInt16 *testBuffer = (SInt16 *)inCompleteAQBuffer->mAudioData;
        for (int i = 0; i < (inCompleteAQBuffer->mAudioDataByteSize) / sizeof(SInt16); i++)
        {
            //printf("before modification %d", (int)*testBuffer);
            *testBuffer = (SInt16)*testBuffer / 2; // say, some simple modification
            //printf("after modification %d", (int)*testBuffer);
            testBuffer++;
        }
        AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    }
    // ... the rest of the SpeakHere callback is unchanged
}
During debugging, the data in the buffer is displayed as expected, but the actual output is nothing but noise.
Here are some other strange behaviors of the code that are driving the whole team crazy:
If there is no change to the data (adding/subtracting 0, multiplying by 1), or the whole buffer is assigned a constant (say 0, which mutes the audio), playback behaves normally (of course!). But if I do anything more than that, the output still turns out to be noise.
When I hardcode a single tone as test audio, the output noise spreads into the other channel as well.
So where is the bug in this code? Or, if I am on the wrong track, what is the correct approach to modify the audio data and play it back correctly? Any insight will be sincerely appreciated.
Thank you very much :-)
Cheers,
Manca
Are you SURE the sample format is SInt16? And how many channels are there? You treat the audio as a single-channel stream of shorts, but suppose the format is actually dual-channel Float32 or similar: modifying it under that assumption would produce exactly the effect you describe, including the noise in the other channel.
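A quick way to rule this out is to check the stream description before touching any samples. A sketch, assuming the AudioStreamBasicDescription the queue was created with is reachable from the callback (the accessor name here is hypothetical):

// Only halve samples when the format really is 16-bit signed integer PCM.
AudioStreamBasicDescription fmt = THIS->GetDataFormat(); // hypothetical accessor
bool isSInt16PCM = (fmt.mFormatID == kAudioFormatLinearPCM)
                && (fmt.mFormatFlags & kAudioFormatFlagIsSignedInteger)
                && (fmt.mBitsPerChannel == 16);
if (isSInt16PCM) {
    SInt16 *samples = (SInt16 *)inCompleteAQBuffer->mAudioData;
    UInt32 count = inCompleteAQBuffer->mAudioDataByteSize / sizeof(SInt16);
    // Channels are interleaved, so this loop covers every channel.
    for (UInt32 i = 0; i < count; i++)
        samples[i] /= 2;
}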

Audio data streaming latency issue on iPhone

I have written a voice-streaming application for iPhone using AudioQueue. When audio recording starts, I initiate the network connection and pass the NSOutputStream instance to AudioInputCallback through the inUserData reference.
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording) {
        NSLog(@"Record ending...");
    }
    else {
        [recordState->soStream write:inBuffer->mAudioData maxLength:inBuffer->mAudioDataByteSize];
        NSLog([NSString stringWithFormat:@"Count:%d Size:%d\n", sentCnt++, inBuffer->mAudioDataByteSize]);
    }
    recordState->currentPacket += inNumberPacketDescriptions;
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
According to the init parameters of the AudioQueue, the length of inBuffer is 16000 bytes. Over Wi-Fi the application works without any problem, but over a 3G network the client-server communication is not stable.
Has anybody had the same experience, or can someone suggest a tip to solve this?
One way to fix this would be to push each audio buffer (16000 bytes) onto a queue and have another thread dequeue the buffers over time and send them to the server.
But can anybody tell me how to synchronize one queue between two threads?
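A minimal sketch of such a producer/consumer queue using NSCondition (the class name AudioChunkQueue is made up, and it uses manual retain/release, as was usual at the time):

@interface AudioChunkQueue : NSObject {
    NSMutableArray *chunks;
    NSCondition *condition;
}
- (void)push:(NSData *)chunk;   // called from the audio callback
- (NSData *)pop;                // called from the network thread; blocks
@end

@implementation AudioChunkQueue
- (id)init {
    if ((self = [super init])) {
        chunks = [[NSMutableArray alloc] init];
        condition = [[NSCondition alloc] init];
    }
    return self;
}
- (void)push:(NSData *)chunk {
    [condition lock];
    [chunks addObject:chunk];
    [condition signal];   // wake a waiting pop
    [condition unlock];
}
- (NSData *)pop {
    [condition lock];
    while ([chunks count] == 0)
        [condition wait]; // releases the lock while waiting
    NSData *chunk = [[[chunks objectAtIndex:0] retain] autorelease];
    [chunks removeObjectAtIndex:0];
    [condition unlock];
    return chunk;
}
@end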

How to listen to that property?

@constant kAudioSessionProperty_AudioInputAvailable
A UInt32 with a value other than zero when audio input is available.
Use this property, rather than the device model, to determine if audio input is available.
A listener will notify you when audio input becomes available. For instance, when a headset is attached
to the second generation iPod touch, audio input becomes available via the wired microphone.
So, if I wanted to get notified about kAudioSessionProperty_AudioInputAvailable, how would I do that?
You set up the listener like this:
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, myCallback, NULL);
You have to define a callback function which gets called whenever the value changes:
void myCallback(void *inClientData, AudioSessionPropertyID inID, UInt32 inDataSize, const void *inData)
{
    printf("value changed\n");
}
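Inside the callback, inData points to the new value; for this property it is a UInt32, so you can read it like this (a small sketch):

void myCallback(void *inClientData, AudioSessionPropertyID inID, UInt32 inDataSize, const void *inData)
{
    if (inID == kAudioSessionProperty_AudioInputAvailable && inDataSize == sizeof(UInt32)) {
        UInt32 available = *(const UInt32 *)inData;
        printf("audio input %s\n", available ? "available" : "unavailable");
    }
}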