iPhone streaming and playing audio problem

I am trying to make an app that plays an audio stream using ffmpeg and libmms.
I can open the MMS server, get the stream, and decode audio frames to raw frames using a suitable codec.
However, I don't know what to do next.
I think I must use AudioToolbox/AudioToolbox.h and create an audio queue,
but when I hand the decode buffer's memory to the AudioQueueBuffer and play it, I hear only white noise.
Here is my code.
What am I missing?
Any comments and hints are very much appreciated.
Thanks very much.
while (av_read_frame(pFormatCtx, &pkt) >= 0)
{
    int pkt_decoded_len = 0;
    int frame_decoded_len;
    int decode_buff_remain = AVCODEC_MAX_AUDIO_FRAME_SIZE * 5;

    if (pkt.stream_index == audiostream)
    {
        frame_decoded_len = decode_buff_remain;
        int16_t *decode_buff_ptr = decode_buffer;
        int decoded_tot_len = 0;
        pkt_decoded_len = avcodec_decode_audio2(pCodecCtx, decode_buff_ptr, &frame_decoded_len,
                                                pkt.data, pkt.size);
        if (pkt_decoded_len < 0) break;

        AudioQueueAllocateBuffer(audioQueue, kBufferSize, &buffers[i]);
        AQOutputCallback(self, audioQueue, buffers[i], pkt_decoded_len);
        if (i == 1) {
            AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);
            AudioQueueStart(audioQueue, NULL);
        }
        i++;
    }
}
void AQOutputCallback(void *inData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, int copySize)
{
    mmsDemoViewController *staticApp = (mmsDemoViewController *)inData;
    [staticApp handleBufferCompleteForQueue:inAQ buffer:inBuffer size:copySize];
}

- (void)handleBufferCompleteForQueue:(AudioQueueRef)inAQ
                              buffer:(AudioQueueBufferRef)inBuffer
                                size:(int)copySize
{
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
    memcpy((char *)inBuffer->mAudioData, (const char *)decode_buffer, copySize);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

You are calling AQOutputCallback incorrectly. You don't need to call that function yourself;
it is called automatically when the audio queue has finished with a buffer.
Also, the prototype of AQOutputCallback is wrong.
As your code stands, I don't think it will ever be called automatically.
Your callback must match this typedef:

typedef void (*AudioQueueOutputCallback)(
    void *inUserData,
    AudioQueueRef inAQ,
    AudioQueueBufferRef inBuffer
);

so declare it like this:

void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);
You should also set up the audio session when your app starts.
The important references are here.
By the way, what audio format are you trying to decode?
AudioStreamPacketDescription matters if the audio has a variable number of frames per packet;
otherwise, with one frame per packet, it is not significant.
What you do next is: set up the audio session, get raw audio frames from the decoder, and put the frames into the audio queue buffers.
Then let the system fill the empty buffers through the callback, instead of filling them yourself.
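
To make that pull model concrete, here is a minimal sketch, assuming 16-bit PCM output and a hypothetical helper FillWithDecodedPCM() that copies whatever decoded samples your ffmpeg loop has produced (neither the helper nor StartPlayback comes from the question):

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical helper (not from the question): copies up to `capacity`
// bytes of decoded PCM into dst and returns the number of bytes written.
extern UInt32 FillWithDecodedPCM(void *dst, UInt32 capacity);

static void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
{
    UInt32 filled = FillWithDecodedPCM(inBuffer->mAudioData,
                                       inBuffer->mAudioDataBytesCapacity);
    inBuffer->mAudioDataByteSize = filled;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static void StartPlayback(const AudioStreamBasicDescription *fmt)
{
    AudioQueueRef queue;
    AudioQueueNewOutput(fmt, AudioQueueCallback, NULL /* user data */,
                        NULL, NULL, 0, &queue);

    // Prime a few buffers once; after that the queue invokes the
    // callback itself whenever a buffer has finished playing.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, 16 * 1024, &buf);
        AudioQueueCallback(NULL, queue, buf);
    }
    AudioQueueSetParameter(queue, kAudioQueueParam_Volume, 1.0);
    AudioQueueStart(queue, NULL);
}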

Related

OpenSL ES cannot play audio on the Android emulator

I decode AMR-NB to PCM, then enqueue the PCM buffer (I'm sure the PCM data is correct), but no sound is heard. While feeding buffers, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Am I doing something wrong in the way I use OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while (1)
    {
        // decode AMR to PCM
        /* Convert to little endian and write to wav */
        // write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}

void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    // do nothing
}

void AudioBufferWrite(const void *buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}
// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
                                   SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                   SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
                                                 3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL /*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on every device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback that uses JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
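
One common way to detect the emulator from native code is the qemu system property. This is a hedged sketch of that check, not the exact code from my app, and ro.kernel.qemu is an implementation detail rather than a guaranteed API:

#include <string.h>
#include <sys/system_properties.h>

// Returns nonzero when running under the emulator (qemu).
static int running_on_emulator(void)
{
    char value[PROP_VALUE_MAX] = "";
    __system_property_get("ro.kernel.qemu", value);
    return strcmp(value, "1") == 0;
}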

Audio recorded using Audio Queue Services to data

I want to transmit voice from one iPhone to another. I have established a connection between two iPhones using TCP, and I have managed to record voice on the iPhone and play it back using Audio Queue Services. I have also managed to send data between the two iPhones by sending NSData packages.
My next step is to send the audio data to the other iPhone as it is being recorded. I believe I should do this in the AudioInputCallback. My AudioQueueBufferRef is called inBuffer, and it seems that I want to convert inBuffer->mAudioData to NSData, send that to the other device, and then unpack it there.
Does anyone know if this is the way to do it, and how I can convert inBuffer->mAudioData to NSData? Other approaches are also welcome.
This is my callback method, in which I believe I should "grab" the data and send it to the other iPhone:
void AudioInputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime, UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
        return;

    OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                            false,
                                            inBuffer->mAudioDataByteSize,
                                            inPacketDescs,
                                            recordState->currentPacket,
                                            &inNumberPacketDescriptions,
                                            inBuffer->mAudioData);
    if (status == 0)
    {
        recordState->currentPacket += inNumberPacketDescriptions;
    }
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
You might want to consider copying the audio data (your example has the audio sample pointer and the byte count) from the audio callback into another queue or FIFO, then having a separate networking thread create NSData from the audio bytes and send it.
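
For the NSData part specifically, a minimal sketch, assuming a hypothetical thread-safe queue object (packetQueue, with a push: method) hung off your RecordState; the copy has to happen before the buffer is re-enqueued, because the audio queue will overwrite it:

// Inside AudioInputCallback, before AudioQueueEnqueueBuffer.
// dataWithBytes:length: copies the bytes, so the buffer can be reused.
NSData *chunk = [NSData dataWithBytes:inBuffer->mAudioData
                               length:inBuffer->mAudioDataByteSize];
[recordState->packetQueue push:chunk];  // networking thread pops and sends

packetQueue and push: are placeholders for whatever FIFO you use; the point is that the copy happens on the audio thread and the network send happens elsewhere.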

how to encode/decode speex with AudioQueue in ios

Does anyone have experience encoding/decoding the Speex audio format with AudioQueue?
I have tried to implement it by editing the SpeakHere sample, but without success.
From the Apple API documentation, AudioQueue can support codecs, but I can't find any sample. Could anyone give me some suggestions? I have already compiled the Speex codec successfully in my project in Xcode 4.
In the Apple sample code "SpeakHere" you can do something like this:
AudioQueueNewInput(&mRecordFormat,
                   MyInputBufferHandler,
                   this /* userData */,
                   NULL /* run loop */,
                   NULL /* run loop mode */,
                   0 /* flags */,
                   &mQueue)
Then, in the "MyInputBufferHandler" function, you can do something like:

[self encoder:(short *)buffer->mAudioData count:buffer->mAudioDataByteSize / sizeof(short)];

The encoder function looks like:
while (count >= samplesPerFrame)
{
    speex_bits_reset(&bits);
    speex_encode_int(enc_state, samples, &bits);

    static const unsigned maxSize = 256;
    char data[maxSize];
    unsigned size = (unsigned)speex_bits_write(&bits, data, maxSize);
    /*
       do something... for example: send to server
    */
    samples += samplesPerFrame;
    count -= samplesPerFrame;
}
This is the general idea. Of course, the details are harder in practice, but you can look at some open-source VoIP projects; maybe they can help you.
Good luck.
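
As a small aside (my addition, not part of the answer above): rather than hard-coding samplesPerFrame, you can ask the encoder for it, using the same enc_state as in the loop:

// Speex reports its frame size in samples; wide-band mode gives 320
// samples per frame, i.e. 640 bytes of 16-bit PCM.
int samplesPerFrame = 0;
speex_encoder_ctl(enc_state, SPEEX_GET_FRAME_SIZE, &samplesPerFrame);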
You can achieve all of that with FFmpeg and then play the result as PCM with AudioQueue.
Building the FFmpeg library is not painless, but the whole decode/encode process is not that hard :)
FFMPEG official site
SPEEX official site
You will have to download the Speex libs and build them yourself, then include them in FFmpeg and build that.
Below is code for capturing audio using AudioQueue and encoding it (wide-band) using Speex.
(For better audio quality, you can encode the data in a separate thread; change your sample size according to your capture format.)
Audio format:

mSampleRate = 16000;
mFormatID = kAudioFormatLinearPCM;
mFramesPerPacket = 1;
mChannelsPerFrame = 1;
mBytesPerFrame = 2;
mBytesPerPacket = 2;
mBitsPerChannel = 16;
mReserved = 0;
mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
Capture callback:

void CAudioCapturer::AudioInputCallback(void *inUserData,
                                        AudioQueueRef inAQ,
                                        AudioQueueBufferRef inBuffer,
                                        const AudioTimeStamp *inStartTime,
                                        UInt32 inNumberPacketDescriptions,
                                        const AudioStreamPacketDescription *inPacketDescs)
{
    CAudioCapturer *This = (CAudioCapturer *)inUserData;
    int len = 640;              // 640 is the frame size for WB in Speex (320 shorts)
    char data[640];
    char encBuffer[640];        // destination for the encoded frame
    char *pSrc = (char *)inBuffer->mAudioData;
    while (len <= inBuffer->mAudioDataByteSize)
    {
        memcpy(data, pSrc, 640);
        int enclen = encode(data, encBuffer);  // enclen bytes of encBuffer are ready to send
        len += 640;
        pSrc += 640;
    }
    AudioQueueEnqueueBuffer(This->m_audioQueue, inBuffer, 0, NULL);
}
Speex encoder:

int encode(char *buffer, char *pDest)
{
    int nbBytes = 0;
    speex_bits_reset(&encbits);
    speex_encode_int(encstate, (short *)buffer, &encbits);
    nbBytes = speex_bits_write(&encbits, pDest, 640 / sizeof(short));
    return nbBytes;
}
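
The snippet above uses encstate and encbits without showing their setup; here is a minimal sketch of the wide-band initialization it appears to assume (the quality value is illustrative):

#include <speex/speex.h>

SpeexBits encbits;
void *encstate;

void InitSpeexEncoder(void)
{
    speex_bits_init(&encbits);
    encstate = speex_encoder_init(&speex_wb_mode);  // wide-band, 16 kHz
    int quality = 8;
    speex_encoder_ctl(encstate, SPEEX_SET_QUALITY, &quality);
}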

AudioQueue screws up output after modification

I am currently working on an audio DSP app. The project requires direct access to and modification of audio data. Right now I can successfully access and modify the raw audio data using AudioQueue, but I run into an error during playback: the output audio after any modification turns out to be noise.
In short, the code is something like this:
(Modified from the SpeakHere sample code. The rest remains unchanged.)
void AQPlayer::AQBufferCallback(void *inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inCompleteAQBuffer)
{
    AQPlayer *THIS = (AQPlayer *)inUserData;
    if (THIS->mIsDone) return;

    UInt32 numBytes;
    UInt32 nPackets = THIS->GetNumPacketsToRead();
    OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(),
                                           false,
                                           &numBytes,
                                           inCompleteAQBuffer->mPacketDescriptions,
                                           THIS->GetCurrentPacket(),
                                           &nPackets,
                                           inCompleteAQBuffer->mAudioData);
    if (result)
        printf("AudioFileReadPackets failed: %d", (int)result);
    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        inCompleteAQBuffer->mPacketDescriptionCount = nPackets;

        // My modification starts from here:
        // modifying audio data
        SInt16 *testBuffer = (SInt16 *)inCompleteAQBuffer->mAudioData;
        for (int i = 0; i < (inCompleteAQBuffer->mAudioDataByteSize) / sizeof(SInt16); i++)
        {
            //printf("before modification %d", (int)*testBuffer);
            *testBuffer = (SInt16)(*testBuffer / 2); // say, some simple modification
            //printf("after modification %d", (int)*testBuffer);
            testBuffer++;
        }
        AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    }
}
During debugging, the data in the buffer is displayed as expected, but the actual output is nothing but noise.
Here are some other strange behaviors of the code that are driving the whole team crazy:
If there is no change to the data (add/subtract 0, multiply by 1), or the whole buffer is assigned a constant (say 0, so the audio is muted), the playback behaves normally (of course!). But if I do anything more than that, the output is still noise.
When I hardcode a single tone as test audio, the output noise spreads into the other channel as well.
So where is the bug in this code? Or, if I am on the wrong track, what is the correct approach to modify the audio data and play it back CORRECTLY? Any insight will be sincerely appreciated.
Thank you very much :-)
Cheers,
Manca
Are you SURE the sample format is SInt16? And how many channels are there? You are treating the audio as a single-channel stream of shorts, but suppose the format is actually dual-channel Float32 or similar and you do the modifications on that: the effect would be exactly what you describe, including the noise in the other channel.
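
A quick way to check (a hedged sketch; the GetAudioFileID accessor follows the SpeakHere-style code above, and AudioFileGetProperty is the standard Audio File Services call):

AudioStreamBasicDescription asbd;
UInt32 size = sizeof(asbd);
AudioFileGetProperty(THIS->GetAudioFileID(), kAudioFilePropertyDataFormat, &size, &asbd);
// For the SInt16 treatment above to be valid, you would expect
// mBitsPerChannel == 16, mChannelsPerFrame == 1, and the
// kLinearPCMFormatFlagIsSignedInteger flag to be set.
printf("fmt '%4.4s' ch %u bits %u flags 0x%x\n",
       (char *)&asbd.mFormatID,
       (unsigned)asbd.mChannelsPerFrame,
       (unsigned)asbd.mBitsPerChannel,
       (unsigned)asbd.mFormatFlags);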

Audio data streaming having latency issue in iPhone

I have written a voice streaming application for iPhone using AudioQueue. When audio recording starts, I initiate the network connection and pass an NSOutputStream instance to AudioInputCallback via the inUserData reference.
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording) {
        NSLog(@"Record ending...");
    }
    else {
        [recordState->soStream write:(const uint8_t *)inBuffer->mAudioData
                           maxLength:inBuffer->mAudioDataByteSize];
        NSLog(@"Count:%d Size:%d", sentCnt++, inBuffer->mAudioDataByteSize);
    }
    recordState->currentPacket += inNumberPacketDescriptions;
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
According to the init parameters of the AudioQueue, the length of inBuffer is 16000 bytes. Over WiFi the application works without any problem, but over a 3G network the client-server communication is not stable.
Has anybody had the same experience, or can someone suggest a tip to solve this?
One way to fix this might be to use a queue to hold the audio buffers (16000 bytes each) and start another thread that dequeues the buffers over time and sends them to the server. But can anybody tell me how to synchronize one queue between two threads? A sketch follows below.
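
For the synchronization question, here is a minimal sketch of a mutex-and-condition-variable FIFO in C (the names are illustrative, not from the question): the audio callback pushes a copy of each 16000-byte buffer, and the sender thread blocks until data is available.

#include <pthread.h>
#include <string.h>

#define CHUNK_SIZE 16000
#define MAX_CHUNKS 32

static char fifo[MAX_CHUNKS][CHUNK_SIZE];
static int  head = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

// Called from the audio callback: copy the chunk and return quickly.
void fifo_push(const void *data)
{
    pthread_mutex_lock(&lock);
    if (count < MAX_CHUNKS) {   // drop the chunk if the FIFO is full
        memcpy(fifo[(head + count) % MAX_CHUNKS], data, CHUNK_SIZE);
        count++;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&lock);
}

// Called from the sender thread: blocks until a chunk is available.
void fifo_pop(void *out)
{
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&nonempty, &lock);
    memcpy(out, fifo[head], CHUNK_SIZE);
    head = (head + 1) % MAX_CHUNKS;
    count--;
    pthread_mutex_unlock(&lock);
}

The sender thread would loop on fifo_pop() and write each chunk to the socket, so a slow 3G link stalls the sender thread rather than the audio callback.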