AudioQueue does not output any sound - iPhone

I'm having trouble getting any sound output in my iPhone experiment and I'm out of ideas.
Here is my callback that fills the Audio Queue buffer:
void AudioOutputCallback(void *user, AudioQueueRef refQueue, AudioQueueBufferRef inBuffer)
{
NSLog(#"callback called");
inBuffer->mAudioDataByteSize = 1024;
gme_play((Music_Emu*)user, 1024, (short *)inBuffer->mAudioData);
AudioQueueEnqueueBuffer(refQueue, inBuffer, 0, NULL);
}
I set up the audio queue using the following snippet:
// Create stream description
AudioStreamBasicDescription streamDescription;
streamDescription.mSampleRate = 44100;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamDescription.mBytesPerPacket = 1024;
streamDescription.mFramesPerPacket = 1024 / 4;
streamDescription.mBytesPerFrame = 2 * sizeof(short);
streamDescription.mChannelsPerFrame = 2;
streamDescription.mBitsPerChannel = 16;
AudioQueueNewOutput(&streamDescription, AudioOutputCallback, theEmu, NULL, NULL, 0, &theAudioQueue);
OSStatus errorCode = AudioQueueAllocateBuffer(theAudioQueue, 1024, &someBuffer);
if( errorCode )
{
NSLog(#"Cannot allocate buffer");
}
AudioOutputCallback(theEmu, theAudioQueue, someBuffer);
AudioQueueSetParameter(theAudioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(theAudioQueue, NULL);
The library I'm using outputs linear PCM, 16-bit, 44.1 kHz.

I usually use 3 buffers. You need at least 2, because while one is being played, the other is being filled by your code. With only one buffer there isn't enough time to refill and re-enqueue that same buffer and keep playback seamless, so the queue probably just stops because it ran out of buffers.
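A minimal sketch of that priming pattern, reusing theAudioQueue, theEmu, and the 1024-byte buffer size from the question (the count of 3 and the buffers array name are my assumptions):
// Allocate and fill several buffers up front, then start the queue; the queue
// calls AudioOutputCallback on its own each time a buffer finishes playing.
static const int kNumBuffers = 3;
AudioQueueBufferRef buffers[kNumBuffers];
for (int i = 0; i < kNumBuffers; i++)
{
    OSStatus errorCode = AudioQueueAllocateBuffer(theAudioQueue, 1024, &buffers[i]);
    if (errorCode)
    {
        NSLog(@"Cannot allocate buffer %d", i);
        break;
    }
    AudioOutputCallback(theEmu, theAudioQueue, buffers[i]); // fill and enqueue once by hand
}
AudioQueueSetParameter(theAudioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(theAudioQueue, NULL);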

Related

IOSurfaces - Artefacts in video and unable to grab video surfaces

This is a 2-part question. I have the following code working which grabs the current display surface and creates a video out of the surfaces (everything happens in the background).
for(int i=0;i<100;i++){
IOMobileFramebufferConnection connect;
kern_return_t result;
IOSurfaceRef screenSurface = NULL;
io_service_t framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
if(!framebufferService)
framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
if(!framebufferService)
framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &connect);
result = IOMobileFramebufferGetLayerDefaultSurface(connect, 0, &screenSurface);
uint32_t aseed;
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
uint32_t width = IOSurfaceGetWidth(screenSurface);
uint32_t height = IOSurfaceGetHeight(screenSurface);
m_width = width;
m_height = height;
CFMutableDictionaryRef dict;
int pitch = width*4, size = width*height*4;
int bPE=4;
char pixelFormat[4] = {'A','R','G','B'};
dict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(dict, kIOSurfaceIsGlobal, kCFBooleanTrue);
CFDictionarySetValue(dict, kIOSurfaceBytesPerRow, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pitch));
CFDictionarySetValue(dict, kIOSurfaceBytesPerElement, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &bPE));
CFDictionarySetValue(dict, kIOSurfaceWidth, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &width));
CFDictionarySetValue(dict, kIOSurfaceHeight, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &height));
CFDictionarySetValue(dict, kIOSurfacePixelFormat, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, pixelFormat));
CFDictionarySetValue(dict, kIOSurfaceAllocSize, CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &size));
IOSurfaceRef destSurf = IOSurfaceCreate(dict);
IOSurfaceAcceleratorRef outAcc;
IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
CFRelease(outAcc);
// MOST RELEVANT PART OF CODE
CVPixelBufferCreateWithBytes(NULL, width, height, kCVPixelFormatType_32BGRA, IOSurfaceGetBaseAddress(destSurf), IOSurfaceGetBytesPerRow(destSurf), NULL, NULL, NULL, &sampleBuffer);
CMTime frameTime = CMTimeMake(frameCount, (int32_t)5);
[adaptor appendPixelBuffer:sampleBuffer withPresentationTime:frameTime];
CFRelease(sampleBuffer);
CFRelease(destSurf);
frameCount++;
}
P.S.: The last 4-5 lines of code are the most relevant (if you need to filter).
1) The video that is produced has artefacts. I have worked on videos previously and have encountered such an issue before as well. I suppose there can be 2 reasons for this:
i. The PixelBuffer that is passed to the adaptor is getting modified or released before the processing (encoding + writing) is complete. This could be due to asynchronous calls. But I am not sure whether this itself is the problem, or how to resolve it.
ii. The timestamps that are passed are inaccurate (e.g. 2 frames having the same timestamp or a frame having a lower timestamp than the previous frame). I logged out the timestamp values and this doesn't seem to be the problem.
2) The code above is not able to grab surfaces when a video is played or when we play games. All I get is a blank screen in the output. This might be due to hardware accelerated decoding that happens in such cases.
Any input on either of the 2 parts of the question would be really helpful. Also, if you have any good links to read on IOSurfaces in general, please do post them here.
I did a bit of experimentation and concluded that the screen surface from which the content is copied changes even before the transfer of its contents is complete (the call to IOSurfaceAcceleratorTransferSurface()). I am using a lock (I tried both asynchronous and read-only), but it is being overridden by iOS. I reduced the code between the lock/unlock calls to the following minimal version:
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
aseed1 = IOSurfaceGetSeed(screenSurface);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
aseed2 = IOSurfaceGetSeed(screenSurface);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
IOSurfaceGetSeed() tells you whether the contents of the surface have changed. I logged a count of the frames for which the seed changed, and the count was non-zero. So the following check resolved the problem:
if(aseed1 != aseed2){
//Release the created surface
continue; //Do not use this surface/frame since it has artefacts
}
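Put together inside the capture loop it looks roughly like this (releasing destSurf and outAcc is my reading of the "Release the created surface" comment above, not verified code):
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
uint32_t aseed1 = IOSurfaceGetSeed(screenSurface);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, dict, NULL);
uint32_t aseed2 = IOSurfaceGetSeed(screenSurface);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, &aseed);
if (aseed1 != aseed2)
{
    // The screen changed while the copy was in flight, so this frame may
    // contain artefacts; drop it and move on to the next iteration.
    CFRelease(outAcc);
    CFRelease(destSurf);
    continue;
}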
This does, however, affect performance, since many frames/surfaces are rejected due to artefacts.
Any additions/corrections to this will be helpful.

iPhone Streaming and Playing Audio Problem

I am trying to make an app that plays an audio stream using ffmpeg and libmms.
I can open the mms server, get the stream, and decode the audio frames to raw frames using a suitable codec.
However, I don't know what to do next.
I think I must use AudioToolbox/AudioToolbox.h and create an audio queue.
But when I give the audio queue buffer the decode buffer's memory and play it, it only plays white noise.
Here is my code.
What am I missing?
Any comment or hint is very much appreciated.
Thanks very much.
while(av_read_frame(pFormatCtx, &pkt)>=0)
{
int pkt_decoded_len = 0;
int frame_decoded_len;
int decode_buff_remain=AVCODEC_MAX_AUDIO_FRAME_SIZE * 5;
if(pkt.stream_index==audiostream)
{
frame_decoded_len=decode_buff_remain;
int16_t *decode_buff_ptr = decode_buffer;
int decoded_tot_len=0;
pkt_decoded_len = avcodec_decode_audio2(pCodecCtx, decode_buff_ptr, &frame_decoded_len, pkt.data, pkt.size);
if (pkt_decoded_len <0) break;
AudioQueueAllocateBuffer(audioQueue, kBufferSize, &buffers[i]);
AQOutputCallback(self, audioQueue, buffers[i], pkt_decoded_len);
if(i == 1){
AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(audioQueue, NULL);
}
i++;
}
}
void AQOutputCallback(void *inData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, int copySize)
{
mmsDemoViewController *staticApp = (mmsDemoViewController *)inData;
[staticApp handleBufferCompleteForQueue:inAQ buffer:inBuffer size:copySize];
}
- (void)handleBufferCompleteForQueue:(AudioQueueRef)inAQ
buffer:(AudioQueueBufferRef)inBuffer
size:(int)copySize
{
inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
memcpy((char*)inBuffer->mAudioData, (const char*)decode_buffer, copySize);
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
You are calling AQOutputCallback incorrectly. You don't necessarily have to call that method yourself;
it will be called automatically when the audio queue has finished with a buffer.
Also, the prototype of AQOutputCallback is wrong,
so as your code stands, I don't think it will ever be called automatically.
Your callback must match the AudioQueueOutputCallback typedef:
typedef void (*AudioQueueOutputCallback) (
void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer
);
so declare yours like this:
void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);
You should also set up the audio session when your app starts.
The important references are here.
By the way, what audio format are you trying to decode?
AudioStreamPacketDescription is important if the audio has a variable number of frames per packet.
Otherwise, with one frame per packet, AudioStreamPacketDescription is not significant.
So what you do next is: set up the audio session, get raw audio frames from the decoder, and put those frames into the audio queue's buffers.
Then let the system fill the empty buffers by calling your callback, rather than calling it yourself.
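A minimal sketch of that flow, assuming a PCM output format in a variable named format, a user-data pointer named decoderContext, and a hypothetical fillBuffer() helper that copies decoded frames (none of these names come from the code above):
// Hypothetical helper, not part of any framework: copies the next decoded PCM
// frames into buffer->mAudioData and sets buffer->mAudioDataByteSize.
extern void fillBuffer(void *decoderContext, AudioQueueBufferRef buffer);

// Output callback with the correct prototype; the queue calls this itself
// whenever a buffer has finished playing and needs refilling.
static void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    fillBuffer(inUserData, inBuffer);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static void StartPlayback(void *decoderContext, const AudioStreamBasicDescription *format)
{
    // Set up and activate the audio session once, when the app starts.
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    UInt32 category = kAudioSessionCategory_MediaPlayback;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
    AudioSessionSetActive(true);

    // Create the queue, prime a few buffers by hand, then start it; from
    // then on the system refills buffers through AudioQueueCallback.
    AudioQueueRef audioQueue;
    AudioQueueNewOutput(format, AudioQueueCallback, decoderContext, NULL, NULL, 0, &audioQueue);
    for (int i = 0; i < 3; i++)
    {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(audioQueue, 16 * 1024, &buffer);
        AudioQueueCallback(decoderContext, audioQueue, buffer);
    }
    AudioQueueStart(audioQueue, NULL);
}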

How to encode/decode Speex with AudioQueue in iOS

Does anyone have experience encoding/decoding the Speex audio format with AudioQueue?
I have tried to implement it by editing the SpeakHere sample, but without success.
According to the Apple API documentation, AudioQueue can support codecs, but I can't find any sample. Could anyone give me some suggestions? I have already compiled the Speex codec successfully in my Xcode 4 project.
In the Apple sample code "SpeakHere" you can do something like this:
AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */,
NULL /* run loop mode */,
0 /* flags */, &mQueue)
Inside the "MyInputBufferHandler" function you can do something like:
[self encoder:(short *)buffer->mAudioData count:buffer->mAudioDataByteSize/sizeof(short)];
The encoder function looks something like:
while ( count >= samplesPerFrame )
{
speex_bits_reset( &bits );
speex_encode_int( enc_state, samples, &bits );
static const unsigned maxSize = 256;
char data[maxSize];
unsigned size = (unsigned)speex_bits_write( &bits, data, maxSize );
/*
do some thing... for example :send to server
*/
samples += samplesPerFrame;
count -= samplesPerFrame;
}
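Before that loop runs, enc_state, bits, and samplesPerFrame need to be initialized once; a minimal sketch (the choice of narrow-band mode and the quality value are assumptions, use speex_wb_mode for 16 kHz input):
#include <speex/speex.h>

static SpeexBits bits;            // bitstream buffer used by the loop above
static void    *enc_state;        // encoder state used by the loop above
static int      samplesPerFrame;  // 160 for narrow-band, 320 for wide-band

void InitSpeexEncoder(void)
{
    speex_bits_init(&bits);
    enc_state = speex_encoder_init(&speex_nb_mode);   // assumed narrow-band (8 kHz)
    speex_encoder_ctl(enc_state, SPEEX_GET_FRAME_SIZE, &samplesPerFrame);
    int quality = 8;                                   // assumed quality setting
    speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &quality);
}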
This is the general idea. Of course the details are harder, but you can look at some open-source VoIP projects, which may help you.
Good luck.
You can achieve all of that with FFMPEG and then play the result as PCM with AudioQueue.
Building the FFMPEG library is not exactly painless, but the whole decode/encode process is not that hard :)
FFMPEG official site
SPEEX official site
You will have to download the libraries and build them yourself, then include them in FFMPEG and build that as well.
Below is the code for capturing audio using an audio queue and encoding it (wide-band) with Speex.
(For better audio quality you can encode the data on a separate thread; change your sample size according to your capture format.)
Audio format
mSampleRate = 16000;
mFormatID = kAudioFormatLinearPCM;
mFramesPerPacket = 1;
mChannelsPerFrame = 1;
mBytesPerFrame = 2;
mBytesPerPacket = 2;
mBitsPerChannel = 16;
mReserved = 0;
mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
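These are the fields of an AudioStreamBasicDescription; a minimal sketch of creating and starting the capture queue with them (m_format is an assumed member holding the values above, m_audioQueue is the member the callback below already uses, and AudioInputCallback is assumed to be a static member function):
// Assumed setup inside CAudioCapturer: m_format holds the fields listed above.
AudioQueueNewInput(&m_format, CAudioCapturer::AudioInputCallback,
                   this,          // delivered back as inUserData in the callback
                   NULL, NULL, 0, &m_audioQueue);

// Allocate a few buffers sized as a multiple of the 640-byte wide-band frame
// and enqueue them so the queue has somewhere to record into.
for (int i = 0; i < 3; i++)
{
    AudioQueueBufferRef buffer;
    AudioQueueAllocateBuffer(m_audioQueue, 640 * 4, &buffer);
    AudioQueueEnqueueBuffer(m_audioQueue, buffer, 0, NULL);
}
AudioQueueStart(m_audioQueue, NULL);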
Capture callback
void CAudioCapturer::AudioInputCallback(void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription *inPacketDescs)
{
CAudioCapturer *This = (CAudioCapturer *)inUserData;
int len = 640;
char data[640];
char *pSrc = (char *)inBuffer->mAudioData;
while (len <= inBuffer->mAudioDataByteSize)
{
memcpy(data, pSrc, 640);
int enclen = encode(data, encBuffer); // encode the 640-byte chunk just copied; encBuffer is the caller's output buffer
len=len+640;
pSrc+=640; // 640 is the frame size for WB in speex (320 short)
}
AudioQueueEnqueueBuffer(This->m_audioQueue, inBuffer, 0, NULL);
}
speex encoder
int encode(char *buffer,char *pDest)
{
int nbBytes=0;
speex_bits_reset(&encbits);
speex_encode_int(encstate, (short*)(buffer) , &encbits);
nbBytes = speex_bits_write(&encbits, pDest ,640/(sizeof(short)));
return nbBytes;
}
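The decode side is symmetric; a minimal sketch, assuming wide-band mode and one Speex frame per received packet (the function and variable names are mine, not from the code above):
// Decode one received Speex packet back into 320 16-bit samples (wide-band).
int decode(char *pSrc, int srcBytes, short *pcmOut)
{
    static SpeexBits decbits;
    static void *decstate = NULL;
    if (decstate == NULL)                       // lazy one-time init (assumption)
    {
        speex_bits_init(&decbits);
        decstate = speex_decoder_init(&speex_wb_mode);
    }
    speex_bits_read_from(&decbits, pSrc, srcBytes);
    if (speex_decode_int(decstate, &decbits, pcmOut) != 0)
        return 0;                               // corrupt packet or end of stream
    return 320;                                 // samples produced per wide-band frame
}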

Core Audio: Bizarre problem with pass-through using render callback

I am implementing an audio pass-through using the RemoteIO audio unit, by attaching a render callback to the input scope of the output bus (i.e. the speaker).
Everything works swimmingly, ...
OSStatus RenderTone(
void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
SoundEngine * soundEngine = (SoundEngine *) inRefCon;
// grab data from the MIC (ie fill up ioData)
AudioUnit thisUnit = soundEngine->remoteIOUnit->audioUnit;
OSStatus err = AudioUnitRender(thisUnit,
ioActionFlags,
inTimeStamp,
1,
inNumberFrames,
ioData);
if (err)
{
printf("Error pulling mic data");
}
assert(ioData->mNumberBuffers > 0);
// only need the first buffer
const int channel = 0;
Float32 * buff = (Float32 *) ioData->mBuffers[channel].mData;
}
until I add that last line.
Float32 * buff = (Float32 *) ioData->mBuffers[channel].mData;
With this line in place, no errors, simply silence. Without it, I can click my fingers in front of the microphone and hear it in my headset.
EDIT:
AudioBuffer buf0 = ioData->mBuffers[0]; // Is sufficient to cause failure
What is going on?
It is not an error caused by the compiler optimising out an unused variable. If I add buff++; on the next line, the behaviour is the same, although maybe the compiler can still detect that the variable is effectively unused.
The problem was that the request for data from the microphone was failing the very first time this callback fired.
What I should have been doing was examining the return value, and returning it (i.e. quitting the callback) if it was not success.
What was probably happening was that I was accessing invalid data.
It is still rather bizarre, but there is no need to go into it.
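In code, the fix amounts to an early return when AudioUnitRender fails; a sketch (zero-filling the output on failure is my own addition so the speaker stays silent instead of playing stale data):
OSStatus err = AudioUnitRender(thisUnit, ioActionFlags, inTimeStamp,
                               1, inNumberFrames, ioData);
if (err != noErr)
{
    // The very first callback can fire before the mic has data; don't touch
    // ioData in that case, just hand the error back to the audio unit.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    return err;
}
// Safe to read the rendered samples now.
Float32 *buff = (Float32 *)ioData->mBuffers[0].mData;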

How to get AVAudioPlayer output to the speaker and verify it on the iPhone Simulator?

I have some questions about playing music via the speaker.
I found an example at the following link:
How to get AVAudioPlayer output to the speaker
but the question is how to make sure I have successfully implemented playing music via the "speaker".
I wrote the code as in the link, but there seems to be no difference before and after
I activate the "speaker" on the iPhone Simulator (on a MacBook)!
Update:
Below is the way I activate the speaker:
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord ; // 1
AudioSessionSetProperty (
kAudioSessionProperty_AudioCategory, // 2
sizeof (sessionCategory), // 3
&sessionCategory // 4
);
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; // 1
AudioSessionSetProperty (
kAudioSessionProperty_OverrideAudioRoute, // 2
sizeof (audioRouteOverride), // 3
&audioRouteOverride // 4
);
Below is the way I deactivate the speaker:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None; // 1
AudioSessionSetProperty (
kAudioSessionProperty_OverrideAudioRoute, // 2
sizeof (audioRouteOverride), // 3
&audioRouteOverride // 4
);
When I tried to check the content of kAudioSessionProperty_AudioRoute with
NSLog(@"%@", kAudioSessionProperty_AudioRoute);
the simulator crashed.
I looked up the documentation; CFStringRef is almost the same as the NSString type.
Therefore, it seemed reasonable to use NSLog to print the value of kAudioSessionProperty_AudioRoute.
As you said, the audio route is supposed to be "Headphone" or "Speaker".
I still cannot tell whether the code I pasted is right and whether the way I activated the speaker is right.
Can you help me?
Your MacBook only has one set of speakers, so you'll only hear it from those. The phone has both the receiver earpiece and the speaker at the bottom (which is the one you want to use).
Just check what kAudioSessionProperty_AudioRoute is set to.
Apple states:
kAudioSessionProperty_AudioRoute...
The name of the current audio route (such as “Headphone,” “Speaker,” and so on). A read-only CFStringRef value.
More info about the override to speaker property:
This property can be used only with the kAudioSessionCategory_PlayAndRecord (or the equivalent AVAudioSessionCategoryPlayAndRecord) category. (...) By default, output audio for this category goes to the receiver—the speaker you hold to your ear when on a phone call. The kAudioSessionOverrideAudioRoute_Speaker constant lets you direct the output audio to the speaker situated at the bottom of the phone.
kAudioSessionProperty_OverrideCategoryDefaultToSpeaker .. Specifies whether or not to route audio to the speaker (instead of to the receiver) when no other audio route, such as a headset, is connected. By default, the value of this property is FALSE (0). A read/write UInt32 value.
This property retains its value through an audio route change (such as when plugging in or unplugging a headset), and upon interruption; it reverts to its default value only upon an audio session category change. This property can be used only with the kAudioSessionCategory_PlayAndRecord (or the equivalent AVAudioSessionCategoryPlayAndRecord) category.
See also kAudioSessionProperty_OverrideAudioRoute.
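To read the current route in code, a minimal sketch (note that the crash above came from logging the kAudioSessionProperty_AudioRoute constant itself, which is just an integer property identifier, not the route string):
CFStringRef route = NULL;
UInt32 routeSize = sizeof(route);
OSStatus status = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute,
                                          &routeSize, &route);
if (status == kAudioSessionNoError && route != NULL)
{
    // route is a CFStringRef such as "Speaker" or "Headphone".
    NSLog(@"Current audio route: %@", (NSString *)route);
}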
I had the same problem, but finally fixed it by overriding the route this way:
void EnableSpeakerPhone ()
{
UInt32 dataSize = sizeof(CFStringRef);
CFStringRef currentRoute = NULL;
OSStatus result = noErr;
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);
// Set the category to use the speakers and microphone.
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
result = AudioSessionSetProperty (
kAudioSessionProperty_AudioCategory,
sizeof (sessionCategory),
&sessionCategory
);
assert(result == kAudioSessionNoError);
Float64 sampleRate = 44100.0;
dataSize = sizeof(sampleRate);
result = AudioSessionSetProperty (
kAudioSessionProperty_PreferredHardwareSampleRate,
dataSize,
&sampleRate
);
assert(result == kAudioSessionNoError);
// Default to speakerphone if a headset isn't plugged in.
UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
dataSize = sizeof(route);
result = AudioSessionSetProperty (
// This requires iPhone OS 3.1
kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
dataSize,
&route
);
assert(result == kAudioSessionNoError);
AudioSessionSetActive(YES);
}
Then I created a new method, void DisableSpeakerPhone(), to reverse the EnableSpeakerPhone() method:
void DisableSpeakerPhone ()
{
UInt32 dataSize = sizeof(CFStringRef);
CFStringRef currentRoute = NULL;
OSStatus result = noErr;
AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &dataSize, &currentRoute);
// Set the category to use the speakers and microphone.
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
result = AudioSessionSetProperty (
kAudioSessionProperty_AudioCategory,
sizeof (sessionCategory),
&sessionCategory
);
assert(result == kAudioSessionNoError);
Float64 sampleRate = 44100.0;
dataSize = sizeof(sampleRate);
result = AudioSessionSetProperty (
kAudioSessionProperty_PreferredHardwareSampleRate,
dataSize,
&sampleRate
);
assert(result == kAudioSessionNoError);
// Default to speakerphone if a headset isn't plugged in.
// Overriding the output audio route
// The Trick is here
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None;
dataSize = sizeof(audioRouteOverride);
result = AudioSessionSetProperty(
kAudioSessionProperty_OverrideAudioRoute,
dataSize,
&audioRouteOverride);
assert(result == kAudioSessionNoError);
AudioSessionSetActive(YES);
}
Now make the switch or any button call these methods directly; for example:
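A hypothetical UISwitch action wired to its Value Changed event (method and outlet names are my own):
- (IBAction)speakerSwitchChanged:(UISwitch *)sender
{
    // Route to the loudspeaker when the switch is on, back to the receiver otherwise.
    if (sender.isOn)
        EnableSpeakerPhone();
    else
        DisableSpeakerPhone();
}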