Audio Volume in Apple's SpeakHere example code - iPhone

I am trying to increase the volume of my audio output using the SpeakHere example from Apple. The volume is already set to max with:
// set the volume of the queue
XThrowIfError (AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, 1.0), "set queue volume");
However, the output is directed to the ear-piece speaker, which is not as loud as the speaker at the bottom of the iPhone. A nice example of this can be seen in the 'Voice Memos' app that comes with the iPhone: it provides a speaker button that toggles between the two speakers. Does anybody have an idea how that is done? What do I need to route my audio to the bottom speaker?
Any tips, hints, or answers will be much appreciated.
Thank you in advance
Al

You need to route the player's output to the speaker.
Add this code in AQPlayer.mm:
OSStatus error;
// Override the default route so output goes to the bottom (loud) speaker
// instead of the receiver; this assumes the audio session is already initialized.
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
error = AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof (audioRouteOverride), &audioRouteOverride);
if (error) printf("couldn't set audio route to speaker!\n");
Place it before this line:
XThrowIfError (AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, 1.0), "set queue volume");
I hope it helps.

Take a look at AudioSessionSetProperty, in particular the kAudioSessionProperty_OverrideCategoryDefaultToSpeaker property.
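A minimal sketch of using that property (it only applies when the session uses the PlayAndRecord category, which is an assumption here):
UInt32 defaultToSpeaker = TRUE;
// For the PlayAndRecord category, make the bottom speaker (not the receiver)
// the default output route.
AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof (defaultToSpeaker), &defaultToSpeaker);
Unlike kAudioSessionProperty_OverrideAudioRoute, this override stays in effect across route changes such as plugging in a headset.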

Look at the kAudioSessionProperty_OverrideAudioRoute property.

Related

Playing audio from a continuous stream of data (iOS)

I've been banging my head against this problem all morning.
I have set up a connection to a data source that returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, as if you had opened a stream to a radio.)
I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue up a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.
I have been searching all morning and found different examples, but none were really laid out.
In the
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
method, the "data" argument is the audio packet. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at Matt Gallagher's AudioStreamer, but still was unable to achieve it.
Can anyone here help, or share some (preferably) working examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course never happens in practice. That is why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on the format this could be more or less tricky, but in general you should use Audio Converter Services for this step.
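As a rough sketch of that conversion step (the formats here are placeholders you would fill in from your actual stream):
// Create a converter from the network stream's format to the LPCM format
// the render callback expects. Both descriptions must be filled in: the
// source from your stream (e.g. via AudioFileStream), the destination
// from the audio unit's stream format.
AudioStreamBasicDescription srcFormat = {0}; // compressed format from the server
AudioStreamBasicDescription dstFormat = {0}; // LPCM format for playback
AudioConverterRef converter = NULL;
OSStatus err = AudioConverterNew(&srcFormat, &dstFormat, &converter);
if (err != noErr) { /* handle the error */ }
// At playback time, AudioConverterFillComplexBuffer() pulls compressed
// packets from your buffer and produces PCM you can hand to the audio unit.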
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
Now, to play the data you 'stored' in memory using the buffer, you need to use RemoteIO and audio units. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial, as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        // inNumberFrames * 2 assumes 16-bit mono samples (2 bytes per frame).
        for (UInt32 j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write: it returns the
            // next byte available in the stream buffer.
            frameBuffer[j] = getNextPacket();
        }
    }
    return noErr;
}
Basically, what it does is fill up the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player is silenced if there is not enough data in the stream buffer).
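The silencing could look something like this (a sketch; bytesAvailable() is a hypothetical helper reporting how much stream data is buffered):
if (bytesAvailable() == 0) {
    // Nothing to play: output silence rather than stale or garbage bytes.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
}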
Also, you can achieve the same thing with OpenAL using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
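If you go the OpenAL route instead, the refill loop might look roughly like this (a sketch assuming 16-bit mono PCM at 44.1 kHz; fillFromStream() is a hypothetical helper you provide that copies the next chunk of stream bytes into the array and returns the byte count):
#include <OpenAL/al.h>

// Call this periodically to keep the source fed with fresh stream data.
static void refillSource(ALuint source) {
    static unsigned char pcmChunk[4096];
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buf;
        // Reclaim a buffer OpenAL has finished playing...
        alSourceUnqueueBuffers(source, 1, &buf);
        // ...refill it from the stream (fillFromStream is yours to write)...
        ALsizei size = fillFromStream(pcmChunk, sizeof(pcmChunk));
        alBufferData(buf, AL_FORMAT_MONO16, pcmChunk, size, 44100);
        // ...and queue it again behind the buffers still waiting to play.
        alSourceQueueBuffers(source, 1, &buf);
    }
}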
That's it. Happy coding!

Playing PCM data on an iPhone

I need to play linear PCM data live on an iPhone.
I get a LIVE data stream via RTSP. I can currently read it out on the iPhone, save it into a file, and play it on a desktop audio player that supports PCM, so I think the transport is okay.
Now I'm stuck: I have completely no idea what to do with my NSData object containing the data.
I did a bit of research, ending up with AudioUnits, but I just cannot assign my NSData to the audio buffer; or rather, I have no clue how.
For my instance, I assigned the callback:
AURenderCallbackStruct input;
input.inputProc = makeSound;
input.inputProcRefCon = self;
and wrote the function 'makeSound':
OSStatus makeSound(void *inRefCon,
                   AudioUnitRenderActionFlags *ioActionFlags,
                   const AudioTimeStamp *inTimeStamp,
                   UInt32 inBusNumber,
                   UInt32 inNumberFrames,
                   AudioBufferList *ioData)
{
    //so what to do here?
    //ioData->mBuffers[0].mData = [mySound bytes]; does not work, nor does
    //ioData->mBuffers = [mySound bytes];
    return noErr;
}
Is my approach wrong in general? What do I need to know/learn/implement? I am a complete audio newbie, so my assumption was that I don't need several buffers: when I get a new sound packet from RTSP, the old one has ended, since it's a live stream. (I base this on my recordings, which just appended the bytes without looking up presentation timestamps, since I don't receive any anyway.)
Cheers
I don't know if this is exactly what you are looking for, but some of Matt Gallagher's AudioStreamer code might be helpful to you. In particular, check out how he handles the audio buffering.
http://cocoawithlove.com/2010/03/streaming-mp3aac-audio-again.html
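In the meantime, a minimal sketch of what makeSound could do (this is not AudioStreamer's code): copy buffered stream bytes into the callback's output with memcpy rather than reassigning mData. pendingData and dataLock are hypothetical names, and a production version should use a lock-free ring buffer, since taking a lock on the render thread can cause glitches.
static NSMutableData *pendingData; // appended to from the RTSP thread
static NSLock *dataLock;

OSStatus makeSound(void *inRefCon,
                   AudioUnitRenderActionFlags *ioActionFlags,
                   const AudioTimeStamp *inTimeStamp,
                   UInt32 inBusNumber,
                   UInt32 inNumberFrames,
                   AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        [dataLock lock];
        // Copy as much buffered PCM as fits; never assign [data bytes]
        // to mData directly, since the audio unit owns that memory.
        NSUInteger toCopy = MIN((NSUInteger)buffer.mDataByteSize, [pendingData length]);
        memcpy(buffer.mData, [pendingData bytes], toCopy);
        // Drop the consumed bytes from the front of the pending buffer.
        [pendingData replaceBytesInRange:NSMakeRange(0, toCopy) withBytes:NULL length:0];
        [dataLock unlock];
        // Fill any shortfall with silence so underruns don't play garbage.
        memset((char *)buffer.mData + toCopy, 0, buffer.mDataByteSize - toCopy);
    }
    return noErr;
}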

Capturing and manipulating microphone audio with AVCaptureSession?

While there are plenty of tutorials on how to use AVCaptureSession to grab camera data, I can find no information (even on Apple's developer network itself) on how to properly handle microphone data.
I have implemented AVCaptureAudioDataOutputSampleBufferDelegate, and I'm getting calls to my delegate, but I have no idea how the contents of the CMSampleBufferRef I get are formatted. Are the contents of the buffer one discrete sample? What are its properties? Where can these properties be set?
Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call for AVCaptureAudioDataOutput (no setAudioSettings or anything similar).
They are formatted as LPCM! You can verify this by getting the AudioStreamBasicDescription like so:
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
and then checking the stream description's mFormatID.
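For example, a quick sketch of the check (the other ASBD fields come along for free):
if (streamDescription->mFormatID == kAudioFormatLinearPCM) {
    // Sample rate, channel count, and sample size are all in the ASBD.
    NSLog(@"LPCM: %.0f Hz, %u channel(s), %u bits per channel",
          streamDescription->mSampleRate,
          (unsigned)streamDescription->mChannelsPerFrame,
          (unsigned)streamDescription->mBitsPerChannel);
}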

Setting the input volume on an audio queue

So I can't find anything online that says I can't do this, but whenever I try to do it on the iPhone, errors are returned from AudioQueueSetParameter. Specifically, if I try this code:
AudioQueueParameterValue val = f;
XThrowIfError(AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, val), "set queue volume");
Then I get the following error: kAudioQueueErr_InvalidParameter, which Apple's documentation says means: "The specified parameter ID is invalid".
But if I try the same exact code on an output queue, it works just fine. Does anyone have any idea why I can change the volume on output, but not input?
Thanks
According to Apple's Audio Queue Services Reference:
Audio queue parameters apply only to playback audio queues.
To retrieve information about your input stream, try using audio queue properties instead.
// streamDescription here means your AudioStreamBasicDescription
// Level metering must be enabled on the queue before it can be read:
UInt32 meteringOn = 1;
AudioQueueSetProperty(inQueue, kAudioQueueProperty_EnableLevelMetering, &meteringOn, sizeof(meteringOn));
UInt32 levelSize = sizeof(AudioQueueLevelMeterState) * streamDescription.mChannelsPerFrame;
AudioQueueLevelMeterState *level = (AudioQueueLevelMeterState *)malloc(levelSize);
if (AudioQueueGetProperty(inQueue,
                          kAudioQueueProperty_CurrentLevelMeter,
                          level,        // note: the data pointer comes before its size
                          &levelSize) == noErr) {
    printf("Current peak: %f\n", level[0].mPeakPower);
}
free(level);
I presume you could just multiply the PCM values in the AudioQueueBuffers by some volume factor yourself to produce a volume adjustment.
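A sketch of that idea, assuming the queue's format is 16-bit signed PCM (the clamp keeps loud samples from wrapping around):
static void applyGain(SInt16 *samples, UInt32 count, float gain)
{
    for (UInt32 i = 0; i < count; i++) {
        SInt32 scaled = (SInt32)(samples[i] * gain);
        // Clamp to the SInt16 range to avoid wrap-around distortion.
        if (scaled > 32767)  scaled = 32767;
        if (scaled < -32768) scaled = -32768;
        samples[i] = (SInt16)scaled;
    }
}
You would call it from the input callback on inBuffer->mAudioData, with count = inBuffer->mAudioDataByteSize / sizeof(SInt16), before writing the buffer out.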

AudioQueue and iOS 4?

The following code used to work for me in the past. I'm trying it now with iOS 4 without luck. It works in the simulator, but I don't hear anything on the device itself. I first record a few samples into an NSMutableData variable, and then I try to play them back.
I've tried the SpeakHere sample from Apple, which works (but it plays back from a file, not memory).
Any idea what I am missing?
AudioSessionInitialize(NULL, NULL, NULL, NULL);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
AudioSessionSetActive(true);
AudioQueueNewOutput(&m_format, &OutputCallback, self, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &m_device);
AudioQueueBufferRef nBuffer = NULL;
AudioQueueAllocateBuffer(m_device, [data length], &nBuffer);
nBuffer->mAudioDataByteSize = [data length];
[data getBytes:(nBuffer->mAudioData) length:(nBuffer->mAudioDataByteSize)];
AudioQueueEnqueueBuffer(m_device, nBuffer, 0, NULL);
AudioQueueStart(m_device, NULL);
The main things I can suggest are:
(1) Make sure the device is not muted and the volume is up.
(2) Check the result codes. For instance:
OSStatus errorCode = AudioQueueNewOutput (...);
if (errorCode) NSLog (@"Error: %d", (int)errorCode);
Something else that would give you a little more information: while it is supposed to be running, try adjusting the volume. If it adjusts the ringer volume, the AudioQueue is not playing and/or not set up correctly. If it adjusts the playback volume, then the AudioQueue is probably not getting data when it asks for it.
For the record, I have an application that's using the AudioQueue on iOS 4 on all devices, so I know it works and it's not a bug.
Keep at it: the AudioQueue can be very, very annoying at times.