AVAssetReader and Audio Queue streaming problem - iPhone

I have a problem using AVAssetReader to get samples from the iPod library and stream them via an Audio Queue. I have not been able to find any example of this, so I tried to implement my own, but it seems the asset reader somehow gets "screwed up" inside the Audio Queue callback. Specifically, it fails at copyNextSampleBuffer: it returns NULL even though reading is not finished yet. I have made sure the pointer exists and such, so it would be great if anyone could help.
Below is the callback function code I am using. This callback "works" when it is called directly rather than from the Audio Queue callback.
static void HandleOutputBuffer(void                *playerStateH,
                               AudioQueueRef       inAQ,
                               AudioQueueBufferRef inBuffer)
{
    AQPlayerState *pplayerState = (AQPlayerState *) playerStateH;
    //if (pplayerState->mIsRunning == 0) return;
    UInt32 bytesToRead = pplayerState->bufferByteSize;
    [[NSNotificationCenter defaultCenter] postNotificationName:NOTIF_callsample object:nil];
    float *inData = (float *) inBuffer->mAudioData;
    int offsetSample = 0;

    // Loop until the buffer has been filled from the music data
    while (bytesToRead) {
        /* THIS IS THE PROBLEMATIC LINE */
        // The asset wrapper calls copyNextSampleBuffer on the AVAssetReader output
        CMSampleBufferRef sampBuffer = [pplayerState->assetWrapper getNextSampleBuffer];
        if (sampBuffer == nil) {
            NSLog(@"No more data to read from");
            // NSLog(@"aro status after null %d", [pplayerState->ar status]);
            AudioQueueStop(pplayerState->mQueue, false);
            pplayerState->mIsRunning = NO;
            return;
        }
        AudioBufferList audioBufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList,
                                                                sizeof(audioBufferList), NULL, NULL,
                                                                0, &blockBuffer);
        AudioBuffer audioBuffer = audioBufferList.mBuffers[0];
        memcpy(inData + (2 * offsetSample), audioBuffer.mData, audioBuffer.mDataByteSize);
        bytesToRead -= audioBuffer.mDataByteSize;
        offsetSample += audioBuffer.mDataByteSize / 8; // 8 bytes per stereo float frame
    }
    inBuffer->mAudioDataByteSize = offsetSample * 8;
    AudioQueueEnqueueBuffer(pplayerState->mQueue, inBuffer, 0, 0);
}
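For context, getNextSampleBuffer on the asset wrapper is, per the inline comment, just a thin shim over AVAssetReader. A hypothetical sketch of it (assetReader and trackOutput are assumed ivars that do not appear in the question):

- (CMSampleBufferRef)getNextSampleBuffer
{
    // copyNextSampleBuffer returns NULL once the reader fails or finishes,
    // which is exactly the symptom hit inside the Audio Queue callback.
    if (assetReader.status == AVAssetReaderStatusReading) {
        return [trackOutput copyNextSampleBuffer];
    }
    return NULL;
}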

I was getting this same mystifying error. Sure enough, setting up an audio session made the error go away. This is how I set up my audio session:
- (void)setupAudio {
    [[AVAudioSession sharedInstance] setDelegate:self];
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    NSError *activationError = nil;
    [[AVAudioSession sharedInstance] setActive:YES error:&activationError];
    NSLog(@"setupAudio ACTIVATION ERROR IS %@", activationError);
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:0.1 error:&activationError];
    NSLog(@"setupAudio BUFFER DURATION ERROR IS %@", activationError);
}

From the Audio Session Programming Guide, under AVAudioSessionCategoryAmbient:
This category allows audio from the iPod, Safari, and other built-in applications to play while your application is playing audio.
Using an AVAssetReader probably uses iOS' hardware decoder, which blocks the use of the AudioQueue. Setting AVAudioSessionCategoryAmbient means the audio is rendered in software, allowing both to work at the same time - however, this would have an impact on performance/battery life. (See Audio Session Programming Guide under "How Categories Affect Encoding and Decoding").

OK, I have somehow solved this weird error... Apparently it was because the audio session was not properly set up. Talk about a lack of documentation on this one...


AVQueuePlayer and audio session issue

I am going to try to give a detailed account of my issue.
I have an app in the store that uses in-app sound. Currently I am using AVQueuePlayer because some of the sounds overlap, and it lets them play in order. A lot of this sound is played while I am playing embedded videos using AVPlayer, which may not matter at all. The problem is that I am getting reports of the sound stopping across the entire app. I am unable to reproduce this myself, but we have a lot of active users and some of them report it. Whenever it is reported, and we determine it's not just the silent switch or the volume turned down, restarting the app always solves the problem. Occasionally we've heard of the sound magically returning with no changes. I have also had a couple of reports that it happens when using AirPlay and Bluetooth, but that may just be a complication of the problem or a coincidence.
Below is the code that I am using. Maybe I'm just using a setting wrong, or not using a setting that I should be, but this code works 99.9% of the time.
I use ducking for all sounds I play, to lower the volume of the user's iPod music.
Here is my initialization in appDidFinishLaunchingWithOptions (maybe it's not needed at all at startup, and sorry for the mixing of conventions):
AudioSessionInitialize(NULL, NULL, NULL, NULL);
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
// Note: this C-API call manipulates the same shared session, so it replaces
// the Playback category set just above with Ambient.
UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
[[AVAudioSession sharedInstance] setActive:YES withFlags:AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation error:nil];
When I play a sound:
- (void)playSound:(NSString *)soundString
{
    OSStatus propertySetError = 0;
    UInt32 allowMixing = true;
    propertySetError |= AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
    [[AVAudioSession sharedInstance] setActive:YES withFlags:AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation error:nil];
    NSURL *thisUrl = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%@.caf", [[NSBundle mainBundle] resourcePath], soundString]];
    AVPlayerItem *item = [[AVPlayerItem alloc] initWithURL:thisUrl];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(reachedEndOfItem:)
                                                 name:AVPlayerItemDidPlayToEndTimeNotification
                                               object:item];
    if (_audioPlayerQueue == nil)
    {
        _audioPlayerQueue = [[AVQueuePlayer alloc] initWithItems:[NSArray arrayWithObject:item]];
    }
    else
    {
        if ([_audioPlayerQueue canInsertItem:item afterItem:nil])
        {
            [_audioPlayerQueue insertItem:item afterItem:nil];
        }
    }
    if (_audioPlayerQueue == nil)
    {
        NSLog(@"error");
    }
    else
    {
        [_audioPlayerQueue play];
    }
    return;
}
When the sound finishes playing:
- (void)reachedEndOfItem:(NSNotification *)notification
{
    // NSNotificationCenter passes the NSNotification, not the AVPlayerItem
    [self performSelector:@selector(turnOffDucking) withObject:nil afterDelay:0.5f];
}

- (void)turnOffDucking
{
    NSLog(@"reached end");
    [[AVAudioSession sharedInstance] setActive:NO withFlags:AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation error:nil];
    OSStatus propertySetError = 0;
    UInt32 allowMixing = false;
    propertySetError |= AudioSessionSetProperty(kAudioSessionProperty_OtherMixableAudioShouldDuck, sizeof(allowMixing), &allowMixing);
}
Any insight on what I am doing wrong, what settings I should be using for the audio session, or known bugs/problems would be very helpful. I would be willing to look into using a different audio engine, as this approach has some slight performance issues when playing a video with iPod music playing in tandem, but I'd rather stick with this method of playing audio.
Thank you for any help you can provide.
-Ryan
I had a similar issue in the past and found out that concurrent thread access was the reason.
Specifically, I think calling performSelector:withObject:afterDelay: could be the cause: if another thread tries to modify the audio session at the moment the delay ends, two different threads end up accessing the audio session at the same time.
So I suggest checking your code again and making sure all calls to playSound are made from the main thread. It may also be better to use performSelectorOnMainThread: instead of performSelector:withObject:afterDelay:, as the docs say:
Invocations of blocks, key-value observers, or notification handlers are not guaranteed to be made on any particular thread or queue. Instead, AV Foundation invokes these handlers on threads or queues on which it performs its internal tasks.
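As a concrete sketch of that suggestion (not part of the original answer), the delayed un-duck could be funnelled onto the main queue with dispatch_after, reusing the turnOffDucking method from the question:

// A minimal sketch: keep the half-second delay, but make sure the audio
// session is only ever touched from the main queue.
- (void)reachedEndOfItem:(NSNotification *)notification
{
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self turnOffDucking]; // same method as in the question
    });
}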

How to play audio over speakers rather than the much weaker ear speakers?

I'm learning Core Audio. For some reason, the sound from the processing graph only plays through the weak "ear speaker" (the one you hold the device to your ear for), not through the regular speaker of the iPhone.
This is the code that sets up the audio session, but I can't see where it configures the audio route:
- (void)setupAudioSession {
    AVAudioSession *mySession = [AVAudioSession sharedInstance];

    // Specify that this object is the delegate of the audio session, so that
    // this object's endInterruption method will be invoked when needed.
    [mySession setDelegate:self];

    // Assign the Playback category to the audio session.
    NSError *audioSessionError = nil;
    [mySession setCategory:AVAudioSessionCategoryPlayAndRecord //AVAudioSessionCategoryPlayback
                     error:&audioSessionError];
    if (audioSessionError != nil) {
        NSLog(@"Error setting audio session category.");
        return;
    }

    // Request the desired hardware sample rate.
    self.graphSampleRate = 44100.0; // Hertz
    [mySession setPreferredHardwareSampleRate:graphSampleRate
                                        error:&audioSessionError];
    if (audioSessionError != nil) {
        NSLog(@"Error setting preferred hardware sample rate.");
        return;
    }

    // Activate the audio session.
    [mySession setActive:YES error:&audioSessionError];
    if (audioSessionError != nil) {
        NSLog(@"Error activating audio session during initial setup.");
        return;
    }

    // Obtain the actual hardware sample rate and store it for later use in the audio processing graph.
    self.graphSampleRate = [mySession currentHardwareSampleRate];

    // Register the audio route change listener callback function with the audio session.
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                    audioRouteChangeListenerCallback,
                                    self);
}
At which point in core audio do you say "play over speakers" when playing sounds with audio units?
You can use setCategory:withOptions: with the default-to-speaker option:
[mySession setCategory:AVAudioSessionCategoryPlayAndRecord
           withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                 error:&audioSessionError];
I had the same problem. It turns out to be a side effect of the "play and record" category; you just need to redirect the audio output:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                        sizeof(audioRouteOverride),
                        &audioRouteOverride);
Source:
http://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html#//apple_ref/doc/uid/TP40007875-CH6-SW35

Record audio to NSData

I have set up a TCP connection between two iPhones and I am able to send NSData packages between the two.
I would like to talk into the microphone and get the recording as an NSData object and send this to the other iPhone.
I have successfully used Audio Queue Services to record audio and play it back, but I have not managed to get the recording as NSData. I posted a question about converting the recording to NSData when using Audio Queue Services, but it has not gotten me any further.
Therefore, I would like to hear if there is any other approach I can take to speak into an iPhone's microphone and get the input as raw data.
Update:
I need to send the packages continuously while recording. E.g., every second while recording, I will send the data recorded during that second.
Both Audio Queues and the RemoteIO Audio Unit will give you buffers of raw audio in real time with fairly low latency. You can take the buffer pointer and the byte length given in each audio callback to create a new block of NSData. RemoteIO provides the lowest latency, but may require the network messaging to be done outside the callback thread.
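A minimal sketch of that idea for an Audio Queue input callback (hypothetical names; a fuller version appears in a later answer below):

// Wrap one input buffer in NSData inside the recording callback.
static void MyInputCallback(void *inUserData, AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer,
                            const AudioTimeStamp *inStartTime,
                            UInt32 inNumPackets,
                            const AudioStreamPacketDescription *inPacketDescs)
{
    NSData *chunk = [NSData dataWithBytes:inBuffer->mAudioData
                                   length:inBuffer->mAudioDataByteSize];
    // Hand `chunk` off to the networking code (ideally on another thread)...
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}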
Use AVAudioRecorder, like this:
NSURL *filePath = ...; // your desired URL for the file
NSDictionary *settings = ...; // the settings for the recorded file
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:filePath settings:settings error:&error];
....
[recorder record];
....
[recorder stop];
....
Then retrieve the NSData from the file:
NSData *audioData = [NSData dataWithContentsOfURL:filePath];
See the AVAudioRecorder class reference.
Edit:
To retrieve chunks of the recorded audio, you could use the subdataWithRange: method of NSData. Keep an offset from which you wish to retrieve the bytes. You can have an NSTimer fire every second so you can collect the bytes and send them. You will need to find out how many bytes are recorded every second.
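A rough sketch of that approach (self.recordingURL and self.bytesSent are hypothetical properties, not named in the answer):

// Every second, send whatever bytes were recorded since the previous tick,
// using subdataWithRange: with a running offset.
- (void)sendLatestChunk
{
    NSData *audioData = [NSData dataWithContentsOfURL:self.recordingURL];
    if (audioData.length > self.bytesSent) {
        NSRange newBytes = NSMakeRange(self.bytesSent, audioData.length - self.bytesSent);
        NSData *chunk = [audioData subdataWithRange:newBytes];
        self.bytesSent = audioData.length;
        // Send `chunk` over the TCP connection here...
    }
}

// Scheduled once, e.g. when recording starts:
// [NSTimer scheduledTimerWithTimeInterval:1.0 target:self
//          selector:@selector(sendLatestChunk) userInfo:nil repeats:YES];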
This is what I did on the recording iPhone:
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
    {
        printf("Not recording, returning\n");
        return;
    }

    // if (inNumberPacketDescriptions == 0 && recordState->dataFormat.mBytesPerPacket != 0)
    // {
    //     inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / recordState->dataFormat.mBytesPerPacket;
    // }

    printf("Writing buffer %lld\n", recordState->currentPacket);
    OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                            false,
                                            inBuffer->mAudioDataByteSize,
                                            inPacketDescs,
                                            recordState->currentPacket,
                                            &inNumberPacketDescriptions,
                                            inBuffer->mAudioData);
    NSLog(@"DATA = %@", [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]);
    [[NSNotificationCenter defaultCenter] postNotificationName:@"Recording"
                                                        object:[NSData dataWithBytes:inBuffer->mAudioData
                                                                              length:inBuffer->mAudioDataByteSize]];
    if (status == 0)
    {
        recordState->currentPacket += inNumberPacketDescriptions;
    }
    AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
I post a notification that helps send the data packets to the other iPhone on the network.
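On the sending side, the observer for that notification might look something like this (a hypothetical sketch, not from the original post; outputStream is an assumed, already-open NSOutputStream to the peer):

// Observer for the @"Recording" notification posted by the callback above;
// it forwards each recorded chunk over the TCP stream.
- (void)didRecordChunk:(NSNotification *)note
{
    NSData *chunk = (NSData *)note.object;
    // A real implementation must handle partial writes (write:maxLength:
    // returns the number of bytes actually written).
    [self.outputStream write:(const uint8_t *)chunk.bytes maxLength:chunk.length];
}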
But I do not know how to read the data on the other side. I am still trying to figure out how that works, and I will update once I do.

AVAssetWriter / AVAudioPlayer Conflict?

Weeks ago, I posted this thread regarding problems I was having with AVAssetWriter: AVAssetWriter Woes
Further research seems to point to a conflict between AVAssetWriter and playing audio with AVAudioPlayer or, really, any audio system; I tried OpenAL as well.
Here's the background:
Using AVAssetWriter to write frames to a video from an image or set of images works fine UNTIL [AVAudioPlayer play] is called.
This only happens on the device, not the sim.
The error occurs when attempting to create a pixel buffer from CVPixelBufferPoolCreatePixelBuffer.
Once the audio starts playing, the AVAssetWriterInputPixelBufferAdaptor.pixelBufferPool, which existed before, suddenly becomes nil.
You can download the representative project here: http://www.mediafire.com/?5k7kqyvtbfdgdgv
Comment out the [AVAudioPlayer play] call and it will work on the device.
Any clues are appreciated.
I've found the solution to this issue.
If you want AVAudioPlayer and AVAssetWriter to behave correctly together, you must use an audio session category that is "mixable".
You can use a category that is mixable by default, like AVAudioSessionCategoryAmbient.
However, I needed to use AVAudioSessionCategoryPlayAndRecord.
You can make any category mixable by implementing this:
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers,  // 1
                                           sizeof(allowMixing),                                  // 2
                                           &allowMixing);                                        // 3
The above answer is incomplete; it didn't work for me. Do this instead:
// Set up to be able to record global sounds (preexisting app sounds)
NSError *sessionError = nil;
if ([[AVAudioSession sharedInstance] respondsToSelector:@selector(setCategory:withOptions:error:)])
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                     withOptions:AVAudioSessionCategoryOptionDuckOthers
                                           error:&sessionError];
else
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                           error:&sessionError];

// Set the audio session to be active
[[AVAudioSession sharedInstance] setActive:YES error:&sessionError];

// Then create your writer (note: initWithMovieURL:size: is not a stock
// AVAssetWriter initializer; it looks like a GPUImageMovieWriter-style API)
movieWriter = [[AVAssetWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

Playing iPod music and AVAudioPlayer together

I am trying to write an application where the user records a sound and can listen to it together with background music from the music library. However, when I try to play the recorded file using AVAudioPlayer, the background music (the iPod player) goes very low and is not audible. Is there a session property I need to set so that the AVAudioPlayer and the iPod player play at the same level?
I have tried setting the allowMixing property, but with no success.
Here is the setup code; maybe I missed something:
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&err];

UInt32 category = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(category), &category);

UInt32 doChangeDefaultRoute = 1;
OSStatus status;
if ((status = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
                                      sizeof(doChangeDefaultRoute), &doChangeDefaultRoute))) {
    NSLog(@"RunSketchAppDelegate: ERROR: couldn't set kAudioSessionProperty_OverrideCategoryDefaultToSpeaker to %i. Error code = %i", doChangeDefaultRoute, status);
} else {
    NSLog(@"RunSketchAppDelegate: successfully set kAudioSessionProperty_OverrideCategoryDefaultToSpeaker to %i", doChangeDefaultRoute);
}
Hope that clarifies the setup. I am still getting the playback lowering.