What is kAudioSessionProperty_InputSources actually good for? (iPhone)

I've tried to fetch the list of available audio input devices on an iPhone by using this code:
CFArrayRef arrayRef;
UInt32 size = sizeof(arrayRef);
OSStatus status = AudioSessionGetProperty(kAudioSessionProperty_InputSources, &size, &arrayRef);
assert(status == noErr);
NSArray *array = (__bridge NSArray *)arrayRef;
The call succeeds and returns without error, but the resulting array is always empty, no matter what hardware I connect. I've tried two standard mobile headsets (an original one from Apple and one from Samsung) and two kinds of USB microphones (a Rode iXY and a Tascam iM2X), but the array always stays empty. So I wonder: what kinds of input sources would actually be listed by this property? Is it usable at all?
By using a listener callback on the audio routes, I was able to verify that all four devices are detected correctly. I was also able to record audio with each of them, so they all work properly. I'm using an iPhone 4S with iOS 6.1.3 (10B329).

The property you are referring to is only for audio input sources in a USB audio accessory attached through the iPad camera connection kit, as mentioned in the AudioSessionServices class reference.
To get a non-empty array you will need to test with, say, a USB audio workstation that plugs into the iPad camera connection kit.
Here is a link that lists some hardware that uses the iPad camera connection kit.
Connecting USB audio interfaces using the Apple iPad Camera Connection Kit.
Also from the class reference
If there is no audio input source available from the attached accessory, this property’s value is an empty array.
So from the list found in the above link (scroll down to the List of some compatible devices subheading), the devices that would yield a non-empty result are those that offer audio input, such as the Alesis iO4, iO2, or iO2 Express.
EDIT: there's merit in the answer provided by Shawn Hershey with regard to using a non-deprecated Objective-C alternative. However, you would be most interested in the portType property of the AVAudioSessionPortDescription class (available from iOS 6.0).
Two constants of interest are AVAudioSessionPortLineIn and AVAudioSessionPortUSBAudio. The first is for audio input through the dock connector, which is how the microphones you tested connect.
In iOS 7.0 and later you can query the availableInputs property of the AVAudioSession class. In iOS 6 you can only query the currentRoute property.
I found this Technical Q&A very helpful -
AVAudioSession - microphone selection
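For completeness, here is a minimal sketch of the iOS 6 approach via currentRoute, enumerating the inputs of the active route and checking each port's type (error handling omitted; names as in the AVAudioSession documentation):

```objc
// Sketch: inspect the inputs of the current audio route on iOS 6+.
AVAudioSession *session = [AVAudioSession sharedInstance];
AVAudioSessionRouteDescription *route = session.currentRoute;
for (AVAudioSessionPortDescription *port in route.inputs) {
    // e.g. AVAudioSessionPortLineIn for dock-connector input,
    // AVAudioSessionPortUSBAudio for USB audio via the camera connection kit
    NSLog(@"input: %@ (%@)", port.portName, port.portType);
}
```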

I'm very new to audio programming on iPhones so I don't have an answer to the question of what that particular property is good for, but if you want the list of audio inputs, I think this will work:
NSArray * ais = [[AVAudioSession sharedInstance] availableInputs];
This provides an array of AVAudioSessionPortDescription objects.
for (id object in ais) {
    AVAudioSessionPortDescription *pd = (AVAudioSessionPortDescription *)object;
    NSLog(@"%@", pd.portName);
}

Related

Query for optimal pixel format when capturing video on iOS?

The AVFoundation Programming Guide states that the preferred pixel formats when capturing video are:
for iPhone 4: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA
for iPhone 3G: kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA
(There are no recommendations [yet] for the iPhone 5 or for iPad devices with cameras.)
There is, however, no guidance on how to determine which device the app is currently running on. And what if the preferred pixel format is different on a future device my app doesn't know about?
What is the correct, and future proof, way to determine the preferred YpCbCr pixel format for any device?
I believe you can just set the video settings to nil and AVFoundation will use the most efficient format. For instance instead of doing
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
videoOutput.videoSettings = videoSettings;
Do this instead
videoOutput.videoSettings = nil;
You may also try not setting it at all. I know in the past I would just set this to nil unless I needed to capture images in a specific format.
EDIT
To get the pixel format AVFoundation chose, query the image buffer of a delivered sample buffer:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);

MPMoviePlayerController and AVAudioPlayer audio mixing glitch

I'm developing an interactive storybook type application for the iPhone and I've recently encountered a frustrating bug concerning audio mixing on the device.
Firstly, I setup an audio session. I set the category to AVAudioSessionCategoryAmbient and then init and play my AVAudioPlayer instance. Now, in the background whilst the audio is playing I'm pre-loading a video to play using an MPMoviePlayerController followed by a call to prepareToPlay. The reason I pre-load the video this way is because I need it to play instantly later on cue with fairly strict timing.
In this configuration, the audio/movie works fine and they mix and do not interrupt each other. However, this particular audio session category does not permit audio to continue playing while the device is locked, a feature I really need. As a result I'm forced to consider a different category: AVAudioSessionCategoryPlayback.
By default this category does not permit mixing with other audio, according to the Apple docs. To enable mixing with other audio I am overriding the relevant category:
OSStatus propertySetError = 0;
UInt32 setProperty = 1;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(setProperty), &setProperty);
assert(propertySetError == 0);
Unfortunately, this solves my playing whilst locked issue but introduces another issue: the AVAudioPlayer audio is interrupted briefly as the video loads with a minor stutter. The stutter is small, perhaps less than a second but is enough to disrupt the user experience. I've read this related post which enabled me to pre-load the video with the AVAudioSessionCategoryAmbient, but unfortunately the same approach doesn't seem to work with the new category.
The audio session category is applied successfully, according to the return code. Does anyone know why enabling audio mixing with this category is not the same as the mixing facility provided by ambient category?
The best way I've found to work around a similar problem is to use the newer AVPlayer (+1 @adam) and set your app to enable background audio and receive remote control notifications. I was tipped off to this by @MarquelV following How can you play music from the iPod app while still receiving remote control events in your app?
If you can get backgrounding working properly, that should enable you to continue playing while the device is locked. Oh, and don't forget to add the required keys to Info.plist; it's easy to skip that and then have no idea why it isn't working.
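On iOS 6 and later, the C-level property override has an Objective-C equivalent on AVAudioSession. A hedged sketch (check the NSError in real code):

```objc
// Sketch: playback category that keeps playing while locked,
// mixed with other audio instead of interrupting it.
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                       error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];
```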

iPod mini controls disabled when certain audio session parameters are set

I'm working on a music visualizer for the iPhone/iPad, under iOS 3 you could double-tap the home button and get iPod controls. With the latest version 4.1-4.2, these controls are now grayed out when the home button is pressed. I found a similar complaint at http://openradar.appspot.com/8696944, although there wasn't a solution.
I have the base sound category set to kAudioSessionCategory_PlayAndRecord, with kAudioSessionProperty_OverrideCategoryMixWithOthers set to true. (Just to add more fun to the problem I'm using OpenAL for some sound effects.)
I have tried setting the category back to ambient when the application goes into the background, but either it happens too late or it's not sufficient.
Here's where I've got to so far:
AudioSessionInitialize(NULL, NULL, NULL, self);
UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
devicetwo = alcOpenDevice(NULL);
contexttwo = alcCreateContext(devicetwo, 0);
//The following two lines are the lines that gray out iPod controls:
alcMakeContextCurrent(contexttwo);
AudioSessionSetActive(YES);
The iPod controls remain grayed out even after the app quits, and removing the two culprit lines results in no sound being produced in the app.
Well, I've given up. I'm now coding my own UI based on the AddMusic sample code:
http://developer.apple.com/library/ios/#samplecode/AddMusic/Introduction/Intro.html%23//apple_ref/doc/uid/DTS40008845-Intro-DontLinkElementID_2
I'm happy to report that play and stop via the MPMusicPlayerController don't seem to conflict with the play-and-record session settings, and building your own play/pause/FF controls seems to be fairly straightforward.
P.S. I've also discovered that this Music Visualizer app: http://itunes.apple.com/us/app/music-visualizer/id337651694?mt=8 is just the AddMusic sample uploaded as-is, and this guy is charging 2 bucks for it. It's got awful reviews, but it still seems wrong that it's on the App Store.
My iPod touch 4G is running iOS 4.2, and it doesn't have this problem. I would attempt to contact Apple.

Audio/Voice Visualization

Hey you Objective-C bods.
Does anyone know how I would go about changing (transforming) an image based on the input from the Microphone on the iPhone?
i.e. When a user speaks into the Mic, the image will pulse or skew.
[edit] Anyone have any ideas? I have (what is basically) a voice recording app. I just wanted something to change as the voice input is provided. I've seen it in a sample project, but that wasn't with a UIImage. [/edit]
Thanking you!!
Apple put together some great frameworks for this! The AVFoundation framework and CoreAudio framework will be the most useful to you.
To get audio level information, AVAudioRecorder is useful. Although it is made for recording, it also provides levels data for the microphone. This would be useful for deforming your image based on how loud the user is shouting at his phone ;)
Here is the apple documentation for AVAudioRecorder: AVAudioRecorder Class Reference
A bit more detail:
// You will need an AVAudioRecorder object
AVAudioRecorder *myRecorderObject;
// To be able to get levels data from the microphone you need
// to enable metering for your recorder object before recording starts
myRecorderObject.meteringEnabled = YES;
[myRecorderObject prepareToRecord];
// Refresh the meter readings, then poll the microphone for levels data
[myRecorderObject updateMeters];
float peakPower = [myRecorderObject peakPowerForChannel:0];
float averagePower = [myRecorderObject averagePowerForChannel:0];
If you want to see a great example of how an AVAudioRecorder object can be used to get levels data, check out this tutorial.
As far as deforming your image goes, that would be up to an image library. There are a lot to choose from, including some great ones from Apple. I'm not familiar enough with them, though, so that might be for someone else to answer.
Best of luck!
You may try using the gl-data-visualization-view extensible framework to visualize your sound levels.

iPhone SDK audioSession question

In my app I record and play audio at the same time. The app is almost finished, but there is one thing that annoys me: when the audio session is set to PlayAndRecord, sounds become quiet in comparison with the same sounds under the SoloAmbient category. Is there any way to make the sound louder when using PlayAndRecord?
When you use the session for play and record, playback comes out of the receiver (the speaker used for phone calls); otherwise it comes out of the speaker located at the bottom of the phone. This is to prevent feedback. You can override this like so (but watch out for feedback; it's not an issue if you aren't playing and recording at once):
//when the category is play and record the playback comes out of the speaker used for phone conversation to avoid feedback
//change this to the normal or default speaker
UInt32 doChangeDefaultRoute = 1;
AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof (doChangeDefaultRoute), &doChangeDefaultRoute);
This code works on 3.1.2; on earlier SDKs you have to do it differently:
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
status = AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof (audioRouteOverride), &audioRouteOverride);
You have to be careful with this method: it will override the route even if you have headphones plugged in, so you have to monitor interruptions and route changes and adjust accordingly. It's much better now with 3.1.2.
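One way to monitor those route changes with the C API is a property listener registered after AudioSessionInitialize. A sketch (the callback name is mine):

```objc
// Sketch: get notified when the audio route changes (e.g. headphones
// unplugged), so the speaker override can be reapplied or dropped.
static void MyRouteChangeListener(void *inClientData,
                                  AudioSessionPropertyID inID,
                                  UInt32 inDataSize,
                                  const void *inData) {
    // Re-check the current route here and call
    // AudioSessionSetProperty again as appropriate.
}

// During setup, after AudioSessionInitialize:
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                MyRouteChangeListener, NULL);
```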
Ask the user to plug in headphones?
The headphone + mic combination doesn't suffer from this problem.
I don't know if it's a bug, a consequence of the audio hardware, or if the quiet playback is just an intentional and ham-fisted way of getting cleaner recordings.
UPDATE
I found out that setting the PlayAndRecord session changes your audio route to the receiver.
Apparently the use case is for telephony applications where the user holds the device up to his ear.
If that doesn't violate the Principle of Least Surprise, I don't know what does.