Variable Frame Rate in iPhone video

How can we change the frame rate of an existing video, as well as while capturing video on an iPhone?
Can we implement this using the AVFoundation framework or a third-party library?

Code for choosing a custom frame rate is below. I added checks to Apple's RosyWriter sample to verify that the current format supports the chosen FPS:
- (void)configureCamera:(AVCaptureDevice *)videoDevice withFrameRate:(int)desiredFrameRate
{
    BOOL isFPSSupported = NO;
    AVCaptureDeviceFormat *currentFormat = [videoDevice activeFormat];
    for ( AVFrameRateRange *range in currentFormat.videoSupportedFrameRateRanges ) {
        if ( range.maxFrameRate >= desiredFrameRate && range.minFrameRate <= desiredFrameRate ) {
            isFPSSupported = YES;
            break;
        }
    }

    if ( isFPSSupported ) {
        if ( [videoDevice lockForConfiguration:NULL] ) {
            videoDevice.activeVideoMaxFrameDuration = CMTimeMake( 1, desiredFrameRate );
            videoDevice.activeVideoMinFrameDuration = CMTimeMake( 1, desiredFrameRate );
            [videoDevice unlockForConfiguration];
        }
    }
}
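For reference, a minimal usage sketch (my own, not from RosyWriter; it assumes the capture controller implements the method above, and 60 fps is just an example value):
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Request 60 fps; the method above silently does nothing if the active
// format does not support that rate.
[self configureCamera:videoDevice withFrameRate:60];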
If the current format (activeFormat) doesn't support your chosen FPS, use the code below to change the activeFormat first and then choose the FPS. You will also need to check the format's dimensions against your needs, as sketched after the method.
- (void)configureCamera:(AVCaptureDevice *)device withFrameRate:(int)desiredFrameRate
{
    AVCaptureDeviceFormat *desiredFormat = nil;
    for ( AVCaptureDeviceFormat *format in [device formats] ) {
        for ( AVFrameRateRange *range in format.videoSupportedFrameRateRanges ) {
            if ( range.maxFrameRate >= desiredFrameRate && range.minFrameRate <= desiredFrameRate ) {
                desiredFormat = format;
                goto desiredFormatFound;
            }
        }
    }
desiredFormatFound:
    if ( desiredFormat ) {
        if ( [device lockForConfiguration:NULL] == YES ) {
            device.activeFormat = desiredFormat;
            device.activeVideoMinFrameDuration = CMTimeMake( 1, desiredFrameRate );
            device.activeVideoMaxFrameDuration = CMTimeMake( 1, desiredFrameRate );
            [device unlockForConfiguration];
        }
    }
}
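A hedged sketch of the dimension check mentioned above, to be dropped into the format loop (the 1280x720 threshold is only an example):
CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions( format.formatDescription );
// Skip formats below the resolution we need before checking frame rates.
if ( dims.width < 1280 || dims.height < 720 ) {
    continue;
}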
Note: using AVCaptureConnection's videoMinFrameDuration to set the FPS is deprecated.

How to compare data in iPhone SDK

I am using this condition, but it never enters the if branch; it always goes to the else branch.
if ([[dict2 valueForKey:@"meeting_location"] isEqual:@""])
{
    lbllocation.text = @"-----";
}
else
{
    lbllocation.text = [NSString stringWithFormat:@"Location: %@", [dict2 valueForKey:@"meeting_location"]];
}
Output from the web service:
"meeting_location" = "";
"meeting_time" = "";
"meeting_with" = "";
Use the code below:
if ([[dict2 valueForKey:@"meeting_location"] length] == 0)
{
    lbllocation.text = @"-----";
}
else
{
    lbllocation.text = [NSString stringWithFormat:@"Location: %@", [dict2 valueForKey:@"meeting_location"]];
}
Hopefully this will work here.
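One caveat the answer doesn't mention: if the web service returns a JSON null, dict2 may contain NSNull rather than a string, and calling length on it would crash. A defensive sketch, assuming the same dict2 and label:
id location = [dict2 valueForKey:@"meeting_location"];
// Treat non-strings (e.g. NSNull) and empty strings the same way.
if (![location isKindOfClass:[NSString class]] || [location length] == 0)
{
    lbllocation.text = @"-----";
}
else
{
    lbllocation.text = [NSString stringWithFormat:@"Location: %@", location];
}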

Move Cursor One Word at a Time in UITextView

I would like to create a button that moves the cursor position in a UITextView one word at a time. From a user perspective, this would be the same as Option-Right Arrow in Mac OS X, which is defined as "go to the word to the right of the insertion point."
I have found a couple of ways to move one character at a time. How would you modify this to move one word at a time?
- (IBAction)rightArrowButtonPressed:(id)sender
{
    myTextView.selectedRange = NSMakeRange(myTextView.selectedRange.location + 1, 0);
}
Thanks for any suggestions.
I was able to implement it like this:
- (IBAction)nextWord {
    NSRange selectedRange = self.textView.selectedRange;
    NSInteger currentLocation = selectedRange.location + selectedRange.length;
    NSInteger textLength = [self.textView.text length];
    if ( currentLocation == textLength ) {
        return;
    }

    NSRange newRange = [self.textView.text rangeOfCharacterFromSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]
                                                           options:NSCaseInsensitiveSearch
                                                             range:NSMakeRange((currentLocation + 1), (textLength - 1 - currentLocation))];
    if ( newRange.location != NSNotFound ) {
        self.textView.selectedRange = NSMakeRange(newRange.location, 0);
    } else {
        self.textView.selectedRange = NSMakeRange(textLength, 0);
    }
}

- (IBAction)previousWord {
    NSRange selectedRange = self.textView.selectedRange;
    NSInteger currentLocation = selectedRange.location;
    if ( currentLocation == 0 ) {
        return;
    }

    NSRange newRange = [self.textView.text rangeOfCharacterFromSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]
                                                           options:NSBackwardsSearch
                                                             range:NSMakeRange(0, (currentLocation - 1))];
    if ( newRange.location != NSNotFound ) {
        self.textView.selectedRange = NSMakeRange((newRange.location + 1), 0);
    } else {
        self.textView.selectedRange = NSMakeRange(0, 0);
    }
}
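An alternative sketch, not from the original answer: UITextView conforms to UITextInput (iOS 5+), so its built-in tokenizer can jump to word boundaries directly, which also handles punctuation. The method calls are the UITextInput API; the action name is mine:
- (IBAction)nextWordUsingTokenizer {
    // Ask the built-in tokenizer for the next word boundary after the caret.
    id<UITextInputTokenizer> tokenizer = self.textView.tokenizer;
    UITextPosition *caret = self.textView.selectedTextRange.end;
    UITextPosition *next = [tokenizer positionFromPosition:caret
                                                toBoundary:UITextGranularityWord
                                               inDirection:UITextStorageDirectionForward];
    if ( next != nil ) {
        self.textView.selectedTextRange = [self.textView textRangeFromPosition:next toPosition:next];
    }
}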

Display local UIImage on UIWebView

My setup:
I have a webview in my iPhone app, going to www.mysite.com.
Trials
If I set the web view's backgroundColor to [UIColor clearColor] and draw a UIImage behind the web view, the image won't scroll with the web view!
My problem:
I would like the image to scroll with the web view. Also, this image should be stored in the iPhone app, not on the web server (of course, if I put the image on the web server I can simply draw it as a background with CSS).
What is the best way to access this image via HTML/JavaScript/Objective-C, so that I can use this local image as the background of my website? Is there a way at all?
You have a couple of options. You can load the page with a baseURL of
[[NSBundle mainBundle] bundleURL]
or you can use JavaScript to set the path to your image using the following method:
[[NSBundle mainBundle] URLForResource:@"myimage" withExtension:@"png"]
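For the first option, a minimal sketch (assuming you build the HTML yourself rather than pointing the web view at the remote URL; the file name is a placeholder):
// Relative references such as url(background.png) then resolve to bundle resources.
NSString *html = @"<body style=\"background-image:url(background.png)\">Hello</body>";
[webView loadHTMLString:html baseURL:[[NSBundle mainBundle] bundleURL]];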
Edit: Check here: Link to resources inside WebView - iPhone. It looks like you will use pathForResource rather than URLForResource.
Edit 2: You may want to use data URIs to add your own local files to the web site.
YourCode.m
#import "NSString+DataURI.h"
#import "NSData+Base64.h"
...
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    NSString *imgPath = [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"];
    NSData *imgData = [NSData dataWithContentsOfFile:imgPath];
    NSString *imgB64 = [[imgData base64Encoding] pngDataURIWithContent];
    NSString *javascript = [NSString stringWithFormat:@"document.body.style.backgroundImage = 'url(%@)';", imgB64];
    [webView stringByEvaluatingJavaScriptFromString:javascript];
}
The following code I DID NOT WRITE, and I am not sure of its origin:
NSData+Base64.h
@interface NSData (Base64)
+ (NSData *)dataWithBase64EncodedString:(NSString *)string;
- (id)initWithBase64EncodedString:(NSString *)string;
- (NSString *)base64Encoding;
- (NSString *)base64EncodingWithLineLength:(unsigned int)lineLength;
@end
NSData+Base64.m
#import "NSData+Base64.h"

static char encodingTable[64] = {
    'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P',
    'Q','R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f',
    'g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v',
    'w','x','y','z','0','1','2','3','4','5','6','7','8','9','+','/' };

@implementation NSData (Base64)

- (id)initWithString:(NSString *)string {
    if ((self = [super init])) {
        [self initWithBase64EncodedString:string];
    }
    return self;
}

+ (NSData *)dataWithBase64EncodedString:(NSString *)string {
    return [[[NSData allocWithZone:nil] initWithBase64EncodedString:string] autorelease];
}

- (id)initWithBase64EncodedString:(NSString *)string {
    NSMutableData *mutableData = nil;
    if( string ) {
        unsigned long ixtext = 0;
        unsigned long lentext = 0;
        unsigned char ch = 0;
        unsigned char inbuf[4], outbuf[3];
        short i = 0, ixinbuf = 0;
        BOOL flignore = NO;
        BOOL flendtext = NO;
        NSData *base64Data = nil;
        const unsigned char *base64Bytes = nil;

        // Convert the string to ASCII data.
        base64Data = [string dataUsingEncoding:NSASCIIStringEncoding];
        base64Bytes = [base64Data bytes];
        mutableData = [NSMutableData dataWithCapacity:[base64Data length]];
        lentext = [base64Data length];

        while( YES ) {
            if( ixtext >= lentext ) break;
            ch = base64Bytes[ixtext++];
            flignore = NO;

            if( ( ch >= 'A' ) && ( ch <= 'Z' ) ) ch = ch - 'A';
            else if( ( ch >= 'a' ) && ( ch <= 'z' ) ) ch = ch - 'a' + 26;
            else if( ( ch >= '0' ) && ( ch <= '9' ) ) ch = ch - '0' + 52;
            else if( ch == '+' ) ch = 62;
            else if( ch == '=' ) flendtext = YES;
            else if( ch == '/' ) ch = 63;
            else flignore = YES;

            if( ! flignore ) {
                short ctcharsinbuf = 3;
                BOOL flbreak = NO;

                if( flendtext ) {
                    if( ! ixinbuf ) break;
                    if( ( ixinbuf == 1 ) || ( ixinbuf == 2 ) ) ctcharsinbuf = 1;
                    else ctcharsinbuf = 2;
                    ixinbuf = 3;
                    flbreak = YES;
                }

                inbuf[ixinbuf++] = ch;

                if( ixinbuf == 4 ) {
                    ixinbuf = 0;
                    outbuf[0] = ( inbuf[0] << 2 ) | ( ( inbuf[1] & 0x30 ) >> 4 );
                    outbuf[1] = ( ( inbuf[1] & 0x0F ) << 4 ) | ( ( inbuf[2] & 0x3C ) >> 2 );
                    outbuf[2] = ( ( inbuf[2] & 0x03 ) << 6 ) | ( inbuf[3] & 0x3F );
                    for( i = 0; i < ctcharsinbuf; i++ )
                        [mutableData appendBytes:&outbuf[i] length:1];
                }

                if( flbreak ) break;
            }
        }
    }

    self = [self initWithData:mutableData];
    return self;
}

#pragma mark -

- (NSString *)base64Encoding {
    return [self base64EncodingWithLineLength:0];
}

- (NSString *)base64EncodingWithLineLength:(unsigned int)lineLength {
    const unsigned char *bytes = [self bytes];
    NSMutableString *result = [NSMutableString stringWithCapacity:[self length]];
    unsigned long ixtext = 0;
    unsigned long lentext = [self length];
    long ctremaining = 0;
    unsigned char inbuf[3], outbuf[4];
    unsigned short i = 0;
    unsigned short charsonline = 0, ctcopy = 0;
    unsigned long ix = 0;

    while( YES ) {
        ctremaining = lentext - ixtext;
        if( ctremaining <= 0 ) break;

        for( i = 0; i < 3; i++ ) {
            ix = ixtext + i;
            if( ix < lentext ) inbuf[i] = bytes[ix];
            else inbuf[i] = 0;
        }

        outbuf[0] = (inbuf[0] & 0xFC) >> 2;
        outbuf[1] = ((inbuf[0] & 0x03) << 4) | ((inbuf[1] & 0xF0) >> 4);
        outbuf[2] = ((inbuf[1] & 0x0F) << 2) | ((inbuf[2] & 0xC0) >> 6);
        outbuf[3] = inbuf[2] & 0x3F;
        ctcopy = 4;

        switch( ctremaining ) {
            case 1:
                ctcopy = 2;
                break;
            case 2:
                ctcopy = 3;
                break;
        }

        for( i = 0; i < ctcopy; i++ )
            [result appendFormat:@"%c", encodingTable[outbuf[i]]];
        for( i = ctcopy; i < 4; i++ )
            [result appendString:@"="];

        ixtext += 3;
        charsonline += 4;

        if( lineLength > 0 ) {
            if( charsonline >= lineLength ) {
                charsonline = 0;
                [result appendString:@"\n"];
            }
        }
    }

    return [NSString stringWithString:result];
}

@end
NSString+DataURI.h
#import <Foundation/Foundation.h>

@interface NSString (DataURI)
- (NSString *)pngDataURIWithContent;
- (NSString *)jpgDataURIWithContent;
@end
NSString+DataURI.m
#import "NSString+DataURI.h"

@implementation NSString (DataURI)

- (NSString *)pngDataURIWithContent
{
    NSString *result = [NSString stringWithFormat:@"data:image/png;base64,%@", self];
    return result;
}

- (NSString *)jpgDataURIWithContent
{
    NSString *result = [NSString stringWithFormat:@"data:image/jpg;base64,%@", self];
    return result;
}

@end
Alternatively, you could simply add this line:
webView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"light.jpg"]];
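One hedged note on that approach: the web view usually also has to be made non-opaque, or the page's own white background will cover the pattern image.
webView.opaque = NO;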

How can I get AAC encoding with ExtAudioFile on iOS to work?

I need to convert a WAVE file into an AAC encoded M4A file on iOS. I'm aware that AAC encoding is not supported on older devices or in the simulator. I'm testing that before I run the code. But I still can't get it to work.
I looked into Apple's very own iPhoneExtAudioFileConvertTest example and I thought I followed it exactly, but still no luck!
Currently, I get a -50 (= error in user parameter list) while trying to set the client format on the destination file. On the source file, it works.
Below is my code. Any help is very much appreciated, thanks!
UInt32 size;
OSStatus result = noErr;

// Open a source audio file.
ExtAudioFileRef sourceAudioFile;
ExtAudioFileOpenURL( (CFURLRef)sourceURL, &sourceAudioFile );

// Get the source data format.
AudioStreamBasicDescription sourceFormat;
size = sizeof( sourceFormat );
result = ExtAudioFileGetProperty( sourceAudioFile, kExtAudioFileProperty_FileDataFormat, &size, &sourceFormat );

// Define the output format (AAC).
AudioStreamBasicDescription outputFormat;
outputFormat.mFormatID = kAudioFormatMPEG4AAC;
outputFormat.mSampleRate = 44100;
outputFormat.mChannelsPerFrame = 2;

// Use the AudioFormat API to fill out the rest of the description.
size = sizeof( outputFormat );
AudioFormatGetProperty( kAudioFormatProperty_FormatInfo, 0, NULL, &size, &outputFormat );

// Make a destination audio file with this output format.
ExtAudioFileRef destAudioFile;
ExtAudioFileCreateWithURL( (CFURLRef)destURL, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &destAudioFile );

// Create the canonical PCM client format.
AudioStreamBasicDescription clientFormat;
clientFormat.mSampleRate = sourceFormat.mSampleRate;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
clientFormat.mChannelsPerFrame = 2;
clientFormat.mBitsPerChannel = 16;
clientFormat.mBytesPerFrame = 4;
clientFormat.mBytesPerPacket = 4;
clientFormat.mFramesPerPacket = 1;

// Set the client format on the source and destination files.
size = sizeof( clientFormat );
ExtAudioFileSetProperty( sourceAudioFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat );
size = sizeof( clientFormat );
ExtAudioFileSetProperty( destAudioFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat );

// Make a buffer.
int bufferSizeInFrames = 8000;
int bufferSize = ( bufferSizeInFrames * sourceFormat.mBytesPerFrame );
UInt8 *buffer = (UInt8 *)malloc( bufferSize );
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
bufferList.mBuffers[0].mData = buffer;
bufferList.mBuffers[0].mDataByteSize = bufferSize;

while( TRUE )
{
    // Try to fill the buffer to capacity.
    UInt32 framesRead = bufferSizeInFrames;
    ExtAudioFileRead( sourceAudioFile, &framesRead, &bufferList );

    // 0 frames read means EOF.
    if( framesRead == 0 )
        break;

    // Write.
    ExtAudioFileWrite( destAudioFile, framesRead, &bufferList );
}

free( buffer );

// Close the files.
ExtAudioFileDispose( sourceAudioFile );
ExtAudioFileDispose( destAudioFile );
Answered my own question: I had to pass this problem to a colleague and he got it to work! I never had the chance to analyze my original problem, but I'm posting his solution here for the sake of completeness. The following method is called from within an NSThread; parameters are set via the threadDictionary, and he created a custom delegate to report progress:
- (void)encodeToAAC
{
    RXAudioEncoderStatusType encoderStatus;
    OSStatus result = noErr;
    BOOL success = NO;
    BOOL cancelled = NO;
    UInt32 size;
    ExtAudioFileRef sourceAudioFile, destAudioFile;
    AudioStreamBasicDescription sourceFormat, outputFormat, clientFormat;
    SInt64 totalFrames;
    unsigned long long encodedBytes, totalBytes;
    int bufferSizeInFrames, bufferSize;
    UInt8 *buffer;
    AudioBufferList bufferList;

    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSFileManager *fileManager = [[[NSFileManager alloc] init] autorelease];
    NSMutableDictionary *threadDict = [[NSThread currentThread] threadDictionary];
    NSObject<RXAudioEncodingDelegate> *delegate = (NSObject<RXAudioEncodingDelegate> *)[threadDict objectForKey:@"Delegate"];
    NSString *sourcePath = (NSString *)[threadDict objectForKey:@"SourcePath"];
    NSString *destPath = (NSString *)[threadDict objectForKey:@"DestinationPath"];
    NSURL *sourceURL = [NSURL fileURLWithPath:sourcePath];
    NSURL *destURL = [NSURL fileURLWithPath:destPath];

    // Open a source audio file.
    result = ExtAudioFileOpenURL( (CFURLRef)sourceURL, &sourceAudioFile );
    if( result != noErr )
    {
        DLog( @"Error in ExtAudioFileOpenURL: %ld", result );
        goto bailout;
    }

    // Get the source data format.
    size = sizeof( sourceFormat );
    result = ExtAudioFileGetProperty( sourceAudioFile, kExtAudioFileProperty_FileDataFormat, &size, &sourceFormat );
    if( result != noErr )
    {
        DLog( @"Error in ExtAudioFileGetProperty: %ld", result );
        goto bailout;
    }

    // Define the output format (AAC).
    memset( &outputFormat, 0, sizeof(outputFormat) );
    outputFormat.mFormatID = kAudioFormatMPEG4AAC;
    outputFormat.mSampleRate = 44100;
    outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
    outputFormat.mChannelsPerFrame = 2;
    outputFormat.mBitsPerChannel = 0;
    outputFormat.mBytesPerFrame = 0;
    outputFormat.mBytesPerPacket = 0;
    outputFormat.mFramesPerPacket = 1024;

    // Use the AudioFormat API to fill out the rest of the description.
    //size = sizeof( outputFormat );
    //AudioFormatGetProperty( kAudioFormatProperty_FormatInfo, 0, NULL, &size, &outputFormat );

    // Make a destination audio file with this output format.
    result = ExtAudioFileCreateWithURL( (CFURLRef)destURL, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &destAudioFile );
    if( result != noErr )
    {
        DLog( @"Error creating destination file: %ld", result );
        goto bailout;
    }

    // Create the canonical PCM client format.
    memset( &clientFormat, 0, sizeof(clientFormat) );
    clientFormat.mSampleRate = sourceFormat.mSampleRate;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; //kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel = 16;
    clientFormat.mBytesPerFrame = 4;
    clientFormat.mBytesPerPacket = 4;
    clientFormat.mFramesPerPacket = 1;

    // Set the client format on the source and destination files.
    size = sizeof( clientFormat );
    result = ExtAudioFileSetProperty( sourceAudioFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat );
    if( result != noErr )
    {
        DLog( @"Error while setting client format in source file: %ld", result );
        goto bailout;
    }
    size = sizeof( clientFormat );
    result = ExtAudioFileSetProperty( destAudioFile, kExtAudioFileProperty_ClientDataFormat, size, &clientFormat );
    if( result != noErr )
    {
        DLog( @"Error while setting client format in destination file: %ld", result );
        goto bailout;
    }

    // Make a buffer.
    bufferSizeInFrames = 8000;
    bufferSize = ( bufferSizeInFrames * sourceFormat.mBytesPerFrame );
    buffer = (UInt8 *)malloc( bufferSize );
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
    bufferList.mBuffers[0].mData = buffer;
    bufferList.mBuffers[0].mDataByteSize = bufferSize;

    // Obtain the total number of audio frames to encode.
    size = sizeof( totalFrames );
    result = ExtAudioFileGetProperty( sourceAudioFile, kExtAudioFileProperty_FileLengthFrames, &size, &totalFrames );
    if( result != noErr )
    {
        DLog( @"Error in ExtAudioFileGetProperty, could not get kExtAudioFileProperty_FileLengthFrames from sourceFile: %ld", result );
        goto bailout;
    }

    encodedBytes = 0;
    totalBytes = totalFrames * sourceFormat.mBytesPerFrame;
    [threadDict setValue:[NSValue value:&totalBytes withObjCType:@encode(unsigned long long)] forKey:@"TotalBytes"];

    if (delegate != nil)
        [self performSelectorOnMainThread:@selector(didStartEncoding) withObject:nil waitUntilDone:NO];

    while( TRUE )
    {
        // Try to fill the buffer to capacity.
        UInt32 framesRead = bufferSizeInFrames;
        result = ExtAudioFileRead( sourceAudioFile, &framesRead, &bufferList );
        if( result != noErr )
        {
            DLog( @"Error in ExtAudioFileRead: %ld", result );
            success = NO;
            break;
        }

        // 0 frames read means EOF.
        if( framesRead == 0 ) {
            success = YES;
            break;
        }

        // Write.
        result = ExtAudioFileWrite( destAudioFile, framesRead, &bufferList );
        if( result != noErr )
        {
            DLog( @"Error in ExtAudioFileWrite: %ld", result );
            success = NO;
            break;
        }

        encodedBytes += framesRead * sourceFormat.mBytesPerFrame;
        if (delegate != nil)
            [self performSelectorOnMainThread:@selector(didEncodeBytes:) withObject:[NSValue value:&encodedBytes withObjCType:@encode(unsigned long long)] waitUntilDone:NO];

        if ([[NSThread currentThread] isCancelled]) {
            cancelled = YES;
            DLog( @"Encoding was cancelled." );
            success = NO;
            break;
        }
    }

    free( buffer );

    // Close the files.
    ExtAudioFileDispose( sourceAudioFile );
    ExtAudioFileDispose( destAudioFile );

bailout:
    encoderStatus.result = result;
    [threadDict setValue:[NSValue value:&encoderStatus withObjCType:@encode(RXAudioEncoderStatusType)] forKey:@"EncodingError"];

    // Report to the delegate if one exists.
    if (delegate != nil) {
        if (success)
            [self performSelectorOnMainThread:@selector(didEncodeFile) withObject:nil waitUntilDone:YES];
        else if (cancelled)
            [self performSelectorOnMainThread:@selector(encodingCancelled) withObject:nil waitUntilDone:YES];
        else
            [self performSelectorOnMainThread:@selector(failedToEncodeFile) withObject:nil waitUntilDone:YES];
    }

    // Clear the partially encoded file if encoding failed or was cancelled midway.
    if ((cancelled || !success) && [fileManager fileExistsAtPath:destPath])
        [fileManager removeItemAtURL:destURL error:NULL];

    [threadDict setValue:[NSNumber numberWithBool:NO] forKey:@"isEncoding"];
    [pool release];
}
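For context, a hedged sketch of how this method might be launched, pieced together from the answer's description (the dictionary keys match the code above; sourcePath, destPath, and delegate are assumed to exist in the caller):
NSThread *encodeThread = [[NSThread alloc] initWithTarget:self
                                                 selector:@selector(encodeToAAC)
                                                   object:nil];
// Hand the parameters to the worker via its threadDictionary.
NSMutableDictionary *threadDict = [encodeThread threadDictionary];
[threadDict setObject:sourcePath forKey:@"SourcePath"];
[threadDict setObject:destPath forKey:@"DestinationPath"];
[threadDict setObject:delegate forKey:@"Delegate"];
[encodeThread start];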
Are you sure the sample rates match? Can you print the values for clientFormat and outputFormat at the point you’re getting the error? Otherwise I think you might need an AudioConverter.
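If you want to follow that debugging suggestion, a small sketch for dumping an AudioStreamBasicDescription (a plain C helper; the name is mine):
static void PrintASBD(const char *label, const AudioStreamBasicDescription *asbd)
{
    // mFormatID is a four-char code; swap to big-endian so the bytes print in order.
    UInt32 fmt = CFSwapInt32HostToBig(asbd->mFormatID);
    char formatID[5] = {0};
    memcpy(formatID, &fmt, 4);
    NSLog(@"%s: %.0f Hz '%s' flags=0x%lx bytes/pkt=%lu frames/pkt=%lu bytes/frame=%lu ch=%lu bits=%lu",
          label, asbd->mSampleRate, formatID,
          (unsigned long)asbd->mFormatFlags, (unsigned long)asbd->mBytesPerPacket,
          (unsigned long)asbd->mFramesPerPacket, (unsigned long)asbd->mBytesPerFrame,
          (unsigned long)asbd->mChannelsPerFrame, (unsigned long)asbd->mBitsPerChannel);
}
Call it as PrintASBD("clientFormat", &clientFormat); right before the failing ExtAudioFileSetProperty call.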
I tried out the code in Sebastian's answer, and while it worked for uncompressed files (AIF, WAV, CAF), it didn't for a lossy compressed file (MP3). I also got an error code of -50, but in ExtAudioFileRead rather than ExtAudioFileSetProperty. From this question I learned that this error signifies a problem with the function parameters. It turns out the buffer for reading the audio file had a size of 0 bytes, a result of this line:
int bufferSize = ( bufferSizeInFrames * sourceFormat.mBytesPerFrame );
Switching it to use the bytes per frame from clientFormat instead (sourceFormat's value was 0) worked for me:
int bufferSize = ( bufferSizeInFrames * clientFormat.mBytesPerFrame );
This line was also in the question's code, but I don't think that was the problem there (I just had too much text for a comment).
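Putting it together, a hedged sketch of the fixed buffer setup (assuming the clientFormat configured earlier in the answer):
// For compressed sources (e.g. MP3) sourceFormat.mBytesPerFrame can be 0, which
// would make the read buffer empty. The PCM client format always has a valid
// bytes-per-frame value, so size the buffer from it instead.
int bufferSizeInFrames = 8000;
int bufferSize = ( bufferSizeInFrames * clientFormat.mBytesPerFrame );
UInt8 *buffer = (UInt8 *)malloc( bufferSize );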

Detect if headphones (not microphone) are plugged in to an iOS device [duplicate]

This question already has answers here: Are headphones plugged in? iOS7 (8 answers). Closed 6 years ago.
I need to change my audio depending on whether or not headphones are plugged in. I'm aware of kAudioSessionProperty_AudioInputAvailable, which will tell me if there's a microphone, but I'd like to test for any headphones, not just headphones with a built-in microphone. Is this possible?
Here is a method of my own, a slightly modified version of one found on this site: http://www.iphonedevsdk.com/forum/iphone-sdk-development/9982-play-record-same-time.html
- (BOOL)isHeadsetPluggedIn {
    UInt32 routeSize = sizeof(CFStringRef);
    CFStringRef route;
    OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute,
                                             &routeSize,
                                             &route);
    /* Known values of route:
     * "Headset"
     * "Headphone"
     * "Speaker"
     * "SpeakerAndMicrophone"
     * "HeadphonesAndMicrophone"
     * "HeadsetInOut"
     * "ReceiverAndMicrophone"
     * "Lineout"
     */
    if (!error && (route != NULL)) {
        NSString *routeStr = (NSString *)route;
        NSRange headphoneRange = [routeStr rangeOfString:@"Head"];
        if (headphoneRange.location != NSNotFound) return YES;
    }
    return NO;
}
Here's a solution based on rob mayoff's comment:
- (BOOL)isHeadsetPluggedIn
{
    AVAudioSessionRouteDescription *route = [[AVAudioSession sharedInstance] currentRoute];
    BOOL headphonesLocated = NO;
    for( AVAudioSessionPortDescription *portDescription in route.outputs )
    {
        headphonesLocated |= [portDescription.portType isEqualToString:AVAudioSessionPortHeadphones];
    }
    return headphonesLocated;
}
Simply link to the AVFoundation framework.
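A follow-up sketch (my own, assuming the same iOS 6+ AVAudioSession API): you can also observe route changes so the check reruns when headphones are plugged in or pulled out:
[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionRouteChangeNotification
                                                  object:[AVAudioSession sharedInstance]
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    // Re-evaluate the output route whenever it changes.
    if ([self isHeadsetPluggedIn]) {
        // headphones present
    } else {
        // back on the built-in speaker or receiver
    }
}];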
Just a heads-up for any future readers of this post: most of the Audio Session (AudioToolbox) functions used below have been deprecated since iOS 7 without a direct replacement, so audio-route listeners built on them are now largely redundant.
I started with the code given above by jpsetung, but there were a few issues with it for my use case:
- No evidence of something called kAudioSessionProperty_AudioRoute in the docs
- Leaks route
- No audio session check
- String check for "headphones" instead of logical awareness of categories
I was more interested in whether the iPhone was using its speakers, with "headphones" meaning "anything other than speakers". I feel that leaving out options like "bluetooth", "airplay", or "lineout" was dangerous.
This implementation broadens the check to allow for any type of specified output:
BOOL isAudioRouteAvailable(CFStringRef routeType)
{
    /*
     As of iOS 5:
     kAudioSessionOutputRoute_LineOut;
     kAudioSessionOutputRoute_Headphones;
     kAudioSessionOutputRoute_BluetoothHFP;
     kAudioSessionOutputRoute_BluetoothA2DP;
     kAudioSessionOutputRoute_BuiltInReceiver;
     kAudioSessionOutputRoute_BuiltInSpeaker;
     kAudioSessionOutputRoute_USBAudio;
     kAudioSessionOutputRoute_HDMI;
     kAudioSessionOutputRoute_AirPlay;
     */

    //Prep
    BOOL foundRoute = NO;
    CFDictionaryRef description = NULL;

    //Session
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        AudioSessionInitialize(NULL, NULL, NULL, NULL);
    });

    //Property
    UInt32 propertySize;
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &propertySize);
    OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRouteDescription, &propertySize, &description);
    if ( !error && description ) {
        CFArrayRef outputs = CFDictionaryGetValue(description, kAudioSession_AudioRouteKey_Outputs);
        CFIndex count = outputs ? CFArrayGetCount(outputs) : 0;
        if ( outputs && count ) {
            for (CFIndex i = 0; i < count; i++) {
                CFDictionaryRef route = CFArrayGetValueAtIndex(outputs, i);
                CFStringRef type = CFDictionaryGetValue(route, kAudioSession_AudioRouteKey_Type);
                NSLog(@"Got audio route %@", type);

                //Audio route type
                if ( CFStringCompare(type, routeType, 0) == kCFCompareEqualTo ) {
                    foundRoute = YES;
                    break;
                }
            }
        }
    } else if ( error ) {
        NSLog(@"Audio route error %ld", error);
    }

    //Cleanup
    if ( description ) {
        CFRelease(description);
    }

    //Done
    return foundRoute;
}
Used like so:
if ( isAudioRouteAvailable(kAudioSessionOutputRoute_BuiltInSpeaker) ) {
    //Do great things...
}