iPhone App Pick Up Sound

I am trying to do a certain action based on whether or not the user makes a loud sound. I'm not trying to do any voice recognition or anything. Just simply do an action based on whether the iPhone picks up a loud sound.
Any suggestions or tutorials? I can't find anything on the Apple developer site; I'm assuming I'm not looking or searching for the right thing.

The easiest thing for you to do is to use Audio Queue Services. Here's the manual:
Apple AQ manual
Basically, look for any example code that initializes things with AudioQueueNewInput(). Something like this:
Status = AudioQueueNewInput(&_Description,
                            Audio_Input_Buffer_Ready,
                            self,
                            NULL,
                            NULL,
                            0,
                            &self->Queue);
Once you have that going, you can enable sound level metering with something like this:
// Turn on level metering (iOS 2.0 and later)
UInt32 on = 1;
AudioQueueSetProperty(self->Queue,kAudioQueueProperty_EnableLevelMetering,&on,sizeof(on));
You will have a callback routine that is invoked for each chunk of audio data. In it, you can check the current meter levels with something like this:
//
// Check metering levels and detect silence
//
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
Status = AudioQueueGetProperty(_Queue, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (Status == 0) {
    if (meters[0].mPeakPower > _threshold) {
        silence = 0.0; // reset silence timer
    } else {
        silence += time;
    }
}
//
// Notify observers of incoming data.
//
if (delegate) {
    [delegate audioMeter:meters[0].mPeakPower duration:time];
    [delegate audioData:Buffer->mAudioData size:Buffer->mAudioDataByteSize];
}
Or, in your case, instead of silence you can detect whether the decibel level stays above a certain value for long enough. Note that the decibel values you will see range from about -70.0 dB for dead silence up to 0.0 dB for very loud sounds, on a logarithmic scale. You'll have to play with it to see what values work for your particular application.
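For example, here is a minimal sketch of that check inside the same callback, assuming hypothetical ivars _loudThreshold and _loudTime and a hypothetical -triggerLoudSoundAction method (time is the duration of the current buffer, as in the snippet above):
// Sketch: accumulate how long the input has stayed above the threshold
// and fire the action once it has been loud for long enough.
if (meters[0].mPeakPower > _loudThreshold) {      // e.g. -20.0 dB
    _loudTime += time;
    if (_loudTime >= 0.25) {                      // loud for a quarter of a second
        [self triggerLoudSoundAction];            // hypothetical handler
        _loudTime = 0.0;
    }
} else {
    _loudTime = 0.0;                              // dropped below the threshold; reset
}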

Apple has sample code such as SpeakHere, which looks to have code relating to decibels. I would check some of the metering classes for examples. I have no audio programming experience, but hopefully that will get you started while someone provides a better answer.

Related

iOS WatchOS5 - how to detect programmatically if Apple Watch was on the wrist (worn) at specific time interval?

I'm interested in whether there is some HealthKit or other data source I can query to know if the Apple Watch was worn/in contact with the wrist during a given time interval. Currently I'm relying on a HealthKit query for heart rate, and it appears that if I get no heart rate readings within a certain window, the watch was most likely off the wrist or charging.
Is there a better way to detect if the Apple Watch was worn on the wrist?
The problem with this method is that it is not very precise: if the user put on the watch at the last minute and got a measurement, this logic would consider the entire period as having the watch on. Is there something better?
// obtain heartRateSamples from HealthKit and filter them
let hrFilterStart = startDate.addingTimeInterval(startSecondsOffset)
let hrFilterEnd = hrFilterStart.addingTimeInterval(Double(30 * 60))
let heartRateDuringTimeSlice = heartRateSamples.filter { sample -> Bool in
    let fallsBetween = (hrFilterStart ... hrFilterEnd).contains(sample.startDate)
    return fallsBetween
}
if heartRateDuringTimeSlice.count == 0 {
    // watch is not on the wrist - probably charging; ignore this interval
}
HealthKit does not expose any information that you can use to reliably determine whether the Apple Watch was on-wrist. Using the presence of heart rate or other automatically collected samples will work well enough for most users, but keep in mind that there are situations where heart rate samples might not be collected at a consistent frequency even when the watch is on-wrist.

MediaPlayer.framework: How to "translate" MPMusicRepeatModeDefault into an actual mode?

As stated in Apple's documentation:
enum {
    MPMusicRepeatModeDefault,
    MPMusicRepeatModeNone,
    MPMusicRepeatModeOne,
    MPMusicRepeatModeAll
};
typedef NSInteger MPMusicRepeatMode;
Yet MPMusicRepeatModeDefault is described as "The user's preferred repeat mode." Since I am writing a music player, I need to know the current repeat mode at all times. When this value is returned, which of the "actual" modes:
MPMusicRepeatModeNone
MPMusicRepeatModeOne
MPMusicRepeatModeAll
should be chosen? Or is there no way to get that information?
My understanding is that MPMusicRepeatModeDefault is only used for instantiating your own player as described here.
MPMusicPlayerController* appMusicPlayer = [MPMusicPlayerController applicationMusicPlayer];
// Use whatever the user has set in their iPod settings
// Omitting this line has no real effect because deferring to the
// user mode is the default setting for new players
[appMusicPlayer setRepeatMode: MPMusicRepeatModeDefault];
If you want to know what that default setting actually is, you should be able to get it from the iPodMusicPlayer instance:
MPMusicPlayerController* iPodMusicPlayer =
[MPMusicPlayerController iPodMusicPlayer];
MPMusicRepeatMode theDefaultMode = [iPodMusicPlayer repeatMode];
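Putting the two together, one way to resolve a Default value into a concrete mode is to fall back to the iPod player's setting. A rough sketch; the final fallback to MPMusicRepeatModeNone is an assumption, not documented behaviour:
// Resolve MPMusicRepeatModeDefault into one of the concrete modes.
MPMusicRepeatMode mode = [appMusicPlayer repeatMode];
if (mode == MPMusicRepeatModeDefault) {
    // Defer to the user's iPod setting.
    mode = [[MPMusicPlayerController iPodMusicPlayer] repeatMode];
}
if (mode == MPMusicRepeatModeDefault) {
    // Still not a concrete mode; pick a sensible fallback (assumption).
    mode = MPMusicRepeatModeNone;
}
switch (mode) {
    case MPMusicRepeatModeNone: /* ... */ break;
    case MPMusicRepeatModeOne:  /* ... */ break;
    case MPMusicRepeatModeAll:  /* ... */ break;
    default: break;
}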

How to use kAudioUnitSubType_LowShelfFilter of kAudioUnitType_Effect, which controls bass, in Core Audio?

I'm back with one more question related to bass. I had already posted this question, How Can we control bass of music in iPhone, but it did not get as much attention from you people as it should have. Since then I have done some more searching and read up on Core Audio. I found some sample code that I want to share with you; here is the link to download it: iPhoneMixerEqGraphTest. Have a look at it: what I saw in this code is that the developer uses the preset equalizer provided by Apple's iPod unit. Let's look at a code snippet too:
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
What kAudioUnitSubType_AUiPodEQ does is fetch the preset values from the iPod equalizer and return them to us as an array, which we can show in a PickerView/TableView to set a category like Bass, Rock, Dance, etc. That is no help to me, as it only returns the names of the equalizer presets like Bass, Rock, Dance, etc., and I want to implement bass only and control it with a UISlider.
To put bass on a slider I need values, so that I can set a minimum and a maximum and change the bass as the slider moves.
After all this I started reading the classes of Core Audio's Audio Unit framework and found this, and after that I started searching for bass control and found this.
So now I need to use kAudioUnitSubType_LowShelfFilter, but I don't know how to use this subtype in my code so that I can control the bass as described in the documentation. Even Apple has not written how we can use it. The kAudioUnitSubType_AUiPodEQ subtype returned us an array, but the kAudioUnitSubType_LowShelfFilter subtype does not return any array. With kAudioUnitSubType_AUiPodEQ we can pick equalizer types from an array, but how can we use kAudioUnitSubType_LowShelfFilter? Can anybody help me with this in any way? It would be highly appreciated.
Thanks.
Update
Although it's declared in the iOS headers, the Low Shelf AU is not actually available on iOS.
The parameters of the Low Shelf are different from the iPod EQ.
Parameters are declared and documented in AudioUnit/AudioUnitParameters.h:
// Parameters for the AULowShelfFilter unit
enum {
    // Global, Hz, 10->200, 80
    kAULowShelfParam_CutoffFrequency = 0,
    // Global, dB, -40->40, 0
    kAULowShelfParam_Gain = 1
};
So after your low shelf AU is created, configure its parameters using AudioUnitSetParameter.
Some initial parameter values you can try are 120 Hz (kAULowShelfParam_CutoffFrequency) and +6 dB (kAULowShelfParam_Gain); assuming your system reproduces bass well, your low-frequency content will come through noticeably louder (a +6 dB boost doubles the signal amplitude).
Can you tell me how I can use kAULowShelfParam_CutoffFrequency to change the frequency?
If everything is configured right, this should be all that is needed:
assert(lowShelfAU);
const float frequencyInHz = 120.0f;
OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                        kAULowShelfParam_CutoffFrequency,
                                        kAudioUnitScope_Global,
                                        0,
                                        frequencyInHz,
                                        0);
if (noErr != result) {
    assert(0 && "error!");
    return ...;
}
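To put the bass on a UISlider, the same call can drive kAULowShelfParam_Gain. A minimal sketch, assuming a hypothetical slider action and the lowShelfAU variable from above, with the slider's minimumValue/maximumValue set to the documented -40/+40 dB range:
// Hypothetical UISlider action: map the slider value directly to the low shelf gain (dB).
- (IBAction)bassSliderChanged:(UISlider *)sender
{
    AudioUnitParameterValue gainInDecibels = sender.value;
    OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                            kAULowShelfParam_Gain,
                                            kAudioUnitScope_Global,
                                            0,
                                            gainInDecibels,
                                            0);
    if (noErr != result) {
        NSLog(@"Failed to set low shelf gain: %d", (int)result);
    }
}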

Why are my audio sounds not playing on time?

One of my apps has a simple metronome-style feature that plays a click sound a specified number of times per minute (bpm). I'm doing this by starting an NSTimer, with an interval calculated from the specified bpm, that calls a method that plays the sound.
If I put an NSLog line into the play method, I can see that NSTimer is firing accurately to about 1 millisecond. However, if I record the sound output into an audio editor and then measure the interval between clicks, I can see that they are not evenly spaced. For example, with 150 bpm, the timer fires every 400 milliseconds. But most of the sounds play after 395 milliseconds, with every third or fourth sound playing after 418 milliseconds.
So the sounds are not uniformly delayed, but rather, they follow a pattern of shorter and longer intervals. It seems as if the iOS has a lower resolution for timing of sounds, and is rounding each sound event to the nearest available point, rounding up or down as needed to keep on track overall.
I have tried this with system sounds, AVAudioPlayer and OpenAL and have gotten the exact same results with all three methods. With each method, I'm doing all the setup when the view loads, so each time I play the sound all I have to do is play it. With AVAudioPlayer, I tried calling prepareToPlay using a second timer after each time the sound plays, so it is initialized and ready to go next time, but got the same results.
Here's the code for setting up the OpenAL sound in viewDidLoad (adapted from this tutorial):
// set up the context and device
ALCcontext *context;
ALCdevice *device;
OSStatus result;
device = alcOpenDevice(NULL); // select the "preferred device"
if (device) {
    context = alcCreateContext(device, NULL); // use the device to make a context
    alcMakeContextCurrent(context); // set the context to the currently active one
}
// open the sound file
NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"TempoClick" ofType:@"caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
AudioFileID fileID;
result = AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID);
if (result != 0) DLog(#"cannot open file %#: %ld", soundFilePath, result);
// get the size of the file data
UInt64 fileDataSize = 0; // kAudioFilePropertyAudioDataByteCount is a UInt64 property
UInt32 propSize = sizeof(fileDataSize);
result = AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataByteCount, &propSize, &fileDataSize);
if (result != 0) DLog(@"cannot find file size: %ld", result);
UInt32 fileSize = (UInt32)fileDataSize;
DLog(@"file size: %li", fileSize);
// copy the data into a buffer, then close the file
unsigned char *outData = malloc(fileSize);
AudioFileOpenURL((CFURLRef)soundFileURL, kAudioFileReadPermission, 0, &fileID); // we get a "file is not open" error on the next line if we don't open this again
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
if (result != 0) NSLog(#"cannot load data: %ld", result);
AudioFileClose(fileID);
alGenBuffers(1, &tempoSoundBuffer);
alBufferData(self.tempoSoundBuffer, AL_FORMAT_MONO16, outData, fileSize, 44100);
free(outData);
outData = NULL;
// connect the buffer to the source and set some preferences
alGenSources(1, &tempoSoundSource);
alSourcei(tempoSoundSource, AL_BUFFER, tempoSoundBuffer);
alSourcef(tempoSoundSource, AL_PITCH, 1.0f);
alSourcef(tempoSoundSource, AL_GAIN, 1.0f);
alSourcei(tempoSoundSource, AL_LOOPING, AL_FALSE);
And then in the play method I just call:
alSourcePlay(self.tempoSoundSource);
Can anyone explain what is happening here, and how I can work around it?
UPDATE 1:
I have another project that plays brief sounds with audio units, so as a quick test I added a timer to that project to play my click sound every 400 milliseconds. In that case, the timing is nearly perfect. So, it seems that NSTimer is fine but system sounds, AVAudioPlayer and OpenAL are less accurate in their playback than audio units.
UPDATE 2:
I just reworked my project to use audio units and now the audio is playing back much more accurately. It still occasionally drifts by up to four milliseconds in either direction, but this is better than the other audio methods. I'm still curious why the other methods all show a pattern of short, short, short, long intervals -- it's like the audio playback times are being rounded up or down to map to some kind of frame rate -- so I'll leave this question open for anyone who can explain that and/or offer a workaround for the other audio methods.
NSTimer does not guarantee when your method will actually get fired.
More info here: How to program a real-time accurate audio sequencer on the iphone?
Regarding your edits:
AVAudioPlayer takes some time to initialize itself. If you call prepareToPlay, it will initialize itself such that it can play the currently loaded sound immediately upon calling play. Once playback stops, it uninitializes itself, so you'd need to call prepareToPlay again to reinitialize. It's best to use this class for stream-y playback rather than discrete sound playback.
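If you do stay with AVAudioPlayer, one pattern is to re-prime the player from its delegate callback as soon as each click finishes, instead of using a second timer. A small sketch, assuming the player's delegate is set to this object:
// AVAudioPlayerDelegate: re-prepare right after each click so the next
// play call does not pay the initialization cost again.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    player.currentTime = 0;
    [player prepareToPlay];
}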
With OpenAL, once you've loaded the buffer, attaching it to a source and playing it should cause no delay at all.
You can encapsulate your audio units code into a .mm file and then call that from .m modules without having to compile those as C++.
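The encapsulation boils down to keeping a plain Objective-C header so callers stay ordinary .m files while only the implementation is compiled as Objective-C++. A rough sketch with hypothetical file and class names:
// ClickEngine.h - plain Objective-C, safe to #import from .m files
@interface ClickEngine : NSObject
- (void)playClick;
@end

// ClickEngine.mm - compiled as Objective-C++, so any C++ helpers used by the
// audio unit code can live here without leaking into the header.
#import "ClickEngine.h"
@implementation ClickEngine
- (void)playClick
{
    // ... drive the audio unit here ...
}
@end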
Okay, I've figured it out. The real reason audio units worked better than the other audio methods is that my audio unit class, which I was adapting from another project, was setting a buffer duration property in the audio session, like this:
Float32 preferredBufferSize = .001;
UInt32 size = sizeof(preferredBufferSize);
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, size, &preferredBufferSize);
When I added this code to the OpenAL version, or even to the AVAudioPlayer version, I got accuracy to within a few milliseconds, the same as with audio units. (System Sounds, however, were still not very accurate.) I can verify the connection by increasing the buffer size and watching the playback intervals get less accurate.
Of course I only figured this out after spending an entire day adapting my project to use audio units -- tweaking it to compile under C++, testing the interruption handlers, etc. I hope this can save someone else from the same trouble.

(iPhone) Is having many AVAudioPlayer instances fine?

For my small game, I'd like to play effect sound for various scenario.
Mostly it will be user-interaction related.
I may need to play multiple sounds at one time.
I'm planning to allocate AVAudioPlayer for each sound.
I wonder whether a view controller having about 10-20 AVAudioPlayers is fine.
(The sound data itself is rather small, less than 100 KB in AAC.)
I just feel that declaring 10-20 AVAudioPlayer instance in a class seems weird.
Is there a better way of doing it or am I just over-thinking it?
I think OpenAL is a better option in such situations. Don't worry if you don't know it; there is a great video tutorial here (with source code): http://www.71squared.com/2009/05/iphone-game-programming-tutorial-9-sound-manager/
To find more you can visit:
http://benbritten.com/2008/11/06/openal-sound-on-the-iphone/comment-page-1/
Yes, it's fine to have many AVAudioPlayer instances. I don't know what the limit is, but it's definitely more than a dozen.
Here are some gotchas:
AVAudioPlayer doesn't do level mixing, so if your sounds are high volume, they may end up constructively interfering with each other and causing waveform distortion. I set a maximum volume of 0.8 to try to work around this, but it's not reliable.
If you try to start them all at the same time, using the play method may end up starting them out of sync. Instead, figure out a time soon enough that the user won't notice, but far enough away that it gives your code time to exit and AVFoundation time to get ready. Then use [player playAtTime:soon].
Here's some code that's working for me now. YMMV:
- (void)play
{
    BOOL success;
    AVAudioPlayer *player = self.player;
    player.numberOfLoops = -1;
    player.currentTime = 0;
    player.volume = _volume;
    // NSLog(@"deviceCurrentTime=%f", player.deviceCurrentTime);
    static double soon = 0;
    if (soon < player.deviceCurrentTime) {
        soon = player.deviceCurrentTime + 0.5; // why so flakey???
    }
    success = [player playAtTime:soon]; // too flakey for now
    if (!success) {
        NSLog(@"player %@ FAILED", player);
    } else {
        NSLog(@"player %@ %@ playing at: %f", player, [[player.url relativePath] lastPathComponent], soon);
    }
}
(I'm not sure if my "soon" var is thread-safe, and you should adjust the slop until it works for you... 0.1 was too fast for me at some point or other so I bumped it up to 0.5.)