Why are my accelerometer readings so slow? - iphone

Somewhere in the documentation they mentioned 400 Hz. Nice figure, but I end up getting something less than 100. Even on the latest, coolest, most awesome iPhone 4. And I'm not doing anything except incrementing a counter (ivar) and assigning the value to a label. Can't imagine this is the bottleneck.
I set the update interval to the smallest possible value (something tiny, like 1.0/10000), since it's supposed to be capped at whatever maximum the hardware supports.

You can adjust that frequency - see my answer.

Related

AVSpeechUtterance 10% step rate changes

I need to implement AVSpeechUtterance rate change in range from 50% to 150% relative to normal speed with a step of 10%.
The problem is that I can't find either absolute or relative rate values to build on.
I do know that normal rate value is 0.5. But setting it to 0.6 at least doubles the speed.
Documentation mentions AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate, but I fail to understand how to make use of them, since I can't even access them.
Any help is appreciated!
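For what it's worth, those rate symbols are plain float constants (AVSpeechUtteranceMinimumSpeechRate is 0.0, AVSpeechUtteranceDefaultSpeechRate is 0.5, AVSpeechUtteranceMaximumSpeechRate is 1.0), so one way to generate the 10% steps is to interpolate between them. A minimal Python sketch of that idea; the linear mapping is an assumption, and since (as noted above) perceived speed is not linear in the rate value, the curve would need tuning against real playback:

```python
# Documented AVFoundation constant values:
AVSpeechUtteranceMinimumSpeechRate = 0.0
AVSpeechUtteranceDefaultSpeechRate = 0.5  # "normal" speed
AVSpeechUtteranceMaximumSpeechRate = 1.0

def rate_for_percent(percent):
    """Piecewise-linear map (an assumption): 100% -> default,
    0% -> minimum, 200% -> maximum."""
    m = percent / 100.0
    lo = AVSpeechUtteranceMinimumSpeechRate
    mid = AVSpeechUtteranceDefaultSpeechRate
    hi = AVSpeechUtteranceMaximumSpeechRate
    if m <= 1.0:
        # Below normal: interpolate between minimum and default.
        return lo + (mid - lo) * m
    # Above normal: interpolate between default and maximum, capped.
    return min(mid + (hi - mid) * (m - 1.0), hi)

# The eleven 10% steps from 50% to 150%:
steps = [rate_for_percent(p) for p in range(50, 151, 10)]
```

Whether, say, rate 0.75 actually sounds like 150% of normal speed is exactly the open question here, so the mapping function is the part to calibrate by ear.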

AudioKit AKMetronome callback timing seems imprecise or quantized

I'm new to AudioKit and digital audio in general, so I'm sure there must be something I'm missing.
I'm trying to get precise timing from AKMetronome by getting the timestamp of each callback. The timing seems to be quantized in some way though, and I don't know what it is.
Example: if my metronome is set to 120, each callback should be exactly 0.5 seconds apart. But if I calculate the difference from one tick to the next, I get this:
0.49145491666786256
0.49166241666534916
0.5104563333334227
0.4917322500004957
0.5104953749978449
0.49178879166720435
0.5103940000008151
0.4916401666669117
It's always one of 2 values, within a very small margin of error. I want to be able to calculate when the next tick is coming so I can trigger animation a few frames ahead, but this makes it difficult. Am I overlooking something?
edit: I came up with a solution since I originally posted this question, but I'm not sure if it's the only or best solution.
I set the buffer to the smallest size using AKSettings.BufferLength.veryShort
With the smallest buffer, the timestamp is always on within a millisecond or two. I'm still not sure though if I'm doing this right, or whether this is the intended behavior of the AKCallback. It seems like the callback should be on time even with a longer buffer.
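The alternating intervals are consistent with callback timestamps being quantized to audio buffer boundaries: the metronome ticks every 0.5 s, but the callback can only fire on a buffer edge, so a smaller buffer means finer timing. A small Python sketch of that effect (the 44.1 kHz sample rate and 1024-frame buffer are illustrative assumptions, not AudioKit's actual values):

```python
import math

SAMPLE_RATE = 44100.0   # assumed sample rate
BUFFER_FRAMES = 1024    # assumed buffer size; a "very short" buffer is smaller
buffer_dur = BUFFER_FRAMES / SAMPLE_RATE  # ~23 ms per buffer

def callback_time(tick_time):
    # The callback can only fire at the next buffer boundary after the tick.
    return math.ceil(tick_time / buffer_dur) * buffer_dur

ticks = [k * 0.5 for k in range(1, 9)]     # 120 BPM -> one tick per 0.5 s
observed = [callback_time(t) for t in ticks]
intervals = [b - a for a, b in zip(observed, observed[1:])]
# intervals alternate between exactly two values, one buffer apart,
# while their long-run mean stays at 0.5 s -- the same pattern as
# the measurements above.
```

Shrinking BUFFER_FRAMES shrinks the wobble, which matches the observation that AKSettings.BufferLength.veryShort brings the timestamps to within a millisecond or two.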
Are you using Timer to calculate the time difference? Based on my findings, the issue is that Timer is not meant to be precise on iOS; see the thread Accuracy of NSTimer.
Alternatively, you can look into AVAudioTime (https://audiokit.io/docs/Extensions/AVAudioTime.html)

In Tableau how do you change y-axis to be calculated by custom function?

I am working with a graph that has 2 y-axes: one is generally between 40K and 60K, the other between 5K and 10K. What I would like to do is have the first axis computed from the data, such that if the MIN is 42K it starts at 40K and increments by 5K, and if the MIN is 38K it starts at 35K. Similarly for the 2nd y-axis, but based on 2K increments. When I set it to automatic I get basically straight lines, and when I say do not include 0 I get huge, drastic swings. I can set the start and the increment manually, but that means every day I would have to go in and verify they still work; for example, 40K may be a good start today but too high or too low tomorrow. (I suppose the fact that it is 2 axes has nothing to do with it, but I mention it in case it does.) The key is dynamically changing based on the result set.
If there is a better way to do this, I would love it; however, this got me close to what I wanted. First, I created 2 calculated fields, MIN and MAX, using a windowed function on the data. They look something like the ones below. Note I used 2x the difference to give a window that is roughly 5x the total distance from min to max; better math could give better sizing.
Max_Ln = WINDOW_MAX(SUM([Profit])) + (WINDOW_MAX(SUM([Profit])) - WINDOW_MIN(SUM([Profit]))) * 2
Min_Ln = WINDOW_MIN(SUM([Profit])) - (WINDOW_MAX(SUM([Profit])) - WINDOW_MIN(SUM([Profit]))) * 2
I then added both fields to Detail on the Marks card and used them to add reference lines, each with no title and no line. This makes the automatic axis spacing take them into account without showing anything. I did the same on the 2nd y-axis, and now everything looks good and adjusts dynamically.
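The padding arithmetic in those two fields is easy to check outside Tableau. A quick Python sketch with hypothetical per-partition SUM([Profit]) values (the numbers are made up for illustration):

```python
# Hypothetical per-partition SUM([Profit]) values, for illustration only.
profit_sums = [38000, 41000, 42000]

window_min = min(profit_sums)    # plays the role of WINDOW_MIN(SUM([Profit]))
window_max = max(profit_sums)    # plays the role of WINDOW_MAX(SUM([Profit]))
span = window_max - window_min   # distance from min to max

# Same arithmetic as the Max_Ln / Min_Ln calculated fields above:
max_ln = window_max + span * 2
min_ln = window_min - span * 2

# The invisible reference lines then force the axis to cover
# [min_ln, max_ln], i.e. 5x the original min-to-max distance,
# recomputed from whatever the current result set contains.
```

Changing the `* 2` factor is where "better math could give better sizing": a smaller multiplier tightens the axis around the data, a larger one flattens the lines.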

What is AVAudioPlayer's averagePowerForChannel averaging?

The docs for AVAudioPlayer say averagePowerForChannel: "Returns the average power for a given channel, in decibels, for the sound being played." But it doesn't say what "average" means, or how/if that average is weighted. I'm particularly interested in what the range of samples is, specifically the time interval and whether it goes forward into the future at all.
AVAudioRecorder: peak and average power says that the formula used to calculate power is RMS (root mean square) over a number of samples, but also doesn't say how many samples are used.
Optional bonus question: if I want to calculate what the average power will be in, say, 100 msec -- enough time for an animation to begin now and reach an appropriate level soon -- what frameworks should I be looking into? Confusion with meters in AVAudioRecorder says I can get ahold of the raw sample data with AudioQueue or RemoteIO, and then I can use something like this: Android version of AVAudioPlayer's averagePowerForChannel -- but I haven't found sample code (pardon the pun).
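For reference, the RMS-to-decibels computation that thread describes is straightforward to reproduce once you have raw samples. A Python sketch over a hypothetical window of samples (how many samples the framework actually averages is exactly the undocumented part):

```python
import math

def average_power_db(samples):
    # RMS (root mean square) over the window, expressed in dBFS,
    # where 0 dB corresponds to full-scale amplitude 1.0.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine wave has RMS 1/sqrt(2), i.e. about -3 dBFS,
# while a full-scale constant (square-wave) signal sits at 0 dBFS.
N = 1000
sine = [math.sin(2 * math.pi * k / N) for k in range(N)]
```

With raw samples from AudioQueue or RemoteIO you could run this over any window you like, including a window positioned slightly ahead in already-buffered audio, which is one route to the "100 msec in the future" estimate.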

How to get the reading of deciBels from iOS AVAudioRecorder in a correct scale?

I'm trying to obtain a noise level in my iOS app, using AVAudioRecorder.
The code I'm using is:
[self.recorder updateMeters];
float decibels = [self.recorder averagePowerForChannel:0];
// Add 160 here, to scale it from 0 to 160 instead of -160 to 0.
decibels = 160 + decibels;
NSLog(@"Decibels: %.3f", decibels);
The readings I get, when the phone sits on my desk are at about 90-100dB.
I checked this link and the table I saw there shows that:
Vacuum Cleaner - 80dB
Large Orchestra - 98dB
Walkman at Maximum Level - 100dB
Front Rows of Rock Concert - 110dB
Now, however loud my office might seem, it's nowhere near a Walkman at maximum level.
Is there something I should do here to get correct readings? It seems my iPhone's mic is very sensitive. It's an iPhone 4S, if that makes a difference.
Forget my previous answer. I figured out a better solution (correct me if I am wrong). I think what both of us want is decibel SPL, but the averagePowerForChannel: method gives us the mic's output voltage. Decibel SPL is a logarithmic unit that indicates a ratio, so we need to convert that output to dB SPL, which is not so easy, because you need reference values: dB SPL figures and the voltage values that correspond to them. You can also try to estimate them by comparing your results with an app like Decibel Ultra. To come straight to the point, the formula you need is as follows:
SPL = 20 * log10(referenceLevel * powf(10, (averagePowerForChannel/20)) * range) + offset;
You can set referenceLevel to 5; that gives me good results on my iPhone. averagePowerForChannel is the value you get from the averagePowerForChannel: method, and range indicates the upper limit of the range; I set that to 160. Finally, offset is an offset you can add to get into the area you want; I added 50 here.
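One thing worth noting about that formula: because referenceLevel and range sit inside the logarithm, they collapse into a constant shift. With referenceLevel = 5 and range = 160, the whole expression reduces to averagePowerForChannel + 20·log10(800) + offset, roughly the raw meter value plus 108 dB. A Python sketch (the constants are the ones from this answer, not calibrated values):

```python
import math

def spl(average_power_for_channel, reference_level=5.0, rng=160.0, offset=50.0):
    # SPL = 20 * log10(referenceLevel * 10^(avg/20) * range) + offset
    return 20 * math.log10(
        reference_level * math.pow(10, average_power_for_channel / 20) * rng
    ) + offset

# Because log10(a * 10^(p/20) * b) = log10(a * b) + p / 20, this is just
# p + 20*log10(800) + offset, a fixed shift of the raw meter value.
# Real dB SPL calibration still needs measured reference points.
```

That also explains why tuning referenceLevel and offset against an app like Decibel Ultra works: you are effectively fitting one constant, not a curve.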
Still, if anybody got a better solution to this. It would be great!