Is the update frequency set via deviceMotionUpdateInterval the actual update frequency? - iPhone

Analyzing deviceMotion.timestamp, I saw that the update frequency set for device motion updates is not the actual update frequency.
I implemented an app to test this; below is what I saw:
update interval set (s)   actual frequency (Hz)   average time between two calls (s)
1/10.000000               10.232265               0.097730
1/20.000000               19.533729               0.051194
1/30.000000               30.696613               0.032577
1/40.000000               42.975122               0.023269
1/50.000000               53.711000               0.018618
1/60.000000               53.719106               0.018615
1/70.000000               71.627016               0.013961
1/80.000000               71.627263               0.013961
1/90.000000               53.719365               0.018615
1/100.000000              107.442667              0.009307
1/110.000000              107.437022              0.009308
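For reference, the actual frequency and average interval can be derived from the collected timestamps roughly as follows (a minimal C sketch; the original test app is not shown here, and the timestamps in main() are purely illustrative):

#include <stdio.h>

/* Estimate the actual update rate from collected motion timestamps (in
   seconds), e.g. values of deviceMotion.timestamp saved in the update
   handler. */
static void report_rate(const double *timestamps, int count)
{
    if (count < 2)
        return;

    double sum = 0.0;
    for (int i = 1; i < count; i++)
        sum += timestamps[i] - timestamps[i - 1];

    double avg_interval = sum / (count - 1);   /* average time between two calls */
    printf("actual frequency: %f Hz, average interval: %f s\n",
           1.0 / avg_interval, avg_interval);
}

int main(void)
{
    double t[] = { 0.000, 0.098, 0.195, 0.293 };   /* hypothetical timestamps */
    report_rate(t, 4);
    return 0;
}
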
Has anyone else noticed the same thing?
Is it a bug?

Some people are reporting the same phenomenon, for example in Actual frequency of device motion updates lower than expected, but scales up with setting, but there is still no answer. Surprisingly, you are the first one to report higher actual frequencies. I did several tests on this, and it makes no real difference which way you go:
Push or pull, i.e. handler callback vs. your own timer loop
iOS 4.2.x or iOS 4.3.x (update: tested with pull only)
Raw sensor data or Device Motion
Gyroscope or accelerometer
Running it in a separate thread
I assume it is a little bug in the Core Motion framework.

Related

Machine Learning to predict time-series multi-class signal changes

I would like to predict the switching behavior of time-dependent signals. Currently the signal has 3 states (1, 2, 3), but it could be that this will change in the future. For the moment, however, it is absolutely okay to assume three states.
I can make the following assumptions about these states (see picture):
the signals repeat periodically, possibly with variations concerning the time of day.
the duration of state 2 is always constant and relatively short for all signals.
the durations of states 1 and 3 are also constant, but vary between the different signals.
the switching sequence is always the same: 1 --> 2 --> 3 --> 2 --> 1 --> [...]
there is a constant but unknown time reference between the different signals.
There is no constant time reference between my observations for the different signals. They are simply measured one after the other, but always at different times.
I am able to rebuild my model periodically after I have obtained more samples.
I have the following problems:
I can only observe one signal at a time.
I can only observe the signals at different times.
I cannot trigger my measurement with the state transition. That means, when I measure, I am always "in the middle" of a state. Therefore I don't know when this state started, nor exactly when it will end.
I cannot observe a certain signal for a long duration, so I am not able to observe a complete period.
My samples (observations) are widespread in time.
I would like to get a prediction either for the state change or for the current state at the current time. It is likely that I will never have measured my signals at that requested time.
So far I have tested the TimeSeriesPredictor from the ML.NET Toolbox, as it seemed suitable to me. However, in my opinion, this algorithm requires that you always pass only the data of one signal. This means that assumption 5 is not included in the prediction, which is probably suboptimal. Also, in this case I had problems with the prediction not changing when I query multiple predictions, although it should actually change with time. This behavior led me to believe that only the order of the values enters the model, but not the associated timestamp. If I have understood everything correctly, then exactly this timestamp is my most important "feature"...
So far I have not done any tests with regression-based approaches, e.g. FastTree, since my data is not linear but keeps changing states. Maybe this assumption is not valid and regression-based methods could also be suitable?
I also don't know if a multiclass classifier is required, because I had understood that the TimeSeriesPredictor would also be suitable for this, since it works with the Single data type. Whether the prediction is 1.3 or exactly 1.0 would be fine for me.
To sum it up:
I am looking for an algorithm which is able to recognize the switching patterns based on loose and widespread samples. It would be okay to define boundaries, e.g. the duration of state 3 of signal 1 will never exceed 30 s, or the duration of state 1 of signal 3 will never exceed 60 s.
Then, after the algorithm has obtained an approximate model of the switching behaviour, I would like to request a prediction of a certain signal state for a certain time.
Which methods can I use to get the best prediction, preferably using the ML.NET toolbox or based on MATLAB?
Not sure if this is quite what you need, but if you want to detect spikes and changes in signals, check out the anomaly detection algorithms in ML.NET. Here are two tutorials that show how to use them:
Detect anomalies in product sales
    Spike detection
    Change point detection
Detect anomalies in time series
    Detect anomaly period
    Detect anomaly
One way to approach this would be to first determine the periodicity of each of the signals independently. This could be done by looking at the frequency distribution of the time differences between measurements of state 2 only, separately for each signal.
This will give a multinomial distribution. The shortest time difference will be the duration of the switching event (after discarding time differences less than the max duration of state 2). The second shortest peak will be the duration between the end of one switching event and the start of the next.
When you have the 3 periodicity calculations, you can simply calculate the difference between each of them. Given that you have the timestamps of the state-2 measurements for each signal, you should be able to calculate the time of switching for all the other signals.
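A rough C sketch of that first step (the function name, bin width, and the timestamps in main() are placeholders of my own; in practice you would run this per signal on your real state-2 observation times):

#include <math.h>
#include <stdio.h>

#define NBINS 120
#define BIN_WIDTH 1.0   /* seconds per histogram bin; an assumed resolution */

/* Histogram of the pairwise time differences between the state-2
   observations of one signal. Peaks in this histogram should line up
   with (multiples of) that signal's period. */
static void histogram_state2_diffs(const double *t, int n, int bins[NBINS])
{
    for (int b = 0; b < NBINS; b++)
        bins[b] = 0;

    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            int b = (int)(fabs(t[j] - t[i]) / BIN_WIDTH);
            if (b < NBINS)
                bins[b]++;
        }
}

int main(void)
{
    /* hypothetical timestamps (s) at which one signal was seen in state 2 */
    double t[] = { 3.0, 45.5, 88.1, 130.4, 172.9 };
    int bins[NBINS];

    histogram_state2_diffs(t, 5, bins);
    for (int b = 0; b < NBINS; b++)
        if (bins[b] > 0)
            printf("diff around %.0f s: %d pairs\n", (b + 0.5) * BIN_WIDTH, bins[b]);
    return 0;
}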

Low average call duration

My Mera VoIP Transit Softswitch (MVTS) shows a very low ACD (about 0.3 min) for several directions (route groups) at peak hours. Looking for factors causing low ACD, I found this topic: http://support.sippysoft.com/support/discussions/topics/3000137333, but all the parameters mentioned there seem to be normal. There is another strange thing as well: as seen in this graph, there are about 10 lines occupied for each real call. I guess these problems are somehow related, though I'm not sure yet.
What can cause such behavior?
You should check the following:
SIP trunk quality
Your trunk or service providers might have quality issues towards these directions. You can easily test this by sending the same traffic at the same time to some other carriers.
Low call quality under high load
You can easily verify this by just making a call during peak time and hear it yourself.
Some other factor causing call drops in similar circumstances
Causes might include maximum supported channels, billing cut-off or others.
You should gather statistics on disconnect codes and compare them to off-peak times or to other directions.

Running a filter at a high speed

I'm writing signal processing software in CVI.
I've got a signal transmitted to the computer via USB at a very high speed (~50K).
I want to filter it in real time (RT).
In order to do this, I created a filter in Simulink and turned it into C code, which I run in CVI using:
FuncName_initialize()
FuncName.in
FuncName_step()
FuncName.Out
The thing is that after a while (about 5-7 minutes) the filter starts working incorrectly, showing inaccurate results and artifacts. I believe this is a result of running it too fast (I used it before at lower speeds and it was fine).
Any suggestions on what might be the problem? How can I implement an RT filter in CVI directly (meaning one that takes one point at the input and produces one point at the output while maintaining some window)?
I know that the data is transmitted just fine at this speed, since recording the signal works OK and showing the raw data on screen works OK as well.
Thank you
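For what it's worth, a one-sample-in/one-sample-out filter with an internal window can be written directly in C along these lines (a sketch with a placeholder moving-average coefficient set, not the Simulink-generated code; names and values are my own):

#include <stdio.h>

#define NTAPS 5

/* Streaming FIR filter: call fir_step() once per incoming sample; it
   returns one output sample and keeps the window in a static circular
   buffer. Replace the placeholder coefficients with your filter design. */
static double fir_step(double x)
{
    static const double coeff[NTAPS] = { 0.2, 0.2, 0.2, 0.2, 0.2 };  /* 5-point moving average */
    static double window[NTAPS];   /* last NTAPS input samples */
    static int pos = 0;

    window[pos] = x;
    pos = (pos + 1) % NTAPS;       /* pos now points at the oldest sample */

    double y = 0.0;
    for (int i = 0; i < NTAPS; i++)
        y += coeff[i] * window[(pos + i) % NTAPS];
    return y;
}

int main(void)
{
    double samples[] = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0 };   /* dummy input */
    for (int i = 0; i < 6; i++)
        printf("%f\n", fir_step(samples[i]));
    return 0;
}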

Processing accelerometer data

I would like to know if there are some libraries/algorithms/techniques that help to extract the user context (walking/standing) from accelerometer data (extracted from any smartphone)?
For example, I would collect accelerometer data every 5 seconds for a definite period of time and then identify the user context (ex. for the first 5 minutes, the user was walking, then the user was standing for a minute, and then he continued walking for another 3 minutes).
Thank you very much in advance :)
Check the new activity recognition APIs:
http://developer.android.com/google/play-services/location.html
It's still a research topic; please look at this paper, which discusses the algorithm:
http://www.enggjournals.com/ijcse/doc/IJCSE12-04-05-266.pdf
I don't know of any such library.
It is a very time-consuming task to write such a library. Basically, you would build a database of the "user contexts" that you wish to recognize.
Then you collect data and compare it to those in the database. As for how to compare, see Store orientation to an array - and compare; the same holds for the accelerometer.
Walking/running data is analogous to heart-rate data in a lot of ways. In terms of filtering noise and getting smooth peaks, look into noise-filtering and peak-detection algorithms. The following is used to obtain heart-rate information for heart patients and should be a good starting point: http://www.docstoc.com/docs/22491202/Pan-Tompkins-algorithm-algorithm-to-detect-QRS-complex-in-ECG
Think about how you want to filter out the noise and detect peaks; the filters will obviously depend on the raw data you gather, but it's good to have a general idea of what kind of filtering you'd want to do on your data. Then think about what needs to be done once you have the filtered data. In your case, think about how you would design an algorithm to determine when the data indicates activity (like walking, running, etc.) and when it shows the user being stationary. This is a fairly challenging problem to solve once you consider the dynamics of the device itself (how it's positioned when the user is walking/running) and the fact that there are very few (if any) benchmarked algorithms that do this with raw smartphone data.
Start with determining the appropriate algorithms, and then tackle the complexities (mentioned above) one by one.
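As a starting point, here is a very rough C sketch of the windowed classification idea discussed above; the function name, window length, threshold, and dummy data in main() are assumptions that would have to be tuned against real recordings:

#include <math.h>
#include <stdio.h>

#define WINDOW 50            /* samples per decision window; an assumption */
#define WALK_THRESHOLD 0.5   /* magnitude-variance threshold; must be tuned */

/* Classify one window of accelerometer samples as walking (1) or
   standing (0) using the variance of the acceleration magnitude:
   walking produces much larger variance than standing still. */
static int is_walking(const double ax[], const double ay[], const double az[])
{
    double mean = 0.0, var = 0.0, mag[WINDOW];

    for (int i = 0; i < WINDOW; i++) {
        mag[i] = sqrt(ax[i] * ax[i] + ay[i] * ay[i] + az[i] * az[i]);
        mean += mag[i];
    }
    mean /= WINDOW;

    for (int i = 0; i < WINDOW; i++)
        var += (mag[i] - mean) * (mag[i] - mean);
    var /= WINDOW;

    return var > WALK_THRESHOLD;
}

int main(void)
{
    double ax[WINDOW] = { 0 }, ay[WINDOW] = { 0 }, az[WINDOW];   /* dummy data */
    for (int i = 0; i < WINDOW; i++)
        az[i] = 1.0;                         /* device lying still: ~1 g on z */
    printf("walking: %d\n", is_walking(ax, ay, az));
    return 0;
}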

What are the basic concepts for implementing anti-shock and anti-shake algorithms?

I have some animations happening upon fine acceleration detections. But when the user sits in a car or is walking, this may get annoying.
Basically, all that stuff has to be disabled automatically as soon as there is too much vibration or shaking. Conceptually, I think it's very hard to filter those vibrations out, since the "vibration phase" changes permanently. I would define "unwanted vibration or shocks" as acceleration values that change very fast over a large interval of values, or a permanently changing accumulated value that does not exceed a specified threshold range within a specified minimum period of time.
I am looking for "proven" concepts, before I start reinventing the wheel for a couple of days.
I don't have any concrete answers for you, but you might want to Google band-pass filters or anti-aliasing filters for some ideas on how to approach this. Basically, if you can identify the frequency range of accelerations that you want to consider real, you can filter out frequencies that fall outside this range.
Before you start doing too much pre-optimization, I think you should implement a low pass filter and see if that does the job. Most iPhone apps effectively use a variation of an LPF to get rid of unwanted accelerometer noise.
You could also go the other way and use a high pass filter. Once you get a certain power level passing through the HPF, stop processing data.
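To illustrate both suggestions, a minimal C sketch (the function name, smoothing factor, and threshold are assumed values and would need tuning on the device):

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define ALPHA 0.1             /* low-pass smoothing factor; an assumed value */
#define SHAKE_THRESHOLD 0.8   /* high-frequency magnitude above which animations are paused; must be tuned */

/* Feed one accelerometer sample per call. The exponential low-pass
   tracks gravity and slow movement; subtracting it leaves the
   high-frequency part (shaking/vibration), whose magnitude is compared
   against the threshold. */
static bool too_shaky(double ax, double ay, double az)
{
    static double lx, ly, lz;                 /* low-pass state */

    lx = ALPHA * ax + (1.0 - ALPHA) * lx;
    ly = ALPHA * ay + (1.0 - ALPHA) * ly;
    lz = ALPHA * az + (1.0 - ALPHA) * lz;

    double hx = ax - lx, hy = ay - ly, hz = az - lz;   /* high-pass residual */
    return sqrt(hx * hx + hy * hy + hz * hz) > SHAKE_THRESHOLD;
}

int main(void)
{
    bool shaky = false;
    for (int i = 0; i < 50; i++)              /* feed steady dummy samples so the filter settles */
        shaky = too_shaky(0.0, 0.0, 1.0);
    printf("shaky after settling: %d\n", shaky);
    return 0;
}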