Android app dev: Finding the best way to synchronize the timestamps of two sensors - Movesense

There's already a good answer on the technical details and constraints of timing the gyro measurement:
Movesense, timestamp source of imu data, and timing issues in general
However, I would like to ask a more practical question from the perspective of an Android app developer working with two sensors and a requirement for high-accuracy gyro measurement timing.
What would be the most accurate way to synchronize/consolidate the timestamps from two sensors and put the measurements on the same time axis?
The sensor SW version 1.7 introduced the Time/Detailed API for checking the internal timestamp and the UTC time set on the sensor device. This is how I imagined it would play out with two sensors:

1. Before subscribing to anything, set the UTC time (in microseconds) on sensor1 and sensor2 based on the Android device time (PUT /Time).
2. Read the "time since sensor turned on" (in milliseconds) and the "UTC time set on sensor" (in microseconds) from sensor1 and sensor2 (GET /Time/Detailed).
3. Calculate the difference of these two timestamps for both sensors.
4. Get the gyro values from each sensor with its internal timestamp, and add the offset calculated in step 3 to the internal timestamp to get the correct/global UTC time value.
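In code, I imagine the bookkeeping for steps 2 to 4 looking roughly like this (a minimal Kotlin sketch; the field names are my own illustration, not the exact /Time/Detailed response format):

```kotlin
// Minimal sketch of steps 2-4; the field names below are illustrative,
// not the exact GET /Time/Detailed response format.
data class TimeDetailed(
    val relativeTimeMs: Long, // "time since sensor turned on", milliseconds
    val utcTimeUs: Long       // UTC time set earlier via PUT /Time, microseconds
)

// Step 3: offset (in microseconds) that maps the sensor's internal clock
// onto the UTC axis; both fields refer to the same instant.
fun utcOffsetUs(t: TimeDetailed): Long = t.utcTimeUs - t.relativeTimeMs * 1_000

// Step 4: convert an internal gyro timestamp (milliseconds) to UTC microseconds.
fun toUtcUs(internalTimestampMs: Long, offsetUs: Long): Long =
    internalTimestampMs * 1_000 + offsetUs
```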
Is this procedure correct?
Is there a more efficient or accurate way to do this? E.g. the GATT service to set the time was mentioned in the linked post as the fastest way. Anything else?
How about the possible drift in the sensor time for gyro? Are there any tricks to limit the impact of the drift afterwards? Would it make sense to get the /Time/Detailed info during longer measurements and check if the internal clock has drifted/changed compared to the UTC time?
Thanks!

Very good question!
Looking at the accuracy of the crystals (±20 ppm), the typical drift between two sensors should be no more than 40 ppm. That translates to about 0.14 seconds over an hour (40 × 10⁻⁶ × 3600 s ≈ 0.14 s). For longer measurements and/or better accuracy, better synchronization is needed.
Luckily, the clock drift should stay relatively constant unless the temperature of the sensor is changing rapidly. Therefore it should be enough to compare the mobile phone clock to each sensor's UTC time at the beginning and end of the measurement. Any drift in either sensor should be visible, and the timestamps can easily be compensated.
If even more accurate timestamps are needed, taking regular samples of /Time/Detailed from each sensor and comparing them to the phone clock should provide a way to estimate possible sensor clock drift.
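For example, with the (sensor UTC minus phone UTC) offset sampled once at the start and once at the end of a recording, the timestamps could be corrected by linear interpolation. A minimal Kotlin sketch, assuming the drift rate is constant as described above:

```kotlin
// Linear drift compensation, assuming the drift rate stays constant during
// the measurement (see above). An "offset" here is (sensor UTC - phone UTC)
// in microseconds, sampled once at the start and once at the end.
fun correctTimestampUs(
    timestampUs: Long,     // uncorrected sensor timestamp on the UTC axis
    startUs: Long,         // phone UTC at the start of the measurement
    endUs: Long,           // phone UTC at the end of the measurement
    offsetAtStartUs: Long,
    offsetAtEndUs: Long
): Long {
    // Interpolate the offset linearly between the two calibration points.
    val fraction = (timestampUs - startUs).toDouble() / (endUs - startUs)
    val offset = offsetAtStartUs + fraction * (offsetAtEndUs - offsetAtStartUs)
    return timestampUs - offset.toLong()
}
```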
Full Disclosure: I work for the Movesense team

Related

How to measure change in altitude at high frequency with Apple Watch

I am trying to measure changes in altitude with an Apple Watch during a sport activity (kite surfing). Currently my app is just collecting data for analysis. I am recording barometric and GPS altitude for comparison at a frequency of 10 measurements per second. Basically it works and data is recorded, but the data seems worthless: both measurements show sudden jumps of up to ±10 m, and the GPS readings have spikes of up to 75 m. Does anyone have an idea how to get reasonably accurate readings? I basically do not care about absolute altitude; I am only interested in the change of altitude.
Use startRelativeAltitudeUpdates(to:withHandler:) and, when you're done, remember to call stopRelativeAltitudeUpdates().
Here is a link to the docs.
You can also ignore anomalies. For example, if the maximum plausible change in altitude within 100 milliseconds is 2 meters (that is 72 km/h), then simply ignore any reading that differs from the previous one by more than 2 meters per 100 milliseconds and wait for the next reading.
Remember, when you ignore a reading, to account for the time difference, as in the sketch below.
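The filtering idea is platform-independent; here is a minimal Kotlin sketch of it (the 2 m per 100 ms threshold comes from the example above; the class and names are illustrative):

```kotlin
// Rejects altitude readings that imply an implausible vertical speed.
// Threshold: 2 m per 100 ms = 20 m/s (~72 km/h), per the answer above.
class AltitudeFilter(private val maxSpeedMetersPerSec: Double = 20.0) {
    private var lastAltitude: Double? = null
    private var lastTimestampMs: Long = 0

    /** Returns the accepted altitude, or null if the reading was rejected. */
    fun accept(altitudeMeters: Double, timestampMs: Long): Double? {
        val previous = lastAltitude
        if (previous != null) {
            // Compare against the last *accepted* reading, so that rejected
            // samples still count toward the elapsed time difference.
            val dtSec = (timestampMs - lastTimestampMs) / 1000.0
            val maxDelta = maxSpeedMetersPerSec * dtSec
            if (kotlin.math.abs(altitudeMeters - previous) > maxDelta) {
                return null // implausible jump; wait for the next reading
            }
        }
        lastAltitude = altitudeMeters
        lastTimestampMs = timestampMs
        return altitudeMeters
    }
}
```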

STM32 ADC: leave it running at 'high' speed or switch it off as much as possible?

I am using a G0 with one ADC and 8 channels. Works fine. I use 4 channels. One is temperature, which is measured constantly, and I am interested in the value every 60 s. Another is almost the opposite: it measures sound waves for a couple of minutes per day, and I need those samples at 10 kHz.
I solved this by letting all 4 channels sample at 10 kHz and having the four readings moved to memory by DMA (an array of length 4 with one measurement each). Every 60 s I take the temperature, and when I need the audio, I retrieve the audio values.
If I had two ADCs, I would have the temperature ADC do one conversion every 60 s, non-stop, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution it seems simplest to let all conversions run at this high speed continuously, and that raised my question: is there any true downside to doing 40,000 conversions per second, 24 hours per day? If not, the code is simple: I just have the most recent values in memory all the time. But maybe I'll ruin the chip? I know I use too much energy, but there is plenty of it in this case.
You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have plenty of both, then the lesser problems are:
The wasted power will become heat, this may upset your temperature measurements (this is a very small amount though).
Having the DMA running will increase your interrupt latency and maybe also slow down the processor slightly, if it encounters bus contention (this only matters if you are close to capacity in these regards).
Having it running all the time may also have the advantage of more stable readings, since nothing is perturbed by turning things on and off.

Most power efficient way to determine when sensor is worn

I'm working with Movesense 2.0.0 on an HR+ sensor and I have to minimize power consumption when the device is not worn.
I can't turn it completely off, since I need it to keep the correct time. So, to reduce battery usage, when I don't receive an HR notification for a certain amount of time, I unsubscribe from all sensors.
What's the most power-efficient way to determine when the device is worn again? I was thinking about subscribing to the accelerometer (as I understand it is the sensor with the lowest power consumption) and, when I detect movement, resubscribing to HR and checking for incoming data.
Is this a valid approach?
I also noticed that when the device isn't worn but is still connected to the strap, I sometimes receive incorrect HR notifications, as if the strap were acting as an antenna for electromagnetic noise. Is there a way to detect when the device is in that state, other than looking at the HR data to see if it makes sense?
Your question is a bit vague about what you mean by "wearing a sensor" (I'm assuming you mean an HR strap on the chest). In that case, if you look at the power consumption documentation (see the PowerOff measurements compared to no-wakeup), you'll notice that:
- HR wakeup (/System/States/2 (= Connector)) is ~0.2 µA
- Movement wakeup (/System/States/0 (= Movement)) is ~4 µA
- All other measurements are much higher, starting from 10 µA for Acc @ 13 Hz.
So the easiest and lowest-power approach is to SUBSCRIBE to /System/States/2.
If you base your firmware on version >= 2.1 and you measure HR or ECG, you also get updates during the measurement when the connection is lost (so-called leads-off detection), which should help filter out the spurious HR detections. With firmware 2.0 and earlier you get Connector state 2 (= Unknown) while measuring.
Note: the leads-on detection (/System/States/2 when no HR measurement is ongoing) is very sensitive and can report a "connected" state when the HR strap is sweaty.
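On the Android side, subscribing to the connector state might look roughly like this. A Kotlin sketch assuming the Movesense Mds mobile library: the EventListener URI and contract string follow the patterns used in the Movesense sample apps, and the notification field name ("NewState") is an assumption to verify against your firmware version.

```kotlin
import android.content.Context
import android.util.Log
import com.movesense.mds.Mds
import com.movesense.mds.MdsException
import com.movesense.mds.MdsNotificationListener

// Wake up on connector (leads-on) state changes instead of polling sensors.
// URI and contract follow the Movesense sample apps; the payload field
// "NewState" is an assumption -- check it against your firmware version.
fun watchConnectorState(context: Context, serial: String, onWorn: () -> Unit) {
    val mds = Mds.builder().build(context)
    val contract = """{"Uri": "$serial/System/States/2"}""" // 2 = Connector
    mds.subscribe("suunto://MDS/EventListener", contract,
        object : MdsNotificationListener {
            override fun onNotification(data: String) {
                // Crude check for "state changed to connected" (strap on skin).
                if (data.contains("\"NewState\": 1")) onWorn()
            }
            override fun onError(error: MdsException) {
                Log.e("WearDetect", "State subscription failed", error)
            }
        })
}
```

On a wear event you would then resubscribe to HR and verify that plausible data is actually arriving, as you described.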
Full disclosure: I work for the Movesense team

Why did my temperature sensors suddenly become unstable?

I have four DS18B20 temperature sensors connected to my Raspberry Pi. I use 1wire and a pull-up resistor.
I read the values directly via cat from the 1-Wire devices and put the values, without any calculation, into a gnuplot data file.
The setup has been running fine for weeks now, measuring a cool box at different places in a range between about 0 °C and 30 °C. I got quite accurate readings and plots.
But suddenly the values of all sensors started to "flutter"; they became unstable. They also dropped, all four of them, by about a quarter of a °C. The flutter is about 0.1 °C to 0.2 °C. Two of the sensors are actually immersed in fluid (0.5 l and 25 l), so it is virtually impossible that they suddenly drop or flutter.
The time of the event coincides with me checking the cool box. I might have shifted or touched some of the sensor parts. But can that explain the temperature shift? What happened? How can I fix it?
It seems the cause of the problem could be that the resolution was lowered. This is a (volatile) setting stored in the sensor itself, and it can be set to 9, 10, 11, or 12 bits. The higher the resolution, the more precise the measurements, but at the cost of a longer measurement time.
According to the DS18B20 datasheet, the resolution defaults to 12 bits after power-up. In addition, the drivers handling 1-Wire communication with the sensor often also set the highest possible resolution during startup by default. This could explain why rebooting fixed the problem in the OP's case, but it doesn't explain why the change in resolution happened in the first place; that likely depends on the specific setup and will have to be resolved case by case.
Furthermore, to confirm that the measurements are indeed being taken at a lower resolution, one can look at the numeric values of the samples and check the minimum step by which the measurements change. For example, at 12-bit resolution the minimum delta is 0.0625 °C, while at 9-bit resolution a reading can only change by 0.5 °C and nothing in between.
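As a quick check, something like the following Kotlin sketch (purely illustrative) can hint at the active resolution by inspecting the smallest nonzero step in a series of readings:

```kotlin
// Infers the likely DS18B20 resolution from the smallest nonzero step
// between consecutive readings (in °C). Step sizes per the datasheet:
// 12-bit = 0.0625, 11-bit = 0.125, 10-bit = 0.25, 9-bit = 0.5.
fun likelyResolutionBits(readings: List<Double>): Int? {
    val minStep = readings.zipWithNext { a, b -> kotlin.math.abs(b - a) }
        .filter { it > 1e-9 }   // ignore repeated identical readings
        .minOrNull() ?: return null
    return when {
        minStep < 0.1 -> 12
        minStep < 0.2 -> 11
        minStep < 0.4 -> 10
        else -> 9
    }
}
```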

What is the average time to get an accurate GPS location on the iPhone?

I realize the answer will most likely vary based on the desired accuracy. I'm most interested in 3km accuracy (kCLLocationAccuracyThreeKilometers), but data for the other levels would also be useful.
I suppose I'm not exactly looking for the average time, but rather the point in time at which I should move on and assume I'm not going to get a more accurate location. In my use case, the GPS coordinates are not essential to my app, but they are highly useful.
At that accuracy level, it is unlikely GPS will be used at all, as the OS will opt for cell tower or Wi-Fi triangulation. Therefore, the time is likely to be less than 42 seconds, which seems very high in its own right.
Although I have no specific data on this, I have observed, through testing our own app, that geolocation typically takes between ten and twenty seconds.
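One way to implement "the point in time that I should move on" is a simple deadline: accept the first fix that meets the target accuracy, otherwise take the best fix seen so far when the deadline expires. A Kotlin sketch of that logic (all names are illustrative, not any platform's real API; the 20 s deadline echoes the observation above):

```kotlin
// Deadline-based location strategy: accept the first fix within the target
// accuracy, otherwise fall back to the best fix seen so far. All types and
// names here are illustrative, not a real platform API.
data class Fix(val latitude: Double, val longitude: Double, val accuracyMeters: Double)

class LocationWaiter(
    private val targetAccuracyMeters: Double = 3000.0, // ~kCLLocationAccuracyThreeKilometers
    private val deadlineMs: Long = 20_000              // per the 10-20 s observation above
) {
    private var bestFix: Fix? = null
    private val startMs = System.currentTimeMillis()

    /** Feed each incoming fix; returns a final fix once we should stop waiting. */
    fun onFix(fix: Fix): Fix? {
        if (fix.accuracyMeters <= (bestFix?.accuracyMeters ?: Double.MAX_VALUE)) {
            bestFix = fix
        }
        val deadlinePassed = System.currentTimeMillis() - startMs >= deadlineMs
        return when {
            fix.accuracyMeters <= targetAccuracyMeters -> fix // good enough, stop
            deadlinePassed -> bestFix                         // move on with best-so-far
            else -> null                                      // keep waiting
        }
    }
}
```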