We're keeping our backend servers in sync using NTP, but we're not sure how well synchronized the timestamps retrieved on the frontend (using NSDate) are. Can you comment if you have experience with this issue?
In my experience, although it is often fairly close, from time to time it can be off by several seconds, and the offset can vary even over a few hours.
Also, the tower time the device syncs to can itself be off: I've seen 15 seconds, 60 seconds (for months), and once a full hour, when daylight saving time was applied when it shouldn't have been. When the device isn't syncing to the tower, it can drift tens of seconds per day, and worse when it is off.
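If it matters, you can measure how far off a given device is against your own NTP-synced backend rather than trusting the device clock. A minimal C sketch; the time endpoint is hypothetical and is simulated here with a fixed skew:

    #include <stdio.h>
    #include <time.h>

    /* Stand-in for fetching your backend's NTP-synced time. In a real app
       this would be an HTTPS call to a small time endpoint (hypothetical);
       here we simulate a device clock running 7 seconds behind the server. */
    static time_t fetch_server_time(void)
    {
        return time(NULL) + 7;
    }

    int main(void)
    {
        time_t local  = time(NULL);            /* device clock (what NSDate wraps) */
        time_t server = fetch_server_time();   /* trusted reference */

        /* Positive offset means the device clock is behind the server. */
        double offset = difftime(server, local);

        /* Apply the offset to device timestamps before comparing with backend times. */
        time_t corrected = local + (time_t)offset;
        printf("device offset: %+.0f s, corrected: %s", offset, ctime(&corrected));
        return 0;
    }

In practice you would also subtract half the request round-trip time, NTP-style, before trusting the offset.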
I am using a G0 with one ADC and 8 channels. It works fine. I use 4 channels. One is temperature, which is measured constantly; I am interested in the value every 60 s. Another one is almost the opposite: it measures sound waves for a couple of minutes per day, and I need those samples at 10 kHz.
I solved this by letting all 4 channels sample at 10 kHz and having the four readings moved to memory by DMA (an array of length 4 with one measurement each). Every 60 s I take the temperature, and when I need the audio, I retrieve the audio values.
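Roughly, the setup looks like this (a sketch against the STM32 HAL; the handle name and channel ranks come from my CubeMX configuration, so treat them as illustrative):

    #include "stm32g0xx_hal.h"   /* STM32G0 HAL */

    extern ADC_HandleTypeDef hadc1;  /* 4 ranked channels, continuous mode, circular DMA (CubeMX) */

    /* One slot per ranked channel; the DMA overwrites these at 10 kHz per channel. */
    static volatile uint16_t adc_samples[4];

    void start_sampling(void)
    {
        /* Circular DMA: conversions run forever and the CPU never copies anything. */
        HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_samples, 4);
    }

    /* Called every 60 s: just read the most recent temperature conversion. */
    uint16_t latest_temperature_raw(void)
    {
        return adc_samples[0];   /* rank 1 = temperature channel in my config */
    }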
If I had two ADCs, I would start the temperature ADC for one conversion every 60 s, non-stop, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution, it seems simplest to let all conversions run at this high speed continuously, which raised my question: is there any true downside to 40,000 conversions per second, 24 hours per day? If not, the code is simple: I just have the most recent values in memory all the time. But maybe I ruin the chip? I know I use too much energy, but there is plenty of it in this case.
You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have enough of both to spare, then the lesser problems are:
The wasted power will become heat, which may upset your temperature measurements (though the amount is very small).
Having the DMA running will increase your interrupt latency and may also slow the processor slightly if it encounters bus contention (this only matters if you are close to capacity in those respects).
Having it running all the time may also have the advantage of more stable readings, since nothing is perturbed by turning things on and off.
When committing to/setting SLAs for a service, what time period should the SLA be calculated over?
For example, if I wanted all the services in my organization to commit to a P95 latency, and one of the services commits to 500 ms, what is the time window? The P95 will differ depending on the time window we look at.
It depends on the cycles in which your latency fluctuates.
No daily or hourly peaks? A couple of thousand samples will do just fine.
Daily fluctuations (e.g. peak usage, concurrent backups, etc.)? Then you will need to measure over at least a whole day.
Weekly fluctuations (e.g. tied to work hours or evening activities)? Then you will need to sample over a full week.
There is no strict requirement to sample everything within the chosen time window, but your sampling had better be representative, or you may be held liable. Also make sure to be fair when you under-sample.
If you want to be on the safe side, take the worst-case scenario in your load cycle, and within that scenario take a full minute's worth of samples. That gives you a good estimate of what will be held against you.
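For reference, the P95 itself is just an order statistic over whatever window you pick; a minimal nearest-rank sketch in C:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* P95 of one measurement window, nearest-rank method (n must be > 0).
       Note: sorts the input in place. */
    static double p95_ms(double *latencies_ms, size_t n)
    {
        qsort(latencies_ms, n, sizeof *latencies_ms, cmp_double);
        size_t rank = (size_t)ceil(0.95 * (double)n);   /* 95% of samples <= result */
        return latencies_ms[rank - 1];
    }

    int main(void)
    {
        double window[] = { 120, 310, 95, 480, 240, 505, 130, 220, 400, 180 };
        printf("P95 = %.0f ms\n", p95_ms(window, sizeof window / sizeof *window));
        return 0;
    }

Everything above is really about deciding which samples end up in that window.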
There's already a good answer on the technical details and constraints of timing the gyro measurement:
Movesense, timestamp source of imu data, and timing issues in general
However, I would like to ask a more practical question from the perspective of an Android app developer working with two sensors and a requirement for highly accurate gyro measurement timing.
What would be the most accurate way to synchronize/consolidate the timestamps from two sensors and put the measurements on the same time axis?
Sensor SW version 1.7 introduced the /Time/Detailed API to check the internal timestamp and the UTC time set on the sensor device. This is how I imagined it would play out with two sensors:
1. Before subscribing to anything, set the UTC time (in microseconds) on sensor1 and sensor2 based on the Android device time (PUT /Time).
2. Get the "time since the sensor turned on" (in milliseconds) and the "UTC time set on the sensor" (in microseconds) from both sensor1 and sensor2 (GET /Time/Detailed).
3. Calculate the difference between these two timestamps (in milliseconds) for both sensors.
4. Get the gyro values from each sensor with the internal timestamp. Add the offset calculated in step 3 to the internal timestamp to get the correct/global UTC time value.
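In code, I picture steps 3 and 4 roughly like this (a sketch; the field names are mine, not the actual API response fields):

    #include <stdint.h>

    /* Values read per sensor from GET /Time/Detailed (names illustrative). */
    typedef struct {
        int64_t relative_ms;   /* time since the sensor turned on, in milliseconds */
        int64_t utc_us;        /* UTC time set earlier via PUT /Time, in microseconds */
    } TimeDetailed;

    /* Step 3: per-sensor offset mapping the internal clock to UTC, in ms. */
    int64_t utc_offset_ms(TimeDetailed t)
    {
        return t.utc_us / 1000 - t.relative_ms;
    }

    /* Step 4: put a gyro sample's internal timestamp on the shared UTC axis. */
    int64_t gyro_utc_ms(int64_t internal_ts_ms, int64_t offset_ms)
    {
        return internal_ts_ms + offset_ms;
    }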
Is this procedure correct?
Is there a more efficient or accurate way to do this? E.g. the GATT service to set the time was mentioned in the linked post as the fastest way. Anything else?
How about possible drift in the sensor time for the gyro? Are there any tricks to limit the impact of the drift afterwards? Would it make sense to query /Time/Detailed during longer measurements and check whether the internal clock has drifted relative to the UTC time?
Thanks!
Very good question!
Looking at the accuracy of the crystals (±20 ppm), the typical drift between two sensors should be no more than 40 ppm. That translates to about 0.14 seconds over an hour (40e-6 × 3600 s ≈ 0.14 s). For longer measurements and/or better accuracy, better synchronization is needed.
Luckily, the clock drift should stay relatively constant unless the temperature of the sensor changes rapidly. Therefore it should be enough to compare the mobile phone clock with each sensor's UTC time at the beginning and end of the measurement. Any drift in either sensor will then be visible, and the timestamps can easily be compensated.
If even more accurate timestamps are needed, taking regular samples of /Time/Detailed from each sensor and comparing them to the phone clock should provide a way to estimate the sensor clock drift.
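As a sketch of that compensation, assuming you record the sensor-vs-phone offset at the start and end of the measurement and interpolate linearly in between (names illustrative):

    #include <math.h>
    #include <stdint.h>

    /* Linearly interpolate the (phone - sensor) clock offset over the recording
       and apply it to one sample timestamp. All values are in milliseconds. */
    int64_t drift_corrected_ms(int64_t sensor_ts_ms,
                               int64_t start_ts_ms, int64_t end_ts_ms,
                               int64_t offset_start_ms, int64_t offset_end_ms)
    {
        double frac = (double)(sensor_ts_ms - start_ts_ms)
                    / (double)(end_ts_ms - start_ts_ms);
        double offset = (double)offset_start_ms
                      + frac * (double)(offset_end_ms - offset_start_ms);
        return sensor_ts_ms + (int64_t)llround(offset);
    }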
Full Disclosure: I work for the Movesense team
How can I calculate the number of hours between two times, taking into account the change from standard to daylight saving time between them?
I need to determine which crew is working in my customer's plant. There are four possibilities, changing in a known order from one to the next every four days, so the crew pattern recurs every 16 days. I had planned to store a reference time in my database. To calculate the crew, I would compute the elapsed hours between the reference time and the current time, take that modulo 384, and use crew A if the result is below 96, crew B for 96–192, and so on.
I am pretty sure that in the spring, when an hour is skipped at the time change, the crew shift is only 11 hours long, and in the fall, when an hour is repeated, the crew shift is 13 hours long. My scheme, at least if it relied on timestamp with time zone objects, would be off by an hour every shift for half the year.
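For illustration, this is the elapsed-hours calculation I have in mind (a C sketch using mktime/difftime, assuming the process time zone matches the plant's). Whether modulo arithmetic over true elapsed hours lines up with the wall-clock shift schedule across a transition is exactly my concern:

    #include <time.h>

    /* Crew index (0 = A, 1 = B, 2 = C, 3 = D) from true elapsed hours since the
       reference time, which is assumed to be in the past. With tm_isdst = -1,
       mktime() resolves local wall-clock time to epoch seconds, handling DST,
       so a spring-forward day contributes 23 elapsed hours and a fall-back
       day 25. */
    int crew_index(struct tm ref_local, struct tm now_local)
    {
        ref_local.tm_isdst = -1;
        now_local.tm_isdst = -1;
        double hours = difftime(mktime(&now_local), mktime(&ref_local)) / 3600.0;
        return ((int)hours % 384) / 96;   /* 16-day cycle, 96 h per crew */
    }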
Thank you.
Is it possible to have Silk Performer recalculate results based on a specific time period from a test? I am seeing a spike at the end of my test. That needs to be investigated, but because of the spike the average is a couple of seconds higher; at least that is my assumption right now. I would like to show that, prior to the spike, the times were good.
In Silk Performance Explorer there is a resample option under the results. You can set the specific time period for the calculation there.