I am trying to measure changes in altitude with an Apple Watch during a sports activity (kite surfing). Currently my app just collects data for analysis. I record barometric and GPS altitude for comparison at a frequency of 10 measurements per second. It basically works and data is recorded, but the data seem nearly worthless: both measurements show sudden jumps of up to ±10 m, and the GPS readings have spikes of up to 75 m. Does anyone have an idea how to get reasonably accurate readings? I do not care about absolute altitude; I am only interested in the change of altitude.
Use startRelativeAltitudeUpdates(to:withHandler:), and when you're done remember to call stopRelativeAltitudeUpdates().
Here is a link to the documentation.
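A minimal sketch of how that could look with CMAltimeter (the queue choice and the logging are just for illustration):

```swift
import CoreMotion

let altimeter = CMAltimeter()

if CMAltimeter.isRelativeAltitudeAvailable() {
    altimeter.startRelativeAltitudeUpdates(to: .main) { data, error in
        guard let data = data, error == nil else { return }
        // relativeAltitude is the change in metres since updates started,
        // which is exactly the "change of altitude" you care about.
        let deltaMetres = data.relativeAltitude.doubleValue
        let pressureKPa = data.pressure.doubleValue
        print("Δaltitude: \(deltaMetres) m, pressure: \(pressureKPa) kPa")
    }
}

// When the session ends:
// altimeter.stopRelativeAltitudeUpdates()
```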
You can also ignore anomalies. For example: if the maximum plausible change in altitude in 100 milliseconds is 2 meters (i.e. 72 km/h vertically), then ignore any reading that changes by more than 2 meters within 100 milliseconds and wait for the next one.
When you ignore a reading, remember to account for the accumulated time difference before judging the next one.
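A minimal sketch of that spike rejection (the 20 m/s ceiling, i.e. 2 m per 100 ms, is just the example figure above; tune it for your sport):

```swift
import Foundation

// Rejects readings that imply an implausible vertical speed.
struct SpikeFilter {
    var maxRate = 20.0                    // maximum plausible vertical speed, m/s
    private var lastAltitude: Double?
    private var lastTimestamp: TimeInterval?

    // Returns the altitude if accepted, or nil if the reading is ignored.
    mutating func accept(altitude: Double, at timestamp: TimeInterval) -> Double? {
        guard let prevAlt = lastAltitude, let prevTime = lastTimestamp else {
            lastAltitude = altitude
            lastTimestamp = timestamp
            return altitude               // the first sample is always accepted
        }
        let dt = timestamp - prevTime     // time since the last *accepted* sample
        guard dt > 0 else { return nil }
        if abs(altitude - prevAlt) / dt > maxRate {
            return nil                    // spike: ignore it; dt keeps accumulating
        }
        lastAltitude = altitude
        lastTimestamp = timestamp
        return altitude
    }
}
```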
TL;DR:
Can I get Grafana to show the previous data point when the currently selected time range does not contain one? I have an example which sounds ridiculous, but at least it's simple to understand: I send data every minute, and I wish to zoom into the last 30 seconds and still see data. You may ask "why not just zoom out to 2 minutes?", but the reason is that the same graph contains other data that updates more often, and I wish to compare against that data. There are also lengthier reasons below.
If not, how can I achieve what I'm after (see below)?
Context
For a few years, I have been monitoring the water level in three of our basement sumps (which have pumps installed) by sending this data from Node-RED to InfluxDB, then visualising the sump levels in Grafana. I have set up three waterproof ultrasonic distance sensors, each pointed down a pipe that is inserted vertically into each sump. The water fills the pipe and the distance sensor, connected to an Arduino, sends me the reading. The Arduino also has other sensors connected (temp / humidity) and deals with distance calibrations to calculate the percent full of each sump. All this data is sent to Node-RED. In total, I am sending 4 values per sump: distance measurement in mm, percent full, temp, humidity. So that's 12 fields. Data is sent every 2 seconds, because I wished to have a reasonably high resolution to see nice curves in graphs.
I also decided to store all this data so that I could later troubleshoot issues (we have had sewage floods where water could not be pumped away, etc...) and design some warning systems for these issues based on the data.
Storing 12 values every 2 seconds over the course of a number of years takes up a lot of space (8 GB).
Nature of the data
Storing data at this resolution has also helped me describe the nature of the data, which I will do here.
(1) Non-meaningful NOISE (see below) - the percent-full reading goes up and down by 1 or 2 percent every couple of seconds:
(2) Meaningful DRIFT (see below) - I don't mean sensor drift, I am referring to actual water levels changing slowly over time, e.g. over 1 day or 1 week. Perhaps condensation on the walls drips down into the sump, or water evaporates from the sump, and the value can waver by a few percent over the course of a day. Each sump has slightly different characteristics.
(3) Meaningful MONITORING DATA - during wet weather, depending on the amount of rainfall, the sumps fill up over the course of, say, 30 minutes to 3 hours. Then the pumps run, the water level drops again, wavers a bit, and the sumps continue to fill up. Once the rain has stopped, you can see a lovely curve as the water fills in progressively more slowly (see the green line below):
Solution to downsample
I know Influx has its own downsampling possibilities; however, because of the nature of the data (which can barely vary for two months, but when it does I really need to capture it in detail), I don't think lowering the sample rate is a great idea.
I have some understanding of digital filters (e.g. low-pass) but have never programmed one myself. So I have written a basic filter in JavaScript (a Node-RED function) to filter the data in real time as follows: only send a reading when it has changed from the previously sent one by x amount (and update the stored previous reading when that occurs).
This has already vastly reduced the amount of data being stored, and I can vary x to filter out noise shown in my first graph above, at the expense of resolution when the pumps run. Even if I set the x value to 2, it still vastly reduces data over long periods of dry weather.
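For reference, the core of the filter is a simple deadband. My real version is a Node-RED JavaScript function; the sketch below (in Swift, purely to illustrate the logic) shows the idea:

```swift
// Deadband filter: only pass a value on when it has moved by at least
// `threshold` from the last value that was sent.
struct DeadbandFilter {
    var threshold: Double                 // the "x amount" from the text, e.g. 2 (percent)
    private var lastSent: Double?

    // Returns true if the reading should be written to InfluxDB.
    mutating func shouldSend(_ value: Double) -> Bool {
        guard let last = lastSent else {
            lastSent = value
            return true                   // always send the first reading
        }
        if abs(value - last) >= threshold {
            lastSent = value              // update the reference reading
            return true
        }
        return false                      // within the deadband: drop it
    }
}
```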
So - on to my problem! Data is now not logged to InfluxDB unless there is some meaningful change, which means that when I zoom in to, e.g., a 15-minute timeframe, there is often nothing to see.
Grafana does have the option of "fill(previous)", but this draws a line between existing points on the graph rather than showing the previous value as if it hadn't changed since that point. Now my Grafana dashboard looks a bit sad :(
One proposed solution is, in addition to sending the "delta" data, to also send "summary" data - that is, send a full suite of readings every minute regardless of whether anything changed. But then the noise is back, along with pointless storage.
Any other ideas?
Is it possible to obtain the heart rate (or even the raw ECG data) from the Apple Watch in real time? If yes, what is the 'update frequency'? I.e. does the watch send the data every 100 ms, every second, every 10 seconds? I'd need the data only for a short period (1-2 minutes), but with the highest possible update frequency. I assume the data must be captured and processed at a very high rate, since heart rate variability is computed from it, but I am wondering whether I, as a developer, have access to the full raw data at full time resolution.
Thanks a lot!
PS: there are a few related posts on this topic, but they are quite old and not 100% what I need.
There's already a good answer on the technical details and constraints of timing the gyro measurement:
Movesense, timestamp source of imu data, and timing issues in general
However, I would like to ask a more practical question from the perspective of an Android app developer working with two sensors and a requirement for highly accurate gyro measurement timing.
What would be the most accurate way to synchronize/consolidate the timestamps from two sensors and put the measurements on the same time axis?
Sensor SW version 1.7 introduced the Time/Detailed API to check the internal timestamp and the UTC time set on the sensor device. This is how I imagined it would play out with two sensors:
1. Before subscribing to anything, set the UTC time (in microseconds) on sensor1 and sensor2 based on the Android device time (PUT /Time).
2. Get the "time since sensor turned on" (in milliseconds) and the "UTC time set on sensor" (in microseconds) from both sensor1 and sensor2 (GET /Time/Detailed).
3. Calculate the difference between these two timestamps (in milliseconds) for each sensor.
4. Get the gyro values from the sensor with their internal timestamps, and add the offset from step 3 to each internal timestamp to get the correct/global UTC time value (rough sketch of this below).
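Expressed as code, the bookkeeping I have in mind is roughly the following; the names are my own placeholders rather than the actual Movesense API, and the values would come from the PUT /Time and GET /Time/Detailed calls above:

```swift
// Per-sensor clock bookkeeping for steps 2-4.
struct SensorClock {
    let utcSetOnSensorMicros: Int64   // "UTC time set on sensor" from GET /Time/Detailed (µs)
    let relativeTimeMillis: Int64     // "time since sensor turned on" from the same call (ms)

    // Step 3: offset that maps the sensor's internal timestamps to UTC.
    var offsetMillis: Int64 {
        utcSetOnSensorMicros / 1000 - relativeTimeMillis
    }

    // Step 4: convert a gyro sample's internal timestamp to a global UTC time.
    func utcMillis(forInternalTimestamp internalMillis: Int64) -> Int64 {
        internalMillis + offsetMillis
    }
}
```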
Is this procedure correct?
Is there a more efficient or accurate way to do this? E.g. the GATT service to set the time was mentioned in the linked post as the fastest way. Anything else?
How about the possible drift in the sensor time for gyro? Are there any tricks to limit the impact of the drift afterwards? Would it make sense to get the /Time/Detailed info during longer measurements and check if the internal clock has drifted/changed compared to the UTC time?
Thanks!
Very good question!
Looking at the accuracy of the crystals (±20 ppm), the typical drift between two sensors should be no more than 40 ppm. That translates to about 0.14 seconds over an hour (40 ppm of 3600 s ≈ 0.14 s). For longer measurements and/or better accuracy, better synchronization is needed.
Luckily the clock drift should stay relatively constant unless the temperature of the sensor is changing rapidly. Therefore it should be enough to compare the mobile phone clock and each sensor's UTC time at the beginning and end of the measurement. Any drift of either sensor should then be visible, and the timestamps can easily be compensated.
If even more accurate timestamps are needed, taking regular samples of /Time/Detailed from each sensor and comparing them to the phone clock should provide a way to estimate possible sensor clock drift.
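As a sketch, a simple start/end compensation could look like this (the names are illustrative, not part of the Movesense API; it just spreads the measured offset change linearly over the measurement):

```swift
// Linear drift compensation, assuming the phone-vs-sensor offset was
// sampled once at the start and once at the end of the measurement.
struct DriftCorrector {
    let sensorStart: Double   // sensor UTC at the start of the measurement (s)
    let sensorEnd: Double     // sensor UTC at the end of the measurement (s)
    let offsetStart: Double   // phone UTC minus sensor UTC, measured at the start (s)
    let offsetEnd: Double     // phone UTC minus sensor UTC, measured at the end (s)

    // Map a sensor timestamp onto the phone's time axis, interpolating the offset.
    func phoneTime(forSensorTime t: Double) -> Double {
        let span = sensorEnd - sensorStart
        guard span > 0 else { return t + offsetStart }
        let fraction = (t - sensorStart) / span
        return t + offsetStart + fraction * (offsetEnd - offsetStart)
    }
}
```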
Full Disclosure: I work for the Movesense team
I realize the answer will most likely vary based on the desired accuracy. I'm most interested in 3km accuracy (kCLLocationAccuracyThreeKilometers), but data for the other levels would also be useful.
I suppose I'm not exactly looking for the average time, but rather the point at which I should move on and assume I'm not going to get a more accurate location. In my use case, the GPS coordinates are not essential to my app, but they are highly useful.
At that accuracy level, it is unlikely GPS will be used at all, as the OS will opt for cell tower or Wi-Fi triangulation. Therefore, the time is likely to be less than 42 seconds, which seems very high in its own right.
Although I have no specific data on this, I have observed - through testing our own app - that geolocation takes roughly ten to twenty seconds.
I want to know why the location manager on iPhone gives a wrong coordinate the first time the application runs. Because of this, my distance already shows about 100 meters at the start of the application, and my average speed is also affected.
Each location you receive will have a horizontal accuracy. If the accuracy is above some threshold, say 10 meters, disregard it; it will simply take a little longer to get an accurate reading. A negative accuracy means the location is invalid and should also be discarded.
You could also keep your current logic, but reset all data the first time the accuracy is below your threshold. You will still be disregarding inaccurate data, but you can give the user some initial feedback the way map programs do.
Which approach to use depends on your application.
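For example, the first approach might look roughly like this (the 10 m threshold is only an example, and the delegate shown is a bare sketch rather than a complete location setup):

```swift
import CoreLocation

// Drops locations whose horizontal accuracy is invalid (negative) or too coarse.
final class FilteringLocationDelegate: NSObject, CLLocationManagerDelegate {
    let threshold: CLLocationAccuracy = 10        // metres; pick what suits your app

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            let accuracy = location.horizontalAccuracy
            guard accuracy >= 0, accuracy <= threshold else { continue }
            // Only now feed `location` into the distance / average-speed calculations.
        }
    }
}
```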