Why isn't the path showing on the map in an activity posted to Google Fit using the REST API?

I am using Google Fit REST API (via Google Java Client Library) to post an activity into Google Fit.
In summary, what I am doing is creating three DataSets covering the given time period:
"com.google.location.sample" - Location
"com.google.step_count.delta" - Steps
"com.google.calories.expended" - Calories
... then creating a Session, and finally a DataSet with a single Activity Segment (in this case all the time is walking).
This basically all seems to work: looking at http://fit.google.com, I can see the activity with the correct time, location, duration, steps and calories. The problem is with the map... all it shows is a shaded circle over the whole area of the walk; it doesn't show the track/path that I included in the location DataSet.
EDIT... Here is an example of what it looks like (in the web UI).
Why would this not be showing up correctly, when all of the rest of the activity shows up perfectly?
These are some of my suspicions (see the sketch after this list):
(1) My data does not have either altitude or accuracy, which are two of the fields required by "com.google.location.sample". So I set altitude to 0.0 (metres) and accuracy to 5.0 (metres). I particularly wonder if Google is reacting badly to me setting the altitude to 0.0 for each point.
(2) My location DataSet has, say, 100 DataPoints in it, whereas my steps and calories DataSets only have one DataPoint each, i.e. I only have total steps and total calories for the walk. So there's an inconsistency (though the earliest start and latest end dates are the same for each DataSet).
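For reference, here is a simplified sketch of how I build each location point with those static values (using the generated Java client for the Fitness REST API; the timestamps and coordinates come from my GPS track):

    import com.google.api.services.fitness.model.DataPoint;
    import com.google.api.services.fitness.model.Value;
    import java.util.Arrays;

    // com.google.location.sample carries four fields, in this order:
    // latitude (deg), longitude (deg), accuracy (m), altitude (m).
    DataPoint buildLocationPoint(long startNanos, long endNanos,
                                 double lat, double lon) {
        return new DataPoint()
            .setDataTypeName("com.google.location.sample")
            .setStartTimeNanos(startNanos)
            .setEndTimeNanos(endNanos)
            .setValue(Arrays.asList(
                new Value().setFpVal(lat),
                new Value().setFpVal(lon),
                new Value().setFpVal(5.0),    // accuracy: static 5.0 m
                new Value().setFpVal(0.0)));  // altitude: static 0.0 m
    }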
Can anybody give any guidance about why this is happening please?

I think this may be due to conflicting data points, as stated here. Although the documentation is for the Android API, I think it holds true when using the REST API too.
Each DataPoint in your app's DataSet must have a startTime and an endTime that defines a unique interval within that DataSet, with no overlap between DataPoint instances. If your app attempts to insert a new DataPoint that conflicts with an existing DataPoint instance, the new DataPoint is discarded. To insert a new DataPoint that may overlap existing data points, use the HistoryApi.updateData method described in Update data.
You mentioned that the dates are the same across your data sets. DataPoints whose intervals collide like this override one another, so they end up being treated as a single point.
As for your com.google.location.sample data type fields, I think it's better to leave them as they are; try not to set a static value for altitude and accuracy.
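As an illustration of the rule above, here is a sketch (again using the generated Java client; the data source ID and client setup are placeholders) that computes a covering interval and patches the whole DataSet in one call, with each point keeping its own non-overlapping interval:

    import com.google.api.services.fitness.Fitness;
    import com.google.api.services.fitness.model.DataPoint;
    import com.google.api.services.fitness.model.Dataset;
    import java.util.List;

    // "fitness" is an authorized Fitness client; no two points in "points"
    // should share the same start/end interval, or they will be discarded.
    void patchLocations(Fitness fitness, String dataSourceId,
                        List<DataPoint> points) throws Exception {
        long minNs = Long.MAX_VALUE, maxNs = Long.MIN_VALUE;
        for (DataPoint p : points) {
            minNs = Math.min(minNs, p.getStartTimeNanos());
            maxNs = Math.max(maxNs, p.getEndTimeNanos());
        }
        Dataset dataset = new Dataset()
            .setDataSourceId(dataSourceId)
            .setMinStartTimeNs(minNs)
            .setMaxEndTimeNs(maxNs)
            .setPoint(points);
        // The dataset ID is the covered interval, "<startNs>-<endNs>".
        fitness.users().dataSources().datasets()
            .patch("me", dataSourceId, minNs + "-" + maxNs, dataset)
            .execute();
    }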

Related

Find Average time in different flow paths

I'm currently building an AnyLogic model and want to calculate the average time spent by customers in different flow paths (I have added the process flow below). In the picture I have named the paths I want to calculate the average time for as path A and path B.
AnyLogic has dedicated blocks for this (although it can also be done simply in code).
See the details here.
The TimeMeasureEnd block contains a dataset.
The following code returns the average of the Y-axis values:
timeMeasureEnd.dataset.getYMean();
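For example, with one TimeMeasureStart/TimeMeasureEnd pair around each path (the block names below are illustrative, not from your model), you can read both averages:

    // Assuming TimeMeasureEnd blocks named timeMeasureEndA and timeMeasureEndB,
    // one closing each measured path:
    double avgTimeA = timeMeasureEndA.dataset.getYMean(); // average time, path A
    double avgTimeB = timeMeasureEndB.dataset.getYMean(); // average time, path B
    traceln("Path A: " + avgTimeA + "  Path B: " + avgTimeB);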
Good luck!

How to persist previous data point when time range doesn't include a data point

TL;DR:
Can I get Grafana to show me the previous data point when the currently selected time period does not contain a data point? I have an example which sounds ridiculous, but at least it's simple to understand: I send data every 1 minute, and I wish to zoom into the last 30 seconds and still see data. You may ask "why not just zoom out to 2 minutes?", but the reason is that other data on the same graph updates more often, and I wish to compare with that data. There are also lengthier reasons, given below.
If not, how can I achieve what I want to achieve, see below?
Context
For a few years, I have been monitoring the water level in three of our basement sumps (which have pumps installed) by sending this data from Node-RED to InfluxDB, then visualising the sump levels in Grafana. I have set up three waterproof ultrasonic distance sensors, each pointed down a pipe that is inserted vertically into each sump. The water fills the pipe and the distance sensor, connected to an Arduino, sends me the reading. The Arduino also has other sensors connected (temp / humidity) and deals with distance calibrations to calculate the percent full of each sump. All this data is sent to Node-RED. In total, I am sending 4 values per sump: distance measurement in mm, percent full, temp, humidity. So that's 12 fields. Data is sent every 2 seconds, because I wished to have a reasonably high resolution to see nice curves in graphs.
Also I decided to store all this data so that I could later troubleshoot issues (we have had sewage floods resulting in water not being able to be pumped away, etc...) and design some warning systems for these issues based on data.
Storing 12 values for every 2 seconds, over the course of a number of years, takes up a lot of space (8GB).
Nature of the data
Storing this resolution of data has also helped me be able to describe the nature of the data. I will do so here.
(1) Non-meaningful NOISE (see below) - the percent-full reading goes up and down by 1 or 2 percent every couple of seconds.
(2) Meaningful DRIFT (see below) - I don't mean sensor drift; I am referring to actual water levels changing slowly over time, e.g. over 1 day or 1 week. Perhaps condensation on the walls drips down into the sump, or water evaporates from the sump, and the value can waver by a few percent over the course of a day. Each sump has slightly different characteristics.
(3) Meaningful MONITORING DATA - during wet weather, depending on rainfall amount, the sumps fill up over the course of, say, 30 minutes to 3 hours. Then the pumps run and the water level drops again, wavers a bit, then the sumps continue to fill up. If the rain has stopped, you can see a lovely curve as the water fills in progressively more slowly (see the green line below).
Solution to downsample
I know Influx has its own downsampling possibilities; however, because of the nature of the data (which can hardly vary for 2 months, but when it does vary I really need to capture it in detail), I don't think lowering the sample rate is a great idea.
I have some understanding of digital filters (e.g. low-pass) but have never programmed one myself. So I have written a basic filter in JavaScript (a Node-RED function) to filter the data in realtime, as follows: only send a reading when it has changed from the previously sent one by some amount x (and update the stored previous reading when that occurs).
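The core of the filter looks something like this (sketched here in Java for readability; the real Node-RED function is JavaScript, and the threshold and names are illustrative):

    // Deadband/delta filter: forward a reading only when it differs from the
    // last forwarded value by at least "threshold" (the x above).
    class DeltaFilter {
        private final double threshold;
        private Double lastSent; // null until the first reading is forwarded

        DeltaFilter(double threshold) { this.threshold = threshold; }

        // Returns true if the reading should be sent on to InfluxDB.
        boolean shouldSend(double reading) {
            if (lastSent == null || Math.abs(reading - lastSent) >= threshold) {
                lastSent = reading; // update the reference point
                return true;
            }
            return false; // change too small: treat as noise and drop it
        }
    }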
This has already vastly reduced the amount of data being stored, and I can vary x to filter out noise shown in my first graph above, at the expense of resolution when the pumps run. Even if I set the x value to 2, it still vastly reduces data over long periods of dry weather.
So - onto my problem! Now data is not being logged to InfluxDB unless there is some meaningful change, which means that when I zoom in to, e.g., a 15-minute timeframe, there is nothing to see.
Grafana does have the option of fill(previous), but this draws a line between points on the existing graph rather than showing the previous data as if it hasn't changed since that point. Now my Grafana dashboard looks a bit sad :(
One proposed solution is, in addition to sending "delta" data, to send "summary" data - that is, to send a full suite of data every minute regardless of whether it has changed or not. But then we get the noise back again, plus pointless storage.
Any other ideas?

Modeler question: Is there a function in SPSS for multiple 'if' statements? Forecasting dates

I am trying to build a forecast for interest expense for floating debt in my company.
I have been given a set of ResetDates which help me match a given rate based on when the ResetDate is.
I have been successful in forecasting one period, but I need a much longer set of periods to satisfy my requirements.
I've tried derive nodes and nested if statements as well as filler nodes.
I am given this data to work with, I can only look at one ResetDate ahead.
Here you will find the data I used: columns A/B/C/D are what I'm given; column E (the 5th column from left to right) is what I want to derive as my output.
I want to use 'InterestPayDate' and derive:
if it's more than 'NextReset', then add 90 days to 'NextReset' to create 'NextReset2'.
That is as far as I can get... where my problem lies is that I then want to look at 'NextReset2' and derive:
if 'InterestPayDate' is more than 'NextReset2', then add 90 days to 'NextReset2'; if it's less than 'NextReset2', keep the current value of 'NextReset2'.
Output should look like Column E here
Not sure if I need to dig deeper into the logical functions, in all honesty, I've just picked up SPSS and I am really trying to learn. Hopefully, you can point me in the right direction.
Thank you.
After computing the first NextReset2, you need to use a Filler node like the one below to change the value of the field.
You might need more than one identical node like this - one for each potential 90-day period by which you are looking to extend the NextReset2 date. In your sample data, you will need at least two Filler nodes to get the correct value of NextReset2 for the last of the records.
There might be a more elegant way to do it, but this will work, and it's easy enough to make copies of a node and string them together like this.
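To make the repeated rule concrete, here is the same logic as a plain Java sketch (each extra pass of the loop corresponds to one more Filler node; the field names follow your data):

    import java.time.LocalDate;

    // Mirror of the Filler-node chain: keep pushing NextReset2 forward by
    // 90 days until it reaches or passes InterestPayDate.
    LocalDate deriveNextReset2(LocalDate interestPayDate, LocalDate nextReset) {
        LocalDate nextReset2 = interestPayDate.isAfter(nextReset)
                ? nextReset.plusDays(90) // first derivation from NextReset
                : nextReset;
        while (interestPayDate.isAfter(nextReset2)) {
            nextReset2 = nextReset2.plusDays(90); // one more Filler node's worth
        }
        return nextReset2;
    }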
Please also see a sample IBM SPSS Modeler stream showing this approach here and using your sample data.

Averaged Historical Data from Xively feed API

The Xively (Cosm) web interface issues the following request for averaged historical datapoints:
// For averaged historical datapoints
https://www.xively.com/feeds/<feedId>/datastreams/Humidity/graph.json?duration=21600seconds&interval=30&limit=1000&find_previous=true&function=average
I would like to fetch averaged historical data points (that is, if there are multiple samples within the interval I am asking for, return an averaged rollup as the representative point of that interval) using the Xively REST API.
However, the following seems to return the raw data points (they just pick one datapoint to represent each sample interval):
https://api.xively.com/v2/feeds/127181539.json?datastreams=TEMP&duration=1month&interval=21600&limit=200&function=average
So, my questions:
1) How can I get averaged data points back, like the Xively web interface does? What parameter is needed for the feed API call?
2) Does anyone know about the parameter interval_type? I have read what is here (https://xively.com/dev/docs/api/quick_reference/historical_data/) about 50 times now, but I still don't get it!
Update
function=sum as well as function=average works for the /datastreams/TEMP.json endpoint. Also, they are discrete by default. function=average does not work with the /feeds/feed_id.json endpoint. Maybe a bug?
If you've got "function=average" (which you have) as a query parameter, then the points you get back should be bucketed to the interval you specified (21600 seconds / 6 hours). Each point represents the average value for that period.
It might be worth making this query against the datastreams endpoint though, e.g.
https://api.xively.com/v2/feeds/127181539/datastreams/TEMP.json?duration=1month&interval=21600&limit=200&function=average
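For example, with a plain HTTP client (a Java 11+ sketch; the X-ApiKey value is a placeholder for your own key):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class XivelyAverages {
        public static void main(String[] args) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.xively.com/v2/feeds/127181539/datastreams/TEMP.json"
                    + "?duration=1month&interval=21600&limit=200&function=average"))
                .header("X-ApiKey", "YOUR_API_KEY") // placeholder API key
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            // Each returned datapoint should be the 6-hour (21600 s) average.
            System.out.println(response.body());
        }
    }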
Hope this helps!

How can I control how Jasper Reports combines data for a single value in a time series?

I have a time series and I'd like to:
a) Know how Jasper Reports (or JFreeChart) will combine my data for a single point on the chart by default
and
b) Be able to change how that combination is performed
For instance, let's say that I have samples of data once per second, and my time series is configured for "minute". That means that I have 60 pieces of real data for each single value shown on the chart. I'd like to be able to control how that mapping is done (e.g. average, maximum, etc.).
I looked around for documentation on how to see the default or modify how the plot works, but I wasn't able to find anything. Perhaps my search terms (chart, time series, etc.) were too generic.
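If there is no built-in hook for this, I assume I could pre-aggregate the raw samples myself before they reach the chart; a minimal JFreeChart sketch (with averaging picked arbitrarily as the combination rule) would be:

    import java.util.Date;
    import java.util.Map;
    import java.util.TreeMap;
    import org.jfree.data.time.Minute;
    import org.jfree.data.time.TimeSeries;

    // Pre-aggregate one-second samples into per-minute averages, so the
    // combination rule (here: the mean) is explicit rather than implicit.
    TimeSeries toMinuteAverages(Map<Date, Double> secondSamples) {
        TimeSeries series = new TimeSeries("avg per minute");
        Map<Minute, double[]> acc = new TreeMap<>(); // {sum, count} per minute
        for (Map.Entry<Date, Double> e : secondSamples.entrySet()) {
            double[] sc = acc.computeIfAbsent(new Minute(e.getKey()),
                                              k -> new double[2]);
            sc[0] += e.getValue(); // running sum
            sc[1] += 1;            // sample count
        }
        for (Map.Entry<Minute, double[]> e : acc.entrySet()) {
            series.add(e.getKey(), e.getValue()[0] / e.getValue()[1]);
        }
        return series;
    }

Swapping the mean for a maximum (or anything else) would then be a one-line change in the second loop.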