What is the definition of the notification response for Movesense /Algo/ECGRR?

What is the content & format of the notification response for /Algo/ECGRR?
I've subscribed to it on the movesense hardware using a slightly modified version of the DataLoggerSample Android app from the Movesense-mobile-lib repository, with the sensor running the default device firmware. I'm able to pull a .json log off the sensor after a while.
(This was mentioned in another SO question I ran across while trying to figure out how to log data on the device.)
But I don't see the default device firmware in the Movesense-device-lib repository, and there are no /Algo APIs listed in the online docs.
I'm looking for the specific units and internal representations of the notification response. The units would normally be given in the yaml API definition; for example, here it is for /Meas/HR:
/Meas/HR/Subscription:
  post:
    description: |
      Combined subscription to average HR and R-R data.
    responses:
      200:
        description: Operation completed successfully
        schema:
          $ref: 'types.yaml#/definitions/HRData'
    x-notification:
      description: |
        Notifications comprise average HR (Hz) with the latest RR (ms) data
      schema:
        $ref: 'types.yaml#/definitions/HRData'
and the post/x-notification/description field tells us the units.
From the .json output with notification responses for /Algo/ECGRR, e.g.:
{"Algo":[
  {"RR":742,"SNR":16,"Timestamp":14326776},
  {"RR":743,"SNR":16,"Timestamp":14327521},
  {"RR":726,"SNR":17,"Timestamp":14328240},
  {"RR":720,"SNR":14,"Timestamp":14328961},
  ...
  {"RR":660,"SNR":12,"Timestamp":20613697}]}
We can assume the units are:
milliseconds for RR interval
unitless for SNR
milliseconds for Timestamp
and we can make an educated guess that the internal representations are:
uint16 for RR interval
uint8 for SNR
uint32 for Timestamp
but I'd rather see confirmation somewhere, instead of assuming.
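For what it's worth, the guessed layout can at least be sanity-checked with a struct round-trip; the packed little-endian layout below is purely an assumption of this sketch, not a confirmed Movesense storage format:

```python
import struct

# Purely hypothetical layout for one /Algo/ECGRR record: uint32 Timestamp,
# uint16 RR, uint8 SNR, little-endian and packed.
record = struct.pack("<IHB", 14326776, 742, 16)
assert len(record) == 7  # 4 + 2 + 1 bytes per record
timestamp, rr, snr = struct.unpack("<IHB", record)
```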
And what zero reference is the Timestamp field relative to?
The other SO question tells us
If you are storing /Meas/HR then the generated storage format is total
of 6 bytes long.
and the .json output pairs a float with an integer:
"Meas":{"HR":[
  {"average":98.791664123535156,"rrData":[720]},
  {"average":97.158706665039062,"rrData":[712]},
  ...
so an educated guess would be the internal representation is float32 & uint16, but that's still just a guess.
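Still, the guessed float32 + uint16 layout is at least consistent with the quoted 6-byte size; a hypothetical sketch:

```python
import struct

# Hypothetical 6-byte /Meas/HR record: float32 average + uint16 RR.
# Consistent with the quoted "6 bytes long", but unconfirmed.
assert struct.calcsize("<fH") == 6  # 4 + 2 bytes
raw = struct.pack("<fH", 98.791664123535156, 720)
average, rr = struct.unpack("<fH", raw)  # average comes back as float32
```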
How long is the storage format for /Algo/ECGRR? And where do I find that information for other types, since I don't see it in the API?

EDITED to answer modified question
You're correct about the structure of the /Algo/ECGRR notification. I've attached the definition from the yaml file below:
definitions:
  ECGRRData:
    required:
      - Timestamp
      - RR
      - SNR
    properties:
      Timestamp:
        description: Local timestamp of RR detection.
        type: integer
        format: uint32
        x-unit: millisecond
      RR:
        description: RR interval
        type: integer
        format: uint16
        x-unit: millisecond
      SNR:
        description: Estimated SNR of ECG signal (dB)
        type: integer
        format: int8
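Given the confirmed millisecond unit for RR, each notification can be converted to an instantaneous heart rate; a small sketch using values from the log above:

```python
# Derive instantaneous heart rate (bpm) from the RR interval (ms) in each
# /Algo/ECGRR notification. Sample values are taken from the log above.
def rr_to_bpm(rr_ms: int) -> float:
    return 60_000.0 / rr_ms

samples = [
    {"RR": 742, "SNR": 16, "Timestamp": 14326776},
    {"RR": 743, "SNR": 16, "Timestamp": 14327521},
]
bpm = [rr_to_bpm(s["RR"]) for s in samples]  # ~80.9, ~80.8 bpm
```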
Original answer
The question might be better phrased as "How can I include the /Algo/ECGRR service in my own firmware?", and our current documentation does not cover that well yet (it will be added).
Short answer: The ECGRR service is a separately licensed software module that is available to developers who need it. The way to get it is to request it from the Movesense team (info (at) movesense.com), and it may incur licensing fees when ordering production quantities (decided by the people in charge of pricing).
Full disclosure: I work for the Movesense team

Related

Getting measurement into influxdb, nosql database

I have a measurement I want to persist in an InfluxDB database. The measurement itself consists of approx. 4000 measurement points generated by a microcontroller. The measurement points are floats and are generated periodically (every few minutes) at a constant frequency.
I'm trying to build up some knowledge of NoSQL databases, and InfluxDB is my first try here.
Question is: how do I get these measurements into InfluxDB, assuming they arrive in an MQTT message (in JSON format)? How are the insert strings generated/handled?
{
  "begin_time_of_meas": "2020-11-19T16:02:48+0000",
  "measurement": [
    1.0,
    2.2,
    3.3,
    ...,
    3999.8,
    4000.4
  ],
  "device": "D01"
}
I have used Node-RED in the past and I know there is a plugin for InfluxDB, so I guess this would be a way. But I'm quite unsure how the insert string is generated/handled for an array of measurement points. Every example I have seen so far handles only single-point measurements, like one temperature reading every few seconds, or CPU load. Thanks for your help.
I've successfully used the influxdb plugin with a time precision of milliseconds. Not sure how to make it work for more precise timestamps, and I've never needed to.
It sounds like you have more than a handful of points arriving per second; send groups of messages as an array to the influx batch node.
In your case, it depends what those 4000 measurements are, and how it best makes sense to group them. If the variables all measure the same point, something like this might work. I've no idea what the measurements are, etc. A function that takes the mqtt message and converts it to a block of messages like this might work well (note that this function output could replace the join node):
[{
    measurement: "microcontroller_data",
    timestamp: new Date("2020-11-19T16:02:48+0000").getTime(),
    tags: {
        device: "D01",
        point: "0001",
    },
    fields: {
        value: 1.0
    }
},
{
    measurement: "microcontroller_data",
    timestamp: new Date("2020-11-19T16:02:48+0000").getTime(),
    tags: {
        device: "D01",
        point: "0002",
    },
    fields: {
        value: 2.2
    }
},
...etc...
]
That looks like a lot of information to store, but measurement and tags values are basically header values that don't get written with every entry. The fields values do get stored, but these are compressed. The json describing the data to be stored is much larger than the on-disk space the storage will actually use.
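As a sketch of that conversion step (in Python rather than a Node-RED function node, and with illustrative measurement/tag names), the incoming MQTT JSON can be expanded into that list of points:

```python
import json
from datetime import datetime

# Expand one MQTT payload into a list of single-value points, one per
# array element. "microcontroller_data" and the tag names are illustrative.
def mqtt_to_points(payload: str) -> list:
    msg = json.loads(payload)
    # "%z" parses the +0000 offset; the timestamp here is in milliseconds
    ts = int(datetime.strptime(msg["begin_time_of_meas"],
                               "%Y-%m-%dT%H:%M:%S%z").timestamp() * 1000)
    return [
        {
            "measurement": "microcontroller_data",
            "timestamp": ts,
            "tags": {"device": msg["device"], "point": f"{i:04d}"},
            "fields": {"value": v},
        }
        for i, v in enumerate(msg["measurement"], start=1)
    ]
```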
It's also possible to have multiple fields, but I believe this will make data retrieval trickier:
{
    measurement: "microcontroller_data",
    timestamp: new Date("2020-11-19T16:02:48+0000").getTime(),
    tags: {
        device: "D01",
        point: "0001",
    },
    fields: {
        value_0001: 1.0,
        value_0002: 2.2,
        ...etc...
    }
}
Easier to code, but it would make for some ugly and inflexible queries.
You will likely have some more meaningful names than "microcontroller_data", or "0001", "0002" etc. If the 4000 signals are for very distinct measurements, it's also possible that there is more than one "measurement" that makes sense, e.g. cpu_parameters, flowrate, butterflies, etc.
Parse your MQTT messages into that shape. If the messages are sent one-at-a-time, then send to a join node; mine is set to send after 500 messages or 1 second of inactivity; you'll find something that fits.
If the json objects are grouped into an array by your processing, send directly to the influx batch node.
In the influx batch node, under "Advanced Query Options", I set the precision to ms because that's the default from Date().getTime().

Pushing key/value pair data to graphite/grafana

We are trying to see if Graphite will fit our use case. We have a number of public parameters, like key-value pairs.
Say:
Data:
  Caller: abc
  Site: xyz
  HTTP status: 400
  (6-7 more similar key-value pairs, etc.)
This data is continuously posted to us in a data report. What we want is to draw visualisations over this data.
We want graphs that show things like how many 400s there were per site, or which are the top sites or callers producing 400s.
Now we are wondering if this can be done with Graphite.
But we have questions: Graphite stores numerical values, so how will we represent this data in Graphite?
Something like this?
Clicks.metric.status.400 1 currTime
Clicks.metric.site.xyz 1 currTime
Clicks.metric.caller.abc 1 currTime
Adding 1 as the numerical value to record the event.
Also, how will we group a set of values together?
E.g. this HTTP status belongs to this site, since they came in one record.
In that case we need something like
Clicks.metric.status.{uuid1}.400 1 currTime
Clicks.metric.site.{uuid1}.xyz 1 currTime
Our aim is then to use Grafana to build graphs on this data, e.g. which are the top sites showing 400 status.
Will this approach be OK?
Graphite accepts three types of data: plaintext, pickled, and AMQP.
The plaintext protocol is the most straightforward protocol supported
by Carbon.
The data sent must be in the following format: <metric path> <metric
value> <metric timestamp>. Carbon will then help translate this line
of text into a metric that the web interface and Whisper understand.
If you're new to graphite (which sounds like you are) plaintext is definitely the easiest to get going with.
As to how you'll be able to group metrics and perform operations on them, you have to remember that graphite doesn't natively store any of this for you. It stores timeseries metrics, and provides functions that manipulate that data for visual / reporting purposes. So when you send a metric, prod.host-abc.application-xyz.grpc.GetStatus.return-codes.400 1 1522353885, all you're doing is storing the value 1 for that specific metric at timestamp 1522353885. You can then use graphite functions to display that data, e.g.,: sumSeries(prod.*.application-xyz.grpc.GetStatus.return-codes.400) will produce a sum of all 400 error codes from all hosts.
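That plaintext line format is simple enough to speak with a bare socket; a minimal sketch (the host default and metric path are assumptions, 2003 is Carbon's default plaintext port):

```python
import socket
import time

# Send one metric using the Carbon plaintext protocol: a single
# "<metric path> <metric value> <metric timestamp>\n" line over TCP.
def send_metric(path: str, value, host: str = "localhost", port: int = 2003):
    line = f"{path} {value} {int(time.time())}\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("ascii"))

# e.g. send_metric("Clicks.metric.status.400", 1)
```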

What are "Included Services" in a Service useful for?

I have a custom profile for a proprietary device (my smartphone app will be the only thing communicating with my peripheral) that includes two simple services. Each service allows the client to read and write a single byte of data on the peripheral. I would like to add the ability to read and write both bytes in a single transaction.
I tried adding a third service that simply included the two existing single byte services but all that appears to do is assign a UUID that combines the UUIDs for the existing services and I don't see how to use the combined UUID since it doesn't have any Characteristic Values.
The alternatives I'm considering are to make a separate service for the two bytes and combine their effects on my server, or I could replace all of this with a single service that includes the two bytes along with a boolean flag for each byte that indicates whether or not the associated byte should be written.
The first alternative seems overly complicated and the second would preclude individual control of notifications and indications for the separate bytes.
Is there a way to use included services to accomplish my goals?
It's quite an old question, but I'll leave an answer here in case anyone else comes across it.
There are two parts to this. One is a late answer for Lance F: you had a misunderstanding of the BLE design principles. Services are defined at the host level of the BLE stack, while you considered your problem from the application-level point of view, wanting an atomic transaction to give you a compound object of two distinct entities. Otherwise why would you have defined two services?
The second part is an answer to the actual question taken as quote from "Getting Started with Bluetooth Low Energy" by Kevin Townsend et al., O'Reilly, 2014, p.58:
Included services can help avoid duplicating data in a GATT server. If a service will be referenced by other services, you can use this mechanism to save memory and simplify the layout of the GATT server. In the previous analogy with classes and objects, you could see include definitions as pointers or references to an existing object instance.
This is an update of my answer, to clarify why there is no need for included services in the problem stated by Lance F.
I am mostly familiar with BLE use in medical devices, so I briefly sketch the SIG defined Glucose Profile as an example to draw some analogies with your problem.
Let's imagine a server device which has the Glucose Service with 2 defined characteristics: Glucose Measurement and Glucose Measurement Context. A client can subscribe for notifications of either or both of these characteristics. Later, the client device can change its subscriptions by simply writing to the Client Characteristic Configuration Descriptor of the corresponding characteristic.
Server also has a special mandatory characteristic - Record Access Control Point (RACP), which is used by a client to retrieve or update glucose measurement history.
If a client wants to get a number of stored history records it writes to the RACP { OpCode: 4 (Report number of stored records), Operator: 1 (All records) }. Then a server sends an indication from the RACP { OpCode: 5 (Number of stored records response), Operator: 0 (Null), Operand: 17 (some number) }.
If a client wants to get any specific records it writes to the RACP { OpCode: 1 (Report stored records), Operator: 4 (Within range of, inclusive), Operand: [13, 14] (for example the records 13 and 14) }. In response a server sends requested records one by one as notifications of the Glucose Measurement and Glucose Measurement Context characteristics, and then sends an indication from the RACP characteristic to report a status of the operation.
So Glucose Measurement and Glucose Measurement Context are your Mode and Rate characteristics, and you also need one more control characteristic - an analog of the RACP. Now you need to define a number of codes, operators, and operands. Create whichever structure suits you best, for example: Code: 1 - update, Operator: 1 - Mode only, Operand: the actual value. A client writes it to the control-point characteristic. The server is notified on write, interprets it, and acts in the way defined by your custom profile.
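As a concrete sketch of that last step, such a custom control-point write could be packed like this; the 4-byte layout and code values are assumptions of this example, not from any SIG spec:

```python
import struct

# Hypothetical encoding of a custom control-point write modeled on the
# RACP pattern: one opcode byte, one operator byte, and a uint16 operand,
# little-endian.
def control_point_write(opcode: int, operator: int, operand: int) -> bytes:
    return struct.pack("<BBH", opcode, operator, operand)

# "Code 1 - update, Operator 1 - Mode only, Operand: the new value"
payload = control_point_write(1, 1, 3)  # -> b"\x01\x01\x03\x00"
```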

Time Stamped Data

Our edge device has an inbuilt data-logging function which logs data at regular intervals. If a connection to the cloud is lost for a period of time, then the next time it connects it will upload data from its internal data-log memory. In this case the sample is sent with a timestamp of when the data was logged, which is obviously different from the time it is received by the cloud.
The time stamp is sent in a standard format as shown by the packet below.
{"d": { "Ch_1": 37.4,"Ch_2": 37.1,"Ch_3": 3276.7,"Ch_4": 3276.7},"bt": "2016-09-19T14:35:00.00+12:00"}
where "bt" is the name for the base time of the sample. Looking at the property details in the schemas, I can set the data type to a string, but how would I get this data recognized as a date/time stamp and stored accordingly?
Is there a way of doing this?
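One workaround while the schema treats "bt" as a string is to parse it on the consuming side; a small sketch using the packet above:

```python
import json
from datetime import datetime

# Parse the "bt" base-time string into a timezone-aware datetime.
# strptime's "%z" understands the +12:00 offset, so the result can be
# stored as a real timestamp rather than a plain string.
packet = '{"d": {"Ch_1": 37.4, "Ch_2": 37.1}, "bt": "2016-09-19T14:35:00.00+12:00"}'
msg = json.loads(packet)
bt = datetime.strptime(msg["bt"], "%Y-%m-%dT%H:%M:%S.%f%z")
```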

What is the proper way to communicate data with units in a REST payload?

I am working on a REST API that reports numbers for different items, each in a unit. For example, storage capacity. I have a media type which defines this payload, but I feel that the REST API should not be in the business of formatting the numbers; they should all be in a base unit, say bits. It's up to the UI to format that as it wishes.
My question is - should the data "hint" at the units, say:
{ name: "capacity", number: 33333, units: "bits" }
on every piece of data to make it explicit, or is that something that has to be inferred through documentation?
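The explicit-units variant makes the contract self-describing and lets a client format defensively; a sketch of the client side, assuming the payload carries the units field (field and unit names are illustrative):

```python
# Client-side formatting of a base-unit payload: the API always reports
# bits, and the UI scales for display. Names are illustrative only.
def format_capacity(item: dict) -> str:
    assert item["units"] == "bits"  # fail loudly if the contract changes
    value = float(item["number"])
    for unit in ("bits", "Kib", "Mib", "Gib"):
        if value < 1024:
            return f"{value:.1f} {unit}"
        value /= 1024
    return f"{value:.1f} Tib"

payload = {"name": "capacity", "number": 33333, "units": "bits"}
# format_capacity(payload) -> "32.6 Kib"
```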