Our edge device has a built-in data logging function which logs data at regular intervals. If the connection to the cloud is lost for a period of time, then the next time it connects it uploads the data from its internal data log memory. In this case each sample is sent with a timestamp of when the data was logged, which is obviously different from the time it is received by the cloud.
The timestamp is sent in a standard format, as shown in the packet below.
{"d": { "Ch_1": 37.4,"Ch_2": 37.1,"Ch_3": 3276.7,"Ch_4": 3276.7},"bt": "2016-09-19T14:35:00.00+12:00"}
where "bt" is name for the base time of the sample. Looking at the property details in the schemas, I can set the data type to a string type but how would I get this data to be recognized as a date/time stamp and store this data accordingly?
Is there a way of doing this?
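For context, the "bt" value is a standard ISO 8601 string with a UTC offset, so most platforms can parse it natively once it reaches the consumer. A minimal sketch, assuming a TypeScript consumer (the packet shape is taken from the question; nothing here is specific to any particular cloud platform):

    // Parse the logged packet and recover the base time as a native Date.
    // The ISO 8601 offset (+12:00) is honoured by the Date constructor;
    // most engines also accept the two-digit fractional seconds.
    interface LoggedPacket {
      d: Record<string, number>; // channel readings, e.g. Ch_1 .. Ch_4
      bt: string;                // base time of the sample, ISO 8601
    }

    const raw = '{"d": {"Ch_1": 37.4, "Ch_2": 37.1, "Ch_3": 3276.7, "Ch_4": 3276.7}, "bt": "2016-09-19T14:35:00.00+12:00"}';
    const packet: LoggedPacket = JSON.parse(raw);

    const loggedAt = new Date(packet.bt); // parsed as an absolute instant
    console.log(loggedAt.toISOString());  // 2016-09-19T02:35:00.000Z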
We save all our datetime data in the database in UTC (a timestamp with time zone column in PostgreSQL).
Assuming "America/Sao_Paulo" timezone, if a user saves an event "A" to the database at 2021-08-24 22:00:00 (local time) this will be converted to UTC and saved as 2021-08-25 01:00:00.
So we are wondering what would be the best way ("best" here referring to developer experience) to consume an API where it is possible to filter events by start and end date.
Imagine the following situation: the user is on the website and needs to generate a report with all events that happened on 2021-08-24 (local time, America/Sao_Paulo). For this, the user fills in both the start and end date with 2021-08-24.
If the website forwards this request directly to the API, the server will receive the same date provided by the user, and several outcomes are possible:
If the server does not apply any transformation at all, the data returned will not contain event "A", which from the user's perspective is wrong.
The server can assume that the date is in UTC, expand the start date to 2021-08-24 00:00:00 and the end date to 2021-08-24 23:59:59, then apply the user's timezone, generating 2021-08-24 03:00:00 and 2021-08-25 02:59:59. Filtering the database now would return the expected event "A".
The API itself could expect a start and end datetime in UTC. This way, the developer can apply the user's timezone on the client side and then forward the converted values to the server (2021-08-24T03:00:00Z and 2021-08-25T02:59:59Z).
The API itself could expect a start and end datetime either in UTC or with an explicit offset (2021-08-24T00:00:00-03:00 and 2021-08-24T23:59:59-03:00). GitHub does it this way. (A client-side conversion sketch follows below.)
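To make the client-side conversion concrete, here is a minimal sketch assuming a fixed -03:00 offset, as in the example above (a real implementation would resolve America/Sao_Paulo with a timezone library, since the offset can change with DST):

    // Expand a local calendar date into a UTC range for the API filter.
    function localDateToUtcRange(date: string, offset = "-03:00"): [string, string] {
      const start = new Date(`${date}T00:00:00${offset}`);
      const end = new Date(`${date}T23:59:59.999${offset}`);
      return [start.toISOString(), end.toISOString()];
    }

    const [from, to] = localDateToUtcRange("2021-08-24");
    console.log(from, to); // 2021-08-24T03:00:00.000Z 2021-08-25T02:59:59.999Z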
What got us thinking is that a lot of APIs accept only a date part in a range filter (like the GitHub API). So, are those APIs filtering the data in the client's timezone, or do they assume the client knows the equivalent UTC date that they should filter by? (We could not find any documentation that explains how GitHub deals with an incoming date-only filter.)
To us, it makes more sense for the date filter to consider the client's timezone rather than leaving clients the burden of knowing the equivalent UTC datetime of the saved event. But this complicates the filtering logic a bit.
To simplify the filter logic, we thought that adding another column to the database to also save the local datetime of the event (or only the local date) might be interesting. Is this a valid approach? Do you know of any drawbacks?
*We know that from a database perspective it is recommended to save datetimes in UTC (not always, as shown here), but in our case this seems to only make things more difficult when handling API consumption.
*It is important to know that, when saving an event, the user cannot provide when it happened; we always assume the event happens at the moment it is saved.
I am receiving a handful of errors regarding the event timestamp. I have confirmed that I am sending a UNIX timestamp in seconds. This is what I have implemented in the front-end of our code to get the UNIX timestamp in seconds: Math.round(Date.now() / 1000)
Also, it looks like less than 1% of the events created are affected, which is why I'm a bit confused and not sure how to resolve these errors.
Error message:
The timestamp for the InitiateCheckout events sent from your server is in the future. Timestamps are metadata sent alongside each event you send from your server and they represent the time of day when an event actually occurred. For example: the time that a customer made a purchase on your website. All timestamps should represent a point in time that occurred within the last 7 days.
Has anyone encountered this type of error? If so, any advice on how to resolve it? I am not sure how to go about this.
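For what it's worth, one common cause of this exact message is values that land slightly ahead of the server's clock: Math.round() rounds a fractional second up, and a client device clock may simply run fast. A hedged sketch of a guard, assuming the events pass through your own server before being forwarded (the function name is mine):

    // Guard against "timestamp in the future" rejections: floor instead of
    // round, and clamp any client-supplied value to the server's current time.
    function sanitizeEventTime(clientTs: number): number {
      const now = Math.floor(Date.now() / 1000); // server clock, whole seconds
      return Math.min(clientTs, now);            // pull future values back to now
    }

    // Example: a client clock running 3 seconds fast gets clamped to server time.
    const clientTs = Math.floor(Date.now() / 1000) + 3;
    console.log(sanitizeEventTime(clientTs));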
I have a bunch of historical data (CSV) which I want to make accessible through sth-comet. The data is the history of water levels from multiple rivers. The data is not provided live, but more or less on a daily basis, and contains all the historical records for multiple days.
What I did so far was:
Convert the data into an NGSIv2 data model with dateObserved: DateTime and waterlevel: Number fields (a sample entity follows this list)
Update/append the data into FIWARE Orion
Create a subscription for sth-comet for the entity type
Access the historical data in sth-comet (which shows the wrong time)
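For reference, a minimal sketch of one converted record from step 1; the entity id, type, and values are illustrative placeholders, only the two attribute names come from the list above:

    // One historical water-level record in NGSIv2 form (placeholders except
    // for the dateObserved and waterlevel attribute names).
    const entity = {
      id: "WaterLevel:River001",
      type: "WaterLevelObserved",
      dateObserved: { type: "DateTime", value: "2021-03-14T06:00:00.000Z" },
      waterlevel: { type: "Number", value: 3.7 },
    };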
With this, I now have the problem that the "rcvTime" is of course the time when sth-comet received the data. Is there a way that I can "overwrite" that attribute, or is there a better solution? I also looked at Cygnus for inserting data, but I think the underlying problem is the same.
I could not find any hint in the available documentation.
In the case of using the Cygnus NGSIMongoSink and NGSISthSink, you can use TimeInstant metadata in attributes to override the reception time with the time given in the metadata value.
Have a look at the NGSIMongoSink documentation:
By default, NGSIMongoSink stores the notification reception timestamp. Nevertheless, if (and only if) working in row mode and a metadata named TimeInstant is notified, then such metadata value is used instead of the reception timestamp. This is useful when wanting to persist a measure generation time (which is thus notified as a TimeInstant metadata) instead of the reception time.
or this similar fragment in the NGSISTHSink documentation:
By default, NGSISTHSink stores the notification reception timestamp. Nevertheless, if a metadata named TimeInstant is notified, then such metadata value is used instead of the reception timestamp. This is useful when wanting to persist a measure generation time (which is thus notified as a TimeInstant metadata) instead of the reception time.
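Putting that together with a data model like the one sketched in the question, an attribute update carrying TimeInstant metadata might look as follows, so the sinks persist the historical measurement time instead of the reception time (entity id and values are again placeholders):

    // Attribute payload for e.g. POST /v2/entities/WaterLevel:River001/attrs
    // on Orion. The TimeInstant metadata repeats the historical observation
    // time, which NGSIMongoSink/NGSISthSink use instead of the notification time.
    const attrs = {
      waterlevel: {
        type: "Number",
        value: 3.7,
        metadata: {
          TimeInstant: { type: "DateTime", value: "2021-03-14T06:00:00.000Z" },
        },
      },
    };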
I am trying to write a REST API client where the ServiceNow database will be polled every 10 minutes to get the data.
Below is the URL that I built:
"https://servicenowinstance.com/api/now/table/employee_table?sysparm_limit=200&sysparm_offset=0&sysparm_query=sys_created_onBETWEENjavascript:gs.dateGenerate('2018-02-28','14:23:40')#javascript:gs.dateGenerate('2018-02-28','15:17:04')^ORDERBYsys_created_on".
After implementing pagination, I am starting the incremental load, where I poll every 10 minutes to get the new data. The BETWEEN condition in the URL above means I get only the data whose sys_created_on falls in that range.
My question concerns timezones: the VM I use maintains UTC time, and I am not sure which timezone the ServiceNow tables use to store the data.
In short: what timezone does ServiceNow use to store its sys_created_on field? Is it the same as UTC or is it different?
The database stores dates and times as UTC (KB0534905), but depending on how you pull data via REST, it may return in the timezone of the user account being used for authentication.
Take a look at Table API GET, in particular the sysparm_display_value parameter.
Data retrieval operation for reference and choice fields. Based on this value, retrieves the display value and/or the actual value from the database.
true returns display values for all fields.
false returns actual values from the database. If a value is not specified, this parameter defaults to false.
all returns both actual and display values.
In your case, since you're not setting that parameter, it should be in UTC.
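Since both sys_created_on and the polling VM are on UTC, the incremental windows can be generated directly from the system clock. A hedged sketch (instance and table names are taken from the question; the @ separator between the two BETWEEN bounds follows ServiceNow's encoded-query syntax):

    // Build the polling URL for one 10-minute window, formatting both bounds
    // in UTC for gs.dateGenerate().
    function buildPollUrl(windowStart: Date, windowEnd: Date, offset = 0): string {
      const fmt = (d: Date) => {
        const [date, time] = d.toISOString().split("T"); // UTC date and time
        return `javascript:gs.dateGenerate('${date}','${time.slice(0, 8)}')`;
      };
      const query =
        `sys_created_onBETWEEN${fmt(windowStart)}@${fmt(windowEnd)}` +
        `^ORDERBYsys_created_on`;
      return (
        "https://servicenowinstance.com/api/now/table/employee_table" +
        `?sysparm_limit=200&sysparm_offset=${offset}` +
        `&sysparm_query=${encodeURIComponent(query)}`
      );
    }

    const end = new Date();
    const start = new Date(end.getTime() - 10 * 60 * 1000); // previous 10 minutes
    console.log(buildPollUrl(start, end));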
I'm trying to figure out how I can raise a notification when a new value is inserted into my InfluxDB and send a notification to an HTTP endpoint with the data of the newly inserted measurement sample. I'm not sure if this is the goal of Kapacitor (I'm new to the TICK stack) or if it's better to use another tool (any suggestion is welcome).
Thanks in advance.
Best regards,
Albert.
In Kapacitor there are two types of tasks, namely batch and stream. The former is meant for processing historical data, while stream is for real-time processing.
Looking at your requirement, I think it is obvious that stream is the way to go, as it will enable you to watch data from an InfluxDB measurement in real time. For invoking an endpoint in a TICK script you can use the HttpPostNode node.
Example:
    // Watch the measurement in real time, collecting points into
    // 10-second windows.
    var data = stream
        |from()
            .database('myInfluxDB')
            .retentionPolicy('autogen')
            .measurement('measurement_ABCD')
        |window()
            .period(10s)
            .every(10s)

    // POST each completed window to the defined endpoint.
    data
        |httpPost('http://your.service.url/api/endpoint_xyz')
In this instance the TICK script will watch for newly inserted data on the measurement measurement_ABCD, collect it for a window period of 10 seconds, then do an HTTP POST to the defined endpoint, and this entire process repeats every 10 seconds.
That is, you have a tumbling 10-second window: since period and every are equal, consecutive windows do not overlap.
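For completeness, a minimal sketch of the receiving side, assuming a Node/TypeScript service: Kapacitor POSTs the window's points as a JSON body, so a plain HTTP handler is enough to inspect them (the port and path are placeholders matching the URL above):

    // Tiny HTTP endpoint that logs whatever Kapacitor POSTs to it.
    import * as http from "http";

    http
      .createServer((req, res) => {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          console.log("window from Kapacitor:", body); // the posted points as JSON
          res.writeHead(200);
          res.end();
        });
      })
      .listen(8080); // e.g. http://your.service.url:8080/api/endpoint_xyz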
Reference:
https://docs.influxdata.com/kapacitor/v1.3/nodes/http_post_node/