I am trying to write a REST API client that polls the ServiceNow database every 10 minutes to fetch data.
Below is the URL that I built:
"https://servicenowinstance.com/api/now/table/employee_table?sysparm_limit=200&sysparm_offset=0&sysparm_query=sys_created_onBETWEENjavascript:gs.dateGenerate('2018-02-28','14:23:40')#javascript:gs.dateGenerate('2018-02-28','15:17:04')^ORDERBYsys_created_on".
After implementing pagination I am starting the incremental load, where I poll every 10 minutes to get the new data. The BETWEEN clause in the URL above ensures I only receive the data that falls inside that time window.
My question: the VM I use keeps UTC time, and I am not sure which timezone the ServiceNow tables use to store the data.
In short: what timezone does ServiceNow use to store its sys_created_on field? Is it UTC or something different?
The database stores dates and times as UTC (KB0534905), but depending on how you pull data via REST, it may be returned in the timezone of the user account being used for authentication.
Take a look at Table API GET, in particular the sysparm_display_value parameter:
Data retrieval operation for reference and choice fields. Based on this value, retrieves the display value and/or the actual value from the database.
true returns display values for all fields.
false returns actual values from the database. If a value is not specified, this parameter defaults to false.
all returns both actual and display values.
In your case, since you're not setting that parameter, the values should be in UTC.
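For reference, a minimal sketch (Node.js; the host, table, and function names are illustrative, and the '#' separator is simply copied from the URL above) of building each 10-minute window in UTC, which matches how sys_created_on is stored:

const base = 'https://servicenowinstance.com/api/now/table/employee_table';

// Split a Date into the 'YYYY-MM-DD' and 'HH:mm:ss' parts that
// gs.dateGenerate() expects; toISOString() always yields UTC.
function toGlideParts(d) {
  const iso = d.toISOString(); // e.g. 2018-02-28T14:23:40.000Z
  return [iso.slice(0, 10), iso.slice(11, 19)];
}

// Build the URL for one polling window [windowStart, windowEnd].
function buildUrl(windowStart, windowEnd, offset = 0) {
  const [d1, t1] = toGlideParts(windowStart);
  const [d2, t2] = toGlideParts(windowEnd);
  const query =
    "sys_created_onBETWEENjavascript:gs.dateGenerate('" + d1 + "','" + t1 + "')" +
    "#javascript:gs.dateGenerate('" + d2 + "','" + t2 + "')^ORDERBYsys_created_on";
  // encodeURIComponent keeps the '#' from being read as a URL fragment
  return base + '?sysparm_limit=200&sysparm_offset=' + offset +
    '&sysparm_query=' + encodeURIComponent(query);
}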
We save all our datetime data in the database in UTC (a timestamp with time zone column in PostgreSQL).
Assuming "America/Sao_Paulo" timezone, if a user saves an event "A" to the database at 2021-08-24 22:00:00 (local time) this will be converted to UTC and saved as 2021-08-25 01:00:00.
So we are wondering what would be the best way (here "the best way" refers to the developer experience) to consume an API where it is possible to filter events by start and end date.
Imagine the following situation: the user is on the website and needs to generate a report with all events that happened on 2021-08-24 (local time, America/Sao_Paulo). For this, the user fills in both the start and the end date with 2021-08-24.
If the website forwards this request directly to the API, the server will receive the same date provided by the user, and several outcomes are possible:
If the server does not apply any transformation at all, the data returned will not contain event "A", which is wrong from the user's perspective.
The server can assume the date is in UTC, expand the start date to 2021-08-24 00:00:00 and the end date to 2021-08-24 23:59:59, and then apply the user's timezone, producing 2021-08-24 03:00:00 and 2021-08-25 02:59:59. Filtering the database with this range would return the expected event "A" (see the sketch after this list).
The API itself could expect start and end datetimes in UTC. This way, the developer applies the user's timezone on the client side and then forwards the result to the server (2021-08-24T03:00:00Z and 2021-08-25T02:59:59Z).
The API itself could expect start and end datetimes either in UTC or with an explicit offset (2021-08-24T00:00:00-03:00 and 2021-08-24T23:59:59-03:00). GitHub does it this way.
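A minimal sketch of the server-side expansion from outcome 2, assuming the client also sends its IANA timezone name and that a timezone-aware library such as Luxon is available (all names here are illustrative, not part of any existing API):

const { DateTime } = require('luxon');

// Interpret plain dates in the client's zone, then convert to a UTC range.
function toUtcRange(startDate, endDate, zone) {
  const start = DateTime.fromISO(startDate + 'T00:00:00', { zone }).toUTC();
  const end = DateTime.fromISO(endDate + 'T23:59:59', { zone }).toUTC();
  return { start: start.toISO(), end: end.toISO() };
}

// toUtcRange('2021-08-24', '2021-08-24', 'America/Sao_Paulo')
// -> { start: '2021-08-24T03:00:00.000Z', end: '2021-08-25T02:59:59.000Z' }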
What got us thinking is that a lot of APIs accept only a date part in a range filter (like the GitHub API). So, are those APIs filtering the data in the client's timezone, or do they assume the client knows the equivalent UTC date to filter by? (We could not find any documentation explaining how GitHub handles an incoming date-only filter.)
To us, it makes more sense for the date filter to honor the client's timezone rather than leaving clients the burden of knowing the equivalent UTC datetime of the saved event. But this complicates the filtering logic a bit.
To simplify the filtering logic, we thought that adding another column to the database to also save the local datetime of the event (or only the local date) might be interesting. Is this a valid approach? Do you know of any drawbacks?
*We know that from a database perspective it is recommended to save datetimes in UTC (not always, as shown here), but in our case this seems to only make things more difficult when handling API consumption.
*It is important to know that, when saving an event, the user cannot provide when it happened; we always assume the event happens at the moment it is being saved.
I have a bunch of historical data (CSV) which I want to make accessible through sth-comet. The data is the history of water levels from multiple rivers. It is not provided live, but more or less on a daily basis, and each delivery contains all the historic records for multiple days.
What I did so far was:
Convert the data into an NGSIv2 data model with a dateObserved (DateTime) and a waterlevel (Number) field
Update/append the data into FIWARE Orion
Create a subscription for sth-comet for the entity type
Access the historical data in sth-comet (wrong time)
With this I now have the problem that "rcvTime" is, of course, the time when sth-comet received the data. Is there a way to "overwrite" that attribute, or is there a better solution? I also looked at Cygnus for inserting the data, but I think the underlying problem is the same.
I could not find any hint in the available documentation.
If you use the Cygnus NGSIMongoSink or NGSISthSink, you can attach TimeInstant metadata to attributes to override the reception time with the time given in the metadata value.
Have a look at the NGSIMongoSink documentation:
By default, NGSIMongoSink stores the notification reception timestamp. Nevertheless, if (and only if) working in row mode and a metadata named TimeInstant is notified, then such metadata value is used instead of the reception timestamp. This is useful when wanting to persist a measure generation time (which is thus notified as a TimeInstant metadata) instead of the reception time.
or at this similar fragment in the NGSISTHSink documentation:
By default, NGSISTHSink stores the notification reception timestamp. Nevertheless, if a metadata named TimeInstant is notified, then such metadata value is used instead of the reception timestamp. This is useful when wanting to persist a measure generation time (which is thus notified as a TimeInstant metadata) instead of the reception time.
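A minimal sketch of what such an entity could look like, using the attribute names from the question (the entity id and type are illustrative). The TimeInstant metadata carries the historic measurement time that the sink should persist instead of the reception time:

const entity = {
  id: 'WaterLevel:river-001',    // illustrative id
  type: 'WaterLevelMeasurement', // illustrative type
  dateObserved: { type: 'DateTime', value: '2018-01-25T14:23:40.000Z' },
  waterlevel: {
    type: 'Number',
    value: 2.37,
    metadata: {
      // Cygnus persists this value instead of the notification reception time
      TimeInstant: { type: 'DateTime', value: '2018-01-25T14:23:40.000Z' }
    }
  }
};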
I am trying to send two separate sets of data from the same collection from server to client. Data is being inserted into the collection on a set interval of 30 seconds. One set of data sent to the client must return all documents over the course of the current day on an hourly basis, while the other set simply sends the most recent entry in the collection. I have a graph that needs to display hourly data, as well as fields that need to display the most recent record every 30 seconds; however, I cannot seem to decouple these two data sets. The query for the most recent entry always seems to overwrite the query for the hourly data when I access the data on the client. So my question, summed up, is: how does one send two separate sets of data from the same collection from server to client, and then access these two sets independently on the client?
The answer is simple: you cannot!
The server always answers the client with the result set the client asked for. So if the client needs two separate (different) result sets, the client must fire two different queries: one requesting the hourly data and one requesting the last (newest) entry.
Use added, changed, removed to modify the results from the two queries so that they are "transformed" into different fields. https://docs.meteor.com/api/pubsub.html#Subscription-added
However, this is probably not your issue. You are almost certainly using the same string as the name argument of your Meteor.publish calls, or you are accidentally calling Meteor.subscribe on the same publication twice.
Make two separate Meteor.publish names, one for the most recent entry and one for the hourly data, and subscribe to each of them separately. The commenter is incorrect.
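A minimal sketch of that setup, with illustrative collection, field, and publication names. Note that both publications merge into the same client-side collection, so the client pulls the two sets apart by re-running equivalent queries locally:

// server
Meteor.publish('readings.latest', function () {
  return Readings.find({}, { sort: { createdAt: -1 }, limit: 1 }); // newest entry
});
Meteor.publish('readings.today', function () {
  const startOfDay = new Date();
  startOfDay.setHours(0, 0, 0, 0);
  return Readings.find({ createdAt: { $gte: startOfDay } }); // data for the graph
});

// client
Meteor.subscribe('readings.latest');
Meteor.subscribe('readings.today');
// Both subscriptions feed the same local collection, so query it again
// to separate the two sets:
const latest = Readings.findOne({}, { sort: { createdAt: -1 } });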
I'm pretty confused concerning this hip thing called NoSQL, especially CloudantDB by Bluemix. As you know, this DB doesn't store the values chronologically. It's the programmer's task to sort the entries in case he wants the data to.. well.. be sorted.
What I'm trying to achieve is to simply get the last, let's say, 100 values a sensor has sent to Watson IoT (which saves everything in the connected Cloudant DB) in an ORDERED way. In the end it would be nice to show them in a D3.js-style kind of graph, but that's another task. I first need the values in an ordered array.
What I tried so far: I used curl to get the data via PHP from https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_all_docs?limit=20&include_docs=true
What I get is an unsorted array of 20 row entries with random timestamps: the last 20 entries in the DB, but not in terms of timestamps.
My question is now: Do you know of a way to get the "last" 20 entries? Sorted by timestamp? I did a POST request with a JSON string where I wanted the data to be sorted by the timestamp, but that doesn't work, maybe because of the ISO timestamp string.
Do I really have to write a JavaScript or PHP script to fetch ALL the database entries, look for the last 20 or 100 by parsing the timestamp, sort the array again, and then take the (now really) last entries? I can't believe that.
Many thanks in advance!
I finally found out how to get the data in a nicely ordered way. The key is to use the _design API together with the _view API.
So a curl request with the following URL / attributes and a query string did the job:
https://alphanumerical_something-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_design/iotp/_view/by-date?limit=120&q=name:%27timestamp%27
The curl result gets me the first (in terms of time) 120 entries. I just have to find out how to get the last entries, but that's already a pretty good result. I can now pass the data on to a nice JS chart and display it.
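For reference, a minimal sketch of what a design document backing such a view could look like (the design doc, view, and field names are illustrative; the query string in the URL above suggests the actual index may differ, so treat this as an assumption). Emitting the timestamp as the key makes the view return results time-sorted, and appending descending=true&limit=20 to the view URL returns the newest 20 entries:

{
  "_id": "_design/iotp",
  "views": {
    "by-date": {
      "map": "function (doc) { if (doc.timestamp) { emit(doc.timestamp, doc); } }"
    }
  }
}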
One option may be to include the timestamp as part of the ID. The _all_docs query returns documents in order by id.
If that approach does not work for you, you could look at creating a secondary index based on the timestamp field. One type of index is Cloudant Query:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#query
Cloudant Query allows you to specify a sort argument:
https://console.bluemix.net/docs/services/Cloudant/api/cloudant_query.html#sort-syntax
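A minimal sketch of such a query against the _find endpoint, assuming a JSON index exists on the timestamp field (the account and database names mirror the question's URL and are illustrative; authentication is omitted):

const body = {
  selector: { timestamp: { $gt: null } }, // match docs that have a timestamp
  sort: [{ timestamp: 'desc' }],          // newest first; needs an index on timestamp
  limit: 20
};

fetch('https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_find', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body)
})
  .then(res => res.json())
  .then(result => console.log(result.docs));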
Another approach that may be useful for you is the _changes api:
https://console.bluemix.net/docs/services/Cloudant/api/database.html#get-changes
The changes API allows you to receive a continuous feed of changes in your database. You could feed these changes into a D3 chart for example.
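A minimal sketch of following that feed with long polling (the URL and the updateChart hook are illustrative):

async function followChanges(since = 'now') {
  const url =
    'https://averylongID-bluemix.cloudant.com/iotp_orgID_iotdb_2018-01-25/_changes' +
    '?feed=longpoll&include_docs=true&since=' + since;
  const res = await fetch(url);
  const { results, last_seq } = await res.json();
  results.forEach(change => updateChart(change.doc)); // hand each doc to the chart
  return followChanges(last_seq); // resume from the last sequence number
}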
I am trying to save a date into Meteor's MongoDB. My challenge is as follows:
1) If I use new Date() it creates a date object in MongoDB, but it saves the time as local time: a JavaScript Date() always comes with a timezone offset (+0x hours) based on the browser's local timezone. When I retrieve this it causes havoc, as I am assuming everything in my DB is UTC.
2) I want to use the moment.js library, which is great because it can represent dates in UTC properly, but my challenge is how to get MongoDB to accept a moment time. The minute I use moment.format() it saves it as a string!
So how can I pass a date object that is in UTC to a MongoDB insert command? Strings just don't work :(
Any help would be appreciated.
Thanks
I think everything you need to know about both of these questions can be found here and here.
TLDR:
If you directly insert/update from the client you will store a timestamp based on the user's clock. It will still be stored as UTC, but you may or may not want to trust that the time is correct. I strongly suggest using a method for any db modifications which involve time so that the server's version of time will always be used.
Moment objects are not serializable to a format compatible with MongoDB. Use a Date object and format it on the client.
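A minimal sketch of the method approach, with illustrative method and collection names. The stored value stays a Date (which MongoDB keeps as UTC milliseconds); moment is only used for display:

// server (and client stub)
Meteor.methods({
  'messages.insert'(text) {
    Messages.insert({
      text: text,
      createdAt: new Date() // stamped with the server's clock when run on the server
    });
  }
});

// client: format only when displaying, e.g.
// moment(message.createdAt).utc().format('YYYY-MM-DD HH:mm:ss');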
The problem with saving dates on the client is that each client can have a different timezone, or even the wrong time set. Thus the only solution is to have the date set on the server. Using a method for each insert / update is not an elegant solution.
A common practice is to modify the document inside an allow or deny callback:
Messages.allow({
  insert: function (userId, doc) {
    ...
    doc.timestamp = new Date(); // stamped on the server, stored as UTC
    return true;
  },
});
That way you ensure all documents have a consistent timestamp, and you can use the usual db methods on the client.
The Meteor community recently started an extensive document about how to use dates and times. You'll find a lot of useful information there, in addition to David Weldon's links:
https://meteor.hackpad.com/Meteor-Cookbook-Using-Dates-and-Times-qSQCGFc06gH
However, I particularly recommend https://github.com/mizzao/meteor-timesync when security is not a concern. It lets you obtain an accurate server time locally on the client, even if the client's clock is way off, without a round-trip to the server. This can be useful for all kinds of reasons - in my apps, I universally use server-relative time and don't care what the client's time is at all.
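A minimal sketch of using it, assuming the mizzao:timesync package is installed (the template name is illustrative):

Template.clock.helpers({
  serverNow() {
    // TimeSync.serverTime() is reactive and, per the package README,
    // returns the estimated server time as a millisecond timestamp.
    return new Date(TimeSync.serverTime());
  }
});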