PostgreSQL REST API Pagination

I'm following Pagination Done the Right Way, which orders by date for news.
I want to order by created_at (a timestamp) for posts (like Facebook posts).
According to PostgreSQL Date/Time Types, timestamp has a resolution of 1 microsecond.
The API's clients, however, only need (to display) whole seconds.
So, should I just round created_at to whole seconds (with CURRENT_TIMESTAMP(0)) by default when inserting new posts?
That way, to get the next page, the client can simply send back to the REST API server the created_at timestamp (in whole seconds) of the last post it received.
Otherwise, the client would have to know the exact created_at timestamp (down to the microseconds) of the last post it received.
Is there any reason to store the exact microseconds in the database (especially if they're never sent to the clients)? Isn't a whole second enough precision for something like Facebook (or Instagram) posts?
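For reference, this is roughly the kind of keyset query I have in mind; a sketch in Python/psycopg2, assuming a posts table with an id primary key used as a tie-breaker (table and column names are illustrative, not from the article):

import psycopg2

def next_page(conn, last_created_at=None, last_id=None, page_size=20):
    """Fetch the next page of posts using keyset pagination.

    created_at alone (even at microsecond precision) is not a reliable
    cursor, because two posts can share a timestamp; pairing it with the
    primary key makes the ordering total.
    """
    with conn.cursor() as cur:
        if last_created_at is None:
            # First page: just the newest posts.
            cur.execute(
                """SELECT id, created_at, body
                   FROM posts
                   ORDER BY created_at DESC, id DESC
                   LIMIT %s""",
                (page_size,))
        else:
            # Row-value comparison keeps the query index-friendly in PostgreSQL.
            cur.execute(
                """SELECT id, created_at, body
                   FROM posts
                   WHERE (created_at, id) < (%s, %s)
                   ORDER BY created_at DESC, id DESC
                   LIMIT %s""",
                (last_created_at, last_id, page_size))
        return cur.fetchall()

With a (created_at, id) cursor like this, the stored precision matters much less for paging, since the id breaks any ties at whole-second resolution.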

Related

XRPL: How to get the history of the balance of an account?

I would like to query the history of the balance of an XRPL account with the new WebSocket API.
For example, how do I check the balance of an account on a particular day?
I know with the v2 api, there was a possibility to query balance_changes. But this doesn't seem to be part of the new version.
For example:
https://data.ripple.com/v2/accounts/rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn/balance_changes?start=2018-01-01T00:00:00Z
How is this done with the new WebSocket APIs?
There's no single convenient call in the WebSocket API that gets this. I assume you want the XRP balance, not token/issued currency balances, which are stored in a different place.
One way to go about it is to make an account_tx call and then iterate through the metadata. Many, but not all, transactions will have a ModifiedNode entry of type AccountRoot; if that transaction changed the account's XRP balance, you can see the difference in the PreviousFields vs. FinalFields for that entry. The Look Up Transaction Results tutorial has some details on how to parse out metadata this way. There are some tricky edge cases here: for example, if you send a transaction that buys 10 drops of XRP in the exchange but burns 10 drops of XRP as a transaction cost, the metadata won't show a balance change because the net change was zero (+10, -10).
Another approach could be to estimate which ledger_index was most recently closed at a given time, then use account_info to look up the account's balance as of that ledger. The hard part is figuring out what the latest ledger index was at a given time. This is one of the places where the Data API was just more convenient than the WebSocket API: there's no way to look up by date over WebSocket, so you have to try a ledger index, check the close time of that ledger, try another ledger index, check its date, and so on.
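For anyone who wants to script the account_tx approach, here's a rough sketch in Python with the websockets package. It is not an official recipe; it ignores response paging via markers and the zero-net-change edge case mentioned above:

import asyncio, json
import websockets  # pip install websockets

async def xrp_balance_changes(account, url="wss://s1.ripple.com"):
    """Collect per-transaction XRP balance changes for an account by
    scanning AccountRoot modifications in account_tx metadata."""
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({
            "command": "account_tx",
            "account": account,
            "ledger_index_min": -1,
            "ledger_index_max": -1,
            "limit": 200,
        }))
        response = json.loads(await ws.recv())
        changes = []
        for tx in response.get("result", {}).get("transactions", []):
            for node in tx.get("meta", {}).get("AffectedNodes", []):
                mod = node.get("ModifiedNode", {})
                if mod.get("LedgerEntryType") != "AccountRoot":
                    continue
                final = mod.get("FinalFields", {})
                prev = mod.get("PreviousFields", {})
                if final.get("Account") == account and "Balance" in prev:
                    # Difference is in drops (1 XRP = 1,000,000 drops).
                    delta = int(final["Balance"]) - int(prev["Balance"])
                    changes.append((tx["tx"]["date"], delta))
        return changes

# asyncio.run(xrp_balance_changes("rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn"))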

How to get Height data given during registration from Googlefit REST Apis

I am new to the Google Fit REST APIs and have been trying to calculate a user's BMI for my project.
So, when the user authorizes my application, I fetch data for the last 30 days (historical data for some calculations). From then on I extract data periodically, so I always have the latest weight data.
But the issue arises when the user has been using Google Fit since well before authorizing, because I wouldn't know the exact date to pass as a parameter to extract height data.
The aggregate request accepts a date range of at most 90 days, otherwise it returns the error "aggregate duration too large". But the user's height data could be as old as 2 or 3 years.
I also looked at the option of custom data types, but there again I don't know the exact date range to make an API call.
Is there any way to get the user's joining date (the date he/she registered with Google Fit), since a user has to provide height during registration, or to somehow get the data under About you (Gender, DoB, Weight, Height) in the Profile tab?
Or could I get a calculated BMI data point directly from the API?
Any help would be highly appreciated.
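Not a definitive answer, but one workaround to try (a sketch, not verified against every account): skip aggregation and read the raw height data source over a very wide window with the datasets.get endpoint, which is not subject to the 90-day aggregate limit. The merged-height data source ID below is an assumption; list the user's dataSources first if it isn't present.

import time
import requests  # pip install requests

# Assumed merged height source; confirm via GET .../users/me/dataSources first.
HEIGHT_SOURCE = "derived:com.google.height:com.google.android.gms:merge_height"
BASE = "https://www.googleapis.com/fitness/v1/users/me"

def latest_height(access_token):
    """Fetch all stored height points (epoch 0 to now, in nanoseconds)
    and return the most recent one, which should include the value the
    user entered at registration if it is still stored."""
    start_ns = 0
    end_ns = int(time.time() * 1e9)
    resp = requests.get(
        f"{BASE}/dataSources/{HEIGHT_SOURCE}/datasets/{start_ns}-{end_ns}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    resp.raise_for_status()
    points = resp.json().get("point", [])
    if not points:
        return None
    latest = max(points, key=lambda p: int(p["endTimeNanos"]))
    return latest["value"][0]["fpVal"]  # height in metres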

What time does the Graph's Insights API refresh the attributes with day as the aggregation period?

For example,
https://developers.facebook.com/docs/graph-api/reference/v2.11/insights
For the aggregation period 'day', does anyone know at what time Facebook refreshes that value?
I remember reading it to be around 8am, but I can't remember if it was accurate or where I read it.
When you check the end_time returned inside the values (Graph API Explorer request for me?fields=insights.metric(page_stories); supply your own page access token), for all three periods (day/week/days_28) it is of the form
2018-02-19T08:00:00+0000
Same time portion in each case.
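If you want to verify this yourself rather than rely on memory, you can dump the end_time of each returned value; a quick sketch in Python, using the same v2.11 request as above (the page access token is a placeholder):

import requests

def insight_end_times(page_access_token, version="v2.11"):
    """Print period and end_time for each insights value so the daily
    refresh boundary (08:00 UTC per the answer above) can be checked."""
    resp = requests.get(
        f"https://graph.facebook.com/{version}/me",
        params={
            "fields": "insights.metric(page_stories)",
            "access_token": page_access_token,
        },
    )
    resp.raise_for_status()
    for entry in resp.json()["insights"]["data"]:
        for value in entry["values"]:
            print(entry["period"], value["end_time"])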

Get inbox messages from a date onwards

Using the Graph API Explorer, and using GET /me/inbox, I can get a list of messages.
I was wondering how to limit them to messages from the past day, for example?
You can use time-based paging this way:
me/inbox?since=1372395600
It relies on the updated_time (Unix timestamp) parameter of an inbox thread. This way you can get all the threads updated with a message since yesterday, for example.
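For "the past day" specifically, since is just a Unix timestamp 24 hours back; a small sketch in Python (the access token is a placeholder):

import time
import requests

def inbox_since_yesterday(access_token):
    """Request inbox threads updated in the last 24 hours using
    time-based paging on updated_time."""
    since = int(time.time()) - 24 * 60 * 60  # Unix timestamp, one day ago
    resp = requests.get(
        "https://graph.facebook.com/me/inbox",
        params={"since": since, "access_token": access_token},
    )
    resp.raise_for_status()
    return resp.json()["data"]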

How to implement robust pagination with a RESTful API when the resultset can change?

I'm implementing a RESTful API which exposes Orders as a resource and supports pagination through the resultset:
GET /orders?start=1&end=30
where the orders to paginate are sorted by ordered_at timestamp, descending. This is basically approach #1 from the SO question Pagination in a REST web application.
If the user requests the second page of orders (GET /orders?start=31&end=60), the server simply re-queries the orders table, sorts by ordered_at DESC again and returns the records in positions 31 to 60.
The problem I have is: what happens if the resultset changes (e.g. a new order is added) while the user is viewing the records? In the case of a new order being added, the user would see the old order #30 in first position on the second page of results (because the same order is now #31). Worse, in the case of a deletion, the user sees the old order #32 in first position on the second page (#31) and wouldn't see the old order #31 (now #30) at all.
I can't see a solution to this without somehow making the RESTful server stateful (urg) or building some pagination intelligence into each client... What are some established techniques for dealing with this?
For completeness: my back-end is implemented in Scala/Spray/Squeryl/Postgres; I'm building two front-end clients, one in backbone.js and the other in Python Django.
The way I'd do it is to number the items from old to new, so the indices never change. When the client queries without any start parameter, return the newest page. The response should also contain an index indicating which elements are contained, so the client can calculate the indices to request for the next, older page. While this is not exactly what you want, it seems like the easiest and cleanest solution to me.
Initial request: GET /orders?count=30 returns:
{
  "start": 1039,
  "count": 30,
  ...//data
}
From this the consumer calculates that he wants to request:
Next request: GET /orders?start=1009&count=30, which then returns:
{
  "start": 1009,
  "count": 30,
  ...//data
}
Instead of raw indices you could also return a link to the next page:
{
  "next": "/orders?start=1009&count=30"
}
This approach breaks if items get inserted or deleted in the middle. In that case you should use some auto incrementing persistent value instead of an index.
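A sketch of that last suggestion in Python/psycopg2, using the auto-incrementing primary key as the stable cursor (table and column names are assumptions based on the question):

import psycopg2

def orders_page(conn, before_id=None, count=30):
    """Return (rows, next_before_id), where next_before_id can be fed
    straight back in to fetch the next-older page. Because ids never
    shift, inserts and deletes elsewhere in the table can't make rows
    jump between pages."""
    with conn.cursor() as cur:
        if before_id is None:
            # First page: newest orders.
            cur.execute(
                "SELECT id, ordered_at FROM orders ORDER BY id DESC LIMIT %s",
                (count,))
        else:
            cur.execute(
                "SELECT id, ordered_at FROM orders WHERE id < %s "
                "ORDER BY id DESC LIMIT %s",
                (before_id, count))
        rows = cur.fetchall()
    next_before_id = rows[-1][0] if rows else None
    return rows, next_before_id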
The sad truth is that all the sites I see have pagination "broken" in that sense, so there must not be an easy way to achieve that.
A quick workaround could be reversing the ordering, so the position of the items is absolute and unchanging as new items are added. From your front page you can hand out the latest indices to ensure consistent navigation from there.
Pros: same url gives the same results
Cons: there's no evident way to get the latest elements... Maybe you could use negative indices and redirect the result page to the absolute indices.
With a RESTful API, application state should live in the client. Here the application state should be some sort of timestamp or version number telling when you started looking at the data. On the server side, you will need some form of audit trail, which is properly server data, as it does not depend on whether there have been clients and what they have done. At the very least, it should know when the data last changed. No contradiction with REST here.
You could add a version parameter to your GET. When the client first requests a page, it normally does not send a version; the server's reply contains one. For instance, if there are links in the reply to next/other pages, those links contain &version=... The client should send the version when requesting another page.
When the server receives a request with a version, it should at least know whether the data have changed since the client started looking and, depending on what sort of audit trail you have, how they have changed. If they have not, it answers normally, transmitting the same version number. If they have, it may at least tell the client. And depending on how much it knows about how the data have changed, it may tailor the reply accordingly.
Just as an example, suppose you get a request with start, end, version, and that you know that since version was up to date, 3 rows coming before start have been deleted. You might send a redirect with start-3, end-3, new version.
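A minimal sketch of that idea, assuming the audit trail is simply a hypothetical deletions table of (version, position) rows kept by the server:

def adjust_window(cur, start, end, client_version, current_version):
    """Shift a requested [start, end] window to compensate for rows
    deleted before `start` since the client's version. `deletions` is a
    hypothetical audit table (version, position) maintained server-side."""
    cur.execute(
        "SELECT count(*) FROM deletions WHERE version > %s AND position < %s",
        (client_version, start))
    removed_before_start = cur.fetchone()[0]
    new_start = max(1, start - removed_before_start)
    new_end = end - removed_before_start
    return new_start, new_end, current_version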
WebSockets can do this. You can use something like pusher.com to catch realtime changes to your database and pass the changes to the client. You can then bind different pusher events to work with models and collections.
Just going to throw this out there. Please feel free to tell me if it's completely wrong and why.
This approach uses a left_off variable to page through the results without using offsets.
Say you need your results ordered by the order_at timestamp, descending.
So when I ask for the first result set, it's:
SELECT * FROM Orders ORDER BY order_at DESC LIMIT 25;
right?
This is the case when you ask for the first page (in URL terms, probably the request that doesn't have any left_off parameter). Subsequent pages would use:
yoursomething.com/orders?limit=25&left_off=$timestamp
Then, when receiving your data set, just grab the timestamp of the last viewed item, e.g. 2015-12-21 13:00:49.
Now, to request the next 25 items, go to: yoursomething.com/orders?limit=25&left_off=2015-12-21 13:00:49 (the last viewed timestamp).
In SQL you would just make the same query with a WHERE clause saying the timestamp is less than $left_off:
SELECT * FROM Orders
WHERE order_at < '2015-12-21 13:00:49'
ORDER BY order_at DESC
LIMIT 25;
You should get the next 25 items after the last one seen.
For those who see this answer: please comment on whether this approach is relevant or even possible in the first place. Thank you.