We've recently started creating API endpoints. One of these endpoints is hardcoded to change two of our reference type codes (e.g. the code "P" for mobile is changed to "M") from their system value to a custom value, out of a configurable list that has approximately 12 records at the moment. I'm trying to convince the team that changing this reference data is bad practice and a terrible idea because of all the issues it can cause for systems that use the API, but they believe it increases the "independence" of the API from the system of truth. We work in an enterprise environment, and currently only our own systems hit the API.
Is there any other data or information that suggests this is a bad idea? (Copious amounts of Google searching hasn't revealed anyone discussing this sort of issue specifically.) Or am I wrong in thinking so?
Edit:
For reference, here are some examples:
What the data would look like in the source system the API pulls from:
{
"phone_type": "P",
"phone_number": "1234567890",
"user_id":"username"
}
What that same data would look like coming from our API now:
{
"phone_type": "M",
"phone_number": "1234567890",
"user_id":"username"
}
What the reference data would look like coming from our reference-codes endpoint:
[
{
"code": "P",
"description": "Mobile Number",
"active":"true"
}
]
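To make the risk concrete, here is a hypothetical consumer-side sketch in Python (the names are invented for illustration; the data shapes are copied from the examples above). Any client that joins API records against the reference-codes endpoint will fail to resolve the remapped value, because "M" exists nowhere in the system of truth:

# Hypothetical consumer code: build a lookup from the reference-codes endpoint,
# then resolve the phone_type coming back from the API.
reference_codes = {"P": "Mobile Number"}  # as served by the reference endpoint
record = {"phone_type": "M", "phone_number": "1234567890", "user_id": "username"}

description = reference_codes.get(record["phone_type"])
print(description)  # None - the remapped "M" has no matching reference code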
The OSRM Routing engine returns "hints" in many of its outputs, and you are able to pass these back into a new request, which saves on lookup time and thereby optimizes the query.
My question is: how do I pass these "hints" back into the /table/v1/car API call, as per the example below?
EXAMPLE:
An OSRM API request of
/table/v1/car/-0.693000,52.078000;-0.724000,52.040000
gives back (snippet):
"sources": [
{
"hint": "uImugOqJroBBAAAAAAAAALoBAAAAAAAA7WvYQQAAAACaUzhDAAAAAEEAAAAAAAAAugEAAAAAAAAXCgAAmXb1__mxGgP4bPX_sKUaAwYALwrjJ41R",
"distance": 388.619802,
"location": [
-0.690535,
52.081145
],
"name": ""
},
The original coordinates:
-0.693000, 52.078000
have been fixed up to:
-0.690535, 52.081145
(snapped to a nearby road and the hint is as above).
So I would like to utilise these "hints" in a new API query for the same LAT/LNG location, which should optimize the query.
The manual says about hints:
This can be used on subsequent request to significantly speed up the query and to connect multiple services.
I've tried various combinations while looking at the manual, but so far nothing has worked.
Has anybody successfully passed "hint" data into the /table/v1/car API for OSRM routing? If so, please would you let me know what you did?
I tried your request:
/table/v1/car/-0.693000,52.078000;-0.724000,52.040000
and got response:
{"code":"Ok",
"durations":[[0,596.2],[615.9,0]],"destinations":[
{"hint":"teJ0h-fidIdBAAAAAAAAALoBAAAAAAAA7WvYQQAAAACaUzhDAAAAAEEAAAAAAAAAugEAAAAAAACrkAAAmXb1__mxGgP4bPX_sKUaAwYALwr88AjE",
"distance":388.619802,"name":"","location":[-0.690535,52.081145]},{"hint":"dbcDgLevA4BpAAAAAAAAAAQGAAAwCAAA4-dpQQAAAACIYVZDGSCSQzQAAAAAAAAAAgMAABwEAACrkAAATvb0_48VGgPg8_T_QBEaAw4Afwf88AjE",
"distance":129.943557,"name":"","location":[-0.723378,52.041103]}],
"sources":[
{"hint":"teJ0h-fidIdBAAAAAAAAALoBAAAAAAAA7WvYQQAAAACaUzhDAAAAAEEAAAAAAAAAugEAAAAAAACrkAAAmXb1__mxGgP4bPX_sKUaAwYALwr88AjE",
"distance":388.619802,"name":"","location":[-0.690535,52.081145]},{"hint":"dbcDgLevA4BpAAAAAAAAAAQGAAAwCAAA4-dpQQAAAACIYVZDGSCSQzQAAAAAAAAAAgMAABwEAACrkAAATvb0_48VGgPg8_T_QBEaAw4Afwf88AjE",
"distance":129.943557,"name":"","location":[-0.723378,52.041103]}]}
Your request has 2 points, so you have to add 2 hints, one for each point.
So, the request with hints is:
/table/v1/car/-0.693000,52.078000;-0.724000,52.040000?hints=teJ0h-fidIdBAAAAAAAAALoBAAAAAAAA7WvYQQAAAACaUzhDAAAAAEEAAAAAAAAAugEAAAAAAACrkAAAmXb1__mxGgP4bPX_sKUaAwYALwr88AjE;dbcDgLevA4BpAAAAAAAAAAQGAAAwCAAA4-dpQQAAAACIYVZDGSCSQzQAAAAAAAAAAgMAABwEAACrkAAATvb0_48VGgPg8_T_QBEaAw4Afwf88AjE
where the hints are separated by a semicolon.
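For what it's worth, the round trip can be scripted; here is a minimal Python sketch (the requests library and the local host URL are my assumptions, not part of OSRM itself):

import requests

BASE = "http://localhost:5000"  # assumption: point this at your OSRM server
coords = "-0.693000,52.078000;-0.724000,52.040000"

# First call without hints: OSRM snaps each point to the road network
# and returns one hint per source.
first = requests.get(f"{BASE}/table/v1/car/{coords}").json()

# Hints are URL-safe base64, so they can go straight into the query string,
# semicolon-separated, one per coordinate and in the same order.
hints = ";".join(src["hint"] for src in first["sources"])

# Second call with hints: skips the nearest-road lookup.
second = requests.get(f"{BASE}/table/v1/car/{coords}?hints={hints}").json()
print(second["durations"])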
I'm trying to build an application that migrates data between cloud services. While trying to transfer mail messages, I was unable to find a way to set the sent date for messages; after some searching it seems it can't be done using MS Graph. I know that EWS can do it, but EWS is now deprecated, so my question is: does anyone know a way to do it using MS Graph? Is there really no solution for this, and will I really be forced to use a deprecated API?
You need to set a few extended properties to do this. Set the MessageFlags extended property to make the item appear as a sent message. You also need to set the client submit time https://learn.microsoft.com/en-us/office/client-developer/outlook/mapi/pidtagclientsubmittime-canonical-property and the delivery time https://learn.microsoft.com/en-us/office/client-developer/outlook/mapi/pidtagmessagedeliverytime-canonical-property to the date you want the message to have been sent.
{
"Subject": "Test123"
,"Sender":{
"EmailAddress":{
"Name":"senderblah",
"Address":"senderblah#blah.com"
}}
,"Body": {
"ContentType": "HTML",
"Content": "Just the facts"
}
,"ToRecipients": [
{
"EmailAddress":{
"Name":"blah",
"Address":"blah#blah.com"
}}
]
,"SingleValueExtendedProperties": [
{
"PropertyId":"Integer 0x0E07",
"Value":"1"
}
,{
"PropertyId":"SystemTime 0x0039",
"Value":"2020-03-04T09:55:38.7169+11:00"
}
,{
"PropertyId":"SystemTime 0x0E06",
"Value":"2020-03-04T09:55:38.7169+11:00"
}
]
}
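A minimal sketch of posting such a message with Python's requests library follows. Note that Graph v1.0 documents camelCase property names and id/value keys inside singleValueExtendedProperties, so the sketch renames the fields from the JSON above accordingly; ACCESS_TOKEN is a placeholder for whatever your OAuth flow returns:

import requests

ACCESS_TOKEN = "..."  # placeholder: acquire via your OAuth flow

message = {
    "subject": "Test123",
    "body": {"contentType": "HTML", "content": "Just the facts"},
    "toRecipients": [{"emailAddress": {"name": "blah", "address": "blah@blah.com"}}],
    "singleValueExtendedProperties": [
        {"id": "Integer 0x0E07", "value": "1"},  # PidTagMessageFlags: mark as sent
        {"id": "SystemTime 0x0039", "value": "2020-03-04T09:55:38.7169+11:00"},  # client submit time
        {"id": "SystemTime 0x0E06", "value": "2020-03-04T09:55:38.7169+11:00"},  # delivery time
    ],
}

# Create the item directly in the Sent Items well-known folder.
resp = requests.post(
    "https://graph.microsoft.com/v1.0/me/mailFolders/sentitems/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=message,
)
resp.raise_for_status()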
That said, because you can't import the MIME content of a message using the Graph API at the moment, doing large-scale data migrations with the Graph is a little impractical (but it will work okay for small-scale apps without too much diversity of content). I would still suggest using EWS for migration products; while deprecated, it is still supported (and used by most migration vendors).
I am struggling to match the "active time" returned by the Fit REST API with reality.
As an example - on 12/14 I had two walks, about 45 minutes each. The API returns one of them as type 7 ("walking" - right!) and the other as type 0 ("in vehicle" - wrong!). However, the Fit app shows both as "walking", so it apparently uses a different data source.
I checked some other days and on these days, the session with type 0 is indeed a valid "in vehicle" session.
I tried all aggregated data sources that return com.google.activity.segment. Most of them are empty, I've found data only in merge_activity_segments and platform_activity_segments (and they seem to be identical).
Google's docs have a caveat about delay in data sync, but they never specify how long this delay is. The data I am looking at is about 24 hours old - if their sync is that slow, then this API is more or less unusable.
I am using the following POST to https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate
{
"aggregateBy": [
{
"dataSourceId": "derived:com.google.activity.segment:com.google.android.gms:merge_activity_segments"
}
],
"endTimeMillis": "1481788800000",
"startTimeMillis": "1481702400000",
"bucketByTime": {
"period": {
"timeZoneId": "America/Los_Angeles",
"type": "day",
"value": 1
}
}
}
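For completeness, a sketch of issuing that request from Python (the requests library and the ACCESS_TOKEN placeholder are my assumptions; the token must be authorized for the fitness activity scope):

import requests

ACCESS_TOKEN = "..."  # placeholder OAuth 2.0 bearer token

body = {
    "aggregateBy": [{
        # Swap this dataSourceId for the calories/steps sources mentioned
        # below to aggregate those instead.
        "dataSourceId": "derived:com.google.activity.segment:com.google.android.gms:merge_activity_segments"
    }],
    "bucketByTime": {"period": {"type": "day", "value": 1,
                                "timeZoneId": "America/Los_Angeles"}},
    "startTimeMillis": "1481702400000",
    "endTimeMillis": "1481788800000",
}

resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
for bucket in resp.json().get("bucket", []):
    print(bucket)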
For reference - activity types: https://developers.google.com/fit/rest/v1/reference/activity-types
Has anyone been able to retrieve activity time from Fit's REST API that is correct? Any suggestions?
By the way, steps and calories seem to work fine - just aggregate the following data sources: derived:com.google.calories.expended:com.google.android.gms:merge_calories_expended and derived:com.google.step_count.delta:com.google.android.gms:estimated_steps
A side note - it is probably the worst-documented API from a major company that I have seen.
I am trying to get real-time stock data from BSE and NSE using Yahoo Finance web services. I was able to get some data using the following URL:
http://finance.yahoo.com/webservice/v1/symbols/COALINDIA.NS/quote?format=json
But it gives me very limited information.
{
"list": {
"meta": {
"type": "resource-list",
"start": 0,
"count": 1
},
"resources": [
{
"resource": {
"classname": "Quote",
"fields": {
"name": "COAL INDIA LTD",
"price": "367.649994",
"symbol": "COALINDIA.NS",
"ts": "1418895539",
"type": "equity",
"utctime": "2014-12-18T09:38:59+0000",
"volume": "2826975"
}
}
}
]
}
}
I need more information like yearly high, yearly low, last traded price, etc., and I couldn't find any documentation from Yahoo detailing how to get more information out of these services.
Is there documentation available related to these services? Or please suggest if there are any alternatives available.
I don't know where the definitive documentation might be, but for your particular example try appending &view=detail to your URL.
http://finance.yahoo.com/webservice/v1/symbols/COALINDIA.NS/quote?format=json&view=detail
This will at least give you the year_high and year_low that you asked after.
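Checking this from Python is a one-liner once you know the response shape shown earlier (a sketch; requests is my choice of HTTP client):

import requests

url = ("http://finance.yahoo.com/webservice/v1/symbols/COALINDIA.NS/quote"
       "?format=json&view=detail")
fields = requests.get(url).json()["list"]["resources"][0]["resource"]["fields"]
# year_high / year_low only appear once view=detail is appended.
print(fields.get("year_high"), fields.get("year_low"))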
Now, even though the following won't work for your COALINDIA.NS symbol (I suspect the exchange is not supported), it might be worth exploring the following two examples:
Example 1: As before, but for Apple and Yahoo symbols, with &view=detail appended:
http://finance.yahoo.com/webservice/v1/symbols/YHOO,AAPL/quote?format=json&view=detail
Example 2: And now using a completely different URL, resulting in much more response data. One key caveat is that this data is delayed by 15 minutes:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20IN%20(%22YHOO%22,%22AAPL%22)&format=json&env=http://datatables.org/alltables.env
If you discover the major differences between those two options and what impact they might have, then please do let us all know; I'd be interested in finding out more.
If you are fine with getting NSE quotes, you can use this package for the purpose; it is extremely easy to set up.
http://nsetools.readthedocs.org/en/latest/index.html
Since it uses the NSE website/services as its data source, the quotes will not be delayed (a few seconds at most).
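A minimal usage sketch (the method name comes from the nsetools docs linked above; treat the exact keys in the returned quote dictionary as assumptions):

from nsetools import Nse

nse = Nse()
quote = nse.get_quote("coalindia")  # NSE symbol for Coal India Ltd
print(quote.get("lastPrice"), quote.get("dayHigh"), quote.get("dayLow"))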
Beware that these data are both delayed and inconsistent. You are not getting anything even remotely close to tick or real-time data.
From example 2, refresh a few times, and inspect the "LastTradeWithTime" key-value pair. I sometimes get different quotes from different times of day, for no apparent reason. They are sometimes delayed up to three hours.
You get what you pay for; in other words, this is not a free lunch.
For those who are curious about the different options available in the Yahoo Finance URLs, I think these links might help. If it's not what you're looking for, sorry.
http://internetbandaid.com/2009/03/31/yahoo-stocks-api/
https://ilmusaham.wordpress.com/tag/stock-yahoo-data/
Note: the WordPress site contains information taken from a site called gummy-stuff.org, which is listed in full at the bottom of that site (I can only list 2 URLs in this post, so I had to do it the round-about way). Oddly, I found this site on my own yesterday; funny how stuff comes back around. If you visit it, you'll just see a statement from Yahoo that the info originally listed there (some of which you're looking at on the WordPress site above) was never intended for public consumption and violates Yahoo's terms and conditions, as it can apparently be used for hacking purposes. I was curious to see what was on the original post, so I searched for it on the Wayback Machine. BTW, the links to the spreadsheets are still active in the archive.
Cheers. Thom
Introduction
/me/books.reads returns books[1].
It includes an array of books and the following fields for each book:
title
type
id
url
Problem
I'd like to get the author name(s) at least. I know that written_by is an existing field for books.
I'd like to get ISBN, if possible.
Current situation
I tried this:
/me/books.reads?fields=data.fields(author)
or
/me/books.reads?fields=data.fields(book.fields(author))
But the error response is:
"Subfields are not supported by data"
The books.reads response looks like this (just one book included):
{
"data": [
{
"id": "00000",
"from": {
"name": "User name",
"id": "11111"
},
"start_time": "2013-07-18T23:50:37+0000",
"publish_time": "2013-07-18T23:50:37+0000",
"application": {
"name": "Books",
"id": "174275722710475"
},
"data": {
"book": {
"id": "192511337557794",
"url": "https://www.facebook.com/pages/A-Semantic-Web-Primer/192511337557794",
"type": "books.book",
"title": "A Semantic Web Primer"
}
},
"type": "books.reads",
"no_feed_story": false,
"likes": {
"count": 0,
"can_like": true,
"user_likes": false
},
"comments": {
"count": 0,
"can_comment": true,
"comment_order": "chronological"
}
}
]
}
If I take the id of a book, I can get its metadata from the open graph, for example http://graph.facebook.com/192511337557794 returns something like this:
{
"category": "Book",
"description": "\u003CP>The development of the Semantic Web...",
"genre": "Computers",
"is_community_page": true,
"is_published": true,
"talking_about_count": 0,
"were_here_count": 0,
"written_by": "Grigoris Antoniou, Paul Groth, Frank Van Harmelen",
"id": "192511337557794",
"name": "A Semantic Web Primer",
"link": "http://www.facebook.com/pages/A-Semantic-Web-Primer/192511337557794",
"likes": 1
}
The response includes ~10 fields, including written_by which has the authors of the book.
Curiously, the link field seems to map to the url field of the books.reads response. However, the field names are different, so I'm starting to lose hope that I'll be able to ask for written_by in the books.reads request.
The only reference that I've found about /me/books is https://developers.facebook.com/docs/reference/opengraph/object-type/books.book/
This is essentially about a user sharing that he/she has read a book, not about the details of the book itself.
The data structure is focused on the occasion of reading a book: when reading was started, when this story was published, etc.
[1] I know this thanks to How to get "read books"
FQL does not look very promising – although you can request books from the user table, it seems to deliver just a string value with only the book titles, comma-separated.
You can search the page table by name – but I doubt it will work with name in (subquery) when what that subquery delivers is just one string of the format 'title 1,title 2,…'.
I can’t really test this right now, because I have read only one book so far (ahm, one that I have set as “books I read” on FB, not in general …) – but using that to search the page table by name already delivers a multitude of pages, and even if I narrow that selection down with AND is_community_page=1, I still get several, so there is no real way of telling which would be the right one, I guess.
So, using the Graph API and a batch request seems to be more promising.
Similar to an FQL multi-query, batch requests also allow you to refer to data from a previous “operation” in the batch, by giving operations a “name” and then referring to data from the first operation using JSONPath expression format (see “Specifying dependencies between operations in the request” for details).
So a batch query for this could look like this:
[
{"method":"GET","name":"get-books","relative_url":"me\/books?fields=id"},
{"method":"GET","relative_url":"?ids={result=get-books:$.data.*.id}
&fields=description,name,written_by"}
]
Here it is all in one line, for easier copy & paste, so that line breaks don't cause syntax errors:
[{"method":"GET","name":"get-books","relative_url":"me\/books?fields=id"},{"method":"GET","relative_url":"?ids={result=get-books:$.data.*.id}&fields=description,name,written_by"}]
So, to test this:
Go to Graph API Explorer.
Change the method to POST via the dropdown, and clear whatever is in the field right next to it.
Click “Add a field”, input batch as the name, and paste the one-liner from above as the value.
Since that will also get you a lot of “headers” you might not be interested in, you can add one more field, with name include_headers and value false, to get rid of those.
In the result, you will get a field named body, that contains the JSON-encoded data for the second query. If you want more fields, add them to the fields parameter of the second query, or leave that parameter out completely if you want all of them.
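The same batch can of course be submitted from code; here is a hedged Python sketch (requests and the ACCESS_TOKEN placeholder are my assumptions, and the token needs whichever permission exposes books.reads for the user):

import json
import requests

ACCESS_TOKEN = "..."  # placeholder user access token

batch = [
    {"method": "GET", "name": "get-books", "relative_url": "me/books?fields=id"},
    {"method": "GET",
     "relative_url": "?ids={result=get-books:$.data.*.id}"
                     "&fields=description,name,written_by"},
]

resp = requests.post(
    "https://graph.facebook.com/",
    data={"access_token": ACCESS_TOKEN,
          "batch": json.dumps(batch),
          "include_headers": "false"},
)
# Each operation's body arrives as a JSON-encoded string.
books = json.loads(resp.json()[1]["body"])
print(books)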
OK, after some trial and error I managed to create a direct link to the Graph API Explorer to test this – the right amount of URL-encoding to use is a little fiddly to figure out :-)
(I left out the fields parameter for the second operation here, so this will give you all the info for the book that there is.)
As I said, I only got one book on FB, but this should work for a user with multiple books the same way (since the second operation just takes however many IDs it is given from the first one).
But I can’t tell you off the top of my head how this will work for a lot of books – how slow the second operation might get when you set a high limit for the first one. And I also don’t know how this will behave with regard to pagination, which you might run into when me/books delivers a lot of books for a user.
But I think this should be a good enough starting point for you to figure the rest out by trying it on users with more data. HTH.
Edit: ISBN does not seem to be part of the info for a book’s community page, at least not for the ones I checked. And also written_by is optional – my book doesn’t have it. So you’ll only get that info if it is actually provided.