Facebook API: mark a thread as "read"

I have been trying to find a way to POST a change to one of the Thread object's fields in my inbox to mark it as "read", i.e. change unread from 1 to 0, as seen in the JSON response I get:
"unread": 1,
"id": "1643543545",
"updated_time": "2013-02-12T14:53:26+0000",
"comments": {
"data": [
{
.
.
.
}
]
}
However, I am a bit lost in finding which part of the API documentation talks about that, and which objects allow you to POST to them and alter their fields. Looking at the Thread object, the docs only mention the ability to fetch data in a read-only manner; there is no mention of whether the object's fields can be updated or altered, i.e. changing unread from 1 to 0.
Is it possible in the first place, or is POST only available for specific parts of the API like "feeds", "messages", etc.?
If no such thing exists, any ideas on how to do that would be appreciated (you'll get a discontinued Canadian penny as a token of appreciation :)

It is not currently possible for developers to mark a thread or message as "read".
Possible duplicate of:
Facebook Graph API, mark inbox as read?

Related

Clio API v4 Communication: how to add a Time Entry via the API?

I'm just wondering if there is a way to add a recorded time to a Communication (phone call) using the API?
Okay, I just found it in the documentation: the Activity endpoint.
https://app.clio.com/api/v4/documentation#operation/Activity#create
Sounds like you managed to answer your own question, but for anyone dealing with the same or similar issue, I thought I would throw in my two cents.
This is really a two step operation:
1. Create the Communication log through a POST request to the /api/v4/communications.json endpoint.
2. Create an Activity entry associated with the communication you just created through a POST request to /api/v4/activities.json.
After you create the communication, the response gives you back the ID of the record you just made. Using that ID, you create a new Activity record and include it in the payload as follows:
{
    "data": {
        "communication": {
            "id": 0
        },
        "quantity": $time_qty,
        "date": "YYYY-MM-DD",
        "type": "TimeEntry"
    }
}
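For completeness, here is a hedged sketch of that two-step flow in JavaScript. The endpoint paths come from the answer above; the bearer-token auth, the "data" response envelope, the communication fields, and the quantity units are assumptions you should confirm against Clio's documentation.

// Sketch only: placeholders below (accessToken, the communication fields,
// the quantity units) must be filled in from Clio's API docs.
async function logPhoneCallWithTime(accessToken) {
  const headers = {
    'Authorization': 'Bearer ' + accessToken, // assumption: OAuth bearer auth
    'Content-Type': 'application/json'
  };

  // Step 1: create the communication (the phone call log).
  const commResponse = await fetch('https://app.clio.com/api/v4/communications.json', {
    method: 'POST',
    headers: headers,
    body: JSON.stringify({ data: { /* communication fields per the Clio docs */ } })
  });
  const communication = (await commResponse.json()).data; // assumes a "data" envelope

  // Step 2: create the Activity (time entry) linked to that communication.
  await fetch('https://app.clio.com/api/v4/activities.json', {
    method: 'POST',
    headers: headers,
    body: JSON.stringify({
      data: {
        communication: { id: communication.id },
        quantity: 900,      // duration, in whatever unit Clio expects
        date: '2023-01-15', // "YYYY-MM-DD"
        type: 'TimeEntry'
      }
    })
  });
}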

What is the standard practice for designing a REST API that is used for inserting/updating a list of records?

We are building an API which will be used for inserting and updating records in a database. If a record exists (based on its key), it will be updated; if it does not, it will be inserted.
I have two questions.
As per REST guidelines, what are the options for designing such an API, e.g. PUT, POST, or PATCH? How should the list of objects be represented?
NOTE: I know from other answers I have read that there is confusion over how this should be done per REST guidelines, so I am OK with guidance on general best practice (irrespective of the REST part).
Secondly, the part I am really confused about is how to represent the output, i.e. what this API should return.
Specific guidance/input on the above would be really appreciated.
I've seen many different implementations for inserts/updates across various vendors (Stripe, HubSpot, PayPal, Google, Microsoft). Even though they differ, the difference somehow fits well with their overall API implementation and is not usually a cause for stress.
With that said, the "general" rule for inserts is:
POST /customers - provide the customer details within the body.
This will create a new customer and return the unique ID and customer details in the response (along with createdDate and other auto-generated attributes).
Pretty much most, if not all, API vendors implement this logic for inserts.
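As an illustration (the attribute names and values are made up for the example), the create call and its response might look like:

POST /customers
{
    "first_name": "John",
    "last_name": "Smith"
}

Response:
{
    "id": "345",
    "first_name": "John",
    "last_name": "Smith",
    "createdDate": "2019-06-01T10:15:00Z"
}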
Updates are quite different. Your options include:
POST
POST /customer/<customer_id> - include attributes and values you want to update within the body.
Here you use a POST to update the customer. It's not a very common implementation, but I've seen it in several places.
PUT
PUT /customer/<customer_id> - include either all of the attributes, or only the ones being updated, within the body.
Given that PUT is technically an idempotent method, you can either stay true to the REST convention and expect your users to provide all the attributes to replace the resource, or make it simpler by accepting only the attributes they want to update. The second option is not very "RESTful", but it is easier to handle from a user's perspective (and reduces the size of the payload).
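For example, a "true to the convention" full-replacement PUT would carry every attribute, even the ones that did not change (attribute names are again illustrative):

PUT /customer/345
{
    "first_name": "John",
    "last_name": "Smith",
    "email": "john.smith@example.com"
}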
PATCH
PATCH /customer/<customer_id> - include the operations and the attributes you want to update / remove / replace / etc. within the body. More about how to PATCH.
The PATCH method is used for partial updates, and it's how you're "meant" to invoke partial updates. It's a little harder to use from a consumer's perspective.
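One widely used convention for the PATCH body is JSON Patch (RFC 6902), where each entry names the operation and the target attribute; a sketch against the same hypothetical customer resource:

PATCH /customer/345
Content-Type: application/json-patch+json
[
    { "op": "replace", "path": "/first_name", "value": "Johnny" },
    { "op": "remove", "path": "/email" }
]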
Now, this is where the bias kicks in. I personally prefer to use POST, where I am not required to provide all the attributes to invoke an update (just the ones I want to update). The reason is simplicity of use.
In terms of the response body for updates, they will usually return the object within the response body, including the updated attributes (and updated auto-generated attributes, such as updatedDate).
Bulk inserts/updates
Looking at the Facebook Graph HTTP API (Batch Request) for inspiration, and assuming POST is your preferred method for updates, you could embed an array of requests using a dedicated batch resource as an option.
Endpoint: POST /batch/customers
Body:
[
    { "first_name": "John", "last_name": "Smith", ... },              // CREATE
    { "id": "777", "first_name": "Jane", "last_name": "Doe", ... },   // UPDATE
    { "id": "999", "first_name": "Mike", "last_name": "Smith", ... }, // UPDATE
    ...
]
Sample Response
{
    "id": "123",
    "result": [
        { // Creation successful
            "code": 200,
            "headers": {..},
            "body": {..},
            "uri": "/customers/345"
        },
        { // Update successful
            "code": 200,
            "headers": {..},
            "body": {..},
            "uri": "/customers/777"
        },
        { // A failed update request
            "code": 404,
            "headers": {..},
            "body": {..} // body includes error details
        }
    ]
}
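A short JavaScript sketch of how a client might drive such a batch endpoint; the /batch/customers resource and the response shape are the hypothetical ones shown above, not an existing API:

// Upserts a mixed array of customers: records without an "id" are creates,
// the rest are updates, mirroring the request body above.
async function upsertCustomers(customers) {
  const response = await fetch('/batch/customers', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(customers)
  });
  const batch = await response.json();

  // Each entry carries its own status code, so partial failures can be
  // reported without rejecting the whole batch.
  batch.result.forEach(function (item, index) {
    if (item.code >= 400) {
      console.error('Record ' + index + ' failed:', item.body);
    } else {
      console.log('Record ' + index + ' stored at ' + item.uri);
    }
  });
  return batch;
}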

API provides pagination functionality - client wants to query beyond bounds of API

For a web project, I am consuming an API that returns educational materials (books, videos, etc.). In simplified form, you can request:
API accepted parameters :
type: accepts 1 or many: [book, video, software]
subject matter: accepts 1 or many: [science, math, english, history]
per page: accepts an integer, defaults to 2; 0 returns ALL results
page: accepts an integer, defaults to 1
Important: This is a contrived example of a real use case, so it's not just 1 or 2 requests I'd have to cache; it's an almost infinite number of combinations.
and it returns objects that look like:
{
    "total-results": 15,
    "page": 1,
    "per-page": 2,
    "data": [
        {
            "title": "Foobar",
            "type": "book",
            "subject-matter": [
                "history",
                "science"
            ],
            "age": 10
        },
        {
            "title": "Barfoo",
            "type": "video",
            "subject-matter": [
                "history"
            ],
            "age": 14
        }
    ]
}
The client wants to be able to allow users to filter by age on my site -- so I have to essentially query everything and re-run my pagination.
I'd like to suggest to the API team (which we control) to allow me to query by age as well, but trying to explain this concept to the business is proving fruitless.
Right now, the only solutions I can think of are: (1) convince the API team to allow me to query by age, or (2) cache the life out of my requests, use "0" by default, and handle pagination on my end.
Again, important: this is a contrived example of a real use case, so it's not just 1 or 2 requests I'd have to cache; it's an almost infinite number of combinations.
Anyone have experience dealing with something similar to this?
Edit: Eric Stein asked a very sound question; here is the Q&A:
His Q: "Your API team does not know how to filter by age?"
My A: "They may, but it's a HUGE organization and I may get stonewalled because of bureaucracy, so I want to prepare for the worst."
I worked on a project where we consumed an API and had to offer more filters than the API allowed (the API wasn't ours). In that case we decided to create a cron script that consumed the API and stored the returned data in a database of our own. We had a lot of problems maintaining it (it was A LOT of data), but it kinda worked for us (at least for the time I was working on the project).
I think that if it's important to your application (and to your client) to be able to filter by age, that's a pretty good argument to convince the API team to allow it.
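If the API team cannot be convinced, option (2) from the question can be sketched roughly as below. The endpoint, parameter names, and in-memory cache are assumptions based on the question; once the data volume grows, a real implementation would likely need the database-backed sync described above.

const cache = new Map(); // full upstream result sets, keyed by type/subject

async function getMaterials(type, subject, age, page, perPage) {
  page = page || 1;
  perPage = perPage || 2;

  const key = type + '|' + subject;
  if (!cache.has(key)) {
    // per_page=0 asks the upstream API for ALL results (per its docs).
    const query = 'type=' + encodeURIComponent(type) +
                  '&subject=' + encodeURIComponent(subject) + '&per_page=0';
    const response = await fetch('/api/materials?' + query); // assumed endpoint
    cache.set(key, (await response.json()).data);
  }

  // Apply the filter the upstream API does not support, then re-paginate.
  const filtered = cache.get(key).filter(function (item) {
    return age == null || item.age === age; // exact match shown; adapt to ranges
  });
  const start = (page - 1) * perPage;

  return {
    'total-results': filtered.length,
    'page': page,
    'per-page': perPage,
    'data': filtered.slice(start, start + perPage)
  };
}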

Find the count of likes with the Graph API

I use the Facebook Graph API and I encountered a problem relating to likes.
My request:
My goal is to find the count of likes, but the query times out. What is the solution?
Thanx
My goal is to find the count of likes
So you only want the overall number of likes, the counter, but not the individual likes?
Then you should ask for the summary via field expansion:
/{page_id}/feed?fields=likes.limit(0).summary(1)
For each feed item, you will get a likes data structure that looks like this:
"likes": {
"data": [
],
"summary": {
"total_count": 12345
}
}
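A small JavaScript sketch of reading that counter per post, assuming a plain HTTPS call against the Graph API (the page ID and access token are placeholders you supply):

async function getLikeCounts(pageId, accessToken) {
  const url = 'https://graph.facebook.com/' + pageId +
    '/feed?fields=likes.limit(0).summary(1)&access_token=' + accessToken;
  const response = await fetch(url);
  const feed = await response.json();

  // Each feed item carries only the aggregate, never the individual likes.
  return feed.data.map(function (post) {
    return {
      id: post.id,
      likeCount: post.likes ? post.likes.summary.total_count : 0
    };
  });
}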
Try the options below.
Uncheck all the boxes and make the request again; if you get no error, check them one by one until you find the problem.
Try not to use limit(999999); it is very large, and I've had problems trying to get that much information in one page of a query.
Make sure the access token you created has all the permissions necessary for your query.
I confess that I have never seen this error in the Graph API; it is very generic and it is difficult to give you a more accurate suggestion.

Backbone.js & REST API resources relationship & interaction

I have a small REST API that is being consumed by a single-page web application powered by Backbone.js.
There are two resource types that the API provides and, therefore, the Backbone app uses: articles and comments. The two resources have different endpoints, and each article links to the location of all the comments for that article.
The problem I'm facing is that, in the article list in my web app, I would like to display the number of comments for each article. Given that this would only be possible if I also fetched the comments, the current setup would require one API request to get the initial article list and another one per article to count its comments. That becomes a problem if, for instance, there are 100 articles: 101 HTTP requests would be necessary to populate one single view.
The solutions I can think of right now are:
1. to include the comments data in the initial articles request like so
[
    {
        "id": 1,
        "name": "Article 1",
        ...
        "comments": [
            {
                "id": 1,
                "text": "some comment"
            },
            {
                "id": 2,
                "text": "some comment"
            },
            ...
        ]
    },
    ...
]
The question in this case is: how is it possible to parse the "comments" as a separate comments collection and not include them in the article model?
2. to include some metadata inside the articles response like so:
[
    {
        "id": 1,
        "name": "Article 1",
        ...
        "comments": 13
    },
    ...
]
That option raises the question: how should I handle parsing the model so that, on the one hand, the meta information is available and, on the other hand, the "comments" attribute is not one Backbone would try to perform updates on?
I feel there might be another solution compliant with the REST philosophy that I'm missing, so if you have any other suggestions please let me know.
I think your best bet is to go with your second option, include the number of comments for each article inside your article model.
That option raises the question: how should I handle parsing the model so that, on the one hand, the meta information is available and, on the other hand, the "comments" attribute is not one Backbone would try to perform updates on?
Not sure what your concern is here. Why would you be worried about the comments attribute getting updated?
I can't think of any other "RESTy" way of achieving your desired result.
I would suggest using alternative 2 and having the server return a subset of the article attributes that are deemed useful for applications when dealing with the article collection resource (perhaps reachable at /articles).
The full article member resource with all its comments (whether or not they are stored in separate tables in the backend) would be available at /articles/:id.
From a Backbone.js point of view you probably want to put the collection resource in, say, an ArticleCollection, which will convert each member (currently with a subset of the attributes) into Article models.
When the user selects to view an article in full, you pull it out of the ArticleCollection and invoke fetch to populate it in full.
Regarding what to do with the extra/virtual attributes that are included in the collection resource (/articles), like the comment count and possibly other useful aggregations, I see a few alternatives:
1. In Article#initialize you can pull them out of the attributes and store them as metadata on the article. This way the built-in Backbone.Model#toJSON will not see them.
2. Keep them in the attributes section of each model and override Backbone.Model#toJSON to exclude them when "serializing" an Article.
In alternative 1, an Article#commentCount() helper could return this._commentCount || this.get('comments').length to make it work on both partially and fully loaded articles.
For a fully loaded Article you would probably want to convert the nested comments array into a full-blown CommentCollection anyway and store that in this._comments, so I don't think it is that unusual to have your models store additional stuff directly on the model instance, outside of its attributes hash.
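A minimal sketch of alternative 1, assuming Backbone is loaded and that the server sends "comments" as a plain count in the collection resource and as a nested array in the member resource:

var Article = Backbone.Model.extend({
  initialize: function () {
    // Pull the aggregate out of the attributes so Backbone.Model#toJSON
    // (and therefore save/sync) never sees it.
    if (typeof this.get('comments') === 'number') {
      this._commentCount = this.get('comments');
      this.unset('comments', { silent: true });
    }
  },

  // Works for both partially loaded (count only) and fully loaded
  // (nested comments array) articles.
  commentCount: function () {
    var nested = this.get('comments');
    return this._commentCount || (nested ? nested.length : 0);
  }
});

var ArticleCollection = Backbone.Collection.extend({
  model: Article,
  url: '/articles'
});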