Facebook Request(s): what counts as 1 request?

I am currently creating an application that polls Facebook for data. First I request a page in this fashion...
pageID/posts?fields=id,message,created_time,type&limit=250
This returns the top 250 posts from a page. I then check if the paging next field is set, and if it is, I make another request for the next 250 posts. I continue this recursively until there are no more posts.
For each post that is returned, I also go out and fetch the post details from the Graph API.
My question is: if a page had 500 posts, would that equate to 502 requests (500 requests, one per post, plus 2 for paging through the page data to get the posts)? Or am I incorrect in my understanding of a "request"? I know that when batching calls, each query included in the batch actually counts as 1 request. The goal is to avoid the 600 calls / 600 seconds rate limit. Thanks!

Every API call is...well, 1 request. So every time you use the /posts endpoint, with whatever limit, it is 1 request. For example, the call you posted is one request that returns 250 elements.
Batch requests are just faster, but each call in the batch counts as a request. So if you combine 10 calls in a batch, it will be 10 requests. The benefit of batch calls is really just that they are a lot faster: a batch is as fast as the slowest call in it.
If you want to get 500 posts with that example of yours, you would only need 2 calls: the first returning 250 elements, the second using the API call defined in the "next" value to get another 250. Just keep in mind that the default is usually 25 elements, and you can't use any limit you want. There is a maximum limit per call, and it gets changed from time to time AFAIK, so don't count on getting the same result every time.
Btw, don't be too fixated on that 600 calls / 600 seconds limit; it's just a general guideline. The real limit is dynamic and depends on many factors. It's not public, of course. But if you really hit the limit, you are doing something wrong anyway.
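For illustration, here is a minimal sketch of that two-call pattern in Python using the requests library. The page ID, access token and API version are placeholders, and each loop iteration (the first call and every "next" call) counts as one request against the rate limit:

import requests

GRAPH = "https://graph.facebook.com/v3.2"  # version chosen for illustration only

def fetch_all_posts(page_id, token, limit=250):
    # Follow paging.next until the page runs out of posts.
    url = f"{GRAPH}/{page_id}/posts"
    params = {"fields": "id,message,created_time,type",
              "limit": limit,
              "access_token": token}
    posts, calls = [], 0
    while url:
        resp = requests.get(url, params=params).json()
        calls += 1
        posts.extend(resp.get("data", []))
        url = resp.get("paging", {}).get("next")  # a complete URL, absent on the last page
        params = None  # "next" already contains the query string
    return posts, calls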

Related

Facebook API - reduce the amount of data you're asking for, then retry your request for 1 row

I have the following logic for my ad insights request:
If Facebook asks me to reduce the amount of data I'm requesting, I halve the date range. If the date range stays the same, I halve the limit instead.
It gets to the point where I send this request:
https://graph.facebook.com/v3.2/{account}/insights?level=ad&time_increment=1&limit=1&time_range={"since":"2019-03-29","until":"2019-03-29"}&breakdowns=country&after=MjMwNwZDZD
But I still get that error:
Please reduce the amount of data you're asking for, then retry your request
There is no more reducing I can do.
Note that this only happens sometimes.
One way to avoid the error, once you are down to requesting only 1 item (limit=1), is to start splitting the fields and request half of the fields in each request.
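As a rough illustration of that field-splitting fallback (the error class and the get_insights callable below are hypothetical stand-ins, not part of any Facebook SDK):

class DataTooLargeError(Exception):
    """Stand-in for the "reduce the amount of data" API error."""

def fetch_in_field_halves(get_insights, fields):
    # get_insights(fields) is assumed to perform one limit=1 insights request
    # for the given field list and raise DataTooLargeError when it fails.
    try:
        return [get_insights(fields)]
    except DataTooLargeError:
        if len(fields) == 1:
            raise  # a single field still fails; nothing left to split
        mid = len(fields) // 2
        return (fetch_in_field_halves(get_insights, fields[:mid]) +
                fetch_in_field_halves(get_insights, fields[mid:]))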
Another way is to run an async report, which should not have such a low time limit.
Official Facebook API team response:
It looks like you are requesting a lot of fields, this is likely the
cause of this error. This will cause the request to time-out.
Could you try using asynchronous requests as described here:
https://developers.facebook.com/docs/marketing-api/insights/best-practices/#asynchronous?
Async requests have a much longer time limit, this will likely resolve
your issue.
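A rough sketch of that asynchronous flow with Python's requests library, following the documented report_run_id pattern (error handling and result paging are omitted; the account ID, token and report parameters are placeholders):

import time
import requests

GRAPH = "https://graph.facebook.com/v3.2"

def run_async_insights(account_id, token, params):
    # Starting the job returns a report_run_id instead of the data itself.
    start = requests.post(f"{GRAPH}/{account_id}/insights",
                          data={**params, "access_token": token}).json()
    report_run_id = start["report_run_id"]

    # Poll the report run until the job is marked as completed.
    while True:
        status = requests.get(f"{GRAPH}/{report_run_id}",
                              params={"access_token": token}).json()
        if status.get("async_status") == "Job Completed":
            break
        time.sleep(10)

    # Fetch the finished report; this edge is paginated like any other.
    return requests.get(f"{GRAPH}/{report_run_id}/insights",
                        params={"access_token": token}).json()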

How to design a REST API to fetch a large (ephemeral) data stream?

Imagine a request that starts a long running process whose output is a large set of records.
We could start the process with a POST request:
POST /api/v1/long-computation
The output consists of a large sequence of numbered records that must be sent to the client. Since the output is large, the server does not store everything; instead it maintains a window of records with an upper limit on the window's size. Let's say that it stores up to 1000 records (and pauses the computation whenever that many records are available). When the client fetches records, the server may subsequently delete those records and so continue generating more (as more slots in the 1000-record window are freed).
Let's say we fetch records with:
GET /api/v1/long-computation?ack=213
We can take this to mean that the server should return records starting from index 214. When the server receives this request, it can assume that the (well-behaved) client is acknowledging that records up to number 213 are received by the client and so it deletes them, and then returns records starting from number 214 to whatever is available at that time.
Next if the client requests:
GET /api/v1/long-computation?ack=214
the server would delete record 214 and return records starting from 215.
This seems like a reasonable design until it is noticed that GET requests need to be safe and idempotent (see section 9.1 in the HTTP RFC).
Questions:
Is there a better way to design this API?
Is it OK to keep it as GET even though it appears to violate the standard?
Would it be reasonable to make it a POST request such as:
POST /api/v1/long-computation/truncate-and-fetch?ack=213
One question that I always feel needs to be asked is: are you sure REST is the right approach for this problem? I'm a big fan and proponent of REST, but I try to only apply it to situations where it's applicable.
That being said, I don't think there's anything necessarily wrong with expiring resources after they have been used, but I think it's bad design to re-use the same URL over and over again.
Instead, when I request the first set of results, maybe with:
GET /api/v1/long-computation
I'd expect that resource to give me a next link with the next set of results.
Although that particular URL design does sort of tell me there's only one long-computation going on in the entire system at any given time. If that's not the case, I would also expect a bit more uniqueness in the URL design.
The best solution here is to buy a bigger hard drive. I'm assuming you've pushed back and that's not in the cards.
I would consider your operation to be "unsafe" as defined by RFC 7231, so I would suggest not using GET. I would also strongly advise you to not delete records from the server without the client explicitly requesting it. One of the principles REST is built around is that the web is unreliable. Under your design, what happens if a response doesn't make it to the client for whatever reason? If they make another request, any records from the lost response will be destroyed.
I'm going to second @Evert's suggestion that, unless you absolutely must keep this design, you instead pick a technology that's built around reliable delivery of information, such as a messaging queue. If you're going to stick with REST, you need to allow clients to tell you when it's safe to delete records.
For instance, is it possible to page records? You could do something like:
POST /long-running-operations?recordsPerPage=10

202 Accepted
Location: "/long-running-operations/12"
{
    "status": "building next page",
    "retry-after-seconds": 120
}

GET /long-running-operations/12

200 OK
{
    "status": "next page available",
    "current-page": "/pages/123"
}

-- or --

GET /long-running-operations/12

200 OK
{
    "status": "building next page",
    "retry-after-seconds": 120
}

-- or --

GET /long-running-operations/12

200 OK
{
    "status": "complete"
}

GET /pages/123
{
    // a page of records
}

DELETE /pages/123
// remove this page so new records can be made
You'll need to cap the page size at the maximum number of records you support. If the client's requested page size is smaller than that limit, you can build up more records in the background while they process the first page.
That's just spitballing, but maybe you can start there. No promises on quality - this is totally off the top of my head. This approach is a little chatty, but it saves you from returning a 404 if the new page isn't ready yet.
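For what it's worth, a client for that protocol could look roughly like this. The host and field names are taken from the hypothetical example above, so treat it as a sketch rather than a spec:

import time
import requests

HOST = "https://example.com"  # hypothetical host for this sketch

def consume_long_computation(records_per_page=10):
    # Start the operation, poll its status, fetch each page as it becomes
    # ready, and DELETE the page to tell the server it may discard it.
    started = requests.post(f"{HOST}/long-running-operations",
                            params={"recordsPerPage": records_per_page})
    op_url = HOST + started.headers["Location"]
    while True:
        status = requests.get(op_url).json()
        if status["status"] == "complete":
            break
        if status["status"] == "next page available":
            page_url = HOST + status["current-page"]
            yield requests.get(page_url).json()  # hand the page of records to the caller
            requests.delete(page_url)            # explicit ack: the server may now free the slots
        else:
            # "building next page": wait the hinted interval, then poll again
            time.sleep(status.get("retry-after-seconds", 120))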

FQL/GRAPH Api Logic

I have a logic problem I can't seem to solve (might be possible).
Example:
I am in 100 Facebook groups.
I need the 10 latest posts of EACH group I am in.
That's pretty much it, but I can't seem to find a way to do this without making a foreach loop that calls the API over and over again; if I had a couple hundred more groups it would be impossible.
PS: I'm using FQL at the moment but am able to use the Graph API; I've coded this in about 3 different ways, but with no success.
This is the farthest I could get:
SELECT actor_id,source_id FROM stream WHERE source_id IN (select gid from group_member where uid = me())
It only returns results from one page; maybe there's no way to return all of this without a foreach asking for each group's 10 latest messages.
There's no need to use FQL or batching. This can be done with a simple Graph API request, IMHO:
GET /me/groups?fields=id,name,feed.fields(id,message).limit(10)
This will return 10 posts for each of your groups. In case there is too much data to be returned, try setting the limit parameter for the base query as well:
GET /me/groups?fields=id,name,feed.fields(id,message).limit(10)&limit=20
Then you'll get a next field in the result JSON. By calling the URL contained in this field, you'll get the next set of results. Do this until the result is empty; then you've reached the end.
You can use batch calls, described here https://developers.facebook.com/docs/graph-api/making-multiple-requests/
Using batch requests, you can make up to 50 calls in one go. Note that batch requests don't increase the rate limits: if you make 50 requests in a batch, they are counted as 50 calls, not one. However, you will get the response in a shorter time.
If you think you're making too many calls, you should put some delay between calls to avoid rate limiting.
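A rough sketch of such a batch call in Python (the group IDs and token are placeholders; as noted above, each item in the batch still counts as its own call against the rate limit):

import json
import requests

def batch_group_feeds(group_ids, token):
    # One HTTP round trip, but up to 50 separate Graph API calls inside it.
    batch = [{"method": "GET",
              "relative_url": f"{gid}/feed?fields=id,message&limit=10"}
             for gid in group_ids[:50]]  # the Graph API caps a batch at 50 calls
    resp = requests.post("https://graph.facebook.com",
                         data={"access_token": token, "batch": json.dumps(batch)})
    # Each element mirrors one batched call: its code, headers and body (a JSON string).
    return [json.loads(item["body"]) if item else None for item in resp.json()]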

The limit of Facebook's graph api "limit" parameter

I'm fetching a large amount of comments from a public page using Facebook's Graph API.
By default Facebook returns 25 comments per response and uses paging. This creates the need for multiple requests, which is unnecessary since I know ahead of time that there will be a lot of comments.
I read about the "limit" parameter that you can pass to ask for a certain number of items per response.
I was wondering, what is the limit of that parameter? I'm assuming I can't pass &limit=10000.
There's a different way for fetching comments:
https://graph.facebook.com/<PAGE_ID>_<POST_ID>/comments?limit=500
The maximum value for the limit parameter is 500.
Yes, with the limit parameter you can specify how many of a certain resource you want in one call. The default limit is 25.
For example, if you want 100 comments in one call for a post with ID POST_ID, you can query like this:
https://graph.facebook.com/POST_ID?fields=comments.limit(100)
I think they have changed this. For /feed I only get 200-225 posts back, but for comments I get as many as 2000 back.
Old question, but this is in the current Facebook documentation in case anyone finds this question via search (emphasis mine):
Some edges may also have a maximum on the limit value for performance reasons. In all cases, the API returns the correct pagination links.
In other words, even if you specify a limit above what's allowed by the endpoint, the "pagination.previous" and "pagination.next" elements will always provide the correct URL to resume where it left off.
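In practice that means you can ask for a large page size but should drive the loop off the pagination links rather than the limit you requested. A minimal sketch (object ID and token are placeholders):

import requests

def fetch_all_comments(object_id, token, limit=500):
    # Ask for a big page, but trust paging.next to resume correctly
    # even if the edge caps the limit to something smaller.
    url = f"https://graph.facebook.com/{object_id}/comments"
    params = {"limit": limit, "access_token": token}
    comments = []
    while url:
        resp = requests.get(url, params=params).json()
        comments.extend(resp.get("data", []))
        url = resp.get("paging", {}).get("next")
        params = None  # "next" is a complete URL
    return comments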
I would recommend using FQL instead.
FQL provides a more flexible approach where you can combine data types (posts, users, pages, etc.) as you please. You can also query for comments belonging to a list of stories instead of just one, limiting your number of requests even more.
There are a couple of drawbacks though:
1. There is a limit of 5000 comments. Here you would use a query looking something like: "SELECT id, ...... FROM comments, ... WHERE parent_id IN (1,2,3....) ORDER BY time LIMIT 0, 5000". Even if you split this up into several queries with "LIMIT 0, 1000", "LIMIT 1000, 1000", "LIMIT 2000, 1000", etc., you would never get anything over 5000 comments ("LIMIT 5000, 1000" would return empty).
2. Each real request made to Facebook's servers counts as one request. If you send something that is actually a combination of requests, it will be counted as multiple requests.
3. Facebook does not like overly heavy requests. You can end up being blocked for shorter periods of time (minutes to hours, not days). If this happens, act on it.

Maximum number of network updates retrieved per API call

Is there any restriction on the number of entries that are retrieved using a single call to the Network Updates API? I found this forum comment "The per-user limit is per call, so 300 requests with however many updates they have." on the thread
http://developer.linkedin.com/forum/increase-search-api-throttle-limit
I want to confirm that indeed there is no limit. I have received as many as 106 entries in a single call.
Thanks in advance.
The maximum number of updates returned from the Network Updates API appears to be 250. Take the following query as an example:
http://api.linkedin.com/v1/people/~/network/updates?count=500
Even if I try to specify the start parameter at, say, 250, I can't get the next 250 updates from the API:
http://api.linkedin.com/v1/people/~/network/updates?count=250&start=250
So it looks like 250 is the max, with no ability to page beyond that.
UPDATE:
Have verified that 250 is the maximum number returned, either in a single call or via the paging parameters. Looks like the documentation has been updated to reflect this.