Facebook API - "reduce the amount of data you're asking for, then retry your request" for 1 row

I have the following logic for my ad insights request:
If Facebook asks me to reduce the amount of data I'm requesting, I halve the date range. If the date range can't be halved any further, I halve the limit instead.
It gets to the point I send this request:
https://graph.facebook.com/v3.2/{account}/insights?level=ad&time_increment=1&limit=1&time_range={"since":"2019-03-29","until":"2019-03-29"}&breakdowns=country&after=MjMwNwZDZD
But I still get that error:
Please reduce the amount of data you're asking for, then retry your request
There is no more reducing I can do.
Note that this only happens sometimes.
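For reference, a minimal sketch of the halving logic described above might look like this in Python (the endpoint and parameters mirror the URL in the question; fetch_insights, fetch_with_backoff and the error-message check are illustrative names, not an existing SDK):

import json
import requests

GRAPH_URL = "https://graph.facebook.com/v3.2"

def fetch_insights(account, since, until, limit, token):
    # One insights call; returns (ok, payload) so the caller can decide whether to shrink.
    # since/until are datetime.date objects.
    params = {
        "level": "ad",
        "time_increment": 1,
        "limit": limit,
        "breakdowns": "country",
        "time_range": json.dumps({"since": since.isoformat(), "until": until.isoformat()}),
        "access_token": token,
    }
    data = requests.get(f"{GRAPH_URL}/{account}/insights", params=params).json()
    return "error" not in data, data

def fetch_with_backoff(account, since, until, limit, token):
    # Halve the date range first; once it is a single day, halve the limit instead.
    ok, data = fetch_insights(account, since, until, limit, token)
    while not ok and "reduce the amount of data" in str(data.get("error", {})):
        if until > since:                 # the range can still shrink
            until = since + (until - since) // 2
        elif limit > 1:                   # already a single day: shrink the page size
            limit = max(1, limit // 2)
        else:                             # nothing left to reduce -- the dead end in the question
            break
        ok, data = fetch_insights(account, since, until, limit, token)
    return data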

One way to avoid the error, once you are down to requesting a single item (limit=1), is to start splitting the fields and request half of the fields in each request.
Another way is to run an async report, which is not subject to such a low time limit (see the sketch after the official response below).
Official Facebook API team response:
It looks like you are requesting a lot of fields, this is likely the
cause of this error. This will cause the request to time-out.
Could you try using asynchronous requests as described here:
https://developers.facebook.com/docs/marketing-api/insights/best-practices/#asynchronous?
Async requests have a much longer time limit, this will likely resolve
your issue.
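A rough sketch of that async flow, assuming the report-run endpoints described in the linked Marketing API best-practices page (run_async_insights and the 10-second polling interval are placeholders, not an official client):

import time
import requests

GRAPH_URL = "https://graph.facebook.com/v3.2"

def run_async_insights(account, params, token):
    # Kick off the report: POSTing to /insights returns a report_run_id instead of rows.
    start = requests.post(f"{GRAPH_URL}/{account}/insights",
                          params={**params, "access_token": token}).json()
    report_run_id = start["report_run_id"]

    # Poll the report run until the job finishes (interval is an arbitrary placeholder).
    while True:
        status = requests.get(f"{GRAPH_URL}/{report_run_id}",
                              params={"access_token": token}).json()
        if status.get("async_status") == "Job Completed":
            break
        time.sleep(10)

    # Fetch the finished results; paginate over /insights as with a normal call.
    return requests.get(f"{GRAPH_URL}/{report_run_id}/insights",
                        params={"access_token": token}).json()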

Related

API error when requesting full size data from Alphavantage

I am working on analysing some historical stock market data for Australian shares. I am using Alphavantage as my API to get the actual data.
My problem relates specifically to the TIME_SERIES_DAILY function with FULL outputsize. For some shares, I receive an error message in response to an API call:
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=SUL.AUS&outputsize=full&apikey=XXXXXXXXXXXXXX
{
"Error Message": "Invalid API call. Please retry or visit the documentation (https://www.alphavantage.co/documentation/) for TIME_SERIES_DAILY."
}
If I change the outputsize argument to 'compact' it works but only returns a subset of data I am after.
The bizarre thing is that the full-size response works for about 60% of the stocks I am after. After a bit of trial and error, I deduced that the API returns an error for specific shares every time, and not for others.
I presume that there may be some feature about these specific shares that causes it to fail - I just don't know what.
This is a known error with Alpha Vantage. See here for more details.
When there is an issue with data past 100 days, instead of returning bad data it scraps the response so as not to throw off algorithms.
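If you just need the data to keep flowing while this upstream issue exists, one workaround is to fall back to outputsize=compact whenever the full call returns the error. A minimal sketch (daily_series is an illustrative helper; the symbol and key are placeholders):

import requests

AV_URL = "https://www.alphavantage.co/query"

def daily_series(symbol, api_key):
    # Try outputsize=full first; fall back to compact (roughly the last 100 days) on the known error.
    for outputsize in ("full", "compact"):
        params = {
            "function": "TIME_SERIES_DAILY",
            "symbol": symbol,
            "outputsize": outputsize,
            "apikey": api_key,
        }
        data = requests.get(AV_URL, params=params).json()
        if "Error Message" not in data:
            return data, outputsize
    raise RuntimeError(f"Both full and compact requests failed for {symbol}")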

How to design a REST API to fetch a large (ephemeral) data stream?

Imagine a request that starts a long running process whose output is a large set of records.
We could start the process with a POST request:
POST /api/v1/long-computation
The output consists of a large sequence of numbered records that must be sent to the client. Since the output is large, the server does not store everything; it maintains a window of records with an upper limit on the size of the window. Let's say that it stores up to 1000 records (and pauses computation whenever this many records are available). When the client fetches records, the server may subsequently delete those records and so continue generating more records (as more slots in the 1000-record window become free).
Let's say we fetch records with:
GET /api/v1/long-computation?ack=213
We can take this to mean that the server should return records starting from index 214. When the server receives this request, it can assume that the (well-behaved) client is acknowledging that records up to number 213 have been received, so it deletes them and then returns records starting from number 214 up to whatever is available at that time.
Next if the client requests:
GET /api/v1/long-computation?ack=214
the server would delete record 214 and return records starting from 215.
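To make the intended server behaviour concrete, a minimal in-memory sketch of the design described above could look like this (RecordWindow, MAX_WINDOW and the method names are illustrative only, not part of any real framework):

from collections import OrderedDict

MAX_WINDOW = 1000  # the 1000-record window from the description above

class RecordWindow:
    """Holds at most MAX_WINDOW records; the producer pauses while the window is full."""

    def __init__(self):
        self.records = OrderedDict()   # record index -> record
        self.next_index = 1

    def produce(self, record):
        if len(self.records) >= MAX_WINDOW:
            return False               # tell the computation to pause
        self.records[self.next_index] = record
        self.next_index += 1
        return True

    def fetch(self, ack):
        # GET /api/v1/long-computation?ack=<n>: drop everything <= ack, return the rest.
        for idx in [i for i in self.records if i <= ack]:
            del self.records[idx]      # destructive side effect -- the reason GET feels wrong here
        return list(self.records.items())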
This seems like a reasonable design until it is noticed that GET requests need to be safe and idempotent (see section 9.1 in the HTTP RFC).
Questions:
Is there a better way to design this API?
Is it OK to keep it as GET even though it appears to violate the standard?
Would it be reasonable to make it a POST request such as:
POST /api/v1/long-computation/truncate-and-fetch?ack=213
One question I always feel needs to be asked is: are you sure that REST is the right approach for this problem? I'm a big fan and proponent of REST, but I try to apply it only to situations where it's applicable.
That being said, I don't think there's anything necessarily wrong with expiring resources after they have been used, but I think it's bad design to re-use the same url over and over again.
Instead, when I call the first set of results (maybe with):
GET /api/v1/long-computation
I'd expect that resource to give me a next link with the next set of results.
Although that particular url design does sort of tell me there's only 1 long-computation on the entire system going on at the same time. If this is not the case, I would also expect a bit more uniqueness in the url design.
The best solution here is to buy a bigger hard drive. I'm assuming you've pushed back and that's not in the cards.
I would consider your operation to be "unsafe" as defined by RFC 7231, so I would suggest not using GET. I would also strongly advise you to not delete records from the server without the client explicitly requesting it. One of the principles REST is built around is that the web is unreliable. Under your design, what happens if a response doesn't make it to the client for whatever reason? If they make another request, any records from the lost response will be destroyed.
I'm going to second @Evert's suggestion that unless you absolutely must keep this design, you instead pick a technology that's built around reliable delivery of information, such as a messaging queue. If you're going to stick with REST, you need to allow clients to tell you when it's safe to delete records.
For instance, is it possible to page records? You could do something like:
POST /long-running-operations?recordsPerPage=10
202 Accepted
Location: "/long-running-operations/12"
{
"status": "building next page",
"retry-after-seconds": 120
}
GET /long-running-operations/12
200 OK
{
"status": "next page available",
"current-page": "/pages/123"
}
-- or --
GET /long-running-operations/12
200 OK
{
"status": "building next page",
"retry-after-seconds": 120
}
-- or --
GET /long-running-operations/12
200 OK
{
"status": "complete"
}
GET /pages/123
{
// a page of records
}
DELETE /pages/123
// remove this page so new records can be made
You'll need to cap the page size at the number of records you can support. If the client requests fewer records than that limit, you can keep generating more records in the background while they process the first page.
That's just spitballing, but maybe you can start there. No promises on quality - this is totally off the top of my head. This approach is a little chatty, but it saves you from returning a 404 if the new page isn't ready yet.
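Building on that sketch, a rough client-side loop against this hypothetical design might look as follows (the base URL, status strings and field names come from the example exchange above and are not a real API; process() is a placeholder):

import time
import requests

BASE = "https://example.test/api"      # placeholder host

def process(record):
    pass                               # placeholder for whatever the client does with a record

def consume_pages(records_per_page=10):
    # Start the long-running operation; the Location header points at its status resource.
    start = requests.post(f"{BASE}/long-running-operations",
                          params={"recordsPerPage": records_per_page})
    op_url = BASE + start.headers["Location"]

    while True:
        status = requests.get(op_url).json()
        if status["status"] == "complete":
            break
        if status["status"] == "building next page":
            time.sleep(status.get("retry-after-seconds", 60))
            continue
        # "next page available": fetch the page (assumed here to be a JSON array of records),
        # process it, then delete it so the server can free the slot and build more.
        page_url = BASE + status["current-page"]
        for record in requests.get(page_url).json():
            process(record)
        requests.delete(page_url)      # explicit acknowledgement, instead of an unsafe GET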

OutSystems e-mail fails after 100s

I'm using the OutSystems platform and recently I've been getting timeouts from a periodic e-mail. The timer responsible for this action has a 20-minute timeout, but the timer fails after 100s.
Sometimes the timer executes in 99s and the process finishes successfully.
The error:
OutSystems.HubEdition.RuntimePlatform.EmailException: Error creating Email. The operation has timed out
How can I change this behavior to extend this 100s timeout?
You can increase the timeout setting on the Aggregate / Advanced query that you are using to retrieve the data.
Improving the query is always first prize, but increasing the timeout could buy you some time.
UPDATE
According to the OutSystems documentation you cannot set the timeout for email rendering. You would have to speed up the rendering.
You could perhaps split your logic into an action that executes the query and stores the result for quick retrieval during the email preparation.
Probably the issue you're having is that the email is taking too long to render. You can check if this is the case by looking at the error log in Service Center. You should see something like:
Error creating Email. The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at OutSystems.HubEdition.RuntimePlatform.Email.EmailHelper.HttpGetContent(String ssUrl, String method, String contentType, String userAgent, Cookie cookie, QueryParameter[] parameters, String& ssContent, String& ssContentEncoding)
If this is the case, you need to optimize the email in order to render it faster. One good place to start looking is the Slow Queries report; maybe you have some long-running query that's slowing down your email rendering...
Best of luck! If you want more details, you can check this community post.

Facebook Request(s): what counts as 1 request?

I am currently creating an application that polls Facebook for data. First, I request a page in this fashion...
pageID/posts?fields=id,message,created_time,type&limit=250
This returns the top 250 posts from a page. I then check if the paging next link is set and, if it is, make another request for the next 250 posts. I continue this recursively until there are no more posts.
With each post that is returned I go out and fetch the post details from the graph api as well.
My question is: if I had 500 posts on a page, would that equate to 502 requests (500 requests for each post + 2 for paging through the page data to get the posts), or am I incorrect in my understanding of a "request"? I know that when batching calls, each query included in the batch actually counts as 1 request. The goal is to avoid the 600 calls / 600 seconds rate limiting. Thanks!
Every API call is...well, 1 request. So every time you use the /posts endpoint with whatever limit, it will be 1 request. For example, if you do that call you posted, it will be one request that returns 250 elements.
Batch requests are just faster, but each call in the batch counts as a request. So if you combine 10 calls in a batch, it will be 10 requests. The benefit of batch calls is really just that they are a lot faster: as fast as the slowest call in the batch.
If you want to get 500 posts with that example of yours, you would only need 2 calls. First one with 250 returned elements, second one by using the API call defined in the "next" value to get another 250. Just keep in mind that the default is usually 25 elements, and you can't use any limit you want. There is a max limit for calls and it gets changed from time to time afaik so don't count on getting the same result every time.
Btw, don't be too fixated on that 600 calls / 600 seconds limit, it's just a general limit. The real limit is dynamic and depends on many factors. It's not public, of course. But if you really hit the limit, you are doing something wrong anyway.
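To make the counting concrete, here is a small sketch of paging through posts by following the "next" link, one request per page (the endpoint and fields mirror the question; the access token is a placeholder):

import requests

def fetch_all_posts(page_id, token, limit=250):
    url = f"https://graph.facebook.com/{page_id}/posts"
    params = {"fields": "id,message,created_time,type", "limit": limit,
              "access_token": token}
    posts, calls = [], 0
    while url:
        data = requests.get(url, params=params).json()
        calls += 1                                   # each page fetched counts as 1 request
        posts.extend(data.get("data", []))
        url = data.get("paging", {}).get("next")     # pre-built URL for the next page, if any
        params = None                                # "next" already carries all the parameters
    return posts, calls                              # 500 posts at limit=250 -> 2 calls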

Facebook Graph API v1.0 data size limit for JSON return object?

Does Facebook's Graph API have some sort of limit on the size of the JSON object that is returned from its queries?
When I request a lot of a user's friends information, I sometimes get an error code of 1 - unknown error. This happens when I run the following query for a user that has a lot of Facebook friends (200 and up)
me/friends/?fields=id,name,gender,birthday,cover,significant_other,languages,education,work,
checkins.limit(1).fields(place,id,created_time),
likes.limit(5).fields(id,name,created_time),
statuses.limit(5).fields(message,updated_time),
movies.limit(5).fields(name,created_time,id),
music.limit(5).fields(name,created_time,id),
books.limit(5).fields(name,created_time,id),
games.limit(5).fields(name,created_time,id),
interests.limit(5).fields(name)
I tried this on the Graph Explorer and it returned this error
{
  "error": "Request failed"
}
If I run the same request with fewer friends (125 or so), I get back all the data I expect.
It seems like the error is happening because the number of bytes in the JSON that is returned is larger than some threshold, but I haven't seen anything in the docs to corroborate this.
What would cause this error to happen? Has anyone faced this issue before? Any ideas on how to mitigate this?
Solutions I've Considered
Limit the number of friends returned and, if the error still occurs, lower that limit for the next batch; if the error still occurs, lower the limit again, etc. (sketched at the end of this post) - this solution isn't ideal but will probably work for most cases
Split up the queries into multiple requests - this approach would increase the API calls significantly (risking throttling) since it is no longer part of one paged request
Use FQL instead of Graph API - I haven't done enough research into this, but I believe that I would have to query each entity (likes, checkins, etc) one at a time which would increase the API calls significantly and risk throttling
In the end, all of these solutions are still subject to the same Unknown Error to some degree, since I can't predict the size of the object that will be returned (a status message could be a few words or a few paragraphs). It would be ideal to get a handle on why this error is happening before going off and implementing a workaround.
Thanks in advance!
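A minimal sketch of the first workaround considered above (adaptively lowering the friends limit whenever the error comes back); fetch_friends is an illustrative helper, FIELDS is a trimmed version of the field string from the question, and the halving threshold is arbitrary:

import requests

GRAPH_URL = "https://graph.facebook.com/"

# Trimmed version of the field string from the question, for brevity.
FIELDS = ("id,name,gender,birthday,cover,significant_other,languages,education,work,"
          "checkins.limit(1).fields(place,id,created_time),"
          "likes.limit(5).fields(id,name,created_time),"
          "statuses.limit(5).fields(message,updated_time)")

def fetch_friends(token, limit=100, min_limit=5):
    params = {"fields": FIELDS, "limit": limit, "access_token": token}
    friends, after = [], None
    while True:
        if after:
            params["after"] = after            # cursor-based paging rather than the raw "next" URL
        resp = requests.get(GRAPH_URL + "me/friends/", params=params).json()
        if "error" in resp:
            if params["limit"] <= min_limit:
                raise RuntimeError(f"Still failing at limit={params['limit']}: {resp['error']}")
            params["limit"] //= 2              # shrink the batch and retry the same page
            continue
        friends.extend(resp.get("data", []))
        after = resp.get("paging", {}).get("cursors", {}).get("after")
        if not after or not resp.get("data"):
            return friends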