I'm creating an async job to fetch a device performance report at the campaign level using the /insights endpoint. After getting the async job_id, when I'm polling for completion, this is the response I get for some of the campaigns' async_job_ids:
{"id":"<<async_job_id>>","account_id":"<<account_id>>","time_ref":1490218806,"async_status":"Job Not Started","async_percent_completion":0,"date_start":"2017-02-21","date_stop":"2017-03-22"}
This is the response I receive for some async jobs for a duration of 1-2 hours; after that, the job completes.
The remaining async jobs complete in about 1-2 minutes.
Is there any way to avoid this?
No, there is no way to affect this behaviour as the time taken for an async job to complete is based on many factors.
If you need insights data returned immediately, you should try to request smaller data sets, and not use async jobs.
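For reference, the polling side of this usually ends up looking something like the sketch below. It's a minimal example using the Python `requests` library; the report-run endpoints and the `async_status` / `async_percent_completion` fields are taken from the response shown above, while the backoff intervals, timeout, and helper name are just illustrative choices, not anything the API mandates.

```python
import time
import requests

GRAPH = "https://graph.facebook.com/v2.9"  # assumed API version


def run_async_insights(campaign_id, token, params, timeout=3600):
    """Start an async insights job for one campaign and poll until it finishes."""
    # Kick off the report run; the response carries the async job id.
    start = requests.post(
        f"{GRAPH}/{campaign_id}/insights",
        params={"access_token": token, **params},
    ).json()
    job_id = start["report_run_id"]

    deadline = time.time() + timeout
    wait = 5
    while time.time() < deadline:
        status = requests.get(
            f"{GRAPH}/{job_id}", params={"access_token": token}
        ).json()
        if status.get("async_status") == "Job Completed":
            # Fetch the finished report.
            return requests.get(
                f"{GRAPH}/{job_id}/insights", params={"access_token": token}
            ).json()
        if status.get("async_status") == "Job Failed":
            raise RuntimeError(f"report run {job_id} failed")
        print(status.get("async_status"),
              f"{status.get('async_percent_completion', 0)}%")
        time.sleep(wait)
        wait = min(wait * 2, 300)  # back off, up to 5 minutes between polls
    raise TimeoutError(f"report run {job_id} did not finish within {timeout}s")
```

If a job sits in "Job Not Started" for a long time, the only real lever on the client side is the one mentioned above: request smaller data sets, for example narrower date ranges or fewer breakdowns.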
Web application operations are generally meant to be quick to avoid long wait times for users. However, some operations a web application performs may be computationally intensive and take a fair bit of time. What is the best practice in REST for dealing with such operations that may take several minutes yet require an immediate response to users? Is it okay for the web application to take several minutes to return the response of the HTTP request, or is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Is it okay for the web application to take several minutes to return the response of the HTTP request
No. Part of the problem with this approach is that if the server doesn't acknowledge the request in a timely fashion, the client won't know that it reached its intended destination.
is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Yes. That's exactly what 202 Accepted is designed for:
The 202 response is intentionally noncommittal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
It can help, I think, to remember that we're talking about your integration domain; the client isn't talking to your app. It's instead talking to your API, which pretends to be a web site that the client can integrate with. So your client sends the request to the API, and the API responds with an accepted message accompanied by a bunch of links that will help the client continue with the protocol and eventually reach its goal.
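To make that concrete, here is a minimal sketch of the 202 pattern in Python, using Flask purely as an example framework; the route names, the in-memory job store, and the fake work function are all made up for illustration.

```python
import threading
import uuid

from flask import Flask, jsonify, url_for

app = Flask(__name__)
jobs = {}  # in-memory job store, just for the sketch


def long_running_work(job_id):
    # ... the expensive computation goes here ...
    jobs[job_id]["status"] = "done"
    jobs[job_id]["result"] = {"answer": 42}


@app.route("/reports", methods=["POST"])
def create_report():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending"}
    threading.Thread(target=long_running_work, args=(job_id,)).start()
    # 202 Accepted: the request was accepted, the work continues elsewhere.
    response = jsonify({"status": "pending"})
    response.status_code = 202
    response.headers["Location"] = url_for("report_status", job_id=job_id)
    return response


@app.route("/reports/<job_id>/status")
def report_status(job_id):
    # The status monitor the 202 response pointed to.
    job = jobs.get(job_id)
    if job is None:
        return jsonify({"error": "unknown job"}), 404
    body = {"status": job["status"]}
    if job["status"] == "done":
        body["result"] = job["result"]
    return jsonify(body)
```

The client POSTs, immediately gets a 202 with a Location header pointing at the status monitor, and then polls (or is notified) until the work is done.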
I am trying to fetch the reportstats from our account. I need to make async calls because otherwise I would get an error that the data is too old.
When I create multiple requests I will get the error: "There have been too many calls from this ad-account. Wait a bit and try again."
I have only made about 30 requests in a short time because of the way the async reports work. Is there a better way to fetch the reporting data? And if there is not, is there a way to see the request score that is mentioned in the documentation?
Another question: is there a difference in the number of requests allowed when your app is on development access?
Thanks in advance,
Jorik
First, according to the access level docs here, there is heavy rate limiting on apps that are in the development stage.
Second, to fetch reports there are multiple endpoints, such as ad-account-level reports, campaign-level reports, and ad-level reports; here is a link to the docs for the Insights API.
The available endpoints are:
act_AD_ACCOUNT_ID/insights
CAMPAIGN_ID/insights
ADSET_ID/insights
AD_ID/insights
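For example, a campaign-level call could look like the following sketch (Python `requests`; the field list, API version, and placeholder ID are illustrative, not required values):

```python
import requests

params = {
    "access_token": "<ACCESS_TOKEN>",
    "fields": "impressions,clicks,spend",
    "time_range": '{"since":"2017-02-21","until":"2017-03-22"}',
}
# Synchronous campaign-level report; swap the ID for an ad account,
# ad set, or ad to use the other endpoints listed above.
resp = requests.get(
    "https://graph.facebook.com/v2.9/<CAMPAIGN_ID>/insights", params=params
)
print(resp.json())
```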
Lastly, about rate limiting in the Marketing API: it is done with a sliding-window method, which means there is no fixed quota of requests per day; rather, a large number of requests in a short amount of time is not allowed.
Two things you can do are:
First, check the API response, and if it is a rate-limit error, stop sending requests for a while.
Second, use batch requests.
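On the batch-request point, here is a rough sketch of what that can look like with the Graph API `batch` parameter (Python `requests`; the campaign IDs and field list are made up, and the rate-limit check is only an example of what to look for):

```python
import json
import requests

campaign_ids = ["111", "222", "333"]  # placeholder IDs
batch = [
    {"method": "GET",
     "relative_url": f"{cid}/insights?fields=impressions,clicks,spend"}
    for cid in campaign_ids
]
resp = requests.post(
    "https://graph.facebook.com",
    data={"access_token": "<ACCESS_TOKEN>", "batch": json.dumps(batch)},
)
for item in resp.json():
    # Each entry corresponds to one request in the batch; its body is a
    # JSON string. Check the entries individually for rate-limit errors
    # and back off before retrying when you see one.
    print(item["code"], item.get("body"))
```

Batching cuts down the number of HTTP round trips, but each entry still returns its own status, so check them individually for rate-limit errors.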
Here is a gist from the troubleshooting guide on limits:
Troubleshooting
Timeouts
The most common issues causing failure at this endpoint are too many requests and time outs:
On /GET or synchronous requests, you can get out-of-memory or timeout errors.
On /POST or asynchronous requests, you can possibly get timeout errors. For asynchronous requests, it can take up to an hour to complete a request, including retry attempts, for example if you make a query that tries to fetch a large volume of data for many ad-level objects.
Recommendations
There is no explicit limit for when a query will fail. When it times out, try to break down the query into smaller queries by putting in filters like date range.
Unique metrics are time consuming to compute. Try to query unique metrics in a separate call to improve performance of non-unique metrics.
Rate Limiting
The Facebook Insights API utilizes rate limiting to ensure an optimal reporting experience for all of our partners. For more information and suggestions, see our Insights API Limits & Best Practices.
Our API returns a user auth code which has a session expiry (30 mins of inactivity). So, if we make an API call using the auth token, it renews the session to 30 mins from the time of the call.
After 30 mins of inactivity the API returns an error saying that the token has expired. At this point we should request a new auth token.
However, the obvious way to do this (show the user the log in screen and get them to log in again) will mean cutting the user off in the middle of some functions in the app.
For instance, we have various view controllers with options and inputs which aggregate and submit one whole API call at the end of the process. If the session expires on the server whilst the user is filling out these inputs and views then they will be logged out when the API call is made, and they will lose their progress in these views.
There are two possible workarounds for this:
We set timers in the app ourselves to make sure the user is logged out after 30 mins of inactivity in the app. This means that they won't get logged out during a set of inputs; however, the server session may still expire even though we are running our own timer, so this won't work.
We poll the server every 10 seconds or so to ask if the auth token is still valid. This will eat battery and data, and just isn't a reasonable way to do something like this.
Does anyone have any ideas?
Thanks
Tom
From your description, it sounds like a classic failed transaction problem. Just in case you are not familiar with transaction processing, "Nuts and Bolts of Transaction Processing" is a primer on the topic.
If you have the ability to modify the back end system, you will want to ensure an ACID backend.
This could mean building up data on the client and not sending the data to the server until you have a complete transaction. That way, if the session times out, the client still has all the data needed to complete the transaction. (leverage atomicity)
This could mean having a transaction token. As a new session is created, the client could send the server the transaction token, and the state of the transaction would be restored within the new session. (leverage durability)
To me both of these options are better than wiping out the existing transaction and forcing the user to start over again.
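As a rough illustration of how those two ideas can combine on the client (Python standing in for whatever language the app is actually written in; every class, endpoint, and helper name here is hypothetical):

```python
import requests


class TransactionDraft:
    """Builds up the user's input locally until the whole transaction is ready."""

    def __init__(self, api_base):
        self.api_base = api_base
        self.fields = {}

    def update(self, **values):
        # Nothing is sent to the server until the user has finished every
        # screen, so a server-side session expiry cannot lose their input.
        self.fields.update(values)

    def submit(self, auth_token):
        resp = self._post(auth_token)
        if resp.status_code == 401:
            # Session expired mid-flow: re-authenticate and resubmit the
            # same locally held data; the draft survives on the client.
            resp = self._post(reauthenticate())
        resp.raise_for_status()
        return resp.json()

    def _post(self, token):
        return requests.post(
            f"{self.api_base}/transactions",   # hypothetical endpoint
            json=self.fields,
            headers={"Authorization": f"Bearer {token}"},
        )


def reauthenticate():
    # Placeholder for the app's own login / token refresh flow.
    raise NotImplementedError
```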
Hope that helps.
Currently the only way to request keywordstats is for individual adgroups. I'm attempting to retrieve all of the keyword stats for thousands of ads. Is there any way to batch these calls up to speed things up and so I don't hammer the server with so many requests?
I have a running system that processes short- and long-running operations with a request-response interface based on Agatha-RRSL.
Now we want to change things a little in order to be able to send requests via a website in JSON format, so I'm trying several REST server implementations that support JSON.
The REST server will be one module or "shelve" handled by Topshelf, another module will be the processing module, and the last will be the NoSQL database runner module.
To talk between the REST module and the processing module I'm thinking about a service bus, but we have two types of requests: short requests that perform work in 1-2 seconds, and long requests that do their work in about 1 minute.
Is a service bus the right choice for this work? I'm thinking about returning a "response" for a long-running operation with a token that can be used to request the operation's status and results with a new request. The problem is that a big part of the requests must behave like synchronous requests in order to complete the HTTP response.
I think I also have problems with response size (on the MSMQ message transport) when I have to return a huge list of objects.
Any hints?
NServiceBus is not really suitable for request-response messaging patterns. It's more suited to asynchronous publish-subscribe.
Edit: In order to implement a kind of request-response, you would need to send messages in both directions, consisting of three logical steps:
So your client sends a message requesting the data.
The server would receive the message, process it, construct a return message with the data, and send it to the client.
The client can then process the data.
Because each of these steps takes place in isolation and in an asynchronous manner, there can be no meaningful SLA or timeout enforced between when a client sends a request and receives a response. But this works nicely for large processing jobs which may take several minutes to complete.
Additionally, a common value which can be used to tie the request to the response will need to be present in both messages. Otherwise a client could send more than one request, receive multiple responses, and not know which response was for which request.
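The correlation part is language-agnostic; as a rough sketch (Python standing in for the C# you would actually write against NServiceBus, with a hypothetical `bus.send`):

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class DataRequest:
    query: str
    # The correlation id travels with the request...
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))


@dataclass
class DataResponse:
    # ...and is copied onto the response so the client can match them up.
    correlation_id: str
    payload: list


pending = {}  # correlation_id -> what the client wants to do with the reply


def send_request(bus, query, on_reply):
    msg = DataRequest(query=query)
    pending[msg.correlation_id] = on_reply
    bus.send(msg)  # fire-and-forget: no timeout or SLA is implied


def handle_response(msg: DataResponse):
    # Called whenever a response message arrives, in any order.
    callback = pending.pop(msg.correlation_id, None)
    if callback is not None:
        callback(msg.payload)
```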
So you can do this with NServiceBus but it takes a little more thought.
Also, NServiceBus uses MSMQ as the underlying transport, not HTTP.