How to Tell if Dropbox API 503s are per-user or per-app - dropbox-api

I recently had a user who hit the Dropbox API at a very high rate and caused a large number of 503 responses. According to the Dropbox API documentation a 503 is caused when 'Your app is making too many requests and is being rate limited. 503s can trigger on a per-app or per-user basis.'
The JSON body of the 503 response was as follows:
{"error": "Service Unavailable"}
This doesn't give me much information about the basis on which I'm being throttled: per-app or per-user. This could be very important, as it will affect whether I attempt to back off and throttle all of my application's requests to Dropbox, or only those for a specific user.
Is there any way to detect which basis such responses are occurring on?

Dropbox will help you work around or increase your rate limits if you ask. I think they were purposefully ambiguous about what the rate limits mean. Rate limits exist for a reason, and I don't think Dropbox, or anyone, wants to give away the secret to circumventing them.
From the core docs as of July 8th, 2014
https://www.dropbox.com/developers/core/docs
You'll receive a 503 if you're still using OAuth 1.0a as your authentication. You'll receive a 429 if you're using OAuth 2.0. But yes, they are unclear about whether it is per-app or per-user.
The best practices page mentions that you should contact the developer team if you need to work around these limits.

Related

Limit for REST API Calls towards Dropbox?

Is there a fixed number of REST calls towards Dropbox, like 1000 requests per day, etc.?
I can't find any information on the Dropbox site.
Thank you!
The Dropbox API does have a rate limiting system, but there aren't any specific numbers documented. The limits operate on a per-user basis.
Also note that not all 429s and 503s indicate rate limiting, but in any case where you get a 429 or 503, the best practice is to retry the request, respecting the Retry-After header if it is given in the response, or using an exponential back-off if it is not.
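For instance, a minimal retry helper in Python might look like the following; this is a sketch, not Dropbox's documented behaviour. The function name is made up, the v2 endpoint in the usage comment is only an example, and it assumes Retry-After, when present, is a number of seconds.

import random
import time
import requests

def call_with_backoff(url, headers, json_body, max_retries=5):
    """Retry on 429/503, honoring Retry-After when present,
    otherwise falling back to exponential backoff with jitter."""
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=json_body)
        if resp.status_code not in (429, 503):
            return resp  # success, or a non-rate-limit error the caller should handle
        retry_after = resp.headers.get("Retry-After")
        if retry_after is not None:
            delay = float(retry_after)  # assumes a delay in seconds
        else:
            delay = (2 ** attempt) + random.uniform(0, 1)
        time.sleep(delay)
    return resp  # still rate limited after max_retries attempts

# Example usage (hypothetical token):
# resp = call_with_backoff(
#     "https://api.dropboxapi.com/2/files/list_folder",
#     headers={"Authorization": "Bearer <ACCESS_TOKEN>"},
#     json_body={"path": ""},
# )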

Increase Batch Quota in Google Core Reporting API

Does anyone know if there is a way to increase the quota limit of 10 queries when batching calls to the core reporting API?
This question/answer mentions the limit of 10: How can I combine/speed up multiple API calls to improve performance?
If I try to add more than 10 queries to the batch only the first ten are processed, each one after that contains a 403 quota exceeded error.
Is there a pay option? I would love to speed up the process of reporting on GA data for a bunch of URLs. I looked in my Google Developers Console under the Analytics API, where there is an option to increase the per-user limit and a link to request additional quota, but I don't need the total quota to increase, only the allowed number of batch requests.
Thanks!
Quota is the number of requests you are allowed to make to a Google API without requesting permission to access more. Most of the Google APIs have a free quota: a number of requests Google lets you make without asking for permission to make more. There are project-based quotas and user-based quotas.
Unless it says otherwise, API quotas are project-based, not user-based.
User quota example
Per-user limit 10 requests/second/user
Some quotas are user-based; a user is normally the person who has authenticated the request. Every request sent to Google contains information about who is making it, in the form of the IP address the request came from. If your code is running on a server, the IP address is the same all the time, so Google sees it as the same user. You can get around this by adding a quotaUser value to your requests; this identifies the requests as coming from different users.
If you send too many requests too fast from the same user, you will see the following error.
userRateLimitExceeded The request failed because a per-user rate limit
has been reached.
The best way to get around this is to use quotaUser in all of your requests and identify the different users to Google. Just sending a random number every time should also work.
Answer: You can't apply for an extension of the flood-protection user rate limit, but you can get around it by using quotaUser.
More info on quotas can be found in the Google Developers Console under APIs.
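As a rough sketch of what that looks like in practice, here is a Python call against the Core Reporting API v3 REST endpoint that attributes each request to an end user via quotaUser. The quotaUser parameter itself is real; quota_user_for and run_report are made-up helper names, and the metric/date values are only examples.

import hashlib
import requests

def quota_user_for(user_id: str) -> str:
    # quotaUser accepts an arbitrary string (up to 40 characters);
    # a short hash keeps the value stable per end user and anonymous.
    return hashlib.sha1(user_id.encode()).hexdigest()[:40]

def run_report(access_token: str, view_id: str, user_id: str):
    params = {
        "ids": f"ga:{view_id}",
        "start-date": "7daysAgo",
        "end-date": "today",
        "metrics": "ga:sessions",
        "quotaUser": quota_user_for(user_id),  # count this request against a per-user bucket
    }
    headers = {"Authorization": f"Bearer {access_token}"}
    resp = requests.get(
        "https://www.googleapis.com/analytics/v3/data/ga",
        params=params,
        headers=headers,
    )
    return resp.json()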

There have been too many calls from this ad-account. Wait a bit and try again

I am trying to fetch the report stats from our account. I need to make async calls, because otherwise I would get an error that the data is too old.
When I create multiple requests I will get the error: "There have been too many calls from this ad-account. Wait a bit and try again."
I have only made about 30 requests in a short amount of time because of the way the async reports work. Is there a better way to fetch the reporting data? And if there is not, is there a way to see the request score that is mentioned in the documentation?
Another question: is there a difference in the number of requests allowed when your app is on development access?
Thanks in advance,
Jorik
First point: according to the access level docs here, there is heavy rate limiting on apps that are in the development stage.
Second, to fetch reports there are multiple endpoints, such as ad-account-wise reports, campaign-wise reports, and ad-wise reports; here is a link to the docs for the Insights API.
The available endpoints are:
act_AD_ACCOUNT_ID/insights
CAMPAIGN_ID/insights
ADSET_ID/insights
AD_ID/insights
Lastly, about rate limiting in the Marketing API: it is done with a sliding-window method, which means there is no actual count of requests per day or anything like that; it's just that a lot of requests in a short amount of time are not allowed.
Two things you can do are:
first, check the response of the API and, if it is a rate-limit error, stop sending requests (a rough sketch follows below);
second, use batch requests.
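A rough Python sketch of the first point might look like this. The Graph API version, field names, and especially the error codes in RATE_LIMIT_CODES are assumptions for illustration; check the codes you actually receive in the error body before relying on them.

import time
import requests

GRAPH_URL = "https://graph.facebook.com/v2.11"  # adjust to the version your app targets

# Codes often associated with rate limiting (app, user, and ad-account level);
# treat this set as an assumption and tune it to what your responses show.
RATE_LIMIT_CODES = {4, 17, 613}

def fetch_insights(ad_account_id, access_token, fields="impressions,spend"):
    resp = requests.get(
        f"{GRAPH_URL}/act_{ad_account_id}/insights",
        params={"fields": fields, "access_token": access_token},
    )
    body = resp.json()
    error = body.get("error")
    if error and error.get("code") in RATE_LIMIT_CODES:
        # Rate limited: stop issuing new requests and wait before retrying.
        time.sleep(60)
        return None
    return body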
Here is a gist from the troubleshooting guide on limits:
Troubleshooting
Timeouts
The most common issues causing failure at this endpoint are too many requests and time outs:
On /GET or synchronous requests, you can get out-of-memory or timeout errors.
On /POST or asynchronous requests, you can possibly get timeout errors. For asynchronous requests, it can take up to an hour to complete a request, including retry attempts, for example if you make a query that tries to fetch a large volume of data for many ad-level objects.
Recommendations
There is no explicit limit for when a query will fail. When it times out, try to break down the query into smaller queries by putting in filters like date range.
Unique metrics are time consuming to compute. Try to query unique metrics in a separate call to improve performance of non-unique metrics.
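For example, one way to break a large period down into smaller date-range queries, as the recommendation above suggests, is sketched below. How each window is passed to the insights request (e.g. as a time_range parameter) depends on your client, so the loop only prints the windows here.

from datetime import date, timedelta

def date_chunks(start: date, end: date, days: int = 7):
    """Yield (since, until) pairs covering [start, end] in windows of `days` days."""
    current = start
    while current <= end:
        window_end = min(current + timedelta(days=days - 1), end)
        yield current, window_end
        current = window_end + timedelta(days=1)

# Example: split a 90-day report into weekly queries instead of one large one.
for since, until in date_chunks(date(2015, 1, 1), date(2015, 3, 31)):
    # each pair would become one smaller insights query, e.g. via a
    # time_range filter like {"since": since.isoformat(), "until": until.isoformat()}
    print(since.isoformat(), until.isoformat())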
Rate Limiting
The Facebook Insights API utilizes rate limiting to ensure an optimal reporting experience for all of our partners. For more information and suggestions, see our Insights API Limits & Best Practices.

REST HTTP Response Code when 3rd party resource becomes unavailable

In one scenario I get some data from a client. With this client I want to start a booking.
Now it could be possible that the booking can't be done, for example when the resource is sold out and becomes unavailable.
What would be a good response code for this?
I tested some APIs and found that 500, 400, or 404 is often the result.
A 500 just looks weird to me.
A 400 is also strange, because the API call didn't do anything wrong.
A 404 doesn't feel right, because the resource is there; it just can't be bought right now.
Any advice on best practice?
One of the possible HTTP error codes to use for this is 410: Gone.
The explanation for this code:
Indicates that the resource requested is no longer available and will
not be available again. This should be used when a resource has been
intentionally removed and the resource should be purged. Upon
receiving a 410 status code, the client should not request the
resource again in the future. Clients such as search engines should
remove the resource from their indices. Most use cases do not require
clients and search engines to purge the resource, and a "404 Not
Found" may be used instead.

Application Request Limit issue (Occurring Randomly with Random Scenarios)

I have tried raising this concern on Facebook Support/Bugs, but they said I should post implementation issues here. I have read about it everywhere, and it seems to be quite an open issue to this day. I am not sure if this will be solved or not.
So, what we are doing is, we have clients - Android and iOS.
The apps on Android/iOS allow users to log into the app and generate a token based on the permission set we have, and we pass this token to the server for fetching further data as and when required by the client. As our user base is increasing, we are getting "Application request limit reached" quite often.
We are fetching photos of users and their friends using FQL. When fetching photos for around 8-10 different users in parallel, we sometimes reach the application request limit, which is quite random, and we are not aware of the actual scenario in which it breaks and how. According to Facebook the limit is 1M calls per day, but we are only hitting around 80K-100K API calls in a day, though as users increase it is stretching a bit further; that is roughly 200 or fewer calls per user. We tried doing batch calls as well, and we hit the application request limit with those too.
If any of you could help us understand the complete concept of the API limit and how it can be handled, we would really appreciate the help. We want to understand how the API limit is decided and over which interval its rate is calculated, so that we can configure our side accordingly.
Earlier in the day, we ran into a unique API call issue. Our server started to fail on API calls for user tokens that are with us. We (on our systems, other than the server) tried fetching the data for those tokens (simple calls like /me or /me/home) and it worked fine for us, but not for the server. We then set up another server and redirected the requests to the new server, and that server worked well for the same set of users. We are not sure what went wrong in this case and how it breaks. Please help.
Many Thanks,
Reno Jones
Did you look at the Insights -> Developer section of developer.facebook.com for your app?
This will show you a breakdown per api call, including warnings and ones that are currently being throttled and why.
Also, are you sure you're using User token authorization and not just your App token?
Beyond that, we take the information from Insights to find API calls to cache on our side rather than hitting Facebook every time. You will likely have to do something similar if you're not already. They have limits for calling too often, as well as for requesting too much data. For the latter, we had to reduce the amount of historical data we requested.
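As a sketch of that kind of caching, here is a tiny in-process TTL cache; graph_get in the usage comment is a hypothetical wrapper around your Graph API call, and in production you would more likely use something like Redis or memcached.

import time

_cache = {}

def cached_call(key, fetch_fn, ttl_seconds=300):
    """Return a cached value for `key` if it is still fresh, otherwise call `fetch_fn`."""
    now = time.time()
    entry = _cache.get(key)
    if entry and now - entry[0] < ttl_seconds:
        return entry[1]
    value = fetch_fn()          # the actual API request happens only on a cache miss
    _cache[key] = (now, value)
    return value

# Usage: cache each user's /me/home result for five minutes instead of
# hitting Facebook on every client request (graph_get is hypothetical).
# feed = cached_call(f"home:{user_id}", lambda: graph_get("/me/home", token))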