User rate limit (403 error) with Google AdSense API

When making requests to the Google AdSense API I hit the user rate limit. It occurs when three users (different Gmail accounts) with access to the same AdSense account are making requests at the same time. I have made sure that each user is only allowed to make 1 request per second, and I checked in Fiddler that this is actually working.
In the Google API console I find these quota limits:
Queries per day = 10 000
Queries per 100 seconds per user = 100
Queries per 100 seconds = 500
Has anyone else encountered this issue? Also, does anyone know if the limit "Queries per 100 seconds" is per application or per account? I can't find any information about it in the API documentation. I am wondering if I might have reached that limit somehow.

Yesterday I happened to run into the same issue.
AdSense Management API v1.4 responses:
// when the API presumes the "Queries per 100 seconds per user" limit was breached:
{
  "error": {
    "errors": [{
      "domain": "usageLimits",
      "reason": "userRateLimitExceeded",
      "message": "User Rate Limit Exceeded"
    }],
    "code": 403,
    "message": "User Rate Limit Exceeded"
  }
}
// when the API presumes the "Queries per 100 seconds" limit was breached:
{
  "error": {
    "errors": [{
      "domain": "usageLimits",
      "reason": "rateLimitExceeded",
      "message": "Rate Limit Exceeded"
    }],
    "code": 403,
    "message": "Rate Limit Exceeded"
  }
}
For Google, 1+1 requests might not be counted as exactly two.
You can deliberately reproduce these limit breaches by lowering your quotas in the Google Developer Console.
I advise you to fix your code so that it retries after 2-3 minutes when you hit those limits again.
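If it helps, here is a minimal retry sketch for the Python client (googleapiclient is the standard Google API client library; execute_with_retry, the retry count and the wait time are illustrative choices, not anything the API mandates):

import json
import time
from googleapiclient.errors import HttpError

RATE_LIMIT_REASONS = {'userRateLimitExceeded', 'rateLimitExceeded'}

def execute_with_retry(request, max_retries=5, wait_seconds=150):
    # Execute a prepared googleapiclient request, pausing ~2-3 minutes
    # whenever one of the 403 rate-limit errors shown above comes back.
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as e:
            error = json.loads(e.content.decode('utf-8')).get('error', {})
            reasons = {err.get('reason') for err in error.get('errors', [])}
            if error.get('code') == 403 and reasons & RATE_LIMIT_REASONS:
                time.sleep(wait_seconds)  # back off before the next attempt
            else:
                raise
    raise RuntimeError('Rate limit still exceeded after %d retries' % max_retries)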

Related

I keep hitting `You have exceeded a secondary rate limit` on the GitHub API after just 4 requests in a minute when using search

I am using the API endpoint https://api.github.com/search/code? and after just 3 or 4 requests in a minute the response tells me that I have exceeded my limit.
According to the documentation we have a limit of 30 requests per minute.
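One pattern worth trying (a sketch, not a confirmed fix: search_code and the token handling are placeholders) is to honor the Retry-After header that GitHub sends with secondary rate-limit responses:

import time
import requests

def search_code(query, token, max_attempts=3):
    # Query the code-search endpoint and honor Retry-After when GitHub
    # responds with a secondary rate-limit error (HTTP 403).
    for _ in range(max_attempts):
        resp = requests.get(
            'https://api.github.com/search/code',
            params={'q': query},
            headers={'Authorization': 'token ' + token},
        )
        if resp.status_code == 403 and 'Retry-After' in resp.headers:
            time.sleep(int(resp.headers['Retry-After']))
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError('Still rate-limited after %d attempts' % max_attempts)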

Is it possible to log the request body size in AWS API Gateway?

I have an API Gateway -> SQS integration where every request writes the request body as an SQS message. Since the maximum size for SQS messages is 256kb I'd like to have an alert when the request size is larger than this threshold. Is that possible?
I see it's possible to log the response size but not the request.
API Gateway logs will show an error message if we have enabled Enable CloudWatch Logs and Log full requests/responses data on the API Gateway stage.
{
  "Error": {
    "Code": "InvalidParameterValue",
    "Message": "One or more parameters are invalid. Reason: Message must be shorter than 262144 bytes.",
    "Type": "Sender"
  },
  "RequestId": "b410da48-7fee-5c2e-b046-82ed9a872753"
}
Then we can create a CloudWatch metric filter on "Message must be shorter" to trigger an alert.
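For example, a sketch of creating that filter with boto3 (the log group name, filter name and metric names below are placeholders for your own setup):

import boto3

logs = boto3.client('logs')

# Match the SQS size error in the stage's execution logs and emit a
# count metric that a CloudWatch alarm can watch.
logs.put_metric_filter(
    logGroupName='API-Gateway-Execution-Logs_abc123/prod',  # placeholder
    filterName='SqsMessageTooLarge',
    filterPattern='"Message must be shorter"',
    metricTransformations=[{
        'metricName': 'OversizedRequestBody',
        'metricNamespace': 'ApiGateway/Sqs',
        'metricValue': '1',
    }],
)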

Why do I keep getting "Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'USER-100s'" errors?

I am currently managing two Google Analytics Management Accounts with many clients and view_ids on each one. The task is to request client data via the Google Analytics Reporting API (v4) and store it in a SQL backend on a daily basis via an Airflow DAG structure.
For the first account everything works fine.
Just recently I added the second account to the data request routine.
The problem is that even though both accounts are set to the same "USER-100s" quota limits, I keep getting this error for the newly added account:
googleapiclient.errors.HttpError: <HttpError 429 when requesting https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'USER-100s' of service 'analyticsreporting.googleapis.com' for consumer 'project_number:XXXXXXXXXXXX'.">
I have already set the quota limit "USER-100s" from 100 to the maximum of 1000, as recommended in the official Google guidelines (https://developers.google.com/analytics/devguides/config/mgmt/v3/limits-quotas).
Also I checked the Google API Console and the number of requests for my project number, but so far I have never exceeded the 1000 requests per 100 seconds (see request history account 2), while the first account always works (see request history account 1). Still the above error appeared.
Also I could rule out the possibility that the 2nd account's clients simply have more data.
[request history account 1]
[request history account 2]
I am now down to a try-except loop that keeps on requesting until the data is eventually queried successfully, like
from googleapiclient.errors import HttpError

success = False
data = None
while not success:
    try:
        data = query_data()  # trying to receive data from the API
        if data:
            success = True
    except HttpError as e:
        print(e)
This is not elegant at all and bad for maintainability (e.g. in integration tests). In addition, it is very time and resource intensive, because the loop might sometimes run indefinitely. It can only be a workaround for a short time.
This is especially frustrating because the same implementation works with the first account, which makes more requests, but fails with the second account.
If you know any solution to this, I would be very happy to know.
Cheers Tobi
I know this question has been here for a while, but let me try to help you. :)
There are 3 standard request limits:
50k per day per project
2k per 100 seconds per project
100 per 100 seconds per user
As you showed in your image (https://i.stack.imgur.com/Tp76P.png)
The quota group "AnalyticsDefaultGroup" refers to your API project and the user quota is included in this limit.
Per your description, you are hitting the user quota and that usually happens when you don't provide the userIP or quotaUser in your requests.
So there are two main points you have to handle to prevent those errors:
Include the quotaUser with a unique string in every request;
Keep 1 request per second
By your code, I presume that you are using the default Google API Client for Python (https://github.com/googleapis/google-api-python-client), which doesn't have a global way to define the quotaUser.
To include the quotaUser:
analytics.reports().batchGet(
    body={
        'reportRequests': [{
            'viewId': 'your_view_id',
            'dateRanges': [{'startDate': '2020-01-01', 'endDate': 'today'}],
            'pageSize': '1000',
            'pageToken': pageToken,
            'metrics': [],
            'dimensions': []
        }]
    },
    quotaUser='my-user-1'
).execute()
That will make the Google API register your request under that user, consuming 1 of the 100-per-100-seconds user limit instead of counting every request against the same user for your whole project.
Limit 1 request per second
If you plan to make a lot of requests, I suggest including a delay between every request using:
time.sleep(1)
right after a request to the API. That way you can keep under 100 requests per 100 seconds.
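Putting both points together, a rough sketch (the view_ids list and the authorized analytics service object are assumed to already exist in your DAG code; the metric is illustrative):

import time

for view_id in view_ids:
    response = analytics.reports().batchGet(
        body={'reportRequests': [{
            'viewId': view_id,
            'dateRanges': [{'startDate': '2020-01-01', 'endDate': 'today'}],
            'metrics': [{'expression': 'ga:sessions'}],  # illustrative metric
        }]},
        quotaUser='view-' + view_id,  # a unique string per logical user
    ).execute()
    time.sleep(1)  # stay under 100 requests per 100 seconds per user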
I hope I helped. :)

Facebook Workplace Compliance Integration - Error code 960

I am writing a Compliance Integration using Python and the Facebook Graph API to search all user content in our Workplace community for given keywords. I have something that previously worked every time; however, recently (over the last couple of days) one of the requests sent to Facebook will return a FacebookApiException with error code 960 and the message "Request aborted. This could happen if a dependent request failed or the entire request timed out.", after thousands of requests have already succeeded. This doesn't occur all the time, but more often than not it will fail.
{
  "error": {
    "message": "Request aborted. This could happen if a dependent request failed or the entire request timed out.",
    "code": 960,
    "type": "FacebookApiException",
    "fbtrace_id": "B72L8jiCFZy"
  }
}
For simplicity I haven't been using dependencies in my requests, so I can only think that it is timing out. My question is: what is the timeout period for the Facebook Graph API? Is it timing out because I am taking too long to send a request, or because the Facebook server is taking too long to respond? Is there any way I can increase the timeout to stop the error from occurring?
TIA
This question is older, but in case anyone else is looking for an answer:
I can't answer what the timeout period is for the Facebook Graph API, but I can point out a workaround for those who are running into timeout errors.
Facebook has documentation for how to deal with timeouts:
https://developers.facebook.com/docs/graph-api/making-multiple-requests/#timeouts
Large or complex batches may timeout if it takes too long to complete all the requests within the batch. In such a circumstance, the result is a partially-completed batch. In partially-completed batches, responses from operations that complete successfully will look normal (see prior examples) whereas responses for operations that are not completed will be null.
The ordering of responses correspond with the ordering of operations in the request, so developers should process responses accordingly to determine which operations were successful and which should be retried in a subsequent operation.
So, according to their documentation, the response for a batch request that timed out should look something like this:
[
  {
    "code": 200,
    "headers": [
      { "name": "Content-Type",
        "value": "text/javascript; charset=UTF-8" }
    ],
    "body": "{\"id\":\"…\"}"
  },
  null, null, null
]
Using their example, you should just need to re-queue the items in your batch request array that correspond with the null responses.
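A minimal sketch of that retry pattern (send_batch, standing in for whatever function posts your batch and returns the parsed response list, is a placeholder):

def retry_timed_out(batch_requests, send_batch, max_rounds=3):
    # Re-send only the operations whose responses came back as null
    # (parsed as None in Python), relying on the order of responses
    # matching the order of requests.
    pending = list(batch_requests)
    for _ in range(max_rounds):
        responses = send_batch(pending)
        pending = [req for req, resp in zip(pending, responses) if resp is None]
        if not pending:
            return
    raise RuntimeError('Some batch operations still timed out after retries')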

How does pagination affect the rate limit?

I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I in theory query 200 different stock tickers every hour and get back (up to) 800 messages?
If so, that sounds like a great way to get around the 30-message limit.
The documentation is unclear on this and we are rolling out new documentation that explains this more clearly.
Every stream request will have a default and max limit of 30 messages per response, regardless of whether the cursor params are present or not. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if sending your access token along with the request: 200 requests per hour for unauthenticated requests and 400 for authenticated requests.
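To make the interaction concrete, here is a hedged pagination sketch (the endpoint shape and the max/cursor fields follow the linked parameter docs as I read them; verify against the current API). Each page is one request against the hourly limit and returns at most 30 messages, so walking one symbol back to 800 messages costs roughly 27 requests:

import requests

def fetch_symbol_stream(symbol, max_messages=800):
    # Page backwards through a symbol stream using the `max` cursor,
    # 30 messages (one request) at a time.
    url = 'https://api.stocktwits.com/api/2/streams/symbol/%s.json' % symbol
    messages, cursor = [], None
    while len(messages) < max_messages:
        params = {'max': cursor} if cursor else {}
        data = requests.get(url, params=params).json()
        batch = data.get('messages', [])
        if not batch:
            break
        messages.extend(batch)
        cursor = data['cursor']['max']  # id bound for the next (older) page
    return messages[:max_messages]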