Why do I keep getting "Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'USER-100s'" errors? - google-analytics-api

I am currently managing two Google Analytics management accounts with many clients and view_ids on each one. The task is to request client data via the Google Analytics Reporting API (v4) and store it in a SQL backend on a daily basis via an Airflow DAG structure.
For the first account everything works fine.
Just recently I added the second account to the data request routine.
The problem is that even though both accounts are set to the same "USER-100s" quota limits, I keep getting this error for the newly added account:
googleapiclient.errors.HttpError: <HttpError 429 when requesting https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'USER-100s' of service 'analyticsreporting.googleapis.com' for consumer 'project_number:XXXXXXXXXXXX'.">
I already raised the quota limit "USER-100s" from 100 to the maximum of 1000, as recommended in the official Google guidelines (https://developers.google.com/analytics/devguides/config/mgmt/v3/limits-quotas).
Also I checked the Google API Console and the number of requests for my project number, but I have never exceeded the 1000 requests per 100 seconds so far (see request history account 2), while the first account always works (see request history account 1). Still the above error appears.
Also I could rule out the possibility that the 2nd account's clients simply have more data.
(Screenshot: request history account 1)
(Screenshot: request history account 2)
I am now down to a try-except loop that keeps on requesting until the data is eventually queried successfully, like:

from googleapiclient.errors import HttpError

success = False
data = None
while not success:
    try:
        data = query_data()  # trying to receive data from the API
        if data:
            success = True
    except HttpError as e:
        print(e)
This is not elegant at all and hard to maintain (and to cover with integration tests). In addition, it is very time and resource intensive, because the loop might sometimes run indefinitely. It can only be a short-term workaround.
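For reference, a slightly more bounded variant of the same workaround (only a sketch, still not a real fix, reusing the query_data() helper from above) would cap the number of retries and back off exponentially instead of hammering the API:

import time  # in addition to the HttpError import above

data = None
for attempt in range(5):  # give up after 5 attempts instead of looping forever
    try:
        data = query_data()  # trying to receive data from the API, as above
        if data:
            break
    except HttpError as e:
        print(e)
        time.sleep(2 ** attempt)  # back off: 1, 2, 4, 8, 16 seconds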
This is especially frustrating because the same implementation works with the first account, which makes more requests, but fails with the second account.
If you know any solution to this, I would be very happy to know.
Cheers Tobi

I know this question is here for a while, but let me try to help you. :)
There are 3 standard request limits:
50k per day per project
2k per 100 seconds per project
100 per 100 seconds per user
As you showed in your image (https://i.stack.imgur.com/Tp76P.png), the quota group "AnalyticsDefaultGroup" refers to your API project, and the user quota is included in this limit.
Per your description, you are hitting the user quota and that usually happens when you don't provide the userIP or quotaUser in your requests.
So there are two main points you have to handle to prevent those errors:
Include the quotaUser with a unique string in every request;
Keep 1 request per second
From your code, I presume that you are using the default Google API Client for Python (https://github.com/googleapis/google-api-python-client), which doesn't have a global way to define the quotaUser.
To include the quotaUser
analytics.reports().batchGet(
    body={
        'reportRequests': [{
            'viewId': 'your_view_id',
            'dateRanges': [{'startDate': '2020-01-01', 'endDate': 'today'}],
            'pageSize': '1000',
            'pageToken': pageToken,
            'metrics': [],
            'dimensions': []
        }]
    },
    quotaUser='my-user-1'
).execute()
That will make the Google API register the request against that user, using 1 of that user's 100-requests-per-100-seconds allowance, instead of counting every request against the same user for your whole project.
Limit 1 request per second
If you plan to make a lot of requests, I suggest including a delay between every request using:
time.sleep(1)
right after a request on the API. That way you can keep under 100 requests per 100 seconds.
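Putting both points together, a minimal sketch of a paced request loop might look like this (assuming the analytics service object from above; the view_ids list, the fetch_report helper, and the chosen metrics/dimensions are placeholders for your own setup):

import time

def fetch_report(analytics, view_id):
    # One batchGet per view, tagged with a per-view quotaUser
    return analytics.reports().batchGet(
        body={
            'reportRequests': [{
                'viewId': view_id,
                'dateRanges': [{'startDate': '2020-01-01', 'endDate': 'today'}],
                'metrics': [{'expression': 'ga:sessions'}],
                'dimensions': [{'name': 'ga:date'}]
            }]
        },
        quotaUser='view-{}'.format(view_id)  # unique string per user/view
    ).execute()

for view_id in view_ids:  # view_ids: your list of GA view IDs
    report = fetch_report(analytics, view_id)
    # ... store the report in your SQL backend ...
    time.sleep(1)  # pace requests to stay under 100 per 100 seconds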
I hope I helped. :)

Related

How does pagination affect rate limit

I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I in theory query 200 different stock tickers every hour and get back (up to) 800 messages?
If so that sounds like a great way to get around the 30 message limit.
The documentation is unclear on this and we are rolling out new documentation that explains this more clearly.
Every stream request will have a default and max limit of 30 messages per response, regardless of whether the cursor params are present or not. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if sending your access token along with the request: 200 requests per hour for unauthenticated requests and 400 for authenticated requests.
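As a rough illustration of that math (a sketch only; the symbol-stream endpoint path is taken from the StockTwits docs linked above, and the ticker list is made up; the rate figures come from the answer):

import time

import requests

symbols = ['AAPL', 'MSFT', 'GOOG']  # ... up to 200 tickers
messages = []
for symbol in symbols:
    resp = requests.get(
        'https://api.stocktwits.com/api/2/streams/symbol/{}.json'.format(symbol)
    )
    messages.extend(resp.json().get('messages', []))  # at most 30 per response
    time.sleep(3600 / 200)  # spread 200 unauthenticated requests over an hour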

Google Analytics Reporting API V4 batchGet quota

In the Reporting API V4 you can do a batchGet and send up to 5 requests at once.
How does this relate to the quota? Does it count as one request even if I put multiple ones in the request?
Limits and quotas
It depends on what limits and quotas you are talking about. Note that you can always check the API-specific quotas in the Developer Console.
Quota group for the Analytics Reporting API V4:
Each batchGet request counts as one request against these quotas:
Requests per day per project: 50,000
Requests per 100 seconds per project: 2,000
Requests per 100 seconds per user per project: 100.
Meaning you can put up to 5 requests into each batchGet for a total of 250,000 requests per day.
General reporting quotas
There are some general reporting quotas, against which each individual request within a batchGet counts individually:
10,000 requests per view (profile) per day.
10 concurrent requests per view (profile).
This means if you put 5 requests in a single batchGet and make 2 batchGet requests at the same time you will be at the 10 concurrent requests per view limit, and if you continue to put 5 requests in each batchGet request throughout the day you will only be able to make 2,000 batchGet requests against a single view.
Analytics Reporting API V4 batchGet considerations
A note about the ReportRequest objects within a batchGet method.
Every ReportRequest within a batchGet method must contain the same (see the sketch after this list):
viewId
dateRanges
samplingLevel
segments
cohortGroup
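A minimal sketch of such a batched call (assuming the Python client and an already-built analytics service object; the view ID, dates, metrics, and dimensions are placeholders):

# Both ReportRequests share the same viewId and dateRanges, as required above.
shared = {
    'viewId': 'XXXXXX',
    'dateRanges': [{'startDate': '7daysAgo', 'endDate': 'today'}],
}

response = analytics.reports().batchGet(
    body={
        'reportRequests': [
            dict(shared, metrics=[{'expression': 'ga:sessions'}],
                 dimensions=[{'name': 'ga:date'}]),
            dict(shared, metrics=[{'expression': 'ga:pageviews'}],
                 dimensions=[{'name': 'ga:pagePath'}]),
        ]
    }
).execute()
# Counts as 1 request against the project quotas, but as 2 requests
# against the per-view (profile) quotas.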

Getting QuotaExceededException - What are the operation quota limitations for Azure Notification Hubs?

I was doing some latency/performance testing for sending push notifications with Azure Notification Hub by consecutively sending many notifications in a foreach loop. It worked fine for 100 "SendNotification" requests, although it was relatively slow (14 s), but I got a QuotaExceededException for 1000 requests in a row:
[QuotaExceededException: The remote server returned an error: (403)
Forbidden. The request was terminated because the namespace
pushnotification-testing is being throttled. Please wait 60 seconds
and try again. TrackingId:...
Even when I don't wait for 60 seconds as advised, I can again execute 100 consecutive requests, but 1000 requests in a row always fail... Anything slightly above 100 consecutive requests fails most of the time...
I couldn't find any documentation on these limitations. This should be documented somewhere, so I can be sure Azure Notification Hubs will fit my needs.
The answer to this question says
There is throttling of the CRUD operation rate. Quotas depend on the tier you are on, but it is not going to be less than 2,000 operations per minute per namespace in any case. If the quota is exceeded then the service returns 403.
For me, it seems to be less than 2,000 operations. By the way, I'm using the "FREE" tier for testing, but I guess we would switch to "STANDARD" for production.
Has anyone similar experiences or knows where to look for more information?
In particular, what are the operation quota limitations per timeframe for the different tiers of Azure Notification Hubs?
UPDATE 1: It's weird, but sending 1000 requests in parallel works most of the time, while consecutively it fails on the 101st request.
To the best of my knowledge, right now NH has the following limitations on the number of SENDS (not registrations) per namespace per minute per NH machine:
Free tier: 100
Basic tier: 900
Standard tier: 11,500
Massive sending in parallel allows you to send more because the calls are very likely to be routed to different machines.
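If you need to stay on the Free tier, a simple client-side pacing loop (a sketch only; payloads and send_notification() stand in for whatever list and SDK/REST call you actually use to send) keeps you under the 100-sends-per-minute figure quoted above:

import time

SENDS_PER_MINUTE = 100  # Free tier limit quoted above

for i, payload in enumerate(payloads):  # payloads: your list of notifications
    send_notification(payload)          # hypothetical send helper
    if (i + 1) % SENDS_PER_MINUTE == 0:
        time.sleep(60)  # pause a minute after every 100 sends to respect the quota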

azure mobile service custom api script http request timeout

I implemented exports.get = function(request, response) of a custom API on an Azure Mobile Service. I download 5 thousand records from the REST service and then prepare the JSON for the output. The problem is that downloading all the records takes too long, so the script exceeds the default timeout of 30 seconds. I was wondering if there is a way to increase the timeout of the response.
I don't believe you can have a timeout greater than 30 seconds, as I have encountered this problem myself with Azure custom APIs. According to this link https://msdn.microsoft.com/en-us/library/azure/dd894042.aspx, table operations are limited to 30 seconds; it's not entirely clear whether that applies to custom APIs, but it certainly appears to.
What I would recommend is to implement pagination and return a limited number of records at a time. Your parameters should include the start index and the number of records to return, and your response should include the total number of records, so you can determine how many requests are needed to fetch everything.
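For illustration, a paging client loop along those lines might look like this (a sketch only; the endpoint URL and the start/count/total parameter and field names are made up to match the recommendation above):

import requests

url = 'https://your-service.azure-mobile.net/api/records'  # hypothetical custom API
page_size = 500
records = []
start = 0
total = None

while total is None or start < total:
    resp = requests.get(url, params={'start': start, 'count': page_size})
    body = resp.json()
    records.extend(body['records'])  # one page of records
    total = body['total']            # total count reported by the API
    start += page_size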

Facebook get shares count for each post, got (#613) Calls to stream have exceeded the rate of 600 calls per 600 seconds

We are using the Graph API to get the number of shares for all posts on each page of our clients, running once per day. We use graph.facebook.com/post_id, but we often get
(#613) Calls to stream have exceeded the rate of 600 calls per 600 seconds
I tried using batch requests, but it seems each request in the batch is counted against the limit. Any suggestions?
Here are our findings so far:
FQL stream table doesn't have a field for "shares".
Post insights have no metric matching the "#shares" as shown on the page wall.
Graph API call for post will reach limit quickly.
Make fewer calls - that's the only real answer here, assuming you've already made other optimisations, like asking for multiple posts' details in a single call (via the ?ids=X,Y,Z syntax mentioned on the homepage of the Graph API documentation).
Why does it need to be done 'once per day'? Why not spread the calls out over a few hours?
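As an illustration of that ?ids= batching (a sketch only; the post IDs, the ACCESS_TOKEN variable, and reading the count from the shares field are assumptions about your setup):

import requests

post_ids = ['PAGEID_POSTID1', 'PAGEID_POSTID2', 'PAGEID_POSTID3']  # hypothetical IDs
resp = requests.get(
    'https://graph.facebook.com/',
    params={
        'ids': ','.join(post_ids),     # several posts in a single call
        'fields': 'shares',
        'access_token': ACCESS_TOKEN,  # your page/app token
    },
)
for post_id, data in resp.json().items():
    print(post_id, data.get('shares', {}).get('count', 0))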
It doesn't matter if you request by batch; each item will still be counted as one hit and you will reach the same limit. It's indicated in the FB docs:
https://developers.facebook.com/docs/graph-api/advanced/rate-limiting
You can try distributing your load with a timeout or delay in your cron job or something. Executing the first batch and then the next batch an hour later is probably the safest.