I keep hitting `You have exceeded a secondary rate limit` on the GitHub API after just 4 requests in a minute when using search

I am using the API endpoint https://api.github.com/search/code? and after just 3 or 4 requests in a minute the response tells me that I have exceeded my limit.
According to the documentation, the limit is 30 requests per minute.

Related

Why do I keep getting "Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'USER-100s'" errors?

I am currently managing two Google Analytics Management Accounts with many clients and view_ids on each one. The task is to request client data via the Google Analytics Reporting API (v4) and store it in a SQL backend on a daily basis via an Airflow DAG structure.
For the first account everything works fine.
Just recently I added the second account to the data request routine.
The problem is that even though both accounts are set to the same "USER-100s" quota limits, I keep getting this error for the newly added account:
googleapiclient.errors.HttpError: <HttpError 429 when requesting https://analyticsreporting.googleapis.com/v4/reports:batchGet?alt=json returned "Quota exceeded for quota group 'AnalyticsDefaultGroup' and limit 'USER-100s' of service 'analyticsreporting.googleapis.com' for consumer 'project_number:XXXXXXXXXXXX'.">
I already set the quota limit "USER-100s" from 100 to the maximum of 1000, as recommended in the official Google guidelines (https://developers.google.com/analytics/devguides/config/mgmt/v3/limits-quotas).
I also checked the Google API Console and the number of requests for my project number, but I have never exceeded the 1000 requests per 100 seconds so far (see request history, account 2), while the first account always works (see request history, account 1). Still, the above error appeared.
I could also rule out the possibility that the 2nd account's clients simply have more data.
I am now down to a try-except loop that keeps on requesting until the data is eventually queried successfully, like:
from googleapiclient.errors import HttpError

success = False
data = None
while not success:
    try:
        data = query_data()  # trying to receive data from the API
        if data:
            success = True
    except HttpError as e:
        print(e)
This is not elegant at all and bad for maintainability (e.g., for integration tests). In addition, it is very time- and resource-intensive, because the loop might sometimes run indefinitely. It can only be a short-term workaround.
This is especially frustrating because the same implementation works with the first account, which makes more requests, but fails with the second account.
If you know any solution to this, I would be very happy to hear it.
Cheers Tobi
I know this question has been here for a while, but let me try to help you. :)
There are 3 standard request limits:
50,000 requests per day per project
2,000 requests per 100 seconds per project
100 requests per 100 seconds per user
As you showed in your image (https://i.stack.imgur.com/Tp76P.png), the quota group "AnalyticsDefaultGroup" refers to your API project, and the user quota is included in this limit.
Per your description, you are hitting the user quota, and that usually happens when you don't provide the userIp or quotaUser in your requests.
So there are two main points you have to handle to prevent those errors:
Include the quotaUser parameter with a unique string in every request;
Keep to 1 request per second.
Judging by your code, I will presume that you are using the default Google API Client for Python (https://github.com/googleapis/google-api-python-client), which doesn't have a global way to define the quotaUser.
To include the quotaUser:
analytics.reports().batchGet(
    body={
        'reportRequests': [{
            'viewId': 'your_view_id',
            'dateRanges': [{'startDate': '2020-01-01', 'endDate': 'today'}],
            'pageSize': '1000',
            'pageToken': pageToken,
            'metrics': [],
            'dimensions': []
        }]
    },
    quotaUser='my-user-1'
).execute()
That will make the Google API register the request for that user, using 1 of that user's 100-request limit, instead of counting every request against a single user for your whole project.
Limit to 1 request per second
If you plan to make a lot of requests, I suggest including a delay between every request using:
time.sleep(1)
right after each request to the API. That way you can stay under 100 requests per 100 seconds.
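Putting both points together, a minimal sketch of a throttled query loop might look like this (the view_ids list and the per-client quotaUser strings are hypothetical placeholders, not part of the original code):

import time

def fetch_view(analytics, view_id, client_id):
    # Attribute the request to a unique user so it counts against the
    # per-user quota for that client, not one shared user.
    return analytics.reports().batchGet(
        body={'reportRequests': [{
            'viewId': view_id,
            'dateRanges': [{'startDate': '2020-01-01', 'endDate': 'today'}],
            'metrics': [{'expression': 'ga:sessions'}]
        }]},
        quotaUser=client_id  # unique string per client
    ).execute()

reports = []
for view_id, client_id in view_ids:  # e.g. [('12345678', 'client-a'), ...]
    reports.append(fetch_view(analytics, view_id, client_id))
    time.sleep(1)  # stay at ~1 request per second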
I hope I helped. :)

How does pagination affect the rate limit?

I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I in theory query 200 different stock tickers every hour and get back (up to) 800 messages?
If so, that sounds like a great way to get around the 30-message limit.
The documentation is unclear on this, and we are rolling out new documentation that explains it more clearly.
Every stream request has a default and maximum limit of 30 messages per response, regardless of whether the cursor params are present or not. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if you send your access token along with the requests: the limit is 200 requests per hour for unauthenticated requests and 400 for authenticated requests.
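To illustrate that math, here is a rough sketch of pulling 30 messages from each of a list of symbol streams while staying under the authenticated limit of 400 requests per hour. The endpoint path and the access_token parameter are my assumptions about the StockTwits REST API, not details from the answer above:

import time
import requests

ACCESS_TOKEN = 'your-oauth-token'  # hypothetical token
tickers = ['AAPL', 'MSFT', 'GOOG']  # up to ~400 symbols per hour when authenticated

messages = []
for symbol in tickers:
    # Assumed endpoint shape; each response carries at most 30 messages.
    resp = requests.get(
        f'https://api.stocktwits.com/api/2/streams/symbol/{symbol}.json',
        params={'access_token': ACCESS_TOKEN},
    )
    resp.raise_for_status()
    messages.extend(resp.json().get('messages', []))
    time.sleep(9)  # 3600 s / 400 requests = one request every 9 seconds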

Google Analytics Reporting API V4 batchGet quota

In the Reporting API V4 you can do a batchGet and send up to 5 requests at once.
How does this relate to the quota? Does it count as one request even if I put multiple ones in the request?
Limits and quotas
It depends on what limits and quotas you are talking about. Note you can always check the API-specific quotas in the Developer Console.
Quota group for the Analytics Reporting API V4:
Each batchGet request counts as one request against these quotas:
Requests per day per project: 50,000
Requests per 100 seconds per project: 2,000
Requests per 100 seconds per user per project: 100
Meaning you can put up to 5 requests into each batchGet for a total of 250,000 requests per day.
General reporting quotas
There are also some general reporting quotas, against which each individual request within a batchGet counts individually:
10,000 requests per view (profile) per day.
10 concurrent requests per view (profile).
This means that if you put 5 requests in a single batchGet and make 2 batchGet requests at the same time, you will be at the limit of 10 concurrent requests per view; and if you keep putting 5 requests in each batchGet throughout the day, you will only be able to make 2,000 batchGet requests against a single view.
Analytics Reporting API V4 batchGet considerations
A note about the ReportRequest objects within a batchGet method.
Every ReportRequest within a batchGet method must contain the same:
viewId
dateRanges
samplingLevel
segments
cohortGroup
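For illustration, a single batchGet packing five ReportRequests that share the same viewId and dateRanges might look like the sketch below. The metric names and the analytics client object are assumptions (built with google-api-python-client, as in the earlier answer), not something prescribed by the API:

# Five ReportRequests in one batchGet: counts as 1 request against the
# project quotas, but as 5 requests against the per-view quotas.
shared = {
    'viewId': 'your_view_id',
    'dateRanges': [{'startDate': '7daysAgo', 'endDate': 'today'}],
}
metrics = ['ga:sessions', 'ga:users', 'ga:pageviews',
           'ga:bounceRate', 'ga:avgSessionDuration']
response = analytics.reports().batchGet(body={
    'reportRequests': [dict(shared, metrics=[{'expression': m}])
                       for m in metrics]
}).execute()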

Facebook: getting the share count for each post, got "(#613) Calls to stream have exceeded the rate of 600 calls per 600 seconds"

We are using the Graph API to get the number of shares for every post on each of our clients' pages, running once per day. We use graph.facebook.com/post_id, but we often get
(#613) Calls to stream have exceeded the rate of 600 calls per 600 seconds
I tried using a batch request, but it seems each request in the batch is counted against the limit. Any suggestions?
Here are our findings so far:
The FQL stream table doesn't have a field for "shares".
Post insights have no metric matching the "#shares" shown on the page wall.
Graph API calls per post reach the limit quickly.
Make fewer calls - that's the only real answer here, assuming you've already applied other optimisations, like asking for multiple posts' details in a single call (via the ?ids=X,Y,Z syntax mentioned on the homepage of the Graph API documentation; see the sketch after these answers).
Why does it need to be done 'once per day'? Why not spread the calls out over a few hours?
It doesn't matter if you request by batch; each item will still be counted as one hit and you will reach the same limit. It's indicated in the FB docs:
https://developers.facebook.com/docs/graph-api/advanced/rate-limiting
You can try distributing your load with a timeout or delay in your cron job or something. Or executing the first batch and the next batch an hour later is probably the safest.
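Putting the two suggestions above together (requesting many posts per call via ?ids= and spacing the calls out), a rough sketch could look like this; the 50-ID chunk size, the shares field shape, and the delay are my assumptions, not details from the answers:

import time
import requests

ACCESS_TOKEN = 'your-page-access-token'  # hypothetical token
post_ids = ['123_456', '123_789']  # the posts whose share counts you need

share_counts = {}
for i in range(0, len(post_ids), 50):  # one call covers a chunk of IDs
    batch = post_ids[i:i + 50]
    resp = requests.get(
        'https://graph.facebook.com/',
        params={'ids': ','.join(batch),
                'fields': 'shares',
                'access_token': ACCESS_TOKEN},
    )
    resp.raise_for_status()
    for post_id, data in resp.json().items():
        share_counts[post_id] = data.get('shares', {}).get('count', 0)
    time.sleep(60)  # spread chunks out to stay under 600 calls per 600 seconds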

GraphAPIError: (#613) Calls to mailbox_fql have exceeded the rate of 300 calls per 600 seconds

I'm writing an application that accesses Facebook's inbox every 30 seconds.
The first few calls work, but after that I keep getting the "GraphAPIError: (#613) Calls to mailbox_fql have exceeded the rate of 300 calls per 600 seconds." error.
There's no way I'm accessing the inbox 300 times in under 10 minutes.
Why is this happening?
Are you accessing multiple accounts? That rate limit is per application, not per user.