I am using the beta endpoint of the Office365 Outlook REST API to synchronize a large Office365 Outlook folder (see the doc here).
The response is paginated, and after many calls during the initial synchronization of this large folder, I received this error:
{"error":{"code":"LocalTime","message":"This operation exceeds the throttling budget for policy part 'LocalTime', policy value '0', Budget type: 'Ews'. Suggested backoff time 299499 ms."}}
It looks like I have been calling the API too heavily. What is the best way to handle this? Should I implement some kind of retry policy?
Yes, this is our current throttling mechanism, which is a temporary measure while our "real" throttling implementation is being deployed. To handle this, you'll need to do a retry after about 5 minutes.
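If it helps, here is a minimal sketch of such a retry policy in Python, assuming the folder sync is fetched with the requests library. The function name, the header handling and the 5-minute fallback are my own; the code simply honours the "Suggested backoff time ... ms" hint from the error body when it can find one.

import time
import requests

def sync_call_with_retry(url, headers, max_attempts=5):
    # Call one page of the sync endpoint, backing off when the service reports throttling.
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if "throttling budget" not in response.text:
            response.raise_for_status()
            return response.json()
        # Honour the "Suggested backoff time ... ms" hint if present, otherwise wait 5 minutes.
        delay_seconds = 300
        try:
            message = response.json()["error"]["message"]
            delay_seconds = int(message.split("backoff time")[1].split("ms")[0]) / 1000.0
        except (ValueError, KeyError, IndexError):
            pass
        time.sleep(delay_seconds)
    raise RuntimeError("Sync request still throttled after %d attempts" % max_attempts)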
When using GoogleCloudStorageComposeOperator in Google's Cloud Composer, we've started hitting TooManyRequests (HTTP 429):
The rate of change requests to the object path/file.csv exceeds the rate limit. Please reduce the rate of create, update, and delete requests.
What limit are we hitting? I think it's this limit but I'm not sure:
There is a write limit to the same object name of once per second, so rapid writes to the same object name won't scale.
Does anyone have a sane way around this issue? It usually works on retry, but it would be nice not to have to rely on retries succeeding.
It's hard to say without more details, but this is a Cloud Storage issue rather than a Composer issue. It is described in the Troubleshooting guide for Cloud Storage.
There you can find more references to dig into. On the Quotas and limits page I found:
When a project's bandwidth exceeds quota in a location, requests to affected buckets can be rejected with a retryable 429 error or can be throttled. See Bandwidth usage for information about monitoring your bandwidth.
It seems that this error is intended to be retried, so I think implementing a try/catch retry mechanism might be a solution.
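As a rough illustration of that try/catch idea in Python (a sketch only: the compose_callable wrapper is mine, and it assumes the google-cloud-storage client surfaces the 429 as google.api_core.exceptions.TooManyRequests):

import random
import time

from google.api_core.exceptions import TooManyRequests

def compose_with_backoff(compose_callable, max_attempts=5):
    # Retry a Cloud Storage write/compose that may hit the per-object rate limit (429).
    for attempt in range(max_attempts):
        try:
            return compose_callable()
        except TooManyRequests:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: ~1s, 2s, 4s, 8s plus up to 1s of noise.
            time.sleep(2 ** attempt + random.random())

In Composer itself, the usual equivalent is to set retries and retry_delay on the task, and to avoid writing to the same object name more than about once per second.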
I receive the following in the portal:
There was an error while deleting [THUMBPRINT HERE]. The server returned 500 error. Do you want to try again?
I suspect that there is an Azure Batch pool/node hanging on to the certificate; however, the pools/nodes using that certificate have already been deleted (at least they are not visible in the portal).
Is there a way to force-delete the certificate? In normal operation my release pipeline relies on being able to delete the certificate.
Intercepting Azure PowerShell with Fiddler, I can see this in the HTTP response, so it appears to be timing out.
{
  "odata.metadata": "https://ttmdpdev.northeurope.batch.azure.com/$metadata#Microsoft.Azure.Batch.Protocol.Entities.Container.errors/#Element",
  "code": "OperationTimedOut",
  "message": {
    "lang": "en-US",
    "value": "Operation could not be completed within the specified time.\nRequestId:[REQUEST ID HERE]\nTime:2017-08-23T16:54:23.1811814Z"
  }
}
I have also deleted any corresponding tasks and schedules, still no luck.
(Disclosure: At the time of writing, I work on the Azure Batch team, though not on the core service.)
500 errors are usually transient and may represent heavy load on Batch internals (as opposed to 503s which represent heavy load on the Batch API itself). The internal timeout error reflects this. It's possible there was an unexpected spike in demand on specific APIs which are high-cost but are normally low-usage. We monitor and mitigate these, but sometimes an extremely high load with an unusual usage pattern can impact service responsiveness. I'd suggest you keep trying every 10-15 minutes, and if it doesn't clear itself in a few hours then try raising a support ticket.
There is currently no way to force-delete the certificate. This is an internal safety mechanism to ensure that Batch is never in a position where it has to deploy a certificate of which it no longer has a copy. You could request such a feature via the Batch UserVoice.
Finally, regarding your specific scenario, you could see whether it's feasible to rejig your workflow so it doesn't have the dependency on certificate deletion. You could, for example, have a garbage collection tool (perhaps running using Azure Functions or Azure Scheduler) that periodically cleans out old certificates. Arguably this adds more complexity (and arguably shouldn't be necessary), but it improves resilience and in other ways simplifies the solution, as your main path no longer needs to worry so much about delays and timeouts. If you want to explore this path then perhaps post on the Batch forums and kick off a discussion with the team about possible design approaches.
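For what it's worth, the retry-every-10-15-minutes suggestion is easy to script. Here is a rough Python sketch (the wrapper and its parameters are my own, and delete_callable stands for whatever call your pipeline already makes, whether PowerShell, the REST API or an SDK):

import time

def delete_certificate_with_retry(delete_callable, wait_minutes=15, max_hours=3):
    # Keep retrying a transient 500/OperationTimedOut on certificate delete,
    # roughly every 10-15 minutes, for a few hours before escalating to support.
    deadline = time.time() + max_hours * 3600
    while True:
        try:
            return delete_callable()
        except Exception as error:  # in a real script, catch the SDK's specific error type
            if time.time() >= deadline:
                raise
            print("Delete failed (%s); retrying in %d minutes" % (error, wait_minutes))
            time.sleep(wait_minutes * 60)

The same wrapper could sit inside the periodic garbage-collection job mentioned above rather than in the main release pipeline.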
Is there any way to get all contributors from an organisation on GitHub using the GitHub API or any external service?
I am trying to get all contributors from the angular organisation using the GitHub API.
I've found only one solution:
Get all repos from the angular organisation using this request:
GET https://api.github.com/orgs/angular/repos
For each repo, get all its contributors with this request:
GET https://api.github.com/repos/angular/:repo/contributors
Merge all derived data to one array.
It seems to work, but I think this solution is very cumbersome. I'm sending around 300 requests this way, and they take around 20 seconds to process (the app is frozen until all the requests have finished).
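In Python, the approach looks roughly like this (a sketch: the token is a placeholder and pagination is handled via the Link header):

import requests

TOKEN = "YOUR_TOKEN"  # placeholder personal access token
HEADERS = {"Authorization": "token " + TOKEN}
API = "https://api.github.com"

def get_all_pages(url):
    # Follow the Link: rel="next" header until every page has been fetched.
    results = []
    while url:
        response = requests.get(url, headers=HEADERS)
        response.raise_for_status()
        if not response.content:  # empty repos return 204 No Content for contributors
            break
        results.extend(response.json())
        url = response.links.get("next", {}).get("url")
    return results

repos = get_all_pages(API + "/orgs/angular/repos?per_page=100")
contributors = {}
for repo in repos:
    url = API + "/repos/angular/%s/contributors?per_page=100" % repo["name"]
    for contributor in get_all_pages(url):
        contributors[contributor["login"]] = contributor
print("%d unique contributors" % len(contributors))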
Questions:
Are there any alternatives to this approach?
Is it OK for a registered GitHub app to make this many requests? To be clear, these 300 requests are sent every time the application starts.
Are there any alternatives to this approach?
No, not really -- I can't think of a better approach for this.
Is it OK for a registered GitHub app to make this many requests? To be clear, these 300 requests are sent every time the application starts.
You should be fine as long as you respect the primary and secondary GitHub API rate limits.
https://developer.github.com/v3/#rate-limiting
https://developer.github.com/guides/best-practices-for-integrators/#dealing-with-abuse-rate-limits
The primary limits allow you to make 5,000 authenticated requests per hour per user. The secondary limits will be triggered if you start making lots of concurrent requests (e.g. hundreds of requests per second for more than several seconds). So you should be fine if you need to make 300 requests, just make sure you dial down the concurrency.
It would be even better if the application cached some of this information so that it can make conditional requests:
https://developer.github.com/v3/#conditional-requests
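A conditional request just replays the ETag from the previous response, and a 304 Not Modified reply does not count against your rate limit. A rough sketch (the in-memory cache is mine and would need to be persisted between application runs):

import requests

etag_cache = {}  # url -> (etag, cached_json)

def get_with_cache(url, headers=None):
    headers = dict(headers or {})
    etag, cached = etag_cache.get(url, (None, None))
    if etag:
        headers["If-None-Match"] = etag
    response = requests.get(url, headers=headers)
    if response.status_code == 304:
        return cached  # unchanged since last time, served without spending rate limit
    response.raise_for_status()
    etag_cache[url] = (response.headers.get("ETag"), response.json())
    return etag_cache[url][1]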
I am the manager of an iOS application that uses the Google Places API. Right now I am limited to 100,000 requests, and during our testing one or two users could use up to 2,000 requests per day (without autocomplete). This means that only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota, but I still feel that I will need a very large quota based on these test results. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies the quota per end user instead of per API key, so that the quota grows as the user base grows? See here.
Theoretically I think it's possible, since it only needs a WebView or a JavaScript runtime to use the library, but I haven't seen anyone using this approach.
The documentation states that the API is limited to one email at a time per user, and that we should create threads and process multiple users at once.
Does anyone know if there is any other type of limitation? How many GB per hour?
I have to plan a migration of tens of thousands of accounts, and hardware resources are practically unlimited. Will I raise a flag somewhere or get blocked if I start migrating over 1,000 users at a time?
Thanks
The limits for the API are posted at https://developers.google.com/google-apps/email-migration/limits. There is a per-user rate limit in place of one request per second per user. If you exceed this you will start seeing 503 errors returned. The best way to deal with this is to implement an exponential backoff algorithm to handle the errors and retry the request.
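A minimal sketch of such an exponential backoff in Python (the wrapper and the placeholder exception are mine; upload_one_message stands for whatever per-user migration call your code makes):

import random
import time

class RateLimitError(Exception):
    """Placeholder for the 503 error your client library raises."""

def with_exponential_backoff(upload_one_message, max_attempts=6):
    # Retry the per-user migration call on 503 rate-limit errors,
    # waiting ~1s, 2s, 4s, ... plus random jitter between attempts.
    for attempt in range(max_attempts):
        try:
            return upload_one_message()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt + random.random())

Each user's mailbox should still be fed sequentially at no more than one request per second; the backoff only handles the occasional 503.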