Hitting TooManyRequests 429 when using GoogleCloudStorageComposeOperator

When using GoogleCloudStorageComposeOperator in Google's Cloud Composer, we've started hitting TooManyRequests (HTTP 429) errors:
The rate of change requests to the object path/file.csv exceeds the rate limit. Please reduce the rate of create, update, and delete requests.
What limit are we hitting? I think it's this limit but I'm not sure:
There is a write limit to the same object name of once per second, so rapid writes to the same object name won't scale.
Does anyone have a sane way around this issue? It usually works on retry, but it would be nice not to have to rely on retries succeeding.

It's hard to say without more details, but this is a Cloud Storage issue rather than a Composer one. It is described in the Troubleshooting guide for Cloud Storage.
There you can find more references to dig into. On the Quotas and limits page I found:
When a project's bandwidth exceeds quota in a location, requests to affected buckets can be rejected with a retryable 429 error or can be throttled. See Bandwidth usage for information about monitoring your bandwidth.
It seems this error is intended to be retried, so implementing a retry mechanism with backoff might be the solution.
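As a minimal sketch, assuming the google-cloud-storage Python client (the bucket and object names below are placeholders), you could retry the compose call with exponential backoff whenever GCS returns a retryable 429:

import time

from google.api_core.exceptions import TooManyRequests
from google.cloud import storage

def compose_with_backoff(bucket_name, source_names, destination_name, max_attempts=5):
    # Compose source objects into a destination object, backing off on 429s.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    sources = [bucket.blob(name) for name in source_names]
    destination = bucket.blob(destination_name)
    for attempt in range(max_attempts):
        try:
            destination.compose(sources)
            return
        except TooManyRequests:
            if attempt == max_attempts - 1:
                raise
            # The same-object write limit is about one update per second,
            # so wait at least that long and grow the delay each attempt.
            time.sleep(2 ** attempt)

If you stay with the operator itself, a similar effect can be had at the task level with the BaseOperator arguments retries, retry_delay, and retry_exponential_backoff.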

Related

Google Cloud Storage quota hit - how?

When my app is trying to access files in a bucket using a SignedURL, a 429 response is received:
<Error>
<Code>InsufficientQuota</Code>
<Message>
The App Engine application does not have enough quota.
</Message>
<Details>App s~[myappname] not have enough quota</Details>
</Error>
This error continues until the end of the day, when the quota is apparently reset, and then I can use storage again. It's only a small app and it does not have much usage. The project that contains the storage is set up to use billing. The files are being accessed from another project, which is also set up to use billing.
I'm not aware that Google Cloud Storage has any quotas that could be hit in this fashion. The only ones I know of are the ones here: https://cloud.google.com/storage/quotas but as far as I am aware, none of them apply.
Buckets are not being created or destroyed.
Updates are not being made to buckets.
There are only a couple of IAM identities.
There are no Pub/Sub notifications.
Objects stored in the buckets are small.
Is there any way I can find out why the quota is being exceeded?
It turns out it was because of a spending limit I had set on App Engine. I didn't think those spending limits applied any more, but that only holds for new projects: spending limits already set on existing projects are still effective, and I can personally attest that they work!
Thanks for the comments @KevinQuinzel and @gso_gabriel.

Uber sandbox-API returns 429

I am unable to hit the Uber sandbox API; it is returning a 429 error due to rate limiting. I have experimented with some scenarios that I need to demo to my leadership team, and budget allocation will be decided based on this. Can you increase the limit now, and is there a contact person I can talk to?
Per our direct discussion, Uber believes there is potentially a background process your app is running that is causing the rate limits. From our traffic logs we see your app being correctly rate limited even after we have increased the limits. Let's continue your specific discussion on that direct thread. If we discover anything that is generally applicable or something that would be helpful for this wider audience, I will add it to this thread.
Thanks,
Kyle

"Synchronize message" with a Large folder with Outlook REST API

I am using the beta Office 365 Outlook REST API endpoint to synchronize a large Outlook folder, see doc here.
The response is paginated, and after many calls during the first synchronization of this large folder, I received this error:
{"error":{"code":"LocalTime","message":"This operation exceeds the throttling budget for policy part 'LocalTime', policy value '0', Budget type: 'Ews'. Suggested backoff time 299499 ms."}}
It looks like I have made too many requests to the API. What is the best way to handle this? Should I implement some kind of retry policy?
Yes, this is our current throttling mechanism, which is a temporary measure while our "real" throttling implementation is being deployed. To handle this, you'll need to do a retry after about 5 minutes.
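A rough sketch of such a retry policy, assuming the requests library and the error shape shown above (the URL, headers, and error-detection logic are assumptions, not the documented contract), would be to honor the suggested backoff hint before retrying:

import re
import time

import requests

def get_sync_page(url, headers, max_attempts=3):
    # Fetch one page of the folder sync, retrying when the API is throttled.
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if "throttling budget" not in response.text:
            return response
        # Honor the "Suggested backoff time NNN ms" hint if present;
        # otherwise fall back to the ~5 minutes suggested above.
        match = re.search(r"backoff time (\d+) ms", response.text)
        time.sleep(int(match.group(1)) / 1000 if match else 300)
    raise RuntimeError("still throttled after %d attempts" % max_attempts)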

Google Places API - How much can I uplift the quota with uplift quota request form?

I am the manager of an iOS application that uses the Google Places API. Right now I am limited to 100,000 requests, and during our testing, one or two users could use up to 2,000 requests per day (without autocomplete). This means that only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota, but based on these test results I feel I will need a very large quota. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
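As an illustration of the caching idea, here is a hypothetical sketch that memoizes Places lookups for a day so repeated queries don't burn quota; fetch_place stands in for whatever function actually calls the API, and you should check the Terms of Service for what you are allowed to cache:

import time

CACHE_TTL = 24 * 60 * 60  # seconds
_cache = {}  # query -> (timestamp, response)

def cached_lookup(query, fetch_place):
    # Return a cached response if it is still fresh, else call the API.
    now = time.time()
    hit = _cache.get(query)
    if hit is not None and now - hit[0] < CACHE_TTL:
        return hit[1]
    response = fetch_place(query)
    _cache[query] = (now, response)
    return response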
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies the quota per end user instead of per API key, so that the quota grows as your user base grows? See here
Theoretically I think it's possible, since the library only needs a WebView or a JavaScript runtime, but I haven't seen anyone use this approach.

Email migration API limits

In the documentation, it states that the API is limited to migrating one email at a time per user, and that we should create threads to process multiple users at once.
Does anyone know if there is any other type of limitation? How many GB per hour?
I have to plan a migration of tens of thousands of accounts, and hardware resources are practically unlimited. Will I raise a flag somewhere or get blocked if I start migrating over 1,000 users at a time?
Thanks
The limits for the API are posted at https://developers.google.com/google-apps/email-migration/limits. There is a per-user rate limit in place of one request per second. If you exceed this you will start seeing 503 errors returned. The best way to deal with this is to implement an exponential backoff algorithm to handle the errors and retry the request.
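A minimal sketch of that exponential backoff pattern, with jitter; upload_message and RateLimited are hypothetical stand-ins for the actual migration call and whatever your client raises on an HTTP 503:

import random
import time

class RateLimited(Exception):
    # Stand-in for the exception your client raises on an HTTP 503.
    pass

def upload_with_backoff(upload_message, message, max_attempts=6):
    for attempt in range(max_attempts):
        try:
            return upload_message(message)
        except RateLimited:
            if attempt == max_attempts - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus jitter so parallel workers
            # migrating different users do not retry in lockstep.
            time.sleep(2 ** attempt + random.random())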