Uber sandbox-API returns 429 - uber-api

I am unable to hit the Uber Sandbox API. It is returning a 429 error, which means I am being rate limited. I have experimented with some scenarios that I need to demo to my leadership team, and budget allocation will be decided based on this. Can you increase the limit now, and is there a contact person I can talk to?

Per our direct discussion, Uber believes there is potentially a background process your app is running that is causing the rate limits. From our traffic logs we see your app being correctly rate limited even after we have increased the limits. Let's continue your specific discussion on that direct thread. If we discover anything that is generally applicable or something that would be helpful for this wider audience, I will add it to this thread.
Thanks,
Kyle

Related

Hitting TooManyRequests 429 when using GoogleCloudStorageComposeOperator

When using GoogleCloudStorageComposeOperator in Google's Cloud Composer we've started hitting TooManyRequests, HTTP 429.
The rate of change requests to the object path/file.csv exceeds the rate limit. Please reduce the rate of create, update, and delete requests.
What limit are we hitting? I think it's this limit but I'm not sure:
There is a write limit to the same object name of once per second, so rapid writes to the same object name won't scale.
Does anyone have a sane way around this issue? It usually works on retry, but it would be nice not to have to rely on the retry succeeding.
It's hard to say without more details, but this is a Cloud Storage issue rather than a Composer issue. It is described in the Troubleshooting guide for Cloud Storage.
There you can find more references to dig into. On the Quotas and Limits page I found:
When a project's bandwidth exceeds quota in a location, requests to
affected buckets can be rejected with a retryable 429 error or can be
throttled. See Bandwidth usage for information about monitoring your
bandwidth.
It seems that this error is intended to be retried, so implementing a try/catch retry mechanism might be a solution.
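As a sketch of what that might look like if you call Cloud Storage directly, here is a retry-with-backoff wrapper using the google-cloud-storage Python client; the function name, attempt count, and delays are assumptions for illustration, not from the original thread:

```python
import random
import time

from google.api_core.exceptions import TooManyRequests  # raised for HTTP 429
from google.cloud import storage


def compose_with_backoff(bucket_name, source_names, destination_name,
                         max_attempts=5, base_delay=1.0):
    """Compose GCS objects, retrying 429s with exponential backoff and jitter."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    destination = bucket.blob(destination_name)
    sources = [bucket.blob(name) for name in source_names]

    for attempt in range(max_attempts):
        try:
            destination.compose(sources)  # compose call via the Python client
            return
        except TooManyRequests:
            if attempt == max_attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... plus up to one second of random jitter.
            time.sleep(base_delay * (2 ** attempt) + random.random())
```

Because the limit being hit is "one write per second to the same object name", spacing out the retries like this is usually enough for the compose to eventually succeed without manual intervention.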

How long does an app in development get banned from Facebooks if it exceeds limits?

I have an app I'm developing against Facebook that timed out a few hours ago during my first production use. Of course I tried to get it to do too much and the HTTP call timed out. So, I rewrote what I was doing to use threaded connections, which sped up the interaction significantly! However, I was so engrossed in getting my interaction to speed up (it equated to about 25-50 calls; I'm not exactly sure, I was expecting 25 but some of my results show it was 50) that I didn't even stop to think about how fast I was hitting Facebook.
So, I started getting "Uncaught OAuthException: It looks like you were misusing this feature by going too fast. You've been blocked from using it.", which is what I now get even if I try to run my program with only 1 hit. I've added a sleep into my system to limit the hits to 1/second, but I'm concerned that my app (which was not making public posts, so no one could have been bothered by them) is now forever banned from Facebook, as it says I'm banned from the feature with a reference to learn about blocks in the Help Center; except I can't find any reference in the Help Center to my specific situation.
Does anyone know how long my app is out of commission?
And what are the specific limits regarding the speed at which you can access Facebook? (Reference please, because I've searched the hell out of FB and can't find one.)
It depends on what has blocked you. In this case it was a spam bot that stopped me from posting comments into a group. Apparently there is a non-specific number of times you can post comments in a group in a short amount of time. The amount varies, but hovers around 150ish give or take 50 (at the time of my tests).
The ban appeared to be consistently set to about 19 hours at that time (May 2014). I've confirmed by continued testing in test groups and subsequent bans. However, Facebook developers are unable to give a solid set of numbers as they say it's controlled by a spam algorithm which changes based on server usage. So, 150 comments within about 3 minutes = ban for about 19 hours.
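For reference, the kind of client-side throttle the question describes (at most one call per second) can be as simple as the sketch below; `post_comment` is a stand-in for whatever Graph API call you are making, and the one-second interval is just the value the question mentions:

```python
import time


class Throttle:
    """Ensure at least `min_interval` seconds pass between successive calls."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()


def post_comment(text):
    """Placeholder for the real Graph API call."""
    print("posting:", text)


throttle = Throttle(min_interval=1.0)
for comment in ["first", "second", "third"]:
    throttle.wait()   # blocks until a full second has passed since the last post
    post_comment(comment)
```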

Google Places API - How much can I uplift the quota with uplift quota request form?

I am the manager of an iOS application and it uses Google Places API. Right now I am limited to 100,000 requests and during our testing, one or two users could use up to 2000 requests per day (without autocomplete). This means that only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota but I still feel that I will need a very large quota based on these test results. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
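As an illustration of that caching suggestion, a small in-memory TTL cache in front of the Places lookup might look like the sketch below; `fetch_place_details` and the 24-hour TTL are assumptions for the example, not part of the original answer:

```python
import time


class TTLCache:
    """Cache lookups for `ttl_seconds` so repeat queries don't hit the API again."""

    def __init__(self, ttl_seconds=24 * 3600):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.time())


def fetch_place_details(place_id):
    """Hypothetical wrapper around the actual Places API request."""
    return {"place_id": place_id}


cache = TTLCache()


def place_details(place_id):
    cached = cache.get(place_id)
    if cached is not None:
        return cached                       # served from cache, no quota used
    result = fetch_place_details(place_id)  # only cache misses cost a request
    cache.put(place_id, result)
    return result
```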
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies its quota per end user instead of per API key, so that the quota grows as your user base grows? See here
Theoretically I think it's possible, since it just needs a WebView or JavaScript runtime to use the library, but I haven't seen anyone use this approach.

Email migration API limits

In the documentation, it states that the API is limited to one email per user, and that we should create threads and process multiple users at once.
Does anyone know if there is any other type of limitation? How many GB/hour?
I have to plan a migration of tens of thousands of accounts, and hardware resources are practically unlimited. Will I raise a flag somewhere or get blocked if I start migrating over 1,000 users at a time?
Thanks
The limits for the API are posted at https://developers.google.com/google-apps/email-migration/limits. There is a per-user rate limit in place of one request per second per user. If you exceed this you will start seeing 503 errors returned. The best way to deal with this is to implement an exponential backoff algorithm to handle the errors and retry the request.
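A minimal sketch of that backoff pattern is below; `migrate_message`, the exception class, and the retry parameters are placeholders for illustration, not part of the API documentation:

```python
import random
import time


class ServiceUnavailable(Exception):
    """Stand-in for however your HTTP client reports a 503 response."""


def migrate_message(user, message):
    """Placeholder for the actual Email Migration API insert request."""
    raise ServiceUnavailable()


def migrate_with_backoff(user, message, max_attempts=6):
    """Retry 503s with exponential backoff plus jitter, as the answer suggests."""
    for attempt in range(max_attempts):
        try:
            return migrate_message(user, message)
        except ServiceUnavailable:
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, 8s, ... plus up to one second of random jitter.
            time.sleep((2 ** attempt) + random.random())
```

Keeping each user's requests sequential (one in flight per user) while running many users in parallel stays within the per-user rate limit and still lets the overall migration scale out.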

How should I benchmark a system to determine the overall best architecture choice?

This is a bit of an open ended question, but I'm looking for an open ended answer. I'm looking for a resource that can help explain how to benchmark different systems, but more importantly how to analyze the data and make intelligent choices based on the results.
In my specific case, I have a 4-server setup, including Mongo, that serves as the backend for an iOS game. All servers are running Ubuntu 11.10. I've read numerous articles that make suggestions like "if CPU utilization is high, make this change." As a newcomer to backend architecture, I have no concept of what "high CPU utilization" is.
I am using Mongo's monitoring service (MMS), and I am gathering some information about it, but I don't know how to make choices or identify bottlenecks. Other servers serve requests from the game client to mongo and back, but I'm not quite sure how I should be benchmarking or logging important information from them. I'm also using Amazon's EC2 to host all of my instances, which also provides some information.
So, some questions:
What statistics are important to log on a backend setup? (CPU, RAM, etc)
What is a good way to monitor those statistics?
How do I analyze the statistics? (RAM usage is high/read requests are low, etc)
What tips should I know before trying to create a stress-test or benchmarking script for my architecture?
Again, if there is a resource that answers many of these questions, I don't need an explanation here, I was just unable to find one on my own.
If more details regarding my setup are helpful, I can provide those as well.
Thanks!
I like to think of performance testing as a mini-project that is undertaken because there is a real-world need. Start with the problem to be solved: is the concern that users will have a poor gaming experience if the response time is too slow? Or is the concern that too much money will be spent on unnecessary server hardware?
In short, what is driving the need for the performance testing? This exercise is sometimes called "establishing the problem to be solved." It is about the goal to be achieved, because if there is no goal, why go through all the work of testing performance? Establishing the problem to be solved will eventually drive what to measure and how to measure it.
After the problem is established, the next step is to write down what questions have to be answered to know when the goal is met. For example, if the goal is to ensure the response times are low enough to provide a good gaming experience, some questions that come to mind are:
What is the maximum response time before the gaming experience becomes unacceptably bad?
What is the maximum response time that is indistinguishable from zero? That is, if 200 ms response time feels the same to a user as a 1 ms response time, then the lower bound for response time is 200 ms.
What client hardware must be considered? For example, if the game only runs on iOS 5 devices, then testing an original iPhone is not necessary because the original iPhone cannot run iOS 5.
These are just a few questions I came up with as examples. A full, thoughtful list might look a lot different.
After writing down the questions, the next step is to decide what metrics will provide answers to the questions. You have probably come across a lot of metrics already: response time, transactions per second, RAM usage, CPU utilization, and so on.
After choosing some appropriate metrics, write some test scenarios. These are the plain English descriptions of the tests. For example, a test scenario might involve simulating a certain number of games simultaneously with specific devices or specific versions of iOS for a particular combination of game settings on a particular level of the game.
Once the scenarios are written, consider writing the test scripts for whatever tool is simulating the server workloads. Then run the scripts to establish a baseline for the selected metrics.
After a baseline is established, change parameters and chart the results. For example, if one of the selected metrics is CPU utilization versus the number of TCP packets entering the server per second, make a graph to find out how utilization changes as packets/second goes from 0 to 10,000.
In general, observe what happens to performance as the independent variables of the experiment are adjusted. Use this hard data to answer the questions created earlier in the process.
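As a concrete illustration of the script-and-baseline step, a minimal load-test sketch might look like the following; the endpoint URL, request count, and concurrency are made-up values to show the measurement, not recommendations for your setup:

```python
import concurrent.futures
import statistics
import time
import urllib.request

URL = "http://localhost:8080/health"   # hypothetical endpoint on the game backend
REQUESTS = 200
CONCURRENCY = 10


def timed_request(_):
    """Issue one request and return its latency in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return (time.monotonic() - start) * 1000


# Fire REQUESTS requests with CONCURRENCY workers and collect the latencies.
with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

print(f"median: {statistics.median(latencies):.1f} ms")
print(f"95th percentile: {latencies[int(len(latencies) * 0.95)]:.1f} ms")
print(f"max: {latencies[-1]:.1f} ms")
```

Running a script like this repeatedly while varying one parameter at a time (concurrency, payload size, game settings) gives you the baseline and the charts described above.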
I did a Google search on "software performance testing methodology" and found a couple of good links:
Check out this white paper Performance Testing Methodology by Johann du Plessis
Have a look at the Methodology section of this Wikipedia article.