I've been looking on the RT site but cannot find any details; I'm just piecing it together from what I've read on forums:
It appears the Rotten Tomatoes API is limited to 10k calls per day (one call every 8.64 seconds), per IP address. E.g. with one API key on two separate computers (different IPs), they will not affect each other's limits.
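For reference, that figure works out to 86,400 seconds / 10,000 calls = 8.64 seconds between calls. Below is a minimal client-side throttle sketch (shown in Python for brevity, even though the app is iOS); the `apikey` parameter name and the idea of throttling on the client are my assumptions, not anything confirmed by RT:

```python
import time
import requests

API_KEY = "YOUR_RT_API_KEY"            # placeholder key
DAILY_LIMIT = 10_000                   # calls per day, per the forum posts
MIN_INTERVAL = 86_400 / DAILY_LIMIT    # 86,400 s in a day / 10,000 = 8.64 s

_last_call = 0.0

def rt_get(url, **params):
    """GET from the RT API, never faster than one call every 8.64 seconds."""
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)
    params["apikey"] = API_KEY         # assumed query-parameter name
    _last_call = time.time()
    return requests.get(url, params=params, timeout=10)
```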
Is this true? Does anyone know? For background: this is for an iPhone app.
Thanks
I have taken this question to the RT forum; close-voters can get busy closing this thread if you wish:
http://developer.rottentomatoes.com/forum/read/123466
I'm trying to interpret the Rate limits for GitHub Apps documentation. Is the limit for your app as a whole, no matter how many orgs it is installed in, or is the limit for each organization that installs it?
For example, let's say I created the GitHub app "foo". GitHub organizations "bar" and "biz" install my app "foo".
Do I have a 5,000-request minimum rate limit for API calls against the GitHub org "bar", and a separate 5,000-request minimum rate limit for API calls against the GitHub org "biz"?
Or do I have one 5,000-request minimum rate limit against the orgs "bar" and "biz" combined?
The way I read the doc, it sounds like the first case. But we are seeing failures in a prod environment that seem to indicate it might be the second case. We're still trying to narrow it down, but I'm checking whether anybody knows for sure.
Thanks,
Charles
I asked the same question of GitHub support and got an answer -- the answer is case 1 -- the limit is per installation target.
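If it helps to confirm this in your own environment, here is a rough sketch that reads the rate-limit numbers for one installation token from GitHub's GET /rate_limit endpoint; the token variables are placeholders you would mint per installation with your app's credentials:

```python
import requests

def check_rate_limit(installation_token: str) -> dict:
    """Return the core rate-limit numbers for one installation token.

    Each installation (e.g. the tokens minted for "bar" and for "biz")
    should report its own independent limit/remaining/reset values.
    """
    resp = requests.get(
        "https://api.github.com/rate_limit",
        headers={
            "Authorization": f"Bearer {installation_token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["resources"]["core"]

# Example: compare the two installations' quotas side by side.
# for name, token in {"bar": BAR_TOKEN, "biz": BIZ_TOKEN}.items():
#     print(name, check_rate_limit(token))
```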
We have a lot of questions concerning the changes in the Messenger Platform’s policies.
There is the HUMAN_AGENT tag (for which we have already asked permission), which seems to be the one that best fits our processes, but 7 days is still insufficient for us. Could we answer with this “message_tag” 20 days after a user message? What can we do in this case? We have to find a way not to leave the user without an answer.
We plan on using the above-mentioned CONFIRMED_EVENT_UPDATE tag to answer all user messages outside of the 24-hour window. Are there any penalties for us doing so? If there are, what are the penalties? Are they applied at the company level or the page level? None of the messages sent by our company contain what you want to avoid (spam, special offers, discounts, etc.), so we don’t think we should receive any penalty even when using “message_tags”.
We have thought about using a normal send first and, if the “This message is sent outside of the allowed window” error appears, answering again using “message_tags”. Is there any problem with that first call recurrently returning errors, or should we avoid it? Avoiding it might cause us to send unnecessary “message_tags”. Could we answer all private messages using HUMAN_AGENT once it is approved (our answers are always given by a customer service agent)?
Best regards
You do not mention your actual use case, so nobody can suggest any message tags that would match that use case.
Without knowing that use case the answer to your questions can only be:
1) There is no way to extend the 7-day window for the human agent tag. If you get approved for it you have a 7-day window, not 8 and not 20. However, most user actions reset that window, so you should follow up within that window and make sure the user engages with your bot; the window is then reset and you have another 7 days for another update.
2) Abusing tags will most likely result in your page being restricted; make sure to only use them for the allowed use cases as listed in the docs: https://developers.facebook.com/docs/messenger-platform/send-messages/message-tags/
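For reference, sending a message with a tag is an ordinary Send API call with messaging_type set to MESSAGE_TAG plus a tag field. A minimal sketch follows; the page access token, recipient PSID, and Graph API version are placeholders:

```python
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"                 # placeholder
GRAPH_URL = "https://graph.facebook.com/v12.0/me/messages"   # version may differ

def send_tagged_message(psid: str, text: str, tag: str = "CONFIRMED_EVENT_UPDATE"):
    """Send a message outside the 24-hour window using a message tag."""
    payload = {
        "recipient": {"id": psid},
        "messaging_type": "MESSAGE_TAG",
        "tag": tag,                     # must match an allowed use case
        "message": {"text": text},
    }
    resp = requests.post(
        GRAPH_URL,
        params={"access_token": PAGE_ACCESS_TOKEN},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```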
I just heard that there is a limitation on Google Cloud Storage such that you can only send it one request per second. I searched the internet but didn't find any definitive answer to this.
Is this right, or can I access it more than once per second? I want to know for a web application I am writing at the moment that can upload and download images to and from Cloud Storage. If there were such a limitation, it would cause some delay when more than one request per second is sent by different users.
You may be referring to the limitation that you can update or overwrite the same object up to once per second. There's no limit to the number of times you can update across different objects, or to the number of reads you can do to any object.
https://cloud.google.com/storage/docs/concepts-techniques#object-updates
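Put differently, only rapid writes to the same object name need care; concurrent reads, or writes to different objects, are not subject to that limit. Here is a rough sketch using the google-cloud-storage Python client that backs off and retries if the once-per-second update limit on a single object is hit (HTTP 429); the bucket name, content type, and retry counts are placeholders:

```python
import time

from google.api_core import exceptions
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-image-bucket")       # placeholder bucket name

def upload_image(object_name: str, data: bytes, retries: int = 5) -> None:
    """Write bytes to one object, backing off if that same object name is
    being overwritten faster than roughly once per second (HTTP 429)."""
    blob = bucket.blob(object_name)
    delay = 1.0
    for attempt in range(retries):
        try:
            blob.upload_from_string(data, content_type="image/jpeg")
            return
        except exceptions.TooManyRequests:
            time.sleep(delay)                   # simple exponential backoff
            delay *= 2
    raise RuntimeError(f"Gave up uploading {object_name} after {retries} attempts")
```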
I am the manager of an iOS application and it uses Google Places API. Right now I am limited to 100,000 requests and during our testing, one or two users could use up to 2000 requests per day (without autocomplete). This means that only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota but I still feel that I will need a very large quota based on these test results. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
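On the caching point, here is a minimal sketch of a small time-limited in-memory cache keyed by the query string, so repeated identical lookups don't each cost a request; the TTL and the call_places_api helper are placeholders, and what you are actually allowed to cache is governed by the Places API terms:

```python
import time

class PlacesCache:
    """Tiny TTL cache: identical queries within `ttl` seconds are served
    from memory instead of spending another Places API request."""

    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._store = {}        # query string -> (timestamp, response)

    def get(self, query: str):
        entry = self._store.get(query)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]
        return None

    def put(self, query: str, response) -> None:
        self._store[query] = (time.time(), response)

# Usage sketch:
# cache = PlacesCache()
# result = cache.get(query)
# if result is None:
#     result = call_places_api(query)   # hypothetical helper
#     cache.put(query, result)
```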
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies the quota to each end user instead of to an API key, so that the quota grows as the user base grows? See here
Theoretically I think it's possible, since it only needs a WebView or JavaScript runtime to use the library, but I haven't seen anyone using this approach.
I'm developing an application which gets the top 20 pages for each letter. At the moment there's no problem with the limit, but I need to know: what is the exact number of requests allowed from one IP address per second?
Best regards,
There is no exact number per second. As with any other site, if you make too many requests you will likely get blocked as a denial-of-service attack. If you make too many over an extended period of time, Facebook will likely block you, at least temporarily.
If you are trying to crawl Facebook, then you should obey the rules defined in their robots.txt file, as any other crawler/spider should.
https://www.facebook.com/robots.txt
http://www.facebook.com/apps/site_scraping_tos_terms.php
That said, I've done around 15 million update requests per day back when they had profile boxes. I never had a problem.
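And if you do crawl public pages, here is a minimal sketch that consults Facebook's robots.txt before fetching anything, using Python's standard-library urllib.robotparser; the user-agent string and fetch helper are placeholders:

```python
import urllib.robotparser

USER_AGENT = "my-crawler"                        # placeholder user-agent string

parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://www.facebook.com/robots.txt")
parser.read()

def allowed(url: str) -> bool:
    """Return True only if Facebook's robots.txt permits fetching this URL."""
    return parser.can_fetch(USER_AGENT, url)

# Example: skip anything the rules disallow.
# if allowed("https://www.facebook.com/some/public/page"):
#     fetch_page(...)                            # hypothetical fetch helper
```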