The exact limit on requests from one IP - Facebook

I'm developing an application which fetches the top 20 pages for every letter. At the moment I'm not hitting any limits, but I need to know: what is the exact number of requests allowed from one IP address per second?
Best regards,

There is no exact number per second. As with any other site, if you make too many requests you will likely be blocked as a denial-of-service attack. If you make too many over an extended period of time, Facebook will likely block you, at least temporarily.
If you are trying to crawl Facebook, then you should obey the rules defined in their robots.txt file like any other crawler/spider should.
https://www.facebook.com/robots.txt
http://www.facebook.com/apps/site_scraping_tos_terms.php
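A crawler can check those rules programmatically before fetching anything. A minimal sketch using Python's standard library (the user-agent string is a placeholder):

# Check a URL against Facebook's robots.txt before requesting it.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.facebook.com/robots.txt")
rp.read()  # fetch and parse the rules

if rp.can_fetch("MyCrawler/1.0", "https://www.facebook.com/SomePage"):
    print("allowed")
else:
    print("disallowed by robots.txt")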
That said, I was doing around 15 million update requests per day back when they had profile boxes, and never had a problem.

Related

No perfect tag for Messenger Platform’s policies

We have a lot of doubts concerning the changes in the Messenger Platform’s policies.
There is the HUMAN_AGENT tag (for which we have already requested permission), which seems to be the one that best fits our processes, but 7 days is still insufficient for us. Could we reply with this “message_tag” 20 days after a user's message? What can we do in this case? We have to find a way not to leave the user without an answer.
We plan on using the above-mentioned CONFIRMED_EVENT_UPDATE tag to answer all user messages outside of the 24-hour window. Are there any penalties for doing so? If there are, what are they, and are they applied at the company level or the page level? None of the messages sent by our company contain what you want to avoid (spam, special offers, discounts, etc.), so we don't think we should receive any penalty even when using “message_tags”.
We have also considered sending a normal reply first and, if the “This message is sent outside of the allowed window” error appears, answering again with “message_tags”. Is there any problem with making that first call on a recurring basis even though it returns errors, or should we avoid it? Avoiding it might cause us to send unnecessary “message_tags”. Could we answer all private messages using HUMAN_AGENT once it is approved (our answers are always given by a customer service agent)?
Best regards
You do not mention your actual use case, so nobody can suggest any message tags that would match that use case.
Without knowing that use case the answer to your questions can only be:
1) There is no way to extend the 7-day window for the human agent tag. If you get approved for it, you have a 7-day window, not 8 and not 20. However, most user actions reset that window, so follow up within it and make sure the user engages with your bot; the window then resets and you have another 7 days for the next update.
2) Abusing tags will most likely result in your page being restricted. Make sure to only use them for the allowed use cases as listed in the docs: https://developers.facebook.com/docs/messenger-platform/send-messages/message-tags/
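For what it's worth, a tagged message is just an ordinary Send API call with messaging_type set to MESSAGE_TAG. A minimal Python sketch (the token, PSID, and API version are placeholders):

# Send one tagged message via the Messenger Send API.
import requests

PAGE_ACCESS_TOKEN = "EAA..."  # your page access token
PSID = "1234567890"           # the recipient's page-scoped user ID

resp = requests.post(
    "https://graph.facebook.com/v12.0/me/messages",
    params={"access_token": PAGE_ACCESS_TOKEN},
    json={
        "recipient": {"id": PSID},
        "messaging_type": "MESSAGE_TAG",
        "tag": "HUMAN_AGENT",  # only valid for pages approved for this tag
        "message": {"text": "Following up on your support request."},
    },
)
resp.raise_for_status()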

How long does an app in development get banned from Facebook if it exceeds limits?

I have an app I'm developing against Facebook that timed out a few hours ago during my first production use. Of course I tried to make it do too much, and the HTTP call timed out. So I rewrote what I was doing to use threaded connections, which sped up the interaction significantly! However, I was so engrossed in speeding up my interaction (it amounted to about 25-50 calls; I'm not exactly sure, as I was expecting 25 but some of my results show it was 50) that I didn't even stop to think about how fast I was hitting Facebook.
So I started getting "Uncaught OAuthException: It looks like you were misusing this feature by going too fast. You've been blocked from using it.", which is what I now get even if I run my program with only 1 hit. I've added a sleep into my system to limit the hits to 1/second, but I'm concerned that my app (which was not making public posts, so no one could have been bothered by them) is now forever banned from Facebook: it says I'm banned from the feature, with a reference to learn about blocks in the Help Center, except I can't find any reference in the Help Center to my specific situation.
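The throttle itself is nothing fancy; a simplified sketch of the shape it takes (not my exact code, and make_graph_call is a stand-in for the real API hit):

# Enforce a minimum interval between outgoing calls.
import time

class Throttle:
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval  # seconds between calls
        self.last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

throttle = Throttle(min_interval=1.0)
# Before each request:
#   throttle.wait()
#   make_graph_call(...)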
Does anyone know how long my app is out of commission?
And what are the specific limits (a reference please, because I've searched the hell out of FB and can't find one) on the speed at which you can access Facebook?
It depends on what has blocked you. In my case it was a spam-detection bot that stopped me from posting comments into a group. Apparently there is a non-specific number of comments you can post in a group in a short amount of time. The amount varies, but at the time of my tests it hovered around 150, give or take 50.
The ban appeared to be consistently set to about 19 hours at that time (May 2014); I confirmed this through continued testing in test groups and the subsequent bans. However, Facebook developers are unable to give a solid set of numbers, as they say it's controlled by a spam algorithm that changes based on server usage. So: roughly 150 comments within about 3 minutes = a ban of about 19 hours.

Google Places API - How much can I uplift the quota with the uplift quota request form?

I am the manager of an iOS application that uses the Google Places API. Right now I am limited to 100,000 requests per day, and during our testing one or two users could use up to 2,000 requests per day (without autocomplete). This means that only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota, but based on these test results I feel I will need a very large quota. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
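Even a simple in-process cache can cut repeated lookups sharply. A rough Python sketch against the Places text search endpoint (check the Places API terms for what may be cached and for how long; the key is a placeholder):

# Memoize identical queries so repeats consume no quota.
import functools
import requests

API_KEY = "YOUR_KEY"  # placeholder

@functools.lru_cache(maxsize=4096)
def search_places(query):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/textsearch/json",
        params={"query": query, "key": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()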
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies its quota per end user rather than per API key, so that the quota grows as the user base grows? See here
Theoretically I think it's possible, since it only needs a WebView or a JavaScript runtime to use the library, but I haven't seen anyone use this approach.

Facebook image URLs - how are they kept from unauthorised users?

I'm interested in social networks and have stumbled upon something which makes me curious.
How does Facebook keep people from playing with URLs and gaining access to photos they should not see?
Let me expand. Here's an altered example of a Facebook image URL that came up on my feed:
https://fbcdn-sphotos-g-a.akamaihd.net/hphotos-ak-prn1/s480x480/{five_digit_number}_{twelve_digit_number}_{ten_digit_number}_n.jpg
So, those with more web application experience will presumably know the answer to this (I suspect it's well understood), but what is to stop me from changing the numbers and seeing other people's photos that I'm possibly not supposed to see?
[I understand that this doesn't work; I'm just trying to understand how they maintain security and avoid this problem.]
Many thanks in advance,
Nick
There are a couple of ways you can achieve this.
The first is to link to a script or action that authenticates the request and then returns the image. You can find an example with ASP.NET MVC here. The downside is that it's pretty inefficient, and you risk using twice the bandwidth for each request (once for your server to grab the image from wherever it's stored, and once to serve it to your users).
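A minimal sketch of this first approach, here in Python/Flask rather than ASP.NET MVC (the session check, ACL lookup, and storage path are stand-ins):

# Authenticate the request, then stream the image from private storage.
from flask import Flask, abort, send_file, session

app = Flask(__name__)
app.secret_key = "replace-me"  # required for session support

def user_may_view(user_id, photo_id):
    # Placeholder ACL lookup; replace with a real permission check.
    return False

@app.route("/photos/<photo_id>")
def serve_photo(photo_id):
    if "user_id" not in session:
        abort(401)  # not logged in
    if not user_may_view(session["user_id"], photo_id):
        abort(403)  # logged in, but not allowed to see this photo
    # The server reads the file itself, which is the double-bandwidth cost
    # mentioned above (once from storage, once out to the client).
    return send_file(f"/srv/private-photos/{photo_id}.jpg",
                     mimetype="image/jpeg")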
The second option is to do what Facebook does and just generate an obscure URL for each photo. As Thomas said in his comment, you're not going to guess a 27-digit number.
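Generating such a URL takes one line with a cryptographically secure token (a sketch; the CDN base is made up, and 32 random bytes give about 43 URL-safe characters, far beyond guessing):

# Mint an unguessable photo URL.
import secrets

def obscure_photo_url():
    token = secrets.token_urlsafe(32)
    return f"https://cdn.example.com/photos/{token}.jpg"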
The third option is, I think, the best, especially if you're using something like Microsoft Azure or Amazon S3. Azure Blob Storage supports Shared Access Signatures, which let you generate temporary URLs for private files. These can be set to expire in a few minutes or to last a lifetime. The files are served directly to the user, and there's no risk if the URL leaks after the expiration period.
Amazon S3 has something similar with Query String Authentication.
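For example, with S3 and boto3 you can mint a presigned URL that stops working after five minutes (bucket and key names are made up):

# Generate a temporary, self-expiring link to a private S3 object.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-photos", "Key": "user42/beach.jpg"},
    ExpiresIn=300,  # seconds of validity; afterwards S3 rejects the request
)
print(url)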
Ultimately, you need to figure out your threat model and make a decision weighing the pros and cons of each approach. On Facebook, these are images that have presumably been shared with hundreds of friends, so there's a significantly lower expectation of privacy, and authenticating every request may be overkill. A random, hard-to-guess URL is probably sufficient; it lets Facebook serve the data through their CDN and minimizes the processing per request. With option 3, you still have the overhead of generating those signed URLs.

Rotten Tomatoes' API limits

I've been looking on the RT site but cannot find any details; I'm just patching it together from what I've read on forums.
It appears the Rotten Tomatoes API is limited to 10k calls per day (one call every 8.64 seconds) per IP address. E.g. with one API key on two separate computers (different IPs), they will not affect each other's limits.
Is this true? Does anyone know? It is for an iPhone app, to get the background.
Thanks
I've taken this question to the RT forum; close-voters can get busy closing this thread if you wish:
http://developer.rottentomatoes.com/forum/read/123466