How to deal with huge queries to the Facebook Graph?

I'd like to get every status update for every friend. Given I have say 500 friends, each with 200 statuses, this could be 100,000 statuses. How would you approach this from the query point of view?
What query would you write? Would Facebook allow this much data to come through in a single go? If not is there a best practice paging or offsetting solution?

Would Facebook allow this much data to come through in a single go?
No. Facebook will throw an exception saying the query requests too much data. There is also an automated system in place that blocks time-consuming requests, and it will block your app if it makes too many queries too frequently against a single table (API Throttling Warnings).
If not is there a best practice paging or offsetting solution?
Yes. You can page through results both in FQL and when querying connections in the Graph API; paging is the best practice.
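For illustration, a minimal paging sketch against the Graph API in Python. The token, endpoint, and page size here are placeholders, and the exact fields you can read depend on your app's permissions:

    # Sketch: page through a user's statuses via the Graph API.
    # ACCESS_TOKEN and the starting endpoint are placeholders.
    import requests

    ACCESS_TOKEN = "..."  # hypothetical token
    url = "https://graph.facebook.com/me/statuses"
    params = {"access_token": ACCESS_TOKEN, "limit": 100}

    statuses = []
    while url:
        payload = requests.get(url, params=params).json()
        statuses.extend(payload.get("data", []))
        # The Graph API returns a ready-made URL for the next page;
        # stop when it is absent.
        url = payload.get("paging", {}).get("next")
        params = {}  # the "next" URL already carries all query parameters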

From their policy:
If you exceed, or plan to exceed, any of the following thresholds please contact us as you may be subject to additional terms: (>5M MAU) or (>100M API calls per day) or (>50M impressions per day).
http://developers.facebook.com/policy/
This means that 100k is not such a big deal. However, it depends; you may have to consider:
Do you REALLY need every status?
Can't they be downloaded later?
Do you need these posts/stories from every friend?

Related

Pagination and listing in APIs

I want to ask about lists and pagination in APIs.
I want to build a long list on the home screen. Since it is the main screen, this request will see a lot of traffic, and I want to implement it in a way that handles that traffic well.
After researching how to implement it, I have a few questions:
Can I rely on PostgreSQL for pagination, or do I need a search engine like Solr?
If I rely on the database and users start visiting the app, this request will issue a lot of queries against the database. Will that kill the database?
I'm also using Redis to cache some data, which absorbs some of the traffic, but the problem with the home screen is that the response is too large to cache under a single Redis key.
Can anyone explain the best way to implement pagination for this request? Pagination is the only thing I need; I'm not looking to build full-text search. I've read that a search engine would handle the traffic so the database isn't overloaded or killed.
Thanks a lot :D
You can do this seamlessly with the pagination features PostgreSQL already provides; it has all the functions and capabilities you need (LIMIT, OFFSET, FETCH).
But let me give you a recommendation.
There are several types of pagination.
In the first type, the number of pages must be known in advance. This approach is outdated and not recommended, because it requires knowing the count of records in the table, and counting records is a very slow operation, especially on large tables.
In the second type, the number of pages is not known in advance: the next page of results is fetched only when it is needed. This is what Google, LinkedIn, and other big companies use, and it never requires counting the rows of any table.
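One common way to implement the second type is keyset (seek) pagination, which avoids both COUNT and deep OFFSET scans. A minimal sketch with psycopg2, assuming a hypothetical posts table with an indexed id column:

    # Sketch of keyset ("infinite scroll") pagination with psycopg2.
    # The posts table and its columns are hypothetical; adapt to your schema.
    import psycopg2

    PAGE_SIZE = 20

    def fetch_page(conn, last_seen_id=None):
        """Return the next PAGE_SIZE posts older than last_seen_id."""
        with conn.cursor() as cur:
            if last_seen_id is None:
                cur.execute(
                    "SELECT id, body FROM posts ORDER BY id DESC LIMIT %s",
                    (PAGE_SIZE,))
            else:
                # No OFFSET: we seek directly past the last row the client
                # saw, so the cost stays constant however deep the page is.
                cur.execute(
                    "SELECT id, body FROM posts WHERE id < %s "
                    "ORDER BY id DESC LIMIT %s",
                    (last_seen_id, PAGE_SIZE))
            return cur.fetchall()

The client simply sends back the id of the last post it received, and the database seeks straight to that point via the index.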

Facebook Open Graph rate limit

My app makes exactly 268 calls per user (it is not live yet), but per the new Facebook rate limits it is just 200 calls per user. So caching and making fewer calls is one option, but is there any other? And is that 200-calls-per-user limit actually enforced now?
Caching and making fewer calls is the only option. You can test whether it's just a guideline; I assume you can go a bit over too, but in general you should never come anywhere near the limit in a serious app. 200 calls per user per hour is a lot; a budgeting sketch follows below.
More information: https://developers.facebook.com/docs/graph-api/advanced/rate-limiting
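As an illustration of never coming near the limit, a minimal sketch of a per-user call budget. The in-memory store and the 150-call budget are assumptions for the example, not anything Facebook prescribes; a real deployment would keep the counters in Redis or similar:

    # Sketch: track Graph API calls per user so the app stays well under
    # the 200-calls-per-user-per-hour ceiling.
    import time
    from collections import defaultdict, deque

    WINDOW = 3600          # one hour, in seconds
    BUDGET = 150           # stay comfortably below the 200-call limit

    calls = defaultdict(deque)  # user_id -> timestamps of recent calls

    def may_call(user_id):
        now = time.time()
        window = calls[user_id]
        while window and now - window[0] > WINDOW:
            window.popleft()       # drop calls older than an hour
        if len(window) >= BUDGET:
            return False           # over budget: serve from cache instead
        window.append(now)
        return True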

Google Places API - How much can I uplift the quota with uplift quota request form?

I am the manager of an iOS application that uses the Google Places API. Right now I am limited to 100,000 requests, and during our testing one or two users could use up to 2,000 requests per day (without autocomplete). This means only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota, but based on these test results I expect to need a very large quota. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
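On the caching suggestion, one possible shape is a TTL cache keyed by the request parameters, so repeated lookups don't burn quota. The names and the TTL here are illustrative, and you should check Google's terms on how long results may be stored:

    # Sketch: memoize Places API responses with a TTL.
    import time, json

    CACHE_TTL = 3600  # seconds; choose per the API's caching policy
    _cache = {}       # key -> (timestamp, response)

    def places_lookup(params, fetch):
        """fetch(params) performs the real API call; we memoize it."""
        key = json.dumps(params, sort_keys=True)
        hit = _cache.get(key)
        if hit and time.time() - hit[0] < CACHE_TTL:
            return hit[1]              # cache hit: no quota spent
        response = fetch(params)       # cache miss: one real request
        _cache[key] = (time.time(), response)
        return response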
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies the quota to each end user rather than to an API key, so that the quota grows as the user base grows? See here.
Theoretically I think it's possible, since using the library only requires a WebView or a JavaScript runtime, but I haven't seen anyone take this approach.

Email migration API limits

In the documentation, it states that the API is limited to one email per user, and that we should create threads and process multiple users at once.
Does anyone know if there is any other type of limitation? How many GB/hour?
I have to plan a migration of tens of thousands of accounts, and hardware resources are practically unlimited. Will I raise a flag somewhere or get blocked if I start migrating over 1,000 users at a time?
Thanks
The limits for the API are posted at https://developers.google.com/google-apps/email-migration/limits. There is a per-user rate limit of one request per second. If you exceed it you will start seeing 503 errors returned. The best way to deal with this is to implement an exponential backoff algorithm that handles the errors and retries the request.
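A minimal sketch of that backoff loop; send_request is a hypothetical stand-in for whatever performs one migration call:

    # Sketch of exponential backoff with jitter for 503 responses.
    import random
    import time

    def with_backoff(send_request, max_retries=5):
        for attempt in range(max_retries):
            response = send_request()
            if response.status_code != 503:
                return response
            # Wait 2^attempt seconds plus up to 1s of random jitter
            # before retrying the throttled request.
            time.sleep(2 ** attempt + random.random())
        raise RuntimeError("still throttled after %d retries" % max_retries)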

Meaning of "Calls Access Too Much Data" in Facebook App Diagnostics

On our Facebook Application's Diagnostics page, there is a message saying that for two different queries, our "Calls Access Too Much Data". Does anyone know exactly what this means?
Specifically, does this mean that our average request size per call is too large, or that we are accessing too much data in aggregate? We make a great effort to batch queries together as much as possible (for both Graph API and FQL queries) so that we can limit our total number of queries per user. However, if these two calls are really "accessing too much data", does that mean we should be less aggressive about batching these queries?
It looks like some of your queries are trying to pull too much information in a single call. If you break them down into smaller batches (see the sketch below), you should see these errors decrease.
I'll look into getting the documentation around the limits made a little clearer.
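For illustration, a sketch of splitting one large set of batched Graph API requests into smaller chunks. The 10-request chunk size and the token are placeholders, not documented limits:

    # Sketch: submit Graph API batch requests in chunks so no single
    # call pulls too much data.
    import json
    import requests

    ACCESS_TOKEN = "..."  # hypothetical token
    CHUNK = 10            # arbitrary illustration, not a documented limit

    def run_batches(batch_requests):
        """batch_requests: list of {"method": ..., "relative_url": ...}."""
        results = []
        for i in range(0, len(batch_requests), CHUNK):
            resp = requests.post(
                "https://graph.facebook.com",
                data={
                    "access_token": ACCESS_TOKEN,
                    "batch": json.dumps(batch_requests[i:i + CHUNK]),
                })
            results.extend(resp.json())
        return results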