Setting subscriptions on multiple Microsoft Graph objects - REST

With Microsoft Graph, I can set a subscription on a resource, in my case an event. I am going to be using an admin-authenticated account to access multiple calendars.
Is there a way to set a subscription to get notifications on all the calendars the admin can see?
If not, is there a way to send in a block of subscriptions with a single request? We are limited in how many requests we can make in a given timeframe (I'm not sure what the limit is), but if I have 500 calendars I need to set subscriptions on so that I get notifications of changes, how are you supposed to do this without getting hit by the requests-per-timeframe limit?

Currently, there isn't a way to send multiple subscription creation requests in the same HTTP REST call. Every different resource for which a subscription is being created would have its own HTTP call into the Graph REST API.
You can recommend a "batching" feature (so multiple REST requests can be processed in the same HTTP call to the Graph API) on UserVoice: https://officespdev.uservoice.com/

There is also a consideration that, in my experience, the number of simultaneous subscriptions allowed is around 20, so 500 subscriptions might be out of the question. The best advice I've been given on the subject is to loop through all the objects one at a time and refresh them in sequence. The throttling that follows is a different issue altogether.
When a 429/"Unknown Error" comes back (i.e. throttling), it comes with a Retry-After header which should be observed. I might point out that throttling, for me, is still a huge issue.
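A minimal sketch of that loop, assuming Python with the `requests` library, an already-acquired admin access token, and placeholder user IDs and notification URL (the expiration date is illustrative and must fall within the window Graph allows):

```python
import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<admin access token>"                          # assumed to be acquired elsewhere
NOTIFICATION_URL = "https://example.com/notifications"  # placeholder webhook endpoint

def create_event_subscription(user_id):
    """Create one subscription for a single user's events, honoring Retry-After on 429."""
    body = {
        "changeType": "created,updated,deleted",
        "notificationUrl": NOTIFICATION_URL,
        "resource": f"/users/{user_id}/events",
        "expirationDateTime": "2015-11-20T18:23:45.935Z",  # illustrative expiry
        "clientState": "secretClientValue",
    }
    while True:
        resp = requests.post(
            f"{GRAPH}/subscriptions",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json=body,
        )
        if resp.status_code == 429:
            # Throttled: wait for as long as the service asks, then retry.
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()
        return resp.json()

# Loop through the calendars/users one at a time, in sequence.
for user_id in ["user1@contoso.com", "user2@contoso.com"]:
    create_event_subscription(user_id)
```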

Related

Increase Batch Quota in Google Core Reporting API

Does anyone know if there is a way to increase the quota limit of 10 queries when batching calls to the core reporting API?
This question/answer mentions the limit of 10: How can I combine/speed up multiple API calls to improve performance?
If I try to add more than 10 queries to the batch, only the first ten are processed; each one after that contains a 403 quota-exceeded error.
Is there a pay option? I would love to speed up the process of reporting on GA data for a bunch of URLs. I looked in my Google Developer's Console under the Analytics API, where there is an option to increase the per-user limit and a link to request additional quota, but I don't need the total quota to increase, only the number of allowed batch requests.
Thanks!
Quota is the number of requests you are allowed to make to a Google API without requesting permission to access more. Most of the Google APIs have a free quota, a number of requests Google lets you make without asking for permission to make more requests. There are project-based quotas and user-based quotas.
Unless it says otherwise, API quotas are project-based, not user-based.
User quota example:
Per-user limit: 10 requests/second/user
Some quotas are user-based; a user is normally the person that has authenticated the request. Every request sent to Google contains information about who is making the request, in the form of the IP address the request came from. If your code is running on a server, the IP address is the same all the time, so Google sees it as the same user. You can get around this by adding a random quotaUser to your request, which identifies the requests as coming from different users.
If you send too many requests too fast from the same user, you will see the following error:
userRateLimitExceeded: The request failed because a per-user rate limit has been reached.
The best way to get around this is to send quotaUser in all of your requests and identify different users to Google; sending a random value every time should also work (see the sketch below).
Answer: You can't apply for an extension of the flood-protection user rate limit, but you can get around it by using quotaUser.
More info on quotas can be found in the Google Developers Console for each API.
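A hedged sketch of the quotaUser idea against the Core Reporting API v3 REST endpoint, using plain `requests`; the access token, view ID, and query values are placeholders, and the only part that matters here is the quotaUser parameter attached to each call:

```python
import uuid
import requests

# Core Reporting API v3 endpoint; token and view ID are placeholders.
REPORTING_URL = "https://www.googleapis.com/analytics/v3/data/ga"
ACCESS_TOKEN = "<oauth access token>"

def run_report(view_id, start, end, metrics, quota_user):
    """Run one Core Reporting query, tagging it with a quotaUser so the
    per-user rate limit is tracked per quota_user instead of per server IP."""
    params = {
        "ids": f"ga:{view_id}",
        "start-date": start,
        "end-date": end,
        "metrics": metrics,
        "quotaUser": quota_user,   # standard Google API query parameter
    }
    resp = requests.get(
        REPORTING_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params=params,
    )
    resp.raise_for_status()
    return resp.json()

# A random quotaUser per logical "user" (or simply per request) spreads the
# 10 requests/second/user limit across many identities.
report = run_report("12345678", "7daysAgo", "today", "ga:sessions", uuid.uuid4().hex)
```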

How to see if a post on a FB page is updated

I am using the graph api to get data about pages and the posts in the pages.
When a post is published, it gets liked, commented on, and shared over time. The next time I read the data, how can I get only the posts that have those changes?
The best way is really to set up a server to receive real-time updates. Any other way would mean polling Facebook endpoints; at a certain point, a single user access token would be rate limited and would block you from making calls for a certain amount of time. You would also have more work to do comparing each post to the one you stored to see if anything has changed.
Really the most efficient way is to use real-time updates, in which you set up an endpoint on your server to receive messages from Facebook whenever something on a page (or user) has changed. If the cost of keeping a server running is your roadblock, I would recommend setting up a free Parse.com account, in which you can set up a server to handle Facebook's incoming requests and act on them.
I hope that makes sense! More information on real-time updates here: https://developers.facebook.com/docs/graph-api/real-time-updates/v2.2
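A minimal sketch of such a receiving endpoint, assuming a small Flask server; the route path and verify token are placeholders, and the payload fields shown are only illustrative of what a change notification can carry:

```python
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "my-verify-token"  # placeholder; must match what you register with Facebook

@app.route("/facebook/updates", methods=["GET"])
def verify():
    # Facebook calls this once with a challenge when you register the callback URL.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification token mismatch", 403

@app.route("/facebook/updates", methods=["POST"])
def receive_update():
    # Facebook POSTs a JSON payload describing which objects changed;
    # only changed objects are listed, so there is nothing to poll or diff.
    payload = request.get_json(silent=True) or {}
    for entry in payload.get("entry", []):
        print("changed object:", entry.get("id"), "fields:", entry.get("changed_fields"))
    return "OK", 200

if __name__ == "__main__":
    app.run(port=5000)
```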

How many parallel requests can be made using a single session token in a REST API

I am working on an application which is going to be heavily dependent on Sabre API. The critical factor for the application is going to be performance when around a million users are accessing the API simultaneously.
After speaking to Sabre API support, all they told me is that they will provide a maximum of 50 session tokens at a time and that we have to manage sessions on our end.
This leaves my question unanswered - will they be able to handle a million parallel requests?
So basically will we be able to make multiple requests using the same session token unless it expires?
Please help me understand their response. Below is the email conversation I had with Sabre API support.
Hello Karam,
The limit will be the number of simultaneous sessions that is set up for your PCC. By default you can create up to 50 simultaneous tokens in CERT (50 simultaneous sessions), but the answer to your question is no, processing time on our side will not be impacted.
Regards,
Hello Sebastian
Thank you very much for being with me and helping me out with this.
So, as you have mentioned that we can have 50 session tokens at a time, is it possible to make more than one simultaneous request (asynchronous requests) using a single session token?
For example, we get a session token, store it at our end, and use it to make multiple requests.
I ask this because, if not, it would mean we can only make 50 parallel requests at a time (one request per session token).
And if that is true, then we might have to implement a request queue, which will delay the responses for the end users.
Thanks
Karam
Hello Karam,
Please see below my answers to your inquiries:
So, as you have mentioned that we can have 50 session tokens at a time, is it possible to make more than one simultaneous request (asynchronous requests) using a single session token?
For example, we get a session token, store it at our end, and use it to make multiple requests.
It is not possible. It is actually not a Sabre Web Services behavior but how the Sabre host works. Sabre is a synchronous system; once a request has been sent, you need to wait until you receive a response back in order to run a second call. Otherwise you will receive a message like "PREVIOUS ENTRY ACTIVE" or similar.
I ask this because, if not, it would mean we can only make 50 parallel requests at a time (one request per session token).
And if that is true, then we might have to implement a request queue, which will delay the responses for the end users.
It will depend on the session manager and the customer's needs, but most of our customers don't need to consume 1000 simultaneous sessions. In any case, once you are a Web Services subscriber you can define and request from your account executive the number of tokens that best meets your needs.
Hope this helps!
Best regards,
That is correct, you cannot use the same session/token for multiple parallel requests... (Sabre keeps the session state, and that affects the result of your next request).
What they recommend is to create a session manager, so you'll have your own session queue and use or "ignore" sessions as you need them. That way you can have sessions for queries only and sessions for touching a PNR, and you can also manage your own expiration time or "keep alive" routine.
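A rough sketch of what such a session manager could look like in Python; the `_create_session` call is a stand-in for the actual SessionCreateRQ SOAP request, and the pool size of 50 mirrors the limit mentioned above. The point is that each token serves only one request at a time:

```python
import queue

class SabreSessionPool:
    """Keeps up to `size` Sabre session tokens and hands each one to a single
    caller at a time, since a token cannot serve parallel requests."""

    def __init__(self, size=50):
        self._tokens = queue.Queue()
        for _ in range(size):
            self._tokens.put(self._create_session())

    def _create_session(self):
        # Placeholder: issue a SessionCreateRQ here and return the session token.
        return "token-placeholder"

    def acquire(self, timeout=30):
        # Blocks until a free session token is available.
        return self._tokens.get(timeout=timeout)

    def release(self, token):
        # Return the token so another queued request can use it.
        self._tokens.put(token)

pool = SabreSessionPool(size=50)

def call_sabre(build_request):
    token = pool.acquire()
    try:
        return build_request(token)   # one in-flight request per token
    finally:
        pool.release(token)
```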

There have been too many calls from this ad-account. Wait a bit and try again

I am trying to fetch the report stats from our account. I need to make async calls because otherwise I would get an error that the data is too old.
When I create multiple requests I get the error: "There have been too many calls from this ad-account. Wait a bit and try again."
I have only made about 30 requests in a short time because of the way the async reports work. Is there a better way to fetch the reporting data? And if there is not, is there a way to see the request score that is mentioned in the documentation?
Another question: is there a difference in the number of requests allowed when your app has development access?
Thanks in advance,
Jorik
First point: according to the access level docs here, there is heavy rate limiting on apps that are in the development stage.
Second, to fetch reports there are multiple endpoints, such as ad-account-wise reports, campaign-wise reports, and ad-wise reports; here is a link to the docs for the Insights API.
The available endpoints are:
act_AD_ACCOUNT_ID/insights
CAMPAIGN_ID/insights
ADSET_ID/insights
AD_ID/insights
Lastly, about rate limiting in the Marketing API: it is done with a sliding-window method, which means there is no actual tracking of requests per day; it is just that a lot of requests in a short amount of time are not allowed.
Two things you can do are:
first, check the API response, and if it is a rate-limit error, stop sending requests;
second, use batch requests (see the sketch below).
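A small sketch of both ideas against the plain Graph API HTTP interface; the API version, access token, and object IDs are placeholders, and the error codes checked (4 and 17) are the generic request-limit codes rather than a guaranteed match for this exact message:

```python
import json
import time
import requests

GRAPH = "https://graph.facebook.com/v2.5"   # version is a placeholder
ACCESS_TOKEN = "<access token>"
AD_ACCOUNT = "act_1234567890"               # placeholder ad account id

def fetch_insights_batch(object_ids):
    """Fetch insights for several objects in one HTTP call using a Graph batch
    request, and back off when a rate-limit error comes back."""
    batch = [{"method": "GET", "relative_url": f"{oid}/insights"} for oid in object_ids]
    while True:
        resp = requests.post(
            GRAPH + "/",
            data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
        )
        body = resp.json()
        if isinstance(body, dict) and "error" in body:
            # Generic request-limit error codes: stop and wait before retrying.
            if body["error"].get("code") in (4, 17):
                time.sleep(60)
                continue
            raise RuntimeError(body["error"])
        return body   # one sub-response per batched request

results = fetch_insights_batch([AD_ACCOUNT, "CAMPAIGN_ID", "ADSET_ID"])
```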
Here is a gist from the troubleshooting guide on limits:
Troubleshooting
Timeouts
The most common issues causing failure at this endpoint are too many requests and timeouts:
On /GET or synchronous requests, you can get out-of-memory or timeout errors.
On /POST or asynchronous requests, you can possibly get timeout errors. For asynchronous requests, it can take up to an hour to complete a request including retry attempts, for example if you make a query that tries to fetch a large volume of data for many ad-level objects.
Recommendations
There is no explicit limit for when a query will fail. When it times out, try to break down the query into smaller queries by putting in filters like date range.
Unique metrics are time consuming to compute. Try to query unique metrics in a separate call to improve performance of non-unique metrics.
Rate Limiting
The Facebook Insights API utilizes rate limiting to ensure an optimal reporting experience for all of our partners. For more information and suggestions, see our Insights API Limits & Best Practices.
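As an illustration of the "break down the query into smaller queries" recommendation above, a possible sketch that splits a long reporting window into smaller date-range chunks; the chunk size of 7 days is arbitrary, and each (since, until) pair would become one smaller insights query:

```python
from datetime import date, timedelta

def date_chunks(start, end, days=7):
    """Split [start, end] into consecutive windows of at most `days` days,
    so each insights query covers a smaller date range."""
    current = start
    while current <= end:
        chunk_end = min(current + timedelta(days=days - 1), end)
        yield current, chunk_end
        current = chunk_end + timedelta(days=1)

# Each pair can be passed as a time_range filter on a separate, smaller query.
for since, until in date_chunks(date(2015, 1, 1), date(2015, 3, 31)):
    print({"since": since.isoformat(), "until": until.isoformat()})
```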

Application Request Limit issue (Occurring Randomly with Random Scenarios)

I have tried raising this concern on Facebook Support/Bugs, but they said I should post implementation issues here. I have read about it everywhere and it seems to be quite an open issue; I am not sure if this will be solved or not.
So, what we are doing is this: we have clients, Android and iOS.
The apps on Android/iOS allow users to log into the app and generate a token on the basis of the permission set we have, and we pass this token to the server for fetching further data as and when required by the client. As our user base is increasing, we are getting "Application request limit reached" quite often.
We are fetching photos of users and their friends using FQL. When fetching photos for around 8-10 different users in parallel, we sometimes hit the Application request limit, and it is quite random; we are not aware of the actual scenario in which it breaks or how. According to Facebook the limit is 1M calls per day, but we are hitting around 80K-100K API calls in a day, though as users increase it is stretching a bit further, roughly 200 or fewer calls per user. We tried doing batch calls as well, and we hit the Application request limit there too.
If any of you could help us understand the complete concept of the API limit and how this can be handled, we would really appreciate the help. We want to understand how the API limit is decided and over which interval its rate is calculated, so that we can configure things on our side accordingly.
Earlier in the day, we ran into a strange API call issue. Our server started failing on API calls for user tokens that we hold. We (on our own systems, other than the server) tried fetching the data for those tokens (simple calls, /me or /me/home) and it worked fine for us, but not for the server. We then set up another server and redirected the requests to it, and that server works well for the same set of users. Not sure what went wrong in this case or how it breaks. Please help.
Many Thanks,
Reno Jones
Did you look at the Insights -> Developer section of developer.facebook.com for your app?
This will show you a breakdown per api call, including warnings and ones that are currently being throttled and why.
Also, are you sure you're using User token authorization and not just your App token?
Beyond that, we take the information from Insights to find API calls to cache on our side rather than hitting Facebook every time; you will likely have to do something similar if you're not already (a minimal caching sketch follows below). They have limits for calling too often, as well as for requesting too much data. For those, we had to reduce the amount of historical data we requested.
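A bare-bones sketch of that kind of caching, assuming a simple in-memory TTL cache in front of Graph API GET calls; the Graph version, token, and TTL are placeholders, and anything answered from the cache is one fewer call counted against the app-level limit:

```python
import time
import requests

GRAPH = "https://graph.facebook.com/v2.2"   # version is a placeholder
CACHE_TTL = 300                             # seconds; tune per endpoint
_cache = {}                                 # {(path, token): (timestamp, data)}

def cached_graph_get(path, user_token):
    """Return a cached response if it is fresh enough, otherwise hit Facebook
    once and remember the result, reducing calls against the app-level limit."""
    key = (path, user_token)
    now = time.time()
    if key in _cache and now - _cache[key][0] < CACHE_TTL:
        return _cache[key][1]
    resp = requests.get(f"{GRAPH}/{path}", params={"access_token": user_token})
    resp.raise_for_status()
    data = resp.json()
    _cache[key] = (now, data)
    return data

# Repeated calls for the same user within the TTL never reach Facebook.
me = cached_graph_get("me", "<user access token>")
home = cached_graph_get("me/home", "<user access token>")
```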