How many parallel requests can be made using a single session token in a REST API?

I am working on an application that is going to be heavily dependent on the Sabre API. The critical factor for the application is going to be performance when around a million users are accessing the API simultaneously.
After speaking to Sabre API support, all they told me is that they will provide a maximum of 50 session tokens at a time and that we have to manage sessions at our end.
This leaves my question unanswered - will they be able to handle a million parallel requests?
So basically, will we be able to make multiple requests using the same session token until it expires?
Please help me understand their response. Below is the email conversation I had with Sabre API support.
Hello Karam,
The limit will be the number of simultaneous sessions that is set up for your PCC. By default you can create up to 50 simultaneous tokens in CERT (50 simultaneous sessions), but the answer to your question is no, processing time on our side will not be impacted.
Regards,
Hello Sebastian
Thank you very much for being with me and helping me out with this.
So, as you have mentioned that we can have 50 session tokens at a time, is it possible to make more than one simultaneous request (asynchronous requests) using a single session token?
For example, we get a session token, store it at our end, and use it to make multiple requests.
I ask this because, if not, then it would mean we can only make 50 parallel requests at a time (1 request per session token).
And if that is true, then we might have to implement a request queue, which would delay the responses for the end users.
Thanks
Karam
Hello Karam,
Please see below my answers to your inquiries:
So, as you have mentioned that we can have 50 session tokens at a time, is it possible to make more than one simultaneous request (asynchronous requests) using a single session token?
For example, we get a session token, store it at our end, and use it to make multiple requests.
It is not possible. It is actually not a Sabre Web Services behavior but how the Sabre host works. Sabre is a synchronous system: once a request has been sent, you need to wait until you receive a response back before you can run a second call. Otherwise you will receive a message like "PREVIOUS ENTRY ACTIVE" or similar.
I ask this because, if not, then it would mean we can only make 50 parallel requests at a time (1 request per session token).
And if that is true, then we might have to implement a request queue, which would delay the responses for the end users.
It will depend on the session manager and the customer's needs, but most of our customers don't need to consume 1,000 simultaneous sessions. In any case, once you are a Web Services subscriber you can define and request from your account executive the number of tokens that best meets your needs.
Hope this helps!
Best regards,

That is correct, you cannot use the same session/token for multiple parallel requests... (Sabre keeps the session state, and that affects the result of your next request).
What they recommend is to create a session manager, so you have your own session queue and take and release sessions as you need them. That way you can have sessions for queries only and sessions for touching a PNR, and you can also manage your own expiration time or "keep alive" routine.
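As a rough illustration of that idea, here is a minimal session-pool sketch in PHP. sabreCreateSession() and sabreCloseSession() are hypothetical helpers standing in for your actual SessionCreateRQ/SessionCloseRQ calls; the point is only to show handing each token to exactly one in-flight request at a time and returning it afterwards.

class SabreSessionPool {
    private $idle = [];   // tokens with no request currently in flight
    private $total = 0;   // sessions created so far
    private $max;

    public function __construct($max = 50) {
        $this->max = $max;
    }

    // Check out a token; one token must serve only one request at a time.
    public function acquire() {
        if (!empty($this->idle)) {
            return array_pop($this->idle);
        }
        if ($this->total < $this->max) {
            $this->total++;
            return sabreCreateSession(); // placeholder for your SessionCreateRQ call
        }
        return null; // pool exhausted: queue the request or wait for a release
    }

    // Return the token once the response has arrived so the next request can use it.
    public function release($token) {
        $this->idle[] = $token;
    }
}

A keep-alive routine would periodically run a cheap request against idle tokens so they do not expire, and separate pools for read-only queries and PNR-touching transactions follow the same pattern.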

Related

Long-running operations in a web application

Web application operations are generally meant to be quick, to avoid long wait times for users. However, some operations the web application performs may be computationally intensive and take a fair bit of time. What is the best practice in REST for dealing with operations that may take several minutes yet require an immediate response to users? Is it okay for the web application to take several minutes to return the response to the HTTP request, or is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Is it okay for the web application to take several minutes to return the response of the HTTP request
No. Part of the problem with this approach is that if the server doesn't acknowledge the request in a timely fashion, the client won't know that it reached its intended destination.
is it better to return a 202 response, process in the background somewhere else, and then provide some form of notification to the user?
Yes. That's exactly what 202 Accepted is designed for.
The 202 response is intentionally noncommittal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
It can help, I think, to remember that we're talking about your integration domain; the client isn't talking to your app. It's instead talking to your API, which pretends to be a web site that the client can integrate with. So your client sends the request to the API, and the API responds with an accepted message accompanied by a bunch of links that will help the client continue with the protocol and eventually reach its goal.
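To make that concrete, here is a minimal sketch of the 202 pattern in PHP. The /reports and /jobs paths, the queueJob() helper, and the JSON shape are all made up for illustration; the essential pieces are the 202 status, a link to a status monitor, and a status resource the client can poll.

// POST /reports - accept the work and point the client at a status monitor.
$jobId = queueJob(file_get_contents('php://input')); // hypothetical: hand the payload to a background worker
http_response_code(202);                              // 202 Accepted: the work is not done yet
header('Content-Type: application/json');
header("Location: /jobs/$jobId");                     // status monitor the client can poll
echo json_encode([
    'status'  => 'accepted',
    'monitor' => "/jobs/$jobId",
]);

// GET /jobs/{id} would then return something like
// {"status": "running", "estimatedCompletion": "..."} and, once finished,
// a link to (or the representation of) the final result.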

Setting subscriptions on multiple Microsoft Graph objects

With Microsoft Graph, I can set a subscription on a resource. In my case an event. I am going to be using an admin authenticated account to access multiple calendars.
Is there a way to set a subscription to get notifications on all the calendars the admin can see?
If not, is there a way to send in a block of subscriptions with a single request? We are limited in how many requests we can make in a specific timeframe (I'm not sure what the limit is), but if I have 500 calendars I need to set subscriptions on so I get notifications of changes, how are you supposed to do this and not get hit by the requests-per-timeframe limit?
Currently, there isn't a way to send multiple subscription creation requests in the same HTTP REST call. Every different resource for which a subscription is being created would have its own HTTP call into the Graph REST API.
You can recommend a "batching" feature (so multiple REST requests can be processed in the same HTTP call to the Graph API) on UserVoice: https://officespdev.uservoice.com/
There is also a consideration that, in my experience, the number of simultaneous subscriptions allowed is around 20, so 500 subscriptions might be out of the question. The best advice I've been given on the subject is to loop through all the objects one at a time to refresh them in sequence. The throttling that follows is a different issue altogether.
When a 429/"Unknown Error" comes back (i.e. throttling), it comes with a Retry-After header, which should be observed. I might point out that throttling, for me, is still a huge issue.
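A rough sketch of that "one at a time, honor Retry-After" loop in PHP with cURL. The $userIds array, $accessToken, and notification URL are placeholders; the request body follows the documented subscription shape, but treat this as an outline rather than a tested client.

foreach ($userIds as $userId) {
    do {
        $ch = curl_init('https://graph.microsoft.com/v1.0/subscriptions');
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HEADER         => true,   // keep headers so Retry-After can be read
            CURLOPT_HTTPHEADER     => [
                "Authorization: Bearer $accessToken",
                'Content-Type: application/json',
            ],
            CURLOPT_POSTFIELDS     => json_encode([
                'changeType'         => 'created,updated,deleted',
                'notificationUrl'    => 'https://example.com/notifications', // placeholder
                'resource'           => "users/$userId/events",
                'expirationDateTime' => gmdate('Y-m-d\TH:i:s\Z', time() + 3600),
            ]),
        ]);
        $response = curl_exec($ch);
        $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status == 429) {
            // Throttled: wait as long as Retry-After asks before retrying this one.
            $wait = preg_match('/Retry-After:\s*(\d+)/i', $response, $m) ? (int)$m[1] : 10;
            sleep($wait);
        }
    } while ($status == 429);
}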

There have been too many calls from this ad-account. Wait a bit and try again

I am trying to fetch the report stats from our account. I need to make async calls because otherwise I would get an error that the data is too old.
When I create multiple requests I get the error: "There have been too many calls from this ad-account. Wait a bit and try again."
I have only made about 30 requests in a short time because of the way the async reports work. Is there a better way to fetch the reporting data? And if there is not, is there a way to see the request score that is mentioned in the documentation?
Another question: is there a difference in the number of requests allowed when your app has development access?
Thanks in advance,
Jorik
First, according to the access level docs here, there is heavy rate limiting on apps that are in the development stage.
Second, there are multiple endpoints for fetching reports, such as ad-account-level, campaign-level, ad-set-level, and ad-level reports; here is a link to the docs for the Insights API (a short example follows the list of params below).
The available params are:
act_AD_ACCOUNT_ID/insights
CAMPAIGN_ID/insights
ADSET_ID/insights
AD_ID/insights
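As a small illustration (PHP with cURL), a campaign-level call might look like this; the campaign ID, access token, API version, and fields are placeholders to adjust to your setup:

// Fetch a few metrics for one campaign; adjust fields/params to your needs.
$params = http_build_query([
    'fields'       => 'impressions,clicks,spend',
    'access_token' => $accessToken,
]);
$ch = curl_init("https://graph.facebook.com/v19.0/$campaignId/insights?$params");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$insights = json_decode(curl_exec($ch), true);
curl_close($ch);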
Lastly, about rate limiting in the Marketing API: it uses a sliding-window method, which means there is no fixed count of requests per day; it is simply that a large number of requests in a short amount of time is not allowed.
Two things you can do (see the sketch below):
First, check the response of the API, and if the response is a rate-limit error, stop sending requests.
Second, use batch requests.
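Here is a rough sketch of both ideas in PHP with cURL. The batch payload shape follows the Graph API batching docs; the campaign/ad set IDs and token are placeholders, and the error codes checked (4, 17, 32) are the usual rate-limit codes but should be verified against the current error-code reference.

// Batch several insights requests into one HTTP call, then check for rate-limit errors.
$batch = json_encode([
    ['method' => 'GET', 'relative_url' => "$campaignId/insights?fields=impressions,spend"],
    ['method' => 'GET', 'relative_url' => "$adsetId/insights?fields=impressions,spend"],
]);
$ch = curl_init('https://graph.facebook.com/v19.0/');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => http_build_query([
        'access_token' => $accessToken,
        'batch'        => $batch,
    ]),
]);
$responses = json_decode(curl_exec($ch), true);
curl_close($ch);

foreach ($responses as $item) {
    $body = json_decode($item['body'], true);
    if (isset($body['error']) && in_array($body['error']['code'], [4, 17, 32])) {
        // Rate-limit error: stop issuing further requests and retry later.
        break;
    }
}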
Here is a gist from the troubleshooting guide on limits:
Troubleshooting
Timeouts
The most common issues causing failure at this endpoint are too many requests and time outs:
On /GET or synchronous requests, you can get out-of-memory or timeout errors.
On /POST or asynchronous requests, you can possibly get timeout errors. For asynchronous requests, it can take up to an hour to complete a request including retry attempts. For example if you make a query that tries to fetch large volume of data for many ad level objects.
Recommendations
There is no explicit limit for when a query will fail. When it times out, try to break down the query into smaller queries by putting in filters like date range.
Unique metrics are time consuming to compute. Try to query unique metrics in a separate call to improve performance of non-unique metrics.
Rate Limiting
The Facebook Insights API utilizes rate limiting to ensure an optimal reporting experience for all of our partners. For more information and suggestions, see our Insights API Limits & Best Practices.
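The recommendation above to break a query down by date range usually comes down to adding a time_range filter and issuing several smaller requests, roughly like this sketch (IDs, fields, and the monthly ranges are illustrative):

// Instead of one huge query, request the insights month by month.
$ranges = [
    ['since' => '2023-01-01', 'until' => '2023-01-31'],
    ['since' => '2023-02-01', 'until' => '2023-02-28'],
];
foreach ($ranges as $range) {
    $params = http_build_query([
        'fields'       => 'impressions,clicks,spend',
        'time_range'   => json_encode($range),
        'access_token' => $accessToken,
    ]);
    $chunk = json_decode(file_get_contents(
        "https://graph.facebook.com/v19.0/act_$adAccountId/insights?$params"
    ), true);
    // ...merge $chunk['data'] into your own store...
}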

Application Request Limit issue (Occurring Randomly with Random Scenarios)

I have tried raising this concern on Facebook/Support/Bugs, but they said I should post implementation issues here. I have read about it everywhere and it seems to be quite an open issue; I am not sure if it will be solved or not.
So, what we are doing is this: we have clients - Android and iOS.
The Android/iOS apps allow users to log into the app and generate a token based on the permission set we have, and we pass this token to the server for fetching further data as and when required by the client. As our user base is increasing we are getting "Application request limit reached" quite often.
We are fetching photos of users and their friends using FQL. When fetching photos in parallel for around 8-10 different users, we sometimes reach the application request limit, which is quite random, and we are not aware of the actual scenario in which it breaks and how. According to Facebook the limit is 1M calls per day, but we are only making around 80K to 1 lakh (100K) API calls in a day, though as users increase it is stretching a bit further - roughly 200 calls/user or fewer. We tried doing batch calls as well, and we still hit the application request limit.
If any of you could help us understand the complete concept of the API limit and how this can be handled, we would really appreciate the help. We want to understand how the API limit is decided and over which interval its rate is calculated, so that we can configure things on our side accordingly.
Earlier in the day, we ran into a unique API call issue. Our server started failing on API calls for user tokens that are with us. We (on our systems, other than the server) tried fetching data for those tokens (simple calls - /me or /me/home), and it worked for us but not for the server. We then set up another server and redirected the requests to the new server, and that server worked well for the same set of users. We are not sure what went wrong in this case and how it breaks. Please help.
Many Thanks,
Reno Jones
Did you look at the Insights -> Developer section of developer.facebook.com for your app?
This will show you a breakdown per api call, including warnings and ones that are currently being throttled and why.
Also, are you sure you're using User token authorization and not just your App token?
Beyond that, we take the information from Insights to find API calls to cache on our side rather than hitting Facebook every time. You will likely have to do something similar if you're not doing so already. They have limits for calling too often, as well as for requesting too much data. For the latter, we had to reduce the amount of historical data we requested.
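To illustrate the caching idea, here is a minimal sketch using a local file cache with a one-hour TTL; APCu, Redis, or memcached would slot in the same way, and the helper name is made up.

// Return a cached Graph response if it is fresh enough; otherwise fetch and store it.
function cachedGraphGet($path, $accessToken, $ttl = 3600) {
    $cacheFile = sys_get_temp_dir() . '/fb_' . md5($path . $accessToken);
    if (file_exists($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
        return json_decode(file_get_contents($cacheFile), true);
    }
    $sep  = strpos($path, '?') === false ? '?' : '&';
    $body = file_get_contents("https://graph.facebook.com/$path{$sep}access_token=$accessToken");
    file_put_contents($cacheFile, $body);
    return json_decode($body, true);
}

// e.g. cachedGraphGet('me/friends', $userToken) hits Facebook at most once per hour
// per user/path combination.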

Facebook Application Request limit reached

I am getting an FB error: "This operation can't be completed: Application request limit reached".
Does anybody know why this is so? How do I check the limit? How do I increase the limit? What does the limit allocation depend on?
I recently ran across this issue doing a large number of requests using an application access token (the initial project requirements mandated that the user shouldn't have to authorize the app).
After much frustration, we finally were put in touch with a contact at Facebook who provided the following info in response to my question regarding request limits:
There is a limit, but it's pretty high, it should be difficult to hit unless they're using the same access tokens for all calls and not caching results, etc. It's 600 calls per 600 seconds per access token.
Ultimately we ended up requiring the user to authorize, as Facebook does not seem to distinguish between user access tokens (one token per user) and application access tokens (one token for all users) when calculating its seemingly arbitrary request limits.
If you are running into this error with a user access token, you may need to optimize your API calls (possibly by combining FQL queries or replacing multiple Graph requests with a single FQL query).
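If you do stay with user tokens, the 600-calls-per-600-seconds figure quoted above can also be enforced on your side with a small sliding-window throttle. This in-memory sketch is illustrative only; across multiple processes you would need a shared store such as Redis.

$callLog = []; // access token => timestamps of recent requests

function throttle($token, &$callLog, $maxCalls = 600, $window = 600) {
    $now = time();
    // Keep only the timestamps that are still inside the sliding window.
    $callLog[$token] = array_filter(
        $callLog[$token] ?? [],
        function ($t) use ($now, $window) { return $now - $t < $window; }
    );
    if (count($callLog[$token]) >= $maxCalls) {
        // The budget for this token is spent; wait until the oldest call ages out.
        sleep($window - ($now - min($callLog[$token])) + 1);
    }
    $callLog[$token][] = time();
}

// Call throttle($userToken, $callLog) right before each Graph/FQL request.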
Try this in your PHP code:
Make 50 consecutive FQL calls, then pause for 10 seconds (sleep(10)), and repeat.
// Inside the loop over your FQL calls: pause every 50 requests to stay under the limit.
if ($nr % 50 == 0)
{
    sleep(10); // wait 10 seconds before the next block of calls
    echo "\n\n---Block #" . ++$numBloque . "---\n\n";
}