What is the throttling time between requests for Box API? - box-view-api

Can't find it anywhere on the Box View site. Just wondering how long I should wait between trying to convert documents.

When you make a request that exceeds a rate limit, you'll receive a response with an HTTP status of 429 TOO MANY REQUESTS, and the retry time will be included in a Retry-After header, e.g.
HTTP/1.1 429 Too Many Requests
Retry-After: {retry time in seconds}
You can use the {retry time in seconds} to determine how long to wait.
Docs: http://developers.box.com/view/#rate-limiting
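In practice the client side is just a loop that waits for the advertised number of seconds before retrying. Here is a minimal sketch in Python using the requests library; the URL, headers and payload are placeholders rather than the exact Box View request shapes:

import time
import requests

def convert_with_backoff(url, headers, payload, max_attempts=5):
    for attempt in range(max_attempts):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # The retry time, in seconds, comes back in the Retry-After header.
        wait = int(response.headers.get("Retry-After", "1"))
        time.sleep(wait)
    raise RuntimeError("still rate limited after %d attempts" % max_attempts)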

Related

When to use HTTP status code 425 "Too Early"

The 425 "Too Early" status code's description:
Indicates that the server is unwilling to risk processing a request that might be replayed
How is it used in a real world scenario? Examples would be appreciated.
You can use a 425 as an error code to handle idempotent requests.
Real world example: I want a request to my API to send money to someone through some crusty, unreliable old bank's API. Like 60% of the time the underlying API is fast enough, but 40% of the time clients will time out while waiting. If they retry after a timeout, the request could potentially double bill them.
So in my API, I ask the sender to send a transactionId, and when they retry the request they resend the same transactionId. On my API's side I store that transactionId and then start the (potentially long-running) money transfer. When the transfer finishes, I save the result against the transactionId and return 200(transferResult) to the sender.
If the client gets impatient and retries, the next web request will see that that transactionId is still in flight and return a 425 Too Early. They can then wait a few seconds and try again, getting more 425 Too Early responses, until the transfer finishes and I return the 200(transferResult) to the sender.
I know this answer is 6 months late, but maybe it helps to understand what a 425 can be used for.
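To make the flow concrete, here is a rough Python/Flask sketch of the transactionId handling described above; the in-memory dict and the start_transfer helper are stand-ins for real storage and the bank API, not anything specific:

from flask import Flask, jsonify, request

app = Flask(__name__)
transactions = {}  # transactionId -> {"status": "in_flight" | "done", "result": ...}

def start_transfer(payload):
    # Stand-in for the slow, unreliable bank API call.
    return {"transferred": True, "payload": payload}

@app.route("/transfers/<transaction_id>", methods=["POST"])
def transfer(transaction_id):
    entry = transactions.get(transaction_id)
    if entry is None:
        # First time we see this transactionId: record it and start the work.
        transactions[transaction_id] = {"status": "in_flight", "result": None}
        result = start_transfer(request.get_json())  # potentially long-running
        transactions[transaction_id] = {"status": "done", "result": result}
        return jsonify(result), 200
    if entry["status"] == "in_flight":
        # A retry arrived while the original transfer is still running.
        return jsonify({"error": "transfer still processing"}), 425
    # Transfer already finished: replay the stored result instead of re-running it.
    return jsonify(entry["result"]), 200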

Cannot use sc_http_req_rate when the counter isn’t currently tracked

As per documentation, sc_http_req_rate will return the rate of http requests from the stick table, but only from the currently tracked counters.
From testing, this means that if you are incrementing your counter using http-response rather than http-request, the field is unavailable. It cannot be used for rate limiting, or sent on to the backends.
A down-to-earth consequence of this is that I cannot limit the number of bots generating too many 404 requests.
How can I load and use the stick table http rate during the request if it’s only tracked in the response?
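For context, here is a stripped-down sketch of the kind of configuration I mean; the directive names are standard HAProxy, but the table size, period and threshold are just illustrative:

backend web
    stick-table type ip size 100k expire 10m store http_req_rate(10m)
    # The counter is only tracked in the response phase, once the status is known...
    http-response track-sc0 src if { status 404 }
    # ...so at request time there is no currently tracked counter for
    # sc_http_req_rate(0) to read from, and this deny never fires.
    http-request deny if { sc_http_req_rate(0) gt 10 }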
Thank you :)

What is the best HTTP status code for pin code's "Max Attempt Reached"?

I'm implementing a pin code authorization in my web application and one of the requirements is to limit the attempt count of a user to n times in a day.
So, what is the best HTTP status code returned to the user when they reached the max attempt count?
Now, I'm thinking of
403
429 (but it's not about sending too many requests)
400 (but the request payload isn't invalid)
429 is exactly what you want.
from: https://datatracker.ietf.org/doc/html/rfc6585
429 Too Many Requests
The 429 status code indicates that the user has sent too many
requests in a given amount of time ("rate limiting").
The response representations SHOULD include details explaining the
condition, and MAY include a Retry-After header indicating how long
to wait before making a new request.
For example:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Retry-After: 3600
<html>
<head>
<title>Too Many Requests</title>
</head>
<body>
<h1>Too Many Requests</h1>
<p>I only allow 50 requests per hour to this Web site per
logged in user. Try again soon.</p>
</body>
</html>
Note that this specification does not define how the origin server
identifies the user, nor how it counts requests. For example, an
origin server that is limiting request rates can do so based upon
counts of requests on a per-resource basis, across the entire server,
or even among a set of servers. Likewise, it might identify the user
by its authentication credentials, or a stateful cookie.
Responses with the 429 status code MUST NOT be stored by a cache.
Note how the spec invites the service / implementation to provide details. It does not say what kind of request counts as too many, or anything specific, really. Therefore, you will want to say something like "stop spamming my service because x, y, z".
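Applied to the PIN case, a hypothetical Python/Flask sketch might look like the following; MAX_ATTEMPTS_PER_DAY, the in-memory counters and the lookup_expected_pin helper are all illustrative, not part of any real library:

from flask import Flask, jsonify, request

app = Flask(__name__)
MAX_ATTEMPTS_PER_DAY = 5
attempts_today = {}  # user_id -> failed attempts today (resetting them is out of scope here)

def lookup_expected_pin(user_id):
    return "1234"  # stand-in for a real credential store

@app.route("/pin/verify", methods=["POST"])
def verify_pin():
    body = request.get_json()
    user_id = body["user_id"]
    if attempts_today.get(user_id, 0) >= MAX_ATTEMPTS_PER_DAY:
        # Explain the condition and hint at when it is worth trying again.
        resp = jsonify({"error": "Maximum PIN attempts reached, try again tomorrow"})
        resp.status_code = 429
        resp.headers["Retry-After"] = "86400"  # seconds until the daily counter resets
        return resp
    if body["pin"] != lookup_expected_pin(user_id):
        attempts_today[user_id] = attempts_today.get(user_id, 0) + 1
        return jsonify({"error": "Wrong PIN"}), 403
    return jsonify({"ok": True}), 200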

How does pagination affect rate limit

I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I in theory query 200 different stock tickers every hour and get back (up to) 800 messages?
If so that sounds like a great way to get around the 30 message limit.
The documentation is unclear on this and we are rolling out new documentation that explains this more clearly.
Every stream request will have a default and max limit of 30 messages per response, regardless of whether the cursor params are present or not. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if you send your access token along with the request. That's 200 requests per hour for unauthenticated requests and 400 for authenticated requests.
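As a back-of-the-envelope illustration, a Python sketch of that hourly batch might look like the following; the endpoint URL and response fields are assumptions based on the public docs rather than verified values:

import requests

SYMBOLS = ["AAPL", "MSFT", "GOOG"]  # up to ~200 per hour unauthenticated, ~400 with a token
STREAM_URL = "https://api.stocktwits.com/api/2/streams/symbol/{}.json"

def fetch_hourly_batch(symbols, access_token=None):
    messages = []
    for symbol in symbols:
        params = {"access_token": access_token} if access_token else {}
        resp = requests.get(STREAM_URL.format(symbol), params=params)
        resp.raise_for_status()
        # Each stream response carries at most 30 messages, so 200 symbols/hour
        # yields at most 6,000 messages (or 12,000 at 400 symbols with a token).
        messages.extend(resp.json().get("messages", []))
    return messages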

azure mobile service custom api script http request timeout

I implemented exports.get = function(request, response) of a custom API on an Azure mobile service. I download 5,000 records from the REST service and then prepare the JSON for the output. The problem is that downloading all the records takes too long, so the script exceeds the default timeout of 30 secs. I was wondering if there is a way to increase the timeout of the response.
I don't believe you can have a timeout greater than 30 seconds, as I have encountered this problem myself with Azure custom APIs. According to this link https://msdn.microsoft.com/en-us/library/azure/dd894042.aspx, table operations are limited to 30 seconds; it's not stated whether that applies to custom APIs, but it certainly appears to.
What I would recommend is to implement pagination and return a limited number of records at a time. Your parameters should include the start index and the number of records to return, and your response should include how many records there are in total, so the caller can work out how many requests are needed to fetch them all.
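A rough Python sketch of the client side of that paging contract; the start/count parameters and the totalCount field are illustrative names, not the actual Azure Mobile Services API:

import requests

def fetch_all(url, page_size=500):
    records, start, total = [], 0, None
    while total is None or start < total:
        resp = requests.get(url, params={"start": start, "count": page_size})
        resp.raise_for_status()
        body = resp.json()
        total = body["totalCount"]       # server reports how many records exist in total
        records.extend(body["results"])  # this page of records
        start += page_size               # each individual request stays well under 30 seconds
    return records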