What is the best HTTP status code for a PIN code's "Max Attempt Reached"?

I'm implementing PIN code authorization in my web application, and one of the requirements is to limit a user's attempts to n times in a day.
So, what is the best HTTP status code to return to the user when they reach the max attempt count?
Now, I'm thinking of
403
429 (but it's not about sending too many requests)
400 (but the request payload isn't invalid)

429 is exactly what you want.
from: https://datatracker.ietf.org/doc/html/rfc6585
429 Too Many Requests
The 429 status code indicates that the user has sent too many
requests in a given amount of time ("rate limiting").
The response representations SHOULD include details explaining the
condition, and MAY include a Retry-After header indicating how long
to wait before making a new request.
For example:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Retry-After: 3600

<html>
   <head>
      <title>Too Many Requests</title>
   </head>
   <body>
      <h1>Too Many Requests</h1>
      <p>I only allow 50 requests per hour to this Web site per
         logged in user. Try again soon.</p>
   </body>
</html>
Note that this specification does not define how the origin server
identifies the user, nor how it counts requests. For example, an
origin server that is limiting request rates can do so based upon
counts of requests on a per-resource basis, across the entire server,
or even among a set of servers. Likewise, it might identify the user
by its authentication credentials, or a stateful cookie.
Responses with the 429 status code MUST NOT be stored by a cache.
Note how the spec invites the service / implementation to provide details. It does not say which kind of request counts as "too many" or anything specific, really. Therefore, you will want to say something like "stop spamming my service because x, y, z".
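To make that concrete, here is a minimal sketch (not from the answer itself) of an endpoint that answers 429 with a Retry-After header once the daily PIN limit is hit. It assumes a Flask app and an in-memory counter; MAX_ATTEMPTS_PER_DAY, the route, and the field names are illustrative.

import time
from flask import Flask, jsonify, request

app = Flask(__name__)

MAX_ATTEMPTS_PER_DAY = 5      # hypothetical daily limit "n"
attempts = {}                 # user_id -> (count, window_start_epoch)

def seconds_until_reset(window_start):
    # Seconds left in the 24-hour window that started at window_start.
    return max(0, int(window_start + 86400 - time.time()))

@app.post("/pin/verify")
def verify_pin():
    user_id = request.json.get("user_id")
    count, started = attempts.get(user_id, (0, time.time()))
    if time.time() - started >= 86400:
        count, started = 0, time.time()          # daily window expired, reset
    if count >= MAX_ATTEMPTS_PER_DAY:
        resp = jsonify(error="Maximum PIN attempts reached for today")
        resp.status_code = 429
        resp.headers["Retry-After"] = str(seconds_until_reset(started))
        return resp
    attempts[user_id] = (count + 1, started)
    # ... verify the PIN here; 200 on success, 401/403 on a wrong PIN ...
    return jsonify(status="pin checked")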

Related

Cannot use sc_http_req_rate when the counter isn’t currently tracked

As per documentation, sc_http_req_rate will return the rate of http requests from the stick table, but only from the currently tracked counters.
From testing, this means that if you are incrementing your counter using http-response rather than http-request, the field is unavailable. It cannot be used for rate limiting, or sent on to the backends.
A down-to-earth consequence of this is that I cannot limit the number of bots generating too many 404 requests.
How can I load and use the stick table http rate during the request if it’s only tracked in the response?
Thank you :)

What is the most appropriate HTTP status code for an already processed POST request?

I have a RESTful API that is used by another internal application that posts updates to it.
The problem is that some unexpected peaks occur and, during those times, a request might take longer than 60 seconds (the limit defined by the load balancer, which I cannot change) to respond, which causes a 504 Gateway Timeout error.
When the latter application gets such a response, it will retry the request again after 10 minutes or so.
This caused some requests to be processed twice, because the first request was successful, but took more than 60 seconds.
So I decided to use Idempotency Keys in the requests to avoid this problem. The issue is that I don't know what I should return in this case.
Should I just stick with 200 OK? Should I return some 4xx code?
I'd say it highly depends on whether it is an error for you or not, and the exact response code is more a matter of taste than best practice. But since I guess you're rejecting the duplicated requests, you want to report an error code such as 409 Conflict:
Indicates that the request could not be processed because of conflict
in the current state of the resource, such as an edit conflict between
multiple simultaneous updates.
https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors
Whenever a resource conflict would be caused by fulfilling the request. Duplicate entries and deleting root objects when cascade-delete is not supported are a couple of examples.
https://www.restapitutorial.com/httpstatuscodes.html
A potentially useful reference is RFC 5789, which describes the PATCH method. Obviously, you aren't doing a patch, but the error handling is analogous.
For instance, if you were sending a JSON Patch document, you might be ensuring idempotent behavior by including a test operation that checks that the resource is in the expected initial state. After your operation has been applied once, that check would presumably fail on a replay. In that case, the JSON Patch spec's error handling section directs your attention to RFC 5789; its section 2.2 outlines a number of different possible cases.
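As a hedged illustration of that idea (not part of the original answer), a replayed request carrying a patch like the one below would fail its test operation once the first request has succeeded; the URL, field names, and expected values are made up.

import requests

patch = [
    {"op": "test", "path": "/status", "value": "pending"},      # expected initial state
    {"op": "replace", "path": "/status", "value": "processed"},
]

resp = requests.patch(
    "https://internal.example/api/orders/42",
    json=patch,
    headers={"Content-Type": "application/json-patch+json"},
)
# On a replay, the "test" op no longer matches; per RFC 5789 section 2.2 a server
# would typically answer with something like 409 Conflict or 422 Unprocessable Entity.
print(resp.status_code)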
Another source of inspiration is to look at RFC 7232 which describes conditional requests. The section on If-Match includes this gem:
An origin server MUST NOT perform the requested method if a received If-Match condition evaluates to false; instead, the origin server MUST respond with either a) the 412 (Precondition Failed) status code or b) one of the 2xx (Successful) status codes if the origin server has verified that a state change is being requested and the final state is already reflected in the current state of the target resource (i.e., the change requested by the user agent has already succeeded, but the user agent might not be aware of it, perhaps because the prior response was lost or a compatible change was made by some other user agent).
From this, I infer that 200 is completely acceptable if you can determine that the work was already done successfully.
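Putting the idempotency-key idea together, here is a minimal sketch under assumptions similar to the earlier example (Flask, an in-memory store); the header name "Idempotency-Key" and the 200-for-already-done behavior follow the reasoning above, everything else is illustrative.

from flask import Flask, jsonify, request

app = Flask(__name__)
processed = {}   # idempotency key -> previously computed result

@app.post("/updates")
def apply_update():
    key = request.headers.get("Idempotency-Key")
    if key and key in processed:
        # The work was already done successfully; replay the stored result with 200
        # (or return 409 Conflict if you prefer to flag the duplicate as an error).
        return jsonify(processed[key]), 200
    result = {"status": "applied"}   # ... perform the real update here ...
    if key:
        processed[key] = result
    return jsonify(result), 201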

200 vs 403 server response - which degrades server's performance more?

Some rogue people have set up server monitoring that connects to the server every 2 minutes to check if it's down (they connect from several different accounts, so they ping the server every 20 seconds or so). It's a simple GET request.
I have two options:
Leave it as it is (ie. allow them via a normal 200 server response).
Block them by either IP or user-agent (giving 403 response).
My question is: which is the better solution as far as server performance is concerned (i.e. which is less 'stressful' on the server) - 1 (200 response) or 2 (403 response)?
I'm inclined toward #1, since there would be no IP / user-agent checking, which should mean less stress on the server, correct?
It doesn't matter.
The cost of the status code and of an if-check on the user-agent string is completely dominated by network IO, GC and the other server subsystems.
If they just query every 2 minutes, I'd very much leave it alone. If they query a few hundred times per second, it's time to act.
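For scale, option 2 amounts to nothing more than something like the sketch below (hypothetical Flask route and user-agent marker); the single substring check is negligible next to the network round trip either way.

from flask import Flask, abort, request

app = Flask(__name__)
BLOCKED_AGENTS = ("RogueMonitor",)   # hypothetical user-agent substring to block

@app.get("/")
def index():
    ua = request.headers.get("User-Agent", "")
    if any(marker in ua for marker in BLOCKED_AGENTS):
        abort(403)               # option 2: block the monitor
    return "OK", 200             # option 1: just serve the page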

How does pagination affect rate limit

I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I, in theory, query 200 different stock tickers every hour and get back (up to) 800 messages?
If so that sounds like a great way to get around the 30 message limit.
The documentation is unclear on this and we are rolling out new documentation that explains this more clearly.
Every stream request will have a default and max limit of 30 messages per response, regardless of whether the cursor params are present or not. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if you send your access token along with the request. The limits are 200 requests per hour for non-authenticated requests and 400 per hour for authenticated requests.
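The arithmetic behind those totals, spelled out (numbers taken from the answer above):

MESSAGES_PER_RESPONSE = 30           # default and max per stream response

print(200 * MESSAGES_PER_RESPONSE)   # 6000 messages/hour, non-authenticated
print(400 * MESSAGES_PER_RESPONSE)   # 12000 messages/hour, authenticated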

What is the throttling time between requests for Box API?

Can't find it anywhere on the Box View site. Just wondering how long I should wait between trying to convert documents.
When you make a request that exceeds a rate limit, you'll receive a response with an HTTP status of 429 TOO MANY REQUESTS, and the retry time will be included as a Retry-After header, e.g.
HTTP/1.1 429 Too Many Requests
Retry-After: {retry time in seconds}
You can use the {retry time in seconds} to determine how long to wait.
Docs: http://developers.box.com/view/#rate-limiting
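A small client-side sketch of honoring that header, using the generic requests library rather than a Box SDK; the URL, token, and helper name are placeholders, and it assumes Retry-After is given in seconds as in the example above.

import time
import requests

def get_with_backoff(url, token, max_tries=5):
    # Retry on 429, waiting as long as the Retry-After header tells us to.
    for _ in range(max_tries):
        resp = requests.get(url, headers={"Authorization": "Bearer " + token})
        if resp.status_code != 429:
            return resp
        wait = int(resp.headers.get("Retry-After", 1))   # fall back to 1s if absent
        time.sleep(wait)
    return resp

resp = get_with_backoff("https://api.example.com/documents", "MY_API_TOKEN")
print(resp.status_code)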