Protect yourself against DoS attacks - web server

This might be something better suited for Serverfault, but many web developers who only come here will probably benefit from possible answers to this question.
The question is: How do you effectively protect yourself against Denial Of Service attacks against your webserver?
I asked myself this after reading this article
For those not familiar, here's what I remember about it: a DoS attack will attempt to occupy all your connections by repeatedly sending bogus headers to your servers.
By doing so, your server will reach the limit of possible simultaneous connections and, as a result, normal users can't access your site anymore.
Wikipedia provides some more info: http://en.wikipedia.org/wiki/Denial_of_service

There's no panacea, but you can make DoS attacks more difficult by doing some of the following:
Don't (or limit your willingness to) do expensive operations on behalf of unauthenticated clients
Throttle authentication attempts
Throttle operations performed on behalf of each authenticated client, and place their account on a temporary lockout if they do too many things in too short a time
Have a similar global throttle for all unauthenticated clients, and be prepared to lower this setting if you detect an attack in progress
Have a flag you can use during an attack to disable all unauthenticated access
Don't store things on behalf of unauthenticated clients, and use a quota to limit the storage for each authenticated client
In general, reject all malformed, unreasonably complicated, or unreasonably huge requests as quickly as possible (and log them to aid in detection of an attack)
Don't use a pure LRU cache if requests from unauthenticated clients can result in evicting things from that cache, because you will be subject to cache-poisoning attacks (where a malicious client asks for lots of different infrequently used things, causing you to evict all the useful things from your cache and to do much more work to serve your legitimate clients)
Remember, it's important to outright reject throttled requests (for example, with an HTTP 503 Service Unavailable response, or whatever response is appropriate to the protocol you are using) rather than queueing them. If you queue them, the queue will just eat up all your memory and the DoS attack will be at least as effective as it would have been without the throttling.
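To make the reject-don't-queue idea concrete, here is a minimal sketch of a per-client token-bucket throttle written as Python WSGI middleware. The rate, burst size, and the choice to identify clients by REMOTE_ADDR are assumptions for illustration, not a definitive implementation; behind a proxy you would key on the real client address instead.

    # Minimal sketch: per-client token bucket that rejects with 503 instead of queueing.
    # Assumptions: clients are keyed by REMOTE_ADDR; rate/burst values are placeholders.
    import time
    from collections import defaultdict

    class ThrottleMiddleware:
        def __init__(self, app, rate=5.0, burst=20):
            self.app = app
            self.rate = rate      # tokens refilled per second
            self.burst = burst    # maximum bucket size
            self.buckets = defaultdict(lambda: (float(burst), time.monotonic()))

        def __call__(self, environ, start_response):
            client = environ.get("REMOTE_ADDR", "unknown")
            tokens, last = self.buckets[client]
            now = time.monotonic()
            tokens = min(self.burst, tokens + (now - last) * self.rate)
            if tokens < 1:
                # Reject outright; queueing would let the attacker fill our memory.
                self.buckets[client] = (tokens, now)
                start_response("503 Service Unavailable",
                               [("Content-Type", "text/plain"), ("Retry-After", "5")])
                return [b"Too many requests, try again later.\n"]
            self.buckets[client] = (tokens - 1, now)
            return self.app(environ, start_response)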
Some more specific advice for HTTP servers:
Make sure your web server is configured to reject POST messages without an accompanying Content-Length header, to reject requests (and throttle the offending client) which exceed the stated Content-Length, and to reject requests whose Content-Length is unreasonably large for the service the POST (or PUT) is aimed at.
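As a hedged sketch of those Content-Length checks, the following WSGI wrapper rejects POST/PUT requests with a missing or non-numeric Content-Length and those exceeding an assumed per-service limit; the 1 MiB value and the status codes chosen (411 and 413) are illustrative, not prescriptive.

    # Sketch of the Content-Length checks described above, as a WSGI wrapper.
    # MAX_BODY is an assumed limit; pick one appropriate for each endpoint.
    MAX_BODY = 1 * 1024 * 1024  # 1 MiB

    def length_guard(app):
        def guarded(environ, start_response):
            if environ.get("REQUEST_METHOD") in ("POST", "PUT"):
                length = environ.get("CONTENT_LENGTH") or ""
                if not length.isdigit():
                    start_response("411 Length Required", [("Content-Type", "text/plain")])
                    return [b"Content-Length required.\n"]
                if int(length) > MAX_BODY:
                    start_response("413 Payload Too Large", [("Content-Type", "text/plain")])
                    return [b"Request body too large.\n"]
            return app(environ, start_response)
        return guarded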

For this specific attack (as long as it is GET-based), a load balancer or a WAF which only passes complete requests on to the web server would work.
The problems start when POST is used instead of GET (which is easy to do), because you can't know whether it is a malicious POST or just a really slow upload from a user.
You can't really protect your web app from DoS per se, for a simple reason: your resources are limited, while the attacker potentially has unlimited time and resources to perform the DoS. And most of the time it's cheap for the attacker to perform the required steps; e.g. for the attack mentioned above, a few hundred slow-running connections are no problem.

Asynchronous servers, for one, are more or less immune to this particular form of attack. I, for instance, serve my Django apps using an Nginx reverse proxy, and the attack didn't seem to affect its operation whatsoever. Another popular asynchronous server is lighttpd.
Mind you, this attack is dangerous because it can be performed even by a single machine with a slow connection. However, common DDoS attacks pit your server against an army of machines, and there's little you can do to protect yourself from them.

Short answer:
You cannot protect yourself against a DoS.
And I don't agree that it belongs on Serverfault, since DoS is categorized as a security issue and is definitely related to programming.

Related

Is there standard way of making multiple API calls combined into one HTTP request?

While designing REST APIs I regularly face the challenge of dealing with batch operations (e.g. deleting or updating many entities at once) to reduce the overhead of many TCP client connections. In each particular situation the problem is usually solved by adding a custom API method for the specific operation (e.g. POST /files/batchDelete, which accepts the ids in the request body), which doesn't look pretty from the point of view of REST API design principles but does the job.
But a general solution for the problem is still desirable. Recently I found the Google Cloud Storage JSON API batching documentation, which looks like a pretty general solution to me; a similar format could be used for any HTTP API, not just Google Cloud Storage. So my question is: does anybody know of a general standard (a standard or its draft, a guideline, a community effort, or similar) for combining multiple API calls into one HTTP request?
I'm aware of the capabilities of HTTP/2, which include using a single TCP connection for many HTTP requests, but my question is aimed at the application level. That still makes sense in my opinion because, despite the ability to use HTTP/2, handling this at the application level seems like the only way to guarantee it for any client, including HTTP/1, which is currently the most used version of HTTP.
TL;DR
Neither REST nor HTTP is ideal for batch operations.
Caching, which is one of REST's constraints (not optional but mandatory), usually gets in the way of batch processing in some form.
It might be beneficial not to expose the data you want to update or remove in batch as resources of their own, but as data elements within a single resource, like a data table in an HTML page. Here, updating or removing all or some of the entries should be straightforward.
If the system in general is write-intensive, it is probably better to think of other solutions, such as exposing the DB directly to those clients, to spare a further level of indirection and complexity.
Utilizing caching may take a lot of workload off the server and even spare unnecessary connections.
To start with, neither REST nor HTTP is ideal for batch operations. As Jim Webber pointed out, the application domain of HTTP is the transfer of documents over the Web. This is what HTTP does and what it is good at. However, any business rules we conclude from it are just a side effect of that document management, and we have to come up with solutions to turn these document-management side effects into something useful.
As REST is just a generalization of the concepts used in the browsable Web, it is no surprise that the same concepts that apply to Web development also apply to REST development in some form. Thereby a question like how something should be done in REST usually comes down to answering how the same thing should be done on the Web.
As mentioned before, HTTP isn't ideal in terms of batch processing actions. Sure, a GET request may retrieve multiple results, though in reality you obtain one response containing links to further resources. The creation of resources has, according to the HTTP specification, to be indicated with a Location header that points to the newly created resource. POST is defined as an all-purpose method that allows tasks to be performed according to server-specific semantics. So you could basically use it to create multiple resources at once. However, the HTTP spec clearly lacks support for indicating the creation of multiple resources at once, as the Location header may only appear once per response and may define only one URI in it. So how can a server indicate the creation of multiple resources to the client?
A further indication that HTTP isn't ideal for batch processing is that a URI must reference a single resource. That resource may change over time, though the URI can't ever point to multiple resources at once. The URI itself is, more or less, used as a key by caches which store a cacheable response representation for that URI. As a URI may only ever reference one single resource, a cache will also only ever store the representation of one resource for that URI. A cache will invalidate a stored representation for a URI if an unsafe operation is performed on that URI. In the case of a DELETE operation, which is by nature unsafe, the representation for the URI the DELETE is performed on will be removed. If you now "redirect" the DELETE operation to remove multiple backing resources at once, how should a cache take notice of that? It only operates on the URI invoked. Hence even when you delete multiple resources in one go via DELETE, a cache might still serve clients with outdated information, as it simply didn't take notice of the removal yet and its freshness value would still indicate a fresh-enough state. Unless you disable caching by default, which somewhat violates one of REST's constraints, or reduce the time period a representation is considered fresh to a very low value, clients will probably get served with outdated information. You could of course perform an unsafe operation on each of these URIs to "clear" the cache, though in that case you could just as well have invoked the DELETE operation on each resource you wanted to batch-delete in the first place.
It gets a bit easier, though, if the batch of data you want to remove is not explicitly captured via its own resources but as data of a single resource. Think of a data table on a Web page where you have certain form elements, such as a checkbox you can tick to mark an entry as a delete candidate, and then, after invoking the submit button, the selected elements are sent to the server, which performs the removal of these items. Here only the state of one resource is updated, and thus a simple POST, PUT or even PATCH operation can be performed on that resource URI. This also goes well with caching, as outlined before, as only one resource has to be altered, and the usage of unsafe operations on that URI will automatically lead to an invalidation of any stored representation for the given URI.
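A minimal sketch of that idea, assuming Flask and a hypothetical /files resource with a "delete" form field (both names are assumptions for illustration):

    # Hypothetical sketch: the ticked checkboxes are submitted to one resource,
    # so only the state of /files changes and caches only need to invalidate one URI.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    files = {"1": "a.txt", "2": "b.txt", "3": "c.txt"}  # stand-in data store

    @app.route("/files", methods=["POST"])
    def update_file_list():
        for file_id in request.form.getlist("delete"):
            files.pop(file_id, None)
        return jsonify(sorted(files.values()))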
The above-mentioned usage of form elements to mark certain entries for removal depends, however, on the media type issued. In the case of HTML, its forms section specifies the available components and their affordances. An affordance is the knowledge of what you can and should do with certain objects, e.g. a button or link may want to be pushed, a text field may expect numeric or alphanumeric input which may further be length-limited, and so on. Other media types, such as hal-forms, halform or ion, attempt to provide form representations and components for a JSON-based notation; however, support for such media types is still quite limited.
As one of your concerns is the number of client connections to your service, I assume you have a write-intensive scenario, as in read-intensive cases caching would probably take away a good chunk of load from your server. For example, the BBC once reported that they could reduce the load on their servers drastically just by introducing a one-minute caching interval for recently requested resources. This mainly affected their start page and the linked articles, as people clicked on the latest news more often than on old news. Receiving a couple of thousand, if not hundreds of thousands of, requests per minute, they could, as mentioned before, reduce the number of requests actually reaching the server significantly and therefore take a huge load off their servers.
Write-intensive use cases, however, can't benefit from caching as much as read-intensive cases, as the cache would get invalidated quite often and the actual requests would be forwarded to the server for processing. If the API is more or less used to perform CRUD operations, as so many "REST" APIs do in reality, it is questionable whether it wouldn't be preferable to expose the database directly to the clients. Almost all modern database vendors ship with sophisticated user-rights management options and allow creating views that can be exposed to certain users. The "REST API" on top of it basically just adds a further level of indirection and complexity in such a case. By exposing the DB directly, performing batch updates or deletions shouldn't be an issue at all, as support for such operations should already be built into the DB layer through the respective query languages.
In regard to the number of connections clients create: HTTP has allowed the reuse of connections since 1.0 via the Connection: keep-alive header directive. In HTTP/1.1, persistent connections are used by default unless explicitly requested to close via the respective Connection: close header directive. HTTP/2 introduced full-duplex connections that allow many channels, and therefore many requests, to reuse the same connection at the same time. This is more or less a fix for the connection limitation suggested in RFC 2616, which plenty of Web developers worked around by using CDNs and similar tricks. Currently most implementations use a maximum limit of 100 channels, and therefore simultaneous downloads, via a single connection AFAIK.
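On the client side, a persistent connection is easy to take advantage of; as a sketch (with a placeholder URL), Python's requests.Session keeps the underlying TCP (and TLS) connection alive across calls, so many small requests do not each pay the connection-setup cost:

    # Sketch: reuse one pooled connection for many requests instead of
    # opening a new one per call. The URL is a placeholder.
    import requests

    with requests.Session() as session:
        for item_id in range(10):
            response = session.get(f"https://api.example.com/items/{item_id}")
            response.raise_for_status()
            print(response.json())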
Usually opening and closing a connection takes a bit of time and server resources, and the more open connections a server has to deal with, the more the system may suffer. Open connections with hardly any traffic, though, aren't a big issue for most servers. While connection creation was usually considered the costly part, through the usage of persistent connections that cost has now shifted towards the number of requests issued, hence the request for sending out batch requests, which HTTP is not really made for. Again, as mentioned throughout the post, through the smart utilization of caching plenty of requests may never need to reach the server at all; this is probably one of the best optimization strategies to reduce the number of simultaneous requests. Probably the best advice to give in such a case is to have a look at what kind of resources are requested frequently, which requests take up a lot of processing capacity, and which ones can easily be answered by utilizing caching options.
reduce overhead of many tcp client connections
If this is the crux of the issue, the easiest way to solve this is to switch to HTTP/2
In a way, HTTP/2 does exactly what you want. You open 1 connection, and using that connection you can send many HTTP requests in parallel. Unlike batching in a single HTTP request, it's mostly transparent for clients, and requests and responses can be processed out of order.
Ultimately batching multiple operations in a single HTTP request is always a network hack.
HTTP/2 is widely available. If HTTP/1.1 is still the most used version (this might be true, but the gap is closing), this has more to do with servers not yet being set up for it than with clients.
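As a rough illustration (not an endorsement of any particular library), the httpx package can multiplex many requests over one HTTP/2 connection; this assumes httpx is installed with its optional HTTP/2 support (pip install "httpx[http2]") and that the server, whose URL here is a placeholder, actually speaks HTTP/2:

    # Sketch: many logical requests multiplexed over a single HTTP/2 connection.
    import asyncio
    import httpx

    async def fetch_all(ids):
        async with httpx.AsyncClient(http2=True) as client:
            tasks = [client.get(f"https://api.example.com/items/{i}") for i in ids]
            responses = await asyncio.gather(*tasks)
        return [r.status_code for r in responses]

    print(asyncio.run(fetch_all(range(10))))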

What can go wrong if we do NOT follow RESTful best practices?

TL;DR : scroll down to the last paragraph.
There is a lot of talk about best practices when defining RESTful APIs: what HTTP methods to support, which HTTP method to use in each case, which HTTP status code to return, when to pass parameters in the query string vs. in the path vs. in the content body vs. in the headers, how to do versioning, result set limiting, pagination, etc.
If you are already determined to make use of best practices, there are lots of questions and answers out there about what is the best practice for doing any given thing. Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
Most of the best practice guidelines direct developers to follow the principle of least surprise, which, under normal circumstances, would be a good enough reason to follow them. Unfortunately, REST-over-HTTP is a capricious standard, the best practices of which are impossible to implement without becoming intimately involved with it, and the drawback of intimate involvement is that you tend to end up with your application being very tightly bound to a particular transport mechanism. So, some people (like me) are debating whether the benefit of "least surprise" justifies the drawback of littering the application with REST-over-HTTP concerns.
A different approach examined as an alternative to best practices suggests that our involvement with HTTP should be limited to the bare minimum necessary in order to get an application-defined payload from point A to point B. According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request. The request will either fail to be delivered, in which case it is the responsibility of the web server to return an "HTTP 404 Not Found" to the client, or it will be successfully delivered, in which case the delivery of the request was "HTTP 200 OK" as far as the transport protocol is concerned, and anything else that might go wrong from that point on is exclusively an application concern, and none of the transport protocol's business. Obviously, this approach is kind of like saying "let me show you where to stick your best practices".
Now, there are other voices that say that things are not that simple, and that if you do not follow the RESTful best practices, things will break.
The story goes that, for example, in the event of unauthorized access, you should return an actual "HTTP 401 Unauthorized" (instead of a successful response containing a json-serialized UnauthorizedException) because upon receiving the 401, the browser will prompt the user for credentials. Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Another, more sophisticated way the story goes is that usually, between the client and the server exist proxies, and these proxies inspect HTTP requests and responses, and try to make sense out of them, so as to handle different requests differently. For example, they say, somewhere between the client and the server there may be a caching proxy, which may treat all requests to the exact same URL as identical and therefore cacheable. So, path parameters are necessary to differentiate between different resources, otherwise the caching proxy might only ever forward a request to the server once, and return cached responses to all clients thereafter. Furthermore, this caching proxy may need to know that a certain request-response exchange resulted in a failure due to a particular error such as "Permission Denied", so as to again not cache the response, otherwise a request resulting in a temporary error may be answered with a cached error response forever.
So, my questions are:
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices? Are these concerns about proxies real? Are caching proxies really so dumb as to cache REST responses? Is it hard to configure the proxies to behave in less dumb ways? Are there drawbacks in configuring the proxies to behave in less dumb ways?
It's worth considering that what you're suggesting is the way that HTTP APIs used to be designed for a good 15 years or so. API designers are tending to move away from that approach these days. They really do have their reasons.
Some points to consider if you want to avoid using ReST over HTTP:
ReST over HTTP is an efficient use of the HTTP/S transport mechanism. Avoiding the ReST paradigm runs the risk of every request / response being wrapped in verbose envelopes. SOAP is an example of this.
ReST encourages client and server decoupling by putting application semantics into standard mechanisms - HTTP and XML/JSON (or others data formats). These protocols and standards are well supported by standard libraries and have been built up over years of experience. Sure, you can create your own 'unauthorized' response body with a 200 status code, but ReST frameworks just make it unnecessary so why bother?
ReST is a design approach which encourages a view of your distributed system which focuses on data rather than functionality, and this has proven a useful mechanism for building distributed systems. Avoiding ReST runs the risk of focusing on very RPC-like mechanisms which have some risks of their own:
they can become very fine-grained and 'chatty'
which can be an inefficient use of network bandwidth
which can tightly couple client and server, through introducing statefulness and temporal coupling between requests.
and can be difficult to scale horizontally
Note: there are times when an RPC approach is actually a better way of breaking down a distributed system than a resource-oriented approach, but they tend to be the exceptions rather than the rule.
existing tools for developers make debugging / investigation of ReSTful APIs easier. It's easy to use a browser to do a simple GET, for example. And tools such as Postman or RestClient already exist for more complex ReST-style queries. In extreme situations tcpdump is very useful, as are browser debugging tools such as Firebug. If every API call has application-layer semantics built on top of HTTP (e.g. special response types for particular error situations), then you immediately lose some value from this tooling. Building SOAP envelopes in Postman is a pain. As is reading SOAP response envelopes.
network infrastructure around caching really can be as dumb as you're asking. It's possible to get around this, but you really do have to think about it, and it will inevitably involve increased network traffic in some situations where it's unnecessary. And caching responses for repeated queries is one way in which APIs scale out, so you'll likely need to 'solve' the problem of how to cache repeated queries yourself (i.e. reinvent the wheel).
Having said all that, if you want to look into a pure message-passing design for your distributed system rather than a ReSTful one, why consider HTTP at all? Why not simply use some message-oriented middleware (e.g. RabbitMQ) to build your application, possibly with some sort of HTTP bridge somewhere for Internet-based clients? Using HTTP as a pure transport mechanism involving a simple 'message accepted / not accepted' semantics seems overkill.
REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. -- Roy T Fielding
Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
When in doubt, go back to the source
Fielding's dissertation really does quite a good job at explaining how the REST architectural constraints ensure that you don't destroy the properties those constraints are designed to protect.
Keep in mind - before the web (which is the reference application for REST), "web scale" wasn't a thing; the notion of a generic client (the browser) that could discover and consume thousands of customized applications (provided by web servers) had not previously been realized.
According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request.
Yup - that's a thing, it's called RPC; you are effectively taking the web, and stripping it down to a bare message transport application that just happens to tunnel through port 80.
In doing so, you have stripped away the uniform interface -- you've lost the ability to use commodity parts in your deployment, because nobody can participate in the conversation unless they share the same interpretation of the message data.
Note: that doesn't at all imply that RPC is "broken"; architecture is about tradeoffs. The RPC approach gives up some of the value derived from the properties guarded by REST, but that doesn't mean it doesn't pick up value somewhere else. Horses for courses.
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices?
Cheap scaling of reads - as your offering becomes more popular, you can service more clients by installing a farm of commodity reverse-proxies that will serve cached representations where available, and only put load on the server when no fresh representation is available.
Prefetching - if you are adhering to the safety provisions of the interface, agents (and intermediaries) know that they can download representations at their own discretion without concern that the operators will be liable for loss of capital. AKA - your resources can be crawled (and cached)
Similarly, use of idempotent methods (where appropriate) communicates to agents (and intermediaries) that retrying the send of an unacknowledged message causes no harm (for instance, in the event of a network outage); see the sketch after this list.
Independent innovation of clients and servers, especially cross organizations. Mosaic is a museum piece, Netscape vanished long ago, but the web is still going strong.
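Here is a small sketch of that retry point, using Python's requests with urllib3's Retry restricted to idempotent methods; the endpoint URL is a placeholder, and note that `allowed_methods` requires urllib3 >= 1.26 (older releases call it `method_whitelist`):

    # Sketch: retry only idempotent methods automatically; a resent PUT is harmless.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    retry = Retry(total=3,
                  backoff_factor=0.5,
                  status_forcelist=[502, 503, 504],
                  allowed_methods=["GET", "HEAD", "PUT", "DELETE"])

    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))

    # Safe to resend after a dropped response, because PUT is idempotent.
    session.put("https://api.example.com/items/42", json={"name": "example"})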
Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Of course they are -- where do you think you are reading this answer?
So far, REST works really well at exposing capabilities to human agents; which is to say that the server side is so ubiquitous at this point that we hardly think about it any more. The notion that you -- the human operator -- can use the same application to order pizza, run diagnostics on your house, and remote start your car is as normal as air.
But you are absolutely right that replacing the human still seems a long ways off; there are various standards and media types for communicating semantic content of data -- the automated client can look at markup, identify a phone number element, and provide a customized array of menu options from it -- but building into agents the sorts of fuzzy intelligence needed to align offered capabilities with goals, or to recover from error conditions, seems to be a ways off.

REST - can clients cache links to resources?

Let's say you've got a fully hypermedia-driven API. Consumers have to navigate three resources, via following hypermedia, until they can get to the resource they want. Is there any reason a client could not cache these steps temporarily and go directly to the resource they want?
I know the goal of REST is to decouple clients and servers, but if you've got 5 web requests going on behind the scenes the user experience could be poor waiting for all this to happen.
The worst case I can think of is that a cached URL gets changed, in which case the client will just start from the entry point again and re-cache the steps.
Caching on the client side is going to be very important for a lot of well performing Hypermedia clients. Here is some more specific guidance straight from Fielding's dissertation:
The advantage of adding cache constraints is that they have the potential to partially or completely eliminate some interactions, improving efficiency, scalability, and user-perceived performance by reducing the average latency of a series of interactions. The trade-off, however, is that a cache can decrease reliability if stale data within the cache differs significantly from the data that would have been obtained had the request been sent directly to the server.
There are trade-offs, but even a short caching time frame will greatly improve performance. Ideally the hypermedia API will provide caching guidance. This could be done in the same manner that HTML caching works in the browser, with the Expires and Cache-Control headers.
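For instance, a minimal sketch (assuming Flask and a hypothetical entry-point URL) of an API response that carries such caching guidance:

    # Sketch: attach Cache-Control so clients and intermediaries know how long
    # this representation (and the links it contains) stays fresh.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/entrypoint")
    def entrypoint():
        response = jsonify({"_links": {"orders": {"href": "/api/orders"}}})
        response.headers["Cache-Control"] = "public, max-age=60"
        return response

Even the short 60-second lifetime used here would spare repeated navigation requests from hitting the server.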
Also if the resource has moved then the API should inform you with the proper 301 Moved Permanently response.

how expensive is SSL for a RESTful API?

I am making a RESTful API and am wondering how computationally expensive it is for the server if each request is done using SSL? It's probably hard to quantify, but a comparison to non-SSL requests would be useful (e.g. 1 SSL request is as expensive as 30 non-SSL requests).
Am I right in thinking that for an SSL connection to be established, both parties need to generate public and private keys, share them with each other, and then start communicating? When using a RESTful API, does this process happen on each request? Or is there some sort of caching that reuses a key for a given host for a given period of time (if so, how long before it expires)?
And one last question: the reason I am asking is because I am making an app that uses Facebook Connect, and there are some access tokens involved which grant access to someone's Facebook account. Having said that, why does Facebook allow transmitting these access tokens over non-encrypted connections? Surely they should guard the access tokens as strongly as the username/password combos, and as such enforce an SSL connection... yet they don't.
EDIT: Facebook does in fact enforce an HTTPS connection whenever the access_token is being transmitted.
http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
On our [Google's, ed.] production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
The SSL process is roughly as follows:
Server (and optionally client) present their (existing, not generated) public key using a certificate, together with a signed challenge. The opposite party verifies the signature (its mathematical validity, the certificate path up to the CA, the revocation status,...) to be sure the opposite party is who it claims to be.
Between the authenticated parties a secret session key is negotiated (for example using the Diffie Hellman algorithm).
The parties switch to encrypted communication
The protocol is expensive up to this point, and it happens every time a socket is established. You cannot cache a check of "who's on the other side". This is why you should use persistent sockets (even with REST).
mtraut described the way SSL works, yet he omitted the fact that TLS supports session resumption. However, even though session resumption is supported by the protocol itself and by many conformant servers, it's not always supported by client-side implementations. So you should not rely on resumption, and you'd better keep a persistent session where possible.
On the other hand, the SSL handshake is quite fast (about a dozen milliseconds) nowadays, so it's not the biggest bottleneck in most cases.
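If you want to get a feel for the handshake cost yourself, here is a rough sketch using Python's standard library; the host is a placeholder and the measured time will vary with network latency, key sizes and whether resumption kicks in:

    # Rough sketch: measure TCP connect + TLS handshake time to a host.
    import socket
    import ssl
    import time

    HOST = "www.example.com"
    context = ssl.create_default_context()

    start = time.perf_counter()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"TCP connect + TLS handshake: {elapsed_ms:.1f} ms via {tls.version()}")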

How to know if a message is sent from an iPhone to the server?

I have written an iPhone application communicating with a server. The app sends a message to the server and prints the result.
Now I have a question: Is there a way to know if the message sent to the server came from an iPhone?
I am asking this because I want to prevent attackers from sending messages from somewhere else and flooding the server.
If you use in-app purchases, then there is a full authentication chain that validates that device X purchased the app. Your server can track this and then only give full responses to previously authenticated devices.
This approach also keeps pirated apps pretty much out of the picture.
This approach wouldn't stop a concerted DDoS attack, but your server can at least ignore non-valid clients and thus reduce its workload significantly. Since your server is ignoring invalid requests, it also becomes less appealing to potential non-device users, and an illicit user would probably only attack you out of dislike, as opposed to just bogging down your server for its free web services.
If you don't use in-app purchases, you could set up your own authentication process: give a token to the device, have your server remember said tokens (appropriately hashed and salted), and then later only serve valid responses for requests that carry such a token. This approach would not stop pirated apps from using your service, but it would effectively stop non-devices from using your web service (again, except for concerted hacking efforts).
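A minimal sketch of that token scheme, using only Python's standard library; the in-memory dict, the device id, and the single shared salt are simplifications you would replace with persistent storage and per-token salts in practice:

    # Sketch: issue a random token per device, store only a salted hash of it,
    # and verify incoming tokens in constant time.
    import hashlib
    import hmac
    import secrets

    SERVER_SALT = secrets.token_bytes(16)   # keep secret and persistent in practice
    registered = {}                          # device_id -> hashed token

    def hash_token(token: str) -> bytes:
        return hashlib.sha256(SERVER_SALT + token.encode()).digest()

    def register_device(device_id: str) -> str:
        token = secrets.token_urlsafe(32)    # returned to the app exactly once
        registered[device_id] = hash_token(token)
        return token

    def is_valid(device_id: str, token: str) -> bool:
        expected = registered.get(device_id)
        return expected is not None and hmac.compare_digest(expected, hash_token(token))

    token = register_device("device-123")
    assert is_valid("device-123", token)
    assert not is_valid("device-123", "forged-token")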
An even simpler approach is to have an obfuscated request format that would take a concerted effort to reverse engineer.
In all of these approaches, you might have to monitor your server for unusual activity and then take appropriate steps.
I would encourage you to match your efforts to the expected risk. You can spend days, months, even years properly securing an app; make sure the cost is worth the reward.
You could do some form of authentication, encryption or fingerprinting, e.g. using SHA or MD5. That way you could make it difficult (but not impossible) for an attacker to abuse your server.
You can't tell it's from an iPhone until you have received and examined the connection on the server. If you do that, you have already opened up the possibility of a DoS (denial of service) attack through connection exhaustion.