I am making a RESTful API and am wondering how computationally expensive it is for the server if each request is done using SSL. It's probably hard to quantify, but a comparison to non-SSL requests would be useful (e.g. one SSL request is as expensive as 30 non-SSL requests).
Am I right in thinking that for an SSL connection to be established, both parties need to generate public and private keys, exchange them, and then start communicating? When using a RESTful API, does this process happen on each request, or is there some sort of caching that reuses a key for a given host for a given period of time (and if so, how long before it expires)?
One last question: the reason I am asking is that I am making an app that uses Facebook Connect, and there are access tokens involved which grant access to someone's Facebook account. Given that, why does Facebook allow these access tokens to be transmitted over non-encrypted connections? Surely they should guard the access tokens as strongly as the username/password combos, and as such enforce an SSL connection... yet they don't.
EDIT: Facebook does in fact enforce an HTTPS connection whenever the access_token is being transmitted.
http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
On our [Google's, ed.] production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.
If you stop reading now you only need to remember one thing: SSL/TLS is not computationally expensive any more.
The SSL process is roughly as follows:
The server (and optionally the client) presents its (existing, not freshly generated) public key via a certificate, together with a signed challenge. The other party verifies the signature (its mathematical validity, the certificate path up to the CA, the revocation status, ...) to be sure the peer is who it claims to be.
Between the authenticated parties a secret session key is negotiated (for example using the Diffie-Hellman algorithm).
The parties switch to encrypted communication.
Up to here this is an expensive protocol, and it happens every time a socket is established. You cannot cache a check about "who's on the other side". This is why you should use persistent sockets (even with REST).
mtraut described the way SSL works, yet he omitted the fact that TLS supports session resumption. However, even though session resumption is supported by the protocol itself and by many conformant servers, it's not always supported by client-side implementations. So you should not rely on resumption; you are better off keeping a persistent session where possible.
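To make the "persistent session" advice concrete, here is a rough Java (JSSE) sketch; the host name is a placeholder, and whether the second connection actually resumes the TLS session depends on the server and the JVM's session cache. The point is simply that reusing one SSLContext (and, better still, keeping connections open) is what avoids paying for a full handshake on every request.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsReuseSketch {
    public static void main(String[] args) throws Exception {
        // One SSLContext for the whole client: its session cache is what allows
        // later connections to the same host to resume instead of doing a full handshake.
        SSLContext context = SSLContext.getDefault();
        SSLSocketFactory factory = context.getSocketFactory();

        for (int i = 0; i < 2; i++) {
            try (SSLSocket socket = (SSLSocket) factory.createSocket("api.example.com", 443)) {
                socket.startHandshake(); // full handshake the first time, possibly resumed afterwards
                System.out.println("Cipher suite: " + socket.getSession().getCipherSuite());
                // Better still: keep this socket open and send several requests over it.
            }
        }
    }
}
```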
On the other hand, the SSL handshake is quite fast nowadays (about a dozen milliseconds), so it's not the biggest bottleneck in most cases.
Related
I am aware of certificate chains when validating a client certificate. Still, this either puts a lot of burden on the server administrator or restricts clients, which can be unfavorable when implementing a public OPC UA server.
An implementation of the client certificate validator that accepts all certificates for message encryption/signing is certainly possible. But would such an implementation be considered insecure in that matter?
If yes, how?
Yes, it is considered insecure.
Aside from the (hopefully) obvious use case, where certificates ensure you know exactly what client applications are allowed to connect to the server, certificates are also the first line of defense against malicious clients and are part of a "defense in depth" strategy.
A malicious actor that can't establish a secure channel with the server doesn't have much to work with. A malicious actor that can establish a secure channel can, e.g., open many connections, create many sessions (without activating them, potentially causing a DoS as they eat up resources), attempt to guess credentials, re-use default credentials that an application may ship with, etc.
Further... in the face of the recent CISA alert re: ICS/SCADA devices + OPC UA servers, you'd be a bit of a fool to willingly ship a less secure product for the sake of convenience.
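For illustration only, this is roughly what the "accept everything" validator from the question looks like when translated into plain Java/JSSE terms (OPC UA stacks have their own validator interfaces, so treat this as an analogy rather than OPC UA code). It compiles and "works", which is exactly the problem: nothing is checked.

```java
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// DO NOT use this in production: it accepts any certificate from anyone,
// which removes the "first line of defense" described in the answer above.
public class AcceptAllTrustManager implements X509TrustManager {
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) {
        // no chain validation, no trust anchor, no revocation check
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) {
        // equally permissive in the other direction
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```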
I have a project where data sent between two peers needs to be encrypted. I don't need to authenticate the server or the client; I just need my data to be unreadable on the network.
I have two options:
1- Secure socket
- Open a secure socket
- Write clear data
2- Socket
- Open a socket
- Encrypt data
- Write encrypted data
Is there a performance benefit in using a secure socket instead of a "normal" socket to which I write encrypted data? (Let's say I'm using the same cipher in both cases.)
No, there is no difference with regard to speed when it comes to the algorithms used. In general you need confidentiality, integrity and authenticity of messages in a transport protocol. After the initial handshake this is performed by symmetric algorithms in a rather efficient manner.
Creating your own transport protocol is so fraught with danger that the chance of a novice designing and implementing a secure protocol is about zero. For instance, if you don't know about plaintext or padding oracle attacks, you may lose the confidentiality of the messages, basically leaving you with messages without any protection.
So check the fastest TLS 1.2 or 1.3 cipher suites and use those. You may want to look at what Google has introduced into TLS; they've really focused on speed and security.
(Note that a secure socket without authentication allows a man in the middle (MITM) to intercept your data and thus see it in the clear.)
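If it helps, here is a hedged JSSE sketch of the "pick a fast, modern cipher suite" advice: the client restricts itself to TLS 1.3 and its AEAD suites (available in Java 11+). The host and port are placeholders; what actually gets negotiated still depends on the peer.

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class FastTlsClientSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("peer.example.com", 8443)) {
            // TLS 1.3: shorter handshake and AEAD-only cipher suites.
            socket.setEnabledProtocols(new String[] { "TLSv1.3" });
            socket.setEnabledCipherSuites(new String[] {
                    "TLS_AES_128_GCM_SHA256",
                    "TLS_AES_256_GCM_SHA384"
            });
            socket.startHandshake();
            System.out.println("Negotiated: " + socket.getSession().getCipherSuite());
        }
    }
}
```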
A secure socket will take longer to establish, then take about the same time to encrypt. So performance-wise, if you have a pre-shared symmetric encryption key, you would benefit from skipping the SSL/TLS handshake and going directly to a TCP socket. That would show up as a big speedup for numerous short connections, in particular if they were not reusing an SSLContext and session resumption (lots of JSSE jargon, I know, but either you know this or you don't; here is not the place).
However, if you don't have a pre-shared key, the whole handshake is really something you shouldn't avoid.
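For completeness, a minimal sketch of the pre-shared-key option (plain TCP socket, AES-GCM with a random 96-bit nonce per message, length-prefixed framing). The key, host and port are placeholders, and the sketch deliberately ignores replay protection, nonce bookkeeping and key rotation, which is part of why the answer above still recommends TLS unless you really do have a pre-shared key and a good reason.

```java
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class PresharedKeySenderSketch {
    public static void main(String[] args) throws Exception {
        // All-zero placeholder; in reality both peers get this key out of band.
        byte[] presharedKey = new byte[16];
        SecretKey key = new SecretKeySpec(presharedKey, "AES");

        byte[] nonce = new byte[12];            // fresh random nonce for every message
        new SecureRandom().nextBytes(nonce);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = cipher.doFinal("hello peer".getBytes(StandardCharsets.UTF_8));

        try (Socket socket = new Socket("peer.example.com", 9000);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.write(nonce);                   // the receiver needs the nonce to decrypt
            out.writeInt(ciphertext.length);    // simple length-prefixed framing
            out.write(ciphertext);
        }
    }
}
```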
I have written an iPhone application communicating with a server. The app sends a message to the server and prints the result.
Now I have a question: Is there a way to know if the message sent to the server came from an iPhone?
I am asking this because I want to prevent attackers from sending messages from somewhere else and flooding the server.
If you use in-app purchases, then there is a full authentication chain that validates that device X purchased the app. Your server can track this and then only give full responses to previously authenticated devices.
This approach also keeps pirated apps pretty much out of the picture.
This approach wouldn't stop a concerted DDoS attack, but your server can at least ignore non-valid clients and thus reduce its workload significantly. Since your server is ignoring invalid requests, it also becomes less appealing to non-device users; an illicit user would probably only attack you out of dislike, as opposed to just bogging down your server for its free web services.
If you don't use in-app purchases, you could set up your own authentication process: give a token to the device, have your server remember the issued tokens (appropriately hashed and salted), and then only serve valid responses to requests that carry such a token. This approach would not stop pirated apps from using your service, but it would effectively stop non-devices from using your web service (again, except for concerted hacking efforts).
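A small sketch of the "issue a token, remember it on the server" idea; the class and method names are made up, and in practice the hashes would live in a database rather than an in-memory set. Because the tokens here are long random values, a plain SHA-256 is used; salting matters more when the secret is low-entropy like a password.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashSet;
import java.util.Set;

public class DeviceTokenServiceSketch {
    private final Set<String> knownTokenHashes = new HashSet<>(); // stand-in for a database
    private final SecureRandom random = new SecureRandom();

    // Called once when a device registers: the raw token goes back to the device,
    // only its hash stays on the server.
    public String issueToken() throws Exception {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        knownTokenHashes.add(hash(token));
        return token;
    }

    // Called on every request: only serve clients that present a known token.
    public boolean isValid(String presentedToken) throws Exception {
        return knownTokenHashes.contains(hash(presentedToken));
    }

    private static String hash(String token) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(token.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }
}
```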
An even simpler approach is to have an obfuscated request format that would take a concerted effort to reverse engineer.
In all of these approaches, you should also monitor your server for unusual activity and then take appropriate steps.
I would encourage you to match your efforts to the expected risk. You can spend days, months, even years, properly securing an app, make sure the cost is worth the reward.
You could do some form of authentication, encryption or fingerprinting, eg. using SHA, MD5, etc. That way you could make it difficult (but not impossible) for an attacker to abuse your server.
You can't tell it's from an iPhone until you have received and examined the connection on the server. If you do that, you have already opened up the possibility of a DoS (denial of service) attack due to connection exhaustion.
I'm building a client/server iPhone game, where I would like to keep third-party clients from accessing the server. This is for two reasons: first, my revenue model is to sell the client and give away the service, and second I want to avoid the proliferation of clients that facilitate cheating.
I'm writing the first version of the server in rails, but I'm considering moving to erlang at some point.
I'm considering two approaches:
Generate a "username" (say, a GUID) and hash it (SHA256 or MD5) with a secret shipped with the app, and use the result as the "password". When the client connects with the server, both are sent via HTTP Basic Auth over https. The server hashes the username with the same secret and makes sure that they match.
Ship a client certificate with the iPhone app. The server is configured to require the client certificate to be present.
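For the first approach, here is a minimal sketch of the derivation, using HMAC-SHA256 rather than a bare SHA256/MD5 of username+secret (an HMAC is the safer construction for this). The secret string is obviously a placeholder; the app and the server would both run the same derivation.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class DerivedPasswordSketch {
    // Placeholder: in the real app this would be embedded (and obfuscated) in the binary.
    private static final byte[] SHARED_SECRET =
            "replace-with-your-app-secret".getBytes(StandardCharsets.UTF_8);

    public static String derivePassword(String username) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_SECRET, "HmacSHA256"));
        byte[] tag = mac.doFinal(username.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String username = UUID.randomUUID().toString(); // the generated "username"
        String password = derivePassword(username);     // sent with it via Basic Auth over HTTPS
        System.out.println(username + " / " + password);
        // The server recomputes derivePassword(username) and compares the two values.
    }
}
```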
The first approach has the advantage of being simple, low overhead, and it may be easier to obfuscate the secret in the app.
The second approach is well tested and proven, but might be higher overhead. However, my knowledge of client certificates is at the "read about it in the Delta Airlines in-flight magazine" level. How much bandwidth and processing overhead would this incur? The actual data transferred per request is on the order of a kilobyte.
No way is perfect--but a challenge/response is better than a key.
A certificate SHOULD be used with a challenge/response: you send a random string, the client signs it with the certificate's private key, and you verify the signature with the public key.
Depending on how well supported the stuff is on the iPhone, implementing the thing will be between trivial and challenging.
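To make that concrete, here is a rough Java sketch of a certificate-style challenge/response, with a freshly generated RSA key pair standing in for the key behind the client's certificate (normally it would be loaded from a keystore):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

public class ChallengeResponseSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the key pair behind the client's certificate.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair clientKeys = generator.generateKeyPair();

        // Server: pick a fresh random challenge for this connection.
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);

        // Client: sign the challenge with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(clientKeys.getPrivate());
        signer.update(challenge);
        byte[] response = signer.sign();

        // Server: verify the response with the public key from the certificate.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(clientKeys.getPublic());
        verifier.update(challenge);
        System.out.println("Client holds the private key: " + verifier.verify(response));
    }
}
```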
A nice middle road I use is xor. It's slightly more secure than a password, trivial to implement (the steps and a short sketch follow), and takes at least an hour or two of dedication to hack.
Your app ships with a number built in (key).
When an app connects to you, you generate a random number (with the same number of bits as the key) and send it to the phone.
The app gets the number, xors it with the key and sends the result back.
On the server you xor the returned result with the key, which should give you back your original random number.
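Here is roughly what those steps look like in code; the key value and the 64-bit width are arbitrary choices for the sketch.

```java
import java.security.SecureRandom;

public class XorChallengeSketch {
    private static final long KEY = 0x5DEECE66DL; // placeholder for the number built into the app

    public static void main(String[] args) {
        // Server: random challenge with the same number of bits as the key.
        long challenge = new SecureRandom().nextLong();

        // App: xor the challenge with the built-in key and send the result back.
        long response = challenge ^ KEY;

        // Server: xor the response with the key; it must equal the original challenge.
        boolean accepted = (response ^ KEY) == challenge;
        System.out.println("Client accepted: " + accepted);
    }
}
```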
This is only slightly hacker resistant, but you can employ other techniques to make it better like changing the key each time you update your software, hiding the random number with some other random number, etc. There are a lot of tricks to hiding this, but eventually hackers will find it. Changing the methodology with each update might help.
Anyway, xor is a hack, but it works for cases where sending a password is just a little too hackable.
The difference between xor and public key is that xor is EASILY reversible by just monitoring a successful conversation, public key is (theoretically) not reversible without significant resources and time.
Who is your adversary here? Both methods fail to prevent cracked copies of the application from connecting to the server. I think that's the most common problem with iPhone game (or general) development for paid apps.
However, this may protect the server from other non-iPhone clients, as it deters programmers from reverse engineering the network packet interfaces between the iPhone and the server.
Have your game users authenticate with their account through OAuth, to authorize them to make game state changes on your server.
If you can't manage to authenticate users, you'd need to authenticate your game application instance somehow. Having authentication credentials embedded in the binary would be a bad idea as application piracy is prevalent and would render your method highly insecure. My SO question on how to limit Apple iPhone application piracy might be of use to you in other ways.
This might be something more suited for Serverfault, but many webdevelopers who come only here will probably benefit from possible answers to this question.
The question is: How do you effectively protect yourself against Denial Of Service attacks against your webserver?
I asked myself this after reading this article
For those not familiar, here's what I remember about it: a DoS attack will attempt to occupy all your connections by repeatedly sending bogus headers to your servers.
By doing so, your server will reach the limit of possible simultaneous connections and, as a result, normal users can't access your site anymore.
Wikipedia provides some more info: http://en.wikipedia.org/wiki/Denial_of_service
There's no panacea, but you can make DoS attacks more difficult by doing some of the following:
Don't (or limit your willingness to) do expensive operations on behalf of unauthenticated clients
Throttle authentication attempts
Throttle operations performed on behalf of each authenticated client, and place their account on a temporary lockout if they do too many things in too short a time
Have a similar global throttle for all unauthenticated clients, and be prepared to lower this setting if you detect an attack in progress
Have a flag you can use during an attack to disable all unauthenticated access
Don't store things on behalf of unauthenticated clients, and use a quota to limit the storage for each authenticated client
In general, reject all malformed, unreasonably complicated, or unreasonably huge requests as quickly as possible (and log them to aid in detection of an attack)
Don't use a pure LRU cache if requests from unauthenticated clients can result in evicting things from that cache, because you will be subject to cache poisoning attacks (where a malicious client asks for lots of different infrequently used things, causing you to evict all the useful things from your cache and need to do much more work to serve your legitimate clients)
Remember, it's important to outright reject throttled requests (for example, with an HTTP 503: Service Unavailable response or a similar response appropriate to whatever protocol you are using) rather than queueing throttled requests. If you queue them, the queue will just eat up all your memory and the DoS attack will be at least as effective as it would have been without the throttling.
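As a toy illustration of "throttle and reject instead of queueing", here is a fixed-window counter per client that answers anything over the limit with 503 immediately. The names, limits and window size are made up; a real deployment would use a token bucket or sliding window and evict idle entries.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RequestThrottleSketch {
    private static final int LIMIT_PER_WINDOW = 100;
    private static final long WINDOW_MILLIS = 60_000;

    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    // Returns the HTTP status to use: 200 to proceed, 503 to reject immediately (never queue).
    public int admit(String clientId) {
        long now = System.currentTimeMillis();
        if (now - windowStart > WINDOW_MILLIS) {
            counts.clear();                 // crude window reset; racy, but fine for a sketch
            windowStart = now;
        }
        int seen = counts.computeIfAbsent(clientId, id -> new AtomicInteger())
                         .incrementAndGet();
        return seen <= LIMIT_PER_WINDOW ? 200 : 503;
    }
}
```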
Some more specific advice for the HTTP servers:
Make sure your web server is configured to reject POST messages without an accompanying Content-Length header, to reject requests (and throttle the offending client) that exceed the stated Content-Length, and to reject requests with a Content-Length that is unreasonably large for the service the POST (or PUT) is aimed at
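These checks normally belong in the web server or reverse proxy configuration, but as a sketch of the same logic at the application level, here is a small example using the JDK's built-in com.sun.net.httpserver (the path, port and size limit are placeholders):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class ContentLengthGuardSketch {
    private static final long MAX_BODY_BYTES = 64 * 1024; // "reasonable" is service-specific

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            String method = exchange.getRequestMethod();
            String lengthHeader = exchange.getRequestHeaders().getFirst("Content-Length");
            if ("POST".equals(method) || "PUT".equals(method)) {
                if (lengthHeader == null) {
                    exchange.sendResponseHeaders(411, -1); // Length Required
                    exchange.close();
                    return;
                }
                long declared;
                try {
                    declared = Long.parseLong(lengthHeader);
                } catch (NumberFormatException e) {
                    exchange.sendResponseHeaders(400, -1); // malformed: reject as early as possible
                    exchange.close();
                    return;
                }
                if (declared > MAX_BODY_BYTES) {
                    exchange.sendResponseHeaders(413, -1); // Payload Too Large
                    exchange.close();
                    return;
                }
            }
            byte[] ok = "accepted".getBytes();
            exchange.sendResponseHeaders(200, ok.length);
            exchange.getResponseBody().write(ok);
            exchange.close();
        });
        server.start();
    }
}
```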
For this specific attack (as long as the request is a GET), a load balancer or a WAF that only passes complete requests on to the webserver would work.
The problems start when POST is used instead of GET (which is easy for the attacker), because you can't tell whether it is a malicious POST or just a really slow upload from a user.
From DoS per se you can't really protect your web app, for a simple reason: your resources are limited, while the attacker potentially has unlimited time and resources to perform the DoS. And most of the time it's cheap for the attacker to perform the required steps; for the attack mentioned above, for example, a few hundred slow-running connections are no problem at all.
Asynchronous servers, for one, are more or less immune to this particular form of attack. I for instance serve my Django apps using an Nginx reverse proxy, and the attack didn't seem to affect its operation whatsoever. Another popular asynchronous server is lighttpd.
Mind you, this attack is dangerous because it can be performed even by a single machine with a slow connection. However, common DDoS attacks pit your server against an army of machines, and there's little you can do to protect yourself from them.
Short answer:
You cannot protect yourself against a DoS.
And I don't agree that it belongs on Serverfault, since DoS is categorized as a security issue and is definitely related to programming.