Is it a good idea to use one-time passwords for securing REST applications?

Over the last few weeks I have started building REST-producing/consuming web applications and have therefore started to worry about the security of my communication.
I came up with the following procedure:
1. One REST consumer and one REST producer secretly negotiate a shared secret and initialize a one-time-password (OTP) component with this secret.
2. With every request and response the clients send an OTP.
3. This OTP is generated by the OTP component based on the negotiated secret.
4. The other partner generates the same sequence of OTPs, checks whether the OTP it received is correct, and accepts or blocks the communication.
5. After the OTP chain is exhausted, the two communicators exchange a new secret and reinitialize (step 1).
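For concreteness, here is a minimal sketch of what such an OTP component could look like, assuming an HMAC-based scheme in the spirit of HOTP (RFC 4226); the Java class below and its names are illustrative, not part of the actual design:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

// Illustrative sketch: both consumer and producer run the same code, seeded with
// the negotiated secret, so counter i yields the same OTP on both sides.
public class OtpChain {
    private final byte[] secret;   // the negotiated shared secret
    private long counter = 0;      // advances with every request/response

    public OtpChain(byte[] secret) {
        this.secret = secret.clone();
    }

    /** Next OTP in the chain: truncated HMAC-SHA1 over the counter (HOTP-style). */
    public synchronized int next() throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter++).array());
        int offset = hash[hash.length - 1] & 0x0F;               // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return binary % 1_000_000;                               // 6-digit OTP
    }
}
```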
This structure is intended to work in multi-client environments and for communication with many REST communicators. I have several questions regarding this procedure:
1. Is the calculation of OTPs fast enough to handle millisecond-scale transactions on the clients?
2. Is the overhead of the OTPs comparatively small compared to other security features?
3. Is an OTP procedure more secure than TLS communication?
4. Could OTP security be used over plain HTTP channels? (Assuming it is acceptable that the data is readable in plain text!)
5. Which security implementations are as secure as the described procedure, but cheaper, faster, or less error-prone?
Thanks in advance. Please correct me if the question has any mistakes or is out of scope!

OK, I've read your question more thoroughly and understand it now. :-)
1. If you are pre-generating a list of OTPs on both sides, there should be no problem regarding performance. However, you need to secure the storage of the OTPs, which could be tricky depending on who has access to the systems.
2. The overhead is, IMHO, insignificant when you are pre-generating the list.
3. TLS is transport-layer security and OTPs are application-layer, so there is no direct comparison. TLS < 1.2 may be insecure, but so are OTPs if the way they are generated is weak. Which brings me to the next point:
4. If you send the OTPs unencrypted, it may be possible to mount a man-in-the-middle attack, reverse-engineer the algorithm, and predict the next OTPs.
5. Less error-prone, in the sense of already being used in production, would be for example JAX-RS secured with CXF. Cheaper? Could be; it depends on the implementation of the OTPs (buy, make, etc.). Faster? No. As stated in the answers to 1 and 2: if you have pre-generated OTPs, there's not much overhead.
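To illustrate the pre-generation point from answers 1 and 2, here is a small sketch (hypothetical names; the generator function stands for whatever OTP derivation both parties agreed on) of a verifier that keeps a pre-computed window of upcoming OTPs, so the per-request check is a cheap lookup rather than a hash computation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.LongFunction;

// Sketch: the verifying side pre-computes a window of upcoming OTPs so that
// checking an incoming request is a queue lookup; the generator is whatever
// derivation both parties agreed on (e.g. an HMAC over a counter).
public class OtpVerifier {
    private final LongFunction<Integer> generator;
    private final Deque<Integer> window = new ArrayDeque<>();
    private long nextCounter = 0;

    public OtpVerifier(LongFunction<Integer> generator, int windowSize) {
        this.generator = generator;
        for (int i = 0; i < windowSize; i++) {
            window.addLast(generator.apply(nextCounter++));      // pre-generate the list
        }
    }

    /** Accepts the OTP only if it is the next expected one, then refills the window. */
    public synchronized boolean verify(int otp) {
        Integer expected = window.peekFirst();
        if (expected == null || expected != otp) {
            return false;                                        // block the communication
        }
        window.pollFirst();
        window.addLast(generator.apply(nextCounter++));          // keep the chain warm
        return true;                                             // accept
    }
}
```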
Regardless of the answers above: you should always think twice before implementing a custom solution in this area, as mistakes can be critical. Think about the threats to your communication, do a trade-off analysis, and look at the result. Perhaps you will be satisfied with TLS 1.2 or newer?

Related

Apache Thrift - How to provide secure communication

I want to secure the communication between Thrift server and client instances. To achieve that, firstly I enabled SSL communication using keystore on the server-side and truststore on the client-side as explained in this post: https://chamibuddhika.wordpress.com/2011/10/03/securing-a-thrift-service/
Afterwards, I wrapped my transport instances on both client and server with TEncryptedFramedTransport.java class provided in the following SO post: Symmetric encryption (AES) in Apache Thrift. This enabled symmetric encryption of messages transferred through socket connection.
My question is: does applying both of these make my communication more secure? Or is it unnecessary to apply both, and should I go with only one of them?
There is a concept called "defense in depth". The idea is that you still have one more defense in place even if one gets broken. The downside is, as always, that you have to pay for it with performance.
The real question here is this: Do I trust SSL/TLS alone or do I absolutely want to add another (application-)level of security that serves as another hurdle if some man-in-the-middle manages to get inside my SSL/TLS channel, even if that will cost me some performance?
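Not Thrift-specific, but to make the idea concrete: here is a minimal sketch of what such an extra application-level layer inside the TLS channel could look like, using the standard javax.crypto AES-GCM APIs. Key distribution and the wiring into a Thrift transport are out of scope, and the class name is illustrative:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

// Sketch of a second, application-level layer inside the TLS channel:
// each payload is sealed with AES-GCM before it is handed to the transport.
public final class PayloadSealer {
    private static final int IV_LEN = 12, TAG_BITS = 128;
    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public PayloadSealer(SecretKey key) { this.key = key; }

    public byte[] seal(byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        random.nextBytes(iv);                                 // fresh IV per message
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];            // prepend the IV
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    public byte[] open(byte[] sealed) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, sealed, 0, IV_LEN));
        return cipher.doFinal(sealed, IV_LEN, sealed.length - IV_LEN);
    }
}
```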
Another aspect could be that you might be forced to communicate across insecure channels, i.e. when no TLS is available. Remember, Thrift allows you to switch transports as needed, and the SSL/TLS infrastructure is only available in certain cases.
If the answer is yes, do it. It would be the same answer with REST, SOAP, XMLRPC, Avro, gRPC or the well-known avian carriers.
So the final, decisive answer as to whether you should do this depends on your priorities.
Also be aware that there could be other attack vectors in your solution that might need to be addressed.

What can go wrong if we do NOT follow RESTful best practices?

TL;DR : scroll down to the last paragraph.
There is a lot of talk about best practices when defining RESTful APIs: what HTTP methods to support, which HTTP method to use in each case, which HTTP status code to return, when to pass parameters in the query string vs. in the path vs. in the content body vs. in the headers, how to do versioning, result set limiting, pagination, etc.
If you are already determined to make use of best practices, there are lots of questions and answers out there about what is the best practice for doing any given thing. Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
Most of the best practice guidelines direct developers to follow the principle of least surprise, which, under normal circumstances, would be a good enough reason to follow them. Unfortunately, REST-over-HTTP is a capricious standard, the best practices of which are impossible to implement without becoming intimately involved with it, and the drawback of intimate involvement is that you tend to end up with your application being very tightly bound to a particular transport mechanism. So, some people (like me) are debating whether the benefit of "least surprise" justifies the drawback of littering the application with REST-over-HTTP concerns.
A different approach examined as an alternative to best practices suggests that our involvement with HTTP should be limited to the bare minimum necessary in order to get an application-defined payload from point A to point B. According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request. The request will either fail to be delivered, in which case it is the responsibility of the web server to return an "HTTP 404 Not Found" to the client, or it will be successfully delivered, in which case the delivery of the request was "HTTP 200 OK" as far as the transport protocol is concerned, and anything else that might go wrong from that point on is exclusively an application concern, and none of the transport protocol's business. Obviously, this approach is kind of like saying "let me show you where to stick your best practices".
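To make that approach concrete, here is a hypothetical JAX-RS resource written in exactly that style: one URL, POST only, always 200, and everything else carried in the application payload. The class and method names are illustrative only:

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical "HTTP as a dumb pipe" endpoint: one URL, POST only, always 200.
// Every outcome, including errors such as "unauthorized", lives in the body.
@Path("/api")
public class SingleEntryPointResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response handle(String requestPayload) {
        String responsePayload = dispatch(requestPayload);   // application-defined routing
        return Response.ok(responsePayload).build();         // transport says 200 regardless
    }

    private String dispatch(String requestPayload) {
        // e.g. {"status":"UNAUTHORIZED"} instead of an HTTP 401
        return "{\"status\":\"OK\"}";
    }
}
```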
Now, there are other voices that say that things are not that simple, and that if you do not follow the RESTful best practices, things will break.
The story goes that, for example, in the event of unauthorized access you should return an actual "HTTP 401 Unauthorized" (instead of a successful response containing a JSON-serialized UnauthorizedException), because upon receiving the 401 the browser will prompt the user for credentials. Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Another, more sophisticated way the story goes is that usually, between the client and the server exist proxies, and these proxies inspect HTTP requests and responses, and try to make sense out of them, so as to handle different requests differently. For example, they say, somewhere between the client and the server there may be a caching proxy, which may treat all requests to the exact same URL as identical and therefore cacheable. So, path parameters are necessary to differentiate between different resources, otherwise the caching proxy might only ever forward a request to the server once, and return cached responses to all clients thereafter. Furthermore, this caching proxy may need to know that a certain request-response exchange resulted in a failure due to a particular error such as "Permission Denied", so as to again not cache the response, otherwise a request resulting in a temporary error may be answered with a cached error response forever.
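For reference, here is a sketch of what an intermediary actually gets to key on: a hypothetical JAX-RS resource where the URL identifies the resource and the status code plus Cache-Control header tell a caching proxy whether, and for how long, a response may be reused. Names and values are illustrative:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Response;

// Hypothetical resource: distinct URLs give intermediaries distinct cache keys,
// and the status code / Cache-Control header say whether a response is reusable.
@Path("/orders")
public class OrderResource {

    @GET
    @Path("/{id}")                                   // distinct URLs => distinct cache entries
    public Response get(@PathParam("id") String id) {
        if (!isAllowed(id)) {
            // 403 is not cacheable by default, so the error is not served to later clients
            return Response.status(Response.Status.FORBIDDEN).build();
        }
        CacheControl cc = new CacheControl();
        cc.setMaxAge(60);                            // safe to serve from cache for 60 seconds
        return Response.ok(load(id)).cacheControl(cc).build();
    }

    private boolean isAllowed(String id) { return true; }                 // placeholder
    private String load(String id) { return "{\"id\":\"" + id + "\"}"; }  // placeholder
}
```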
So, my questions are:
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices? Are these concerns about proxies real? Are caching proxies really so dumb as to cache REST responses? Is it hard to configure the proxies to behave in less dumb ways? Are there drawbacks in configuring the proxies to behave in less dumb ways?
It's worth considering that what you're suggesting is the way that HTTP APIs used to be designed for a good 15 years or so. API designers are tending to move away from that approach these days. They really do have their reasons.
Some points to consider if you want to avoid using ReST over HTTP:
ReST over HTTP is an efficient use of the HTTP/S transport mechanism. Avoiding the ReST paradigm runs the risk of every request / response being wrapped in verbose envelopes. SOAP is an example of this.
ReST encourages client and server decoupling by putting application semantics into standard mechanisms - HTTP and XML/JSON (or other data formats). These protocols and standards are well supported by standard libraries and have been built up over years of experience. Sure, you can create your own 'unauthorized' response body with a 200 status code, but ReST frameworks just make it unnecessary, so why bother?
ReST is a design approach which encourages a view of your distributed system that focuses on data rather than functionality, and this has proven to be a useful mechanism for building distributed systems. Avoiding ReST runs the risk of focusing on very RPC-like mechanisms, which have some risks of their own:
they can become very fine-grained and 'chatty'
which can be an inefficient use of network bandwidth
which can tightly couple client and server, by introducing statefulness and temporal coupling between requests.
and can be difficult to scale horizontally
Note: there are times when an RPC approach is actually a better way of breaking down a distributed system than a resource-oriented approach, but they tend to be the exceptions rather than the rule.
existing tools for developers make debugging / investigation of ReSTful APIs easier. It's easy to use a browser to do a simple GET, for example. And tools such as Postman or RestClient already exist for more complex ReST-style queries. In extreme situations tcpdump is very useful, as are browser debugging tools such as Firebug. If every API call has application-layer semantics built on top of HTTP (e.g. special response types for particular error situations) then you immediately lose some of the value of this tooling. Building SOAP envelopes in Postman is a pain. As is reading SOAP response envelopes.
network infrastructure around caching really can be as dumb as you're asking. It's possible to get around this but you really do have to think about it and it will inevitably involve increased network traffic in some situations where it's unnecessary. And caching responses for repeated queries is one way in which APIs scale out, so you'll likely need to 'solve' the problem yourself (i.e. reinvent the wheel) of how to cache repeated queries.
Having said all that, if you want to look into a pure message-passing design for your distributed system rather than a ReSTful one, why consider HTTP at all? Why not simply use some message-oriented middleware (e.g. RabbitMQ) to build your application, possibly with some sort of HTTP bridge somewhere for Internet-based clients? Using HTTP as a pure transport mechanism with simple 'message accepted / not accepted' semantics seems like overkill.
REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. -- Roy T Fielding
Unfortunately, there appears to be no question (nor answer) as to why use best practices in the first place.
When in doubt, go back to the source
Fielding's dissertation really does quite a good job at explaining how the REST architectural constraints ensure that you don't destroy the properties those constraints are designed to protect.
Keep in mind - before the web (which is the reference application for REST), "web scale" wasn't a thing; the notion of a generic client (the browser) that could discover and consume thousands of customized applications (provided by web servers) had not previously been realized.
According to this approach, you only use a single REST entry point URL in your entire application, you never use any HTTP method other than HTTP POST, never return any HTTP status code other than HTTP 200 OK, and never pass any parameter in any way other than within the application-specific payload of the request.
Yup - that's a thing, it's called RPC; you are effectively taking the web, and stripping it down to a bare message transport application that just happens to tunnel through port 80.
In doing so, you have stripped away the uniform interface -- you've lost the ability to use commodity parts in your deployment, because nobody can participate in the conversation unless they share the same interpretation of the message data.
Note: that doesn't at all imply that RPC is "broken"; architecture is about tradeoffs. The RPC approach gives up some of the value derived from the properties guarded by REST, but that doesn't mean it doesn't pick up value somewhere else. Horses for courses.
Besides "familiarity" and "least surprise", what other good reasons are there for following REST best practices?
Cheap scaling of reads - as your offering becomes more popular, you can service more clients by installing a farm of commodity reverse-proxies that will serve cached representations where available, and only put load on the server when no fresh representation is available.
Prefetching - if you are adhering to the safety provisions of the interface, agents (and intermediaries) know that they can download representations at their own discretion without concern that the operators will be liable for loss of capital. AKA - your resources can be crawled (and cached).
Similarly, use of idempotent methods (where appropriate) communicates to agents (and intermediaries) that retrying the send of an unacknowledged message causes no harm (for instance, in the event of a network outage); a small retry sketch follows this list.
Independent innovation of clients and servers, especially cross organizations. Mosaic is a museum piece, Netscape vanished long ago, but the web is still going strong.
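As promised above, here is a minimal retry sketch using Java 11's java.net.http client (the URL is made up), relying on the fact that PUT is idempotent and therefore safe to resend:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: because PUT is idempotent, a client (or intermediary) may blindly retry
// an unacknowledged request after a network hiccup without risking duplicated effects.
public class IdempotentRetry {
    public static HttpResponse<String> putWithRetry(HttpClient client, String json)
            throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/42"))   // illustrative URL
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();
        IOException last = null;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                return client.send(request, HttpResponse.BodyHandlers.ofString());
            } catch (IOException e) {
                last = e;            // timeout or dropped connection: safe to retry a PUT
            }
        }
        throw last;
    }
}
```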
Of course this does not really hold any water, because REST requests are not issued by browsers being used by human users.
Of course they are -- where do you think you are reading this answer?
So far, REST works really well at exposing capabilities to human agents; which is to say that the server side is so ubiquitous at this point that we hardly think about it any more. The notion that you -- the human operator -- can use the same application to order pizza, run diagnostics on your house, and remote start your car is as normal as air.
But you are absolutely right that replacing the human still seems a long ways off; there are various standards and media types for communicating semantic content of data -- the automated client can look at markup, identify a phone number element, and provide a customized array of menu options from it -- but building into agents the sorts of fuzzy intelligence needed to align offered capabilities with goals, or to recover from error conditions, seems to be a ways off.

iPhone/iPad Encrypting JSON

I want to encrypt some JSON from a server and then decrypt it on the iPhone/iPad. What are your thoughts on this? What is the best approach to this? Should I scrap this idea and just go with SSL?
Save yourself a lot of trouble and just use HTTPS for all server communications.
As stated above, one way is to do everything over https.
An alternative I can think of is the following:
1. Generate a symmetric encryption key per session/login per client on the server.
2. Send that key to the client over https.
3. From there on, encrypt all the data you send to the client with that key.
4. The client can then decrypt the encrypted data.
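For steps 1 and 2, a minimal server-side sketch could look like the following (shown in Java for brevity rather than the iOS client side; names are illustrative, and the actual device would decode the key with its own crypto APIs):

```java
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

// Sketch of the scheme's first two steps: the server mints a fresh AES key per
// session/login and hands it to the client over the existing https channel
// (here simply Base64-encoded for embedding in a JSON response).
public class SessionKeyIssuer {

    /** Server side: one fresh key per session. */
    public static String issueSessionKey() throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();
        return Base64.getEncoder().encodeToString(key.getEncoded());
    }

    /** Client side: reconstruct the key received over https. */
    public static SecretKey decodeSessionKey(String base64Key) {
        byte[] raw = Base64.getDecoder().decode(base64Key);
        return new SecretKeySpec(raw, "AES");
    }
    // Subsequent payloads would then be encrypted/decrypted with this key,
    // e.g. with AES-GCM as sketched in the Thrift answer above.
}
```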
I don't have enough knowledge about https. I often read that it is heavy on the resources of the system, but since I have not made or read any good benchmarks, I can't give you a rigorous argument for or against it.
The implementation I proposed requires a little more coding, but you can tailor it to your encryption needs.
I think ultimately your decision should be made based on your usage scenario: if you send very little data, infrequently, to a few client applications, you can't go wrong with https. If your expected encrypted traffic is high, the alternative solution might make sense.

Restricting access to server to iPhone app

I'm building a client/server iPhone game, where I would like to keep third-party clients from accessing the server. This is for two reasons: first, my revenue model is to sell the client and give away the service, and second I want to avoid the proliferation of clients that facilitate cheating.
I'm writing the first version of the server in rails, but I'm considering moving to erlang at some point.
I'm considering two approaches:
Generate a "username" (say, a GUID) and hash it (SHA256 or MD5) with a secret shipped with the app, and use the result as the "password". When the client connects with the server, both are sent via HTTP Basic Auth over https. The server hashes the username with the same secret and makes sure that they match.
Ship a client certificate with the iPhone app. The server is configured to require the client certificate to be present.
The first approach has the advantage of being simple, low overhead, and it may be easier to obfuscate the secret in the app.
The second approach is well tested and proven, but might be higher overhead. However, my knowledge of client certificates is at the "read about it in the Delta Airlines in-flight magazine" level. How much bandwidth and processing overhead would this incur? The actual data transferred per request is on the order of a kilobyte.
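For what it's worth, here is a minimal sketch of the first approach (a GUID as "username", SHA-256 of the GUID plus an app-embedded secret as "password"), shown in Java rather than Objective-C for brevity; all names and the secret value are placeholders:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.UUID;

// Sketch of approach 1: the generated GUID is the "username" and
// SHA-256(guid + embedded secret) is the "password" sent via HTTP Basic Auth
// over https; the server repeats the same hash and compares.
public class ClientCredentials {
    private static final String APP_SECRET = "replace-with-obfuscated-app-secret"; // placeholder

    /** App side: create a fresh username/password pair. */
    public static String[] create() throws Exception {
        String username = UUID.randomUUID().toString();
        return new String[] { username, derivePassword(username) };
    }

    /** Both the app and the server derive the password the same way. */
    public static String derivePassword(String username) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest((username + APP_SECRET).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    /** Server-side check, using a constant-time comparison. */
    public static boolean verify(String username, String password) throws Exception {
        return MessageDigest.isEqual(
                derivePassword(username).getBytes(StandardCharsets.UTF_8),
                password.getBytes(StandardCharsets.UTF_8));
    }
}
```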
No way is perfect--but a challenge/response is better than a key.
A certificate SHOULD use challenge/response: you send a random string, the client signs it with the certificate's private key, and you verify the signature with the corresponding public key.
Depending on how well supported the stuff is on the iPhone, implementing the thing will be between trivial and challenging.
A nice middle-road I use is xor. It's slightly more secure than a password, trivial to implement and takes at least an hour or two of dedication to hack.
1. Your app ships with a number built in (the key).
2. When an app connects to you, you generate a random number (with the same number of bits as the key) and send it to the phone.
3. The app XORs that number with the key and sends the result back.
4. On the server you XOR the returned result with the key, which should give back your original random number.
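A minimal sketch of that xor exchange (the key value here is obviously just an illustration):

```java
import java.security.SecureRandom;

// Sketch of the xor challenge/response described above. KEY is the number
// shipped inside the app; the server checks that response ^ KEY == challenge.
public class XorHandshake {
    private static final long KEY = 0x2BADCAFEF00D1234L;   // illustrative built-in key
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Server: issue a random challenge with the same number of bits as the key. */
    public static long newChallenge() {
        return RANDOM.nextLong();
    }

    /** Client (app): xor the challenge with the embedded key. */
    public static long respond(long challenge) {
        return challenge ^ KEY;
    }

    /** Server: xor-ing the response with the key must give back the challenge. */
    public static boolean verify(long challenge, long response) {
        return (response ^ KEY) == challenge;
    }
}
```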
This is only slightly hacker resistant, but you can employ other techniques to make it better like changing the key each time you update your software, hiding the random number with some other random number, etc. There are a lot of tricks to hiding this, but eventually hackers will find it. Changing the methodology with each update might help.
Anyway, xor is a hack, but it works for cases where sending a password is just a little too hackable.
The difference between xor and public key is that xor is EASILY reversible by just monitoring a successful conversation, public key is (theoretically) not reversible without significant resources and time.
Who is your adversary here? Both methods fail to prevent cracked copies of the application from connecting to the server. I think that's the most common problem with iPhone game (or general) development for paid apps.
However, this may protect the server from other non-iPhone clients, as it deters programmers from reverse engineering the network packet interfaces between the iPhone and the server.
Have your game users authenticate with their account through OAuth, to authorize them to make game state changes on your server.
If you can't manage to authenticate users, you'd need to authenticate your game application instance somehow. Having authentication credentials embedded in the binary would be a bad idea as application piracy is prevalent and would render your method highly insecure. My SO question on how to limit Apple iPhone application piracy might be of use to you in other ways.

Implementing a Handshake for a Socket Connection

I'm developing a program with a client/server model where the client logs on to the server, and the server assigns a session id/handshake which the client will use to identify/authorize its subsequent messages to the server.
I'm wondering what length should the handshake be for it to be reasonably secure but also short enough to minimize data overhead, since I'd like to have it be low latency.
I'm thinking of using MD5 or MurmurHash2 with the username and a random number as salt, with collision detection, but I'm wondering if there's a more efficient solution (i.e. a better algorithm) and whether 32 bits is too much or too little for this kind of thing.
Any input is highly appreciated.
I would use an HTTPS connection for your client/server communications.
It's easy to use (almost all the major SDKs implement it) and it provides good encryption.
Regards.
PS: Regarding the hash method, I would use Whirlpool, because Mr. Rivest said in 2005 that MD5 was broken.
This may not be as simple as it looks. Note that if you send anything in clear over the network (e.g. session id/handshake), anyone can eavesdrop the communication and reuse this value to act as the client.
If you cannot use https, as the first answer suggested, you probably need to look at key agreement protocols. Once both parties agree on a shared secret key (which cannot be reconstructed based on observed communications), you can use it to authenticate all the remaining transmissions with a MAC (e.g. HMAC).
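As a sketch of that last step, assuming the shared secret is already in place, HMAC-SHA256 tagging and verification could look like this (Java, names illustrative):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Sketch: once both sides share a secret key (e.g. from a key-agreement step),
// every message carries an HMAC tag that the receiver recomputes and compares.
public class MessageAuthenticator {
    private final SecretKeySpec key;

    public MessageAuthenticator(byte[] sharedSecret) {
        this.key = new SecretKeySpec(sharedSecret, "HmacSHA256");
    }

    /** Sender: compute the tag that travels alongside the message. */
    public byte[] tag(byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return mac.doFinal(message);
    }

    /** Receiver: recompute and compare in constant time. */
    public boolean verify(byte[] message, byte[] receivedTag) throws Exception {
        return MessageDigest.isEqual(tag(message), receivedTag);
    }
}
```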
Whatever you do, don't use MD5, it's so totally broken. Whirlpool may also not be a good option: it's slower, and there is a recent (theoretical) attack on its main component; see the ASIACRYPT 2009 program.
I would stick with SHA-256 for now.