Why do big sites not use OCSP stapling? - facebook

I learnt something about OCSP recently.
I check the OCSP stapling of a site with a command like:
$ openssl s_client -connect www.stackoverflow.com:443 -status -servername www.stackoverflow.com
I found that my own site, Stack Overflow, and some other small sites have OCSP stapling configured: they have an OCSP Response Data field in the command's output.
But when I check some big sites like google.com, github.com, and facebook.com, they don't have such a field, and I get "OCSP response: no response sent".
So why don't they use it?

To protect your users as a web server, you must use OCSP Must-Staple. But even then, you need to consider that OCSP responses are often valid for multiple days, so an attacker can obtain a valid OCSP status for a server certificate before that certificate is revoked, and insert it as a stapled OCSP status during a MitM attack against a web client (of course, the attacker must be the same one who compromised the certificate and for whom the certificate has been revoked). So a browser that did not support OCSP stapling and required a live OCSP check would be protected, while a browser supporting OCSP stapling would not be. This is one of the reasons why OCSP stapling is not the ultimate choice: in some cases it is a less secure way to avoid security flaws, and in other cases it is useless, even though it is not a bad attempt to build more trust on the Internet.
So, some big companies like Google offer other means to protect the customers and users of their browsers: they want to offer a more secure experience than other browsers do, because their goal is to attract more users. For this purpose, Google Chrome implements the proprietary CRLSets mechanism.
The same companies and others also want to promote other ways for people to regain trust in the Internet. For instance, some companies follow Google in promoting the Certificate Transparency mechanism. So it would be a bad idea, politically speaking, to implement OCSP stapling on their sites while working on another trust mechanism for the Internet.
The best paper I've read about certificate revocation and the means to protect your users is here: https://arstechnica.com/information-technology/2017/07/https-certificate-revocation-is-broken-and-its-time-for-some-new-tools/

My answer is: probably to save bytes in the response.
Alexandre's answer (especially the bolded text) may be correct for google.com, but not for facebook.com.
An OCSP-stapled response would be present in every TLS connection, and this takes many bytes. What would happen if no OCSP staple exists in the response? The client sends an OCSP verification request to the CA's OCSP responder and caches the response for several days. For popular sites like Google and Facebook, users see pages several times a day, so the OCSP response is normally cached at the client and no OCSP request is required (e.g. only 2% of clients need an actual OCSP request).
So for these popular sites, removing OCSP stapling (and saving a hundred bytes on 98% of requests) is a better choice than making the site faster for 2% of requests.

Related

Is it OK to return certificate status without OCSP (Online Certificate Status Protocol)?

I created a certificate authority server using Node.js and a cryptographic library supporting RSA signing, verification, and X.509 generation. When I added the certificate revocation feature with the Online Certificate Status Protocol (OCSP), I wondered why I have to send a request and receive a response with OCSP, because all I want to know is not an OCSP Request/Response object but just the certificate status (good or revoked).
Does it make sense to request not an OCSP response object (.PEM or something else) but just a certificate status value like an HTTP status code (200: OK, 404: NOT FOUND)?
OCSP (Online Certificate Status Protocol) is a standard protocol for getting the current status of a certificate, defined in RFC 6960.
The protocol defines the interchanged messages, including content, encoding, content-type and HTTP response codes.
If you want to build a general-purpose PKI it does not make sense to define your own protocol because no current client will use it (browsers, mobile devices, software tools, etc.), but expect you to have a standard OCSP service.
But if you are going to build your own client tools for an internal PKI, it may be useful to have a very simple status query service (e.g. 200 GOOD, 401 REVOKED, 404 UNKNOWN). But in that case, do not call it OCSP.
The reason OCSP responds with an object signed by the CA is so that relying parties know that the object and hence the certificate status is authentic.
If your new status service receives a query of "What is the status of the certificate with serial number 123456789" and returns a simple HTTP response, the client will not be able to authenticate that response, making it very simple to carry out a substitution attack and place a 200 GOOD response when in fact the certificate's private key has been compromised and a 401 REVOKED should be sent.
You cannot fix that by responding over HTTPS as that will result in perpetual recursive status checking.
You could possibly use HTTPS if the status server's certificate is issued by a CA that doesn't use your protocol, instead using alternatives such as OCSP or a CRL distribution point. But that just makes the whole solution more complex instead of simplifying the status checking problem.
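For illustration, a minimal sketch of such a non-OCSP status lookup for an internal PKI might look like this. The status names and HTTP codes follow the example above; the revocation database is a hypothetical stand-in for a CA's real issuance/revocation records:

```python
# Toy revocation database: serial number -> status. In a real internal
# PKI this would be backed by the CA's issuance/revocation records.
REVOCATION_DB = {
    "123456789": "GOOD",
    "987654321": "REVOKED",
}

def certificate_status(serial: str) -> tuple[int, str]:
    """Return (http_status, body) for a certificate serial number,
    using the simple mapping discussed above: 200 GOOD, 401 REVOKED,
    404 UNKNOWN."""
    status = REVOCATION_DB.get(serial)
    if status == "GOOD":
        return 200, "GOOD"
    if status == "REVOKED":
        return 401, "REVOKED"
    return 404, "UNKNOWN"
```

Note that, exactly as the answer warns, this response carries no signature, so a client has no way to authenticate it; an on-path attacker could simply substitute 200 GOOD for 401 REVOKED.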

Is this simple REST authentication scheme secure?

I have been looking into REST authentication schemes (many discussed on SO), and many of them seem to be overly complex for my purposes. I have formulated a simpler scheme from elements of the more complex ones: but I would like to know if there are any security holes in my approach.
Influencing factors:
TLS everywhere is too slow and resource heavy
I do not require security against eavesdropping as all information is public.
Proposed authentication scheme:
"Sign up" and "Login" are achieved via a TLS connection. On Login, a username and password are supplied and a shared secret key is returned by the server (and then stored in local storage by the client e.g. HTML5 local storage, App storage, etc).
Every other request takes place over cleartext HTTP
Client side algorithm:
Before sending, every request is "salted" with the shared secret key and an SHA hash is taken of the salted request.
This hash is inserted into the request in a custom HTTP header.
The salt is removed from the request.
The request is sent with the custom header.
Server side algorithm:
Server isolates and removes the custom Hash header from the request.
Server salts the request string with the shared secret key.
Server takes the hash of the salted request and compares it to the value of the custom hash header.
If they are the same, we have identified which user sent the request and can proceed with authorisation etc based on this knowledge.
Are there any vulnerabilities in this scheme that I have overlooked?
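For concreteness, the client and server steps above can be sketched as follows. One deliberate change: this sketch uses HMAC-SHA256 rather than hashing "secret + request" directly, because plain concatenation with SHA-family hashes is vulnerable to length-extension attacks. The header name and canonical request format are illustrative assumptions:

```python
import hashlib
import hmac

# Shared secret returned by the server over TLS at login and stored
# client-side (e.g. HTML5 local storage).
SECRET = b"shared-secret-from-login"

def sign_request(method: str, path: str, body: str, secret: bytes) -> str:
    """Build a canonical representation of the request and MAC it."""
    canonical = f"{method}\n{path}\n{body}".encode()
    return hmac.new(secret, canonical, hashlib.sha256).hexdigest()

# Client side: compute the tag and attach it as a custom header.
headers = {"X-Request-Signature": sign_request("GET", "/items?q=1", "", SECRET)}

# Server side: recompute the tag and compare in constant time.
def verify(method: str, path: str, body: str, secret: bytes, tag: str) -> bool:
    expected = sign_request(method, path, body, secret)
    return hmac.compare_digest(expected, tag)
```

Even in this form the scheme does not prevent replay: an eavesdropper can resend a captured request, signature and all, so a timestamp or nonce would need to be folded into the canonical string.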
I would question your assumptions here.
TLS everywhere is too slow and resource heavy
TLS is becoming almost ubiquitous for APIs and one reason is because it is now relatively cheap for both clients and servers to support it. How much overhead? As usual "it depends" but certainly negligible enough for most modern APIs, even mass-consumer APIs like Facebook and Twitter, to be moving towards using it exclusively.
I do not require security against eavesdropping as all information is public.
This is a common myth about TLS. Even for public data, consider the implications:
Any intermediary agent can inject their own requests and responses. It could be junk, malicious, subtly incorrect, whatever. Secure communication is not just to keep the content private, it's also to maintain its integrity. (A common example is telcos and hotels injecting ads into websites.)
The data may be public, but the traffic may still be sensitive. What if you could monitor Apple's Wikipedia requests? Would it be interesting if there was a spike in requests to car-related articles? Without TLS, intermediaries can monitor requests a user is making.
None of which critiques your algorithm. You could ask on the Cryptography Stack Exchange, but rolling your own authentication is considered fairly risky and is rarely worth it nowadays.
What you are describing is a MAC-based authentication scheme. Instead of rolling your own implementation, you should look at the Hawk or AWS authentication schemes.
A downside of such an authentication scheme is that the server that needs to validate the request needs to talk to the authentication server to get the secret key. This impacts the scalability of the system in a negative way.
Token based authentication schemes can validate the request without going back to the token issuing authority due to digital signatures.
Finally, I agree with @mahemoff that TLS is becoming ubiquitous and very cheap. Actually, depending on the circumstances, HTTPS may outperform HTTP.

crl vs ocsp revocation with iText

I have read all the white papers on the subject, successfully signed certified and time stamped my pdf document, but confusion arises when I want to do revocation.
When I don't implement CRL or OCSP in my signature properties/revocation, I get that the OCSP revocation check was done online.
If I implement crl, I get this :
If I implement ocsp, or ocsp and crl together, I get this:
My question would be: OCSP in my case obviously has priority; that should be down to my CA, in this case StartCom Class 1, am I right? Is this a good way to implement both (because, as it looks, I can only see OCSP even though I implemented both)? And why, in both cases, is the check-revocation button grayed out? What am I missing?

Why sign REST queries even when using SSL?

I've just read this very interesting article: Principles for Standardized REST Authentication and I'm wondering why one should sign REST queries even when using SSL. In my understanding, signing REST queries lets the server ensure requests come from trusted clients.
Having said that, is signing really necessary considering that SSL also protects against man-in-the-middle attacks?
As stated on the Wikipedia article for HTTPS:
[...] HTTPS provides authentication of the web site and associated web server that one is communicating with, which protects against man-in-the-middle attacks. Additionally, it provides bidirectional encryption of communications between a client and server, which protects against eavesdropping and tampering with and/or forging the contents of the communication. In practice, this provides a reasonable guarantee that one is communicating with precisely the web site that one intended to communicate with (as opposed to an imposter), as well as ensuring that the contents of communications between the user and site cannot be read or forged by any third party. [...]
This is why you need HTTPS: so that the client "is sure" that its requests are sent to the proper destination. The article you linked also says this:
If you are not validating the SSL certificate of the server, you don't know who is receiving your REST queries.
But HTTPS normally does not authenticate the client unless you configure the server to request a certificate from the client in order to perform mutual authentication. If you read the comments in the post you linked you will see people mentioning this:
If you are going to use https, why not use it fully, and ask for client side certificates too? Then you get a fully RESTful authentication method, because the client and the server are authenticated at the connection layer, and there is no need to bring authentication into the URI level.
But HTTPS with client-side certificates is more expensive and complex so most API providers keep "the normal" HTTPS to identify the server and use a lighter mechanism to identify the clients: the API keys. The API keys basically consist of a name which is public - for example "Johnny" - and a secret key which is private - for example a long string of randomly generated characters.
When you make a request to the server you include the name "Johnny" in the URL so that the server knows who sent the request. But the server doesn't just blindly trust you that you are "Johnny", you have to prove it by signing the request with the secret key which, because it's private, only the real "Johnny" knows.
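As an illustration of this API-key scheme (every name here, such as the api_key and signature parameters, is invented for the sketch and does not follow any particular provider's format):

```python
import hashlib
import hmac
import urllib.parse

# Server-side table of API keys: public name -> private secret.
API_KEYS = {"Johnny": b"long-random-secret-only-johnny-knows"}

def signed_url(base: str, api_key: str, secret: bytes) -> str:
    """Client side: put the public name in the URL, then sign the
    whole URL with the private secret."""
    url = f"{base}?api_key={urllib.parse.quote(api_key)}"
    sig = hmac.new(secret, url.encode(), hashlib.sha256).hexdigest()
    return f"{url}&signature={sig}"

def server_accepts(url: str) -> bool:
    """Server side: look up the secret for the claimed name and check
    that the signature proves possession of it."""
    unsigned, _, sig = url.rpartition("&signature=")
    params = urllib.parse.parse_qs(urllib.parse.urlparse(unsigned).query)
    secret = API_KEYS.get(params.get("api_key", [""])[0])
    if secret is None:
        return False  # unknown (or missing) public name
    expected = hmac.new(secret, unsigned.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The server knows who claims to be "Johnny" from the URL, but only accepts the request if the signature could have been produced with Johnny's secret.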
A digital signature has legal implications such as non-repudiation, which any value transaction should require. It's not just a matter of authentication. A digital signature on an actual transaction is a much stronger piece of evidence in court than 'this conversation was carried out over SSL with mutual authentication so it must have been the defendant Your Honour'.

How to securely communicate with server?

I'm building a solution consisting of an app and a server. The server provides some methods (JSON) and the app uses them. My aim is to make those API methods inaccessible to other clients. What is the best way to do so?
Should I take a look at certificates (to sign every outgoing request)? If yes, where do I start and what is the performance impact of doing so?
What are alternatives?
Put another way, you need a way to distinguish a valid client's request from an invalid client's request. That means the client needs to present credentials that demonstrate the request comes from a valid source.
SSL certificates are an excellent way to assert identity that can be validated. The validity of an SSL certificate can be confirmed if the certificate contains a valid signature created by another certificate known to be secure, a root cert. As noted in other answers an embedded certificate won't do the job because that certificate can be compromised by dissecting the app. Once it is compromised, you can't accept any requests presenting it, locking out all your users.
Instead of one embedded app cert, you need to issue a separate certificate to each valid user. To do that, you need to set up (or outsource to) a Certificate Authority and issue individual, signed certificates to valid clients. Some of these certificates will be compromised by the user -- either because they were hacked, were careless, or were intentionally trying to defraud your service. You'll need to watch for these stolen certificates, place them on a certificate revocation list (CRL), and refuse service to the compromised certificates. Any web server is able to refuse a connection based on a CRL.
This doesn't solve the security issues, it just moves them out of the app. It is still possible for someone to create what appears to be a valid certificate through social engineering or by stealing your root certificate and manufacturing new signed certificates. (These are problems all PKI providers face.)
There will be a performance hit. How much of a hit depends on the number of requests from the app. The iPhone NSURLConnection class provides support for SSL client certificates and client certificates can be installed in the phone from an e-mail or authenticated web request. Managing the infrastructure to support the client certs will require more effort than coding it into the app.
Incidentally, voting down any answer you don't like creates a chilling effect in the community. You're not nearly as likely to get advice -- good or bad -- if you're going to take a whack at everyone's reputation score.
I will now freely admit that it's an interesting question, but I have no idea how it could be done.
Original answer:
Interesting question. Assuming people can't reverse-engineer the iPhone app, the only solution that comes to mind would be to sign requests with some secret known only to the application and your server. By that, I mean adding an extra argument to every API call that is a hash of the destination URL and other arguments combined with that secret.
To expand upon this: suppose your API call has arguments foo, bar and qux. I would add a signature argument, the value of which could be something as simple as sorting the other arguments by name, concatenating them with their values, adding a secret, and hashing the lot. Then on the server side, I would do the same thing (excepting the signature argument) and check that the hash matches the one we were given in the request.
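A literal sketch of that signature argument might look like this (with one small hardening tweak: a delimiter between arguments, so that different splits of the same characters can't produce the same canonical string; as noted elsewhere in this thread, an HMAC would be preferable to bare hashing):

```python
import hashlib

# Secret known only to the application and the server (the scheme's
# core assumption -- it fails if the app can be reverse-engineered).
SECRET = "app-and-server-only-secret"

def sign_args(args: dict[str, str], secret: str = SECRET) -> str:
    """Sort the arguments by name, concatenate names and values with a
    delimiter, append the secret, and hash the lot."""
    canonical = "&".join(f"{k}={v}" for k, v in sorted(args.items()))
    return hashlib.sha256((canonical + secret).encode()).hexdigest()

def server_check(args: dict[str, str], signature: str) -> bool:
    """Server side: recompute the hash over the non-signature arguments
    and compare it to the signature sent with the request."""
    return sign_args(args) == signature

# Client side: an API call with arguments foo, bar and qux gains an
# extra signature argument.
call = {"foo": "1", "bar": "2", "qux": "3"}
call_signature = sign_args(call)
```

The server performs the same computation (excluding the signature argument) and rejects the request if the values differ.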
Consider authenticated HTTP.
For a cheaper alternative, there's a shared-secret/hash scheme. The client and the server share a secret string of text. Upon request, the client hashes together (using MD5, SHA-1, or some other SHA -- you choose) the request fields and the secret. The hash value is attached to the request -- say, as another POST field.
The server does the same operation with the request and with its copy of the secret, then compares the hash values. If they don't match - service denied.
For added security, you may encrypt the hash with an RSA public key. The client has the public key; the server keeps the private key. The server decrypts the hash with the private key, then does the same comparison. I did that with a C++ WinMobile client and a PHP-based service -- works like a charm. No experience with crypto on iPhone, though.
UPDATE: now that I think of it, if we assume that the attacker has complete control over the client (ahem jailbroken iPhone and a debugger), the problem, as formulated above, is not solvable in theory. After all, the attacker might use your bits to access the service. Reverse-engineer the executable, find the relevant functions and call them with desired data. Build some global state, if necessary. Alternatively, they can automate your UI, screen scraper style. Such is the sad state of affairs.