Not sure if this is the right place to ask this kind of question. I am looking for some examples of private key leakage or keypair compromise (via insider attack, configuration mistake, etc.) that led to certificate revocation. I am particularly looking for cases where a CA was compromised, but information on leaf certificates is also useful.
Thanks!
I have a client application which is distributed to multiple clients. Sometimes this application acts as a server for some processes. I want the communication to be over SSL. I want to embed the server certificate inside the application and publish it to multiple clients. Is this design a good idea?
Is there any real product example that uses this design?
You could possibly add TLS communication this way, but as I understand your question, all application instances would receive the same certificate and could thus impersonate each other. If someone extracts the private key for the certificate in one app, they can decrypt all the communication for all processes and applications. The mere fact that the key is distributed to multiple environments outside your control could justify revocation of the certificate at any time.
The design is not a good idea if you want proper TLS communication with a publicly trusted root. The processes and applications will likely have to communicate using untrusted certificates, possibly self-signed ones.
I am currently implementing a secure channel setup with an HSM.
The protocol is proprietary but uses standard crypto mechanisms (RSA, SHA).
At secure channel setup we receive a stack of certificates, the last one being the remote device's personal certificate.
This chain must be validated; in high-level languages that is no problem.
But I could not find any example of how this is done with the PKCS#11 interface.
I have the impression there is no certificate chain verification method in PKCS#11?
Must I dissect every certificate and verify the signatures with the basic PKCS#11 functions? That does not seem very secure either; you would want to pass the whole stack to the HSM, which reports back OK or NOT OK. In case of OK, the public key of the (in our case) device certificate could be used to encrypt a random channel key, etc.
So the question is: how is this normally done with PKCS#11?
X.509 certificate chain validation is a high-level operation which is not directly supported by the rather low-level PKCS#11 API (the same goes for certificate signing request generation, certificate issuance, etc.). You will need to use some other general-purpose cryptographic library such as OpenSSL for that.
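For example, if the chain arrives as a list of DER-encoded certificates with the device (leaf) certificate last, the host-side check could look roughly like the sketch below, using pyOpenSSL (the OpenSSL bindings for Python). This is only an illustration under my own assumptions (a single trusted root in ca.pem, no CRL/OCSP checking), not something your proprietary protocol prescribes:

```python
# Sketch: validating a received certificate chain outside PKCS#11 using
# pyOpenSSL. Assumes der_certs holds DER blobs, leaf certificate last,
# and that a single trusted root certificate sits in "ca.pem".
from OpenSSL import crypto

def verify_chain(der_certs, ca_file="ca.pem"):
    store = crypto.X509Store()
    with open(ca_file, "rb") as f:
        store.add_cert(crypto.load_certificate(crypto.FILETYPE_PEM, f.read()))

    certs = [crypto.load_certificate(crypto.FILETYPE_ASN1, der) for der in der_certs]
    leaf, intermediates = certs[-1], certs[:-1]

    # Raises X509StoreContextError if any link in the chain is invalid.
    crypto.X509StoreContext(store, leaf, chain=intermediates).verify_certificate()

    # On success, the leaf's public key can be used to wrap the channel key.
    return leaf.get_pubkey()
```

With this split, the HSM is only asked (via PKCS#11) to generate and wrap the random channel key; the chain check itself happens in the host application.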
Microsoft Security Advisory 2880823 has been released along with the
policy announcement that Microsoft will stop recognizing the validity
of SHA-1 based certificates after 2016.
Does it mean that all signatures created with the SHA-1 algorithm will no longer be valid?
Or will only certificates that use the SHA-1 algorithm be invalid in newer operating systems?
The latter.
It would be very strange if the cryptographic security provider could no longer handle PKCS#1 signatures with SHA-1 at all. What will happen is that certificate chain validation will not allow a SHA-1 signature to be used for certificate verification (except for the trusted/root certificate, because those are explicitly trusted in the first place).
As stated in the advisory:
The new policy will no longer allow root certificate authorities to issue X.509 certificates using the SHA-1 hashing algorithm for the purposes of SSL and code signing after January 1, 2016.
This seems to only affect intermediate CA certificates, although I would not be surprised if e.g. IE would also disallow the use of SHA-1 during chain validation for lower intermediate CAs and end user certificates.
The updates also indicate that Microsoft won't allow any certificates to be signed using SHA-1 within a chain for TLS.
As for the "newer operating systems" part of the question: I would expect the policy change to be implemented for all supported platforms (at the time the change is introduced).
Note that the use of SHA-1 for signature verification will be pretty dangerous, especially if the contents can be controlled. This is very much the case when the signature is used for non-repudiation. It is much less the case if the signature is used for e.g. challenge verification in a challenge response protocol where the input is ephemeral and generated by the party that performs the verification.
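If you want to get ahead of the policy change, you can audit which certificates in an existing chain are still signed with SHA-1. A minimal sketch using Python's `cryptography` package (the bundle file name is just an example):

```python
# List certificates in a PEM bundle whose signatures still use SHA-1.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("chain.pem", "rb") as f:
    certs = x509.load_pem_x509_certificates(f.read())

for cert in certs:
    if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
        print("SHA-1 signed:", cert.subject.rfc4514_string())
```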
Some (especially bank) password systems require you to enter three (specified) letters out of your password to log in.
This is supposed to defeat keyloggers, and possibly wire-sniffing replay attacks (for unencrypted sessions).
Clearly, there's no way such a scheme can work using ordinary password hashing, since you'd need to know the whole password to compute the hash.
What do such systems commonly store server-side to make this work?
Do they store the password in plaintext, or maybe a separate hash of each letter, or what?
As you correctly note, standard password hashing schemes won't work if authentication is done using only a substring of the password. There are a number of ways that such a system could be implemented:
Store the password in plain:
Simple and easy to implement.
Insecure if the database is compromised.
May not comply with regulations requiring hashed or encrypted password storage (but using low-level database encryption might get around that).
Store the password encrypted, decrypt to check:
No more secure than storing it in plain if the encryption key is also compromised.
May satisfy regulations forbidding password storage in plain.
Could be made more secure by using a dedicated hardware security module or a separate authentication server, which would store the key and provide a black-box interface for encryption and substring verification.
Store hashes of all (or sufficiently many) possible substrings (a rough sketch of this option follows the list):
Needs much more storage space than other solutions.
Password can still be recovered fairly easily by brute force if the database is compromised, since each substring can be attacked separately.
Use k-out-of-n threshold secret sharing:
Needs less space than storing multiple hashes, but more than storing the password in plain or using reversible encryption.
No need to decrypt the password for substring verification.
Still susceptible to brute force attack if database is compromised: anyone who can guess k letters of the password can recover the rest. (In fact, with some implementations, k-1 letters might be enough.)
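To make the substring-hash option concrete, here is a minimal sketch of pre-computing salted hashes of every three-letter (position, character) combination. It is purely illustrative: a real deployment would use a slow KDF such as scrypt or Argon2 rather than a single SHA-256, and the table size grows combinatorially with password length.

```python
# Sketch of the "hash all substrings" option for 3-letter challenges.
import hashlib, os
from itertools import combinations

def enroll(password, k=3):
    salt = os.urandom(16)
    table = {}
    for positions in combinations(range(len(password)), k):
        key = ",".join(map(str, positions))
        chars = "".join(password[p] for p in positions)
        # A real system should use a slow KDF (scrypt/Argon2), not bare SHA-256.
        table[key] = hashlib.sha256(salt + key.encode() + chars.encode()).hexdigest()
    return salt, table

def check(salt, table, positions, chars):
    key = ",".join(map(str, positions))
    digest = hashlib.sha256(salt + key.encode() + chars.encode()).hexdigest()
    return table.get(key) == digest

salt, table = enroll("correcthorse")
# Server challenge: letters 1, 4 and 7 (zero-based positions 0, 3, 6).
print(check(salt, table, (0, 3, 6), "crt"))   # True
```

Note how small the input space of each entry is (three characters plus known positions), which is exactly why the brute-force weakness described above applies.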
Ultimately, all of these schemes suffer from weakness against brute force attacks if the database is compromised. The fundamental reason for this is that there just isn't very much entropy in a three-letter substring of a typical password (or, indeed, of even a particularly strong one), so it won't take many guesses to crack.
Which of these is best? That's hard to say. If I had to choose one of these schemes, I'd probably go for encrypted storage using strong symmetric encryption (such as AES), with a separate server or HSM to handle encryption and verification. That way, at least, an attacker compromising a front-end server wouldn't be able to just copy the database and attack it offline (although they could still mount a brute force attack on the HSM if it didn't implement effective rate limiting).
However, I'd say that the whole idea of using only part of the password for authentication is deeply flawed: it doesn't really deliver the security benefits it's supposed to, except in a few particularly constrained attack scenarios (such as an eavesdropper that can only observe one authentication event, and cannot just keep trying until they get the same challenge), yet it fundamentally weakens security by reducing the amount of information needed for successful authentication. There are much better solutions, such as TANs, to the security concerns that partial password authentication is supposed to address.
I am considering using ZeroMQ as a messaging layer between my applications. At least in some cases I want the communication to be secure, and I am thinking about SSL.
Is there some standard way to SSL-enable ZeroMQ? As far as I understand, it doesn't support it out of the box.
It would be nice if I just had a parameter when connecting to a socket (bool: useSsl) :)
Any ideas?
Understanding that this is not really an answer to your question, I'm going to be encrypting the messages directly with RSA, before sending them with 0mq.
In the absence of a more integrated encryption method that is fully tested and implemented in my platform of choice, that's what I'm going with. 0mq just recently released version 4, which has encryption baked in, but it's still considered experimental and isn't fully supported by the language bindings.
Encrypting the message, rather than the connection, seems to provide the simplest upgrade path, and the difference for our purposes is pretty much just semantics, given how we'd have to implement encryption today.
Edit: I know more about encryption now than I did when I wrote this. RSA is not an appropriate choice for encrypting message data. Use AES, either with manually shared keys (this is our approach for the short term) or by implementing a key-sharing scheme as in Jim Miller's answer... but beware if you take the latter approach: designing and implementing a key-sharing scheme securely is hard. Way harder than you'd think. You can implement SSL/TLS directly (using memory BIOs), and others have done so; it's also not simple, but at least the SSL scheme is an industry standard and therefore meets a minimum security requirement.
In short, until the elliptic-curve crypto baked into ZMQ 4 is considered reliable and becomes standard, the "accepted solution" would be to implement SSL/TLS over the connection manually, and failing that, use AES-128 or AES-256 with a secure key-sharing mechanism (key sharing is where RSA would appropriately be used).
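As an illustration of the short-term approach (AES with a manually shared key), encrypting each message before handing it to 0mq could look roughly like this. This is a sketch under my own assumptions, using pyzmq and the `cryptography` package's AES-GCM; the endpoint and key handling are placeholders:

```python
# Minimal sketch: encrypt each 0MQ message with AES-256-GCM before sending,
# assuming the 256-bit key has already been shared out of band.
import os
import zmq
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)          # placeholder: in practice, the pre-shared key
aead = AESGCM(key)

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.connect("tcp://localhost:5555")   # example endpoint

plaintext = b"application message"
nonce = os.urandom(12)                 # never reuse a nonce with the same key
ciphertext = aead.encrypt(nonce, plaintext, None)
push.send(nonce + ciphertext)          # receiver splits off the 12-byte nonce
```

The receiver splits off the nonce and calls AESGCM(key).decrypt(nonce, ciphertext, None); GCM also authenticates the message, so tampering is detected at decryption time.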
We are currently implementing a pre-shared key solution using 0mq that implements a key exchange protocol based loosely on TLS/SSL.
Essentially, we have a data aggregator service that publishes encrypted state of health data over a multicast 0mq publisher. A symmetric key is used (AES128) to encrypt the data and can be retrieved from a second service running as a simpler request/response model over 0mq.
To retrieve the symmetric key (PSK), we are implementing the following protocol (a rough client-side sketch follows below):
Client connects
Server sends its certificate
Client verifies server certificate against a CA chain of trust
Client sends its certificate
Server verifies client certificate against its CA chain
Server encrypts PSK using client public key
Server sends encrypted PSK to client
Client decrypts PSK
Once a client has the PSK, it can decrypt the messages retrieved over multicast.
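A rough sketch of the client side of the last steps (requesting the wrapped PSK and decrypting it with the client's private key) is shown below, using pyzmq and the `cryptography` package. The endpoint, key file and the OAEP padding choice here are illustrative assumptions, not a fixed part of the protocol:

```python
# Sketch: client requests the RSA-encrypted PSK over a 0MQ REQ socket and
# unwraps it with its private key. The certificate exchange steps are omitted.
import zmq
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("client_key.pem", "rb") as f:           # illustrative key file
    client_key = serialization.load_pem_private_key(f.read(), password=None)

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://keyserver:5556")               # example endpoint

req.send(b"GET_PSK")
wrapped_psk = req.recv()

psk = client_key.decrypt(
    wrapped_psk,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# psk is now the AES-128 key used to decrypt the multicast messages.
```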
We are also looking at implementing a session-expiry scheme that uses two enveloped keys in the multicast service. One key is the current session key, and the second is the old, expiring key. That way, a client has a little more time to retrieve the new key without having to buffer encrypted messages in the meantime.
According to zeromq.org, it's not supported yet but they are looking into it. It looks like it's suggested as a project for Google Summer of Code.