I am currently implementing a secure channel setup with an HSM.
The protocol is proprietary but uses standard crypto mechanisms (RSA, SHA).
During secure channel setup we receive a stack of certificates, with the last one
being the remote device's personal cert.
This chain must be validated; in high-level languages, no problem.
But I could not find any example of how this is done with the PKCS#11 interface.
I have the impression there is no cert chain verification method in PKCS#11?
Must I dissect every cert and verify the signatures with the basic PKCS#11 functions? That is not very secure either; you would want to pass the whole stack to an HSM, which reports back OK or NOT OK. In the OK case the public key of the (in our case) device cert could be used to encrypt a random channel key, etc.
So the question is: how is this normally done with PKCS#11?
X.509 certificate chain validation is a high-level operation which is not directly supported by the rather low-level PKCS#11 API (the same goes for certificate signing request generation, certificate issuance, etc.). You will need to use some other general-purpose cryptographic library, such as OpenSSL, for that.
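To illustrate, here is a minimal sketch of the OpenSSL route in Python with pyOpenSSL; it assumes the chain arrives as DER blobs with the device cert last (as in the question) and that the trusted root is provisioned separately (the names here are illustrative):

```python
# Sketch: validate a received chain outside PKCS#11 using pyOpenSSL.
from OpenSSL import crypto

def verify_device_cert(root_pem: bytes, chain_der: list) -> crypto.X509:
    store = crypto.X509Store()
    store.add_cert(crypto.load_certificate(crypto.FILETYPE_PEM, root_pem))
    certs = [crypto.load_certificate(crypto.FILETYPE_ASN1, der) for der in chain_der]
    device_cert, intermediates = certs[-1], certs[:-1]
    # Raises X509StoreContextError if any link in the chain is bad.
    crypto.X509StoreContext(store, device_cert, chain=intermediates).verify_certificate()
    return device_cert  # its public key can then wrap the channel key

```

The validated device public key can then be imported into the token (e.g. with C_CreateObject) so the channel key can be wrapped on the HSM side rather than in application memory.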
I have a client application which is distributed to multiple clients. Sometimes this application acts as a server for some processes. I want the communication to be over SSL. I want to embed the server certificate inside the application and publish it to multiple clients. Is this design a good idea?
Is there any real-world product example which uses this design?
You could possibly add TLS communication this way, but as I understand your question, all application instances would receive the same certificate and could thus impersonate each other. If someone extracts the private key for the certificate from one app, they can decrypt all the communication for all processes and applications. Just the fact that the key is distributed to multiple environments outside your control could justify revocation of the certificate at any time.
The design is not a good idea if you want proper TLS communication with a publicly trusted root. The processes and applications will likely have to communicate using untrusted certificates, possibly self-signed certificates.
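If you go the self-signed route, each instance can mint its own key pair and certificate at install time instead of sharing one. A sketch with the pyca/cryptography package (common name and lifetime are illustrative):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "instance-42.local")])
cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                 # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256()))
```

Peers would then have to pin or otherwise explicitly trust each instance's certificate, since no public root vouches for it.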
Microsoft Security Advisory 2880823 has been released along with the
policy announcement that Microsoft will stop recognizing the validity
of SHA-1 based certificates after 2016.
Does it mean that all signatures created with the SHA-1 algorithm will be invalid?
Or will only certificates using the SHA-1 algorithm be invalid in newer operating systems?
The latter.
It would be very strange if the cryptographic security provider were no longer able to handle PKCS#1 signatures with SHA-1. What will happen is that certificate chain validation will not allow a SHA-1 signature to be used for certificate verification (except for the trusted/root certificate, because these are explicitly trusted in the first place).
As stated in the advisory:
The new policy will no longer allow root certificate authorities to issue X.509 certificates using the SHA-1 hashing algorithm for the purposes of SSL and code signing after January 1, 2016.
This seems to only affect intermediate CA certificates, although I would not be surprised if e.g. IE would also disallow the use of SHA-1 for chain validation for lower intermediate CAs and end-user certificates.
The updates also indicate that Microsoft won't allow any certificates to be signed using SHA-1 within a chain for TLS.
As for the "newer operating systems" part of the question: I would expect policy change to be implemented for all supported platforms (at the time the change is introduced).
Note that the use of SHA-1 for signature verification will be pretty dangerous, especially if the contents can be controlled. This is very much the case when the signature is used for non-repudiation. It is much less the case if the signature is used for e.g. challenge verification in a challenge response protocol where the input is ephemeral and generated by the party that performs the verification.
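If you want to check ahead of time which certificates in your chains would be affected, the signature hash is easy to inspect. A sketch with the pyca/cryptography package, where chain_pems is an assumed list of PEM-encoded certificates:

```python
# Sketch: flag certificates in a chain that are still signed with SHA-1.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def sha1_signed_subjects(chain_pems):
    certs = [x509.load_pem_x509_certificate(pem) for pem in chain_pems]
    return [c.subject.rfc4514_string()
            for c in certs
            if isinstance(c.signature_hash_algorithm, hashes.SHA1)]
```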
I am implementing OpenID Connect as the Relying Party using the 'implicit flow'. We are using the HS256 MAC-based algorithm for the signature.
As part of this implementation we need to generate a shared secret or key that is shared between both parties.
Is it safe to store this shared key in our database without hashing it? Normally we would hash and salt a password or API key, but it seems that with the 'implicit flow' specified in OpenID Connect the secret is never sent across the wire, hence we would need to always be able to retrieve it again.
What is the best practice in this scenario?
As I understand it, if you have implemented the Implicit Flow, you are using a public client.
In the OpenID Connect specification, section 10.1 states: Symmetric signatures MUST NOT be used by public (non-confidential) Clients because of their inability to keep secrets.
In this case, you must use a signing algorithm that uses an asymmetric key pair (RSA or ECDSA).
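A minimal sketch of the asymmetric alternative with PyJWT (the payload is illustrative): the OP signs with its private RSA key and the client verifies against the published public key, so there is no secret for the client to keep.

```python
# Sketch: RS256-signed token instead of HS256 (PyJWT + pyca/cryptography).
import jwt
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stays with the OP
token = jwt.encode({"sub": "user-123"}, key, algorithm="RS256")
claims = jwt.decode(token, key.public_key(), algorithms=["RS256"])    # client side
```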
We have an iOS app which interacts with various web services at the backend. The backend, however, wants to validate that a request coming in is from our valid iOS app and not from a replay attack or a man-in-the-middle attack. We are eventually going to change all our calls to HTTPS. However, is there any way the backend can validate that the request is coming from our legitimate app? We were thinking of using a cryptographic nonce with every request, but it would still be prone to a man-in-the-middle attack. Is there any certificate exchange that can be used between the iOS app and the server?
TLS and SSL support client authentication using certificates. NSStream might support client-side authentication, but I have not been able to find a way to do it without dropping down to OpenSSL for the actual implementation.
Addition:
ASIHTTPRequest has supported client certificates since version 1.8, so there is no fuss in implementing it.
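For reference, this is what requiring a client certificate looks like on the server end; a sketch using Python's standard ssl module (file names and port are illustrative):

```python
# Sketch: the server end of client-certificate (mutual TLS) authentication.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem", "server.key")
ctx.verify_mode = ssl.CERT_REQUIRED           # reject clients without a valid cert
ctx.load_verify_locations("client_ca.pem")    # CA that issued the app's client certs

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls:
        conn, addr = tls.accept()             # handshake fails without a client cert
        print(conn.getpeercert())             # the authenticated client certificate
```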
What about using a private/public key scheme so that the iOS app can sign every request it sends?
If a private/public key scheme sounds scary, the same idea of "signing" your requests can be implemented with a keyed hash (HMAC over your crypto nonce and the request, using SHA-1, SHA-2 or another cryptographic hash). This is pretty easy to implement (implementations are readily available), fast, and raises the security level.
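A concrete (hypothetical) sketch of that idea in Python; how the per-install shared_secret is provisioned to the app is out of scope here:

```python
# Sketch: HMAC-sign each request with a per-install secret and a nonce.
import hashlib
import hmac
import os
import time

def sign_request(shared_secret: bytes, body: bytes) -> dict:
    nonce = os.urandom(16).hex()              # the crypto nonce from the question
    ts = str(int(time.time()))
    mac = hmac.new(shared_secret, f"{nonce}:{ts}:".encode() + body,
                   hashlib.sha256).hexdigest()
    return {"X-Nonce": nonce, "X-Timestamp": ts, "X-Signature": mac}

def verify_request(shared_secret: bytes, body: bytes, headers: dict) -> bool:
    # Server side: recompute and compare in constant time; also reject
    # nonces already seen (not shown) to stop replays.
    expected = hmac.new(
        shared_secret,
        f"{headers['X-Nonce']}:{headers['X-Timestamp']}:".encode() + body,
        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```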
I would suggest using OAuth. It is well known and understood and pretty much secure, and in the case that someone gets your token, you can issue a new one with an app update and revoke the old one.
This is a general HTTP problem, not just an iOS issue. In fact, it's the very problem HTTPS is designed to solve, or at least mitigate. You can sign the request, use an HMAC to authenticate the message, use digest authentication, and so on, but as long as you're using plain HTTP, a man-in-the-middle attack cannot be easily detected. Spend your time moving to HTTPS as quickly as you can instead.
This problem is impossible to solve absolutely. Anything you put into your scheme can ultimately be broken by jailbreaking the phone and running the client in a debugger. Of course, that doesn't mean you can't make it more difficult to spoof your client using client certificates and whatnot, and you should. But if, for example, the security of financial transactions depends on your app not being spoofable, that would be bad...
I am considering using ZeroMQ as a messaging layer between my applications. At least in some cases I want the communication to be secure, and I am thinking about SSL.
Is there some standard way to SSL-enable ZeroMQ? As far as I understand, it doesn't support it out of the box.
It would be nice if I just had a parameter when connecting to a socket (bool: useSsl) :)
Any ideas?
Understanding that this is not really an answer to your question, I'm going to be encrypting the messages directly with RSA, before sending them with 0mq.
In the absence of a more integrated encryption method that is fully tested and implemented in my platform of choice, that's what I'm going with. 0mq just recently released version 4, which has encryption baked in, but it's still considered experimental and isn't fully supported by the language bindings.
Encrypting the message, rather than the connection, seems to provide the simplest upgrade path, and the difference for our purposes is pretty much just semantics, given how we'd have to implement encryption today.
Edit: I know more about encryption now than I did when I wrote this. RSA is not an appropriate choice for encrypting message data. Use AES, either with manually shared keys (this is our approach for the short term) or by implementing a key-sharing scheme as in Jim Miller's answer. But beware if you take the latter approach: designing and implementing a key-sharing scheme securely is hard, way harder than you'd think. You can implement SSL/TLS directly (using message BIOs), and others have done so; it's also not simple, but at least the SSL scheme is an industry standard and therefore meets a minimum security requirement.
In short, until the elliptic-curve crypto baked into ZeroMQ 4 is considered reliable and becomes standard, the "accepted solution" would be to implement SSL/TLS over the connection manually, and failing that, to use AES-128 or AES-256 with a secure key-sharing mechanism (key sharing is where RSA would appropriately be used).
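A sketch of the AES route with the pyca/cryptography AESGCM primitive, assuming the key has already been shared securely:

```python
# Sketch: encrypt individual 0mq payloads with AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)

def encrypt_message(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                    # fresh nonce per message; never reuse
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_message(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)  # raises InvalidTag if tampered
```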
We are currently implementing a pre-shared key solution using 0mq that implements a key exchange protocol based loosely on TLS/SSL.
Essentially, we have a data aggregator service that publishes encrypted state-of-health data over a multicast 0mq publisher. A symmetric key (AES-128) is used to encrypt the data, and it can be retrieved from a second service running as a simpler request/response model over 0mq.
To retrieve the symmetric key (PSK), we are implementing the following protocol (a code sketch of the key-wrapping steps follows the list):
1. Client connects
2. Server sends its certificate
3. Client verifies server certificate against a CA chain of trust
4. Client sends its certificate
5. Server verifies client certificate against its CA chain
6. Server encrypts PSK using client public key
7. Server sends encrypted PSK to client
8. Client decrypts PSK
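A sketch of steps 6-8 with the pyca/cryptography package; the client key pair here is a stand-in for the one whose certificate was verified in steps 3-5:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

psk = os.urandom(16)                                    # the AES128 session key
wrapped = client_key.public_key().encrypt(psk, oaep)    # step 6: server wraps the PSK
assert client_key.decrypt(wrapped, oaep) == psk         # step 8: client unwraps it
```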
Once a client has the PSK, it can decrypt the messages retrieved over multicast.
We are also looking at implementing a session-expiry algorithm that uses two enveloped keys in the multicast service. One key is the current session key, and the second is the old, expiring key. That way, a client has a little more time to retrieve the new key without having to buffer encrypted messages in the meantime.
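The rollover lookup itself can be as simple as trying the current key and falling back to the expiring one; a sketch, assuming AES-GCM as above:

```python
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_with_rollover(current_key: bytes, old_key: bytes,
                          nonce: bytes, ciphertext: bytes) -> bytes:
    try:
        return AESGCM(current_key).decrypt(nonce, ciphertext, None)
    except InvalidTag:
        return AESGCM(old_key).decrypt(nonce, ciphertext, None)
```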
According to zeromq.org, it's not supported yet, but they are looking into it. It looks like it's suggested as a project for Google Summer of Code.
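For what it's worth, the elliptic-curve (CURVE) mechanism mentioned in the answers above is what ZeroMQ 4 ships for this. A minimal pyzmq sketch (port and key handling are illustrative; note that without a ZAP authenticator any client key is accepted, so this encrypts but does not authenticate clients):

```python
import zmq

server_public, server_secret = zmq.curve_keypair()
client_public, client_secret = zmq.curve_keypair()

ctx = zmq.Context()

server = ctx.socket(zmq.REP)
server.curve_secretkey = server_secret
server.curve_publickey = server_public
server.curve_server = True                 # this end terminates the encryption
server.bind("tcp://*:5555")

client = ctx.socket(zmq.REQ)
client.curve_secretkey = client_secret
client.curve_publickey = client_public
client.curve_serverkey = server_public     # client must know the server's public key
client.connect("tcp://localhost:5555")

client.send(b"hello")
print(server.recv())                       # b'hello', encrypted on the wire
```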