Read an X.509 Certificate and Store It in OpenPGP Format

I assume that this is possible based on the fact that both utilize RSA for encryption. I should be able to read X.509 and store it as a new OpenPGP key.
The collaborator of my software needs OpenPGP.
Another collaborator provides X.509.
I am looking for a way to convert the keys.
Is that possible, and how would one do that?

The short version: you somewhat can, but there is rarely a good reason to do so.
You can extract the numbers forming the key and theoretically put together a new X.509 and/or OpenPGP key from them, but those would still remain incompatible, separate keys in their respective systems. Actually, the monkeysphere project provides tools for both directions (openpgp2pem and pem2openpgp), but make sure to read the rest of the post before heading out and converting keys.
Both X.509 and OpenPGP are more than a file format for keys: they add (incompatible) options for key management and certification, metadata, identifiers, ... Also, both systems use slightly different cryptographic modes of operation, and have very different (and thus incompatible) formats for encrypted and signed messages. They even have enormous differences in how certifications are handled (hierarchical structure in case of X.509 vs. an arbitrary graph in case of OpenPGP).
In other words: anything you do with the X.509 "representation" of an OpenPGP key sharing the same RSA primes cannot be used with the OpenPGP variant, and the other way round. Certificates issued in one system don't work in the other (and cannot be converted!).
As both key "representations" are incompatible anyway and have to be managed separately, I would strongly recommend creating different sets of keys from the beginning. After all, this adds another layer of security: if one of the keys is breached, the other remains unaffected. Apart from that, performing unusual operations is always error-prone and invites follow-up issues.
There might be good use cases; for example, the monkeysphere project requires those conversions for authenticating SSH connections through OpenPGP keys. But I would not consider general usage for signing and encrypting messages and files a good use case, for the reasons given above.
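To make the "extract the numbers" part concrete, here is a minimal sketch (Python with the pyca/cryptography library, which is my choice and not something the monkeysphere tools require) that pulls the raw RSA parameters out of an X.509 certificate and private key; re-wrapping them as OpenPGP packets is the part pem2openpgp automates:
from cryptography import x509
from cryptography.hazmat.primitives.serialization import load_pem_private_key

# Load an X.509 certificate and print the raw RSA public numbers.
with open("collaborator.crt", "rb") as f:   # hypothetical file name
    cert = x509.load_pem_x509_certificate(f.read())
pub = cert.public_key().public_numbers()    # assumes an RSA key
print("modulus n :", hex(pub.n))
print("exponent e:", pub.e)

# The matching private key (if you hold it) exposes the primes the same way.
with open("collaborator.key", "rb") as f:   # hypothetical file name
    priv = load_pem_private_key(f.read(), password=None)
print("prime p   :", hex(priv.private_numbers().p))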

Related

Client-side Code signing technical explanation

Question: Is there a technical explanation of how client-side code signing can be used in enterprise environments with open source tools like signtool or openssl?
In my use case, I want to create a hash of a code file and send the hash to a separate server. This server is used just for signing; the certificates and private keys are also stored on this server.
After the hash is signed, I want to transfer the signed hash back to the client and envelope the code file, e.g. a .exe file, with the signed hash.
In the "Microsoft Code Signing Best Practices", it is also recommended to first create a hash of the data and then sign the hash value with a private key.
Unfortunately I can't find any further information on how to implement this in separate steps, as described above.
http://download.microsoft.com/download/a/f/7/af7777e5-7dcd-4800-8a0a-b18336565f5b/best_practices.doc
"In practice, using public-key algorithms to directly sign files is inefficient. Instead, typical code-signing procedures first create a cryptographic hash of the data in a file—also known as a digest—and then sign the hash value with a private key. The signed hash value is then used in the digital signature. The digital signature can be packaged with the data or transmitted separately. A separately transmitted digital signature is known as a detached signature."

Which performs better in ad-hoc networks: digital signatures (ECDSA) or hash-based signatures?

I want to know which performs better for providing message authenticity: ECDSA signatures or hash-based signatures. I have read comparisons of ECDSA with RSA, but not with hash-based signatures. Would replacing ECDSA signatures with hash-based signatures improve message authenticity or not?
ECDSA is a hash-based signature in the sense that the data gets hashed first, then ECDSA is performed on the hash (not on the whole data).
When it comes to data verification there are three main approaches:
Straight hash (e.g. SHA-2-256)
  - The fastest option to verify.
  - If you are only protecting against line corruption, this is a valid choice.
  - Otherwise, it requires that the hash/digest value be sent over a tamper-proof channel, because a tamperer can just as easily transmit a recomputed digest along with the tampered document.
  - Provides no proof of origin.
HMAC (e.g. HMAC-SHA256)
  - Requires that both the sender and receiver share the secret key.
  - Either the sender or receiver having the key stolen puts both sides at risk.
  - The secret key needs to come from a key agreement algorithm (ECDH) or be transmitted in secret (encrypted).
  - Proves the document came from someone with the shared secret.
Digital signature (e.g. ECDSA, RSA signature)
  - The sender is the only entity with the private key; the receiver needs only the public key (non-secret).
  - The public key can be embedded in an X.509 certificate to provide a notarized association of the public key to the signer.
  - Or the public key can be transmitted raw over a tamper-proof channel.
  - Provides strong assurances about the document's origin, since the sender should not share their private key.
All three options use a hash algorithm to reduce the original data; the rest of the algorithm determines what you do with that digest. There's not really a standard definition of "secure"; you have to say "secure against (something)". ECDSA provides more assurances than HMAC as long as the private key isn't shared. But if HMAC provides enough assurance, it is probably faster on average (specialty hardware aside).
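As an illustration of the three approaches side by side, here is a small sketch (Python; the library, curve, and key choices are mine, not the answer's):
import hashlib
import hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

message = b"the document to protect"

# 1. Straight hash -- anyone can recompute it, so it only detects corruption.
digest = hashlib.sha256(message).hexdigest()

# 2. HMAC -- needs a secret shared by sender and receiver.
shared_key = b"pre-shared secret"                        # made-up key
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# 3. Digital signature -- only the private-key holder can produce it;
#    anyone with the public key can verify it.
private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))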

What is the best way to post signed content on the internet?

I am currently working on an architecture where users can post content to any server. To ensure the content has actually been posted by a certain user (and has not been altered after being posted), a signature is created using the private key of the author of the content, whose public key is accessible to everyone via a centralized repository.
The problem is, I have no control over how the content is actually stored on these servers. So I might transmit the content e.g. as a JSON object with all data base64-encoded, where the signature is created over a hash of the base64-encoded values concatenated in a certain order:
{
"a": "b",
"c": "d",
"signature": "xyz"
}
with
signature := sign(PrivKey, hash(b + d));
Now the server will probably store the content of this in another way, e.g. a database. So maybe the encoding changes. Maybe a mysql_real_escape_string() is done in PHP so stuff gets lost. Now if one wants to check the signature there might be problems.
So usually when creating signatures you have a fixed encoding and a byte sequence (or string) with some kind of unambiguous delimiter - which is not the case here.
Hence the question: how does one deal with signatures in this kind of scenario?
It is still required to have a specific message representation in bits or bytes to be able to sign it. There are two ways to do this:
just store the byte representation of the message and don't alter it afterwards (if the message is a string, first encode it with a well-defined character encoding);
define a canonical representation of the message; you can either store the canonical representation of the message directly or convert it in memory when you are computing the hash within your signature.
A canonical representation of a message is a special, unique representation of the data that somehow distinguishes it from all other possible messages; this may for instance also include sorting the entries of a table (as long as the order doesn't change the meaning of the table), removing whitespace etc.
XML Signature and XML Encryption, for instance, define canonicalization methods for XML. Obviously it is not possible to define canonicalization for data that has no intrinsic structure. Another (even) more complicated canonical representation is DER for ASN.1 messages (e.g. X.509 certificates themselves, as well as within RSA signatures).
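For the JSON example from the question, a canonical representation can be as simple as fixing the key order, separators and encoding before hashing. A sketch (Python; the exact rules below are ones I picked arbitrarily, the point being that signer and verifier must agree on them):
import hashlib
import json

content = {"a": "b", "c": "d"}       # the fields to be signed, without "signature"

canonical = json.dumps(
    content,
    sort_keys=True,                  # fixed key order
    separators=(",", ":"),           # no insignificant whitespace
    ensure_ascii=True,               # stable character encoding
).encode("ascii")

digest = hashlib.sha256(canonical).digest()
# digest is what gets signed; the verifier rebuilds `canonical` from the stored
# fields the same way, so how the server stores the row no longer matters.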
I think you're really asking two different questions:
How should data be signed?
I suggest using standard digital signature data formats when possible, and "detached signatures" at other times. What this means in practice: PDF, Word, Excel and other file formats that provide for digital signatures should remain in those formats.
File formats that don't provide for digital signatures should be signed using a detached signature. The recommended standard for detached signatures is the .p7b file type: a PKCS#7 digital signature structure without the data. Here is an example of signing data with a detached signature from my company.
This means that the "Relying Party" -- the person downloading/receiving the information -- would download two files. The first is the original data file, unchanged. The second file will be the detached signature for the first.
Benefits: File formats that directly support digital signatures can have their signatures verified using the file's usual software app. I.e., the free Adobe PDF Reader app knows how to verify digitally signed PDFs. In the same way, MS Word knows how to verify signed Word files.
And for the other file types, the associated detached signature file will guarantee to the recipient that the file was not modified since it was signed and who the signer was (depending on the trust issue, see below).
Re database storage -- you don't care how the data is stored on the different servers (database, file system, etc.). In any and all cases, the data should remain unchanged.
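For illustration, a sketch of producing such a detached PKCS#7 signature (Python with the pyca/cryptography library; an equivalent result can be obtained with openssl's CMS/SMIME signing, and the file names here are placeholders):
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

with open("report.bin", "rb") as f:                      # the data file to sign
    data = f.read()
with open("signer.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("signer.key", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

detached = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(data)
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.DER, [pkcs7.PKCS7Options.DetachedSignature])
)
with open("report.bin.p7b", "wb") as f:                  # ship both files together
    f.write(detached)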
How to establish trust between the signer and the recipient
I suggest that the organization create its own root certificate. You can then put the certificate as a file on your SSL web site. (Your web site's SSL certificate should be from a CA, e.g. Comodo, VeriSign, etc.) The result is that people who trust your web site's SSL certificate can then trust your organizational certificate. And your signers' certificates should be chained to your organization's certificate, thus establishing trust for the recipients.
This method of creating a self-signed organizational certificate is low cost and provides a high level of trust. But relying parties will need to download and install your organization's certificate.
If that is not good, you can get certificates for your signers from a public Certificate Authority (CA), but that will drive up the cost by at least an order of magnitude due to the charges from the CA. My company, CoSign, supports all of these configurations.

What sort of algorithms are involved when an application deciphers the token issued by the issuer in SSO?

In the case of claims-based authentication using SSO, an application receives a token from the issuer for a particular user. That token contains the claims as well as some sort of digital signature, so that the application can verify that the issuer is a trusted one.
I want to know: are there some sort of algorithms involved by which this application recognizes an issuer?
I have read that the issuer has a public key and all the other applications have their own private keys; is that true?
There are many protocols, formats and methods of doing Single Sign On (SSO), such as Security Assertion Markup Language (SAML), OpenID and OAuth. The goal is for one entity, such as a website, to identify and authenticate the user (for example through a user name and password), and for other entities, such as other websites, to trust the evidence of that authentication in the form of a token. This means users need not remember yet another password, and each website need not maintain its own list of passwords.
This trust is usually enforced through cryptography, using a digital signature. Digital signatures are used because they allow the trusting entity to verify that the token was (1) issued by the authenticating entity only and (2) not tampered with, all without the trusting entity being able to impersonate (pretend to be) the authenticating entity.
As you say above, this is performed using asymmetric or public key cryptography. Symmetric cryptography, such as the AES or DES algorithms, use a single key to encrypt and decrypt data. Asymmetric cryptography, such as the RSA algorithm, uses two related keys. Data encrypted using one can only be decrypted by the other and vice versa.
One key is usually kept secret, called the private key, and the other is distributed widely, called the public key. In the example above, the authenticating entity has the private key that allows it to encrypt data that anyone with the public key can decrypt.
It would seem to follow that the authenticating entity would just encrypt the user details and use that as the token. However, commonly used asymmetric algorithms like RSA are very slow and encrypting even small amounts of data can take too long.
Therefore, instead of encrypting the user details, the authenticating entity generates a "hash" or "digest" and encrypts that. A hash algorithm converts a piece of data into a small number (the hash) in a way that is very difficult to reverse. Different pieces of data also create different hashes. Common hash algorithms include Message Digest 5 (MD5) and the Secure Hash Algorithm (SHA) and its derivatives like SHA1, SHA256 and SHA512.
The hash encrypted with the authenticating entity's private key is called a digital signature. When it receives the token, the trusting entity decrypts the signature using the authenticating entity's public key and compares the result to a hash it calculates itself. If the hashes are the same, the trusting entity knows the token has not been modified (because the hashes match) and that it must have come from the authenticating entity (because only it knows its private key).
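A sketch of that verification step (Python with the pyca/cryptography library; in practice SAML/JWT libraries wrap this for you, and the library exposes it as a single verify call rather than an explicit decrypt-and-compare):
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def token_is_trusted(issuer_public_key, token_bytes, signature):
    """Check that the token really carries the issuer's signature (RSA assumed)."""
    try:
        issuer_public_key.verify(
            signature,
            token_bytes,
            padding.PKCS1v15(),      # assumed padding scheme
            hashes.SHA256(),         # the hash of the token is computed internally
        )
        return True
    except InvalidSignature:
        return False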
If you want more information about SAML and claims-based authentication, I found this video very helpful. It does get complicated rather quickly and you may need to watch it multiple times but Vittorio covers most of these concepts in great detail.

What is the Difference between a Hash and MAC (Message Authentication code)?

What is the Difference between a Hash and MAC (Message Authentication code)?
By their definitions they seem to serve the same function.
Can someone explain what the difference is?
The main difference is conceptual: while hashes are used to guarantee the integrity of data, a MAC guarantees integrity AND authentication.
This means that a hash code is blindly generated from the message without any kind of external input: what you obtain is something that can be used to check whether the message suffered any alteration during its travel.
A MAC instead uses a private key as the seed to the hash function it uses when generating the code: this should assure the receiver not only that the message hasn't been modified, but also that the sender is who we were expecting, since an attacker otherwise couldn't know the private key used to generate the code.
According to wikipedia you have that:
While MAC functions are similar to cryptographic hash functions, they possess different security requirements. To be considered secure, a MAC function must resist existential forgery under chosen-plaintext attacks. This means that even if an attacker has access to an oracle which possesses the secret key and generates MACs for messages of the attacker's choosing, the attacker cannot guess the MAC for other messages without performing infeasible amounts of computation.
Of course, despite their similarities, they are implemented in different ways: usually a MAC generation algorithm is based upon a hash code generation algorithm, extended to take a private key into account.
A hash is a function that produces a digest from a message. A cryptographically secure hash is one for which it is computationally infeasible to generate a message with a given digest. On its own, a hash of a message gives no information about the sender of a given message. If you can securely communicate the hash of a message, then it can be used to verify that a large message has been correctly received over an unsecured transport.
A message authentication code is a way of combining a shared secret key with a message so that the recipient of the message can authenticate that the sender of the message has the shared secret key and that no one who doesn't know the secret key could have sent or altered the message.
An HMAC is a hash-based message authentication code. Usually this involves applying a hash function one or more times to some sort of combination of the shared secret and the message. HMAC usually refers to the algorithm documented in RFC 2104 or FIPS-198.
A MAC does not encrypt the message, so the message is in plain text. It does not reveal the secret key, so a MAC can be sent across an open channel without compromising the key.
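A minimal sketch of generating and checking an HMAC as described (Python's standard hmac module implements RFC 2104; the key and message here are made up):
import hashlib
import hmac

key = b"shared secret key"                                # made-up shared secret
message = b"the message travelling over the open channel"

tag = hmac.new(key, message, hashlib.sha256).digest()     # sent with the message

# Receiver recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)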
I found this to-the-point answer on another forum.
These types of cryptographic primitive can be distinguished by the security goals they fulfill (in the simple protocol of "appending to a message"):
Integrity: Can the recipient be confident that the message has not been accidentally modified?
Authentication: Can the recipient be confident that the message originates from the sender?
Non-repudiation: If the recipient passes the message and the proof to a third party, can the third party be confident that the message originated from the sender? (Please note that I am talking about non-repudiation in the cryptographic sense, not in the legal sense.) Also important is this question:
Keys: Does the primitive require a shared secret key, or public-private keypairs? I think the short answer is best explained with a table:
Cryptographic primitive | Hash | MAC       | Digital
Security Goal           |      |           | signature
------------------------+------+-----------+-------------
Integrity               | Yes  | Yes       | Yes
Authentication          | No   | Yes       | Yes
Non-repudiation         | No   | No        | Yes
------------------------+------+-----------+-------------
Kind of keys            | none | symmetric | asymmetric
                        |      | keys      | keys
Please remember that authentication without confidence in the keys used is useless. For digital signatures, a recipient must be confident that the verification key actually belongs to the sender. For MACs, a recipient must be confident that the shared symmetric key has only been shared with the sender.
HASH FUNCTION: A function that maps a message of any length into a fixed length hash value, which serves as the authenticator.
MAC: A function of the message and a secret key that produces a fixed length value that serves as the authenticator.
A hash is a summary or a fingerprint of a message and by itself provides neither integrity nor authentication, as it is susceptible to a man-in-the-middle attack. Suppose A wants to send a message M, combined with hash H of M, to B. Instead, C captures the message, generates a message M2 and hash H2 of M2, and sends them to B. Now B has no means to verify whether this is the original message from A or not. However, hashes can be used in other ways to achieve integrity and authentication, such as in a MAC.
A MAC, which is also a summary of the message, provides integrity and authentication. A MAC can be computed in many ways. The simplest method is to use a hash function with two inputs, the message and a shared secret key. The use of the shared secret key adds the authentication ability to the MAC, and thus provides integrity and authentication. However, a MAC still does not provide non-repudiation, as any party holding the shared secret key can produce the message and MAC.
This is where digital signatures and public-key cryptography come into play.
Basically, the main difference is that a MAC uses a secret key while a hash does not use any key. Because of that, a MAC allows us to achieve authentication.
MACs are built on symmetric cryptography, whereas digital signatures use asymmetric cryptography; plain hash functions use no keys at all.
Cryptographic hash functions are not by themselves MACs, but a MAC can be built from a cryptographic hash function (a keyed hash function such as HMAC).
Neither hash functions nor MACs provide non-repudiation; for that you need a digital signature.