My use of the Crypto++ library has gone very well, but I have a small question...
If I use RSAES_OAEP_Encryptor & RSAES_OAEP_Decryptor everything is fine. (I'm using a 2048-bit key from PEM files generated by OpenSSL).
My question is this: will the length of ciphertext produced by encryptor.Encrypt(...) always equal decryptor.FixedCiphertextLength(), or can it be less than that? I only ask because this is in a library used by a number of applications and I need to sanity-check parameters.
BTW, is there any faster way to encrypt/decrypt using RSA which maintains at least the same level of security provided by OAEP? With a 1024-bit key, on an example test box, averaged over 1000 iterations, I'm finding it takes about 80 µs to encrypt a short string and 1.03 ms (12 times longer) to decrypt; with a 2048-bit key, encryption takes 190 µs and decryption 4.3 ms (22 times longer). I know that RSA decryption is slow, but... The system is running XP Pro SP3 on a Xeon E5520 and was compiled with VS2008 with /MD rather than /MT. I can't use a shorter key than 2048-bits for compliance reasons...
Many thanks
Nick
Length of ciphertext produced by RSAES_OAEP_Encryptor?
In the case of RSA, I believe FixedPlaintextLength() and FixedCiphertextLength() call MaxPreImage() and MaxImage(). MaxPreImage() and MaxImage(), in turn, return n - 1.
Will the length of ciphertext produced by encryptor.Encrypt(...) always equal decryptor.FixedCiphertextLength(), or can it be less than that?
It depends on the cryptosystem being used. Usually, it's the size of the key that determines whether FixedCiphertextLength() holds (and not the size of the plaintext). In the case of RSA/OAEP and others like ElGamal, I believe it holds.
I think the class of interest here is the PK_CryptoSystem Class Reference. Classes like RSAES_OAEP_Encryptor inherit from PK_CryptoSystem, and that's where FixedCiphertextLength() and friends come from.
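For the sanity check itself, something along these lines is what I would expect to work. This is only a rough sketch (untested); publicKey, message and messageLen stand in for your own objects, and on older Crypto++ versions byte lives in the global namespace rather than CryptoPP:
#include <cryptopp/rsa.h>      // RSAES_OAEP_SHA_Encryptor, RSA::PublicKey
#include <cryptopp/osrng.h>    // AutoSeededRandomPool
#include <cassert>
#include <string>

// Encrypt `message` after sanity-checking the expected ciphertext length.
std::string EncryptChecked(const CryptoPP::RSA::PublicKey& publicKey,
                           const CryptoPP::byte* message, size_t messageLen)
{
    CryptoPP::AutoSeededRandomPool rng;
    CryptoPP::RSAES_OAEP_SHA_Encryptor encryptor(publicKey);

    size_t fixedLen = encryptor.FixedCiphertextLength();       // determined by the key size
    size_t needed   = encryptor.CiphertextLength(messageLen);  // 0 means the message is too long for the key
    assert(needed != 0 && needed == fixedLen);

    std::string ciphertext(fixedLen, '\0');
    encryptor.Encrypt(rng, message, messageLen,
                      reinterpret_cast<CryptoPP::byte*>(&ciphertext[0]));
    return ciphertext;
}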
With a 1024-bit key, on an example test box, averaged over 1000 iterations, I'm finding it takes about 80 µs to encrypt a short string and 1.03 ms (12 times longer) to decrypt; with a 2048-bit key, encryption takes 190 µs and decryption 4.3 ms (22 times longer)
This is a different question, but...
In the case of encryption or verification, the public exponent is used. The public exponent is, by default, 65537 (IIRC). That's got a low Hamming weight (a high density of 0's), so square-and-multiply exponentiation routines run relatively fast.
On the other hand, decryption and signing use the private exponent, which should have roughly an even mix of 1's and 0's. There are lots of squares and multiplies to perform, and those take extra time.
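To see why the bit pattern of the exponent matters, here is a toy square-and-multiply sketch (illustrative only; it is not constant-time, it is not how Crypto++ implements modular exponentiation, and it assumes mod is small enough that the 64-bit products below don't overflow):
#include <cstdint>

// Right-to-left square-and-multiply: one squaring per exponent bit,
// plus one extra multiply for every bit that is set.
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp != 0) {
        if (exp & 1)                      // a set bit costs an extra multiply
            result = result * base % mod;
        base = base * base % mod;         // a squaring happens on every iteration
        exp >>= 1;
    }
    return result;
}

// e = 65537 (0x10001) has only two bits set, so encryption/verification pays for
// very few extra multiplies; a private exponent d has roughly half its bits set,
// so decryption/signing pays for many more.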
Those little timing differences are where side channels come from; if you are not careful, the NSA will thank you.
I can't use a shorter key than 2048-bits for compliance reasons
A 2048-bit modulus is about 10x slower than a 1024-bit modulus. It's why so many folks were reluctant to move from 1024-bit keys, and why 1024-bit is still kind of preferred.
Peter Gutmann has this to say about it in his Engineering Security book (p. 229):
Another example [of broken threat models] occurred with keys in
certificates, for which browser vendors (in response to NIST
requirements) forced CAs to switch from 1024-bit to 2048-bit keys,
with anything still using 1024-bit keys being publicly denounced as
insecure. As discussed in “Problems” on page 1, the bad guys didn’t
even notice whether their fraudulent certificates were being signed
with, or contained, 2048-bit keys or not.
I noticed a HashSet<int> performing very slowly when working on a Flutter project. I had about 20,000 integers in a Set, and checking set.contains() took a very long time. But when I used toString() to convert all items to strings, it performed 1000x faster.
I then tried to create a minimal reproducible code with 10 million random integers, but I couldn't reproduce the issue. Turns out, something special about these data caused the extreme slowness. I've attached a test code (and data) at the end of this question.
How to reproduce:
First, click the "add int" button to add 14790 integers to a set. Then click "query int" (runs set.contains(123)) and "query string" (runs set.contains('123')). Observe that (1) both operations are super slow, and (2) the int version is slower than the string version.
Then click "clear items", then "add string" to add the toString() version of the data. Then click "query int" and "query string" again, and notice how much faster it becomes.
Lastly, click both "add int" and "add string" to create a mixed set (with twice the entries). Observe that the querying times dropped in half for the int version, as if the faster strings helped "dilute" the problem.
I've had several friends run the same test code on various machines (Intel i5, Apple M1, Snapdragon); the timings are different but the conclusions are the same.
What's not the answer here:
Here are some things I considered, but further tests showed they couldn't explain what's happening.
Maybe int needs boxing, whereas string is already an object?
That's probably not the issue here. With 1 million randomly generated values, ints performed faster than strings.
string is immutable so their hash value could be cached?
I don't know if they are cached, but this doesn't explain the results observed with 1 million randomly generated values.
int hash resulted in a lot of collisions?
I tried to print out .hashCode for all ints and strings in the data set, and verified they are all unique.
Test code:
The full test code with data is too long for StackOverflow, so I've put it here https://pastebin.com/raw/4fm2hKQB instead.
So yeah, I'm lost, if anyone could help me understand what's going on that'll be greatly appreciated!
I commented on the issue in the Dart repo. For completeness I will mention the 'answer' part of the comment here.
The implementations of HashSet and LinkedHashSet make the assumption that the key.hashCode values are 'good' hash codes that are reasonably distributed over a range of integers so that the lower N bits do not collide or nearly collide to 'bunch up' in the hash-table. Unfortunately int.hashCode does not have this property as it is effectively the identity function.
Things go wrong when the lower bits of all the keys are the same (or have only a few of the possible values) so taking the lower N bits gives the same effective hash code value. This is just the power-of-two version of the % 1000 example mentioned by #ch271828n.
#ch271828n mentions using a different hashCode. This is probably the best short-term solution. Use LinkedHashSet(hashCode: dispersedHashCode) with something like this:
int dispersedHashCode(e) { // untested!
  int hash = e.hashCode;
  // Odd number with 30%-50% of the bits set in an irregular pattern.
  hash *= 0x1736B4D29;
  hash += hash >>> 20;
  // Maybe do it again to let bits higher than 20 influence the low bits.
  return hash;
}
Something like this would ideally be built into the core library hashed structures. This might take a long time since, realistically, a performance issue with a simple work-around will likely be prioritized behind security bugs, incorrect-behaviour bugs, performance issues with no work-around, and new features that let customers do things that are otherwise impossible or difficult to do.
A completely different approach would be to use an ordered Set like SplayTreeSet.
I am also considering the hash collision problem.
int hash resulted in a lot of collisions?
I tried to print out .hashCode for all ints and strings in the data set, and verified they are all unique.
Well, "all unique" does not mean "there is no collision". For a hash set, the number of bins are much less than the number of hashcode. For example, suppose you have a hash set with 1000 bins, and the mapping from hashCode to bin index is a simple bin index := hashCode % 1000, and suppose your data has hashCode like 0, 1000, 2000, 3000 etc. In this artificial case, your data has all unique hashCode, but they all fall into the first bin out of the 1000 bins. Huge collision!
A simple approach to debug whether it is the problem of hashcode: Re-run the program with LinkedHashSet(hashCode: (e) => some_other_hash_approach(e), equals: ...). By using such a new hash set, you can test on other hashCode generating functions. If some hashCode generating functions do not result in the same extremely slow speed, it is highly because of the original hashCode function which causes collision.
In addition, you can even use the same hashCode method for both the int and the String case. Then you guarantee that both cases have exactly the same collision behavior. Then it is easy to see whether collision is the cause, or is unrelated.
Another debug approach: Look at the C++ source code of LinkedHashSet, and see what algorithms it uses to assign data to bins. Then check whether collision as mentioned above happens or not.
A third debug method: Compile the pure-Dart program into an executable, and use profilers like perf to run it. Then you can see which code is hottest and consume most of the time. You may need debug symbols of Dart's native C++ code, which should be fetchable.
I want to generate a secure random number to use for bearer tokens in vapor swift.
I have looked at OpenCrypto but it doesn't seem to be able to generate random numbers.
How do I do this?
For Vapor you can generate a token like so:
[UInt8].random(count: 32).base64
That will be cryptographically secure to use. You can use it like in this repo.
You may want to take a look at SecRandomCopyBytes(_:_:_:):
From Apple documentation:
Generates an array of cryptographically secure random bytes.
var bytes = [Int8](repeating: 0, count: 10)
let status = SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes)

if status == errSecSuccess { // Always test the status.
    print(bytes)
    // Prints something different every time you run.
}
In general (but keep reading), you'll want SystemRandomNumberGenerator for this. As documented:
SystemRandomNumberGenerator is automatically seeded, is safe to use in multiple threads, and uses a cryptographically secure algorithm whenever possible.
The "whenever possible" may give you pause depending on how this is going to be deployed. If it's on an enumerated list of platforms, you can check that they use a CSPRNG. "Almost" (see below) all current platforms do:
Apple platforms use arc4random_buf(3).
Linux platforms use getrandom(2) when available; otherwise, they read from /dev/urandom.
Windows uses BCryptGenRandom.
On Linux, getrandom is explicitly appropriate for cryptographic purposes, and blocks if it cannot yet provide good entropy. See the source for its implementation. Specifically, if the entropy pool is not initialized yet, it will block:
if (!(flags & GRND_INSECURE) && !crng_ready()) {
    if (flags & GRND_NONBLOCK)
        return -EAGAIN;
    ret = wait_for_random_bytes();
    if (unlikely(ret))
        return ret;
}
On systems without getrandom, I believe swift_stdlib_random, which SystemRandomNumberGenerator uses, may read /dev/urandom before it's initialized. This is a rare situation (typically immediately after boot, though possibly due to other processes rapidly consuming entropy), but it can reduce the randomness of your values. Of currently supported Swift platforms, I believe this only impacts CentOS 7.
On Windows, BCryptGenRandom is documented to be appropriate for cryptographic random numbers:
The default random number provider implements an algorithm for generating random numbers that complies with the NIST SP800-90 standard, specifically the CTR_DRBG portion of that standard.
SP800-90 covers both the algorithm and entropy requirements.
I have a single-threaded client/server application that needs to do both encryption and decryption of their network communication. I plan on using OpenSSL's EVP API and AES-256-CBC.
Some sample pseudo-code I found from a few examples:
// key is 256 bits (32 bytes) when using EVP_aes_256_*()
// I think iv is the same size as the block size, 128 bits (16 bytes)...is it?
1: EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
2: EVP_CipherInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv, 1); //0=decrypt, 1=encrypt
3: EVP_CipherUpdate(ctx, outbuf, &outlen, inbuf, inlen);
4: EVP_CipherFinal_ex(ctx, outbuf + outlen, &tmplen);
5: outlen += tmplen;
6: EVP_CIPHER_CTX_cleanup(ctx);
7: EVP_CIPHER_CTX_free(ctx);
The problem is that, from all these examples, I'm not sure what needs to be done for every encryption/decryption, and what I should only do once on startup.
Specifically:
At line 1, do I create this EVP_CIPHER_CTX just once and keep re-using it until the application ends?
Also at line 1, can I re-use the same EVP_CIPHER_CTX for both encryption and decryption, or am I supposed to create 2 of them?
At line 2, should the IV be re-set at every packet I'm encrypting? Or do I set the IV just once, and then let it continue forever?
What if I'm encrypting UDP packets, where a packet can easily go missing or be received out-of-order: am I correct in thinking CBC won't work, or is this where I need to reset the IV at the start of every packet I send out?
Sorry for reviving an old thread, but I noticed the following error in the accepted answer:
At line 1, do I create this EVP_CIPHER_CTX just once and keep re-using it until the application ends?
You create it once per use. That is, as you need to encrypt, you use the same context. If you need to encrypt a second stream, you would use a second context. If you needed to decrypt a third stream, you would use a third context.
Also at line 1, can I re-use the same EVP_CIPHER_CTX for both encryption and decryption, or am I supposed to create 2 of them?
No, see above.
This is not necessary. From the man page for OpenSSL:
New code should use EVP_EncryptInit_ex(), EVP_EncryptFinal_ex(), EVP_DecryptInit_ex(), EVP_DecryptFinal_ex(),
EVP_CipherInit_ex() and EVP_CipherFinal_ex() because they can reuse an existing context without allocating and freeing it up on each call.
In other words, you need to re-initialize the context each time before you use it, but you can certainly use the same context over and over again without creating (allocating) a new one.
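A minimal sketch of that reuse pattern (error checking omitted; key, the IVs and the message/output buffers are assumed to be defined elsewhere):
#include <openssl/evp.h>

EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();            /* allocate once */

/* first message */
EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv1);
EVP_EncryptUpdate(ctx, out1, &outlen, msg1, msg1len);
EVP_EncryptFinal_ex(ctx, out1 + outlen, &tmplen);

/* second message: same context, re-initialized (and with a fresh IV) */
EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv2);
EVP_EncryptUpdate(ctx, out2, &outlen, msg2, msg2len);
EVP_EncryptFinal_ex(ctx, out2 + outlen, &tmplen);

EVP_CIPHER_CTX_free(ctx);                              /* free once, at the end */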
I have a single-threaded client/server application that needs to do both encryption and decryption of their network communication. I plan on using OpenSSL's EVP API and AES-256-CBC.
If you are using the SSL_* functions from libssl, then you will likely never touch the EVP_* APIs.
At line 1, do I create this EVP_CIPHER_CTX just once and keep re-using it until the application ends?
You create it once per use. That is, as you need to encrypt, you use the same context. If you need to encrypt a second stream, you would use a second context. If you needed to decrypt a third stream, you would use a third context.
Also at line 1, can I re-use the same EVP_CIPHER_CTX for both encryption and decryption, or am I supposed to create 2 of them?
No, see above.
The ciphers will have different states.
At line 2, should the IV be re-set at every packet I'm encrypting? Or do I set the IV just once, and then let it continue forever?
No. You set the IV once and then forget about it. That's part of the state the context object manages for the cipher.
What if I'm encrypting UDP packets, where a packet can easily go missing or be received out-of-order: am I correct in thinking CBC won't work...
If you are using UDP, it's up to you to detect these sorts of problems. You'll probably end up reinventing TCP.
Encryption alone is usually not enough. You also need to ensure authenticity and integrity. You don't operate on data that's not authentic. That's what keeps getting SSL/TLS and SSH in trouble.
For example, here's the guy who wrote the seminal paper on authenticated encryption with respect to IPSec, SSL/TLS and SSH weighing in on the Authenticate-Then-Encrypt (AtE) scheme used by SSL/TLS: Last Call: (Encrypt-then-MAC for TLS and DTLS) to Proposed Standard:
The technical results in my 2001 paper are correct but the conclusion
regarding SSL/TLS is wrong. I assumed that TLS was using fresh IVs and
that the MAC was computed on the encoded plaintext, i.e.
Encode-Mac-Encrypt while TLS is doing Mac-Encode-Encrypt which is
exactly what my theoretical example shows is insecure.
For authenticity, you should forgo CBC mode and switch to GCM mode. GCM is an authenticated encryption mode, and it combines confidentiality and authenticity into one mode so you don't have to combine primitives (like AES/CBC with an HMAC).
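For reference, the GCM flavour of the EVP calls follows the same Init/Update/Final shape plus tag handling. This is only a rough sketch in the spirit of the OpenSSL wiki example (error checks and the optional AAD step omitted; key, iv, plaintext and the output buffer are assumed to exist):
#include <openssl/evp.h>

unsigned char tag[16];
int len = 0, ciphertext_len = 0;

EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, NULL, NULL);
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, 12, NULL);   /* a 96-bit IV is typical for GCM */
EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv);

EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, plaintext_len);
ciphertext_len = len;
EVP_EncryptFinal_ex(ctx, ciphertext + len, &len);
ciphertext_len += len;

EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);  /* send the tag with the packet */
EVP_CIPHER_CTX_free(ctx);

The receiver mirrors this with EVP_DecryptInit_ex/EVP_DecryptUpdate, sets the received tag via EVP_CTRL_GCM_SET_TAG before EVP_DecryptFinal_ex, and must treat a failed EVP_DecryptFinal_ex as "not authentic".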
or is this where I need to reset the IV at the start of every packet I send out?
No, you set the IV once and then forget about it.
The problem is that, from all these examples, I'm not sure what needs to be done for every encryption/decryption, and what I should only do once on startup.
Create this once: EVP_CIPHER_CTX
Call this once for setup: EVP_CipherInit
Call this as many times as you'd like: EVP_CipherUpdate
Call this once for cleanup: EVP_CipherFinal
The OpenSSL wiki has quite a few examples of using the EVP_* interfaces. See EVP Symmetric Encryption and Decryption, EVP Authenticated Encryption and Decryption and EVP Signing and Verifying.
All the examples use the same pattern: Init, Update and then Final. It does not matter if it's encryption or hashing.
Related: this should be of interest to you: EVP Authenticated Encryption and Decryption. It's sample code from the OpenSSL wiki.
Related: you can find copies of Viega, Messier and Chandra's Network Security with OpenSSL online. You might consider hunting down a copy and getting familiar with some of its concepts.
Sorry for reviving an old thread too, but LazerSharks asked twice about the EVP cipher context in the comments. I don't have enough reputation points here to add a comment, so I'll answer instead. (Even now, a Google search doesn't turn up the needed information.)
From the "Network Security with OpenSSL" book by Pravir Chandra, Matt Messier, John Viega:
Before we can begin encrypting or decrypting, we must allocate and initialize a cipher context. The cipher context is a data structure that keeps track of all relevant state for the purposes of encrypting or decrypting data over a period of time. For example, we can have multiple streams of data encrypted in CBC mode. The cipher context will keep track of the key associated with each stream and the internal state that needs to be kept between messages for CBC mode. Additionally, when encrypting with a block-based cipher mode, the context object buffers data that doesn't exactly align to the block size until more data arrives, or until the buffer is explicitly flushed, at which point the data is usually padded as appropriate.
The generic cipher context type is EVP_CIPHER_CTX. We can initialize one, whether it was allocated dynamically or statically, by calling EVP_CIPHER_CTX_init, like so:
EVP_CIPHER_CTX *x = (EVP_CIPHER_CTX *)malloc(sizeof(EVP_CIPHER_CTX));
EVP_CIPHER_CTX_init(x);
Just to complement bacchuswng's answer: I have recently been experimenting with LibreSSL, and it seems there is no problem with reusing an EVP_CIPHER_CTX, but you need to make sure you call EVP_CIPHER_CTX_reset or EVP_CIPHER_CTX_cleanup before starting another encryption/decryption. This is because EVP_EncryptInit / EVP_DecryptInit will clear the context's memory without freeing the cipher_data that was previously allocated, effectively leaking it.
Some ciphers don't need to allocate cipher_data, but the one I was testing (EVP_aes_128_gcm) needs 680 bytes, which adds up quite rapidly.
I can't tell if this is the same behavior with OpenSSL but since documentation for this library is way too hard to come by, I figured I'd share this little quirk (bug perhaps?).
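Concretely, my reading of the above, as a hypothetical sketch:
#include <openssl/evp.h>

EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv1);
/* ... EVP_EncryptUpdate / EVP_EncryptFinal_ex ... */

EVP_CIPHER_CTX_reset(ctx);    /* frees the cipher_data before the next use
                                 (EVP_CIPHER_CTX_cleanup on older libraries) */

EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv2);
/* ... */

EVP_CIPHER_CTX_free(ctx);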
I developed an application in C++ using Crypto++ to encrypt information and store the file on the hard drive. I use an integrity string to check whether the password entered by the user is correct. Can you please tell me if the implementation generates a secure file? I am new to the world of cryptography and I made this program with what I have read.
string integrity = "ImGood";
string plaintext = integrity + string("some text");
byte password[pswd.length()]; // The password is filled somewhere else
byte salt[SALT_SIZE]; // SALT_SIZE is 32
byte key[CryptoPP::AES::MAX_KEYLENGTH];
byte iv[CryptoPP::AES::BLOCKSIZE];
CryptoPP::AutoSeededRandomPool rnd;
rnd.GenerateBlock(iv, CryptoPP::AES::BLOCKSIZE);
rnd.GenerateBlock(salt, SALT_SIZE);
CryptoPP::PKCS5_PBKDF2_HMAC<CryptoPP::SHA512> gen;
gen.DeriveKey(key, CryptoPP::AES::MAX_KEYLENGTH, 32,
              password, pswd.length(),
              salt, SALT_SIZE,
              256);
CryptoPP::StringSink* sink = new CryptoPP::StringSink(cipher);
CryptoPP::Base64Encoder* base64_enc = new CryptoPP::Base64Encoder(sink);
CryptoPP::CFB_Mode<CryptoPP::AES>::Encryption cfb_encryption(key, CryptoPP::AES::MAX_KEYLENGTH, iv);
CryptoPP::StreamTransformationFilter* aes_enc = new CryptoPP::StreamTransformationFilter(cfb_encryption, base64_enc);
CryptoPP::StringSource source(plaintext, true, aes_enc);
stringstream out;
out << iv << salt << cipher;
The information in the string stream "out" is then written to a file. Another thing is that I don't know what the "purpose" parameter in the derivation function means; I'm guessing it is the desired length of the key, so I put 32, but I'm not sure and I can't find anything about it in the Crypto++ manual.
Any opinion, suggestion or mistake pointed out is appreciated.
Thank you very much in advance.
A file can be "secure" only if you define what you mean by "secure".
Usually, you will be interested in two properties:
Confidentiality: the data that is encrypted shall remain unreadable to attackers; revealing the plaintext data requires knowledge of a specific secret.
Integrity: any alteration of the data should be reliably detected; attackers shall not be able to modify the data in any way (even "blindly") without the modification being noticed by whoever decrypts the data.
Your piece of code, apparently, fulfils confidentiality (to some extent) but not integrity. Your string called "integrity" is a misnomer: it is not an integrity check. Its role is apparently to detect accidental password mistakes, not attacks; thus, it would be less confusing if that string was called passwordVerifier instead. An attacker can alter any bit beyond the first 48 bits without the decryption process noticing anything.
Adding integrity (the genuine thing) requires the use of a MAC. Combining encryption and a MAC securely is subject to subtleties; therefore, it is recommended to use for encryption and MAC an authenticated encryption mode which does both, and does so securely (i.e. that specific combination was explicitly reviewed by hordes of cryptographers). Usual recommended AE modes include GCM and EAX.
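As an illustration only (not a drop-in replacement for the code in the question), the Crypto++ GCM mode follows the same filter pattern already used there; a rough sketch, reusing the key and iv arrays from the question and leaving out the PBKDF2 and Base64 parts:
#include <cryptopp/aes.h>
#include <cryptopp/gcm.h>
#include <cryptopp/filters.h>

CryptoPP::GCM<CryptoPP::AES>::Encryption enc;
enc.SetKeyWithIV(key, CryptoPP::AES::MAX_KEYLENGTH, iv, sizeof(iv));

std::string cipher;
CryptoPP::StringSource ss(plaintext, true,
    new CryptoPP::AuthenticatedEncryptionFilter(enc,
        new CryptoPP::StringSink(cipher)));  // ciphertext with a 16-byte tag appended

Decryption uses GCM<AES>::Decryption with an AuthenticatedDecryptionFilter, which refuses (throws) if the data or the tag has been tampered with.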
An important point to note is that, in a context where integrity matters, data cannot be processed before having been verified. This has implications for big files: if your huge file is adorned with a single MAC (whether "manually" or as part of an AE mode), then you must first verify the complete file before beginning to do anything with the plaintext data. This does not work well with streamed processing (e.g. if playing a huge video). A workaround is to split the data into individual chunks, each with its own MAC, but then some care must be taken about the ordering of chunks (attackers could try to remove, duplicate or reorder chunks): things become complex. Complexity, on a general basis, is bad for security.
There are contexts where integrity does not matter. For instance, if your attack model is "the attacker steals the laptop", then you only have to care about confidentiality. However, if the attack model is "the attacker steals the laptop, modifies a few files, and puts it back in my suitcase without me noticing", then integrity matters: the attacker could plant a modification in the file, and infer parts of the secret itself based on your external behaviour when you next access the file.
For confidentiality, you use CFB, which is a bit old-style, but not wrong. For the password-to-key transform, you use PBKDF2, which is fine; the iteration count, though, is quite low: you use 256. Typical values are 20000 or more. The theory is that you should make actual performance measures to set this count to as high a value as you can tolerate: a higher value means slower processing, both for you and for the attacker, so you ought to crank that up (depending on your patience).
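Concretely, that would mean changing only the last argument of the DeriveKey call in the question, e.g. (a sketch; as far as I can tell the third "purpose" parameter is ignored by PBKDF2, so 0 is fine there, and it is not the key length):
CryptoPP::PKCS5_PBKDF2_HMAC<CryptoPP::SHA512> gen;
gen.DeriveKey(key, CryptoPP::AES::MAX_KEYLENGTH,
              0,                        // "purpose" byte: unused by PBKDF2
              password, pswd.length(),
              salt, SALT_SIZE,
              20000);                   // iteration count: crank this up as far as you can tolerate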
Mandatory warning: you are in the process of defining your own crypto, which is a path fraught with perils. Most people who do that produce weak systems, and that includes trained cryptographers; in fact, being a trained cryptographer does not mean that you know how to define a secure protocol, but rather that you know better than defining your own protocol. You are thus highly encouraged to rely on an existing protocol (or format), rather than making your own. I suggest OpenPGP, with (for instance) GnuPG as support library. Even if you need for some reason (e.g. licence issues) to reimplement the format, using a standard format is still a good idea: it avoids introducing weaknesses, it promotes interoperability, and it can be tested against existing systems.
I'm working on a college paper about TLS, and I am asked why the TLS sequence number counter is a 64-bit number when TLS only uses a 32-bit sequence number in its messages. I've looked around for a while, even checked the RFC, and I have found nothing so far. Can anyone help me?
Looks to me like the question is just plain wrong. TLS uses 64-bit sequence numbers, and these are implicit (i.e. not transmitted as part of TLS messages).
Maybe the original question is confusing SQNs in TLS with SQNs in IPsec: there, 32-bit sequence numbers are included in ESP and AH header fields, but 64-bit extended sequence numbers (ESNs) are permitted by the relevant RFCs.
I take it the following quote from RFC 2246, page 74, first paragraph, fifth sentence is an insufficient answer?
Since sequence numbers are 64-bits
long, they should never overflow.
There can be - and often are - differences between wording of the specification and any particular conforming implementation. English is an imprecise language for algorithm specification.
You fail to specify whether the implementation you are looking at never overflows into bit 33, or whether you've just not seen it happen. Claiming that you have seen the counter wrap modulo 2^32 would be a different claim altogether.
Please first be clear about what you are asking. What is a "TLS message"? Are you referring to TLS records? TLS uses a 64-bit counter for record messages, and this number is not included in the TLS records; it is used implicitly.