Importing a key into a SmartCard-HSM 4k

I have an old 4096-bit RSA private key that I want to import into a SmartCard-HSM 4k.
Is there an easy way of doing that?

If such a function is supported, then it should be relatively easy.
However:
1. 4096-bit RSA key components don't fit in a single APDU;
2. 4096 bits is about the maximum key size that smart cards support at all, and even then they may require specific configuration.
Problem 1 can be avoided by using extended-length APDUs, command chaining, or special "files" to write the key to.
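To make command chaining concrete, here is a minimal sketch, assuming nothing about the SmartCard-HSM's actual command set: the CLA chaining bit (0x10) is standard ISO 7816-4, but the INS/P1/P2 values below are placeholders, and a real key import would follow the card's documented format.

```swift
import Foundation

// Split a payload that exceeds one short APDU (255 data bytes) into a
// chain of APDUs. Per ISO 7816-4, setting bit 0x10 in CLA signals that
// more command blocks follow; the final block clears it.
func chainedAPDUs(cla: UInt8, ins: UInt8, p1: UInt8, p2: UInt8,
                  payload: [UInt8], blockSize: Int = 255) -> [[UInt8]] {
    var apdus: [[UInt8]] = []
    var offset = 0
    while offset < payload.count {
        let end = min(offset + blockSize, payload.count)
        let block = Array(payload[offset..<end])
        let moreToCome = end < payload.count
        var apdu: [UInt8] = [moreToCome ? cla | 0x10 : cla,
                             ins, p1, p2, UInt8(block.count)]
        apdu += block
        apdus.append(apdu)
        offset = end
    }
    return apdus
}

// Hypothetical usage: 0x00/0xD6 (UPDATE BINARY) stands in for whatever
// command the card actually expects for key import.
let keyBlob = [UInt8](repeating: 0xAB, count: 1200)   // stand-in key data
let chain = chainedAPDUs(cla: 0x00, ins: 0xD6, p1: 0x00, p2: 0x00, payload: keyBlob)
```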

Related

Password hashing using CryptoKit

I'm using CryptoKit with AES-GCM to encrypt some data and authenticate it as well.
However, I was wondering how I would generate an AES-GCM key from a plain-text password. Normally you would use a KDF for that, like PBKDF2.
In CryptoKit there is an HKDF class that does roughly what I want: https://developer.apple.com/documentation/cryptokit/hkdf
However, I am wondering what KDF algorithm the deriveKey function uses. Does it use PBKDF2? Does it use bcrypt? If so, how do I specify settings, or are the settings determined automatically?
HKDF is defined in RFC 5869. It is intended to generate keys from cryptographically strong input keying material (IKM). It is not intended for stretching a human-generated password. As discussed in Section 4, "Applications of HKDF":
On the other hand, it is anticipated that some applications will not be able to use HKDF "as-is" due to specific operational requirements, or will be able to use it but without the full benefits of the scheme. One significant example is the derivation of cryptographic keys from a source of low entropy, such as a user's password. The extract step in HKDF can concentrate existing entropy but cannot amplify entropy. In the case of password-based KDFs, a main goal is to slow down dictionary attacks using two ingredients: a salt value, and the intentional slowing of the key derivation computation. HKDF naturally accommodates the use of salt; however, a slowing down mechanism is not part of this specification. Applications interested in a password-based KDF should consider whether, for example, [PKCS5] meets their needs better than HKDF.
I don't believe that CryptoKit offers a password-based KDF of any kind (PBKDF2, scrypt, bcrypt, Argon2). It's a very limited framework (I have yet to find a situation where it was useful). You will likely need to continue using CommonCrypto for this, implement it yourself, or use something like CryptoSwift, which I believe implements several.
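As a sketch of the CommonCrypto route, here is a minimal Swift wrapper around CCKeyDerivationPBKDF that stretches a password into a 256-bit key; the iteration count is an illustrative placeholder, not a recommendation, and you would tune it for your target hardware.

```swift
import Foundation
import CommonCrypto

// PBKDF2-HMAC-SHA256 via CommonCrypto. The 310_000 rounds below are a
// placeholder; benchmark and pick the highest count your UX tolerates.
func pbkdf2Key(password: String, salt: Data,
               rounds: UInt32 = 310_000, keyByteCount: Int = 32) -> Data? {
    let passwordData = Data(password.utf8)
    var derived = Data(repeating: 0, count: keyByteCount)
    let status = derived.withUnsafeMutableBytes { derivedPtr in
        salt.withUnsafeBytes { saltPtr in
            passwordData.withUnsafeBytes { pwPtr in
                CCKeyDerivationPBKDF(
                    CCPBKDFAlgorithm(kCCPBKDF2),
                    pwPtr.baseAddress?.assumingMemoryBound(to: Int8.self),
                    passwordData.count,
                    saltPtr.baseAddress?.assumingMemoryBound(to: UInt8.self),
                    salt.count,
                    CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                    rounds,
                    derivedPtr.baseAddress?.assumingMemoryBound(to: UInt8.self),
                    keyByteCount)
            }
        }
    }
    return status == Int32(kCCSuccess) ? derived : nil
}

// The derived bytes can then be wrapped in CryptoKit's SymmetricKey
// and used with AES.GCM.seal as usual:
//   let key = SymmetricKey(data: pbkdf2Key(password: pw, salt: salt)!)
```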

Separate data encryption

I store some sensitive data. The data is divided into parts, and I want separate access to each part. Let's assume that I have 1000 files. I want to encrypt each file with the same symmetric encryption algorithm.
I guess that breaking a key is easier when the attacker has 1000 ciphertexts than when he has only one, so I think that I should use a separate key for each file.
My questions are the following:
Should I use a separate key for each file?
If I should, there is the problem of storing 1000 keys. So I want to have one secret key and use some algorithm of my own to derive a separate key for each file from the secret key. Is that a good idea?
If you consider a passive adversary and use a CPA-secure cipher (like AES in a suitable mode), it is sufficient to use only one key for all files. Supposing the adversary knows the cipher you use, and even knows the plaintexts, he cannot reconstruct the key with non-negligible probability. Here is a more detailed answer.
If you also consider an active adversary (one who can replace ciphertexts), you should use authenticated encryption. But as I understand it, this is not your case.
So I want to have one secret key and use some algorithm of my own to derive a separate key for each file from the secret key. Is that a good idea?
In general, developing your own algorithm or scheme is a bad idea. You can easily make an unnoticed mistake in the algorithm or the implementation, and your data will be vulnerable. It is better to use well-known algorithms and implementations that have been peer-reviewed by many people and shown to be secure.
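For what the asker describes, there is a standard, well-reviewed construction: derive each file's key from the master key with a KDF such as HKDF (RFC 5869). A minimal sketch using CryptoKit's HKDF (available on macOS 11 / iOS 14 and later; the file-naming convention is assumed for illustration):

```swift
import Foundation
import CryptoKit

// One 256-bit master key, stored once (e.g. in the keychain or an HSM).
let masterKey = SymmetricKey(size: .bits256)

// Derive an independent subkey per file by binding the file's name into
// HKDF's `info` parameter. The same master key and name always yield the
// same subkey, so nothing extra needs to be stored per file.
func fileKey(master: SymmetricKey, fileName: String) -> SymmetricKey {
    HKDF<SHA256>.deriveKey(
        inputKeyMaterial: master,
        salt: Data(),                    // a fixed application salt also works
        info: Data(fileName.utf8),
        outputByteCount: 32)
}

let key42 = fileKey(master: masterKey, fileName: "file-0042")
// key42 can now be used with AES.GCM.seal for that one file.
```

Compromise of one file's key then reveals nothing about the other files or about the master key, which is the property the asker wants from a per-file scheme.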

What difference does key length make when signing a file?

I've never taken any classes on encryption or security, and I'm trying to teach myself some basics, so forgive me if this is a silly question (don't worry, I'm not working on anything sensitive).
So, I'm playing around with Crypto++ so that I can make a signature of a file to see if the file has been edited by someone other than me. The test application that comes with the library looks like it has options (rs and rv) that do exactly what I want to do in my own program (verify the integrity of the signature of a file). Of course, before doing that I need to generate a public and private key. When doing so with the test application's g option it asks me to specify the key length in bits. What difference does the key length make?
The key length determines how hard it is for someone to break your cryptography. For digital signatures, that means how hard is it for someone to generate a fake signature.
For RSA, a key length of 1024 bits was long considered sufficient for non-sensitive information, but it is now regarded as too weak for new use; 2048 bits is the usual baseline today, and 4096 bits is stronger still.
For a naive brute-force attacker, adding a single bit to the key length doubles the amount of work needed to compromise your key. However, algorithms like RSA do not scale this way: a 2048-bit RSA key is not 2^1024 times as hard to break as a 1024-bit key, because a sensible attacker factors the modulus rather than enumerating keys.
Generally, public-key algorithms (e.g. RSA) need much larger keys than symmetric-key algorithms (e.g. AES) because they rely on different mathematical properties.
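To make the key-length knob concrete, here is a sketch in Swift using Apple's Security framework (not Crypto++; swapped in purely for illustration). The key size appears in exactly one place at generation time; signing and verification are unchanged whether you pick 2048 or 4096:

```swift
import Foundation
import Security

// Generate an RSA key pair; the key length is a single attribute.
let attributes: [String: Any] = [
    kSecAttrKeyType as String: kSecAttrKeyTypeRSA,
    kSecAttrKeySizeInBits as String: 2048,   // swap in 4096 the same way
]
var error: Unmanaged<CFError>?
guard let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error),
      let publicKey = SecKeyCopyPublicKey(privateKey) else {
    fatalError("key generation failed: \(String(describing: error))")
}

let message = Data("contents of the file to protect".utf8)

// Sign with the private key...
guard let signature = SecKeyCreateSignature(
    privateKey, .rsaSignatureMessagePKCS1v15SHA256,
    message as CFData, &error) as Data? else {
    fatalError("signing failed: \(String(describing: error))")
}

// ...and verify with the public key. A longer key raises the cost of
// forging this signature or recovering the private key.
let valid = SecKeyVerifySignature(
    publicKey, .rsaSignatureMessagePKCS1v15SHA256,
    message as CFData, signature as CFData, &error)
print("signature valid:", valid)
```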
For a good primer on cryptography you should check out Peter Gutmann's "Godzilla crypto tutorial". It's pretty readable and gives you a good overview of how crypto works in its various forms.

Need to apply for CCATS if using simple XOR cipher?

iTunes Connect states that developers need to get a CCATS classification if they use encryption in their app. My app uses a simple XOR obfuscation cipher when transferring data over HTTP. Does this still fall under that requirement? If not, what types of encryption need to be CCATS classified?
I am not qualified to offer legal advice.
You probably need to consult a lawyer to get a definitive answer, as this is a legal matter.
I read the regulations here. My opinion is that if it is a publicly available symmetric algorithm, which XOR is, then as long as the key length is 64 bits or less you don't need a license. So if your key length is 64 bits or less, you do not fall under that requirement.
Again, this is just my opinion, but from my reading of Title 15.B.VII.C.742.15, encryption that needs to be CCATS classified includes:
symmetric encryption that uses key lengths longer than 64 bits
proprietary (non-public) encryption with key lengths longer than 56 bits
asymmetric key-exchange algorithms with key lengths greater than 512 bits
elliptic-curve algorithms with key lengths greater than 112 bits
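For reference, the scheme the asker describes is tiny. A hypothetical sketch of a repeating-key XOR obfuscator with an 8-byte (64-bit) key, the threshold discussed above; worth noting that XOR like this is obfuscation, not real encryption:

```swift
import Foundation

// Repeating-key XOR with an 8-byte (64-bit) key. The key byte pattern
// repeats every 8 positions, so this resists only casual inspection,
// not cryptanalysis; it is obfuscation rather than encryption.
func xorObfuscate(_ data: Data, key: [UInt8]) -> Data {
    precondition(key.count == 8, "64-bit key, matching the threshold above")
    var out = Data(capacity: data.count)
    for (i, byte) in data.enumerated() {
        out.append(byte ^ key[i % key.count])
    }
    return out
}

let key: [UInt8] = [0x13, 0x37, 0xC0, 0xDE, 0xBA, 0x5E, 0xBA, 0x11]
let masked = xorObfuscate(Data("payload".utf8), key: key)
let unmasked = xorObfuscate(masked, key: key)   // XOR is its own inverse
```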

How practical would it be to repeatedly encrypt a given file?

I'm currently experimenting with both public-key and personal file encryption. The programs I use offer 2048-bit RSA and 256-bit AES encryption, respectively. As a newbie to this stuff (I've only been a cypherpunk for about a month now, and am a little new to information systems) I'm not familiar with RSA algorithms, but that's not relevant here.
I know that unless some secret lab or NSA program happens to have a quantum computer, it is currently impossible to brute-force the level of security these programs provide, but I was wondering how much more secure it would be to encrypt a file over and over again.
In a nutshell, what I would like to know is this:
When I encrypt a file using 256-bit AES, and then encrypt the already-encrypted file once more (using 256-bit AES again), do I now have the equivalent of 512-bit AES security? This is pretty much a question of whether the number of possible keys a brute-force attack would have to test is 2 × 2^256 or (2^256)^2 = 2^512. Being pessimistic, I think it is the former, but I was wondering whether 512-bit security really is achievable by simply encrypting with 256-bit AES twice.
Once a file is encrypted several times, so that you must keep using different keys or keep entering passwords at each level of encryption, would someone** even recognize when they had gotten through the first level of encryption? I was thinking that perhaps, if one were to encrypt a file several times with several different passwords, a cracker would have no way of knowing whether they had even broken through the first level, since all they would have would still be an encrypted file.
Here's an example:
Decrypted file
DKE$jptid UiWe
oxfialehv u%uk
Pretend for a moment that the last sequence is what a cracker has to work with: to brute-force their way back to the original file, the result they would have to reach (prior to cracking the next level of encryption) would still appear to be a totally useless file (the second line). Does this mean that anyone attempting brute force would have no way of getting back to the original file, since they would presumably still see nothing but encrypted files?
These are basically two questions that deal with the same thing: the effect of encrypting the same file over and over again. I have searched the web to find out what effect repeated encryption has on making a file secure, but aside from reading an anecdote somewhere that the answer to the first question is no, I have found nothing that pertains to the second spin on the same topic. I am especially curious about that last question.
**Assuming hypothetically that they somehow brute-forced their way through weak passwords, since this appears to be technologically possible with 256-bit AES right now unless you know how to choose secure ones...
In general, if you encrypt a file with k-bit AES and then again with k-bit AES under an independent key, you only get about (k+1) bits of security, rather than 2k bits, because of the meet-in-the-middle attack. The same holds for most types of encryption, like DES. (Note that Triple DES is not simply three independent encryptions for this reason.)
Further, encrypting a file with method A and then with method B need not even be as strong as encrypting with method B alone! (This would rarely be the case unless method A is seriously flawed, though.) In contrast, you are guaranteed that the cascade is at least as strong as method A alone, assuming independent keys. (Anyone who remembers the name of this theorem is encouraged to leave a comment; I've forgotten it.)
Usually you're much better off simply choosing a single method as strong as possible.
For your second question: yes, with most methods, an attacker would know when the first layer had been compromised, because real ciphertext formats (file headers, packet structure, padding) are usually recognizable.
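To see the meet-in-the-middle attack concretely, here is a toy Swift demo using a made-up 16-bit block cipher (not AES; purely illustrative). Recovering both independent 16-bit keys takes roughly 2 × 2^16 cipher operations plus a lookup table, not the 2^32 a naive search of the joint key space would need, which is why double encryption buys about one extra bit:

```swift
import Foundation

// Toy invertible 16-bit "cipher" (illustrative only, not a real cipher).
func rotl(_ x: UInt16, _ n: UInt16) -> UInt16 { (x << n) | (x >> (16 - n)) }
func rotr(_ x: UInt16, _ n: UInt16) -> UInt16 { (x >> n) | (x << (16 - n)) }
func enc(_ k: UInt16, _ p: UInt16) -> UInt16 { rotl(p ^ k, 3) &+ k }
func dec(_ k: UInt16, _ c: UInt16) -> UInt16 { rotr(c &- k, 3) ^ k }

// Double encryption under two secret, independent keys,
// with two known plaintext/ciphertext pairs.
let (k1, k2): (UInt16, UInt16) = (0xBEEF, 0x1337)
let (p1, p2): (UInt16, UInt16) = (0x1234, 0xABCD)
let c1 = enc(k2, enc(k1, p1))
let c2 = enc(k2, enc(k1, p2))

// Forward pass: tabulate the "midpoint" after the first encryption for
// every possible first key. About 2^16 work and 2^16 table entries.
var midpoints: [UInt16: [UInt16]] = [:]
for a in UInt16.min...UInt16.max {
    midpoints[enc(a, p1), default: []].append(a)
}

// Backward pass: peel one layer off c1 with every possible second key,
// look for a meeting point, and confirm candidates on the second pair.
var recovered: (UInt16, UInt16)?
search: for b in UInt16.min...UInt16.max {
    for a in midpoints[dec(b, c1)] ?? [] where enc(b, enc(a, p2)) == c2 {
        recovered = (a, b)
        break search
    }
}
if let (a, b) = recovered {
    print(String(format: "recovered k1=%04X k2=%04X", Int(a), Int(b)))
}
```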
More an opinion here...
First, when computers are strong enough to do a brute-force attack on, say, AES-256, they will also be strong enough for iterations of the same; doubling or tripling the time or effort is insignificant at that level.
Next, such considerations can be moot depending on the application you are trying to use this encryption in... The "secrets" you need to carry become bigger (the number of iterations and all the different keys, if in fact they are different), and the time to do the encryption and decryption increases as well.
My hunch is that iterating the encryption does not help much. Either the algorithm is strong enough to withstand a brute-force attack or it is not. The rest is all in the protection of the keys.
More practically: do you think your house is better protected if you have three identical or similar locks on your front door? (And that includes the number of keys you have to carry around; don't lose those keys, and make sure the windows and back door are secured too...)
Question 1:
The size of the joint key space is the same for two passes with 256-bit keys as for one 512-bit key, since 2^256 × 2^256 = 2^512. (Though, as noted above, a meet-in-the-middle attacker does not have to search that joint space.)
The actual running time of each decrypt() may grow non-linearly with key size (it depends on the algorithm); in this case I think brute-forcing the 256+256 construction would run faster than the 2^512 one, but both are infeasible.
Question 2:
There are probably ways to identify certain ciphertexts. I wouldn't be surprised if many algorithms, or the file formats wrapped around them, leave some signature or artifacts that could be used for identification.