Symmetric variant of the BFV scheme in SEAL

Is it possible to encrypt a plaintext using the secret key in SEAL?
Does the symmetric variant help in increasing the noise budget of the ciphertext, or improve the homomorphic evaluation in some other way?

No symmetric-key primitives are implemented in SEAL 3.2. There are some benefits to the symmetric variant:
- smaller initial noise;
- the possibility to replace half of a freshly encrypted ciphertext with a random seed, resulting in a ~50% reduction in message expansion (but only in fresh ciphertexts). This can be significant.
The only problem with symmetric-key schemes is that ciphertexts can't easily be re-randomized: without the public key there is no easy way to create fresh encryptions of zero. As a result, it might be hard or impossible to create provably secure protocols where the computation depends on private data coming from sources other than the secret key owner (through multiply_plain and add_plain).
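The seed trick can be illustrated outside of SEAL (which is a C++ library; this Python sketch only shows the idea, and the `expand` function and its parameters are made up for illustration): both parties expand the same short seed into the same pseudorandom half of the ciphertext, so only the seed needs to be transmitted. A real implementation would use a cryptographically secure PRNG.

```python
import random

def expand(seed, n):
    # Deterministically regenerate the "random half" from a short seed.
    # (Illustrative only: a real scheme would use a cryptographic PRNG.)
    rng = random.Random(seed)
    return [rng.randrange(2**16) for _ in range(n)]

# Sender: instead of shipping the full random polynomial "a" (n coefficients),
# ship only the seed it was sampled from.
seed = 42
a_sender = expand(seed, 1024)

# Receiver: regenerate the identical "a" from the seed.
a_receiver = expand(seed, 1024)
assert a_receiver == a_sender
```

Since the seed is a few bytes while the polynomial half is thousands of coefficients, transmitting the seed instead of the expanded half is where the ~50% saving comes from.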

Related

Use a 512-bit asymmetric encryption key to avoid "contains encryption" in the app

I have an app that contains a sqlite db in which some data is encrypted with a public/private key pair. I generated this pair from the distribution provisioning certificate in Keychain Access (right-click and save as .cer, and then again as .p12 with a password).
The app is ready to be submitted to Apple, and I found out that if any encryption is used, I'll have to submit documents for ERN authorization. Reading through the documentation, it mentions that if your asymmetric key does not exceed 512 bits, you will be exempt:
(iii) your app uses, accesses, implements or incorporates encryption with key lengths not exceeding 56 bits symmetric, 512 bits asymmetric and/or 112 bits elliptic curve
(iv) your app is a mass market product with key lengths not exceeding 64 bits symmetric, or if no symmetric algorithms, not exceeding 768 bits asymmetric and/or 128 bits elliptic curve.
Now my problem is that if I create a certificate signing request with a 512-bit key size, I cannot create a certificate from the developer portal with that request.
Is there a way around this, other than switching to a symmetric-key algorithm? I would like to avoid rewriting that portion. Basically, I would like to create a .cer/.p12 pair using 512-bit encryption instead of the standard 2048-bit. I need something that supports UTF-8; the one I can manually create on a Mac only supports ASCII.
If anyone is ever confused about this: I changed it to a symmetric key and Apple approved the app; I didn't have to submit any additional documents.

The maximum length of a message that can be hashed with WHIRLPOOL

I'm just wondering what the maximum length is. I read on Wikipedia that it takes a message of any length less than 2^256 bits. Does that mean 2 to the power of 256? Also, would it be more secure to hash a password multiple times? Example:
WHIRLPOOL(WHIRLPOOL(WHIRLPOOL(WHIRLPOOL("passw0rd"))))
Or does that increase the risk of collisions?
Yes, this does mean 2^256 bits. Of course, as there are 2^3 bits in a byte, that gives a maximum input size of 2^253 bytes. Nothing to worry about.
Yes, it is better to hash multiple times. No, you don't have to worry about "cycles" (much). Many pseudo-random number generators use hashes the same way. Hash algorithms should not lose too much information and should not have a short cycle time.
Password hashes should, however, be calculated using a password-based key derivation function; the resulting "key" is then stored. PBKDFs may use hashes (e.g. PBKDF2) or keyed block ciphers (bcrypt). Most KDFs use a message authentication code (HMAC) rather than the underlying hash algorithm or block cipher directly.
The input to a PBKDF includes a salt and an iteration count. The iteration count makes it harder for an attacker to brute-force the system by trying out all kinds of passwords. It's basically the same as what you did above with WHIRLPOOL, except that the iteration count is normally somewhere between one thousand and ten thousand, and more data is normally mixed in at each iteration.
More importantly, the (password-specific) salt ensures that duplicate passwords cannot be detected and defends against attacks using rainbow tables. Normally the salt is 64 to 128 bits. The salt and iteration count should be stored with the "hash".
Finally, it is probably better to use a NIST vetted hash algorithm such as SHA-512 instead of WHIRLPOOL.
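A minimal sketch of the salted, iterated approach described above, using Python's `hashlib.pbkdf2_hmac` (SHA-512 stands in for WHIRLPOOL, which `hashlib` does not ship by default; the iteration count is illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    salt = os.urandom(16)  # per-password random salt (128 bits)
    dk = hashlib.pbkdf2_hmac('sha512', password.encode(), salt, iterations)
    return salt, iterations, dk  # store all three alongside each other

def verify(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    dk = hashlib.pbkdf2_hmac('sha512', password.encode(), salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(dk, expected)

salt, iters, stored = hash_password("passw0rd")
assert verify("passw0rd", salt, iters, stored)
assert not verify("Passw0rd", salt, iters, stored)
```

Note how the salt makes two users with the same password produce different stored values, and the iteration count plays the role of the repeated WHIRLPOOL calls in the question, only with a much higher count.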

Generating an activation key from a large set of serial numbers and activation keys

I have a bunch of serial numbers and their corresponding activation keys for some old software. Since installing them originally, I have lost a number of the activation keys (but still have the serial numbers). I still have a data set of about 20 keys, and even eyeballing it I can tell there is a method to the madness in how the activation keys are determined. Given my large data set, is there a way I can back-solve to figure out the activation keys for the information I lost?
example of serial #: 14051 Activation Key: E9E9F-9993432-45543
What you're trying to do is come up with a function that maps serial numbers to activation keys. Without knowing more about the nature of the function, this could be anywhere from very easy (a polynomial with only a few terms) to very hard (a multi-tiered function involving lots of block XORs, substitution tables, complicated key schedules, ...).
If you have access to the key verifier routine (e.g. by disassembly - which is almost always against the EULAs of commercial software), then you have a routine that returns whether or not a given activation key is correct for a given serial number. If this was done by computing an activation key for a serial number, then you are practically done. If this was done by computing the inverse function on the key, then your task is a little harder: you need to invert that function to retrieve the key derivation algorithm, which may not be so easy. If you end up having to solve some hard mathematical problems (e.g. the discrete logarithm problem) because the scheme depends on public-key cryptography, then you're hoping that the values you're dealing with are small enough that you can brute-force or use a known algorithm (e.g. Pollard's rho algorithm) in computationally feasible time.
In any case, you'll need to get comfortable with disassembly and debugging, and hope that there are no anti-debugger measures in place.
Otherwise, the problem is much harder - you'd need to make some educated guesses and try them (e.g. by trying to do a polynomial fit), and hope for the best. Because of the very large variety of different possible functions that can fit any set of inputs and outputs (mathematically uncountable, though in practice limited by source code size), trying to do a known-plaintext attack on the algorithm itself is generally infeasible.
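As a toy illustration of the "educated guess" route, suppose the numeric part of the key looked linear in the serial number. The data below is entirely made up (it is not the asker's format); the point is only that two known pairs pin down a linear guess and the remaining pairs test it:

```python
# Hypothetical (serial, numeric-key) pairs, fabricated for illustration.
pairs = [(14051, 70261), (14052, 70266), (14060, 70306), (14100, 70506)]

# Guess: key = a*serial + b. Two pairs determine a and b.
(s0, k0), (s1, k1) = pairs[0], pairs[1]
a = (k1 - k0) / (s1 - s0)
b = k0 - a * s0

# The held-out pairs either confirm or refute the guess.
assert all(abs(a * s + b - k) < 1e-9 for s, k in pairs)
print(f"guess: key = {a:.0f}*serial + {b:.0f}")
```

Real schemes are rarely this simple; if a linear (or low-degree polynomial) fit fails, the search space of candidate functions grows very quickly, which is the point made above.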
It depends on how dumb the scheme was in the first place, but my guess would be that it's not likely. There's no fixed methodology, but the general domain is the same as codebreaking.

AES key length significance/implications

I am using the AES algorithm in my application to encrypt plain text. I am trying to use a key which is a six-digit number, but per the AES spec, the key must be at least sixteen bytes long. I am planning to prepend leading zeros to my six-digit number to make it 16 bytes, and then use this as the key.
Would this have any security implications? I mean, will it make my ciphertext more prone to attacks?
Please help.
You should use a key derivation function; in particular, PBKDF2 is the state of the art for obtaining an AES key from a password or PIN.
PBKDF2 makes a key search more difficult because it:
- randomizes the derived key with a salt, making precomputed password dictionaries useless;
- increases the computational cost of testing each candidate password, increasing the total time required to find the key.
As an additional remark, six digits correspond to roughly 20 bits of password entropy (log2 of 10^6), which is definitely too little. Increase your password length.
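A sketch of deriving a 128-bit AES key from a short PIN with PBKDF2, using Python's standard library (the PIN, salt size, and iteration count here are illustrative choices, not recommendations tuned for your application):

```python
import hashlib
import math
import os

pin = "482913"         # hypothetical 6-digit PIN
salt = os.urandom(16)  # random salt; store it alongside the ciphertext

# PBKDF2-HMAC-SHA256 with a high iteration count; dklen=16 yields a
# 128-bit key suitable for AES-128.
key = hashlib.pbkdf2_hmac('sha256', pin.encode(), salt, 200_000, dklen=16)
assert len(key) == 16

# The underlying weakness remains: a 6-digit PIN carries only
# log2(10^6) ≈ 19.9 bits of entropy, so the PIN itself must be longer.
print(math.log2(10**6))
```

Note that the salt and iteration count are not secret and must be stored so the same key can be re-derived for decryption; the KDF slows down brute force but cannot add entropy the PIN doesn't have.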

Explanation about hashing and its use for data compression

I am facing an application that uses hashing, but I still cannot figure out how it works. Here is my problem: hashing is used to generate some indexes, and with those indexes I access different tables; then I add up the values I get from every table using the indexes, and that gives my final value. This is done to reduce the memory requirements. The input to the hashing function is the XOR of a random constant and some parameters from the application.
Is this a typical hashing application? The thing I do not understand is how using hashing can reduce the memory requirements. Can anyone clarify this?
Thank you
Hashing alone doesn't have anything to do with memory.
What it is often used for is a hash table. Hash tables work by computing the hash of whatever you are keying off of, which is then used as an index into a data structure.
Hashing allows you to reduce the key (a string, etc.) to a more compact value such as an integer or a set of bits.
That might be the memory savings you're referring to: reducing a large key to a simple integer.
Note, though, that hashes are not unique! A good hashing algorithm minimizes collisions, but hashes are not intended to reduce to a unique value; that isn't possible in general (e.g., if your hash outputs a 32-bit integer, it has only 2^32 possible values).
Is it a bloom filter you are talking about? This uses hash functions to get a space efficient way to test membership of a set. If so then see the link for an explanation.
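If it is a Bloom filter, a minimal pure-Python sketch shows the space/accuracy trade-off (the bit-array size and hash count below are arbitrary illustrative parameters):

```python
import hashlib

class BloomFilter:
    """Space-efficient probabilistic set membership: may report false
    positives, but never false negatives."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, item: str):
        # Derive k bit positions from k salted hashes of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], 'big') % self.m

    def add(self, item: str):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: str):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))

bf = BloomFilter()
bf.add("alice")
assert "alice" in bf  # an added item is always reported present
```

The memory saving is that the filter stores only a fixed-size bit array (128 bytes here) regardless of how large the items themselves are; the cost is a small false-positive probability.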
Most good hash implementations are not memory efficient; if they were, more computation would be involved, and that would defeat the point of hashing.
Hash implementations are used for processing efficiency: they provide constant expected running time for operations like insertion, removal, and retrieval.
You can think of the quality of hashing as the property that all your data, no matter its type or size, is always represented in a single fixed-length form.
This could be explained if the hashing being done isn't to build a true hash table, but is to just create an index in a string/memory block table. If you had the same string (or memory sequence) 20 times in your data, and you then replaced all 20 instances of that string with just its hash/table index, you could achieve data compression in that way. If there's an actual collision chain contained in that table for each hash value, however, then what I just described is not what's going on; in that case, the reason for the hashing would most likely be to speed up execution (by providing quick access to stored values), rather than compression.
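The string-table idea in the last paragraph can be sketched as follows, using a plain Python dict as the hash table (the data is invented; a real implementation would also serialize the table and indexes compactly):

```python
def compress(items):
    """Replace each string with a small index into a deduplicated table."""
    table, out = [], []
    index = {}  # hash table: string -> position in table
    for s in items:
        if s not in index:
            index[s] = len(table)
            table.append(s)
        out.append(index[s])  # store a small int instead of the string
    return table, out

def decompress(table, out):
    return [table[i] for i in out]

data = ["GET /home"] * 20 + ["GET /about"]
table, out = compress(data)
assert decompress(table, out) == data
assert len(table) == 2  # 21 strings, but each distinct string stored once
```

Here the saving comes from storing each distinct string once and referencing it by index, which is the compression scenario; if instead the table kept full collision chains for fast lookup, the goal would be speed rather than memory, as noted above.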