How was the initial hash value (H(0)) of SHA-224 obtained?

RFC 6234: US Secure Hash Algorithms (SHA and SHA-based HMAC and HKDF) explains only how the initial hash values (H(0)) for SHA-256, SHA-384, and SHA-512 were obtained. How was the H(0) for SHA-224 obtained?
§6.1. SHA-224 and SHA-256 Initialization
For SHA-224, the initial hash value, H(0), consists of the following
32-bit words in hex:
H(0)0 = c1059ed8
H(0)1 = 367cd507
H(0)2 = 3070dd17
H(0)3 = f70e5939
H(0)4 = ffc00b31
H(0)5 = 68581511
H(0)6 = 64f98fa7
H(0)7 = befa4fa4
For SHA-256, the initial hash value, H(0), consists of the following
eight 32-bit words, in hex. These words were obtained by taking the
first 32 bits of the fractional parts of the square roots of the
first eight prime numbers.
H(0)0 = 6a09e667
H(0)1 = bb67ae85
H(0)2 = 3c6ef372
H(0)3 = a54ff53a
H(0)4 = 510e527f
H(0)5 = 9b05688c
H(0)6 = 1f83d9ab
H(0)7 = 5be0cd19
§6.3. SHA-384 and SHA-512 Initialization
For SHA-384, the initial hash value, H(0), consists of the
following eight 64-bit words, in hex. These words were obtained by
taking the first 64 bits of the fractional parts of the square
roots of the ninth through sixteenth prime numbers.
H(0)0 = cbbb9d5dc1059ed8
H(0)1 = 629a292a367cd507
H(0)2 = 9159015a3070dd17
H(0)3 = 152fecd8f70e5939
H(0)4 = 67332667ffc00b31
H(0)5 = 8eb44a8768581511
H(0)6 = db0c2e0d64f98fa7
H(0)7 = 47b5481dbefa4fa4
For SHA-512, the initial hash value, H(0), consists of the
following eight 64-bit words, in hex. These words were obtained by
taking the first 64 bits of the fractional parts of the square
roots of the first eight prime numbers.
H(0)0 = 6a09e667f3bcc908
H(0)1 = bb67ae8584caa73b
H(0)2 = 3c6ef372fe94f82b
H(0)3 = a54ff53a5f1d36f1
H(0)4 = 510e527fade682d1
H(0)5 = 9b05688c2b3e6c1f
H(0)6 = 1f83d9abfb41bd6b
H(0)7 = 5be0cd19137e2179

This is the first word of the initial hash value of SHA-224:
H(0)0 = c1059ed8
This is the first word of the initial hash value of SHA-384:
H(0)0 = cbbb9d5dc1059ed8
Notice that the low 32 bits of the SHA-384 word are exactly the SHA-224 word, and the same holds for all eight words. In other words, the SHA-224 initial hash value consists of the second 32 bits of the fractional parts of the square roots of the ninth through sixteenth prime numbers (23 through 53), which is exactly how FIPS 180-4 describes it.
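As a quick check, here is a sketch in JavaScript (the isqrt helper is hand-rolled, not a built-in) that recomputes the 64-bit fractional parts for the 9th through 16th primes with BigInt arithmetic and takes their low halves:

// Newton's method for the integer square root of a BigInt.
function isqrt(n) {
  if (n < 2n) return n;
  let x = n, y = (x + 1n) / 2n;
  while (y < x) { x = y; y = (x + n / x) / 2n; }
  return x;
}

// 9th through 16th primes, as used for SHA-384.
const primes = [23n, 29n, 31n, 37n, 41n, 43n, 47n, 53n];
for (const p of primes) {
  const scaled = isqrt(p << 128n);             // floor(sqrt(p) * 2^64)
  const frac64 = scaled & 0xffffffffffffffffn; // 64-bit fractional part: SHA-384 word
  const low32 = frac64 & 0xffffffffn;          // its second 32 bits: SHA-224 word
  console.log(frac64.toString(16), low32.toString(16));
}
// First line printed: cbbb9d5dc1059ed8 c1059ed8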

Related

Hash an 8-digit number that contains non-repeating digits from 1 to 8 only

Given that a number is of length 8 and can contain only the digits 1 to 8 (with no repetition), how can we hash such numbers without using a hashSet?
We can't just use the value of the number directly as the hash value, as the stack size of the program is limited. (By this, I mean that we can't directly make the number an index into an array.)
Therefore, this 8-digit number needs to be mapped to, at maximum, a 5-digit number.
I saw this answer. The hash function there returns an 8-digit number for an input that is an 8-digit number.
So, what can I do here?
There are a few things you can do. You could subtract 1 from each digit and parse the result as an octal number, which maps every number from your domain one-to-one into the range [0, 8^8) = [0, 16777216) with no gaps. The resulting number can be used as an index into a very large array. An example of this could work as below:
function hash(num) {
  return parseInt(num
    .toString()
    .split('')
    .map(x => x - 1)
    .join(''), 8); // join before parsing, otherwise parseInt sees "0,1,2,..."
}

const set = new Array(8 ** 8);
set[hash(12345678)] = true;
// 12345678 is in the set
Or, if you want to conserve some space and grow the data structure as you add elements, you can use a tree structure with 8 branches at every node and a maximum depth of 8, as sketched below. I'll leave it up to you to figure out whether it's worth the trouble.
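As a rough sketch of that tree idea (the helper names here are made up for illustration): each node has up to 8 children, one per digit, so insertion and lookup cost at most 8 steps.

function makeNode() { return { children: new Array(8), end: false }; }

function insert(root, num) {
  let node = root;
  for (const d of num.toString()) {
    const i = d - 1; // digits 1-8 become child slots 0-7
    node = node.children[i] ?? (node.children[i] = makeNode());
  }
  node.end = true;
}

function contains(root, num) {
  let node = root;
  for (const d of num.toString()) {
    node = node.children[d - 1];
    if (!node) return false;
  }
  return node.end;
}

const root = makeNode();
insert(root, 12345678);
console.log(contains(root, 12345678)); // true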
Edit:
After seeing the updated question, I began thinking about how you could map the number to its position in a lexicographically sorted list of the permutations of the digits 1-8. That would be optimal because it gives you the theoretical 5-digit hash you want (under 8! = 40320). I had some trouble formulating the algorithm on my own, so I did some digging. I found this example implementation that does just what you're looking for, and I've taken inspiration from it to implement the algorithm in JavaScript for you.
function hash(num) {
  const digits = num
    .toString()
    .split('')
    .map(x => x - 1);
  const len = digits.length;
  const seen = new Array(len);
  let rank = 0;
  for (let i = 0; i < len; i++) {
    seen[digits[i]] = true;
    // each smaller, still-unseen digit accounts for (len - i - 1)! permutations
    rank += numsBelowUnseen(digits[i], seen) * fact(len - i - 1);
  }
  return rank;
}

// count unseen digits less than n
function numsBelowUnseen(n, seen) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    if (!seen[i]) count++;
  }
  return count;
}

// factorial function
function fact(x) {
  return x <= 0 ? 1 : x * fact(x - 1);
}
kamoroso94 gave me the idea of representing the number in octal. The number remains unique even if we remove its first digit, so we can make an array of length 8^7 = 2097152 and use the 7-digit octal version as the index.
If that array is still too big, we can use only 6 digits of the input, converted to their octal values: 8^6 = 262144, which is pretty small. We can then make a 2D array of length 8^6, so the total space used is on the order of 2*(8^6). The first index of the second dimension means the number starts with the smaller of the two remaining digits, and the second index means it starts with the bigger one.
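A minimal sketch of that first variant (hash7 is a made-up name): since all eight digits are distinct, the first digit is implied by the other seven, so parsing the last seven digits as octal yields an index in [0, 8^7).

function hash7(num) {
  const digits = num.toString().split('').map(x => x - 1);
  return parseInt(digits.slice(1).join(''), 8); // drop the first digit
}

const seen = new Array(8 ** 7); // 2097152 slots
seen[hash7(12345678)] = true;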

Offset in cache (MIPS)

I know that the offset is: block size = 2^n (offset = n). I have seen that when block size = 8 bytes we do 8 = 2^n, so offset = n = 3 bits, which is correct. But when block size = 1 word, I have seen 1 = 2^n (offset = n = 0). Don't we need to convert the word to bytes, given that the cache uses 32-bit memory addresses? (So we have 1 word = 32 bits = 4 bytes, 4 = 2^n, and the offset is 2 bits in that case.)
You did it right. A word is 4 bytes on a 32-bit processor and 8 bytes on a 64-bit one, so the block size should be converted to bytes first.
The byte offset can also be found this way: assume the address size is 32 bits, then
byte_offset = 32 - tag_bits - set_bits
To solve problems of this kind, it's good to know some useful parameters and equations.
Parameters to know:
C = cache capacity
b = block size
B = number of blocks
N = degree of associativity
S = number of sets
tag_bits
set_bits (also called the index)
byte_offset
v = valid bits
Equations to know:
B = C/b
S = B/N
b = 2^(byte_offset)
S = 2^(set_bits)
Memory address layout:
|___tag________|____set___|___byte offset_|
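Putting those equations together, here is a small JavaScript sketch (the cacheFields helper is just for illustration) that derives the three field widths, assuming 32-bit byte addresses:

function cacheFields(capacityBytes, blockBytes, associativity, addressBits = 32) {
  const blocks = capacityBytes / blockBytes;  // B = C/b
  const sets = blocks / associativity;        // S = B/N
  const byteOffset = Math.log2(blockBytes);   // b = 2^(byte_offset)
  const setBits = Math.log2(sets);            // S = 2^(set_bits)
  const tagBits = addressBits - setBits - byteOffset;
  return { tagBits, setBits, byteOffset };
}

// 32 KiB two-way set-associative cache with 8-byte blocks:
console.log(cacheFields(32 * 1024, 8, 2));
// { tagBits: 18, setBits: 11, byteOffset: 3 }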

Transforming ciphertext from digital format to alphabetic format

Consider a message "STOP" which we are to encrypt using the RSA algorithm. The values given are p = 43, q = 59, n = pq, e = 13. First I transformed "STOP" into blocks of four digits, namely 1819 (S = 18 and T = 19) and 1415 (O = 14, P = 15), with the letters numbered from 00 to 25.
After the calculation I got 20812182 as the encrypted message (after combining 2081 and 2182). Is there any way to transform this digital ciphertext into alphabetic form?
If we start by considering two digits at a time, then 20 = U, 81 = ?, 21 = V, 82 = ?; what would the letters for 81 and 82 be? In other words, what is the ciphertext for the plaintext "STOP" in the above case?
RSA works with numbers, not binary data or letters. You can of course convert one to another; e.g., that is what you did when you wrote 20812182. The number with that value has an endless number of other representations.
Now, creating an alphabetic representation with a minimum size is pretty tricky: basically you would repeatedly divide by powers of 26, which is not easy to implement. Instead you can take a subset of your alphabet and use that to represent your number.
To do this, take your original decimal representation and replace 0 with A, 1 with B, ..., and 9 with J. This would result in CAIBCBIC for your ciphertext.
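Here is a small JavaScript sketch tying the two steps together (modPow is a hand-rolled square-and-multiply helper, not a built-in):

// modular exponentiation: base^exp mod m, over BigInt
function modPow(base, exp, m) {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}

// replace digit d with the (d+1)-th letter: 0 -> A, ..., 9 -> J
function digitsToLetters(digits) {
  return digits
    .toString()
    .split('')
    .map(d => String.fromCharCode(65 + Number(d))) // 65 is 'A'
    .join('');
}

const n = 43n * 59n, e = 13n;
const blocks = [1819n, 1415n].map(msg => modPow(msg, e, n));
console.log(blocks.join(''));                  // "20812182"
console.log(digitsToLetters(blocks.join(''))); // "CAIBCBIC"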
Note that plaintext and ciphertext are used as names for the input and output of cryptographic ciphers. Both names seem to indicate some kind of human readable text - and maybe they once did - but in cryptography they can be thought of as any kind of data.

Generalised Birthday Calculation Given Hash Length

Let us assume that we are given the following:
The length of the hash
The chance of obtaining a collision
Now, knowing the above, how can we obtain the number of "samples" needed to obtain the given chance percentage?
When we take the simplified formula for the birthday paradox we get:
probability = k^2 / (2N)
So:
k = sqrt(probability * 2 * N)
where N = 2^hashLength.
A small test:
Hash = 16 bits: N = 65536
probability = 50% = 0.5
sqrt(0.5 * 2 * 65536) = 256 samples
This is not 100% correct, as we started off with the simplified formula, but for big hashes and larger sample sets it gets very close.
For a link on the formula you can look here.
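In code, the same estimate looks like this (the samplesNeeded name is made up for illustration):

// number of samples k for a given collision probability with an L-bit hash,
// using the simplified approximation p ~ k^2 / (2N)
function samplesNeeded(hashBits, probability) {
  const N = 2 ** hashBits; // number of possible hash values
  return Math.sqrt(2 * N * probability);
}

console.log(samplesNeeded(16, 0.5)); // 256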
Here is a little JavaScript function to calculate the chance of a collision, based on the "Simplified Approximations" algorithm from https://preshing.com/20110504/hash-collision-probabilities/ (thanks for the link, Frank), using the Decimal.js bignum library to manage bigger numbers than JavaScript's Number can handle. Example:
samples = 2 ** 64;
hash_size_bytes = 20; // 160-bit hash
number_of_possible_hashes = Decimal("2").pow(8 * hash_size_bytes);
console.log(collision_chance(samples, number_of_possible_hashes));
// ~ 0.00000001 % chance of a collision with 2**64 samples and 20-byte-long hashes

samples = 77163;
hash_size_bytes = 4; // 32-bit hash
number_of_possible_hashes = Decimal("2").pow(8 * hash_size_bytes);
console.log(collision_chance(samples, number_of_possible_hashes));
// ~ 49.999% chance of a collision for a 4-byte hash with 77163 samples

The function:
// with https://github.com/MikeMcl/decimal.js/blob/master/decimal.min.js
function collision_chance(samples, number_of_possible_hashes) {
  var Decimal100 = Decimal.clone({ precision: 100, rounding: 8 });
  var k = Decimal100(samples);
  var N = Decimal100(number_of_possible_hashes);
  var minusK = k.neg();
  // P(collision) = 1 - e^(-k(k-1) / 2N), returned as a percentage
  var ret = minusK.mul(k.sub(1)).div(N.mul(2)).exp();
  ret = ret.mul(100);
  ret = Decimal100(100).sub(ret);
  return ret.toFixed(100);
}

How do I get Scala BigDecimal to display a large number of digits?

Doing the following
val num = BigDecimal(1.0)
val den = BigDecimal(3.0)
println((num/den)(MathContext.DECIMAL128))
I only get
0.3333333333333333333333333333333333
which is fewer digits than the 128 I want.
The default context is MathContext.DECIMAL128, which is used in all computations, so in your example the result of num/den is already rounded to DECIMAL128's 34 significant digits. You need to set your context on all values first and then do your computations.
val mc = new MathContext(512)
val num = BigDecimal(1.0,mc)
val den = BigDecimal(3.0,mc)
println(num/den)
Don't try to use MathContext.UNLIMITED unless you know your arithmetic does not produce an unbounded decimal representation: 1/3 has a non-terminating expansion, so it will blow up (with an ArithmeticException) even before you try to print.
MathContext.DECIMAL128 is the IEEE 754R Decimal128 format: 34 significant digits. So the output is correct (the 128 refers to the 128 bits of the storage format, not to 128 decimal digits).
I guess you can make your own MathContext with about four times the precision:
MathContext moreContext = new MathContext(512); // 512 significant digits
This works:
val mc = new java.math.MathContext(128)
val one_third = (BigDecimal(1, mc) / BigDecimal(3, mc)).toString
// 0. and a bunch of 3
one_third.filter(_ == '3').size // returns 128
If you use 512 you'll get 512 '3' digits.