I'd like to store an scrypt-hashed password in a database. What is the maximum length I can expect?
According to https://github.com/wg/scrypt the output format is $s0$params$salt$key where:
s0 denotes version 0 of the format, with 128-bit salt and 256-bit derived key.
params is a 32-bit hex integer containing log2(N) (16 bits), r (8 bits), and p (8 bits).
salt is the base64-encoded salt.
key is the base64-encoded derived key.
According to https://stackoverflow.com/a/13378842/14731 the length of a base64-encoded string is 4 * ceil(n / 3), where n denotes the number of bytes being encoded.
Let's break this down:
The dollar signs make up 4 characters.
The version identifier "s0" makes up 2 characters.
Each hex character represents 4 bits (log2(16) = 4), so the 32-bit params field makes up 32 / 4 = 8 characters.
The 128-bit salt is equivalent to 16 bytes. The base64-encoded format makes up (4 * ceil(16 / 3)) = 24 characters.
The 256-bit derived key is equivalent to 32 bytes. The base64-encoded format makes up (4 * ceil(32 / 3)) = 44 characters.
Putting that all together, we get: 4 + 2 + 8 + 24 + 44 = 82 characters.
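As a quick sanity check on that arithmetic, here is a small illustrative sketch in Go (not part of any scrypt library; it just reproduces the 4 * ceil(n / 3) calculation with the standard library's padded base64 encoder):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Standard padded base64 length is 4 * ceil(n / 3).
	saltChars := base64.StdEncoding.EncodedLen(16) // 128-bit salt -> 24 characters
	keyChars := base64.StdEncoding.EncodedLen(32)  // 256-bit key  -> 44 characters

	// 4 '$' separators + 2 characters for "s0" + 8 hex characters for params.
	total := 4 + 2 + 8 + saltChars + keyChars
	fmt.Println(saltChars, keyChars, total) // 24 44 82
}

This assumes the padded base64 alphabet; an unpadded encoding would shave a few characters off the salt and key fields.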
In Colin Percival's own implementation, the tarsnap scrypt header is 96 bytes. This comprises:
6 bytes 'scrypt'
10 bytes N, r, p parameters
32 bytes salt
16 bytes SHA256 checksum of bytes 0-47
32 bytes HMAC hash of bytes 0-63 (using scrypt hash as key)
This is also the format used by node-scrypt. There is an explanation of the rationale behind the checksum and the HMAC hash on stackexchange.
As a base64-encoded string, this makes 128 characters.
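The same kind of check for the 96-byte header (again just an illustrative Go snippet, not taken from scrypt or tarsnap):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// 4 * ceil(96 / 3) = 128 characters for the 96-byte header.
	fmt.Println(base64.StdEncoding.EncodedLen(96)) // 128
}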
Using some of the code from Geth's source (https://github.com/ethereum/go-ethereum), I'm trying to find a way to generate valid Ethereum wallets. For starters, I'm using the hex version of the secp256k1 prime MINUS 1, which is "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140". Plugging that into MetaMask I get an address of "0x80C0dbf239224071c59dD8970ab9d542E3414aB2". I would like to use Geth's functions to get the same address.
So far I've put in the hex form of the private key "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140" and I get a 20-byte array [164 121 62 11 173 85 89 159 11 68 30 45 4 221 247 191 191 44 73 181].
To be specific, I defined the hex value of the secp256k1 prime - 1 and used the newKey() function from Geth's crypto code:
var r io.Reader
r = strings.NewReader("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140")
// newKey is copied from Geth; it takes an io.Reader and returns a key with an Address field.
tmp, err := newKey(r)
if err != nil {
    fmt.Println(err)
}
fmt.Println(tmp.Address)
I have two questions. First, did I input the private key correctly as hex? Second, how do I convert this 20-byte array to a "normal" hex address?
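A hedged sketch of one way to do this with go-ethereum's exported helpers, assuming the goal is to treat the hex string as the private key itself (newKey appears to use its io.Reader argument as a randomness source rather than as the key value): crypto.HexToECDSA parses the hex key and crypto.PubkeyToAddress derives the 20-byte address from it. Whether the printed address matches the MetaMask one is not verified here.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Interpret the hex string as the 32-byte private key itself (no "0x" prefix).
	key, err := crypto.HexToECDSA("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364140")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Derive the 20-byte address from the public key and print it in 0x-prefixed hex.
	addr := crypto.PubkeyToAddress(key.PublicKey)
	fmt.Println(addr.Hex())
}

As for the second question: the 20-element byte array is a common.Address value, so tmp.Address.Hex() (or common.BytesToAddress(b).Hex() for a raw byte slice) should print the familiar 0x-prefixed form.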
When I apply the function dwt2() to an image, I get the four subband coefficients. By choosing any of the four subbands, I work with a 2D matrix of signed numbers.
In each value of this matrix I want to embed 3 bits of information, i.e., a number from 0 to 7 in decimal, in the 3 least significant bits. However, I don't know how to do that when I deal with negative numbers. How can I modify the coefficients?
First of all, you want to use an Integer Wavelet Transform, so you only have to deal with integers. This allows a lossless transformation between the two domains without having to round floating-point numbers.
Embedding bits in integers is a straightforward problem for binary operations. Generally, you want to use the pattern
(number AND mask) OR bits
The bitwise AND operation clears out the desired bits of number, which are specified by mask. For example, if number is an 8-bit number and we want to zero out the last 3 bits, we'll use the mask 11111000. After the desired bits of our number have been cleared, we can write the bits we want to embed into them using the bitwise OR operation.
Next, you need to know how signed numbers are represented in binary. Make sure you read the two's complement section. We can see that if we want to clear out the last 3 bits, we want to use the mask ...11111000, which is always -8. This is regardless of whether we're using 8, 16, 32 or 64 bits to represent our signed numbers. Generally, if you want to clear the last k bits of a signed number, your mask must be -2^k.
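To make those two's complement bit patterns concrete before the Matlab example, here is a small illustrative snippet (written in Go simply because it is easy to print the raw bits there; the coefficient and bit values are made up):

package main

import "fmt"

func main() {
	var number int16 = -67 // an example coefficient
	var mask int16 = -8    // ...11111000: clears the last 3 bits
	var bits int16 = 6     // the 3 bits to embed (110 in binary)

	cleared := number & mask   // -72
	embedded := cleared | bits // -66
	fmt.Printf("mask     = %016b\n", uint16(mask)) // 1111111111111000
	fmt.Printf("number   = %016b (%d)\n", uint16(number), number)
	fmt.Printf("embedded = %016b (%d)\n", uint16(embedded), embedded)
	fmt.Println(embedded & 7) // 6: the embedded bits come back out with the mask ...00000111
}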
Let's put everything together with a simple example. First, we generate some numbers for our coefficient subband and embedding bitstream. Since the coefficient values can take any value in [-510, 510], we'll use 'int16' for the operations. The bitstream is an array of numbers in the range [0, 7], since that's the range a 3-bit value ([000, 111] in binary) covers in decimal.
>> rng(4)
>> coeffs = randi(1021, [4 4]) - 511
coeffs =
477 202 -252 371
48 -290 -67 494
483 486 285 -343
219 -504 -309 99
>> bitstream = randi(8, [1 10]) - 1
bitstream =
0 3 0 7 3 7 6 6 1 0
We embed our bitstream by overwriting the necessary coefficients.
>> coeffs(1:numel(bitstream)) = bitor(bitand(coeffs(1:numel(bitstream)), -8, 'int16'), bitstream, 'int16')
coeffs =
472 203 -255 371
51 -289 -72 494
480 486 285 -343
223 -498 -309 99
We can then extract our bitstream by using the simple mask ...00000111 = 7.
>> bitand(coeffs(1:numel(bitstream)), 7, 'int16')
ans =
0 3 0 7 3 7 6 6 1 0
I'm trying to define the maximum slice size through parameters in the configuration file.
I set
SliceMode : 2
SliceArgument : 500
but the results I get don't correspond:
105080 bits
81616 bits
24256 bits
3752 bits
168 bits
128 bits
10488 bits
160 bits
216 bits
73792 bits
What am I doing wrong?
Thank you in advance!
I'm trying to make a linear algebra-based algorithm for the shift (Caesar) cipher. Suppose I have the string 'hello'. To convert it into an integer matrix I do this:
'hello' - 'a'
And the result is
ans =
7 4 11 11 14
This is the desired result. But if I subtract the character 'g', the result is
ans =
1 -2 5 5 8
I'd like to ask what happens in Matlab (or Octave) when I subtract a character and get the results above.
As Mohit Jain wrote, the results you get are based on a conversion to ASCII, which is the most widely accepted way to numerically encode textual information. ASCII is also included as a subset in the current Unicode standard, and on supporting platforms Matlab actually uses a 16-bit Unicode encoding. This enables it to represent not only the 95 printable ASCII characters used for English text, but also a large number of international scripts and special characters for mathematics, typography and many other fields. Explicit conversion between numeric and character data in Matlab is done through char and double:
>> double('aAΔ')
ans =
97 65 916
A small Latin letter 'a' has the ASCII code 97, a capital Latin letter 'A' the ASCII code 65, and a capital Greek letter Delta has the Unicode code point 916. Since the Latin letters are encoded in sequence, with codes 97 to 122 for small letters and 65 to 90 for capitals, you can generate the English alphabet e.g. like this:
>> char(65 : 90)
ans =
ABCDEFGHIJKLMNOPQRSTUVWXYZ
When you apply an arithmetic operator like - to character strings, the characters are implicitly converted to numbers as if you had used double
>> double('hello')
ans =
104 101 108 108 111
>> double('g')
ans =
103
and therefore 'hello' - 'g' is the same as
>> [104 101 108 108 111] - 103
ans =
1 -2 5 5 8
Matlab changes the characters of the string to their ASCII values and then subtracts each value:
'hello' - 'a' = 7 4 11 11 14, because h - a = 8 - 1 = 7
(these should be ASCII values, but I'm using alphabet positions for simplicity because it's all relative)
e - a = 5 - 1 = 4
l - a = 12 - 1 = 11, and so on.
'hello' - 'g':
h - g = 8 - 7 = 1
e - g = 5 - 7 = -2, and so on.
I need to understand the MD5 hash algorithm. I was reading a document and it states:
"The message is "padded" (extended) so that its length (in bits) is
congruent to 448, modulo 512. That is, the message is extended so
that it is just 64 bits shy of being a multiple of 512 bits long.
Padding is always performed, even if the length of the message is
already congruent to 448, modulo 512."
I need to understand what this means in simple terms, especially the "448, modulo 512" part. The word MODULO is the issue. I would appreciate simple examples of this. Funny enough, this is only the first step of the MD5 hash! :)
Thanks
Modulo, or mod, is an operation that tells you the remainder when one number is divided by another.
For example:
5 modulo 3:
5/3 = 1, with remainder 2. So 5 mod 3 is 2.
10 modulo 16 = 10, because 16 doesn't go into 10 even once, so all of 10 is left over.
15 modulo 5 = 0, because 5 goes into 15 exactly 3 times; 15 is a multiple of 5.
Back in school you would have learnt this as the "remainder" or "left over"; modulo is just a fancy way to say that.
What this is saying is that when you use MD5, one of the first things that happens is that your message gets padded so it has the right length. In MD5's case, the padded message must be n bits long, where n = (512*z) + 448 and z is any whole number.
As an example, if you had a file that was 1472 bits long, its length would already be congruent to 448 modulo 512, because 1472 modulo 512 = 448 (although, as the quoted text says, padding is still performed in that case). If the file was 1400 bits long, then you would need to pad in an extra 72 bits before you could run the rest of the MD5 algorithm.
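If it helps to see the rule as code, here is a small illustrative sketch (in Go; padBits is a made-up helper name, not part of any MD5 library) of the padding-length calculation described above, including the "padding is always performed" case:

package main

import "fmt"

// padBits returns how many padding bits MD5 adds so that the padded length is
// congruent to 448 modulo 512. Padding is always performed, so a message that
// is already at 448 (mod 512) gets a full 512 bits of padding.
func padBits(messageBits int) int {
	pad := (448 - messageBits%512 + 512) % 512
	if pad == 0 {
		pad = 512
	}
	return pad
}

func main() {
	fmt.Println(1472%512, padBits(1472)) // 448 512 (already congruent, but still padded)
	fmt.Println(1400%512, padBits(1400)) // 376 72
}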
Modulus is the remainder of a division. For example:
512 mod 448 = 64
448 mod 512 = 448
Another way to look at 512 mod 448 is to divide them: 512/448 = 1.142...
Then take the whole-number part before the dot (1), multiply it by 448, and subtract that from 512:
512 - 448*1 = 64. That's your modulus result.
What you need to know is that 448 is 64 bits short of a multiple of 512.
But what if the message length mod 512 falls between 448 and 512?
Normally we subtract the modulus result x from 448:
447 mod 512 = 447; 448 - 447 = 1 (all good, 1 bit to pad)
449 mod 512 = 449; 448 - 449 = -1 ???
The solution to this problem is to aim for the next multiple of 512 up, still 64 bits short of it:
512*2 - 64 = 960
449 mod 512 = 449; 960 - 449 = 511;
This is because afterwards we append the 64-bit length of the original message, and the full length has to be a multiple of 512.
960 - 449 = 511;
511 + 449 + 64 = 1024;
1024 is a multiple of 512.
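The same formula as in the sketch further above, checked against this answer's numbers (again purely illustrative Go):

package main

import "fmt"

func main() {
	// pad(n) = number of bits to add so the length becomes 448 (mod 512).
	pad := func(messageBits int) int { return (448 - messageBits%512 + 512) % 512 }
	fmt.Println(pad(447))            // 1
	fmt.Println(pad(449))            // 511
	fmt.Println(449 + pad(449) + 64) // 1024, a multiple of 512
}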