MD5 Hash and Base64 encoding

If I have a 32 character string (an MD5 hash) and I encode it using Base64, what's the maximum length of the encoded string?

An MD5 value is always 22 (useful) characters long in Base64 notation. Many Base64 algorithms will also append 2 characters of padding when encoding an MD5 hash, bringing the total to 24 characters. The padding adds no useful information and can be discarded. Only the first 22 characters matter.
Here's why:
An MD5 hash is a 128-bit value. Every character in a Base64 string carries 6 bits of information, because there are 64 possible values for each character and 2^6 = 64. At 6 bits per character, 21 characters hold 126 bits of information and 22 characters hold 132 bits. Since 128 bits cannot fit within 21 characters but do fit within 22 characters (with a little room to spare), a 128-bit value is always represented as 22 characters in Base64.
A note on the padding:
I mentioned above that many Base64 encoding algorithms add a couple of characters of padding when encoding an MD5 value. This is because Base64 represents 3 bytes of information as 4 characters. Since MD5 has 16 bytes of information, many Base64 encoding algorithms append "==" to designate that the input of 16 bytes was 2 bytes short of the next multiple of 3, which would have been 18 bytes. These 2 equal signs add no information whatsoever to the string, and can be discarded when storing.
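Here is a minimal sketch in Python (assuming nothing beyond the standard hashlib and base64 modules) that shows the 22 useful characters plus the 2 padding characters:

import base64
import hashlib

# a 16-byte (128-bit) MD5 digest of some arbitrary input
digest = hashlib.md5(b"hello").digest()
encoded = base64.b64encode(digest).decode("ascii")

print(len(digest))     # 16 bytes
print(len(encoded))    # 24 characters in total
print(encoded[-2:])    # '==', the padding
print(encoded[:22])    # the 22 characters that actually carry information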

As per http://en.wikipedia.org/wiki/Base64
"Note that given an input of n bytes, the output will be (n + 2 - ((n + 2) % 3)) / 3 * 4 bytes long, which converges to n * 4 / 3 or 1.33333n for large n."
So, with n = 32 it will be (32 + 2 - ((32 + 2) % 3)) / 3 * 4 = (34 - (34 % 3)) / 3 * 4 = (34 - 1) / 3 * 4 = 33 / 3 * 4 = 44 characters.
You could always extract the hash in raw binary form (128 bits) and encode that directly into Base64, which means converting 16 bytes instead of 32; those 16 bytes become 24 characters when Base64-encoded.
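To make the difference concrete, here is a small Python sketch (the input string "hello" is an arbitrary choice):

import base64
import hashlib

md5_hex = hashlib.md5(b"hello").hexdigest()               # 32-character hex string

# encoding the 32-character text form: 32 bytes in, 44 characters out
print(len(base64.b64encode(md5_hex.encode("ascii"))))     # 44

# encoding the raw 16-byte digest: 16 bytes in, 24 characters out
print(len(base64.b64encode(bytes.fromhex(md5_hex))))      # 24

# the Wikipedia formula above, for n = 32 input bytes
n = 32
print((n + 2 - (n + 2) % 3) // 3 * 4)                     # 44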

MD5's 128 bits are represented as 22 characters in Base64, plus 2 padding characters '=' in this case.
How?
$ md5sum ./README.md
c6b5f48774aa0a87a82a276ff86be507 ./README.md
$ md5sum ./README.md | base64
YzZiNWY0ODc3NGFhMGE4N2E4MmEyNzZmZjg2YmU1MDcgIC4vUkVBRE1FLm1kCg==
In this case the Base64-encoded string is not shorter than the MD5 hash.
That is because what was encoded is the textual storage form of the MD5 hash (the hex digits, plus the filename and a trailing newline from md5sum), not the MD5 hash value itself.
Note how many bits are used to store one digit of the hash: each hex digit occupies a full 8-bit ASCII character while carrying only 4 bits of information.
The right way is to convert the hash value itself:
1 convert the hexadecimal to binary
2 convert the binary to a Base64-coded string
$ cat ./README.md | openssl dgst -md5
c6b5f48774aa0a87a82a276ff86be507
$ cat ./README.md | openssl dgst -md5 -binary | openssl enc -base64
xrX0h3SqCoeoKidv+GvlBw==
or
$ md5sum ./LICENSE
e3fc50a88d0a364313df4b21ef20c29e ./LICENSE
$ cat ./LICENSE | openssl dgst -md5 -binary | openssl enc -base64
4/xQqI0KNkMT30sh7yDCng==
$ (echo 0:; echo e3fc50a88d0a364313df4b21ef20c29e) | xxd -rp -l 16|base64
4/xQqI0KNkMT30sh7yDCng==
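The same two steps in Python, as a minimal sketch (using the LICENSE hash from the example above):

import base64

hex_hash = "e3fc50a88d0a364313df4b21ef20c29e"

raw = bytes.fromhex(hex_hash)                   # step 1: hexadecimal to binary (16 bytes)
print(base64.b64encode(raw).decode("ascii"))    # step 2: binary to Base64: 4/xQqI0KNkMT30sh7yDCng==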

Related

sha256 hash length

I want to get the SHA-256 hash of a string. As SHA-256 has a 256-bit length and the size of each byte is 8 bits, I think the length of the hash (as a string) must be 32 characters (256/8 = 32), but the hash length is 64 characters. Can anyone help me?
import hashlib
print(hashlib.sha256('lsd'.encode()).hexdigest())
output:
3fd7dcbd130286b3799aa74e7fcb1d2ecc80d4c95a158d91dfa1d6a72557f769
hash length is 64, shouldn't it be 32?
The output characters are actually hexadecimal digits. There are 16 possible digits used to represent numbers (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F), and we need only 4 bits for each hexadecimal digit, because 2^4 = 16.
Now we have 256 bits and 4 bits per digit, so 256/4 = 64: we get 64 digits in length.
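You can see both lengths directly in Python:

import hashlib

h = hashlib.sha256('lsd'.encode())
print(len(h.digest()))      # 32 -- raw bytes, 256 bits / 8
print(len(h.hexdigest()))   # 64 -- hex digits, 256 bits / 4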

Is this a bug in the passlib base64 encoding?

I am trying to decode and re-encode a byte string using passlib's Base64 encoding:
from passlib.utils import binary
engine = binary.Base64Engine(binary.HASH64_CHARS)
s2 = engine.encode_bytes(engine.decode_bytes(b"1111111111111111111111w"))
print(s2)
This prints b'1111111111111111111111A' which is of course not what I expected. The last character is different.
Where is my mistake? Is this a bug?
No, it's not a bug.
In all variants of Base64, every encoded character represents just 6 bits and depending on the number of bytes encoded you can end up with 0, 2 or 4 insignificant bits on the end.
In this case the encoded string 1111111111111111111111w is 23 characters long; that means 23 * 6 = 138 bits, which decode to 17 bytes (136 bits) plus 2 insignificant bits.
The encoding you use here is not Base64 but Hash64
Base64 character map used by a number of hash formats; the ordering is wildly different from the standard base64 character map.
In the character map
HASH64_CHARS = u("./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz")
we find A on index 12 (001100) and w on index 60 (111100)
Now the 'trick' here is that binary.Base64Engine(binary.HASH64_CHARS) has a default parameter big=False, which means the encoding is done in little-endian bit order by default.
In your example that means w decodes to the bits 001111 and A to 001100. During decoding the last two bits of w are cut off, as they are not needed, as explained above. When you encode again, A is taken as the first character in the character map that can encode 0011 plus two insignificant bits.
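Here is a rough pure-Python sketch of that bit arithmetic (an illustration of the explanation above, not passlib's actual implementation; le_value and le_char are made-up helper names):

HASH64_CHARS = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def le_value(ch):
    # 6-bit index of ch, with the bit order reversed (little-endian)
    return int(format(HASH64_CHARS.index(ch), "06b")[::-1], 2)

def le_char(value):
    # back from a little-endian 6-bit value to a character
    return HASH64_CHARS[int(format(value, "06b")[::-1], 2)]

w_bits = le_value("w")       # index 60 = 111100, bit-reversed: 001111
kept = w_bits & 0b111100     # decoding drops the 2 insignificant trailing bits: 001100
print(le_char(kept))         # 'A' -- why re-encoding yields ...A instead of ...w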

How can U+203C be represented as (226, 128, 188) in the Swift chapter Strings and Characters?

When I read The Swift Programming Language chapter Strings and Characters, I didn't understand how U+203C (meaning ‼) can be represented by (226, 128, 188) in UTF-8. How does that happen?
How did it happen ?
I hope you already know how UTF-8 reserves certain bits to indicate that the Unicode character occupies several bytes. (This website can help).
First, write 0x203C in binary:
0x203C = 0010000000111100
So this character needs 14 significant bits, which is more than the 11 payload bits a 2-byte UTF-8 sequence can carry. Due to the "header bits" in the UTF-8 encoding scheme, it takes 3 bytes (with room for 16 payload bits) to encode it:
0x203C = 0010 000000 111100

              1st byte   2nd byte   3rd byte
              --------   --------   --------
header        1110       10         10
actual data   0010       000000     111100
--------------------------------------------
full byte     11100010   10000000   10111100
decimal       226        128        188
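You can confirm the three bytes in one line (Python here, but any language with a UTF-8 encoder gives the same result):

print(list('\u203c'.encode('utf-8')))   # [226, 128, 188]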

How to denote a 160-bit SHA-1 string as 20 characters in ANSI?

For the input "hello", SHA-1 returns "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d", which is 40 hex digits. I know 1 byte can be denoted as 1 character, so the 160-bit output should be convertible to 20 characters. But when I look up "aa" in an ASCII table, there is no such hex value, and I'm confused about that. How can I map a 160-bit SHA-1 string to 20 characters in ANSI?
ASCII only has 128 characters (7 bits), while ANSI has 256 (8 bits). As to the ANSI value of hex value AA (decimal 170), the corresponding ANSI character would be ª (see for example here).
Now, you have to keep in mind that a number of both ASCII and ANSI characters (0-31) are non-printable control characters (system bell, null character, etc.), so turning your hash into a readable 20-character string will not be possible in most cases. For instance, your example contains the hex value 0F, which would translate to a shift-in control character.
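To see the problem, decode the hex string into its 20 raw bytes; a short Python sketch:

digest = bytes.fromhex("aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d")

print(len(digest))                    # 20 bytes
print([b for b in digest if b < 32])  # [29, 15] -- control characters (0x1D, 0x0F) that cannot print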

What is the real purpose of Base64 encoding?

Why do we have Base64 encoding? I am a beginner and I really don't understand why you would obfuscate the bytes into something else (unless it is encryption). In one of the books I read that Base64 encoding is useful when binary transmission is not possible, e.g. when we post a form it is encoded. But why do we convert bytes into letters? Couldn't we just convert bytes into string format with a space in between, for example 00000001 00000004? Or simply 0000000100000004 without any space, because bytes always come in groups of 8 bits?
Base64 is a way to encode binary data into an ASCII character set known to pretty much every computer system, in order to transmit the data without loss or modification of the contents itself.
For example, mail systems cannot deal with binary data because they expect ASCII (textual) data. So if you want to transfer an image or another file, it will get corrupted because of the way it deals with the data.
Note: Base64 encoding is NOT a way of encrypting, nor a way of compacting data. In fact a Base64-encoded piece of data is 1.333… times bigger than the original piece of data. It is only a way to be sure that no data is lost or modified during the transfer.
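The 1.333… factor is easy to verify, as a small sketch:

import base64

data = bytes(300)                    # 300 arbitrary input bytes (all zero here)
print(len(base64.b64encode(data)))   # 400 -- exactly 4/3 of the input size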
Base64 is a mechanism for representing and transferring binary data over mediums that allow only printable characters. It is the most popular form of "base encoding"; the others known in use are Base16 and Base32.
The need for Base64 arose from the need to attach binary content to emails, like images, videos or arbitrary binary content. Since SMTP [RFC 5321] only allowed 7-bit US-ASCII characters within the messages, there was a need to represent these binary octet streams using the seven-bit ASCII characters.
Hope this answers the question.
Base64 is a more or less compact way of transmitting (encoding, in fact, but with the goal of transmitting) any kind of binary data.
See http://en.wikipedia.org/wiki/Base64
"The general rule is to choose a set of 64 characters that is both part of a subset common to most encodings, and also printable."
That's a very general purpose and the common need is not to waste more space than needed.
Historically, it's based on the fact that there is a common subset of (almost) all encodings used to store chars in bytes, and that a lot of the 2^8 possible bytes risk loss or transformation during simple data transfer (for example a copy-paste-emailsend-emailreceive-copy-paste sequence).
(Please redirect upvotes to Brian's comment; I just made it more complete and hopefully clearer.)
For data transmission, data can be textual or non-text (binary), like images, videos or files.
As we know, during transmission only a stream of textual/printable characters can reliably be sent or received, hence we need a way to encode non-text data like images, videos and files.
The binary representation of non-text data (image, video, file) is easily obtainable.
That binary representation is encoded in textual format such that each character takes one out of sixty-four (A-Z, a-z, 0-9, + and /) possible values.
Table 1: The Base 64 Alphabet
Value Encoding Value Encoding Value Encoding Value Encoding
0 A 17 R 34 i 51 z
1 B 18 S 35 j 52 0
2 C 19 T 36 k 53 1
3 D 20 U 37 l 54 2
4 E 21 V 38 m 55 3
5 F 22 W 39 n 56 4
6 G 23 X 40 o 57 5
7 H 24 Y 41 p 58 6
8 I 25 Z 42 q 59 7
9 J 26 a 43 r 60 8
10 K 27 b 44 s 61 9
11 L 28 c 45 t 62 +
12 M 29 d 46 u 63 /
13 N 30 e 47 v
14 O 31 f 48 w (pad) =
15 P 32 g 49 x
16 Q 33 h 50 y
This sixty-four-character set is called Base64, and encoding given data into this set of sixty-four allowed characters is called Base64 encoding.
Let us take examples of few ASCII characters when encoded to Base64.
1 ==> MQ==
12 ==> MTI=
123 ==> MTIz
1234 ==> MTIzNA==
12345 ==> MTIzNDU=
123456 ==> MTIzNDU2
Here a few points are to be noted:
Base64 encoding occurs in blocks of 4 characters, because every group of 3 input bytes (24 bits) is encoded as 4 Base64 characters of 6 bits each. If the given input ends with fewer than 3 bytes, the remaining output characters are padded with =.
= is not part of the Base64 character set; it is used just for padding.
Hence, one can see that Base64 encoding is not encryption but just a way to transform any given data into a stream of printable characters which can be transmitted over a network.
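The padding examples above can be reproduced directly; a minimal Python sketch:

import base64

for s in [b"1", b"12", b"123", b"1234", b"12345", b"123456"]:
    print(s.decode(), "==>", base64.b64encode(s).decode())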