Bits, bytes and numbers: shrinking the size of a byte / word

This may be a very basic low-level architecture question; I am trying to get my head around it. Please also correct me if my understanding is wrong.
Word = 64 bits, 32 bits, etc. This is the number of bits a computer can read at a time.
Questions:
1.) Would this mean we can send 4 numbers (of 8 bits / 1 byte each) in a 32-bit word? Or a combination of 8-bit (1-byte), 32-bit (4-byte), etc. numbers at one time?
2.) If we need to send only an 8-bit number, how does it form a word? Is only the first byte filled and the rest padded with 0s, or is the last byte filled while the rest are padded with 0s? I also saw somewhere that the first byte can carry information about how the rest of the bytes are filled. Does that apply here? For example, in UTF-8, ASCII is 1 byte and some other characters take up to 4 bytes. So when we send one character, do we send all 4 bytes together, filling the bytes the character needs and leaving the rest as 0s?
3.) Now, to represent an 8-digit number, we would need 27 bits (remember the famous question: sorting 1 million 8-digit numbers with just 1 MB of RAM). Can we use exactly 27 bits, which is 32 bits (4 bytes) minus 5 bits, and use those 5 bits for something else?
Appreciate your answers!

1- Yes, four 8-bit integers can fit in a 32-bit integer. This can be done using bitwise operations, for example (using C operators):
((a & 255) << 24) | ((b & 255) << 16) | ((c & 255) << 8) | (d & 255)
This example uses C operators, but they are also used for the same purpose in several other languages (see below for a complete, compilable version of this example in C). You may want to look up the bitwise operators AND (&), OR (|), and left shift (<<).
2- Unused bits are generally 0. The first byte is sometimes used to represent the type of encoding (look up "magic numbers"), but this is implementation-dependent. Sometimes it is a different number of bits.
3- Groups of 8-digit numbers can be compressed to use only 27 bits each. This is very similar to the example, except the number of bits and size of the data are different. To do this, you will need 864-bit groups, i.e. 27 32-bit integers to store 32 27-bit numbers. This would be more complex than the example, but it would use the same principles.
Complete, compilable example in C:
#include <stdio.h>
#include <stdint.h>

/* Compresses four integers, each carrying one byte of data in its least
 * significant byte, into a single 32-bit integer */
int32_t compress(int a, int b, int c, int d) {
    int32_t compressed = ((a & 255) << 24) | ((b & 255) << 16) |
                         ((c & 255) << 8) | (d & 255);
    return compressed;
}

/* Test the compress() function and print the results */
int main(void) {
    printf("%x\n", (unsigned)compress(255, 0, 255, 0));
    printf("%x\n", (unsigned)compress(192, 168, 0, 255));
    printf("%x\n", (unsigned)compress(84, 94, 255, 2));
    return 0;
}
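For point 3, here is a rough sketch of the same idea for 27-bit values (pack27() is just a hypothetical helper; a real solution would also need the matching unpack step):

#include <stdio.h>
#include <stdint.h>

/* Packs 'count' values of 27 bits each into an array of 32-bit words,
 * back to back; 32 such values occupy exactly 27 words (864 bits). */
void pack27(const uint32_t *values, int count, uint32_t *words) {
    for (int i = 0; i < count; i++) {
        uint64_t bitpos = (uint64_t)i * 27;     /* absolute bit offset of value i */
        int word  = (int)(bitpos / 32);
        int shift = (int)(bitpos % 32);
        uint64_t v = values[i] & 0x7FFFFFF;     /* keep only the low 27 bits */
        words[word] |= (uint32_t)(v << shift);
        if (shift > 5)                          /* value spills into the next word */
            words[word + 1] |= (uint32_t)(v >> (32 - shift));
    }
}

int main(void) {
    uint32_t values[32], words[27] = {0};
    for (int i = 0; i < 32; i++)
        values[i] = 99999999u - (uint32_t)i;    /* 8-digit numbers fit in 27 bits */
    pack27(values, 32, words);
    printf("packed 32 x 27-bit values into %zu bytes\n", sizeof words);
    return 0;
}

32 values × 27 bits = 864 bits = 27 × 32-bit words, which is where the numbers in point 3 come from.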

I think clarification is required on 2 points here:
1. Memory addressing.
2. Word
Memories are generally addressed in one of two ways: they are either byte addressable or word addressable.
Byte-addressable memory means that each byte is given a separate address:
a -> 0th byte
b -> 1st byte
Word-addressable memories are those in which each group of bytes as wide as the word gets an address. E.g., if the word length is 32 bits:
a -> 0th byte
b -> 4th byte
And so on.
Word
I would say that a word defines the maximum number of bits a processor can handle at a time. For the 8086, for example, it's 16.
It is usually the largest size on which arithmetic can be performed by the processor. Continuing the example, the 8086 can perform operations on 16-bit numbers at a time.
Now I'll try to answer the questions:
1.) Would this mean, we can send, 4 numbers (of a 8 bits/byte length each) for 32 bit? Or combination of 8 bit (byte), 32 bit (4 bytes),
etc numbers at one time?
You can always define your own interpretation for a bunch of bits.
For example, if memory is byte addressable, we can treat every byte individually, and thus we can write assembly-level code that treats each byte as a separate 8-bit number.
If it is not, you can use bit operations to extract the individual bytes.
The point is that you can represent four 8-bit numbers in 32 bits.
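As a sketch (using C, like the example in the first answer), the individual bytes can be pulled back out of a packed 32-bit value with shifts and masks:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t packed = 0xC0A800FFu;          /* e.g. 192, 168, 0, 255 packed together */

    /* Shift the wanted byte down to the bottom, then mask off everything else. */
    unsigned a = (packed >> 24) & 255;
    unsigned b = (packed >> 16) & 255;
    unsigned c = (packed >> 8)  & 255;
    unsigned d =  packed        & 255;

    printf("%u %u %u %u\n", a, b, c, d);    /* prints 192 168 0 255 */
    return 0;
}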
2) Mostly, the leftover (more significant) bits are stuffed with 0s (for unsigned numbers).
3.) Now to represent 8 digit number, we would need 27 bits (remember famous question, sorting 1 million 8 digit number with just 1 MB RAM).
Can we exactly use 27 bits, which is 32 bits (4 bytes) - 5 bits? and
use those 5 digits for something else?
Yes, you can do this too. But remember the great space-time tradeoff.
You do save 5 bits per number, but you'll need bit operations and all the really cool but hard-to-read stuff, increasing run time and making the code more complex.
But I don't think you'll ever come across a situation where you need that level of saving, unless you are coding for a very constrained system (embedded, etc.).

Related

SHA-256: is the bit append still required if the input is already N × 512 bits?

If my input is less than a multiple of 512 bits, I need to append padding bits and length bits to my input so that it becomes a multiple of 512 bits.
https://infosecwriteups.com/breaking-down-sha-256-algorithm-2ce61d86f7a3
But what if my input is already a multiple of 512 bits? Is the bit append still required? For example, if my message is already 512 bits long, do I need to pad it so that it becomes 1024 bits long?
And what if my input is less than a multiple of 512 bits, but too long to allow appending the length bits? For example, my input is 504 bits long.
It seems to explain it rather clearly:
The number of bits we add is calculated as such so that after addition of these bits the length of the message should be exactly 64 bits less than a multiple of 512.
If there are 512 bits, you have to pad to 960 bits, then add 64 bits for the length, for a total of 1024. The same applies to 504, since 504 > 512 - 64. Otherwise you'd have a situation where the last 64 bits are sometimes the length bits and sometimes not, which doesn't seem right.
The only case where you wouldn't add any padding is if the data is already 512*n-64, e.g. if it were 448 bits. Then you add no padding but still add the length bits, and end up with 512.
According to RFC 6234, you should always pad the data with at least one bit set to 1 and append the length (64 bits).
Even if the input is 448 bits long, you should pad it, resulting in two 512-bit chunks:
[448 bits of input]['1']['0' x 511][64 bits representing input length]
So this is wrong:
The only case where you wouldn't add any padding is if the data is already 512*n-64, e.g. if it were 448 bits. Then you add no padding but still add the length bits, and end up with 512.
Note: if you accept input only as a byte array, the minimum padding is 1 byte set to 10000000 (binary).
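To make the rule concrete, here is a small C sketch (sha256_padding_bits() is just a made-up helper name) that computes the padding for the message sizes discussed above: at least one '1' bit, then zeros, then the 64-bit length, so the total is a multiple of 512.

#include <stdio.h>
#include <stdint.h>

/* Padding bits (the mandatory '1' plus zeros) for a message of msg_bits bits,
 * chosen so that msg_bits + padding + 64 is a multiple of 512. */
uint64_t sha256_padding_bits(uint64_t msg_bits) {
    uint64_t used = (msg_bits + 1 + 64) % 512;   /* bits occupied in the last block */
    uint64_t zeros = (used == 0) ? 0 : 512 - used;
    return 1 + zeros;
}

int main(void) {
    uint64_t sizes[] = {448, 504, 512};
    for (int i = 0; i < 3; i++) {
        uint64_t pad = sha256_padding_bits(sizes[i]);
        printf("message %3llu bits -> %3llu padding bits -> total %llu bits\n",
               (unsigned long long)sizes[i],
               (unsigned long long)pad,
               (unsigned long long)(sizes[i] + pad + 64));
    }
    return 0;
}

With the always-pad rule, all three inputs (448, 504 and 512 bits) end up as 1024 bits, i.e. two 512-bit blocks, matching the [448 bits]['1']['0' x 511][64-bit length] layout shown above.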

How is a MongoDB ObjectID 12 bytes?

From what I know, Strings in MongoDB are stored in UTF-8, so each character is between 1 and 4 bytes.
MongoDB documentation says the following about ObjectID:
Returns a new ObjectId value. The 12-byte ObjectId value consists of:
a 4-byte value representing the seconds since the Unix epoch,
a 5-byte random value, and
a 3-byte counter, starting with a random value.
In an example, it shows ObjectId("507f1f77bcf86cd799439011"). This string is 24 bytes in UTF-8, though, so I don't understand where the 12 bytes come into play.
As per the ObjectId documentation, that string you see is a hex representation of the 12 bytes. It's not Unicode or even a string. It's actually a number.
A byte is 8 bits, meaning that it can have 2^8 == 256 possible values (see Byte).
How do you represent a number with 256 possible values succinctly? How about representing it as 16^2 instead? You can achieve this by using 2 hexadecimal values (base 16). The only thing you need to invent is a numbering system that goes to 16 instead of 10.
As a matter of fact, we use letters from a to f to represent values 10 to 15.
Thus, one byte can be represented in two hexadecimal numbers. It just happens to use a to f since we couldn't be bothered to invent special symbols for them. They are not letters. They are numbers.
So no, the string you see in ObjectId does not represent 24 bytes. Every 2 characters represent a byte instead. 24 hex numbers == 12 bytes.
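To illustrate the "2 hex characters per byte" point, here is a small C sketch (not from any MongoDB driver) that parses the 24-character hex string from the question back into its 12 raw bytes:

#include <stdio.h>

int main(void) {
    const char *hex = "507f1f77bcf86cd799439011";  /* the 24-character example */
    unsigned char bytes[12];

    /* Every 2 hex characters decode to one byte: 24 characters -> 12 bytes. */
    for (int i = 0; i < 12; i++) {
        unsigned int value;
        sscanf(hex + 2 * i, "%2x", &value);
        bytes[i] = (unsigned char)value;
    }

    printf("decoded %zu bytes:", sizeof bytes);
    for (int i = 0; i < 12; i++)
        printf(" %02x", bytes[i]);
    printf("\n");
    return 0;
}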
Supplementing @kevinadi's answer, let's examine the ArrayBuffer code below, in which each element of the chosen TypedArray has BYTES_PER_ELEMENT equal to 2. The 12 bytes of a MongoDB ObjectId plus 4 unused slots fill the fixed-size 16-byte allocation; uncompressed, that would correspond to the 24 hexadecimal characters plus 8 unused slots, 32 in total. Either way, 2 × 16 = 32, because the ObjectId is presented as a hexadecimal string with 2 characters per byte. Simple math, right? Now let's use an ArrayBuffer with a TypedArray to mock allocation versus compression through the .length and .byteLength properties respectively:
const init_buffer = new ArrayBuffer(16);          // initial 16-byte allocation
const uint8_view = new Uint8Array(init_buffer);   // 16 elements, 1 byte each
const uint16_view = new Uint16Array(uint8_view);  // 16 elements, 2 bytes each
console.log("2 BYTES_PER_ELEMENT, i.e. 2 times fewer than the total number of characters available [compressed approach]:", uint16_view.length);
console.log("Total number of characters available [uncompressed approach]:", uint16_view.byteLength);

3-bit encoding = Octal; 4-bit encoding = Hexadecimal; 5-bit encoding =?

Is there an encoding that uses 5 bits as one group to encode binary data?
A-Z provides 26 characters and 0-9 provides 10 more, for a total of 36 characters, which is sufficient for a 5-bit encoding (only 32 combinations are needed).
Why don't we use a 5-bit encoding instead of octal or hexadecimal?
As mentioned in a comment by @S.Lott, base64 (6-bit) is often used for encoding binary data as text when compactness is important.
For debugging purposes (e.g. hex dumps), we use hex because the size of a byte is evenly divisible by 4 bits, so each byte has one unique 2-digit hex representation no matter what other bytes are around it. That makes it easy to "see" the individual bytes when looking at a hex dump, and it's relatively easy to mentally convert between 8-bit binary and 2-digit hex as well. (In base64 there's no 1:1 correspondence between bytes and encoded characters; the same byte can produce different characters depending on its position and the values of other adjacent bytes.)
Yes. It's Base32. Maybe the name would be "triacontakaidecimal", but that's too long and hard to remember, so people simply call it base 32. Similarly, there's also Base64 for groups of 6 bits.
Base32 is much less commonly used than hexadecimal and Base64 because it wastes too much space compared to Base64, and because a 5-bit group does not divide evenly into 8-, 16-, 32- or 64-bit words the way hexadecimal's 4-bit group does. 6 is also an even number, so it works better than 5 on a binary computer.
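As a rough illustration of the 5-bit grouping, here is a C sketch of an RFC 4648-style Base32 encoder, restricted to inputs that are a multiple of 5 bytes so the padding rules can be ignored (the function name is made up):

#include <stdio.h>
#include <stdint.h>

/* Encode a buffer whose length is a multiple of 5 bytes: every 40-bit group
 * becomes 8 output characters of 5 bits each. */
void base32_encode(const unsigned char *in, size_t len, char *out) {
    static const char alphabet[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    size_t o = 0;
    for (size_t i = 0; i < len; i += 5) {
        uint64_t group = 0;
        for (int j = 0; j < 5; j++)             /* pack 5 bytes = 40 bits */
            group = (group << 8) | in[i + j];
        for (int j = 7; j >= 0; j--)            /* emit 8 x 5-bit digits */
            out[o++] = alphabet[(group >> (5 * j)) & 31];
    }
    out[o] = '\0';
}

int main(void) {
    const unsigned char data[] = { 'h', 'e', 'l', 'l', 'o' };  /* 5 bytes = 40 bits */
    char encoded[9];
    base32_encode(data, sizeof data, encoded);
    printf("%s\n", encoded);                    /* prints NBSWY3DP */
    return 0;
}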
Sure, why not:
Welcome to Clozure Common Lisp Version 1.7-dev-r14614M-trunk (DarwinX8664)!
? (let ((*print-base* 32)) (print 1234567))
15LK7
1234567
? (let ((*print-base* 32)) (print (expt 45 19)))
KAD5A5KM53ADJVMPNTHPL
25765451768359987049102783203125
?

Why is a SHA-1 Hash 40 characters long if it is only 160 bit?

The title of the question says it all. I have been researching SHA-1, and in most places I see it being 40 hex characters long, which to me is 640 bits. Could it not be represented just as well with only 10 hex characters, since 160 bits = 20 bytes? One hex character can represent 2 bytes, right? Why is it twice as long as it needs to be? What am I missing in my understanding?
And couldn't a SHA-1 hash be even just 5 or fewer characters if using Base32 or Base36?
One hex character can only represent 16 different values, i.e. 4 bits (16 = 2^4).
40 × 4 = 160.
And no, you need much more than 5 characters in base-36.
There are 2^160 different SHA-1 hashes in total.
2^160 = 16^40, so this is another reason why we need 40 hex digits.
But 2^160 = 36^(160 · log_36 2) ≈ 36^30.9482…, so you still need 31 characters using base-36.
I think the OP's confusion comes from the fact that a string representing a SHA-1 hash takes 40 bytes (at least if you are using ASCII), which equals 320 bits (not 640 bits).
The reason is that the hash is in binary and the hex string is just an encoding of that. So if you were to use a more efficient encoding (or no encoding at all), you could take only 160 bits of space (20 bytes), but the problem with that is it won't be binary safe.
You could use base64 though, in which case you'd need about 27-28 bytes (or characters) instead of 40 (see this page).
There are two hex characters per 8-bit byte, not two bytes per hex character.
If you are working with 8-bit bytes (as in the SHA-1 definition), then a hex character encodes a single high or low 4-bit nibble within a byte. So it takes two such characters for a full byte.
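A tiny C sketch of that nibble split, turning one byte into its two hex characters:

#include <stdio.h>

int main(void) {
    const char hexdigits[] = "0123456789abcdef";
    unsigned char byte = 0xA7;                        /* any 8-bit value */

    char high = hexdigits[(byte >> 4) & 0x0F];        /* upper 4-bit nibble */
    char low  = hexdigits[byte & 0x0F];               /* lower 4-bit nibble */

    printf("0x%02X -> \"%c%c\"\n", byte, high, low);  /* one byte -> two characters */
    return 0;
}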
My answer only differs from the previous ones in my theory as to the EXACT origin of the OP's confusion, and in the baby steps I provide for elucidation.
A character takes up different numbers of bytes depending on the encoding used (see here). There are a few contexts these days when we use 2 bytes per character, for example when programming in Java (here's why). Thus 40 Java characters would equal 80 bytes = 640 bits, the OP's calculation, and 10 Java characters would indeed encapsulate the right amount of information for a SHA-1 hash.
Unlike the thousands of possible Java characters, however, there are only 16 different hex characters, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F. But these are not the same as Java characters, and take up far less space than the encodings of the Java characters 0 to 9 and A to F. They are symbols signifying all the possible values represented by just 4 bits:
0 0000 4 0100 8 1000 C 1100
1 0001 5 0101 9 1001 D 1101
2 0010 6 0110 A 1010 E 1110
3 0011 7 0111 B 1011 F 1111
Thus each hex character is only half a byte, and 40 hex characters gives us 20 bytes = 160 bits - the length of a SHA-1 hash.
2 hex characters make up a range from 0 to 255, i.e. 0x00 == 0 and 0xFF == 255. So 2 hex characters are 8 bits, which makes 160 bits for your SHA digest.
SHA-1 is 160 bits
That translates to 20 bytes = 40 hex characters (2 hex characters per byte)

Would it be possible to have a UTF-8-like encoding limited to 3 bytes per character?

UTF-8 requires 4 bytes to represent characters outside the BMP. That's not bad; it's no worse than UTF-16 or UTF-32. But it's not optimal (in terms of storage space).
There are 13 byte values (C0-C1 and F5-FF) that are never used. There are also multi-byte sequences that are never used, such as those corresponding to "overlong" encodings. If these had been available to encode characters, then more characters could have been represented by 2-byte or 3-byte sequences (of course, at the expense of making the implementation more complex).
Would it be possible to represent all 1,114,112 Unicode code points by a UTF-8-like encoding with at most 3 bytes per character? If not, what is the maximum number of characters such an encoding could represent?
By "UTF-8-like", I mean, at minimum:
The bytes 0x00-0x7F are reserved for ASCII characters.
Byte-oriented find / index functions work correctly. You can't find a false positive by starting in the middle of a character like you can in Shift-JIS.
Update -- My first attempt to answer the question
Suppose you have a UTF-8-style classification of leading/trailing bytes. Let:
A = the number of single-byte characters
B = the number of values used for leading bytes of 2-byte characters
C = the number of values used for leading bytes of 3-byte characters
T = 256 - (A + B + C) = the number of values used for trailing bytes
Then the number of characters that can be supported is N = A + BT + CT².
Given A = 128, the optimum is at B = 0 and C = 43. This allows 310,803 characters, or about 28% of the Unicode code space.
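A small brute-force check in C (assuming A = 128 fixed and T = 256 - (A + B + C), as defined above) confirms that optimum:

#include <stdio.h>

int main(void) {
    /* A = 128 single-byte values; split the remaining 128 byte values between
     * 2-byte lead bytes (B), 3-byte lead bytes (C) and trailing bytes (T). */
    int bestB = 0, bestC = 0;
    long bestN = 0;
    for (int B = 0; B <= 128; B++) {
        for (int C = 0; B + C <= 128; C++) {
            long T = 128 - B - C;
            long N = 128 + (long)B * T + (long)C * T * T;
            if (N > bestN) { bestN = N; bestB = B; bestC = C; }
        }
    }
    printf("best: B = %d, C = %d, N = %ld\n", bestB, bestC, bestN);  /* B=0, C=43, N=310803 */
    return 0;
}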
Is there a different approach that could encode more characters?
It would take a little over 20 bits to record all the Unicode code points (assuming your number is correct), leaving over 3 bits out of 24 for encoding which byte is which. That should be adequate.
I fail to see what you would gain by this, compared to what you would lose by not going with an established standard.
Edit: Reading the spec again, you want the values 0x00 through 0x7f reserved for the first 128 code points. That means you only have 21 bits in 3 bytes to encode the remaining 1,113,984 code points. 21 bits is barely enough, but it doesn't really give you enough extra to do the encoding unambiguously. Or at least I haven't figured out a way, so I'm changing my answer.
As to your motivations, there's certainly nothing wrong with being curious and engaging in a little thought exercise. But the point of a thought exercise is to do it yourself, not try to get the entire internet to do it for you! At least be up front about it when asking your question.
I did the math, and it's not possible (if wanting to stay strictly "UTF-8-like").
To start off, the four-byte range of UTF-8 covers U+010000 to U+10FFFF, which is a huge slice of the available characters. This is what we're trying to replace using only 3 bytes.
By special-casing each of the 13 unused prefix bytes you mention, you could gain 65,536 characters each, which brings us to a total of 13 * 0x10000, or 0xD0000.
This would bring the total 3-byte character range to U+010000 to U+0DFFFF, which is almost all, but not quite enough.
Sure it's possible. Proof:
2^24 = 16,777,216
So there is enough of a bit-space for 1,114,112 characters but the more crowded the bit-space the more bits are used per character. The whole point of UTF-8 is that it makes the assumption that the lower code points are far more likely in a character stream so the entire thing will be quite efficient even though some characters may use 4 bytes.
Assume 0-127 remains one byte. That leaves about 8.4M slots for 1.1M characters. You can then solve this as an equation. Choose an encoding scheme where the first byte determines how many bytes are used. So there are 128 lead values, each of which will represent either 256 characters (2 bytes total) or 65,536 characters (3 bytes total). So:
256x + 65536(128-x) = 1114112 - 128
Solving this, you need 111 values of the first byte for 2-byte characters and the remaining 17 for 3-byte characters. To check:
128 + 111 * 256 + 17 * 65536 = 1,142,656, which is more than the required 1,114,112.
To put it another way:
128 code points require 1 byte;
28,416 code points require 2 bytes; and
1,114,112 code points require 3 bytes.
Of course, this doesn't allow for the inevitable expansion of Unicode, which UTF-8 does. You can adjust this to the first byte meaning:
0-127 (128) = 1 byte;
128-191 (64) = 2 bytes;
192-255 (64) = 3 bytes.
This would be better because simple bitwise AND tests can determine the length, and it gives an address space of 4,210,816 code points.
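A quick throwaway C check of both layouts discussed in this answer:

#include <stdio.h>

int main(void) {
    /* 111 lead values for 2-byte sequences, 17 for 3-byte sequences */
    long tight   = 128 + 111L * 256 + 17L * 65536;
    /* 64 lead values for 2-byte sequences, 64 for 3-byte sequences */
    long roomier = 128 + 64L * 256 + 64L * 65536;

    printf("111/17 split: %ld code points\n", tight);    /* 1,142,656 >= 1,114,112 */
    printf("64/64 split:  %ld code points\n", roomier);  /* 4,210,816 */
    return 0;
}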