How is the range of a 32-bit register defined?

I have a doubt: the range of a 32-bit register is 2^32. Is it because a bit can store 2 values? If so, could you justify it? It's really confusing.

Say you have 2 bits. The possible binary values you could create with 2 bits are 00, 01, 10, and 11, hence 2^2 = 4. So in decimal you could store 0, 1, 2, 3 (4 values) with 2 bits.
The same reasoning applies to 32 bits: 2^32 distinct values.

Related

Why is this question worded like this regarding main memory?

I have this question:
1. How many bits are required to address a 4M × 16 main memory if main memory is word-addressable?
And before you say it, yes I have looked this question up and there have been posts on stackoverflow asking about how to answer it but my question is different.
This may sound like a silly question but I don't understand what it means when it says "How many bits are required to address...".
To my understanding, and from what I have been taught, (if we're talking word-addressable) each cell would contain 16 bits in the RAM chip, addresses would run from 0 to 4M-1, with 2^22 words. But I don't understand what it is asking when it says 'How many bits are required...':
The answer says 22 bits would be required but I just don't understand. 22 bits for what? All I know is each word is 16 bits and each cell would be numbered from 0 - 4M-1. Can someone clear this up for me please?
Since you have 4 million cells, you need a number that can represent each cell. 22 bits is the size of the address needed to represent 2^22 cells (4,194,304 cells).
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor.
(https://en.m.wikipedia.org/wiki/Word)
Using this principle imagine a memory with a word that uses 2 bits only, and it is capable of storing 4 words:
XX|YY|WW|ZZ
Each word in this memory is represented by a number that tells the computer its position.
XX is 0
YY is 1
WW is 2
ZZ is 3
The smallest binary length that can represent 3 is 2 bits, right? Now apply the same reasoning to a larger memory. It doesn't matter whether the word size is 16 bits or 2 bits; only the number of words matters.

How to hash two 32 bit integers into one 32 bit integer without collision?

I am looking for a one-way hash to combine two 32-bit integers into one 32-bit integer. I'm not sure if it's feasible to do without collisions.
Edit:
I think my integers are generally small. One of them rarely takes more than 14 bits, and the other one rarely takes more than 20 bits.
Edit 2: Thanks for the help in the comments. For cases where the combination of the two integers takes more than 32 bits, I can handle them differently and not hash them. With that in mind, how should I hash my integers?
Thanks!

Minimum and maximum length of X509 serialNumber

The CA/Browser Forum Baseline Requirements section 7.1 states the following:
CAs SHOULD generate non‐sequential Certificate serial numbers that exhibit at least 20 bits of entropy.
Meanwhile, RFC 5280 section 4.1.2.2 specifies:
Certificate users MUST be able to handle serialNumber values up to 20 octets. Conforming CAs MUST NOT use serialNumber values longer than 20 octets.
Which integer range can I use in order to fulfill both requirements? It is my understanding that the max. value will be 2^159 - 1 (730750818665451459101842416358141509827966271487). What is the min. value?
The CAB requirement has since changed to a minimum of 64 bits of entropy. Since the leading bit in the representation of positive integers must be 0, there are a number of strategies:
Produce a random 64-bit string. If the leading bit is 1, prepend 7 more random bits. This has the disadvantage that it is not obvious whether the generation used 63 or 64 bits.
Produce a random 64-bit string and add a leading 0 byte. This is still compact (wastes only 7-8 bits) but makes it obvious that it has 64 bits of entropy.
Produce a random string longer than 64 bits, for example 71 or 127 bits (plus a leading 0 bit). 16 bytes seems to be a common length and is well under the 20-byte limit.
By the way, since it is unclear whether the 20-byte maximum length includes a potential 0 prefix, for interop reasons you should not generate more than 159 bits.
A random integer with x bits of entropy can be produced by generating a random number between 0 and 2^x - 1 (inclusive).

Is there a 2- or 3-bit checksum algorithm?

Is there a 2- or 3-bit checksum algorithm that I can use to check my 6 bits of data for errors? I am reading the data from an optical sensor that detects 8-bit patterns. I was not able to find anything.
You can XOR the first three bits with the second three and append the result, then XOR them again on the other side to check.
You could possibly use two 3b/4b codes to validate the result:
https://en.wikipedia.org/wiki/8b/10b_encoding
Not alot you can do with 0, 1, 2 or 3. Maybe get an integer value from your 6 bits and then do mod 4.

Bits, Bytes and numbers. Shrink the size of the byte

It may be a very basic low-level architecture question. I am trying to get my head around it. Please correct me if my understanding is wrong as well.
Word = 64 bits, 32 bits, etc. This is the number of bits a computer can read at a time.
Questions:
1.) Would this mean we can send 4 numbers (of 8 bits/1 byte each) in 32 bits? Or a combination of 8-bit (byte), 32-bit (4-byte), etc. numbers at one time?
2.) If we need to send only an 8-bit number, how does it form a word? Is only the first byte filled and the rest padded with 0s, or is the last byte filled and the rest padded with 0s? I also saw somewhere that the first byte carries information about how the rest of the bytes are filled. Does that apply here? For example, UTF-8: ASCII is 1 byte, and some other chars take up to 4 bytes. So when we send one char, do we send all 4 bytes together, filling the bytes as required for the char and leaving the rest 0s?
3.) Now, to represent an 8-digit number, we would need 27 bits (remember the famous question: sorting 1 million 8-digit numbers with just 1 MB of RAM). Can we use exactly 27 bits, which is 32 bits (4 bytes) - 5 bits, and use those 5 bits for something else?
Appreciate your answers!
1- Yes, four 8-bit integers can fit in a 32-bit integer. This can be done using bitwise operations, for example (using C operators):
((a & 255) << 24) | ((b & 255) << 16) | ((c & 255) << 8) | (d & 255)
This example uses C operators, but the same operators serve the same purpose in several other languages (see below for a complete, compilable version in C). You may want to look up the bitwise operators AND (&), OR (|), and left shift (<<).
2- Unused bits are generally 0. The first byte is sometimes used to represent the type of encoding (Look up "Magic Numbers"), but this is implementation dependent. Sometimes it is a different number of bits.
3- Groups of 8-digit numbers can be compressed to use only 27 bits each. This is very similar to the example above, except the number of bits and the size of the data are different. To do this, you would use 864-bit groups, i.e. 27 32-bit integers, to store 32 27-bit numbers. This is more complex than the example, but it uses the same principles.
Complete, compilable example in C:
#include <stdio.h>
#include <stdint.h>

/* Compresses four integers, each carrying one byte of data in its
 * least significant byte, into a single 32-bit integer. */
uint32_t compress(int a, int b, int c, int d) {
    uint32_t compressed = ((uint32_t)(a & 255) << 24) | ((uint32_t)(b & 255) << 16) |
                          ((uint32_t)(c & 255) << 8) | (uint32_t)(d & 255);
    return compressed;
}

/* Test the compress() function and print the results. */
int main(void) {
    printf("%x\n", (unsigned)compress(255, 0, 255, 0));
    printf("%x\n", (unsigned)compress(192, 168, 0, 255));
    printf("%x\n", (unsigned)compress(84, 94, 255, 2));
    return 0;
}
I think that clarification on 2 points is required here :
1. Memory addressing.
2. Word
Memories can be addressed in 2 ways, they are generally either byte addressable or word addressable.
Byte addressable memory means that each byte is given a separate address.
a -> 0th byte
b -> 1st byte
Word-addressable memories are those in which each group of bytes as wide as the word gets an address. E.g., if the word length is 32 bits:
a->0th byte
b->4th byte
And so on.
Word
I would say that a word is the maximum number of bits a processor can handle at a time. For the 8086, for example, it's 16.
It is usually the largest number on which arithmetic can be performed by the processor. Continuing the example, the 8086 can perform operations on 16-bit numbers at a time.
Now I'll try to answer the questions:
1.) Would this mean, we can send, 4 numbers (of a 8 bits/byte length each) for 32 bit? Or combination of 8 bit (byte), 32 bit (4 bytes),
etc numbers at one time?
You can always define your own interpretation for a bunch of bits.
For example, if memory is byte addressable, we can treat every byte individually, and thus we can write assembly-level code that treats each byte as a separate 8-bit number.
If it is not, you can use bit operations to extract the individual bytes.
The point is that you can represent four 8-bit numbers in 32 bits.
2) Mostly, the leftover significant bits are stuffed with 0s (for unsigned numbers).
3.) Now to represent 8 digit number, we would need 27 bits (remember famous question, sorting 1 million 8 digit number with just 1 MB RAM).
Can we exactly use 27 bits, which is 32 bits (4 bytes) - 5 bits? and
use those 5 digits for something else?
Yes, you can do this too. But remember the great space-time tradeoff: you do save 5 bits per number, but you'll need bit operations and all the really cool but hard-to-read stuff, which costs time and makes the code more complex.
I don't think you'll ever come across a situation where you need that level of saving unless you are coding for a very constrained system (embedded, etc.).