Virtually Indexed Caches - cpu-architecture

I have been having a problem answering the following question:
Explain the challenges if the number of bits in the page offset does not equal the total number of bits in the “Index” and “Byte” fields.
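For concreteness, here is a small worked example with hypothetical cache and page parameters (these numbers are my own illustration, not from the question) showing how the two bit counts are computed and compared:

```c
#include <stdio.h>

/* Hypothetical parameters, just to illustrate the bit counts involved. */
#define PAGE_SIZE   4096    /* 4 KiB page   -> 12 page-offset bits        */
#define CACHE_SIZE  32768   /* 32 KiB direct-mapped cache                 */
#define BLOCK_SIZE  64      /* 64-byte blocks -> 6 "Byte" offset bits     */

static int log2_int(unsigned x) {
    int n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    int byte_bits   = log2_int(BLOCK_SIZE);               /* offset within a block */
    int index_bits  = log2_int(CACHE_SIZE / BLOCK_SIZE);  /* which cache line      */
    int offset_bits = log2_int(PAGE_SIZE);                /* offset within a page  */

    printf("Index + Byte bits = %d, page offset bits = %d\n",
           index_bits + byte_bits, offset_bits);
    /* If Index + Byte exceeds the page offset, some index bits come from the
       virtual page number, so the cache is indexed before translation is
       known -- the classic virtually-indexed aliasing concern. */
    return 0;
}
```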

Related

Why is this question worded like this regarding main memory?

I have this question:
1. How many bits are required to address a 4M × 16 main memory if main memory is word-addressable?
And before you say it, yes I have looked this question up and there have been posts on stackoverflow asking about how to answer it but my question is different.
This may sound like a silly question but I don't understand what it means when it says "How many bits are required to address...".
From my understanding and what I have been taught, (if we're talking about word-addressable memory) each cell in the RAM chip would contain 16 bits and the cells would be numbered from 0 to 4M-1, giving 2^22 words. But I don't understand what it is asking when it says 'How many bits are required...':
The answer says 22 bits would be required, but I just don't understand. 22 bits for what? All I know is each word is 16 bits and each cell would be numbered from 0 to 4M-1. Can someone clear this up for me please?
Since you have 4 million cells, you need a number that can identify each cell. 22 bits is the size of the address needed to represent 2^22 cells (4,194,304 cells).
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor.
(https://en.m.wikipedia.org/wiki/Word)
Using this principle, imagine a memory whose word is only 2 bits wide and which can store 4 words:
XX|YY|WW|ZZ
Each word in this memory is identified by a number that tells the computer its position.
XX is 0
YY is 1
WW is 2
ZZ is 3
The smallest binary number that can represent 3 is 2 bits long, right? Now apply the same idea to a larger memory. It doesn't matter whether the word size is 16 bits or 2 bits; only the number of words matters.
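As a sanity check on the 22-bit answer, here is a minimal C sketch (my own illustration, not from the original answer) that computes the number of address bits needed for a given number of addressable words:

```c
#include <stdio.h>

/* Number of bits needed to give every one of n_words cells a distinct address. */
static int address_bits(unsigned long long n_words) {
    int bits = 0;
    while ((1ULL << bits) < n_words)
        bits++;
    return bits;
}

int main(void) {
    unsigned long long words = 4ULL * 1024 * 1024;  /* 4M words of 16 bits each */
    printf("%d bits\n", address_bits(words));       /* prints 22                */
    return 0;
}
```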

Determining maximum memory size, maximum unique operations, and maximum value of an unsigned constant for 12 bit instruction microprocessor

I'm new to computer architecture and am having some trouble with this question:
A hypothetical microprocessor having 12-bit instructions is composed of three fields: the first 4 bits are reserved for the opcode, the next two bits are reserved for a register number, and the remaining bits contain the memory operand. What are the maximum memory size, the maximum number of different operations the system would be able to understand, and the maximum value of an unsigned integer?
There’s not enough information to answer that question. You provided specs about the instructions but not about address size or data word size. The only thing we can say for sure is that 4 bits are enough to specify 16 different instructions.
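For what it's worth, if one does make the (unstated) assumption that the 6-bit operand field is used directly as a word address, the arithmetic would look like the sketch below; treat it as an illustration of that assumption, not a definitive reading of the question:

```c
#include <stdio.h>

int main(void) {
    /* Assumed field widths from the question's 12-bit instruction format. */
    int opcode_bits  = 4;
    int reg_bits     = 2;
    int operand_bits = 12 - opcode_bits - reg_bits;   /* 6 bits left over */

    printf("max operations        = %d\n", 1 << opcode_bits);        /* 16 */
    printf("max memory (words)    = %d\n", 1 << operand_bits);       /* 64 */
    printf("max unsigned constant = %d\n", (1 << operand_bits) - 1); /* 63 */
    return 0;
}
```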

Paged virtual memory

I am currently studying exam questions but am stuck on this one; I hope someone can help me understand it.
Question: Assume that we have a paged virtual memory with a page size of 4Ki byte.
Assume that each process has four segments (for example: code, data, stack,
extra) and that these can be of arbitrary but given size. How much will the
operating system lose in internal fragmentation?
The answer is: Each segment will on average give rise to 2 Ki byte of fragmentation.
This means, on average, 8 Ki byte per process.
If we, for example, have 100 processes, this is a total loss of 800 Ki byte.
My question:
How does the answer get 2 Ki byte of fragmentation for each segment? How is it possible to calculate that size; am I missing something here?
If we have 8 Ki byte per process, that would not even fit in a 4 Ki byte page; isn't that actually external fragmentation?
This is academic BS designed to make things confusing.
They are saying that, probability-wise, the last page of each section in the executable file will only use half of the page size on average. You can't know that size exactly; they are just computing a simple expected value. That also presumes a particular behavior of the linker.
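To make the "half a page per segment" claim concrete, here is a small sketch (my own illustration) of the expected-value arithmetic, assuming each segment's last page is on average half full:

```c
#include <stdio.h>

int main(void) {
    int page_size = 4 * 1024;  /* 4 Ki byte pages          */
    int segments  = 4;         /* code, data, stack, extra */
    int processes = 100;

    /* Assumption: the final page of each segment is, on average, half used. */
    int waste_per_segment = page_size / 2;                  /* 2 Ki byte   */
    int waste_per_process = waste_per_segment * segments;   /* 8 Ki byte   */
    int total_waste       = waste_per_process * processes;  /* 800 Ki byte */

    printf("per segment: %d bytes, per process: %d bytes, total: %d bytes\n",
           waste_per_segment, waste_per_process, total_waste);
    return 0;
}
```

Note that the 8 Ki byte per process is spread over four different last pages (one per segment), so nothing has to fit inside a single 4 Ki byte page; it is still internal fragmentation because the wasted space sits inside pages that are already allocated.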

page replacement algorithms comparison in C

Can someone please help me with this question that was asked in my interview ?
Thanks in Advance :)
Implement a program in the C language which compares the performance of various page replacement algorithms. It should take the following as input:
CPU address size in bits e.g. 64 bits.
Page size in bytes.
Physical memory size in bytes.
Length of page reference string.
Page reference locality factor, which is a value between 0 and 1. It indicates what fraction of the pages in the page reference string are repeatedly accessed.
Memory access time in ns.
Page swap time in ms.
Your program should suitably compare the performance of the following algorithms: a) FIFO b) LRU c) Least Frequently Used d) Random.
Performance should be measured in terms of: a) Page fault rate and b) Effective memory
access time.
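No complete solution was posted, but here is a minimal sketch of the kind of simulator the question asks for. It only implements FIFO on a hard-coded reference string with hard-coded, assumed timing parameters; a real submission would read the listed inputs, generate the reference string from the locality factor, and add LRU, LFU and Random on top of the same loop.

```c
#include <stdio.h>
#include <string.h>

#define FRAMES 3          /* physical memory size / page size (assumed)     */

/* FIFO page replacement on a reference string; returns the fault count. */
static int simulate_fifo(const int *refs, int n) {
    int frames[FRAMES];
    int next = 0, faults = 0;
    memset(frames, -1, sizeof frames);     /* all frames start empty */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest resident page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};   /* sample string   */
    int n = sizeof refs / sizeof refs[0];

    double mem_ns  = 100.0;          /* memory access time (assumed input)  */
    double swap_ns = 8e6;            /* 8 ms page swap time, in nanoseconds */

    int faults = simulate_fifo(refs, n);
    double fault_rate = (double)faults / n;
    /* One common effective-access-time formulation: base access cost plus
       the fault rate weighted by the swap cost. */
    double eat_ns = mem_ns + fault_rate * swap_ns;

    printf("FIFO: %d faults, fault rate %.2f, EAT %.0f ns\n",
           faults, fault_rate, eat_ns);
    return 0;
}
```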

In Voldemort, why does the hash ring only extend to 2^31-1?

On the project voldemort design page:
http://project-voldemort.com/design.php
It is stated that the hash ring covers the interval [0, 2^31-1].
Now, the interval [0, 2^31-1] represents 2^31 total numbers, and the largest number, 2^31-1, is just 31 bits all set to 1. (To convince yourself of this, consider 2^3-1: 2^3 = 8 is 1000 in binary, and 2^3-1 = 7 is 111 in binary.)
Thus, if a normal 32-bit address word is used to store the value, you have 1 bit free.
Thus, why is 2^31-1 the upper limit? Is that extra bit used for some kind of system bookkeeping?
(e.g. 1 extra bit would provide room for safely adding two valid hash addresses without overflow).
And finally, is this choice specific to voldemort, or is it seen in other consistent hashing schemes?
I think you only have 1 bit free, not 2. The -1 accounts for the fact that counting starts at 0 instead of 1 (the same reason loops count from 0 to count-1). I would guess the reason they use 2^31 instead of 2^32 is that they're using a signed integer, so the last bit is the sign bit and is not usable.
Edit:
From the page you linked:
To visualize the consistent hashing method we can see the possible
integer hash values as a ring beginning with 0 and circling around to
2^31-1.
It specifies an integer so unless you want negative hash values you're stuck with 2^31 instead of 2^32.
Answering the question a little late, just by 4 years :)
The reason is that Voldemort is written in Java, and Java has no unsigned int. 2^31 is already very high for the number of partitions; generally we recommend running with only a few thousand partitions.
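Since Java's int is signed and 32 bits wide, the usual trick is to clear the sign bit so every hash value lands in [0, 2^31-1]. Sketched here in C purely for illustration (this is not Voldemort's actual code):

```c
#include <stdint.h>
#include <stdio.h>

/* Map an arbitrary 32-bit hash into the ring interval [0, 2^31 - 1] by
   clearing the sign bit, the way code restricted to signed 32-bit ints
   (such as Java's int) typically keeps hash values non-negative. */
static int32_t ring_position(uint32_t raw_hash) {
    return (int32_t)(raw_hash & 0x7FFFFFFFu);
}

int main(void) {
    uint32_t raw = 0xDEADBEEFu;                 /* sign bit happens to be set */
    printf("raw = %u, ring position = %d\n", raw, ring_position(raw));
    return 0;
}
```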