Page replacement algorithms comparison in C

Can someone please help me with this question that was asked in my interview?
Thanks in advance :)
Implement a program in C that compares the performance of various page replacement algorithms. It should take the following as input:
CPU address size in bits e.g. 64 bits.
Page size in bytes.
Physical memory size in bytes.
Length of page reference string.
Page reference locality factor, which is a value between 0 and 1. It indicates what fraction
of the pages in the reference string are repeatedly accessed.
Memory access time in ns.
Page swap time in ms.
Your program should suitably compare the performance of the following algorithms: a) FIFO, b) LRU, c) Least Frequently Used (LFU), d) Random.
Performance should be measured in terms of: a) Page fault rate and b) Effective memory
access time.
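Not a full answer, but here is a minimal C sketch of how the two requested metrics could be derived once a simulation has counted the faults for each algorithm. The effective-access-time formula below is one common formulation (every access pays the memory latency, and each fault additionally pays the swap latency), and all of the numbers in main() are made-up placeholders, not measurements.

    #include <stdio.h>

    /* Report fault rate and effective access time (EAT) for one simulated run.
     * EAT here = memory access time + fault_rate * page swap time, with the
     * swap time converted from ms to ns. This is one common formulation. */
    static void report(const char *name, long refs, long faults,
                       double mem_ns, double swap_ms)
    {
        double fault_rate = (double)faults / (double)refs;
        double eat_ns     = mem_ns + fault_rate * swap_ms * 1e6; /* ms -> ns */
        printf("%-8s fault rate = %.4f, EAT = %.1f ns\n", name, fault_rate, eat_ns);
    }

    int main(void)
    {
        /* Made-up placeholder counts, just to exercise the function;
         * a real program would get these from the FIFO/LRU/LFU/Random runs. */
        report("FIFO",   200000, 12000, 100.0, 8.0);
        report("LRU",    200000,  9500, 100.0, 8.0);
        report("LFU",    200000, 10200, 100.0, 8.0);
        report("Random", 200000, 11800, 100.0, 8.0);
        return 0;
    }

The simulation part would then be four separate runs over the same generated reference string, each with its own eviction policy, feeding its fault count into a report like the one above.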


What is the benefit of having the registers as a part of memory in AVR microcontrollers?

Larger memories have higher decoding delay; why is the register file a part of the memory then?
Does it only mean that the registers are "mapped" SRAM registers that are stored inside the microprocessor?
If not, what would be the benefit of using registers as they won't be any faster than accessing RAM? Furthermore, what would be the use of them at all? I mean these are just a part of the memory so I don't see the point of having them anymore. Having them would be just as costly as referencing memory.
The picture is taken from The AVR Microcontroller and Embedded Systems: Using Assembly and C by Muhammad Ali Mazidi, Sarmad Naimi, and Sepehr Naimi.
AVR has some instructions with indirect addressing, for example LD (LDD) – Load Indirect From Data Space to Register using Z:
Loads one byte indirect with or without displacement from the data space to a register. [...]
The data location is pointed to by the Z (16-bit) Pointer Register in the Register File.
So now you can move from a register by loading its data-space address into Z, allowing indirect or indexed register-to-register moves. Certainly one can think of some usage where such indirect access would save the odd instruction.
what would be the benefit of using registers as they won't be any faster than accessing RAM?
Accessing the general purpose registers is faster than accessing RAM.
First of all, let us define what "fast" means in a microcontroller: fast means how many cycles an instruction takes to execute. Look at the AVR architecture.
The general purpose registers (GPRs) are the inputs to the ALU, and the GPRs are controlled by the instruction register (2 bytes wide), which holds the next instruction fetched from code memory.
Let us examine the simple instruction ADD Rd, Rr, where Rd and Rr are any two registers in the GPRs, so 0 <= r, d <= 31 and each of r and d can be represented in 5 bits. Now open the "AVR Instruction Set Manual" (page 32) and look at the opcode for this simple ADD instruction: 0000 11rd dddd rrrr. Because this opcode is two bytes (the code memory width), it is fetched, decoded and executed in one cycle (under the concept of pipelining, of course). Only one cycle, which seems pretty good to me.
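To make that 16-bit encoding concrete, here is a small illustrative C sketch (the helper name encode_add is mine, not something from the manual) that packs the two 5-bit register numbers into the 0000 11rd dddd rrrr format quoted above:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack ADD Rd, Rr into its 16-bit opcode: 0000 11rd dddd rrrr,
     * where d and r are the 5-bit register numbers (0..31). */
    static uint16_t encode_add(unsigned d, unsigned r)
    {
        return (uint16_t)(0x0C00              /* 0000 11.. .... ....  */
                          | ((r & 0x10) << 5) /* r4      -> bit 9     */
                          | ((d & 0x1F) << 4) /* d4..d0  -> bits 8..4 */
                          | (r & 0x0F));      /* r3..r0  -> bits 3..0 */
    }

    int main(void)
    {
        printf("ADD r1, r2 -> 0x%04X\n", encode_add(1, 2)); /* prints 0x0C12 */
        return 0;
    }

The whole instruction, both operands included, fits in one 2-byte word, which is exactly what lets it be fetched in a single cycle.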
I mean these are just a part of the memory so I don't see the point of having them anymore. Having them would be just as costly as referencing memory
You suggest making all of RAM an input to the ALU; this is a very bad idea, because a memory address takes 2 bytes.
If you have 2 operands per instruction, as in the ADD instruction, you would need 4 bytes just to hold the operand addresses, plus at least 1 more byte for the opcode of the operation itself: 5 bytes in total, which is a waste of code memory!
Furthermore, this architecture can only fetch 2 bytes at a time (the instruction register width), so you would need to spend extra cycles fetching each instruction from code memory, which wastes cycles and makes the whole system slower.
Register numbers are only 4 or 5 bits wide, depending on the instruction, allowing 2 per instruction with room to spare in a 16-bit instruction word.
Conclusion: the GPRs are crucial for saving both code memory and program execution time.
Larger memories have higher decoding delay; why is the register file a part of the memory then?
When the CPU deals with the GPRs it only has to address the first 32 locations, not the whole data space, so the decoding delay stays small.
Final comment
Do not worry about the timing diagrams of different RAM technologies, because you have no control over them. Who does have control? The architecture designers: they specify the maximum crystal frequency you can use with their architecture, and then everything will be fine. Your only concern is how many cycles your application consumes.

Paged virtual memory

I am currently studying exam questions but am stuck on this one; I hope someone can help me understand it.
Question: Assume that we have paged virtual memory with a page size of 4 Ki bytes. Assume that each process has four segments (for example: code, data, stack, extra) and that these can be of arbitrary but given size. How much will the operating system lose to internal fragmentation?
The answer is: Each segment will on average give rise to 2 Ki bytes of fragmentation.
On average this means 8 Ki bytes per process.
If we for example have 100 processes, this is a total loss of 800 Ki bytes.
My question:
How does the answer arrive at 2 Ki bytes of fragmentation for each segment? How can we calculate that size at all; am I missing something here?
If we have 8 Ki bytes per process, that would not even fit in a 4 Ki byte page; isn't that actually external fragmentation?
This is academic BS designed to make things confusing.
They are saying that, probability-wise, the last page of each section in the executable file will on average use only half of the page size. You cannot calculate the exact size; they are just doing simple expected-value arithmetic, and it presumes a particular behavior of the linker.
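To spell out the expected-value arithmetic behind that answer, here is a tiny C sketch; the constants simply restate the numbers given in the question and answer above:

    #include <stdio.h>

    int main(void)
    {
        const double page_kib  = 4.0;   /* page size: 4 Ki bytes    */
        const int    segments  = 4;     /* code, data, stack, extra */
        const int    processes = 100;

        /* The last page of each segment is, on average, only half used. */
        double per_segment = page_kib / 2.0;          /*   2 Ki bytes */
        double per_process = per_segment * segments;  /*   8 Ki bytes */
        double total       = per_process * processes; /* 800 Ki bytes */

        printf("per segment: %.0f KiB, per process: %.0f KiB, total: %.0f KiB\n",
               per_segment, per_process, total);
        return 0;
    }

Note that the 8 Ki bytes per process is waste spread over four different last pages, one per segment; it never has to fit inside a single 4 Ki byte page, which is why it counts as internal (not external) fragmentation.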

Finding minimum page size to allow TLB access to overlap with tag fetch [duplicate]

This question already has an answer here:
Minimum associativity for a PIPT L1 cache to also be VIPT, accessing a set without translating the index to physical
Homework question, so please just nudge me in the right direction.
Consider a system with physically-addressed caches, and assume that 40-bit virtual addresses and 32-bit physical addresses are used, and the memory is byte-addressable. Further assume that the cache is 4-way set-associative, the cache line size is 64 Bytes and the total size of the cache is 64 KBytes.
What should be the minimum page size in this system to allow for the overlap of the TLB access and the cache access?
I've been stuck on this question and have no idea how to even begin. Can someone give me a hint towards finding the solution?
I think the most important piece of information in the question is
overlap of the TLB access and the cache access
This means we access the cache at the same time we access the TLB. In practice, we index the cache with the index bits taken from the virtual address, and by the time we have located the set in the cache, we have the translation (physical address) from the TLB; then we can do the tag comparison against the physical address. In other words, the cache acts as a virtually indexed, physically tagged (VIPT) cache.
Even though the scheme sounds efficient, the thing to look out for is that the bits used to index the cache (set index plus block offset) cannot extend beyond the page offset. Put simply, the page size puts an upper limit on how much of the cache can be indexed before translation.
Now coming back to your question:
it is a 64 KByte cache, 4-way set associative, with 64-byte cache lines.
Number of sets = (64 KBytes / 4) / 64 Bytes = 2^8 = 256 sets
Indexing the cache therefore uses 8 set-index bits plus 6 block-offset bits, i.e. the low 14 bits of the address. For the index bits of the virtual address and the physical address to be guaranteed identical, all 14 bits must fall inside the page offset, so the page must be at least 2^14 bytes = 16 KBytes (exactly the per-way size, 64 KBytes / 4). If a page is smaller than 16 KBytes, we cannot assume the index bits of the virtual address and the physical address are going to be the same.
What should be the minimum page size in this system to allow for the
overlap of the TLB access and the cache access?
16 KBytes (cache size divided by associativity: 64 KBytes / 4).
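A quick sanity check of that arithmetic in C, just restating the cache geometry from the question:

    #include <stdio.h>

    int main(void)
    {
        const unsigned cache_size    = 64 * 1024; /* 64 KiB total       */
        const unsigned associativity = 4;         /* 4-way set assoc.   */
        const unsigned line_size     = 64;        /* 64-byte cache line */

        unsigned sets     = cache_size / associativity / line_size; /* 256 sets  */
        unsigned min_page = sets * line_size;   /* bytes covered by index+offset */

        printf("sets = %u, minimum page size = %u bytes (%u KiB)\n",
               sets, min_page, min_page / 1024);
        return 0;
    }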

Virtual Memory Page Replacement Algorithms

I have a project where I am asked to develop an application to simulate how different page replacement algorithms perform (with varying working set size and stability period). My results:
Vertical axis: page faults
Horizontal axis: working set size
Depth axis: stable period
Are my results reasonable? I expected LRU to have better results than FIFO; here they are approximately the same.
For random, the stability period and working set size don't seem to affect the performance at all? I expected graphs similar to FIFO and LRU, just with worse performance. If the reference string is highly stable (few branches) and has a small working set size, it should still have fewer page faults than an application with many branches and a big working set size, shouldn't it?
More Info
My Python Code | The Project Question
Length of reference string (RS): 200,000
Size of virtual memory (P): 1000
Size of main memory (F): 100
Number of times a page is referenced (m): 100
Size of working set (e): 2 - 100
Stability (t): 0 - 1
Working set size (e) and stable period (t) affect how the reference string is generated.
|-----------|--------|------------------------------------|
0           p        p+e                                  P-1
So assume the above is the virtual memory of size P. To generate the reference string, the following algorithm is used (a C sketch of this generator follows the steps):
Repeat until the reference string is generated:
    pick m numbers in [p, p+e] (m simulates the number of times a page is referenced)
    pick a random number r, 0 <= r < 1
    if r < t:
        generate a new p
    else:
        p = (p + 1) % P
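Here is a hedged C sketch of that generator, only to make the steps above concrete; the real project is in Python, and the parameter values below are illustrative assumptions rather than the ones used for the graphs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define RS_LEN 200000 /* length of the reference string                  */
    #define P      1000   /* virtual memory size in pages                    */
    #define E      20     /* working set size (e); illustrative value        */
    #define M      100    /* references per locality visit (m)               */
    #define T      0.1    /* probability of jumping to a new p (parameter t) */

    int main(void)
    {
        srand((unsigned)time(NULL));

        int  p         = rand() % P; /* start of the current working set */
        long generated = 0;

        while (generated < RS_LEN) {
            /* m references drawn from the current locality [p, p+e], wrapping at P */
            for (int i = 0; i < M && generated < RS_LEN; i++, generated++)
                printf("%d\n", (p + rand() % (E + 1)) % P);

            /* with probability t jump to a new locality, otherwise drift by one page */
            if ((double)rand() / RAND_MAX < T)
                p = rand() % P;
            else
                p = (p + 1) % P;
        }
        return 0;
    }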
UPDATE (In response to #MrGomez's answer)
However, recall how you seeded your input data: using random.random, thus giving you a uniform distribution of data with your controllable level of entropy. Because of this, all values are equally likely to occur, and because you've constructed this in floating point space, recurrences are highly improbable.
I am using random, but it is not totally random either; references are generated with some locality through the use of the working set size and number-of-page-references parameters, aren't they?
I tried increasing numPageReferenced relative to numFrames in the hope that it would reference pages currently in memory more often, thus showing the performance benefit of LRU over FIFO, but that didn't give me a clear result either. Just FYI, I tried the same app with the following parameters (the pages/frames ratio is kept the same; I reduced the size of the data to make things faster):
--numReferences 1000 --numPages 100 --numFrames 10 --numPageReferenced 20
The result is still not much of a difference. Am I right to say that if I increase numPageReferenced relative to numFrames, LRU should perform better, because it would be referencing pages already in memory more often? Or am I misunderstanding something?
For random, I am thinking along these lines: suppose there is high stability and a small working set. It means that the pages referenced are very likely to be in memory, so the need for the page replacement algorithm to run is lower?
Hmm, maybe I have to think about this more :)
UPDATE: Thrashing less obvious at lower stability
Here I am trying to show the thrashing that occurs as the working set size exceeds the number of frames (100) in memory. However, notice that thrashing appears less obvious at lower stability (high t); why might that be? Is the explanation that as stability becomes low, page faults approach their maximum, so it does not matter as much what the working set size is?
These results are reasonable given your current implementation. The rationale behind that, however, bears some discussion.
When considering algorithms in general, it's most important to consider the properties of the algorithms currently under inspection. Specifically, note their corner cases and best and worst case conditions. You're probably already familiar with this terse method of evaluation, so this is mostly for the benefit of those reading here who may not have an algorithmic background.
Let's break your question down by algorithm and explore their component properties in context:
FIFO shows an increase in page faults as the size of your working set (length axis) increases.
This is correct behavior, consistent with Bélády's anomaly for FIFO replacement. As the size of your working page set increases, the number of page faults should also increase.
FIFO shows an increase in page faults as system stability (1 - depth axis) decreases.
Noting your algorithm for seeding stability (if random.random() < stability), your results become less stable as stability (S) approaches 1. As you sharply increase the entropy in your data, the number of page faults, too, sharply increases and propagates the Bélády's anomaly.
So far, so good.
LRU shows consistency with FIFO. Why?
Note your seeding algorithm. Standard LRU is most optimal when you have paging requests that are structured to smaller operational frames. For ordered, predictable lookups, it improves upon FIFO by aging off results that no longer exist in the current execution frame, which is a very useful property for staged execution and encapsulated, modal operation. Again, so far, so good.
However, recall how you seeded your input data: using random.random, thus giving you a uniform distribution of data with your controllable level of entropy. Because of this, all values are equally likely to occur, and because you've constructed this in floating point space, recurrences are highly improbable.
As a result, your LRU is perceiving each element to occur a small number of times, then to be completely discarded when the next value was calculated. It thus correctly pages each value as it falls out of the window, giving you performance exactly comparable to FIFO. If your system properly accounted for recurrence or a compressed character space, you would see markedly different results.
For random, stability period and working set size doesn't seem to affect the performance at all. Why are we seeing this scribble all over the graph instead of giving us a relatively smooth manifold?
In the case of a random paging scheme, you age off each entry stochastically. Purportedly, this should give us some form of a manifold bound to the entropy and size of our working set... right?
Or should it? For each set of entries, you randomly assign a subset to page out as a function of time. This should give relatively even paging performance, regardless of stability and regardless of your working set, as long as your access profile is again uniformly random.
So, based on the conditions you are checking, this is entirely correct behavior consistent with what we'd expect. You get an even paging performance that doesn't degrade with other factors (but, conversely, isn't improved by them) that's suitable for high load, efficient operation. Not bad, just not what you might intuitively expect.
So, in a nutshell, that's the breakdown as your project is currently implemented.
As an exercise in further exploring the properties of these algorithms in the context of different dispositions and distributions of input data, I highly recommend digging into scipy.stats to see what, for example, a Gaussian or logistic distribution might do to each graph. Then, I would come back to the documented expectations of each algorithm and draft cases where each is uniquely most and least appropriate.
All in all, I think your teacher will be proud. :)

Why LRU doesn't suffer Belady's Anomaly?

I have a question about page replacement algorithms. FIFO suffers from Belady's Anomaly but LRU doesn't. Does anyone know why LRU doesn't suffer? I've been searching for the reason on the internet but no luck.
Because LRU is a stack algorithm: the set of pages held in memory with k frames is always a subset of the set held with k + n frames. Thus, any page fault that occurs with k + n frames will also occur with k frames, which in turn means that LRU cannot suffer from Belady's anomaly.
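As an illustration of that inclusion property (not code from the thread), here is a small self-contained C sketch that runs LRU with 4 and 5 frames over the same random reference string and checks, after every access, that the 4-frame resident set is a subset of the 5-frame resident set:

    #include <stdio.h>
    #include <stdlib.h>

    #define NPAGES 16    /* distinct pages in the reference string */
    #define REFS   10000 /* length of the reference string         */

    /* LRU state: pages[0] is the most recently used, pages[n-1] the least. */
    typedef struct { int pages[NPAGES]; int n; int frames; long faults; } Lru;

    static void lru_init(Lru *l, int frames)
    {
        l->n = 0; l->frames = frames; l->faults = 0;
    }

    static void lru_access(Lru *l, int page)
    {
        int i, pos = -1;
        for (i = 0; i < l->n; i++)
            if (l->pages[i] == page) { pos = i; break; }

        if (pos < 0) {                    /* page fault */
            l->faults++;
            if (l->n < l->frames) l->n++; /* fill a free frame...                  */
            pos = l->n - 1;               /* ...or evict the LRU page (last slot)  */
        }
        for (i = pos; i > 0; i--)         /* move the accessed page to the front   */
            l->pages[i] = l->pages[i - 1];
        l->pages[0] = page;
    }

    static int resident(const Lru *l, int page)
    {
        for (int i = 0; i < l->n; i++)
            if (l->pages[i] == page) return 1;
        return 0;
    }

    int main(void)
    {
        Lru small, big;
        lru_init(&small, 4);
        lru_init(&big, 5);
        srand(42);

        for (int r = 0; r < REFS; r++) {
            int page = rand() % NPAGES;
            lru_access(&small, page);
            lru_access(&big, page);
            /* inclusion property: everything resident with 4 frames
             * must also be resident with 5 frames */
            for (int i = 0; i < small.n; i++)
                if (!resident(&big, small.pages[i])) {
                    printf("inclusion violated!\n");
                    return 1;
                }
        }
        printf("inclusion held for %d references; faults: %ld (4 frames) vs %ld (5 frames)\n",
               REFS, small.faults, big.faults);
        return 0;
    }

Since the 4-frame set is always contained in the 5-frame set, adding a frame can only remove faults, never add them, which is exactly why Belady's anomaly cannot occur for LRU.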
Because FIFO assumes that the mere fact that a page has been occupying memory for a long time means it is the safest to replace, which in reality simply isn't the case. Where FIFO fails is that, statistically, a page that has been referenced frequently is more likely to be referenced again than a page that merely arrived recently. In other words, frequency of use is a far better predictor of future references than age.
Similar to Caspar's answer; however, I found the explanation from my textbook (slightly edited) to be a bit clearer.
[LRU belongs] to a class of page-replacement algorithms, called stack algorithms, [which] can never exhibit Belady’s anomaly.
A stack algorithm is an algorithm for which it can be shown that the set of pages in memory for N frames is always a subset of the set of pages that would be in memory with N + 1 frames. [Therefore an additional frame will never cause an additional page fault.]
For LRU replacement, the set of pages in memory would be the N most recently referenced pages. If the number of frames is increased, these N pages will still be the most recently referenced and so will still be in memory.
Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating System Concepts (9th ed.). Singapore: Wiley.