Relation between page size and page faults

I was studying from William Stallings' book, where it says that if we increase the page size, the number of page faults first increases, and then, once the page size approaches the size of the process, the page faults decrease.
I cannot understand why page faults increase when the page size is increased. Can anyone please explain the reason?
Thank you.

Pages are fixed-size 'chunks' formed by dividing logical memory. If we increase the page size, the number of pages decreases, and with a fixed amount of main memory so does the number of pages that can be resident at once. Consider an analogy: if you have to divide a large piece of bread among several people, you want everyone to get a piece; if you cut it into very large chunks there will not be enough pieces to go around and some people will stay hungry. Similarly, when fewer pages fit in memory, the CPU has fewer resident pages it can refer to, and the number of page faults increases. Once the page size grows to the size of the process itself, however, roughly one page per process is enough, so the CPU can refer to the whole process with almost no page faults.
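To make the "not enough pieces" point concrete, here is a tiny sketch (hypothetical numbers, not from Stallings) of how the number of resident frames shrinks as the page size grows when physical memory stays fixed:

    # Hypothetical figures: 64 KiB of physical memory, various page sizes.
    MAIN_MEMORY = 64 * 1024  # bytes

    for page_size in (512, 1024, 4096, 16384, 65536):
        frames = MAIN_MEMORY // page_size  # frames that can be resident at once
        print(f"page size {page_size:>6} B -> {frames:>3} resident frames")

Fewer resident frames means fewer distinct pages of the process can be kept in memory at the same time, which is where the extra faults come from.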

Related

Paged virtual memory

I am currently studying exam questions but am stuck on this one; I hope someone can help me understand it.
Question: Assume that we have a paged virtual memory with a page size of 4Ki byte.
Assume that each process has four segments (for example: code, data, stack,
extra) and that these can be of arbitrary but given size. How much will the
operating system lose in internal fragmentation?
The answer is: Each segment will on average give rise to 2Ki byte of fragmentation.
This will on average mean 8 Ki byte per process.
If we for example have 100 processes this is a total loss of 800 Ki byte.
My question:
How does the answer arrive at 2Ki byte of fragmentation for each segment? How is it possible to calculate that size at all, or am I missing something here?
And if we lose 8Ki byte per process, that would not even fit in a 4Ki byte page. Isn't that actually external fragmentation?
This is academic BS designed to make things confusing.
They are saying that, probability-wise, the last page of each section in the executable file will on average only use half of the page. You cannot compute that size exactly; they are just doing a simple averaging argument, and it presumes particular behaviour of the linker.
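For what it's worth, the averaging they intend can be written out directly; a quick sketch using the numbers from the question:

    PAGE_SIZE = 4 * 1024        # 4 Ki byte pages, as in the question
    SEGMENTS_PER_PROCESS = 4    # code, data, stack, extra
    PROCESSES = 100

    # Assumption from the model answer: the last page of each segment is,
    # on average, only half full, so the expected waste per segment is half a page.
    waste_per_segment = PAGE_SIZE / 2                              # 2 Ki byte
    waste_per_process = waste_per_segment * SEGMENTS_PER_PROCESS   # 8 Ki byte
    total_waste = waste_per_process * PROCESSES                    # 800 Ki byte

    print(waste_per_segment / 1024, waste_per_process / 1024, total_waste / 1024)
    # 2.0 8.0 800.0

Note that the 8 Ki byte per process is spread across four different last pages (one per segment), so it is internal fragmentation, not external.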

maximum number of classes for ColumnDataClassifier

Is there a limit on the maximum number of classes I can have when using ColumnDataClassifier? I have addresses that I want to assign to about 10k orgs, but I kept running into memory issues even after I set the -Xmx number to the maximum.
There isn't an explicit limit on the size of the label set, but 10k is an extremely large set, and I am not surprised you are having memory issues. You should try some experiments with substantially smaller label sets (~100 labels) and see if your issues go away. I don't know how many labels will work in practice, but I doubt it's anywhere near 10,000. I would try much smaller sets just to understand how the memory usage grows as the label set size grows.
You may have to have a hierarchy of labels and different classifiers. You could imagine the first label being "California-organization", and then having a second classifier to select the various California organizations, etc...

How critical is page size in virtual memory

So I just read that virtual addresses are divided into (1) a page number and (2) an offset.
I also read that the page number lets you find the right page, and the offset picks out the byte within that page whose physical address you want.
So, for example, with a 4KB page we have 12 bits reserved for the offset, since 2^12 = 4096 bytes = 4KB.
I get the concepts. But I don't get the reasoning behind using pages.
I mean, whether we use a 4KB page or an 8KB page, why couldn't we use a 1-byte page?
I guess that would make every read and write happen byte by byte, which you could say would slow things down.
But aren't we already doing the same thing by first finding the page and then finding the correct byte with the offset?
What is the motivation behind coming up with pages bigger than one byte?
I get the reason behind the use of virtual memory: to avoid swapping. But why couldn't we do this with a smaller, more direct, one-byte page?
This is the same question as cluster sizes on disks.
Larger pages => Lower overhead (smaller page tables)
Smaller pages => Greater overhead
Larger pages => More wasted memory and more disk reading/writing on paging
Smaller pages => Less wasted memory and less disk reading/writing on paging
In ye olde days page sizes tended to be much smaller than they are today (512 bytes being common). As memory has grown, the wasted-memory problems of paging have diminished while the overhead problem (due to more pages) has grown. Thus we have larger page sizes.
A one-byte page gets you nothing. You have to write to disk in full disk blocks (typically 512 bytes or larger), so paging single bytes would be tediously slow.
Now add in page protection and the page tables. With one-byte pages, there would be more page-table overhead than usable memory.
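To put rough numbers on that last point, here is a back-of-the-envelope sketch (it assumes a 32-bit address space and 4-byte page table entries, which are illustrative choices, not fixed facts):

    ADDRESS_SPACE = 2 ** 32   # assume a 32-bit (4 GiB) virtual address space
    PTE_SIZE = 4              # assume 4 bytes per page table entry

    for page_size in (1, 512, 4096):
        entries = ADDRESS_SPACE // page_size   # one entry per page
        table_bytes = entries * PTE_SIZE
        print(f"page size {page_size:>4} B -> {entries:>13,} entries, "
              f"flat page table ~ {table_bytes / 2**20:,.0f} MiB")

With one-byte pages the flat table alone comes to 16 GiB, four times the 4 GiB address space it describes; with 4KB pages it drops to about 4 MiB.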

If I increase the page size then internal fragmentation will increase. How?

I am currently learning about operating systems. In paging, if we increase
the page size, how does internal fragmentation increase?
Quoting Wikipedia:
Rarely do processes require the use of an exact number of pages. As a result, the last page will likely only be partially full, wasting some amount of memory. Larger page sizes increase the potential for wasted memory this way, as more potentially unused portions of memory are loaded into main memory. Smaller page sizes ensure a closer match to the actual amount of memory required in an allocation.
As an example, assume the page size is 1024KB. If a process allocates 1025KB, two pages must be used, resulting in 1023KB of unused space (where one page fully consumes 1024KB and the other only 1KB).
So let's say you have a process with a total memory footprint of 9*1024KB + 100KB (text, data, stack, heap). If you use 1024KB as the page size, there will be 10 page faults on behalf of the process throughout its execution, and the internal fragmentation is ~924KB.
If instead of 1024KB you use a page size of 102400KB (100 times the previous size), then throughout the process's lifetime there will be only 1 page fault, but the internal fragmentation is enormous (about 93084KB). So that's how page size causes internal fragmentation. Although you save the time spent on all those page faults, you spend more time swapping this really big page between swap space and main memory, as other processes will be contending for space in main memory.
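If it helps, the arithmetic can be reproduced in a few lines of Python (same hypothetical sizes as above, in KB):

    import math

    footprint = 9 * 1024 + 100   # process footprint in KB (text, data, stack, heap)

    for page_size in (1024, 102400):                     # KB
        pages = math.ceil(footprint / page_size)          # whole pages must be allocated
        internal_frag = pages * page_size - footprint     # unused space in the last page
        print(f"page size {page_size:>6} KB -> {pages:>2} page(s), "
              f"internal fragmentation {internal_frag} KB")
    # page size   1024 KB -> 10 page(s), internal fragmentation 924 KB
    # page size 102400 KB ->  1 page(s), internal fragmentation 93084 KB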
We can't take a fraction of a page; a full page must always be allocated. That's why, when the page size increases, the internal fragmentation (the unused part of the last page) also increases.

Virtual Memory Page Replacement Algorithms

I have a project where I am asked to develop an application to simulate how different page replacement algorithms perform (with varying working set size and stability period). My results:
Vertical axis: page faults
Horizontal axis: working set size
Depth axis: stable period
Are my results reasonable? I expected LRU to have better results than FIFO, but here they are approximately the same.
For random, the stability period and working set size don't seem to affect the performance at all? I expected graphs similar to FIFO & LRU, just with worse performance. If the reference string is highly stable (few branches) and has a small working set size, shouldn't it still have fewer page faults than an application with many branches and a big working set size?
More Info
My Python Code | The Project Question
Length of reference string (RS): 200,000
Size of virtual memory (P): 1000
Size of main memory (F): 100
Number of times a page is referenced (m): 100
Size of working set (e): 2 - 100
Stability (t): 0 - 1
Working set size (e) & stable period (t) affects how reference string are generated.
|-----------|--------|------------------------------------|
0 p p+e P-1
Assume the above is the virtual memory of size P. To generate reference strings, the following algorithm is used:
Repeat until the reference string is generated:
    pick m numbers in [p, p+e]      (m is the number of times a page is referenced)
    pick a random number r, 0 <= r < 1
    if r < t:
        generate a new p
    else:
        p = (p + 1) % P
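A minimal Python sketch of that generator (a reconstruction rather than the linked code; it assumes "generate a new p" means picking a fresh random locality start):

    import random

    def generate_reference_string(P, e, t, m, length):
        rs, p = [], 0
        while len(rs) < length:
            # m references drawn from the current locality window [p, p+e]
            rs.extend(random.randint(p, p + e) % P for _ in range(m))
            if random.random() < t:
                p = random.randint(0, P - 1)   # jump to a brand-new locality
            else:
                p = (p + 1) % P                # otherwise drift the window forward
        return rs[:length]

    refs = generate_reference_string(P=1000, e=20, t=0.1, m=100, length=200_000)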
UPDATE (In response to #MrGomez's answer)
However, recall how you seeded your input data: using random.random,
thus giving you a uniform distribution of data with your controllable
level of entropy. Because of this, all values are equally likely to
occur, and because you've constructed this in floating point space,
recurrences are highly improbable.
I am using random, but it is not totally random either; references are generated with some locality through the use of the working set size and number-of-page-references parameters.
I tried increasing numPageReferenced relative to numFrames in the hope that pages currently in memory would be referenced more often, thus showing the performance benefit of LRU over FIFO, but that didn't give me a clear result either. Just FYI, I tried the same app with the following parameters (the pages/frames ratio is kept the same; I reduced the size of the data to make things run faster).
--numReferences 1000 --numPages 100 --numFrames 10 --numPageReferenced 20
The result is still not much of a difference. Am I right to say that if I increase numPageReferenced relative to numFrames, LRU should perform better, since it would be referencing pages already in memory more often? Or perhaps I am misunderstanding something?
For random, I am thinking along these lines: suppose there's high stability and a small working set. That means the pages referenced are very likely to be in memory already, so the need for the page replacement algorithm to run is lower?
Hmm, maybe I've got to think about this more :)
UPDATE: Thrashing less obvious with lower stability
Here, I am trying to show the thrashing that occurs as the working set size exceeds the number of frames (100) in memory. However, notice that thrashing appears less obvious with lower stability (high t). Why might that be? Is the explanation that as stability becomes low, page faults approach their maximum, so the working set size no longer matters as much?
These results are reasonable given your current implementation. The rationale behind that, however, bears some discussion.
When considering algorithms in general, it's most important to consider the properties of the algorithms currently under inspection. Specifically, note their corner cases and best- and worst-case conditions. You're probably already familiar with this terse method of evaluation, so this is mostly for the benefit of readers who may not have an algorithmic background.
Let's break your question down by algorithm and explore their component properties in context:
FIFO shows an increase in page faults as the size of your working set (length axis) increases.
This is correct behavior, consistent with Bélády's anomaly for FIFO replacement. As the size of your working page set increases, the number of page faults should also increase.
FIFO shows an increase in page faults as system stability (1 - depth axis) decreases.
Noting your algorithm for seeding stability (if random.random() < stability), your results become less stable as stability (S) approaches 1. As you sharply increase the entropy in your data, the number of page faults, too, sharply increases, propagating Bélády's anomaly.
So far, so good.
LRU shows consistency with FIFO. Why?
Note your seeding algorithm. Standard LRU is most optimal when you have paging requests that are structured to smaller operational frames. For ordered, predictable lookups, it improves upon FIFO by aging off results that no longer exist in the current execution frame, which is a very useful property for staged execution and encapsulated, modal operation. Again, so far, so good.
However, recall how you seeded your input data: using random.random, thus giving you a uniform distribution of data with your controllable level of entropy. Because of this, all values are equally likely to occur, and because you've constructed this in floating point space, recurrences are highly improbable.
As a result, your LRU perceives each element as occurring a small number of times and then being completely discarded when the next value is calculated. It thus correctly pages each value as it falls out of the window, giving you performance exactly comparable to FIFO. If your system properly accounted for recurrence or a compressed character space, you would see markedly different results.
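This is easy to confirm with a small, self-contained experiment (my own sketch, not your linked code): run FIFO and LRU over a uniformly random reference string and the fault counts come out essentially identical.

    import random
    from collections import OrderedDict, deque

    def count_faults(refs, frames, policy):
        """Count page faults for a FIFO or LRU policy with `frames` frames."""
        faults = 0
        if policy == "FIFO":
            resident, queue = set(), deque()
            for page in refs:
                if page not in resident:
                    faults += 1
                    if len(resident) == frames:
                        resident.discard(queue.popleft())
                    resident.add(page)
                    queue.append(page)
        else:  # LRU
            resident = OrderedDict()
            for page in refs:
                if page in resident:
                    resident.move_to_end(page)        # refresh recency on a hit
                else:
                    faults += 1
                    if len(resident) == frames:
                        resident.popitem(last=False)  # evict the least recently used page
                    resident[page] = None
        return faults

    # Uniformly random references: no reuse pattern for LRU to exploit.
    refs = [random.randrange(1000) for _ in range(200_000)]
    print(count_faults(refs, 100, "FIFO"), count_faults(refs, 100, "LRU"))

With uniformly random references, roughly 90% of accesses miss when only 100 of the 1000 pages can be resident, so both counts land around 180,000 regardless of policy.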
For random, the stability period and working set size don't seem to affect the performance at all. Why are we seeing this scribble all over the graph instead of a relatively smooth manifold?
In the case of a random paging scheme, you age off each entry stochastically. Purportedly, this should give us some form of a manifold bound to the entropy and size of our working set... right?
Or should it? For each set of entries, you randomly assign a subset to page out as a function of time. This should give relatively even paging performance, regardless of stability and regardless of your working set, as long as your access profile is again uniformly random.
So, based on the conditions you are checking, this is entirely correct behavior consistent with what we'd expect. You get an even paging performance that doesn't degrade with other factors (but, conversely, isn't improved by them) that's suitable for high load, efficient operation. Not bad, just not what you might intuitively expect.
So, in a nutshell, that's the breakdown as your project is currently implemented.
As an exercise in further exploring the properties of these algorithms in the context of different dispositions and distributions of input data, I highly recommend digging into scipy.stats to see what, for example, a Gaussian or logistic distribution might do to each graph. Then, I would come back to the documented expectations of each algorithm and draft cases where each is uniquely most and least appropriate.
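As a hedged starting point for that exercise, you could draw references from a discretised Gaussian rather than a uniform distribution, so that some pages recur far more often than others (the parameters below are arbitrary):

    import numpy as np
    from scipy import stats

    P = 1000                                        # virtual memory size, as in the question
    gaussian = stats.norm(loc=P / 2, scale=P / 20)  # a "hot spot" in the middle of the space
    refs = np.clip(gaussian.rvs(size=200_000).astype(int), 0, P - 1)

    # Feed `refs` into the same FIFO / LRU / random simulators; with this kind of
    # skewed reuse, LRU may start to pull ahead of FIFO.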
All in all, I think your teacher will be proud. :)