So I just read that virtual addresses are divided into two parts: 1) a page number and 2) an offset.
I also read that the page number lets you find the right page, and the offset then picks the exact byte within it whose physical address you want.
So for example, with a 4KB page, we have 12 bits reserved for the offset, since 2^12 = 4096 bytes = 4KB.
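For example, here's roughly how I picture the split, assuming 32-bit virtual addresses and 4KB pages (the address value is made up):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t vaddr  = 0x00403A7F;       /* a made-up 32-bit virtual address */
    uint32_t offset = vaddr & 0xFFF;    /* low 12 bits: byte offset within the 4KB page */
    uint32_t vpn    = vaddr >> 12;      /* remaining 20 bits: virtual page number */
    printf("vaddr 0x%08X -> page number 0x%05X, offset 0x%03X\n",
           (unsigned)vaddr, (unsigned)vpn, (unsigned)offset);
    return 0;
}
```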
I get the concepts. But I don't get the reasoning behind using pages.
I mean, instead of a 4KB or 8KB page, why couldn't we use a 1-byte page?
I guess that would make every read and write happen byte by byte, which you could say would slow things down.
But aren't we already doing the same thing by first finding the page and then finding the correct byte with the offset?
What is the motivation behind coming up with pages bigger than 1 byte?
I get the reason behind using virtual memory. But why couldn't we achieve it with smaller, more direct, one-byte pages?
This is the same question as cluster sizes on disks.
Larger pages => Lower overhead (smaller page tables)
Smaller pages => Greater overhead
Larger pages => More wasted memory and more disk reading/writing on paging
Smaller pages => Less wasted memory and less disk reading/writing on paging
In ye olde days page sizes tended to be much smaller than they are today (512 bytes being common). As memory has grown, the wasted-memory problem has diminished while the overhead problem (more pages, hence bigger page tables) has grown. Thus we have larger page sizes.
A one-byte page gets you nothing. You have to write to disk in full disk blocks (typically 512 bytes or larger), so paging single bytes would be tediously slow.
Now add in page protection and the page tables. With one-byte pages, there would be more page-table overhead than usable memory.
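To put rough numbers on that, here is a minimal sketch, assuming a 32-bit virtual address space, a flat (single-level) page table, and 4-byte page table entries; all three are assumptions chosen just to make the arithmetic concrete:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const uint64_t addr_space   = 1ULL << 32;         /* assumed 32-bit virtual address space */
    const uint64_t pte_size     = 4;                  /* assumed 4-byte page table entries */
    const uint64_t page_sizes[] = { 1, 512, 4096 };   /* 1-byte vs. historical vs. common */

    for (int i = 0; i < 3; i++) {
        uint64_t entries = addr_space / page_sizes[i];  /* one PTE per page in a flat table */
        uint64_t table   = entries * pte_size;          /* total page table size in bytes */
        printf("page size %4llu B -> %10llu PTEs -> %6llu MB of page table\n",
               (unsigned long long)page_sizes[i],
               (unsigned long long)entries,
               (unsigned long long)(table >> 20));
    }
    return 0;
}
```

With 1-byte pages the flat table needs 2^32 entries, i.e. 16GB of page table to describe a 4GB address space: four times more page-table overhead than there is memory to manage.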
Homework question, so please just nudge me in the right direction.
Consider a system with physically-addressed caches, and assume that 40-bit virtual addresses and 32-bit physical addresses are used, and the memory is byte-addressable. Further assume that the cache is 4-way set-associative, the cache line size is 64 Bytes and the total size of the cache is 64 KBytes.
What should be the minimum page size in this system to allow for the overlap of the TLB access and the cache access?
I've been stuck on this question and have no idea how to even begin. Can someone give me a hint towards finding the solution?
I think the most important piece of information in the question is
overlap of the TLB access and the cache access
This means we access the cache at the same time we access the TLB. In practice, what we really do is index the cache with index bits taken from the virtual address, and by the time we have located the set in the cache, we will have the translation (the physical address) from the TLB. Then we can do the tag comparison against the physical address. In other words, the cache acts as a virtually indexed, physically tagged (VIPT) cache.
Even though the scheme sounds efficient, the thing to look out for is that the address bits used to index the cache (the set-index bits plus the line-offset bits) cannot extend beyond the bits that represent the page offset. Simply put, the page size puts an upper limit on how much of the address the cache may consume before translation.
Now coming back to your question,
it's a 64KByte cache, 4-way set associative, with 64-byte cache lines.
Number of sets = 64KBytes / (4 ways * 64 Bytes per line) = 2^8 sets.
That gives 8 set-index bits plus 6 line-offset bits (2^6 = 64), so the cache consumes the low 8 + 6 = 14 address bits before translation. If a page is 2^14 bytes = 16KBytes or bigger, all 14 of those bits lie within the page offset and are identical in the virtual and the physical address, so we can use this mechanism. If a page is smaller than 16KBytes, we cannot assume the index bits of the virtual address and the physical address are going to be the same.
What should be the minimum page size in this system to allow for the overlap of the TLB access and the cache access?
16KBytes (the size of one way of the cache: 64KBytes / 4).
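Here is a small sketch of that arithmetic; the cache parameters are the ones from the question, and the variable names are mine:

```c
#include <stdio.h>

int main(void) {
    const unsigned cache_size = 64 * 1024;   /* 64 KBytes total */
    const unsigned ways       = 4;           /* 4-way set-associative */
    const unsigned line_size  = 64;          /* 64-byte cache lines */

    unsigned sets = cache_size / (ways * line_size);   /* 256 sets */

    unsigned index_bits = 0, offset_bits = 0;
    for (unsigned s = sets;      s > 1; s >>= 1) index_bits++;   /* log2(256) = 8 */
    for (unsigned l = line_size; l > 1; l >>= 1) offset_bits++;  /* log2(64)  = 6 */

    /* Every bit used to pick a set and a byte must lie inside the page offset,
       i.e. must be identical in the virtual and the physical address. */
    unsigned min_page = 1u << (index_bits + offset_bits);        /* 2^14 = 16 KBytes */

    printf("sets=%u, index bits=%u, line-offset bits=%u, min page=%u bytes\n",
           sets, index_bits, offset_bits, min_page);
    return 0;
}
```

Note that the result equals the size of one cache way (64KB / 4 ways = 16KB), which is the usual way this constraint is stated.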
I am trying to understand both paradigms of memory management; however, I fail to see the big picture and the difference between the two. Paging consists of bringing fixed-size pages from secondary storage into primary storage in order to do some task requested by a process. Segmentation consists of assigning each unit of a process its own address space so that it is allowed to grow. I don't quite see how they are related, and that's because there are still a lot of holes in my understanding. Can someone fill them in?
I think you have something confused. One problem you have is that the term "segment" has multiple meanings.
Segmentation is a method of memory management. Memory is managed in segments that are of variable or fixed length, depending upon the processor. Segments originated on 16-bit processors as a means to access more than 64K of memory.
On the PDP-11, programmers used segments to map different memory into the 64K address space. At any given time a process could only access 64K of memory but the memory that made up that 64K could change.
The 8086 and its successors used segments with base registers. Each segment could hold 64K (a limit that grew with later processors), and a process could have 4 segments (more in later processors).
Paging allows a process to have a larger address space than there is physical memory available.
The 8086's successors used the kludge of paging on top of segments. However, that bit of ugliness has finally gone away in 64-bit mode.
You got your answer right there: paging deals with fixed-size pages of storage, while segmentation deals with variable-size units of a process. On systems that combine the two, a segment is made up of one or more pages.
I am recently learning Operating Systems. In paging, if we increase the page size, how will internal fragmentation increase?
Quoting Wikipedia:
Rarely do processes require the use of an exact number of pages. As a result, the last page will likely only be partially full, wasting some amount of memory. Larger page sizes increase the potential for wasted memory this way, as more potentially unused portions of memory are loaded into main memory. Smaller page sizes ensure a closer match to the actual amount of memory required in an allocation.
As an example, assume the page size is 1024KB. If a process allocates 1025KB, two pages must be used, resulting in 1023KB of unused space (where one page fully consumes 1024KB and the other only 1KB).
So let's say you have a process with a total memory footprint of 9*1024KB + 100KB (text, data, stack, heap). If you use 1024KB as the page size, there will be 10 page faults on behalf of the process throughout its execution, and the internal fragmentation is ~924KB (the last page holds only 100KB).
If instead of 1024KB you use a page size of 102400KB (100 times the previous size), there will be only 1 page fault throughout the process lifetime, but the internal fragmentation is huge (~93084KB). That is how page size causes internal fragmentation. Although you save the time spent on all those page faults, you spend more time swapping this really big page between swap space and main memory, as there will be other processes contending for space in main memory.
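If you want to check the arithmetic, here is a minimal sketch using the footprint and the two page sizes from the example above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const uint64_t footprint = 9 * 1024 + 100;    /* process footprint in KB, as above */
    const uint64_t page_kb[] = { 1024, 102400 };  /* the two page sizes discussed, in KB */

    for (int i = 0; i < 2; i++) {
        uint64_t pages = (footprint + page_kb[i] - 1) / page_kb[i]; /* round up: whole pages only */
        uint64_t frag  = pages * page_kb[i] - footprint;            /* unused space in last page */
        printf("page size %6llu KB -> %2llu pages, internal fragmentation %llu KB\n",
               (unsigned long long)page_kb[i],
               (unsigned long long)pages,
               (unsigned long long)frag);
    }
    return 0;
}
```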
We can't allocate a fraction of a page; a process always gets whole pages. That's why, as the page size increases, internal fragmentation also increases.
I studied from William Stallings' book. It says that if we increase the page size, the page fault rate first increases, and then, when the page size approaches the size of the process, the page fault rate decreases.
I am not able to understand why page faults increase when the page size increases. Can anyone please explain the reason?
Thank you.
Pages are fixed-size "chunks" formed by dividing logical memory. If we increase the page size, the number of pages (and of frames in the same physical memory) decreases. Consider an analogy: if you have to divide a large piece of bread among a few people, you have to make sure everyone gets a piece; if you cut it into large chunks, there will not be enough pieces to go around, and some people will remain hungry. Similarly, with fewer, larger frames, fewer distinct parts of the process can be resident at once, so the CPU has fewer resident addresses to refer to and the number of page faults increases. Once the page size reaches the size of the process, the whole process fits in a single page, so after it is loaded the CPU can reference it with no further page faults.
I had this problem in an exam today:
Suppose you have a computer system with a 38-bit logical address, page size of 16K, and 4 bytes per page table entry.
How many pages are there in the logical address space? Suppose we use two level paging and each page table can fit completely in a frame.
For the above mentioned system, give the breakup of logical address bits clearly indicating number of offset bits, page table index bits and page directory index bits.
Suppose we have a 32MB program such that the entire program and all necessary page tables (using two level paging) are in memory. How much memory (in number of frames) is used by the program, including its page tables?
How do I go about solving such a problem? Until now I would have thought page size = frame size, but that won't happen in this case.
Here is what I think:
Since the page size is 16K, my offset will be 14 bits (2^14 = 16K). Now how do I divide the rest of the bits, and what will be the frame size? Do I divide the rest of the bits in half?
2^38 / 16384 = 2^24 = 16,777,216 pages.
On one hand, the remaining 38 - log2(16384) = 24 bits of address may be reasonable to divide equally between the page directory and page table portions of the logical address, since such symmetry simplifies the design. On the other hand, each page table should have the same size as a page so it can be offloaded to disk in exactly the same way as normal/leaf pages containing program code and data. Fortunately, in this case using 12 bits for page directory indices and 12 bits for page table indices gets us both, since 2^12 entries * 4 bytes per page table entry = 16384 bytes.

Also, since a page address always has its 14 least significant bits set to zero due to natural page alignment, only 38 - 14 = 24 bits of the page address need to be stored in a page table entry, and that leaves 32 - 24 = 8 bits for the rest of the control data (present, supervisor/user, writable/non-writable, dirty, accessed, etc. bits). This is what we get assuming the physical address is also no longer than 38 bits; the system may have slightly more than 38 bits of physical address at the expense of having fewer control bits. Anyway, everything fits.

So, there: 38 = 12 (page directory index) + 12 (page table index) + 14 (offset).
32MB / 16KB = 2048 pages for the program itself. Each page table covers 2^12 = 4096 pages, so ceil(2048 / 4096) = 1 page table is enough for this program. Then there's also the page directory. 2048 + 1 + 1 = 2050 is how many frames are necessary to hold the entire program with its related page tables in memory.
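Here is a small sketch that reproduces all three numbers; the constants come straight from the question:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const unsigned va_bits   = 38;   /* logical address bits */
    const unsigned page_bits = 14;   /* 16K pages => 14 offset bits */
    const unsigned pt_bits   = 12;   /* 2^12 PTEs * 4 bytes = 16K, one table fills one frame */
    const unsigned pd_bits   = va_bits - pt_bits - page_bits;   /* 12 directory bits */

    uint64_t total_pages = 1ULL << (va_bits - page_bits);       /* 2^24 pages in the VA space */
    uint64_t prog_pages  = (32ULL << 20) >> page_bits;          /* 32MB / 16KB = 2048 pages */
    uint64_t tables      = (prog_pages + (1ULL << pt_bits) - 1) >> pt_bits; /* ceil -> 1 table */
    uint64_t frames      = prog_pages + tables + 1;             /* + 1 frame for the directory */

    printf("pages in logical address space: %llu\n", (unsigned long long)total_pages);
    printf("address split: %u dir + %u table + %u offset bits\n", pd_bits, pt_bits, page_bits);
    printf("frames used by the 32MB program: %llu\n", (unsigned long long)frames);
    return 0;
}
```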