I was learning how to take a snapshot of JavaScript heap memory and came across two terms:
"Summary view" and "Containment view".
The definitions in the documentation are not clear and don't help in understanding their usage.
Can anyone please help me get clarity on the following questions?
When to use the Summary view and the Containment view, with a small example
In the Containment view, the "retained" size of all elements doesn't sum up to the total size of the snapshot; where can I find the rest of the memory being used?
In the Summary view, the total retained memory size looks greater than the total heap size. Why?
What I understand is that we can't guarantee a large amount (larger than the page size) of contiguous memory. If the page table itself is so large that it can't be stored in one page, that is a problem. So we page the page table again, which is called a multilevel page table. But a multilevel page table is not a good choice if the address is wider than 32 bits, because more levels cost more computation.
To avoid this, a hashed page table is used.
From my understanding, the hashed page table's [indexable] size should fit within the page size, so for a large address size there are going to be lots of collisions. If the page offset is 12 bits and the address is 64 bits, a flat page table would consist of 2^52 entries, while the hash table would have around 2^12 entries (approximately; I don't know the exact calculation), which means a linked list of roughly 2^40 entries per index. How is this going to be feasible? My assumption is that the hash table is stored using other methods or elsewhere. The Operating System Concepts book didn't explain much about it, and neither did other sites.
I have read Operating System Concepts, ninth edition, page 380.
What I understand is that we can't guarantee a large amount (larger than the page size) of contiguous memory.
Why? Often a physical memory manager has to be able to handle the allocation of physically contiguous buffers for (some) device drivers.
So we page the page table again, which is called a multilevel page table. But a multilevel page table is not a good choice if the address is wider than 32 bits, because more levels cost more computation.
Why? Most CPUs use multilevel page tables; and then have a TLB ("translation look-aside buffer") to avoid the cost of looking things up in the page tables. Modern 80x86 goes further and also has higher level paging structure caches (in addition to TLBs).
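As a rough illustration of what the per-level cost actually is, here is a minimal sketch in C, assuming 4 KB pages and an x86-64-style 48-bit virtual address split into four 9-bit table indices plus a 12-bit offset (these constants are assumptions, not something stated in the thread):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch: split a 48-bit virtual address into the four 9-bit table
 * indices and the 12-bit page offset used by an x86-64-style
 * 4-level page walk. Real hardware does this (plus TLB lookups)
 * in dedicated logic; this only shows the arithmetic. */
int main(void) {
    uint64_t vaddr = 0x00007f1234567abcULL;      /* example address */

    uint64_t offset = vaddr & 0xfffULL;          /* bits 0-11  */
    uint64_t l1 = (vaddr >> 12) & 0x1ffULL;      /* bits 12-20 */
    uint64_t l2 = (vaddr >> 21) & 0x1ffULL;      /* bits 21-29 */
    uint64_t l3 = (vaddr >> 30) & 0x1ffULL;      /* bits 30-38 */
    uint64_t l4 = (vaddr >> 39) & 0x1ffULL;      /* bits 39-47 */

    printf("L4=%llu L3=%llu L2=%llu L1=%llu offset=%llu\n",
           (unsigned long long)l4, (unsigned long long)l3,
           (unsigned long long)l2, (unsigned long long)l1,
           (unsigned long long)offset);
    return 0;
}
```

The arithmetic itself is just shifts and masks; the real cost of a deeper table is the extra memory access per level, which is exactly what the TLB is there to avoid.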
From my understanding, the hashed page table's [indexable] size should fit within the page size, so for a large address size there are going to be lots of collisions. If the page offset is 12 bits and the address is 64 bits, a flat page table would consist of 2^52 entries, while the hash table would have around 2^12 entries (approximately; I don't know the exact calculation), which means a linked list of roughly 2^40 entries per index. How is this going to be feasible?
The thing is; if the translation isn't in the hash table (e.g. because of limited hash table size) usually the CPU generates a fault to ask the OS for assistance, and the OS figures out the translation and shoves it into the hash table (after evicting something else from the hash table to make room). Of course the OS will probably use its own multilevel page table to figure out the translation (to shove into the hash table); so the whole "hash table" thing ends up being a whole layer of annoying extra bloat (compared to CPUs that support multilevel page tables themselves).
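As a minimal sketch of that flow, assuming a chained hash table, 4 KB pages, and a made-up miss handler (none of the names or sizes come from a real CPU or OS):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of a chained hashed page table lookup with a software
 * fallback on a miss. All names and sizes are illustrative only. */
#define BUCKETS 4096                  /* 2^12 buckets, as in the question */

struct hpt_entry {
    uint64_t vpn;                     /* virtual page number (key)     */
    uint64_t pfn;                     /* physical frame number (value) */
    struct hpt_entry *next;           /* collision chain               */
};

static struct hpt_entry *buckets[BUCKETS];

/* Placeholder for the slow path: a real OS would walk its own
 * (e.g. multilevel) page table, evict an old hash entry, and
 * insert the new translation here. */
static uint64_t handle_hash_miss(uint64_t vpn) {
    (void)vpn;
    return 0;                         /* pretend everything maps to frame 0 */
}

uint64_t translate(uint64_t vaddr) {
    uint64_t vpn = vaddr >> 12;                      /* 4 KB pages */
    for (struct hpt_entry *e = buckets[vpn % BUCKETS]; e; e = e->next)
        if (e->vpn == vpn)                           /* chain hit  */
            return (e->pfn << 12) | (vaddr & 0xfff);
    /* Miss: hardware would fault here and let the OS refill the table. */
    return (handle_hash_miss(vpn) << 12) | (vaddr & 0xfff);
}

int main(void) {
    static struct hpt_entry e = { .vpn = 0x12345, .pfn = 0x42, .next = NULL };
    buckets[e.vpn % BUCKETS] = &e;                   /* pre-insert one entry */

    printf("0x%llx\n",
           (unsigned long long)translate((0x12345ULL << 12) | 0xabc));
    return 0;
}
```

The point is that the hash table only has to cache recently used translations; the long 2^40 chains from the question never materialise because the OS keeps the full mapping elsewhere and refills the table on demand.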
I am currently taking an Operating Systems course and will have my first exam tomorrow. The professor has provided us with a list of topics to be prepared for and one of them is:
Simple Heap Implementation
Based on the course material so far, I have an idea of what this entails but was wondering if anyone can possibly elaborate on this or direct me to some further resources to continue studying the topic.
What are some things I should be aware of and how can I go about implementing them?
Thanks
You can build your own memory manager using a linked list data structure. The heap is used for dynamic memory allocation; for example, malloc in C allocates memory from the heap.
In a dynamic storage allocation model, memory is made up of a series of variable-sized blocks. Some are allocated and some are free. So you will basically create linked lists (to be specific, doubly linked lists) for the free memory blocks and the allocated memory blocks.
Take a look at this and this for details. I suggest you get a good understanding of linked lists before doing anything else.
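If it helps, here is a minimal sketch in C of that idea, assuming one static arena, a singly linked list of blocks in address order, and first-fit allocation with no coalescing (simpler than the doubly linked, two-list scheme described above):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal first-fit free-list allocator over a static arena.
 * Simplified: no coalescing of adjacent free blocks, no alignment
 * handling beyond the header, not thread-safe. */
#define ARENA_SIZE (64 * 1024)

struct block {
    size_t size;          /* payload size in bytes       */
    int    free;          /* 1 if this block is free     */
    struct block *next;   /* next block in address order */
};

static uint8_t arena[ARENA_SIZE];
static struct block *head = NULL;

static void heap_init(void) {
    head = (struct block *)arena;
    head->size = ARENA_SIZE - sizeof(struct block);
    head->free = 1;
    head->next = NULL;
}

void *my_malloc(size_t size) {
    for (struct block *b = head; b != NULL; b = b->next) {
        if (!b->free || b->size < size)
            continue;
        /* Split the block if the remainder can hold a header + some data. */
        if (b->size >= size + sizeof(struct block) + 16) {
            struct block *rest = (struct block *)((uint8_t *)(b + 1) + size);
            rest->size = b->size - size - sizeof(struct block);
            rest->free = 1;
            rest->next = b->next;
            b->size = size;
            b->next = rest;
        }
        b->free = 0;
        return b + 1;                 /* payload starts after the header */
    }
    return NULL;                      /* out of memory */
}

void my_free(void *p) {
    if (p != NULL)
        ((struct block *)p - 1)->free = 1;   /* mark the block free */
}

int main(void) {
    heap_init();
    void *a = my_malloc(100);
    void *b = my_malloc(200);
    my_free(a);
    printf("a=%p b=%p\n", a, b);
    return 0;
}
```

A natural next step, and what the doubly linked list suggestion above is getting at, is to keep back-pointers so that a freed block can be merged with its neighbours cheaply.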
Could anyone explain to me what the MEM% displayed by pg_activity is?
I would like to know if MEM% represents 'postgres total memory usage' or 'the system memory usage'.
Thanks,
I am building an iPhone application. In the database I have 5000 records, of which I am displaying only 50 in the app. Would there be any memory issue if I create 5000 empty cells in the iPhone view initially, even though I am displaying only 50 rows of data?
If you build your table appropriately, you will only be using a handful to perhaps a dozen actual UITableViewCell objects which are constantly recycled as things show on screen.
Even 50 would be safe.
Having 5000 data objects in memory with 50 UITableViewCells should be pretty acceptable.
Especially if those data objects are small, or you are allowing CoreData to do some work for you with managing your data set.
The important thing is DO NOT MAKE 5000 TABLE CELL VIEWS. That is extremely poor practice.
The iPhone has a limited amount of memory, so you should always be careful to display only the data that is necessary for that view. You can implement infinite scrolling, where reaching the bottom of the screen triggers an event that loads the next 25-50 records.
http://nsscreencast.com/episodes/8-automatic-uitableview-paging
One thing you'll quickly learn with the canonical way of handling tables is that regardless of the size of your model (i.e., the number of rows you intend to create), only a handful of rows are actually created, so the memory footprint remains low.
In essence, the UITableView initially creates and renders a screenful of rows (plus a few more for good measure). When you begin scrolling down, the controller recognises that it needs to draw a new row. But, it also realises that rows from the top of the table have disappeared from view. So, rather than create a whole new cell it simply takes one of the cells no longer in view and reconfigures it with the new info. No matter how many rows your table has, only those few cells live in memory.
So in your case, the memory bottleneck will likely be the model that is feeding the cell configuration. If you loaded all your 5000 rows into memory at once then that may be slow and memory consuming. But there is help at hand: you get a hint from the table controller that basically tells you that it wants to set up the *n*th row. So your model can in effect be more targeted and only load the data you need. E.g., since you know the 15th row is being rendered, then go and grab the 15th row from your database rather than preloading the entire model up-front.
This is the approach I've used to create apps with many more than 5000 rows without the need for paging. Of course it depends on your dataset as to how your user may navigate.
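Keeping the examples in this thread in C, here is a rough, language-agnostic sketch of that recycling idea (the names and the pool size are made up; this is not the UIKit API): a fixed pool of cell structs is rebound to whichever rows are currently visible, so memory stays proportional to the screen, not to the 5000-row dataset.

```c
#include <stdio.h>

/* Language-agnostic sketch of view recycling: a fixed pool of "cells"
 * is rebound to whatever rows are visible, so memory use depends on
 * the screen height, not on the 5000-row dataset. Not the UIKit API. */
#define VISIBLE_CELLS 12          /* roughly one screenful plus spares */
#define TOTAL_ROWS    5000

struct cell {
    int  bound_row;               /* which model row this cell shows */
    char text[64];                /* whatever the cell renders       */
};

static struct cell pool[VISIBLE_CELLS];

/* Hypothetical data source: fetch just one row from the database. */
static void load_row(int row, char *out, size_t cap) {
    snprintf(out, cap, "record #%d", row);
}

/* Rebind the fixed pool to the rows starting at first_visible. */
static void show_rows_from(int first_visible) {
    for (int i = 0; i < VISIBLE_CELLS; i++) {
        int row = first_visible + i;
        if (row >= TOTAL_ROWS)
            break;
        struct cell *c = &pool[row % VISIBLE_CELLS];  /* reuse a cell */
        c->bound_row = row;
        load_row(row, c->text, sizeof c->text);
    }
}

int main(void) {
    show_rows_from(0);      /* initial screenful          */
    show_rows_from(4988);   /* "scrolled" near the bottom */
    printf("pool holds %d cells for %d rows\n", VISIBLE_CELLS, TOTAL_ROWS);
    return 0;
}
```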
With respect to operating systems and page tables, it seems there are four general approaches to paging and page tables:
Basic - A single page table which stores the page number and the offset
Hierarchical - A multi-tiered table which breaks up the virtual address into multiple parts
Hashed - A hashed page table, where multiple pages may often hash to the same entry
Inverted - The logical address also includes the PID, page number and offset. Then the PID is used to find the page in the table, and the number of rows down the table is added to the offset to find the physical address in main memory. (Rough, and probably terrible definition)
I am just wondering: what are the pros and cons of each method? It seems like Basic is the easiest method but may also take up more space in memory for a larger address space.
What else?
The key to building a usable page model is minimizing the unused space for entries that are not necessary. You want to minimize the amount of memory needed while keeping the computation cost of a memory lookup low.
Basic can take up a lot of memory (for a modern system using 4 GB of memory, that might amount to 300 MB for the table alone) and is therefore impractical.
Hierarchical reduces that memory a lot by only adding subtables that are actually in use. Still, every process has a root page table. And if the memory footprint of the processes is scattered, there may still be a lot of unnecessary entries in secondary tables. This is a far better solution regarding memory than Basic and introduces only a marginal computation increase.
Hashed does not work on its own because of hash collisions.
Inverted is the solution to make Hashed work. The memory use is very small (as big as a Basic table for a single process, plus some PID and chaining overhead). The problem is, if there is a hash collision (several processes use the same virtual address), you will have to follow the chain (just as in a linked list) until you find the entry with a matching PID. This may add a lot of computing overhead on top of the hash computation, but it keeps the memory footprint as small as possible.
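To make the chain-following concrete, here is a minimal sketch in C of a hashed, inverted-style lookup where each entry carries a PID and a virtual page number and collisions are resolved by walking a chain until both match; the sizes, structure layout, and hash function are all made up for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of an inverted/hashed page table lookup. Each entry stores
 * the (pid, vpn) pair it maps plus the frame it stands for; lookups
 * hash (pid, vpn) and walk the collision chain until both match.
 * All sizes and the hash function are illustrative only. */
#define FRAMES  1024              /* one entry per physical frame     */
#define BUCKETS 256

struct ipt_entry {
    uint32_t pid;
    uint64_t vpn;
    uint32_t frame;               /* which physical frame this entry is     */
    int32_t  next;                /* index of next entry in chain, -1 = end */
};

static struct ipt_entry table[FRAMES];
static int32_t bucket[BUCKETS];   /* head of each chain, -1 = empty */

static unsigned hash(uint32_t pid, uint64_t vpn) {
    return (unsigned)(((vpn * 2654435761u) ^ pid) % BUCKETS);
}

/* Returns the frame number, or -1 if the page is not resident
 * (a real system would raise a page fault here). */
int lookup(uint32_t pid, uint64_t vpn) {
    for (int32_t i = bucket[hash(pid, vpn)]; i != -1; i = table[i].next)
        if (table[i].pid == pid && table[i].vpn == vpn)
            return (int)table[i].frame;
    return -1;
}

int main(void) {
    for (int i = 0; i < BUCKETS; i++)
        bucket[i] = -1;

    /* Insert one mapping by hand: pid 7, virtual page 0x1234 -> frame 3. */
    table[3] = (struct ipt_entry){ .pid = 7, .vpn = 0x1234, .frame = 3,
                                   .next = bucket[hash(7, 0x1234)] };
    bucket[hash(7, 0x1234)] = 3;

    printf("frame = %d\n", lookup(7, 0x1234));   /* hit  -> 3  */
    printf("frame = %d\n", lookup(8, 0x1234));   /* miss -> -1 */
    return 0;
}
```

The chain walk is where the extra computing overhead mentioned above comes from: a collision means comparing both the PID and the page number entry by entry until a match (or the end of the chain) is found.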