How to order all assets by their usage percentage in the Coverage tab in Chrome DevTools?

Is there any way to sort the assets by their usage percentage in the Coverage tab in Chrome DevTools?
Currently I can only sort by the amount of unused bytes or by the usage visualization.
So it is always difficult to pick out the 100%-unused files manually.
Another question: is it possible to add a "Used bytes" column? Sometimes a file is reported as 100% unused, yet its unused bytes are fewer than its total bytes, which means some bytes are actually used. With a "Used bytes" column, I could tell precisely whether zero or only a few bytes are used.
Thanks.

Related

Different Page Sizes for Processes

As part of the virtual-to-physical address conversion, a table of mappings between virtual and physical addresses is stored for each process. When a process is scheduled next, the contents of its page table are loaded into the MMU.
1) Where is the page table for each process stored? As part of the process control block?
2) Does the page table contain entries for unallocated memory, so that a segfault can be detected (more easily)?
3) Is it possible (and done in any known relevant OS) for one process to have multiple page frame sizes? Especially if question 2 is true, it would be very convenient to map huge pages to non-existent memory to keep the page table as small as possible. It would still allow high precision in mapping smaller frames to memory, to keep external (and internal) fragmentation as small as possible. This of course requires an extra field storing the frame size for each entry. Please point out the reason(s) if my "idea" cannot exist.
1) They could be, but most OSes have a notion of an address space to which a process is attached. The address space typically contains a description of the sorts of mappings that have been established, and pointers to the page structure(s). If you consider the operation of exec(2), at a certain level of abstraction it merely involves creating a new address space, populating it, then attaching the process to it. Once the operation is known to succeed, the old address space can simply be discarded.
2) It depends upon the MMU architecture of the machine. In a forward-mapped arrangement (x86, armv[78]), the page tables form a sort of tree structure, but instead of having the conventional 2 or 3 items per node, there are hundreds or thousands of them. The x86-classic has a 2-level structure, where each of the 1024 entries in the first level points to a page table covering 2^22 bytes of address space. Invalid entries, either at the inner or leaf level, can represent unmapped space; so in x86-classic, if you have a very small address space, you only need a root table and a single leaf-level table.
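To make the forward-mapped layout concrete, here is a minimal sketch (the 10/10/12-bit split follows classic 32-bit x86 paging; the example address is arbitrary) of splitting a virtual address and testing an entry's validity bit:

#include <cstdint>
#include <cstdio>

// Classic 32-bit x86: 10-bit directory index, 10-bit table index,
// 12-bit page offset. Bit 0 of an entry is the Present (valid) bit.
struct SplitAddress {
    uint32_t dir;    // which of the 1024 page-directory entries
    uint32_t table;  // which of the 1024 page-table entries
    uint32_t offset; // byte within the 4 KiB page (not translated)
};

SplitAddress split(uint32_t vaddr) {
    return { vaddr >> 22, (vaddr >> 12) & 0x3FF, vaddr & 0xFFF };
}

bool present(uint32_t entry) { return entry & 1; } // clear => unmapped, fault

int main() {
    SplitAddress s = split(0x08048A10); // arbitrary example address
    std::printf("dir=%u table=%u offset=0x%03X\n", s.dir, s.table, s.offset);
}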
3) Yes, multiple page sizes have been supported by most OSes since the early 2000s. Again, in forward-mapped ones, each level of the tree can be replaced by a single large page covering the same address space as that table level. x86-classic only had one size; later editions supported more.
3a) There is no need to use large pages to do this -- simply having an invalid page table entry is sufficient. In x86-classic, the least significant bit of a page table/descriptor entry indicates the validity of the entry.
Your idea exists.
1) Where is the page table for each process stored? As part of the process control block?
Usually it's not "a page table". For some CPUs there are only TLB entries (Translation Lookaside Buffer entries - like a cache of what the translations are), where software has to handle a "TLB miss" by loading whatever it feels like into the TLB itself, and where the OS might not use tables at all (e.g. it could use a "list of arbitrary-length zones"). For some CPUs it's a hierarchy of multiple levels (e.g. for modern 64-bit 80x86 there are 4 levels); in this case some of the levels may be in physical memory, some may be in swap space or somewhere else, and some may be generated as needed from other data (a little like it would have been for "software handling of TLB miss").
In any case, if each process has its own virtual address space (i.e. it's not some kind of "single address space shared by many processes" scheme), it's likely that the process control block (directly or indirectly) contains a reference to whatever the OS uses (e.g. maybe a single "physical address of the highest-level page table", but maybe a virtual address of a "list of arbitrary-length zones", and maybe anything else).
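A tiny sketch of that "the PCB (directly or indirectly) contains a reference" idea; all names here are hypothetical and not taken from any particular OS:

#include <cstdint>

// Hypothetical process control block: it does not hold page tables
// itself, it just references whatever translation data the OS uses.
struct AddressSpace {
    uint64_t root_table_phys; // e.g. physical address of the highest-level page table
    // ...or instead a pointer to a "list of arbitrary-length zones", etc.
};

struct ProcessControlBlock {
    int           pid;
    AddressSpace* aspace; // may be shared, e.g. by the threads of one process
};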
2) Does the page table contain entries for unallocated memory, so that a segfault can be detected (more easily)?
If there are page tables then there must be a way to indicate "page not present", where "page not present" may mean that the memory isn't allocated, but could also mean that the (virtual) memory was allocated and the entry for it just hasn't been set yet (either because the OS is generating the tables on demand, or because the actual data is in swap space, or ...).
3) Is it possible (and done in any known relevant OS) for one process to have multiple page frame sizes?
Yes. It's relatively common for 64-bit 80x86, where there are 4 KiB pages and 2 MiB (or 4 MiB) "large pages" (plus maybe 1 GiB "huge pages"); it's done to reduce the chance of TLB misses (while also reducing the memory consumed by page tables). Note that this is mostly an artifact of having multiple levels of page tables - an entry in a higher-level table can say "this entry is a large page" or it can say "this entry refers to a lower-level page table that might contain smaller pages". Note that in this case it's not "multiple page sizes in the same table", but "a fixed page size for each level".
Especially if question 2 is true, it would be very convenient to map huge pages to non-existent memory to keep the page table as small as possible. It would still allow high precision in mapping smaller frames to memory, to keep external (and internal) fragmentation as small as possible. This of course requires an extra field storing the frame size for each entry. Please point out the reason(s) if my "idea" cannot exist.
Converting a virtual address into a physical address (or some kind of fault to indicate the translation doesn't exist) needs to be very fast, because it happens extremely often. With "a fixed page size for each level", you can extract some bits of the virtual address and use them directly as the index into the table, which is fast.
When you have "multiple page sizes in the same table" there's 2 options. The first option is to duplicate entries in the page table so that you can still extract some bits of the virtual address and use them as the index into the table; which (apart from minor differences in the way TLBs are managed - e.g. auto-detecting adjacent translations vs. being manually told) is effectively identical to not bothering at all; but there are some CPUs (ARM I think) that do this.
The second option is searching multiple entries in the page table to find the right entry, where the cost of searching reduces performance. I don't know of any CPU that supports this - performance is too important.
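As an illustration of the "fixed page size for each level" point, here is a minimal sketch (the 4-level, 9-bits-per-level layout follows 64-bit 80x86, but the flag-bit positions and the in-memory table representation are simplified assumptions): each level's index is a plain bit-field extraction, and a "large page" flag simply ends the walk early at a bigger page size.

#include <cstdint>

constexpr uint64_t PRESENT    = 1ULL << 0; // translation exists
constexpr uint64_t LARGE_PAGE = 1ULL << 7; // entry maps a big page directly

// level 3 = highest table, level 0 = last table before the 4 KiB page.
uint64_t index_at(uint64_t vaddr, int level) {
    return (vaddr >> (12 + 9 * level)) & 0x1FF; // just extract 9 bits
}

// Returns the physical address, or ~0 to signal a fault. In this sketch
// a non-leaf entry holds a pointer to the next table; a real MMU walk
// follows physical addresses instead.
uint64_t walk(const uint64_t* table, uint64_t vaddr) {
    for (int level = 3; ; --level) {
        uint64_t entry = table[index_at(vaddr, level)];
        if (!(entry & PRESENT))
            return ~0ULL;                        // "page not present": fault
        if ((entry & LARGE_PAGE) || level == 0) {
            // The page size is implied by the level where the walk stopped:
            // 4 KiB at level 0, 2 MiB at level 1, 1 GiB at level 2.
            uint64_t mask = (1ULL << (12 + 9 * level)) - 1;
            return (entry & ~mask) | (vaddr & mask);
        }
        table = reinterpret_cast<const uint64_t*>(entry & ~0xFFFULL);
    }
}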

Where is the page size stored in the operating system?

I know that the page size is fixed in some operating systems; for example, the page size is 4K on i386. However, how does the memory manager know the size of a page? Is it stored somewhere in memory so that the MMU can read it when translating addresses?
Page size has a direct impact on processor architecture. It defines how an address is interpreted by the hardware for the virtual-to-physical translation.
The in-page part of the address (often called the offset or displacement) is not translated and is sent unchanged to the cache, while the upper bits (the virtual page number) are translated by the TLB; modifying the page size (and thus the offset width) would require changes in datapath width. Depending on the size of this offset and on the L1 cache characteristics (size and associativity), the cache may or may not be able to use a virtual index, which has a direct performance impact and would imply a redesign.
The virtual address size also determines the way page tables are organized and accessed after a TLB miss (the page walk). The MMU and cache are highly critical parts of processor design with a direct performance impact, and they need to be optimized in ways that generally exclude flexibility.
So changing the page size requires major changes in the processor architecture, and page sizes are generally constant or limited to a small number of values. Recent x86 processors can have regular 4 KB or huge 4 MB pages (and 1 GB pages in 64-bit mode). Older ARM versions (v4 and v5) had subpages that allowed dividing the page size by 4. On ARMv8 you can also have 64 KB pages. But besides that, processors are generally designed for a fixed page size, and the operating system must adapt to the processor's pages.
There are three ways I am aware of that processors define page sizes:
The page size is constant and never changes.
The page size is uniform but configurable. In this case the page size is set in a system register. Usually the page size has to be one of a few specific values, so it is a bit setting rather than a numeric value. This seems to be what you are asking about. The Reader's Digest version for Intel chips is that setting a bit in the CR4 register switches between 4 KB and 4 MB pages (see the sketch after this list).
There are some systems that can have variable page sizes. In that case, the page size is usually set in the page table entries.
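To illustrate how the chosen page size fixes the split between the translated page number and the untranslated offset, here is a minimal sketch (the 4 KB / 4 MB pair mirrors the CR4 example above; the function and example values are made up):

#include <cstdint>
#include <cstdio>

// The page size fixes how many low address bits bypass translation:
// 4 KB -> 12 offset bits, 4 MB -> 22 offset bits.
void split(uint32_t vaddr, uint32_t page_size) {
    uint32_t offset_bits = 0;
    while ((1u << offset_bits) < page_size) ++offset_bits; // log2(page_size)
    uint32_t vpn    = vaddr >> offset_bits;                // translated via the TLB
    uint32_t offset = vaddr & ((1u << offset_bits) - 1);   // passes through unchanged
    std::printf("page size %u: VPN=0x%X offset=0x%X\n", page_size, vpn, offset);
}

int main() {
    split(0xDEADBEEFu, 4096u);    // 4 KB pages
    split(0xDEADBEEFu, 4u << 20); // 4 MB pages
}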

Concept of "block size" in a cache

I am just beginning to learn the concepts of direct-mapped and set-associative caches.
I have some very elementary doubts. Here goes.
Suppose addresses are 32 bits long and I have a 32 KB cache with a 64-byte block size and 512 frames. How much data is actually stored inside a "block"? If I have an instruction that loads a value from a memory location, and that value is a 16-bit integer, does one of the 64-byte blocks now store only a 16-bit (2-byte) integer value? What of the other 62 bytes within the block? If I now have another load instruction that also loads a 16-bit integer value, this value goes into a block of another frame depending on the load address (and if the address maps to the same frame as the previous instruction, then the previous value is evicted and the block again stores only 2 bytes out of 64). Correct?
Please forgive me if this seems like a very stupid doubt; it's just that I want to get my concepts right.
I typed up this email for someone to explain caches, but I think you might find it useful as well.
You have 32-bit addresses that can refer to bytes in RAM.
You want to be able to cache the data that you access, to use it later.
Let's say you want a 1-MiB (2^20 bytes) cache.
What do you do?
You have 2 restrictions you need to meet:
Caching should be as uniform as possible across all addresses. i.e. you don't want to bias toward any particular kind of address.
How do you do this? Use remainder! With mod, you can evenly distribute any integer over whatever range you want.
You want to help minimize bookkeeping costs. That means e.g. if you're caching in blocks of 1 byte, you don't want to store 4 bytes of data just to keep track of where 1 byte belongs to.
How do you do that? You store blocks that are bigger than just 1 byte.
Let's say you choose 16-byte (2^4-byte) blocks. That means you can cache 2^20 / 2^4 = 2^16 = 65,536 blocks of data.
You now have a few options:
You can design the cache so that data from any memory block could be stored in any of the cache blocks. This would be called a fully-associative cache.
The benefit is that it's the "fairest" kind of cache: all blocks are treated completely equally.
The tradeoff is speed: To find where to put the memory block, you have to search every cache block for a free space. This is really slow.
You can design the cache so that data from any memory block could only be stored in a single cache block. This would be called a direct-mapped cache.
The benefit is that it's the fastest kind of cache: you do only 1 check to see if the item is in the cache or not.
The tradeoff is that, now, if you happen to have a bad memory access pattern, you can have 2 blocks kicking each other out successively, with unused blocks still remaining in the cache.
You can do a mixture of both: map each memory block to any one of a small set of cache blocks. This is what real processors do -- they have N-way set-associative caches.
Direct-mapped cache:
Now you have 65,536 blocks of data, each block being of 16 bytes.
You store it as 65,536 "rows" inside your cache, with each "row" consisting of the data itself, along with the metadata (regarding where the block belongs, whether it's valid, whether it's been written to, etc.).
Question:
How does each block in memory get mapped to each block in the cache?
Answer:
Well, you're using a direct-mapped cache, using mod. That means addresses 0 to 15 will be mapped to block 0 in the cache; 16 to 31 get mapped to block 1, etc. ... and it wraps around as you reach the 1-MiB mark.
So, given memory address M, how do you find the row number N? Easy: N = (M mod 2^20) / 2^4.
But that only tells you where to store the data, not how to retrieve it. Once you've stored it and try to access it again, you have to know which 1-MiB portion of memory was stored here, right?
So that's one piece of metadata: the tag bits. If it's in row N, all you need to know is the quotient from the mod operation, which, for a 32-bit address, is 12 bits big (since the remainder is 20 bits).
So your tag becomes 12 bits long -- specifically, the topmost 12 bits of any memory address.
And you already knew that the lowermost 4 bits are used for the offset within a block (since memory is byte-addressed, and a block is 16 bytes).
That leaves 16 bits for the "index" bits of a memory address, which can be used to find which row the address belongs to. (It's just a division + remainder operation, but in binary.)
You also need other bits: e.g. you need to know whether a block is in fact valid or not, because when the CPU is turned on, it contains invalid data. So you add 1 bit of metadata: the Valid bit.
There are other bits you'll learn about, used for optimization, synchronization, etc. ... but these are the basic ones. :)
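To pull the pieces together, here is a minimal sketch of a direct-mapped lookup (the 1-MiB / 16-byte geometry follows the walkthrough above; the question's 32 KB cache with 64-byte blocks works the same way, with a 6-bit offset, 9-bit index, and 17-bit tag):

#include <cstdint>
#include <vector>

constexpr uint32_t OFFSET_BITS = 4;  // 16-byte blocks
constexpr uint32_t INDEX_BITS  = 16; // 65,536 rows

struct Row {
    bool     valid = false;               // garbage until the first fill
    uint32_t tag   = 0;                    // top 12 address bits of the cached block
    uint8_t  data[1u << OFFSET_BITS] = {}; // the whole 16-byte block
};

struct DirectMappedCache {
    std::vector<Row> rows = std::vector<Row>(1u << INDEX_BITS);

    // True on a hit; on a miss a real cache would fetch the whole
    // 16-byte block from memory into rows[index], then retry.
    bool lookup(uint32_t addr, uint8_t& out) {
        uint32_t offset =  addr & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    =  addr >> (OFFSET_BITS + INDEX_BITS);
        const Row& r = rows[index];
        if (r.valid && r.tag == tag) { out = r.data[offset]; return true; }
        return false; // miss: a fill replaces this entire row
    }
};

int main() {
    DirectMappedCache cache;
    uint8_t byte;
    return cache.lookup(0x00012340u, byte) ? 0 : 1; // cold cache: a miss
}

Note that a fill always brings in the whole block: loading a single 16-bit integer still occupies (and, on an index conflict, evicts) all 16 bytes, or all 64 bytes in the question's cache.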
I'm assuming you know the basics of tag, index, and offset, but here's a short explanation as I learned it in my computer architecture class. Blocks are replaced 64 bytes at a time, so every time a new block is put into the cache it replaces all 64 bytes, regardless of whether you only need one byte. That's why, when addressing the cache, there is an offset that specifies the byte you want to get from the block. Take your example: if only a 16-bit integer is being loaded, the cache will look up the block by the index, check the tag to make sure it's the right data, and then get the bytes according to the offset. Now if you load another 16-bit value, let's say with the same index but a different tag, it will replace the 64-byte block with the new block and get the data from the specified offset (assuming direct-mapped).
I hope this helps! If you need more info, or this is still fuzzy, let me know; I know a couple of sites that do a good job of teaching this.

How much can SQLite store on the iPhone?

I have an idea for a web app for the iPhone, but it's unknown to me how much data can be stored in mobile Safari's SQLite database. I tried searching through the Apple docs but found nothing:
Safari Client-Side Storage and Offline Applications Programming Guide: Using the JavaScript Database
Most of these answers are totally wrong. Safari will not allow you to create SQLite databases over 50MB (or expand existing databases beyond that size).
This is a limit imposed by Safari - as other people have noted, SQLite itself supports much larger databases that you can use from native apps. But webapps are limited to 50MB.
It might be useful to note that this is per database - if you really need the extra space, you can create multiple databases, although this would obviously cause a lot of hassle.
It's as the other posters say. You're only limited by the drive space on the device.
You also need to consider your in-memory footprint, though. There is a finite amount of memory on the iPhone, and in general it's quite small, so the amount of data/hydrated objects you'll be able to have in memory is another potential limitation for your app.
There are a LOT of people answering who have clearly never tested it. I am on the latest version of iOS (4.3.3) and have set up a system to create multiple databases and keep them under 45 MB, but found that the 50 MB cap is for the site as a whole. So, no matter how much you split the data up, it is still restricted to an aggregate cap of 50 MB.
The database size limit on mobile Safari is 50 MB per site, not per database. I have tested this. Even if you have an extra empty database, you cannot add to it if the total size of all databases on a single site is 50 MB.
What's worth noting as well is that characters are saved as double bytes in WebSQL; that is, 2 million characters will be 4 megabytes, not 2 megabytes, on disk.
You are only limited by the amount of free space on the device.
I'm not sure. If you were writing your own native application you'd be limited by free space on the device and, to some extent, by your in-memory footprint (as Bryan McLemore points out).
However, since you're looking at using JavaScript inside of Safari, there's no easy way to tell. According to the document you found, it looks like it may be limited per site, but there's nothing telling you how much. I'd suggest writing a quick script to fill up the database and figure out how much it actually holds. After that, I'd probably halve that value and assume I'd always be able to use that much.
Be sure to report back so we'll all know!
It's most likely 32 terabytes... which is well over the available disk space.
I reached this number by multiplying the maximum page size by the maximum page count listed at the bottom of the SQLite limits page.
Limits In SQLite
"Limits" in the context of this article means sizes or quantities that can not be exceeded. We are concerned with things like the maximum number of bytes in a BLOB or the maximum number of columns in a table.
SQLite was originally designed with a policy of avoiding arbitrary limits. Of course, every program that runs on a machine with finite memory and disk space has limits of some kind. But in SQLite, those limits were not well defined. The policy was that if it would fit in memory and you could count it with a 32-bit integer, then it should work.
Unfortunately, the no-limits policy has been shown to create problems. Because the upper bounds were not well defined, they were not tested, and bugs (including possible security exploits) were often found when pushing SQLite to extremes. For this reason, newer versions of SQLite have well-defined limits and those limits are tested as part of the test suite.
As of version 3.6.19 (all statistics in the report are against that release of SQLite), the SQLite library consists of approximately 65.7 KSLOC of C code. (KSLOC means thousands of "Source Lines Of Code" or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 690 times as much test code and test scripts - 45409.7 KSLOC.
The default storage limit on the iPhone seems to be 5 MB.
davibe has done some work to raise the limit to 1 GB with his PhoneGap plugin.
https://github.com/davibe/Phonegap-SQLitePlugin
The plugin calls the native sqlite3 API, with a wrapper on the JavaScript side.
The relevant code, extracted from sqlite.js, is:
update origins set quota = '999999999999' where origin = 'file__0';
"update databases set estimatedSize = '999999999999' where name = '" + dbName + "';'";
Caution: my iPhone is jailbroken! But I don't suspect that this changes anything.
The limit of 50MB is no longer correct.
On my iPhone 4S with iOS 6.1 I have a database of 58.66 MB (448,496 records) for my web clip (a website pinned to the springboard).
No special tricks, just standard HTML5 usage.
Maximum Database Size
Please refer to the official SQLite site:
Every database consists of one or more "pages". Within a single database, every page is the same size, but different databases can have page sizes that are powers of two between 512 and 65536, inclusive. The maximum size of a database file is 2147483646 pages. At the maximum page size of 65536 bytes, this translates into a maximum database size of approximately 1.4e+14 bytes (140 terabytes, or 128 tebibytes, or 140,000 gigabytes or 128,000 gibibytes).
This particular upper bound is untested since the developers do not have access to hardware capable of reaching this limit. However, tests do verify that SQLite behaves correctly and sanely when a database reaches the maximum file size of the underlying filesystem (which is usually much less than the maximum theoretical database size) and when a database is unable to grow due to disk space exhaustion.

Meaning of SIZE and RSS values in prstat output

Can somebody give a clear explanation of the meaning of the SIZE and RSS values we get from prstat on Solaris?
I wrote a test C++ application that allocates memory with new[], fills it, and frees it with delete[].
As I understand it, the SIZE value should be related to how much virtual memory has been "reserved" by the process, that is, memory "malloc'ed" or "new'ed".
That memory doesn't count toward the RSS value unless I really use it (filling it with some values). But then, even if I free the memory, the RSS doesn't drop.
I don't understand what semantics I can correctly assign to those two values.
RSS (AFAIK reliably) represents how much physical memory a process is using. With the Solaris default memory allocator, freeing memory doesn't do anything to RSS, as it just changes some pointers and values to mark that memory as free for reuse.
If you don't use that memory again by allocating it again, it will eventually be paged out and the RSS will drop.
If you want freed memory to be returned immediately after a free, you can use the Solaris mmap allocator like this:
export LD_PRELOAD=libumem.so
export UMEM_OPTIONS=backend=mmap
SIZE is the total virtual memory size of the process, including all mapped files and devices. RSS should be the resident set size, but it is completely unreliable; you should try to get that information from pmap instead.
As a general rule, once memory is allocated to a process it is never given back to the operating system. On Unix systems, the sbrk() call is used to extend the process's address space, and although it can in principle shrink it as well, allocators rarely use it that way.
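For reference, here is a minimal sketch of the kind of test program the question describes (hypothetical, not the asker's actual code); pausing at each stage makes it easy to watch SIZE and RSS in prstat:

#include <cstddef>
#include <cstdio>
#include <cstring>

int main() {
    const std::size_t N = 256 * 1024 * 1024; // 256 MB of heap

    char* buf = new char[N];  // SIZE grows; RSS barely moves (pages untouched)
    std::puts("allocated - check prstat, then press Enter");
    std::getchar();

    std::memset(buf, 1, N);   // touching every page makes RSS grow too
    std::puts("filled - check prstat, then press Enter");
    std::getchar();

    delete[] buf;             // virtual size may shrink; RSS typically stays
    std::puts("freed - check prstat, then press Enter");
    std::getchar();
    return 0;
}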