I've been reading up on memory models recently and I'm somewhat confused about how this works.
To quote http://cis.poly.edu/muller/CS623/weakmemory.htm:
if a processor writes a new X then writes a new Y, all other processors that subsequently execute a read Y then a read X will access either the new Y and new X, the old Y and the new X, or the old X and the old Y; but no processor will access the new Y and the old X. This assumption of strong ordering was, at one time, reasonable. Current computer manufacturers, however, recommend that programmers not rely on memory ordering. This is because newer memory management systems attempt to reorder memory accesses for optimization purposes. Systems that are allowed to reorder memory requests are called weakly-ordered memory systems (models). To examine how a reordering might be used to improve performance, consider the following assembler code [2].
Load reg1, A // register1 = contents of memory A
Load reg2, B // register2 = contents of memory B
ADD reg3, reg1, reg2 // register3 = register1 + register2
Store reg3, C // contents of memory C = contents of register3
If we assume that location B is currently in cache and location A is not cached, then loading A will take longer than loading B. Instead of waiting for A, the CPU can fetch B from its cache, hiding B's latency; thus the CPU can perform the addition as soon as A is available. By relaxing the strong (sequential) memory model of execution (i.e., A must load first, followed by B), greater performance is possible, but reordering may not be transparent to software. Consider the code fragment below; it is part of the code that can be used to implement a spinlock semaphore [2].
My question is: how is it possible that a weaker memory model could lead to a processor accessing the new Y and the old X? Isn't everything written to the same memory (RAM), or does it work differently? I assumed that if one process modifies a variable and another reads it afterwards, it reads the latest value.
Another thing I'm not sure about is which component does the memory access reordering. My assumption right now is that the compiler is allowed to reorder instructions, but could a CPU also reorder them?
Thanks
You must remember that the cache sits between the CPU and memory. When the software writes a value it certainly lands in the cache, but it may never go to main memory at all if the value is written again before the cache copies it to RAM (think loop variables and locals). This whole discussion revolves around the different models for when data is actually placed in RAM or read from RAM. Within a core it doesn't really matter, as the core will use the last value written, whether it comes from cache or RAM.
Both the compiler and the CPU can reorder instructions. See Memory ordering
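To make the "new Y, old X" outcome concrete, here is a minimal sketch of my own (not from the question), using C11 atomics and pthreads and compiled with -pthread. With the relaxed orderings shown, the weak memory model permits the reader to observe the new Y and the old X; switching the marked stores/loads to memory_order_release / memory_order_acquire would forbid it.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Shared variables playing the roles of X and Y from the quote. */
atomic_int X = 0;
atomic_int Y = 0;

static void *writer(void *arg)
{
    (void)arg;
    /* With relaxed ordering, the compiler and/or CPU may make the store
     * to Y visible to other cores before the store to X.  Using
     * memory_order_release here (paired with memory_order_acquire on the
     * load of Y in the reader) forbids exactly that. */
    atomic_store_explicit(&X, 1, memory_order_relaxed);
    atomic_store_explicit(&Y, 1, memory_order_relaxed);
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    int y = atomic_load_explicit(&Y, memory_order_relaxed);
    int x = atomic_load_explicit(&X, memory_order_relaxed);
    if (y == 1 && x == 0)
        puts("saw new Y but old X -- allowed by a weak memory model");
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, writer, NULL);
    pthread_create(&b, NULL, reader, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Whether the forbidden-looking outcome actually shows up depends on the hardware and the compiler; the point is only that relaxed ordering allows it, while release/acquire rules it out.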
Modern CPUs use a store buffer to delay commit into cache until retirement, also avoiding WAR and WAW memory hazards. I'm wondering how weak ISAs resolve WAW hazards using the store buffer, which is otherwise not a FIFO, allowing StoreStore reordering? Do they insert an implicit barrier?
More specifically, if two stores to the same memory address retire in-order on a weak ISA, e.g. ARM/POWER, they could theoretically commit to cache out-of-order, since the store buffer is not FIFO, thus breaking the WAW dependency.
According to Wikipedia:
...the store instructions, including the memory address and store data, are buffered in a store queue until they reach the retirement point. When a store retires, it then writes its value to the memory system. This avoids the WAR and WAW dependence problems...
This is my guess; I'm not familiar with the details of any real-world designs.
Even if the store buffer is a full scheduler that can "grab" any graduated store for commit to L1d, I'd assume it would use an oldest-ready first order. (Like an instruction / uop scheduler aka RS Reservation Station.)
"Ready" would mean the cache line is exclusively owned (Modified or Exclusive state). Every graduated store itself is implicitly ready to commit because by definition the associated store instruction has retired.
In-order retirement means that stores become eligible for commit in program-order, so you can't have an older store that's temporarily hidden from the oldest-ready-first scheduling.
Together, those things would ensure that for any given byte, stores overlapping it are in program order and thus maintain consistency of global-visibility order and final value for any given group of bytes within a cache line.
A memory barrier might work by fencing off the store buffer like a divider on a grocery-store checkout conveyor belt, preventing stores past it from being grabbed for commit until the stores before it have committed.
We do know real-world weakly-ordered store buffers like PowerPC RS64-III (in-order exec) and Alpha 21264 (OoO exec) do merging to help them create whole 4-byte or 8-byte aligned commits to L1d, e.g. out of multiple byte stores. That's also fine, assuming your merge algorithm respects order for any given byte e.g. by putting the data from a younger store into an older SB entry or vice versa and marking the other entry as "already committed". Obviously this must respect store barriers.
I think this is all fine even with unaligned stores, although preserving atomicity guarantees for unaligned stores could be tricky with merging. (Intel P6-family and later does provide atomicity guarantees for unaligned cached stores that don't cross a cache-line boundary, but we don't think Intel does merging in the store buffer proper; maybe just some stuff with LFBs for cache-miss back-to-back stores to the same line.)
It's likely that real hardware might not be a full scheduler that can merge any 2 SB entries, e.g. maybe only over limited range to reduce the amount of different addresses (and sizes) to compare at once. Also, you'd probably still only free up SB entries in program order, so it can basically be a circular buffer (unlike the RS). Alloc in program order, and having the order be tracked by the layout of the SB itself, makes it much cheaper for memory barriers to work, and to track where the youngest "graduated" store is.
Disclaimer: IDK if this is exactly how real HW works
Possible corner case: unaligned 4-byte store to [cache_line+63] (split across a CL boundary) and then to [cache_line+60] (fully contained in the lower cache line). If the older store-buffer entry can't commit right away because we don't yet own the next cache line, but we do own cache_line, we still can't let the younger store to cache_line+60 commit first, if we're depending on that not happening to avoid WAW hazards.
So you'd probably want a line-split SB entry to be able to commit the data to one line but not the other, allowing oldest-ready-first to happen for each location separately, not tying together order across 2 cache lines.
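To make the model concrete, here is a toy software sketch (entirely my own invention, not any real design) of a circular, non-FIFO-commit store buffer with oldest-ready-first selection and barrier entries; line_owned is a hypothetical placeholder for the cache-ownership check.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model only: entries are allocated in program order into a circular
 * buffer; graduated (retired) stores become commit candidates; the commit
 * "scheduler" always grabs the oldest candidate whose cache line is owned,
 * and never grabs past a barrier entry. */

#define SB_SIZE 32

struct sb_entry {
    uint64_t addr;       /* byte address of the store                */
    uint64_t data;
    uint8_t  size;       /* store width in bytes                     */
    bool     graduated;  /* the store instruction has retired        */
    bool     committed;  /* data already written to L1d              */
    bool     barrier;    /* entry stands in for a store fence        */
};

struct store_buffer {
    struct sb_entry e[SB_SIZE];
    size_t head;         /* oldest live entry (program order)        */
    size_t tail;         /* next free slot                           */
};

/* Hypothetical helper: is the line (or lines) covering [addr, addr+size)
 * held in Modified or Exclusive state? */
bool line_owned(uint64_t addr, uint8_t size);

/* Oldest-ready-first: scan from the oldest entry, skip already-committed
 * entries, stop at a barrier, and return the first graduated, uncommitted
 * entry whose line we own.  Returns SB_SIZE if nothing is ready. */
size_t pick_commit_candidate(struct store_buffer *sb)
{
    for (size_t i = sb->head; i != sb->tail; i = (i + 1) % SB_SIZE) {
        struct sb_entry *s = &sb->e[i];
        if (s->barrier)
            break;                       /* don't grab stores past the fence */
        if (s->graduated && !s->committed && line_owned(s->addr, s->size))
            return i;
    }
    return SB_SIZE;
}

/* Entries are freed only from the head, in program order, so a barrier is
 * discarded only once everything older than it has committed. */
void free_from_head(struct store_buffer *sb)
{
    while (sb->head != sb->tail &&
           (sb->e[sb->head].committed || sb->e[sb->head].barrier))
        sb->head = (sb->head + 1) % SB_SIZE;
}

Because candidates are scanned from the oldest entry, two overlapping stores (same byte, same line-ownership state) can only ever be grabbed in program order, which is the per-byte WAW property argued for above.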
Related: I wrote my own answer explaining what a store buffer is. I tried to avoid mistakes like Wikipedia makes ("when a store retires, it writes its value to the memory system": In fact retirement just makes it eligible to commit; such stores are called "graduated" stores.)
I have been studying operating systems and the book I am referring to is Operating System Concepts by Peter B. Galvin, Greg Gagne and Abraham Silberschatz.
In the chapter on Virtual Memory, the book starts talking about paging and the number of memory accesses the system needs in order to read data stored in a particular frame, given a logical address. The author states that when the page table is kept in main memory, the system needs two memory accesses to read data stored in a frame: the first access reads the correct frame number from the page table, and the second reads the byte/word from that frame.
A few sections later, the book discusses demand paging and page faults. The author states that when there is no page fault, one memory access is needed, and that in case of a page fault we must consider the page-fault service time (which comprises swap-in time, swap-out time, one memory access, etc.), and presents readers with the formula
Effective Access Time = (1-p) x one memory access time + p x page fault service time
where p = page fault rate
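(With illustrative numbers, say a 200 ns memory access time, an 8 ms page-fault service time and p = 0.001, this gives Effective Access Time = 0.999 × 200 ns + 0.001 × 8,000,000 ns ≈ 8,200 ns, so even a tiny page-fault rate dominates the result.)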
I cannot wrap my head around why the author suggests that, in the case of no page fault, only one memory access is needed. Applying the line of thought used with the standard paging scheme introduced earlier by the same author(s), we should need one memory access to read the page table and another to read the data from the frame.
Is it because we are only talking about the time after the access to the page table has been made? Then why doesn't the same standard of calculation apply to the standard version of paging?
Note: I haven't read/seen this book.
For educational material: if the author describes reality accurately, with all the details, the reader will just get confused and won't be able to learn. To work around that, authors simplify (omit details and ignore reality) while introducing different concepts, so that the reader is able to learn each concept one at a time while building up the knowledge needed to comprehend the complexity of reality.
The problem is that different simplifications make sense at different stages, and authors are human (imperfect), so sometimes the simplifications that were beneficial at one point (in one chapter) conflict with simplifications that are beneficial at a later point (in a different chapter).
For example, I might (initially) tell someone "each access from virtual memory involves a second memory fetch from RAM to determine the translation" to help them understand how page tables work and that there are (potential) performance problems involved (twice as many memory accesses). Then I might introduce the concept of "translation look-aside buffers" (after the reader understands how page tables work and knows about the problem that TLBs are designed to solve). Then I might explain that real systems often have multiple levels of page tables (e.g. on 64-bit 80x86 it's four levels, potentially involving 4 memory accesses to determine a translation) and that there might be higher-level caches/buffers involved (and not just TLBs that cache final translations). In this case, my original statement ("each access from virtual memory involves a second memory fetch from RAM to determine the translation") is a deliberate lie (a simplification) to avoid the complexity of a statement like "each access from virtual memory may or may not involve one or more additional fetches from some or all levels of the page tables" (which is too confusing for beginners initially, because it creates lots of questions that they don't have answers to yet).
I cannot wrap my head around why the author suggests that, in case of no page fault, only one memory access will be needed.
One reality is (for one real 80x86 CPU in long mode, but not all 80x86 CPUs in long mode, not any 80x86 in other modes, and only if virtualisation is not being used), for a read from virtual memory that does not lead to a page fault, where the access is not misaligned/split across a page boundary (in which case the CPU would have to do it all twice to fetch bytes from 2 different pages and merge them):
* if the translation is not in the TLB, then:
* if the area is not in the "page directory cache"
* fetch the PML4 entry to determine address of PDPT (try L1 cache, then L2 cache, then L3 cache, then RAM)
* do access checks based on flags in PML4 entry
* fetch the PDPT entry to determine address of PD (try L1 cache, then L2 cache, then L3 cache, then RAM)
* do access checks based on flags in PDPT entry
* insert data into "page directory cache"
* if the area is in the "page directory cache"
* do access checks based on flags in "page directory cache entry"
* fetch the PD entry to determine address of PT (try L1 cache, then L2 cache, then L3 cache, then RAM)
* do access checks based on flags in PD entry
* fetch the PT entry to determine address of page (try L1 cache, then L2 cache, then L3 cache, then RAM)
* do access checks based on flags in PT entry
* insert data into TLB (including setting the "accessed" flag in the page table entry)
* if the translation is in the TLB, then:
* do access checks based on flags in "TLB entry"
* do the "physical address = physical address of page + offset in page" calculation
* read the data for the physical address (try L1 cache, then L2 cache, then L3 cache, then RAM)
For this reality (with the restrictions mentioned), the number of fetches from RAM can be anything from zero to 5.
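For illustration only, here is my own C sketch of what the four table fetches above boil down to (not from the book; access checks, accessed/dirty-bit updates, large pages and the paging-structure caches are all omitted, and fetch_entry is a hypothetical placeholder for a fetch that may hit L1, L2, L3 or RAM):

#include <stdint.h>

#define PRESENT   0x1ULL
#define ADDR_MASK 0x000FFFFFFFFFF000ULL   /* bits 51:12 of an entry */

/* Hypothetical helper: fetch one 8-byte paging-structure entry, trying
 * L1, then L2, then L3, then RAM. */
uint64_t fetch_entry(uint64_t phys_addr);

/* Simplified 4 KiB-page, 4-level x86-64 translation.  cr3 holds the
 * physical base of the PML4.  Returns 0 to stand in for a page fault. */
uint64_t translate(uint64_t cr3, uint64_t vaddr)
{
    uint64_t table = cr3 & ADDR_MASK;

    /* PML4 -> PDPT -> PD -> PT: one fetch per level (up to 4 fetches). */
    for (int shift = 39; shift >= 12; shift -= 9) {
        uint64_t index = (vaddr >> shift) & 0x1FF;
        uint64_t entry = fetch_entry(table + index * 8);
        if (!(entry & PRESENT))
            return 0;                  /* would raise a page fault       */
        table = entry & ADDR_MASK;     /* next table, or the page frame  */
    }

    /* The data access itself (physical page frame + offset within page)
     * is then the fifth potential fetch from the memory hierarchy. */
    return table | (vaddr & 0xFFF);
}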
Can you see why the author (while trying to explain page faults and not trying to explain translation costs) might want to avoid showing something like this and might simplify (by assuming that only one fetch is needed because the translation is in the TLB) instead?
The fundamental source of your problem is that you are reading a book that is only fit for lining a cat box. What you are describing is nonsensical gibberish that textbooks use to create confusion among students. This is not a case of oversimplification, because the authors apparently throw in a nonsensical formula for access times.
A formula like this
Effective Access Time = (1-p) x one memory access time + p x page fault service time
is total bovine fecal waste matter with no basis in reality.
The author states that when Page Table is present in Main Memory, system would need two memory accesses to read data stored in a frame.
The processor has to translate logical addresses to physical addresses using the page tables. Assuming that there is no caching in the CPU, the CPU has to read the page table for each memory access.
The number of reads depends upon the page table format used by the CPU.
Let's suppose your process has a multi-level page table. In that case the CPU has to make a read for each level of the table.
If you have a CPU that has separate linear system and user page tables, with the user tables in logical addresses, each access to system space requires one additional memory read and each access to user space requires at least two additional memory reads, and might, in fact, trigger a page fault: the first read is to the system page table to find the user page table entry; the second is to the user page table; the third is the data itself.
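As a rough sketch of that scheme (my own pseudo-C, loosely modelled on the description above; read_phys is a hypothetical one-read helper, and present-bit checks, protection checks and page-fault handling are omitted):

#include <stdint.h>

/* Hypothetical helper: one read from physical memory. */
uint64_t read_phys(uint64_t phys_addr);

/* The system page table is linear and lives in physical memory. */
uint64_t translate_system(uint64_t sys_pt_base, uint64_t vaddr)
{
    uint64_t pte = read_phys(sys_pt_base + (vaddr >> 12) * 8);
    return (pte & ~0xFFFULL) | (vaddr & 0xFFF);
}

/* The user page table lives at a *virtual* address in system space, so
 * translating a user address costs two reads before the data itself. */
uint64_t translate_user(uint64_t sys_pt_base, uint64_t user_pt_vbase,
                        uint64_t user_vaddr)
{
    /* read 1: system page table, to locate the user page table entry */
    uint64_t upte_vaddr = user_pt_vbase + (user_vaddr >> 12) * 8;
    uint64_t upte_paddr = translate_system(sys_pt_base, upte_vaddr);

    /* read 2: the user page table entry itself */
    uint64_t upte = read_phys(upte_paddr);

    /* read 3 (done by the caller): the data at the resulting address */
    return (upte & ~0xFFFULL) | (user_vaddr & 0xFFF);
}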
In reality, every CPU on the planet does page table caching so separate reads are not required (all the time).
I cannot wrap my head around why the author suggests that, in case of no page fault, only one memory access will be needed.
It sounds like the book is not being consistent in its BS.
The reality is that logical memory translation requires a number of steps. However, what those steps are depends upon the state of the processor, something that is unpredictable. These steps take place transparently behind the scenes and you do not even need to grasp all of them to understand operating systems.
What you need to know in the real world is that the CPU translates logical addresses to physical addresses. If the CPU is unable to make that translation, it triggers a page fault.
I have to deliver an application as a standalone Matlab executable to a client. The code includes a series of calls to a function that internally creates several cell arrays.
My problem is that an out-of-memory error happens when the number of calls to this function increases in response to an increase in the user load. I guess this is low-level memory fragmentation, as the workspace variables are independent of the number of loops.
As mentioned here, quitting and restarting Matlab is the only solution for this type of out-of-memory error at the moment.
My question is how I can implement such a mechanism in a standalone application, to save data, quit and restart itself in the case of an out-of-memory error (or when a high likelihood of such an error is somehow predicted).
Is there any best practice available?
Thanks.
This is a bit of a tough one. Instead of looking at restarting to clear things out, could you change the code to break the work into chunks to make it more efficient? Fragmentation is mostly proportional to the peak cell-related memory usage and to how much the size of data items varies, and less to the total usage over time. If you can break a large piece of work into smaller pieces done in sequence, this can lower the "high water mark" of your fragmented memory usage. You can also save on memory usage by using "flyweight" data structures that share their backing data values, or sometimes by converting cell-based structures to reference objects or numeric codes. Can you share an example of your code and data structure with us?
In theory, you could get a clean slate by saving your workspace and relevant state out to a MAT-file and having the executable launch another instance of itself with an option to reload that state and proceed, and then having the original executable exit. But that's going to be pretty ugly in terms of user experience and your ability to debug it.
Another option would be to offload the high-fragmentation code into another worker process which can be killed and restarted while the main executable process survives. If you have the Parallel Computing Toolbox, which can now be compiled into standalone Matlab executables, this would be pretty straightforward: open a worker pool of one or two workers, run the fraggy code inside them using synchronous calls, and periodically kill the workers and bring up new ones. The workers are independent processes which start out with non-fragmented memory spaces. If you don't have PCT, you could roll your own by compiling your application as two separate apps (the driver app and the worker app) and have the main app spin up a worker and control it via IPC, passing your data back and forth as MAT-files or byte streams. That's not going to be a lot of fun to code, though.
Perhaps you could also push some of the fraggy code down into the Java layer, which handles cell-like data structures more gracefully.
Changing the code to be less fraggy in the first place is probably the simpler and easier approach, and results in a less complicated application design. In my experience it's often possible. If you share some code and data structure details, maybe we can help.
Another option is to periodically check for memory fragmentation with a function like chkmem.
You could integrate this function so it is called silently from your code every couple of iterations, or use a timer object to have it called every X minutes...
The idea is to use the undocumented functions feature memstats and feature dumpmem to get the largest free memory blocks available, in addition to the largest variables currently allocated. Using that, you could make a guess as to whether there are signs of memory fragmentation.
When this is detected, you would warn the user and instruct them how to save their current session (export to a MAT-file), restart the app, and restore the session upon restart.
Stonebraker's paper (Operating System Support for Database Management) explains that "the overhead to fetch a block from the buffer pool manager usually includes that of a system call and a core-to-core move." Forget about the buffer-replacement strategy, etc.; the only point I question is the quoted one.
My understanding is that when a DBMS wants to read a block x it issues a common read instruction; there should be no difference from any other application requesting a read.
I'm not looking for generic answers (I got them, and read papers). I seek a detailed answer of the described problem.
See Does a file read from a Java application invoke a system call?
Reading from your other question, and working forward:
When the DBMS must bring a page in from disk it will involve at least one system call. At this point most DBMSs place the page into their own buffer. (It also ends up in the OS's buffer, but that's unimportant.)
So, we have one system call. However, we can avoid any further system calls. This is possible because the DBMS is caching pages in its own memory space. The first thing the DBMS will do when it decides it needs a page is check and see if it has it in its cache. If it does, it retrieves it from there without ever invoking a system call.
The DBMS is free to expire pages in its cache in whatever way is most beneficial for its IO needs. The OS's cache is expired in a more general way since the OS has other things to worry about. One example of this is that a DBMS will typically use a great deal of memory to cache pages as it knows that disk IO is one of the most expensive things it can do. The OS won't do this as it has to balance the cost of disk IO against having memory for other applications to use.
The operating system disk i/o must be generalised to work for a variety of situations. The DBMS can sometimes gain significant performance using less general code that is optimised to its own needs.
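As a rough illustration of that "check the DBMS's own cache first" logic, here is my own sketch (not code from any real DBMS; the names, page size, pool size and direct-mapped placement are all made up, and real buffer pool managers use proper hashing, pinning and smarter replacement):

#include <stdbool.h>
#include <unistd.h>

#define PAGE_SIZE  8192
#define POOL_PAGES 1024

/* One frame of a toy user-space buffer pool. */
struct frame {
    bool valid;
    long page_no;               /* which page of the file this frame holds */
    char data[PAGE_SIZE];
};

static struct frame pool[POOL_PAGES];

/* Return a pointer to page 'page_no' of the file behind 'fd'.  On a hit,
 * no system call is made at all; on a miss, exactly one pread() system
 * call brings the page in.  Placement is direct-mapped purely for brevity. */
char *get_page(int fd, long page_no)
{
    struct frame *f = &pool[page_no % POOL_PAGES];

    if (f->valid && f->page_no == page_no)
        return f->data;                          /* hit: no system call */

    if (pread(fd, f->data, PAGE_SIZE, (off_t)page_no * PAGE_SIZE) != PAGE_SIZE)
        return NULL;                             /* short read or error */

    f->valid = true;
    f->page_no = page_no;
    return f->data;
}

The "system call and core-to-core move" Stonebraker mentions is only paid on the miss path; the whole point of the user-space pool is that the hit path touches nothing outside the process.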
The DBMS does its own caching, so doesn't want to work through the O/S caching. It "owns" the patch of disk, so it doesn't need to worry about sharing with other processes.
Update
The link to the paper is a help.
Firstly, the paper is almost thirty years old and is referring to long-obsolete hardware. Notwithstanding that, it makes quite interesting reading.
Next, understand that disk i/o is a layered process. It was in 1981 and is even more so now. At the lowest point, a device driver will issue physical read/write instructions to the hardware. Above that may be the o/s kernel code, then the o/s user space code, then the application. Between a C program's fread() and the disk heads moving there are at least three or four levels, and there might be considerably more. A DBMS seeking to improve performance might bypass some layers and talk directly with the kernel, or even lower.
I recall some years ago installing Oracle on a Sun box. It had an option to dedicate a disk as a "raw" partition, where Oracle would format the disk in its own manner and then talk straight to the device driver. The O/S had no access to the disk at all.
It's mainly a performance issue. A dbms has highly specific and unusual I/O demands.
The OS may have any number of processes doing I/O and filling its buffers with the assorted cached data that this produces.
And of course there is the issue of size and what gets cached (a dbms may be able to perform better caching for its needs than the more generic device buffer caching).
And then there is the issue that a generic "block" may in fact amount to a considerably larger I/O burden (this depends on partitioning and suchlike) than what a dbms ideally would like to bear; its own cache may be tuned to work better with the layout of the data on the disk and thereby be able to minimise I/O.
A further thing is the issue of indexes and similar means to speed up queries, which of course works rather better if the cache actually knows what these mean in the first place.
The real issue is that the file buffer cache is not in the filesystem used by the DBMS; it's in the kernel and shared by all of the filesystems resident in the system. Any memory read out of the kernel must be copied into user space: this is the core-to-core move you read about.
Beyond this, some other reasons you can't rely on the system buffer pool:
Often, a DBMS has a really good idea about its upcoming access patterns, but it can't communicate these patterns to the kernel. This can lead to lower performance.
The buffer cache is traditionally stored in a fixed-size kernel memory range, so it cannot grow or shrink. That also means the cache is much smaller than main memory, so by relying on the buffer cache a DBMS would be unable to take full advantage of system resources.
I know this is old, but it came up as unanswered.
Essentially:
The OS uses a separate address space for every process.
Retrieving information from any other address space requires a system call or page fault. **(see below)
The DBMS is a process with its own address space.
The OS buffer pool Stonebraker describes is in the kernel address space.
So ... to get data from the kernel address space to the DBMS's address space, a system call or page fault is unavoidable.
You're correct that accessing data from the OS buffer pool manager is no more expensive than a normal read() call. (In fact, it's done with a normal read call.) However, Stonebraker is not talking about that. He's specifically discussing the caching needs of DBMSes, after the data has been read from the disk and is present in RAM.
In essence, he's saying that the OS's buffer pool cache is too slow for the DBMS to use because it's stored in a different address space. He's suggesting using a local cache in the same process (and therefore same address space), which can give you a significant speedup for applications like DBMSes which hit the cache heavily, because it will eliminate that syscall overhead.
Here's the exact paragraph where he discusses using a local cache in the same process:
However, many DBMSs including INGRES [20] and System R [4] choose to put a DBMS managed buffer pool in user space to reduce overhead. Hence, each of these systems has gone to the trouble of constructing its own buffer pool manager to enhance performance.
He also mentions multi-core issues in the excerpt you quote above. Similar effects apply here, because if you can have just one cache per core, you may be able to avoid the slowdowns from CPU cache flushes when multiple CPUs are reading and writing the same data.
** BTW, I believe Stonebraker's 1981 paper is actually pre-mmap. He mentions it as future work. "The trend toward providing the file system as a part of shared virtual memory (e.g., Pilot [16]) may provide a solution to this problem."
I have a question regarding mmap functionality. When mmap is used in asynchronous mode, where the kernel takes care of persisting the data to the mapped file on disk, is it possible for earlier updates to overwrite later updates?
Let's say at time T we modify a location in memory that is memory-mapped to a file on disk, and at time T+1 we modify the same location again. As the writes to the file are not synchronous, is it possible that the kernel first picks up the modification made at time T+1 and then the one made at time T, resulting in an inconsistent memory-mapped file?
It's not exactly possible. The file is allowed to be inconsistent until msync(2) or munmap(2); when that happens, dirty (modified) pages are written to disk page by page (sometimes more at a time, depending on the filesystem in newer kernels). msync() allows you to request a synchronous operation and invalidation of caches after the write finishes, which lets you ensure that the data in the cache is the same as the data in the file. Without that, it's possible that your program will see newer data while the file contains older data; the exact specifics of this rather hairy situation depend on the CPU architecture and the specific OS implementation of those routines.
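For completeness, a minimal sketch of my own (the file name is made up, and it assumes the file already exists and is non-empty) showing two updates to the same mapped location followed by a synchronous flush: only the later value can ever reach the file, because the kernel writes out the page contents as they are at flush time.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);          /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Two writes to the same location: the second simply overwrites the
     * first in the in-memory page, so only 'B' can ever reach the file. */
    p[0] = 'A';     /* "time T"   */
    p[0] = 'B';     /* "time T+1" */

    /* Force the dirty page(s) out synchronously; after this returns, the
     * file reflects the current page contents (i.e. 'B'). */
    if (msync(p, st.st_size, MS_SYNC) < 0) { perror("msync"); return 1; }

    munmap(p, st.st_size);
    close(fd);
    return 0;
}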