Atomicity of small PCIe TLP writes - x86-64

Are there any guarantees about how card-to-host writes from a PCIe device targeting regular memory are observed from a software process's perspective, where a single TLP write is fully contained within a single CPU cache line?
I'm wondering about a case where my device writes some number of words of data followed by a byte that indicates the structure is now valid (for example an event completion), such as:
#include <cstdint>

// SYSTEM_CACHE_LINE_SIZE assumed to be defined elsewhere (64 on current x86-64 parts)
struct alignas(SYSTEM_CACHE_LINE_SIZE) PCIE_COMPLETION_T {
    uint64_t data_a;
    uint64_t data_b;
    uint64_t data_c;
    uint64_t data_d;
    uint8_t  valid;
};
Can I use a single TLP to write this structure, such that when software sees the valid member change to 1 (having previously been cleared to zero by software), the other data members will also reflect the values that I wrote, and not a previous value?
Currently I'm performing 2 writes, first writing the data and secondly marking it as valid, which doesn't have any apparent race conditions but does of course add unwanted overhead.
The most relevant question I can see on this site seems to be Are writes on the PCIe bus atomic? although this appears to relate to the relative ordering of TLPs.
Perusing the PCIe 3.0 specification, I didn't find anything that seemed to explicitly cover my concern; I don't think I need AtomicOps in particular. Given that I'm only concerned about interactions with x86-64 systems, I also dug through the Intel architecture guide, but came away no clearer.
Instinctively it seems that such a write should be able to be perceived atomically -- especially as it is said to be a transaction -- but equally I can't find much documentation explicitly confirming that view (nor am I quite sure where I'd need to look, probably the CPU vendor's documentation?). I also wonder whether such a scheme can be extended over multiple cache lines -- i.e. if the valid byte sits on a second cache line written by the same TLP, can I be assured that the first line will be perceived no later than the second?

The write may be broken into smaller units, as small as dwords, but if it is, they must be observed in increasing address order.
PCIe revision 4, section 2.4.3:
If a single write transaction containing multiple DWs and the Relaxed Ordering bit Clear is accepted by a Completer, the observed ordering of the updates to locations within the Completer's data buffer must be in increasing address order. This semantic is required in case a PCI or PCI-X Bridge along the path combines multiple write transactions into the single one. However, the observed granularity of the updates to the Completer's data buffer is outside the scope of this specification.
While not required by this specification, it is strongly recommended that host platforms guarantee that when a PCI Express write updates host memory, the update granularity observed by a host CPU will not be smaller than a DW.
As an example of update ordering and granularity, if a Requester writes a QW to host memory, in some cases a host CPU reading that QW from host memory could observe the first DW updated and the second DW containing the old value.
I don't have a copy of revision 3, but I suspect this language is in that revision as well. To help you find it, Section 2.4 is "Transaction Ordering" and section 2.4.3 is "Update Ordering and Granularity Provided by a Write Transaction".
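In practice that means the host-side consumer can rely on the valid byte being the highest-addressed member: if the device emits the whole structure as one TLP with Relaxed Ordering clear, then by the time software observes valid become non-zero, the lower-addressed data words have already been updated. Here is a minimal consumer sketch (my own code, not from the spec; the struct layout is the one from the question, 64 is assumed as the cache-line size, wait_for_completion is a hypothetical helper, and the acquire fence is there to stop the compiler/CPU hoisting the data reads above the flag read):
#include <atomic>
#include <cstdint>

// Layout from the question; 64 assumed as the x86-64 cache-line size.
struct alignas(64) PCIE_COMPLETION_T {
    uint64_t data_a, data_b, data_c, data_d;
    uint8_t  valid;
};

// Hypothetical consumer: 'slot' points at the host-memory buffer that the device
// writes with a single (non-Relaxed-Ordering) TLP, with 'valid' at the highest address.
inline void wait_for_completion(volatile PCIE_COMPLETION_T* slot, PCIE_COMPLETION_T& out)
{
    while (slot->valid == 0) {
        // spin; a real driver would pause/yield or sleep until an interrupt
    }
    std::atomic_thread_fence(std::memory_order_acquire);  // keep the data reads after the flag read
    out.data_a = slot->data_a;   // lower addresses were updated no later than 'valid'
    out.data_b = slot->data_b;
    out.data_c = slot->data_c;
    out.data_d = slot->data_d;
    slot->valid = 0;             // software re-arms the slot for the next completion
}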

Related

How do weak ISAs resolve WAW memory hazards using the store buffer?

Modern CPUs use a store buffer to delay commit into cache until retirement, which also avoids WAR and WAW memory hazards. I'm wondering how weak ISAs resolve WAW hazards with a store buffer that is otherwise not a FIFO and allows StoreStore reordering. Do they insert an implicit barrier?
More specifically, if two stores to the same memory address retire in-order on a weak ISA, e.g. ARM/POWER, they could theoretically commit to cache out-of-order, since the store buffer is not FIFO, thus breaking the WAW dependency.
According to Wikipedia:
...the store instructions, including the memory address and store data, are buffered in a store queue until they reach the retirement point. When a store retires, it then writes its value to the memory system. This avoids the WAR and WAW dependence problems...
My guess (I'm not familiar with the details of any real-world designs):
Even if the store buffer is a full scheduler that can "grab" any graduated store for commit to L1d, I'd assume it would use an oldest-ready first order. (Like an instruction / uop scheduler aka RS Reservation Station.)
"Ready" would mean the cache line is exclusively owned (Modified or Exclusive state). Every graduated store itself is implicitly ready to commit because by definition the associated store instruction has retired.
In-order retirement means that stores become eligible for commit in program-order, so you can't have an older store that's temporarily hidden from the oldest-ready-first scheduling.
Together, those things would ensure that for any given byte, stores overlapping it are in program order and thus maintain consistency of global-visibility order and final value for any given group of bytes within a cache line.
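As a toy model of that guess (my own sketch, not a description of any real microarchitecture; ToyStoreBuffer and pick_commit are made-up names), a non-FIFO store buffer with oldest-ready-first selection could look like this; because entries are allocated in program order, two stores to the same line can never be picked out of order:
#include <cstdint>
#include <deque>
#include <optional>
#include <set>

// One graduated (retired) store waiting to commit to L1d.
struct StoreEntry {
    uint64_t seq;    // allocation / program order
    uint64_t line;   // target cache-line address
    uint64_t data;   // payload (size and offset within the line omitted for brevity)
};

struct ToyStoreBuffer {
    std::deque<StoreEntry> entries;   // allocated (and numbered) in program order
    std::set<uint64_t> owned_lines;   // lines currently held in M/E state

    // Oldest-ready-first: scan from the oldest entry and take the first one whose
    // cache line is exclusively owned. For two stores to the same line, the older
    // one is always found first, so per-line commit order is program order.
    std::optional<StoreEntry> pick_commit() {
        for (auto it = entries.begin(); it != entries.end(); ++it) {
            if (owned_lines.count(it->line)) {
                StoreEntry chosen = *it;
                entries.erase(it);    // real HW would likely free entries in order; simplified here
                return chosen;
            }
        }
        return std::nullopt;          // nothing ready; waiting on line ownership
    }
};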
A memory barrier might work by fencing off the store buffer like a divider on a grocery-store checkout conveyor belt, preventing grabbing of stores past it while committing ones to the same place in the same line.
We do know real-world weakly-ordered store buffers like PowerPC RS64-III (in-order exec) and Alpha 21264 (OoO exec) do merging to help them create whole 4-byte or 8-byte aligned commits to L1d, e.g. out of multiple byte stores. That's also fine, assuming your merge algorithm respects order for any given byte e.g. by putting the data from a younger store into an older SB entry or vice versa and marking the other entry as "already committed". Obviously this must respect store barriers.
I think this is all fine even with unaligned stores, although preserving atomicity guarantees for unaligned stores could be tricky with merging. (Intel P6-family and later does provide atomicity guarantees for unaligned cached stores that don't cross a cache-line boundary, but we don't think Intel does merging in the store buffer proper; maybe just some stuff with LFBs for cache-miss back-to-back stores to the same line.)
It's likely that real hardware might not be a full scheduler that can merge any 2 SB entries, e.g. maybe only over limited range to reduce the amount of different addresses (and sizes) to compare at once. Also, you'd probably still only free up SB entries in program order, so it can basically be a circular buffer (unlike the RS). Alloc in program order, and having the order be tracked by the layout of the SB itself, makes it much cheaper for memory barriers to work, and to track where the youngest "graduated" store is.
Disclaimer: IDK if this is exactly how real HW works
Possible corner case: unaligned 4-byte store to [cache_line+63] (split across a CL boundary) and then to [cache_line+60] (fully contained in the lower cache line). If the older store-buffer entry can't commit right away because we don't yet own the next cache line, but we do own cache_line, we still can't let the younger store to cache_line+60 commit first, if we're depending on that not happening to avoid WAW hazards.
So you'd probably want a line-split SB entry to be able to commit the data to one line but not the other, allowing oldest-ready-first to happen for each location separately, not tying together order across 2 cache lines.
Related: I wrote my own answer explaining what a store buffer is. I tried to avoid mistakes like Wikipedia makes ("when a store retires, it writes its value to the memory system": In fact retirement just makes it eligible to commit; such stores are called "graduated" stores.)

What exactly is a machine instruction?

The user's program in main memory consists of machine instructions and data. In contrast, the control memory holds a fixed microprogram that cannot be altered by the occasional user. The microprogram consists of microinstructions that specify various internal control signals for execution of register microoperations. Each machine instruction initiates a series of microinstructions in control memory. These microinstructions generate microoperations to fetch the instruction from main memory; to evaluate the effective address; to execute the operation specified by the instruction; and to return control to the fetch phase in order to repeat the cycle for the next instruction.
I don't exactly understand the difference here between a machine instruction, a microinstruction and a microoperation. I certainly understand that, according to the paragraph given, microinstructions are the intermediate level of instructions, but which of the other two is closer to the machine language? Are CLA, ADD, STA, BUN, BSA, AND etc. machine instructions or microoperations?
A CPU presents itself to the outside as a device capable of executing machine instructions. For example,
mov (%esi,%ebx,4), %edx
is a machine instruction that moves 4 bytes of data at address ESI+4*EBX into register EDX. Machine instructions are public - they are published by the CPU manufacturer in a user manual. Compilers such as gcc will output files that contain machine instructions, and these will typically end up in EXE/DLL files.
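To make "machine instruction" concrete: assembled for 32-bit x86, that mov is nothing more than a three-byte bit pattern (the byte-level annotation below is my own, worth cross-checking against the Intel manual):
// mov (%esi,%ebx,4), %edx assembled for 32-bit x86:
//   0x8B  opcode: MOV r32, r/m32
//   0x14  ModRM:  mod=00, reg=EDX, r/m=100 (a SIB byte follows)
//   0x9E  SIB:    scale=4, index=EBX, base=ESI
const unsigned char mov_machine_code[] = { 0x8B, 0x14, 0x9E };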
If you look closely at this mov instruction, you will see that it is a fairly complex operation. It involves some arithmetic (a multiplication and an addition) to get the memory address, then moving data from that address into a register. From the CPU's perspective it makes sense to reuse the arithmetic unit that is already there, so it is natural to break this instruction down into microinstructions. In essence, the mov instruction is implemented internally by the CPU as a microprogram written in microinstructions. This is, however, an implementation detail of the CPU. Microinstructions are internal to the CPU and invisible to anybody except the CPU manufacturer.
Microinstructions have several benefits:
they simplify internal CPU architecture, design and testing, thus lowering cost per unit
they make it easy to create rich and powerful sets of machine instructions (you just have to combine microinstructions in different ways)
they provide a consistent machine language across different CPUs (e.g. Xeon and Pentium both implement basic x86_64 instruction set even though they are very different in hardware)
they enable optimizations (e.g. the same instruction can be implemented directly in hardware on one CPU and emulated in microinstructions on another)
they allow bug fixes (e.g. you can mitigate the Spectre vulnerability while the machine is running, without buying a new CPU or opening up your server)
For more information, see https://en.wikipedia.org/wiki/Micro-operation
I think the answer to your question is in these three sentences:
The user's program in main memory consists of machine instructions and data
Each machine instruction initiates a series of micro-instructions in control memory.
These micro-instructions generate micro-operations.
So:
The user supplies machine instructions
Those get translated into micro-instructions
Those get translated into micro-operations
The mnemonics you mentioned are what the user might use to write or read a list of machine instructions (the actual instructions just being patterns of bits understood by the processor). The "occasional user" (i.e. everyone other than the chip's designer) never needs to deal directly in micro-instructions or micro-operations, so would never know individual names for them.

How does OrientDB's master-less architecture handle conflicting writes?

I read in the documentation here:
http://orientdb.com/docs/last/Distributed-Architecture.html
That OrientDB has a master-less architecture where replicas can handle both reads and writes. In the case of two clients writing concurrently to two replicas, how does the database handle conflict resolution between the two versions?
For instance in Riak KV they use vector clocks (or dotted version vectors now) to detect conflicts which either get punted to the user to handle the merge, or a default policy can be set in place to pick something like last-write-wins.
I'm wondering how OrientDB handles this.
Here is the conflict resolution strategy used by OrientDB.
In the case of an even number of servers, or when the databases are not aligned, OrientDB uses a Conflict Resolution Strategy chain.
This default chain is defined as a global setting (distributed.conflictResolverRepairerChain):
-Ddistributed.conflictResolverRepairerChain=majority,content,version
The Conflict Resolution Strategy implementations are called in a chain, following the declaration order, until a winner is selected. In the default configuration (above):
first, it is checked whether there is a strict majority for the record in terms of record versions. If such a majority exists, the winner is selected
if no strict majority was found, the record content is analyzed. If a majority is reached by finding records with different versions but equal content, that record wins, using the higher of those versions
if no majority has been found on the content either, the higher version wins (on the assumption that a higher version means the most up-to-date record)
OrientDB Enterprise Edition supports the additional Data Center Conflict Resolution (dc).
At the end of the chain, if no winner is found, the records are left untouched and only manual intervention can decide the winner. In this case a WARNING message is displayed in the console with the text Auto repair cannot find a winner for record and the following groups of contents: [records].
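As a rough illustration of how the default majority,content,version chain behaves (a sketch of the documented policy, not OrientDB's actual code; the Candidate type and resolve() are my own names):
#include <algorithm>
#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <vector>

// One copy of a record as reported by a server: its version and its content.
struct Candidate {
    int64_t version;
    std::string content;
};

// Sketch of the default majority,content,version chain.
std::optional<Candidate> resolve(const std::vector<Candidate>& copies) {
    // 1. "majority": a strict majority of servers agree on the record version.
    std::map<int64_t, size_t> by_version;
    for (const auto& c : copies) ++by_version[c.version];
    for (const auto& c : copies)
        if (by_version[c.version] * 2 > copies.size()) return c;

    // 2. "content": a majority agree on the content even though versions differ;
    //    that content wins, carrying the highest version seen for it.
    std::map<std::string, size_t> by_content;
    std::map<std::string, int64_t> max_version;
    for (const auto& c : copies) {
        ++by_content[c.content];
        max_version[c.content] = std::max(max_version[c.content], c.version);
    }
    for (const auto& [content, count] : by_content)
        if (count * 2 > copies.size()) return Candidate{max_version[content], content};

    // 3. "version": otherwise the highest version wins (a real chain may still
    //    declare no winner and fall back to manual repair).
    std::optional<Candidate> best;
    for (const auto& c : copies)
        if (!best || c.version > best->version) best = c;
    return best;
}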
Pretty similar to old-school Dynamo, only they are super lazy about it. Essentially they just use Hazelcast for a DHT with a low hop count and to keep a partial ordering of events (Hazelcast has something similar to version vectors in each entry of its collections, but I think they use a timestamp rather than a logical clock for some reason). Then they slap on a sloppy quorum for read repair: in a query the successor/coordinator gets the highest clock, fetches the value from that peer and applies it to itself and everyone else; for writes they usually just need n-1 (n is the replication factor), and to do a write again it has to do with the logical clock.
Now as an actual DB, apart from the total lack of effort on the replication, it's pretty excellent, and I have to hand it to them that they make sound technical decisions.
Before you say "oh, you couldn't do it": I have done it. I have implemented Dynamo's Chord variant, Riak's Plumtree gossip protocol, DVVs, and a shitload of CRDTs, mind you mostly from scratch, using only RPC frameworks, message passing, or an event loop when I'm in C.
Don't call me an old man because I am 20.

Memory Mapped files and atomic writes of single blocks

If I read and write a single file using normal IO APIs, writes are guaranteed to be atomic on a per-block basis. That is, if my write only modifies a single block, the operating system guarantees that either the whole block is written, or nothing at all.
How do I achieve the same effect on a memory mapped file?
Memory mapped files are simply byte arrays, so if I modify the byte array, the operating system has no way of knowing when I consider a write "done", so it might (even if that is unlikely) swap out the memory just in the middle of my block-writing operation, and in effect I write half a block.
I'd need some sort of "enter/leave critical section" mechanism, or some method of "pinning" the page of a file into memory while I'm writing to it. Does something like that exist? If so, is it portable across common POSIX systems and Windows?
The technique of keeping a journal seems to be the only way. I don't know how this works with multiple apps writing to the same file. The Cassandra project has a good article on how to get performance with a journal. The key thing to make sure of is that the journal only records positive actions (my first approach was to write the pre-image of each write to the journal, allowing you to roll back, but it got overly complicated).
So basically your memory-mapped file has a transactionId in the header; if your header fits into one block you know it won't get corrupted, though many people seem to write it twice with a checksum: [header[cksum]] [header[cksum]]. If the first checksum fails, use the second.
The journal looks something like this:
[beginTxn[txnid]] [offset, length, data...] [commitTxn[txnid]]
You just keep appending journal records until the journal gets too big, then roll it over at some point. When you start up your program, you check whether the transaction id recorded in the file header matches the last transaction id in the journal -- if not, you play back all the transactions in the journal to sync up.
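For illustration, the on-disk pieces might look like this (the struct and field names are mine, and a real implementation would add checksums and length framing to the journal records too):
#include <cstdint>

// Header of the memory-mapped data file; small enough to fit in one block.
// Many implementations store it twice with a checksum, as described above.
struct FileHeader {
    uint64_t last_applied_txn;  // id of the last transaction applied to the data file
    uint32_t checksum;
};

// Journal record framing: [beginTxn[txnid]] [offset, length, data...] [commitTxn[txnid]]
struct BeginTxn  { uint64_t txn_id; };
struct WriteRec  { uint64_t offset; uint32_t length; /* 'length' bytes of data follow */ };
struct CommitTxn { uint64_t txn_id; };

// Recovery sketch (pseudocode): replay committed transactions newer than the header.
//   for each transaction in journal order, stopping at one with no CommitTxn:
//       if txn_id > header.last_applied_txn:
//           for each WriteRec: copy its data bytes into the mapped file at 'offset'
//   then update header.last_applied_txn and flush it (both copies)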
If I read and write a single file using normal IO APIs, writes are guaranteed to be atomic on a per-block basis. That is, if my write only modifies a single block, the operating system guarantees that either the whole block is written, or nothing at all.
In the general case, the OS does not guarantee "writes of a block" done with "normal IO APIs" are atomic:
Blocks are more of a filesystem concept - a filesystem's block size may actually map to multiple disk sectors...
Assuming you meant sector: how do you know your write only mapped to a single sector? There's nothing saying the I/O was well aligned to a sector boundary once it has gone through the indirection of a filesystem.
There's nothing saying your disk HAS to implement sector atomicity. A "real disk" usually does, but it's not a mandatory or guaranteed property. Sadly your program can't "check" for this property unless it's an NVMe disk and you have access to the raw device, or you're sending raw commands with atomicity guarantees to a raw device.
Further, you're usually concerned with durability over multiple sectors (e.g. if power loss happens was the data I sent before this sector definitely on stable storage?). If there's any buffering going on, your write may have still only been in RAM/disk cache unless you used another command to check first / opened the file/device with flags requesting cache bypass and said flags were actually honoured.
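For the durability half of that, the usual POSIX tools are fsync()/fdatasync() after the write, or O_DSYNC/O_DIRECT at open time. A hedged sketch (durable_write is my own helper name; none of this makes the write atomic, it only narrows the window where the data lives solely in volatile caches, and whether the device really flushes its own cache remains hardware-dependent):
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

// Write a buffer and ask the OS to push it to stable storage before returning.
bool durable_write(const char* path, const void* buf, size_t len, off_t off) {
    int fd = open(path, O_WRONLY | O_DSYNC);        // or O_DIRECT with an aligned buffer
    if (fd < 0) return false;
    bool ok = pwrite(fd, buf, len, off) == static_cast<ssize_t>(len)
              && fdatasync(fd) == 0;                // belt-and-braces alongside O_DSYNC
    close(fd);
    return ok;
}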

Why can't DBMSes rely on the OS buffer pool?

Stonebraker's paper (Operating System Support for Database Management) explains that, "the overhead to fetch a block from the buffer pool manager usually includes that of a system call and a core-to-core move." Forget about the buffer-replacement strategy, etc. The only point I question is the quoted.
My understanding is that when a DBMS wants to read a block x it issues a common read instruction. There should be no difference from that of any other application requesting a read.
I'm not looking for generic answers (I got them, and read papers). I seek a detailed answer of the described problem.
See Does a file read from a Java application invoke a system call?
Reading from your other question, and working forward:
When the DBMS must bring a page in from disk, it will involve at least one system call. At this point most DBMSs place the page into their own buffer. (It also ends up in the OS's buffer, but that's unimportant.)
So, we have one system call. However, we can avoid any further system calls. This is possible because the DBMS is caching pages in its own memory space. The first thing the DBMS will do when it decides it needs a page is check and see if it has it in its cache. If it does, it retrieves it from there without ever invoking a system call.
The DBMS is free to expire pages in its cache in whatever way is most beneficial for its IO needs. The OS's cache is expired in a more general way since the OS has other things to worry about. One example of this is that a DBMS will typically use a great deal of memory to cache pages as it knows that disk IO is one of the most expensive things it can do. The OS won't do this as it has to balance the cost of disk IO against having memory for other applications to use.
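A minimal sketch of that idea (not any particular DBMS's code; BufferPool, PAGE_SIZE and get_page() are my own names): on a hit the page comes straight out of the process's own memory with no system call, and only a miss pays for the pread() plus the kernel-to-user copy:
#include <sys/types.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

constexpr std::size_t PAGE_SIZE = 8192;   // typical DBMS page size, assumption

struct BufferPool {
    int fd;                                                    // data file descriptor, opened elsewhere
    std::unordered_map<uint64_t, std::vector<uint8_t>> pages;  // page_no -> cached page

    // Return page 'page_no', reading it from disk only on a miss.
    const std::vector<uint8_t>& get_page(uint64_t page_no) {
        auto it = pages.find(page_no);
        if (it != pages.end())
            return it->second;                                 // hit: no system call at all

        std::vector<uint8_t> buf(PAGE_SIZE);
        pread(fd, buf.data(), PAGE_SIZE,                       // miss: one read syscall
              static_cast<off_t>(page_no * PAGE_SIZE));
        // A real pool would check the return value and evict with a DBMS-aware
        // policy (e.g. favour index pages) when it is full.
        return pages.emplace(page_no, std::move(buf)).first->second;
    }
};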
The operating system disk i/o must be generalised to work for a variety of situations. The DBMS can sometimes gain significant performance using less general code that is optimised to its own needs.
The DBMS does its own caching, so doesn't want to work through the O/S caching. It "owns" the patch of disk, so it doesn't need to worry about sharing with other processes.
Update
The link to the paper is a help.
Firstly, the paper is almost thirty years old and refers to long-obsolete hardware. Notwithstanding that, it makes quite interesting reading.
Secondly, understand that disk I/O is a layered process. It was in 1981 and is even more so now. At the lowest level, a device driver issues physical read/write instructions to the hardware. Above that may be the OS kernel code, then OS user-space code, then the application. Between a C program's fread() and the disk heads moving there are at least three or four levels, and possibly considerably more. A DBMS seeking to improve performance might bypass some of those layers and talk directly to the kernel, or even lower.
I recall some years ago installing Oracle on a Sun box. It had an option to dedicate a disk as a "raw" partition, where Oracle would format the disk in its own manner and then talk straight to the device driver. The O/S had no access to the disk at all.
It's mainly a performance issue. A dbms has highly specific and unusual I/O demands.
The OS may have any number of processes doing I/O and filling its buffers with the assorted cached data that this produces.
And of course there is the issue of size and of what gets cached (a DBMS may be able to do better caching for its own needs than the more generic device buffer caching can).
And then there is the issue that a generic “block” may in fact amount to a considerably larger I/O burden (this depends on partitioning and such like) than what a dbms ideally would like to bear; its own cache may be tuned to work better with the layout of the data on the disk and thereby able to minimise I/O.
A further thing is the issue of indexes and similar means to speed up queries, which of course works rather better if the cache actually knows what these mean in the first place.
The real issue is that the file buffer cache is not in the filesystem used by the DBMS; it's in the kernel and shared by all of the filesystems resident in the system. Any memory read out of the kernel must be copied into user space: this is the core-to-core move you read about.
Beyond this, some other reasons you can't rely on the system buffer pool:
A DBMS often has a really good idea about its upcoming access patterns, but it can't communicate those patterns to the kernel. This can lead to lower performance.
The buffer cache is traditionally stored in a fixed-size kernel memory range, so it cannot grow or shrink. That also means the cache is much smaller than main memory, so by relying on the buffer cache alone a DBMS would be unable to take full advantage of the system's resources.
I know this is old, but it came up as unanswered.
Essentially:
The OS uses a separate address space for every process.
Retrieving information from any other address space requires a system call or page fault. **(see below)
The DBMS is a process with its own address space.
The OS buffer pool Stonebraker describes is in the kernel address space.
So ... to get data from the kernel address space to the DBMS's address space, a system call or page fault is unavoidable.
You're correct that accessing data from the OS buffer pool manager is no more expensive than a normal read() call. (In fact, it's done with a normal read call.) However, Stonebraker is not talking about that. He's specifically discussing the caching needs of DBMSes, after the data has been read from the disk and is present in RAM.
In essence, he's saying that the OS's buffer pool cache is too slow for the DBMS to use because it's stored in a different address space. He's suggesting using a local cache in the same process (and therefore same address space), which can give you a significant speedup for applications like DBMSes which hit the cache heavily, because it will eliminate that syscall overhead.
Here's the exact paragraph where he discusses using a local cache in the same process:
However, many DBMSs including INGRES [20] and System R [4] choose to put a DBMS managed buffer pool in user space to reduce overhead. Hence, each of these systems has gone to the trouble of constructing its own buffer pool manager to enhance performance.
He also mentions multi-core issues in the excerpt you quote above. Similar effects apply here, because if you can have just one cache per core, you may be able to avoid the slowdowns from CPU cache flushes when multiple CPUs are reading and writing the same data.
** BTW, I believe Stonebraker's 1981 paper is actually pre-mmap. He mentions it as future work. "The trend toward providing the file system as a part of shared virtual memory (e.g., Pilot [16]) may provide a solution to this problem."