Why do longer pipelines make a single delay slot insufficient? - cpu-architecture

I read the following statement in Patterson & Hennessy's Computer Organization and Design textbook:
As processors go to both longer pipelines and issuing multiple instructions per clock cycle, the branch delay becomes longer, and a single delay slot is insufficient.
I can understand why "issuing multiple instructions per clock cycle" can make a single delay slot insufficient, but I don't know why "longer pipelines" cause it.
Also, I do not understand why longer pipelines cause the branch delay to become longer. Even with a longer pipeline (more steps to finish one instruction), there's no guarantee that the clock cycle time will increase, so why would the branch delay increase?

If you add any stages before the stage that detects branches (and evaluates taken/not-taken for conditional branches), 1 delay slot no longer hides the "latency" between the branch entering the first stage of the pipeline and the correct program-counter address after the branch being known.
The first fetch stage needs info from later in the pipeline to know what to fetch next, because it doesn't itself detect branches. For example, in superscalar CPUs with branch prediction, they need to predict which block of instructions to fetch next, separately from (and earlier than) predicting which way a branch goes after it's already decoded.
1 delay slot is only sufficient in MIPS I because branch conditions are evaluated in the first half of a clock cycle in EX, in time to forward to the 2nd half of IF, which doesn't need a fetch address until then. (Original MIPS is a classic 5-stage RISC: IF ID EX MEM WB.) See Wikipedia's article on the classic RISC pipeline for much more detail, specifically the control hazards section.
That's why MIPS is limited to simple conditions like beq (detect any mismatch by XORing the two registers) or bltz (check the sign bit). It cannot do anything that requires an adder for carry propagation, so a general blt between two registers is only a pseudo-instruction.
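To make that concrete, here is a small C sketch (my illustration, not MIPS hardware) of why those particular conditions are cheap: equality needs only a bit-parallel XOR followed by an any-mismatch check, and a sign test reads one bit, whereas a general two-register signed compare needs a full subtraction whose borrow has to ripple across the whole word.

#include <stdint.h>

/* beq-style test: XOR is independent per bit position, then reduce to
 * detect any mismatch. No carry propagates between bit positions. */
int is_equal(uint32_t a, uint32_t b) {
    return (a ^ b) == 0;
}

/* bltz-style test: just the sign bit of one register. */
int is_negative(int32_t a) {
    return (uint32_t)a >> 31;
}

/* A general blt needs a - b, i.e. carry/borrow propagation across all 32
 * bits, which is why MIPS provides it only as a pseudo-instruction
 * (assembler expands it to slt + bne). */
int is_less(int32_t a, int32_t b) {
    return a < b;
}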
This half-cycle requirement is very restrictive: by contrast, a longer front-end can absorb the latency of a larger / more-associative L1 instruction cache that takes more than half a cycle to respond on a hit. (MIPS I decode is very simple, though, with the instruction format intentionally designed so machine-code bits can be wired directly as internal control signals. So you could maybe make decode the "half cycle" stage, with fetch getting 1 full cycle, but even 1 full cycle isn't much time at the shorter cycle times of higher clock speeds.)
Raising the clock speed might require adding another fetch stage. Decode does have to detect data hazards and set up bypass forwarding; original MIPS kept that simpler by not detecting load-use hazards, instead making software respect a load-delay slot until MIPS II. A superscalar CPU has many more possible hazards, even with 1-cycle ALU latency, so detecting what has to forward to what requires more complex logic for matching the destination registers of older in-flight instructions against the sources of younger instructions.
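As a rough sketch of that hazard-matching logic (a simplified classic-5-stage flavour, not any real design): each older in-flight instruction advertises the register it will write, and every younger source is compared against it. A superscalar machine has to replicate this comparison across all issue slots and pipeline stages.

#include <stdbool.h>

/* One older in-flight instruction that may produce a value to forward. */
struct producer {
    bool writes_reg;   /* does it write a register at all?          */
    int  dest;         /* architectural destination register number */
};

/* Should `src_reg` of a younger instruction take a bypass from `p`?
 * (dest 0 is the hard-wired zero register on MIPS, never forwarded.) */
bool forward_from(struct producer p, int src_reg) {
    return p.writes_reg && p.dest != 0 && p.dest == src_reg;
}

/* A 2-wide superscalar already needs this check for each source of each
 * younger instruction, against every older producer still in flight --
 * the comparator count grows quickly with width and depth. */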
A superscalar pipeline might even want some buffering in instruction fetch to avoid bubbles. A multi-ported register file might be slightly slower to read, maybe requiring an extra decode pipeline stage, although probably that can still be done in 1 cycle.
So, as well as making 1 branch delay slot insufficient by the very nature of superscalar execution, a longer pipeline also increases branch latency, if the extra stages are between fetch and branch resolution. e.g. an extra fetch stage and a 2-wide pipeline could have 4 instructions in flight after a branch instead of 1.
But instead of introducing more branch delay slots to hide this branch delay, the actual solution is branch prediction. (However, some DSPs and high-performance microcontrollers do have 2 or even 3 branch delay slots.)
Branch-delay slots also complicate exception handling: you need both a fault-return address and a next-after-that address, because if the fault was in the delay slot of a taken branch, the instruction after the faulting one is the branch target, not fault-PC + 4.


What happens with nested branches and speculative execution?

Alright, so I know that if a particular conditional branch has a condition that takes time to compute (memory access, for instance), the CPU assumes a condition result and speculatively executes along that path. However, what would happen if, along that path, yet another slow conditional branch pops up (assuming, of course, that the first condition hasn't been resolved yet and the CPU can't just commit the changes)? Does the CPU just speculate inside the speculation? What happens if the last condition is mispredicted but the first wasn't? Does it just rollback all the way?
I'm talking about something like this:
if (value_in_memory == y) {
    // computations
    if (another_val_memory == x) {
        // computations
    }
}
Speculative execution is the regular state of execution, not a special mode that an out of order CPU enters when it sees a branch and then leaves when the branch is no longer in flight.
This is easier to see if you consider that it's not just branches that can fault, but many instructions, including those that access memory, have restrictions on their input values, etc. So any substantial out of order execution implies constant speculation, and CPUs are built around that idea.
So "nested branches" doesn't end up being special in that sense.
Now, modern CPUs have a variety of methods for quick branch misprediction recovery, faster than recovery from other types of faults¹. For example they may snapshot the state of the register mapping at some branches, to allow recovery to start before the branch is at the head of the reorder buffer. Since it is not always feasible to snapshot at all branches, there might be complicated heuristics involved to decide where to take snapshots.
I mention this last part because it is one way in which nested branches might matter: when there are lots of branches in flight, you might hit some microarchitectural limits related to the tracking of these branches for recovery purposes. For more details, you can look through patents for "branch order buffer" (for Intel techniques, but there are no doubt others).
¹ The basic recovery method is to keep executing until the faulting instruction is the next to retire, and then throw away all younger instructions. In the context of branch mispredictions, this means you could actually suffer two or more mispredictions, only the oldest of which actually takes effect: e.g., a younger branch mispredicts, and while executing up to that branch (at which point recovery could occur), an older branch also mispredicts, so the younger mispredict ends up getting discarded.
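To picture the snapshot idea mentioned above (just a sketch under simplifying assumptions, not how any specific CPU lays this out): renaming a branch saves a copy of the current register-alias table, and a detected mispredict restores that copy and drops younger checkpoints, instead of waiting for the branch to reach the head of the ROB.

#include <stdint.h>

#define NUM_ARCH_REGS 32
#define MAX_SNAPSHOTS 8        /* hypothetical limit on branches tracked for fast recovery */

typedef struct {
    uint16_t map[NUM_ARCH_REGS];     /* architectural reg -> physical reg */
} RAT;

RAT rat;                             /* current speculative mapping            */
RAT snapshots[MAX_SNAPSHOTS];        /* one checkpoint per tracked branch      */
int num_snapshots = 0;

/* Called when a branch is renamed: try to checkpoint the current RAT.
 * If we're out of snapshot slots, this branch falls back to the slow
 * recovery path (drain to retirement). */
int take_snapshot(void) {
    if (num_snapshots == MAX_SNAPSHOTS)
        return -1;                   /* no fast recovery for this branch */
    snapshots[num_snapshots] = rat;
    return num_snapshots++;
}

/* Called when the branch with snapshot `id` turns out mispredicted:
 * restore the mapping as of that branch; younger branches are squashed too. */
void recover(int id) {
    rat = snapshots[id];
    num_snapshots = id;
}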
(Maybe not a complete answer, but I had some of this written when @BeeOnRope posted an answer. Posting this anyway for some more links and technical details in case anyone's curious.)
Everything is always speculative until it reaches retirement and becomes non-speculative, definitely happened, part of the architectural state.
e.g. any load might fault with a bad address, any div might trap on divide by zero. See also Out-of-order execution vs. speculative execution. That and What exactly happens when a skylake CPU mispredicts a branch? mention that branch mispredicts are handled specially, because they're expected to be frequent: fast recovery can start before a mis-predicted branch reaches retirement, unlike the behaviour for a faulting load, for example. (That's part of why Meltdown is exploitable.)
So even "regular" instructions are executed speculatively before being commited, and the only distinction between them is a human-made distinction, not computer-made? I presume, then, that the CPU stores multiple, possible rollback points? For instance if I have load instructions that may lead to page faults or simply use stale values, inside a conditional branch, the CPU identifies such instructions and scenarios and saves a state for each of them? I feel like I misunderstood because this may lead to a lot of storing register states and complicated dependencies.
The retirement state is always consistent so you can always roll back to there and discard all in-flight work, e.g. if an external interrupt arrives you want to handle it without waiting for a chain of a dozen cache miss loads to all execute. When an interrupt occurs, what happens to instructions in the pipeline?
This tracking basically happens for free or is something you need to do anyway to be able to detect which instruction faulted, not just that there was a problem somewhere. (This is called "precise exceptions")
The real distinction humans can usefully make is speculation that has a real chance of being wrong during execution of non-error cases. If your code gets a bad pointer, it doesn't really matter how it performs; it's going to page-fault and that's going to be very slow compared to local OoO exec details.
You're talking about a modern out-of-order (OoO) execution (not just fetch) CPU, like modern Intel or AMD x86, high-end ARM, MIPS r10000, etc.
The front-end is in-order (with speculation down predicted paths), and so is commit (aka retirement) from the out-of-order back-end into non-speculative retirement state (aka known-good architectural state).
The CPU uses two major structures to track instructions (or on x86, uops = parts of instructions) in the back-end. The last stage of the front-end (after fetch / decode) allocates/renames instructions and adds them into both of these structures at once.
RS = Reservation Station = scheduler: not-yet-executed instructions, waiting for an execution unit. The RS tracks dependencies and sends the oldest-ready uops to execution units that are ready.
ROB = ReOrder Buffer: not-yet-retired instructions. Instructions enter and leave in-order so it can just be a circular buffer.
Includes a flag to mark each entry as executed or not, set once the RS has sent it to an execution unit which reports success. The oldest instructions in the ROB that all have their done-executing bit set can "retire".
Also includes a flag which indicates "fault if this reaches retirement". This avoids spending time handling page faults from a load instruction on the wrong path of execution (which might well have a pointer into an unmapped page), for example either in the shadow of a branch mispredict, or just after another instruction (in program order) that should have faulted first but that OoO exec only got to later.
(I'm also leaving out register-renaming onto a large physical register file.
That's the "rename" part. Allocate includes choosing which execution port an instruction will use, and reserving a load or store buffer entry for memory instructions.)
(There's also a store-buffer; stores don't write directly to L1d cache, they write to the store buffer. This makes it possible to speculatively execute stores and still roll back without them becoming visible to other cores. It also decouples cache-miss stores from execution. Once a store instruction retires, the store-buffer entry "graduates" and is eligible to commit to L1d cache, once MESI gets exclusive access to the cache line, and once memory-ordering rules are satisfied.)
Execution units detect whether an instruction should fault, or was mis-speculated and should roll back, but don't necessarily act on that until the instruction reaches retirement.
In-order retirement is the step that recovers program order after OoO exec, including in the case of exceptions or mis-speculation.
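A toy model of that retirement step, using the two ROB flags described above (a sketch, not any real microarchitecture): retirement walks the circular buffer in program order, commits entries that executed cleanly, and only raises an exception when a faulting entry becomes the oldest.

#include <stdbool.h>

#define ROB_SIZE 224   /* illustrative; roughly Skylake-sized */

struct rob_entry {
    bool executed;          /* set when an execution unit reports completion          */
    bool fault_on_retire;   /* set by the execution unit, acted on only at retirement */
    /* ... destination physical register, uop info, etc. */
};

struct rob {
    struct rob_entry e[ROB_SIZE];
    int head;               /* oldest not-yet-retired entry        */
    int tail;               /* next free slot for allocate/rename  */
};

/* Retire as many of the oldest entries as are done executing.
 * A wrong-path load that would have faulted is simply squashed before its
 * entry ever becomes the oldest, so no page fault is taken for it. */
void retire(struct rob *r) {
    while (r->head != r->tail && r->e[r->head].executed) {
        if (r->e[r->head].fault_on_retire) {
            /* now, and only now, raise the precise exception */
            break;
        }
        r->head = (r->head + 1) % ROB_SIZE;   /* commit to architectural state */
    }
}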
Terminology: Intel calls it "issue" when instructions are sent from the front-end into the ROB + RS. Other computer architecture people often call that "dispatch".
Sending uops from the RS to execution units is called "dispatch" by Intel, "issue" by other people.

How far does the branch prediction go?

I understand there is a branch predictor in modern CPU designs trying to guess which way a branch will go.
Assuming there is a jump instruction that will transfer control flow to either basic block A or basic block B.
If the predictor decides to go to A, but when the jump's condition is actually computed it turns out B was the right choice instead of A, how far has execution into basic block A gone at that point?
Are all the instructions in basic block A already executed? Or is just the first instruction executed?
How can we find out the actual result and know more about the branch prediction strategies?
The CPU assumes branch prediction was correct and continues unless/until it discovers it wasn't. (HW can't detect "basic blocks": it doesn't know when it reaches an address that some other instruction branches to. And you wouldn't want to stop anyway. Modern branch prediction that's good enough to be usable in an out-of-order CPU is usually correct something like 95 to 99% of the time.)
Discovering a mispredict (or confirming a correct prediction) happens when the branch instruction itself is decoded (unconditional direct branch) or executed (conditional and/or indirect).
In case of a mispredict, the CPU has to re-steer the front-end (fetch/decode) to the correct path. On an in-order CPU, no instructions after a branch can execute until the branch itself executes, so it's always just a matter of restarting fetch/decode.
(In-order superscalar could actually execute an instruction after a branch, but an in-order pipeline makes it relatively easy to squash before it reaches write-back and actually changes architectural state. A store is probably the trickiest because you need to discard that store-buffer entry; its visible effect would be on memory, not write-back to registers. But anyway, that along with decoupling execution from cache-miss stores, and other reasons, is why even in-order pipelined CPUs have store buffers.)
Or for an out-of-order CPU with speculative execution that allows instructions from the wrong path to actually execute while a conditional or indirect branch is waiting for its input, it has to flush the back-end and restart issue from the correct path of execution.
(With fast-recovery and a branch-order-buffer, this can happen even if some of the instructions before the branch haven't finished executing yet. e.g. in a loop with a simple loop condition and a longer dependency chain in the loop body, so execution of the loop-condition dependency chain can run ahead and discover a mispredict in the last iteration when the loop branch falls through, without waiting until that instruction is ready to retire. i.e. without waiting for the loop body to execute that far.)
Multiple branches can be in flight at once. And of course any load or store can fault. An OoO exec CPU basically treats everything as speculative until retirement.
How can we ... know more about the branch prediction strategies?
https://danluu.com/branch-prediction/ is pretty good. See also the first chapter of Agner Fog's microarch guide (for x86) where he covers what real Intel and AMD CPUs do, as well as some background. https://agner.org/optimize/
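If you want to see the effect yourself, a minimal experiment (my sketch, not from either of those sources) is to run the same branchy loop over sorted vs. unsorted data and compare timings; on Linux, perf stat -e branches,branch-misses on each run shows the mispredict rate directly. With sorted input the if() branch is almost perfectly predicted; with random input it mispredicts close to half the time.

#include <stdio.h>
#include <stdlib.h>

/* Sum only elements below a threshold. The if() is the branch under test. */
static long sum_below(const int *a, size_t n, int threshold) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        if (a[i] < threshold)
            sum += a[i];
    return sum;
}

static int cmp_int(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

int main(int argc, char **argv) {
    size_t n = 1u << 24;
    int *a = malloc(n * sizeof *a);
    for (size_t i = 0; i < n; i++)
        a[i] = rand() & 0xFF;
    if (argc > 1)                       /* pass any argument to sort the data */
        qsort(a, n, sizeof *a, cmp_int);
    long total = 0;
    for (int rep = 0; rep < 20; rep++)
        total += sum_below(a, n, 128);
    printf("%ld\n", total);             /* keep the result live */
    free(a);
    return 0;
}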

Are users punished for bidding for wrong fork in Ouroboros algorithm?

I am confused by the description of PoS algorithm here https://hackernoon.com/a-hitchhikers-guide-to-consensus-algorithms-d81aae3eb0e3
In PoS, the blocks aren’t created by miners doing work, but by minters staking their tokens to “bet” on which blocks are valid. In the case of a fork, minters spend their tokens voting on which fork to support. Assuming most people vote on the correct fork, validators who voted on the wrong fork would “lose their stake” in the correct one.
Is this how Ouroboros algorithm works?
No.
A user's stake is not directly affected by the staking process in any variant of the Ouroboros protocol. In practice, if a user extends the "wrong fork", they simply end up not getting any rewards for that block down the line.
Slashing algorithms are not necessary for Ouroboros, as it employs cryptography and probabilistic analysis to rule out the attacks it is designed to prevent.
Even if it were necessary, however, typically it comes in the form of punishing provably bad behaviour, and not honest "mistakes" (of which extending a shorter chain is one). Specifically, the variants I've seen will punish users if they create two blocks at the same point, i.e. they actively fork the chain.
In the Ouroboros Proof of Stake protocol, there is no penalty for betting on the wrong arm of a fork. And this does not only apply to deliberate, malicious fork attempts.
In Ouroboros it happens constantly and "by design" that two block producers each produce a block for the same slot. This happens because slot allocation is based on mathematical probabilities and the weight of the stake delegations. You can think of it as rolling dice: for example, a small pool must roll a 1 to become a slot leader, while a larger pool fills a slot if it rolls a 1 or a 2. (The real probabilities are of course much lower than in this 6-sided-die example.)
Each pool constantly evaluates its own secret VRF key, and thus independently determines when it is slot leader (in the Cardano mainnet, which uses Ouroboros PoS, this is a 2 second cycle). All other pools, when they receive a new block, can check with the corresponding public VRF key that it was produced by a legitimate slot leader.
Now, in some cases (about 5% of all blocks), chance may make two or even more stake pools legitimate slot leaders. So here several legitimate blocks are created without any bad intention at all.
However, Cardano's Ouroboros implementation has a relatively fair solution for this, which allows a quick resolution of the fork: the winner is the block whose creator rolled the lowest number. This is a slight advantage for small pools, because they basically have to roll low to become leader at all. But on average a pool loses about half of these so-called slot battles, i.e. roughly 2.5% of all its generated blocks, to these random collisions.
However, it is important that every node in the network can immediately determine the correct valid block without further data and time loss and that there is consensus on this. So these (by design) forks can be solved without much impact, and there is no need at all to punish anyone for this, especially not by slashing the participating tokens.
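To illustrate the "dice" picture, here is a deliberately simplified toy model (not the real Ouroboros Praos VRF math; the pool count and stake split are made-up assumptions): give every pool an independent per-slot draw with success probability proportional to its stake, and count how often two or more pools win the same slot.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int    num_pools = 100;     /* hypothetical: 100 equally sized pools  */
    const double f         = 0.05;    /* active-slot coefficient, as on Cardano */
    const long   slots     = 200000;  /* number of slots to simulate            */

    long empty = 0, single = 0, battles = 0;
    srand(1);
    for (long s = 0; s < slots; s++) {
        int leaders = 0;
        for (int p = 0; p < num_pools; p++) {
            double stake = 1.0 / num_pools;     /* each pool's stake share */
            /* Simplified leader lottery: independent draw per pool per slot. */
            if ((double)rand() / RAND_MAX < f * stake)
                leaders++;
        }
        if (leaders == 0)      empty++;
        else if (leaders == 1) single++;
        else                   battles++;       /* >1 legitimate block: a "slot battle" */
    }
    printf("empty slots: %ld, one leader: %ld, slot battles: %ld\n",
           empty, single, battles);
    return 0;
}

The exact percentages depend on the real stake distribution and on the actual leader-election formula, so don't read the printed numbers as Cardano statistics; the point is only that legitimate collisions fall out of independent per-slot lotteries.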

How the speculative load and store happen in modern Intel processor? [duplicate]

Given the small program shown below (handcrafted to look the same from a sequential consistency / TSO perspective), and assuming it's being run by a superscalar out-of-order x86 cpu:
Load A        <-- A is in main memory
Load B        <-- B is in L2
Store C, 123  <-- C is in L1
I have a few questions:
Assuming a big enough instruction-window, will the three instructions be fetched, decoded, executed at the same time? I assume not, as that would break execution in program order.
The 1st load is going to take longer fetching A from main memory than the 2nd takes fetching B from L2. Will the latter have to wait until the first is fully executed? Will the fetching of B only start after Load A is fully executed, or what does it have to wait for?
Why would the store have to wait for the loads? If it does, will the instruction just wait to be committed in the store buffer until the loads finish, or will it have to sit and wait for the loads right after decoding?
Terminology: "instruction-window" normally means out-of-order execution window, over which the CPU can find ILP. i.e. ROB or RS size. See Understanding the impact of lfence on a loop with two long dependency chains, for increasing lengths
The term for how many instructions can go through the pipeline in a single cycle is pipeline width. e.g. Skylake is 4-wide superscalar out-of-order. (Parts of its pipeline, like decode, uop-cache fetch, and retirement, are wider than 4 uops, but issue/rename is the narrowest point.)
Terminology: "wait to be committed in the store buffer" store data + address gets written into the store buffer when a store executes. It commits from the store buffer to L1d at any point after retirement, when it's known to be non-speculative.
(In program order, to maintain the TSO memory model of no store reordering. A store buffer allows stores to execute inside this core out of order but still commit to L1d (and become globally visible) in-order. Executing a store = writing address + data to the store buffer.)
See also Can a speculatively executed CPU branch contain opcodes that access RAM? and, for what a store buffer is, Size of store buffers on Intel hardware? What exactly is a store buffer?
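Here is that rough lifecycle sketch (my simplification; the field names and the size are made up for illustration): an entry is allocated in program order, filled when the store-address and store-data uops execute, marked retired when the store instruction retires, and only then drained to L1d in order.

#include <stdbool.h>
#include <stdint.h>

struct sb_entry {
    uint64_t addr;
    uint64_t data;
    bool     executed;   /* address + data have been written by the store uops */
    bool     retired;    /* the store instruction has retired (non-speculative) */
};

#define SB_SIZE 56       /* illustrative; real sizes vary by microarchitecture */

struct store_buffer {
    struct sb_entry e[SB_SIZE];
    int head, tail;      /* drained in program order -> no StoreStore reordering under TSO */
};

/* Assumed helper for this sketch: write to L1d once we own the line (MESI). */
void commit_to_l1d(uint64_t addr, uint64_t data);

/* Drain only stores that have already retired, oldest first. Wrong-path
 * stores never retire, so their entries are discarded instead of drained. */
void drain(struct store_buffer *sb) {
    while (sb->head != sb->tail && sb->e[sb->head].retired) {
        commit_to_l1d(sb->e[sb->head].addr, sb->e[sb->head].data);
        sb->head = (sb->head + 1) % SB_SIZE;
    }
}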
The front-end is irrelevant. 3 consecutive instructions might well be fetched in the same 16-byte fetch block, and might go through pre-decode and decode in the same cycle as a group. And (also or instead) issue into the out-of-order back-end as part of a group of 3 or 4 uops. IDK why you think any of that would cause any potential problem.
The front end (from fetch to issue/rename) processes instructions in program order. Processing simultaneously doesn't put later instructions before earlier ones, it puts them at the same time. And more importantly, it preserves the information of what program order is; that's not lost or discarded because it matters for instructions that depend on the previous one¹!
There are queues between most pipeline stages, so (for example on Intel Sandybridge) instructions that pre-decode as part of a group of up-to-6 instructions might not hit the decoders as part of the same group of up-to-4 (or more with macro-fusion). See https://www.realworldtech.com/sandy-bridge/3/ for fetch, and the next page for decode. (And the uop cache.)
Executing (dispatching uops to execution ports from the out-of-order scheduler) is where ordering matters. The out-of-order scheduler has to avoid breaking single-threaded code.²
Usually issue/rename is far ahead of execution, unless you're bottlenecked on the front-end. So there's normally no reason to expect that uops that issued together will execute together. (For the sake of argument, let's assume that the 2 loads you show do get dispatched for execution in the same cycle, regardless of how they got there via the front-end.)
But anyway, there's no problem here starting both loads and the store at the same time. The uop scheduler doesn't know whether a load will hit or miss in L1d. It just sends 2 load uops to the load execution units in a cycle, and a store-address + store-data uop to those ports.
[load ordering]
This is the tricky part.
As I explained in an answer + comments on your last question, modern x86 CPUs will speculatively use the L2 hit result from Load B for later instructions, even though the memory model requires that this load happens after Load A.
But if no other cores write to cache line B before Load A completes, then nothing can tell the difference. The Memory-Order Buffer takes care of detecting invalidations of cache lines that were loaded from before earlier loads complete, and doing a memory-order mis-speculation pipeline flush (rollback to retirement state) in the rare case that allowing load re-ordering could change the result.
Why would the store have to wait for the loads?
It won't, unless the store-address depends on a load value. The uop scheduler will dispatch the store-address and store-data uops to execution units when their inputs are ready.
It's after the loads in program order, and the store buffer will make it even farther after the loads as far as global memory order is concerned. The store buffer won't commit the store data to L1d (making it globally visible) until after the store has retired. Since it's after the loads, they'll have also retired.
(Retirement is in-order to allow precise exceptions, and to make sure no previous instructions took an exception or were a mispredicted branch. In-order retirement allows us to say for sure that an instruction is non-speculative after it retires.)
So yes, this mechanism does ensure that the store can't commit to L1d until after both loads have taken data from memory (via L1d cache which provides a coherent view of memory to all cores). So this prevents LoadStore reordering (of earlier loads with later stores).
I'm not sure if any weakly-ordered OoO CPUs do LoadStore reordering. It is possible on in-order CPUs when a cache-miss load comes before a cache-hit store, and the CPU uses scoreboarding to avoid stalling until the load data is actually read from a register, if it still isn't ready. (LoadStore is a weird one: see also Jeff Preshing's Memory Barriers Are Like Source Control Operations). Maybe some OoO exec CPUs can also track cache-miss stores post retirement when they're known to be definitely happening, but the data just still hasn't arrived yet. x86 doesn't do this because it would violate the TSO memory model.
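For reference, the LoadStore litmus test being discussed looks roughly like this in C11 atomics (a sketch: at the hardware level, x86-TSO forbids the r1 == 1 && r2 == 1 outcome because an earlier load can't be reordered with a later store, whereas a weakly-ordered machine that does LoadStore reordering could produce it; with relaxed ordering the compiler is also allowed to reorder, so portable code would need acquire/release to rule the outcome out):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int X, Y;
static int r1, r2;

static void *t1(void *arg) {
    (void)arg;
    r1 = atomic_load_explicit(&X, memory_order_relaxed);  /* earlier load */
    atomic_store_explicit(&Y, 1, memory_order_relaxed);   /* later store  */
    return NULL;
}

static void *t2(void *arg) {
    (void)arg;
    r2 = atomic_load_explicit(&Y, memory_order_relaxed);
    atomic_store_explicit(&X, 1, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    atomic_store(&X, 0);
    atomic_store(&Y, 0);
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* r1 == 1 && r2 == 1 would mean some thread's later store became
     * globally visible before its own earlier load: LoadStore reordering. */
    printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}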
Footnote 1: There are some architectures (typically VLIW) where bundles of simultaneous instructions are part of the architecture in a way that's visible to software. So if software can't fill all 3 slots with instructions that can execute simultaneously, it has to fill them with NOPs. It might even be allowed to swap 2 registers with a bundle that contained mov r0, r1 and mov r1, r0, depending on whether the ISA allows instructions in the same bundle to read and write the same registers.
But x86 is not like that: superscalar out-of-order execution must always preserve the illusion of running instructions one at a time in program order. The cardinal rule of OoO exec is: don't break single-threaded code.
Anything that would violate this can only be done with checking for hazards, or speculatively with rollback upon detection of mistakes.
Footnote 2: (continued from footnote 1)
You can fetch / decode / issue two back-to-back inc eax instructions, but they can't execute in the same cycle because register renaming + the OoO scheduler has to detect that the 2nd one reads the output of the first.
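The same point can be seen from software with a sketch like the following (my example; compile with optimizations and time each function with a benchmark harness, exact speedups depend on the CPU): the first loop is one serial dependency chain, so it runs at the latency of the multiply, while the second has four independent chains that register renaming plus out-of-order execution can overlap.

#include <stdint.h>

/* One long dependency chain: each iteration needs the previous result,
 * like back-to-back inc eax, so throughput is limited by latency. */
uint64_t dep_chain(uint64_t x, uint64_t n) {
    for (uint64_t i = 0; i < n; i++)
        x = x * 3 + 1;
    return x;
}

/* Four independent chains: none reads another's result, so a superscalar
 * OoO core can keep several of these operations in flight per cycle. */
uint64_t independent(uint64_t x, uint64_t n) {
    uint64_t a = x, b = x + 1, c = x + 2, d = x + 3;
    for (uint64_t i = 0; i < n; i++) {
        a = a * 3 + 1;
        b = b * 3 + 1;
        c = c * 3 + 1;
        d = d * 3 + 1;
    }
    return a ^ b ^ c ^ d;   /* combine so nothing gets optimized away */
}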

Why don't operating systems schedule a new task whenever an unknown branch occurs?

This is a purely conceptual question. Why don't OSes switch tasks whenever a branch that has never been taken before occurs? Dynamic branch prediction only works with branches that have been taken in the past, and static branch prediction is only correct in certain scenarios. If you have no data on a branch, it seems like the OS and the processor should start putting a separate task into the pipeline rather than blindly guess the branch. Then you can compute the result of the branch, and execute that branch when the original task is scheduled again. Next time the branch is encountered, the processor can use dynamic prediction.
Is there a reason this method isn't used? Or is it used and I'm just unaware?
The overhead of a context switch is far too high compared to just resolving the branch: a mispredict typically costs on the order of tens of cycles, while switching tasks costs vastly more.
This is why SMT (simultaneous multi-threading) was invented: while one thread fumbles around with its branch mispredict, the other threads on the same core can advance.
The newer POWER processors even have 8 hardware threads on each core to ensure maximum throughput in the face of branch mispredictions, instruction cache misses, data cache misses, and data dependencies.