What determines CPU cycle time - cpu-architecture

I'd like to find out whether there is a relationship between CPU cycle time and pipeline depth. I have always thought that CPU cycle time is entirely determined by the CPU frequency (it is simply the inverse of the frequency). This video, however, mentions that with more pipeline stages the cycle time can be decreased, since each stage does less work per cycle. So what actually determines the CPU cycle time: the frequency or the number of stages in the pipeline? Or can we say that pipeline depth affects the frequency?

Cycle time is literally defined as the inverse of frequency. This is just basic physics: f = 1/t, where t is the period (https://en.wikipedia.org/wiki/Frequency#Period_versus_frequency). Frequency has dimensions of 1/seconds.
Saying you can shorten the cycle time by lengthening the pipeline is just another way of saying you can raise the frequency.
(And yes, chopping one stage into two means you have two shorter critical paths instead of one long one. Each stage's result has to be ready before the end of a cycle so it can be latched for the next stage, so the longest critical path sets a lower limit on the cycle time; splitting that path relaxes the limit. For a given gate propagation delay, you can only fit a certain number of boolean operations into one clock cycle, and every stage has to have its output ready in time.)
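To make the relationship concrete, here is a tiny sketch in Python (the per-stage delays and latch overhead are made-up numbers, not taken from any real CPU): the clock period has to cover the slowest stage plus its latch, and the frequency is simply the inverse of that period.

    # Hypothetical per-stage logic delays and latch overhead (illustrative only).
    stage_delays_ns = [0.30, 0.25, 0.35, 0.28, 0.32]
    latch_delay_ns = 0.05

    # The clock period must cover the slowest stage plus the latch overhead.
    cycle_time_ns = max(stage_delays_ns) + latch_delay_ns
    frequency_ghz = 1.0 / cycle_time_ns   # 1 / ns = GHz

    print(f"cycle time = {cycle_time_ns:.2f} ns")   # 0.40 ns
    print(f"frequency  = {frequency_ghz:.2f} GHz")  # 2.50 GHz

Splitting the slowest (0.35 ns) stage into two shorter stages would lower the maximum stage delay and therefore the cycle time, which is exactly the "deeper pipeline, higher frequency" trade-off described above.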
See also Modern Microprocessors: A 90-Minute Guide!


How can a Neural Network learn from testing outputs against external conditions which it cannot directly control

In order to simplify the question and hopefully the answer I will provide a somewhat simplified version of what I am trying to do.
Setting up fixed conditions:
Max Oxygen volume permitted in room = 100,000 units
Target Oxygen volume to maintain in room = 100,000 units
Maximum air processing cycles per second = 3.0 (minimum is 0.3)
Energy (watts) used per second = (100 W * cycles_per_second) squared
Maximum Oxygen Added to Air per "cycle" = 100 units (minimum 0 units)
1 person consumes 10 units of O2 per second
Max occupancy of room is 100 people (minimum is 1 person)
Inputs are processed every cycle and outputs can be changed each cycle; however, if an output is fed back in as an input it can only affect the next cycle.
Let's say I have these inputs:
A. current oxygen in room (range: 0 to 1000 units for simplicity - could be normalized)
B. current occupancy in room (0 to 100 people at max capacity), and/or this could be changed to total O2 used by all people in room per second (0 to 1000 units per second)
C. current cycles per second of air processing (0.3 to 3.0 cycles per second)
D. Current energy used (which is the above current cycles per second * 100 and then squared)
E. Current Oxygen added to air per cycle (0 to 100 units)
(possible outputs fed back in as inputs?):
F. previous change to cycles per second (+ or - 0.0 to 0.1 cycles per second)
G. previous cycles O2 units added per cycle (from 0 to 100 units per cycle)
H. previous change to current occupancy maximum (0 to 100 persons)
Here are the actions (outputs) my program can take:
Change cycles per second by increment/decrement of (0.0 to 0.1 cycles per second)
Change O2 units added per cycle (from 0 to 100 units per cycle)
Change current occupancy maximum (0 to 100 persons) - (basically allowing for forced occupancy reduction and then allowing it to normalize back to maximum)
The GOALS of the program are to maintain a homeostasis of:
as close to 100,000 units of O2 in room
do not allow room to drop to 0 units of O2 ever.
allow for current occupancy of up to 100 people per room for as long as possible without forcibly removing people (as O2 in the room is depleted over time and nears 0 units, people should be removed from the room down to the minimum, and then the maximum should be allowed to recover back up to 100 as more and more O2 is added back to the room)
and ideally use the minimum energy (watts) needed to maintain the above two conditions. For instance, if the room were down to 90,000 units of O2 and there were currently 10 people in the room (using 100 units of O2 per second), then instead of running at 3.0 cycles per second (90 kW) and 100 units per cycle to replenish 300 units per second in total (a surplus of 200 units over the 100 being consumed) for 50 seconds to clear the deficit of 10,000 units, at a total of 4,500 kW·s (kilowatt-seconds) used, it would be more ideal to run at, say, 2.0 cycles per second (40 kW), which would produce 200 units per second (a surplus of 100 units over consumed units) for 100 seconds to clear the deficit of 10,000 units, using a total of 4,000 kW·s.
NOTE: occupancy may fluctuate from second to second based on external factors that cannot be controlled (let's say people are coming and going into the room at liberty). The only control the system has is to forcibly remove people from the room and/or prevent new people from coming into the room by changing the max capacity permitted at the next cycle (let's just say the system can do this). We don't want the system to impose a permanent reduction in capacity just because it can only output enough O2 per second for 30 people when running at full power. We have a large volume of available O2, and it would take a while before that was depleted to dangerous levels and the system was required to forcibly reduce capacity.
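(To make the rules above concrete, here is a minimal Python sketch of the room dynamics as I read them; the function and variable names are mine, and clamping the O2 level at 0 and 100,000 units is an assumption.)

    def step(o2, occupants, cycles_per_sec, o2_per_cycle):
        """Advance the room by one second; return (new O2 level, watts used)."""
        consumed = 10 * occupants                 # 10 units of O2 per person per second
        added = o2_per_cycle * cycles_per_sec     # 0-100 units/cycle at 0.3-3.0 cycles/s
        watts = (100 * cycles_per_sec) ** 2       # (100 W * cycles_per_second) squared
        new_o2 = max(0.0, min(100_000.0, o2 + added - consumed))
        return new_o2, watts

    # The scenario from the GOALS example: 90,000 units left, 10 people, full power.
    o2, watts = step(90_000, occupants=10, cycles_per_sec=3.0, o2_per_cycle=100)
    print(o2, watts)   # 90200.0 units, 90000 W (i.e. 90 kW)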
My question:
Can someone explain to me how I might configure this neural network so it can learn from each action (cycle) it takes by monitoring for the desired results? My challenge here is that most articles I find on the topic assume that you know the correct output answer (i.e., if inputs A, B, C, D, and E all have specific values, then output 1 should be to increase by 0.1 cycles per second).
But what I want is to meet the conditions I laid out in the GOALS above. So each time the program does a cycle, let's say it decides to try increasing the cycles per second, and the result is that the available O2 is either declining by a smaller amount than it was the previous cycle or is now increasing back towards 100,000; then that output could be considered more correct than reducing or maintaining the current cycles per second. I am simplifying here, since there are multiple variables that would create the "ideal" outcome, but I think I have made the point of what I am after.
Code:
For this test exercise I am using a Swift library called Swift-AI (specifically its NeuralNet module: https://github.com/Swift-AI/NeuralNet).
So if you want to tailor your response in relation to that library it would be helpful, but it is not required. I am more just looking for the logic of how to set up the network and then configure it to do initial and iterative re-training of itself based on the conditions I listed above. I would assume that at some point, after enough cycles and different conditions, it would have the appropriate weightings set up to handle any future condition, and re-training would become less and less impactful.
This is a control problem, not a prediction problem, so you cannot just use a supervised learning algorithm. (As you noticed, you have no target values to learn from directly via backpropagation.) You can still use a neural network (if you really insist); have a look at reinforcement learning. But if you already know what happens to the oxygen level when you take an action like forcing people out, why would you learn such simple facts through millions of trial-and-error evaluations instead of encoding them into a model?
I suggest looking at model predictive control. If nothing else, you should study how the problem is framed there. Or maybe even just plain old PID control; it seems really easy to build a good dynamical model of this process with a few state variables.
You may have a few unknown parameters in that model that you need to learn "online", but a simple PID controller can already tolerate and compensate for some amount of uncertainty, and it is much easier to fine-tune a few parameters than to learn the general cause-effect structure from scratch. That can be done, but it involves trying all possible actions. For all your algorithm knows, the best action might be to reduce the number of oxygen consumers to zero permanently by killing them, and then collect a huge reward for maintaining the oxygen level with little energy. When the algorithm knows nothing about the problem, it has to try everything out to discover the effects.
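For illustration, here is a minimal PID sketch in Python under the assumptions above. The gains are made up and would need tuning, and the output is simply clamped to the legal 0.3-3.0 cycles-per-second range; it is a sketch of the suggested approach, not a tuned controller.

    class PID:
        def __init__(self, kp, ki, kd, out_min, out_max):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_min, self.out_max = out_min, out_max
            self.integral = 0.0
            self.prev_error = None

        def update(self, setpoint, measured, dt):
            error = setpoint - measured
            self.integral += error * dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            out = self.kp * error + self.ki * self.integral + self.kd * deriv
            return max(self.out_min, min(self.out_max, out))   # clamp to actuator limits

    # Hypothetical gains; the controlled output is cycles per second.
    controller = PID(kp=0.001, ki=0.0, kd=0.0, out_min=0.3, out_max=3.0)
    cycles = controller.update(setpoint=100_000, measured=90_000, dt=1.0)
    print(cycles)   # 3.0: a 10,000-unit deficit saturates the actuator at full speed

An energy-aware variant could lower the proportional gain or add a penalty on control effort (which is where model predictive control shines), so the controller accepts a slower recovery in exchange for less power, as in the 2.0-cycles-per-second example in the question.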

CPU clock cycle misunderstanding

I can't quite understand CPU clock speeds such as 3.4 GHz. I know this means 3.4 billion clock cycles per second.
So if the machine uses single-cycle instructions, it can execute about 3.4 billion instructions per second.
But in a pipeline, an instruction basically needs more cycles, although each cycle is shorter than a single-cycle implementation's clock cycle.
But although the pipeline has more throughput, the CPU can still only do 3.4 billion cycles per second. So it can execute 3.4 billion / 5 instructions (if one instruction needs 5 cycles), which is fewer than the single-cycle implementation (3.4 > 3.4/5). What am I missing?
Does a CPU clock such as 3.4 GHz refer to the pipelined cycle rather than to a single-cycle implementation?
Pipelining
Pipelining doesn't involve cycles shorter than a single clock cycle. Here's how pipelining works:
We have a complicated task to do. We take that task and break it down into a number of stages, each of which is relatively simple to carry out--think of an automotive assembly line, where each station does one small job such as installing a particular set of bolts. We study the amount of work in each stage to make sure each stage takes about the same amount of time as any other.
With a processor, we do roughly the same thing--but in this case, it's not "install these fourteen bolts", it's things like fetching and decoding instructions, reading operands, executing (often a couple of stages here), and writing back results.
Like the automotive production line, we provide each stage of the pipeline with a specialized set of tools for doing exactly (and only) what is needed at that stage. When we finish doing one stage of processing on a car/instruction, it moves along to the next stage, and this stage gets the next car/instruction to process.
In an ideal situation, the process works (roughly) like this: every stage finishes its piece of work in each time step and hands it along to the next stage, so a finished product comes out of the end of the line at every step.
It took Ford about 12 hours to build one Model N car (the predecessor to the Model T). Thanks primarily to pipelining the production line, it took only about two and a half hours to build a Model T. More importantly, even though a Model T took 2.5 hours start to finish, that time was broken down into no fewer than 84 discrete steps, so when everything ran smoothly the production line as a whole could produce another car (about) every two minutes.
That didn't always happen though. If one stage ran short of parts, the stages after it had to wait. If the pause lasted very long, it would back things up so the preceding stages had to wait too.
The same can happen in a processor pipeline. For example, when a branch happens, the processor may have to wait a while before the next instruction can be fetched. If an instruction needs an operand from memory, that can lead to a pause (a "pipeline bubble") as well.
To prevent pauses in his pipeline, Henry Ford hired people to study the stages, figure out how many of each kind of part would need to be on hand for each stage, and so on. I don't know for sure, but I think it's a fair guess that there were probably a few people designated to watch the supply of parts at different stations, and send somebody running to let a warehouse manager know if (for whatever reason) the supply of parts for a particular stage looked like it was running short so they'd need more soon.
Processors do a little of the same thing--they have things like branch predictors and prefetchers that attempt to figure out ahead of time what will be needed by the stream of instructions being executed, and try to ensure that everything is on hand when it's needed (with caches, for example, to temporarily store things that seem likely to be needed soon).
So, like the Model T, it takes some relatively long amount of time for each instruction to execute start to finish, but we get another product finished at much shorter intervals--ideally once a clock (but see my other answer--modern designs often execute more than one instruction per clock).
A typical modern CPU can execute a number of unrelated instructions (those that don't depend on the same resources) concurrently.
To do that, it typically ends up with a basic structure vaguely like the following.
Picture an instruction stream coming in on the left. We have three decoders, each of which can decode one instruction per clock cycle (though there may be limitations--for example, complex instructions might all have to pass through one decoder, while the other two decoders handle only simple instructions).
From there, the instructions pass into a reorder buffer, which keeps a "scoreboard" of which resources are used by each instruction and which resources are affected by that instruction (where a "resource" would typically be something like a CPU register or a flag in the flags register).
The circuitry then compares those scoreboards to determine dependencies. For example, if one instruction writes to register 0 and a later one reads from register 0, then those instructions must execute serially. At each clock, it tries to find the N oldest instructions that have no outstanding dependencies and sends them for execution.
There are then a number of independent execution units. Each of these is basically a "pure" function--it takes some inputs, carries out a specified transformation on them, and produces an output. This makes it easy to replicate them as needed and to have as many running in parallel as we want or can afford. The execution units are typically grouped, with one port going to each group. In each clock, we can send one instruction through a port to one of the execution units in that group. Once an instruction arrives at an execution unit, it may take more than one clock to finish executing.
Once those execute, we have a set of retirement units that take the results and write them back to the registers in program order (even though the instructions may have executed out of order). Again, we have multiple units so we can retire multiple instructions per clock.
Note: this description tries to be semi-realistic about the rough number of decoders, retirement units, and ports, but it only conveys the general idea--different CPUs will have more or fewer of each specific resource. For almost any of them, though, the number of decoded instructions tracked in the scoreboard would be far larger than the handful suggested here--a realistic number is more like 50 instructions.
In any case, the actual execution of instructions is one of the hardest parts of this to measure or reason about. The number of ports gives us a hard upper limit on the number of instructions that can start executing in any given clock. The number of decoders and retirement units gives an upper limit on the number of instructions that can be started or finished per clock. The execution itself--well, there are a lot of execution units, and each one (at least potentially) takes a different number of clocks to execute an instruction.
With the design described above, you'd have a hard upper limit of three instructions per clock. That's the most you can decode or retire. With a different design, that could obviously go up or down (e.g., with 4 decoders, 4 ports, and 4 retirement units, the upper limit could go up to 4).
Realistically, with that design you wouldn't normally expect to see three instructions execute in most clock cycles. There are enough dependencies between instructions that you'd probably expect closer to 2 as a long-term average (and more likely a little less than 2). Increasing the available resources (more decoders, more retirement units, etc.) will rarely help a whole lot--you might get to an average of three instructions per clock, but hoping for four is probably unrealistic.
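As a rough illustration of the scoreboard idea (the data structures and names here are my own, not how any particular CPU implements it), a scheduler can scan the window from oldest to youngest and pick up to issue-width instructions whose registers don't conflict with an older instruction's pending writes:

    from dataclasses import dataclass, field

    @dataclass
    class Instr:
        name: str
        reads: set = field(default_factory=set)   # registers this instruction reads
        writes: set = field(default_factory=set)  # registers this instruction writes

    def pick_ready(window, issue_width):
        """Pick up to issue_width oldest instructions with no dependence on an
        older, still-unexecuted instruction in the window (a toy scoreboard)."""
        picked, pending_writes = [], set()
        for ins in window:                        # window is ordered oldest -> youngest
            depends = bool((ins.reads | ins.writes) & pending_writes)
            if not depends and len(picked) < issue_width:
                picked.append(ins)
            # Results of this instruction are not visible to younger ones this cycle.
            pending_writes |= ins.writes
        return picked

    window = [
        Instr("add r0, r1, r2", reads={"r1", "r2"}, writes={"r0"}),
        Instr("mul r3, r0, r4", reads={"r0", "r4"}, writes={"r3"}),  # needs r0
        Instr("sub r5, r6, r7", reads={"r6", "r7"}, writes={"r5"}),  # independent
    ]
    print([i.name for i in pick_ready(window, issue_width=3)])
    # -> ['add r0, r1, r2', 'sub r5, r6, r7']

The mul has to wait because it reads r0, which the older add has not yet produced, while the independent sub can go in the same cycle.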
As others have noted the full details of how a modern CPU operates are complicated. But part of your question has a simple answer:
Does a CPU clock such as 3.4 GHz refer to the pipelined cycle rather than to a single-cycle implementation?
The clock frequency of a CPU refers to how many times per second the clock signal switches. The clock signal is not divided into smaller pipelined segments. The purpose of pipelining is to allow for faster clock speeds. So 3.4 GHz refers to the number of times per second that a single pipeline stage can perform whatever work it needs to do when executing an instruction. The total work of executing an instruction is done over multiple cycles, each of which could be in a different pipeline stage.
Your question also shows some misconceptions about how pipelining works:
But although the pipeline has more throughput, the CPU can still only do 3.4 billion cycles per second. So it can execute 3.4 billion / 5 instructions (if one instruction needs 5 cycles), which is fewer than the single-cycle implementation (3.4 > 3.4/5). What am I missing?
In the simple case, the throughput of a single-cycle CPU and a pipelined CPU is the same. The latency of the pipelined CPU is higher because it requires more cycles (i.e., 5 in your example) to execute a single instruction. But once the pipeline is full, the throughput can be the same as for a single-cycle, non-pipelined CPU. So in the simple case, using your example, a single-cycle CPU could execute 3.4 billion instructions in 1 second, while the pipelined CPU with 5 stages could execute 3.4 billion minus 4 instructions in 1 second (the 4 extra cycles go into filling the pipeline). Subtracting 4 from 3.4 billion is a negligible difference, whereas dividing by 5 would be very significant.
A couple of other things to note: the simple case I described isn't really accurate, because dependencies between instructions force pipeline stalls, and most modern CPUs can execute more than one instruction per cycle.
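Here is the same point as back-of-the-envelope arithmetic (following the question's 3.4 GHz, 5-stage example and the idealised assumptions above):

    # Pipelining changes latency, not steady-state throughput, in the idealised case.
    clock_hz = 3.4e9
    stages = 5
    seconds = 1.0
    cycles = clock_hz * seconds

    single_cycle_instrs = cycles               # one instruction completes per cycle
    pipelined_instrs = cycles - (stages - 1)   # only the pipeline-fill cycles are "lost"

    print(single_cycle_instrs)   # 3.4e9
    print(pipelined_instrs)      # 3.4e9 - 4: a negligible difference, not a divide-by-5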

Simulation: send packets according to exponential distribution

I am trying to build a network simulation (ALOHA-like) where n nodes decide at any instant whether or not to send, with exponentially distributed arrival times.
What I have done so far: I set up a master clock in a for loop which ticks, and a node starts sending at a given tick only if a sample drawn from a uniform [0,1] distribution for that tick is greater than 0.99999; i.e., at any time instant a node has a 0.00001 probability of sending (very close to zero, as the exponential distribution requires).
Can these arrival times be considered exponentially distributed at each node, and if so, with what parameter?
What you're doing is called a time-step simulation, and can be terribly inefficient. Each tick in your master clock for loop represents a delta-t increment in time, and in each tick you have a laundry list of "did this happen?" possible updates. The larger the time ticks are, the lower the resolution of your model will be. Small time ticks will give better resolution, but really bog down the execution.
To answer your direct questions, you're actually generating a geometric distribution. That will provide a discrete time approximation to the exponential distribution. The expected value of a geometric (in terms of number of ticks) is 1/p, while the expected value of an exponential with rate lambda is 1/lambda, so effectively p corresponds to the exponential's rate per whatever unit of time a tick corresponds to. For instance, with your stated value p = 0.00001, if a tick is a millisecond then you're approximating an exponential with a rate of 1 occurrence per 100 seconds, or a mean of 100 seconds between occurrences.
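A quick script (mine, using a larger per-tick probability than the question's 0.00001 just so the demo finishes quickly) shows that the gaps between sends are geometric with mean 1/p ticks:

    import random

    p = 1e-3            # per-tick send probability (illustrative; the question uses 1e-5)
    ticks_between = []
    gap = 0
    for _ in range(2_000_000):
        gap += 1
        if random.random() < p:      # "did this node send on this tick?"
            ticks_between.append(gap)
            gap = 0

    mean_gap = sum(ticks_between) / len(ticks_between)
    print(mean_gap)     # close to 1/p = 1000 ticks, i.e. a rate of p per tick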
You'd probably do much better to adopt a discrete-event modeling viewpoint. If the time between network sends follows the exponential distribution, once a send event occurs you can schedule when the next one will occur. You maintain a priority queue of pending events, and after handling the logic of the current event you poll the priority queue to see what happens next. Pull the event notice off the queue, update the simulation clock to the time of that event, and dispatch control to a method/function corresponding to the state update logic of that event. Since nothing happens between events, you can skip over large swaths of time. That makes the discrete-event paradigm much more efficient than the time-step approach unless the model state needs updating in pretty much every time step. If you want more information about how to implement such models, check out this tutorial paper.
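For reference, a minimal discrete-event version of the send process might look like the following (the rate, node count, and time horizon are made-up values for illustration, and collision handling is left out):

    import heapq
    import random

    RATE = 0.01          # hypothetical sends per unit time, per node
    NUM_NODES = 5
    END_TIME = 10_000.0

    events = []          # priority queue of (time, node)
    for node in range(NUM_NODES):
        heapq.heappush(events, (random.expovariate(RATE), node))

    sends = 0
    while events:
        now, node = heapq.heappop(events)   # next event in time order
        if now > END_TIME:
            break
        sends += 1                          # handle the send event here (stats, collisions, ...)
        # Schedule this node's next send with an exponential inter-arrival time.
        heapq.heappush(events, (now + random.expovariate(RATE), node))

    print(sends)         # roughly RATE * END_TIME * NUM_NODES = 500 sends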

Why does speedup diminish as the number of pipeline stages increases?

I am watching a video tutorial on pipelining at link.
At time 4:30, the instructor says that as the number of stages increases, we also need to add pipeline registers, which creates overhead; because of this, the speedup cannot increase beyond an optimal value as the number of stages grows.
Can someone please elaborate on this? My doubt is that the pipeline register may add some delay to an individual stage's cycle time, so why does it become a problem when the number of stages is large rather than small?
Thanks.
The latches themselves do have a small delay (they are, after all, "doing work", i.e., switching). By itself, this would only produce an asymptotic approach to a fixed peak performance value. E.g., starting from the (already unrealistically tiny) case where the actual work time of each stage equals the latch delay, doubling the pipeline depth (excluding other, real-world constraints) would reduce the cycle time to the latch delay plus 1/2 the latch delay (increasing clock speed by just over 33%), but doubling the pipeline depth again would only reduce the cycle time to the latch delay plus 1/4 the latch delay.
Even with an infinite number of pipeline stages, each stage (somehow) doing infinitesimal work, the minimum cycle time would equal one latch delay, doubling the clock speed relative to a pipeline depth where the latch delay equals the actual work time. On a slightly practical level, one transistor switching delay of real work is a relatively hard constraint.
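A toy calculation (with made-up numbers) shows that asymptote: as a fixed amount of logic work is split across more stages, the cycle time approaches the latch delay and the clock-speed gain flattens out.

    total_work_ns = 4.0     # hypothetical total logic delay of the unpipelined datapath
    latch_ns = 0.1          # hypothetical per-stage latch/register overhead

    for stages in (1, 2, 4, 8, 16, 32):
        cycle_ns = total_work_ns / stages + latch_ns
        print(f"{stages:2d} stages: cycle = {cycle_ns:5.3f} ns, "
              f"clock = {1.0 / cycle_ns:5.2f} GHz")
    #  1 stage : cycle = 4.100 ns, clock = 0.24 GHz
    # 32 stages: cycle = 0.225 ns, clock = 4.44 GHz -- far less than 32x the 1-stage clock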
However, before latch delay itself prevents further improvement, other real-world factors limit the benefit of increased pipeline depth.
On the more physical level, excluding area and power/thermal density constraints, getting the clock signal to transition uniformly at very high precision across an entire design is challenging at such high clock speeds. Clock skew and jitter become more significant when there is less margin in work time to absorb variation. (This is even excluding variation in manufacturing or environmental conditions like temperature.)
Besides such more physical constraints, dependence constraints tend to prevent a deeper pipeline from increasing performance. While control dependencies (e.g., condition evaluation of a branch) can often be hidden by prediction, as Gabe notes in his answer, a branch misprediction can require a pipeline flush. Even at 99% prediction accuracy and one branch every ten instructions (95% and one branch every five instructions are more likely), a thousand-stage branch resolution delay (i.e., excluding stages after branch resolution and assuming branch target is available no later than branch direction) would mean half of the performance is taken by branch mispredictions.
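The arithmetic behind that claim, written out (the 99% accuracy, one-in-ten branch frequency, and thousand-cycle penalty are the figures from the text; the base CPI of 1 is my assumption):

    base_cpi = 1.0
    branch_fraction = 0.10        # one branch every ten instructions
    mispredict_rate = 0.01        # 99% prediction accuracy
    resolution_penalty = 1000     # hypothetical thousand-stage branch resolution delay

    extra_cpi = branch_fraction * mispredict_rate * resolution_penalty   # = 1.0
    effective_cpi = base_cpi + extra_cpi
    print(effective_cpi)          # 2.0 -> half the potential performance lost to mispredictions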
Instruction cache misses would also be a problem. If one had perfect control flow prediction, one could use prefetching to hide the delay. This effectively becomes a part of the branch prediction problem. Also, note that increasing cache size to reduce miss rate (or branch predictor size to reduce misprediction rate) increases access latency (pipeline stage count).
Data value dependencies are more difficult to handle. If the execution takes two cycles, then two sequential instructions with a data dependence will not be able to execute back-to-back. While value prediction could, theoretically, help in some cases, it is most helpful in relatively limited cases. It is also possible for some operations to be width-pipelined (e.g., add, subtract, bitwise logical operations, and left shifts). However, such tricks have limits.
Data cache misses become part of this data-dependence issue. Data memory addresses tend to be much more difficult to predict than instruction addresses.
This Google Scholar search provides some more detailed (and technical) reading on this subject.
Without watching the hour-long video, I would say that the actual problem when there are a large number of stages is that pipeline stalls are worse. If you have a 14-stage pipeline and mispredict a branch, that's 14 stages that you have to fill up again before issuing another instruction.

How does the operating system know the execution time of a process?

I was revisiting operating systems CPU job scheduling and suddenly a question popped into my mind: how the hell does the OS know the execution time of a process before its execution? I mean, in scheduling algorithms like SJF (shortest job first), how is the execution time of a process calculated a priori?
From Wikipedia:
Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is not possible to perfectly predict execution time, several methods can be used to estimate the execution time for a job, such as a weighted average of previous execution times.[1]
More on http://en.wikipedia.org/wiki/Shortest_job_next
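One common form of that weighted average is exponential averaging of past CPU bursts; a small sketch (with made-up alpha and burst values) looks like this:

    def next_estimate(prev_estimate, last_burst, alpha=0.5):
        """tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n"""
        return alpha * last_burst + (1 - alpha) * prev_estimate

    estimate = 10.0                       # initial guess, in milliseconds
    for burst in [6.0, 4.0, 6.0, 13.0]:   # observed CPU bursts
        estimate = next_estimate(estimate, burst)
        print(round(estimate, 2))         # 8.0, 6.0, 6.0, 9.5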
Also, the OS can compute the total time needed for each task by first calculating its CPI.
(CPI: cycles per instruction)
There is a weighted average CPI for each job.
For example, floating-point instructions weigh much more than fixed-point instructions, meaning they take more time to perform. So a job dealing with fixed-point operations like add or increment is perceived to be shorter. Hence, under shortest job first, it will be executed before the aforementioned job.
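As a rough sketch of that CPI-based estimate (the instruction mix, per-class CPIs, and clock rate below are all hypothetical):

    clock_hz = 3.0e9
    instruction_mix = {            # instruction count per class for some job
        "fixed_point": 8_000_000,
        "floating_point": 2_000_000,
        "memory": 1_000_000,
    }
    cpi = {"fixed_point": 1.0, "floating_point": 4.0, "memory": 10.0}

    total_cycles = sum(count * cpi[kind] for kind, count in instruction_mix.items())
    estimated_seconds = total_cycles / clock_hz
    print(estimated_seconds)       # (8e6*1 + 2e6*4 + 1e6*10) / 3e9 ~ 0.0087 s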