I am studying various CPU scheduling algorithms in OS. I was asked this question once and was not able to answer it. Can anyone explain it to me?
The OS can compute the total time needed for each task by first calculating its CPI (cycles per instruction). There is a weighted average CPI for each job. I hope this answers the question. But if you are talking about burst time, then a default value is assigned to processes whose burst time is unknown.
Related
I am working on a question about calculating the idle time of the CPU when using the priority scheduling method. According to the paper, the correct answer is (b), but I have no idea how the 3 milliseconds is calculated. The total idle time for the CPU below should be 9 milliseconds according to my calculation. Can anyone explain what I am missing from the question? Thank you.
In SJF Algorithm, we predict the next CPU Burst time using the formula:
τ(n+1) = α*t(n) + (1-α)*τ(n). And then we select the process with the shortest predicted burst time.
Now my question is: do we already have an idea about the CPU burst times of the processes arriving?
If yes, then why predict the CPU burst time? We could instead just schedule the process with the shortest time.
If no, i.e., we do not have any idea about the burst times of the processes, how does the predicted burst time τ(n+1) help us pick a process?
Hope I am able to explain my confusion.
Thanks.
The answer is in the question itself. The latter condition is true: we don't have an idea about the burst time of incoming processes, and that is exactly why we predict their burst time τ(n+1). Our prediction may not be 100% right all the time, but it serves the purpose of SJF to a very great extent!
I hope you have coded this and seen the results; if not, I recommend doing so, as it helps a lot in understanding this.
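For illustration, here is a minimal Python sketch of that prediction formula. The α value, the initial guess τ₀ = 10, and the observed bursts are made-up example numbers, not values from the question:

```python
def predict_next_burst(observed_burst, prev_prediction, alpha=0.5):
    # tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
    return alpha * observed_burst + (1 - alpha) * prev_prediction

tau = 10.0                                        # default guess for a new process
for actual_burst in [6, 4, 6, 4, 13, 13, 13]:     # hypothetical observed CPU bursts
    print(f"predicted {tau:5.2f}, actual {actual_burst}")
    tau = predict_next_burst(actual_burst, tau)   # update after the burst finishes

# The scheduler compares the current tau of every ready process and
# dispatches the one with the smallest predicted next burst.
```

Running it shows the prediction drifting toward the recent burst lengths, which is the whole point of the exponential average: old history is never thrown away, but recent behaviour counts more.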
This is the application I developed for my teacher on some scheduling techniques.
According to Flynn's Bottleneck, the speedup due to instruction level parallelism (ILP) can be at best 2. Why is it so?
That version of Flynn's Bottleneck originates in "Detection and Parallel Execution of Independent Instructions", where the authors empirically conclude that ILP for most programs is less than 2. That was 1970 technology, and that was an empirical conclusion. You can contrast it with Fisher's Optimism, which said there was lots of ILP out there and proposed trace scheduling and VLIW to exploit it.
So the literal answer to your question is because that's what they measured within basic blocks back then.
The "ILP less than 2" meaning isn't really used anymore, because superscalars and better compilers have blown past the number 2. So instead, over time Flynn's Bottleneck has come to mean "you cannot retire more than you fetch", which stems from his earlier paper "Some Computer Organizations and Their Effectiveness":
"The execution bandwidth of a system is usually referred to as being the maximum number of operations that can be performed per unit time by the execution area. Notice that due to bottlenecks in issuing instructions, for example, the execution bandwidth is usually substantially in excess of the maximum performance of a system."
I have a question here about round robin, and I was wondering if this answer is correct (it is the doctor's answer).
We are supposed to get the average turnaround time of the processes, which all arrive at the same time in the order shown in the picture. The scheduler uses round robin with a quantum of 5 seconds.
The given answer was like this.
But I think some extra time was counted in there, so my answer is like this:
(26+27+12+16+30) / 5 = 22.2
I was wondering which one is correct. In my answer, I tracked when each process finished and subtracted its start time, which is 0 in this case.
According to the data given in the question, the correct answer is 22.2.
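As a sanity check, here is a short Python simulation of round robin with a 5-unit quantum. The burst times below are hypothetical (the real ones are only in the picture, which is not reproduced here); they were chosen only so the completion times come out as 26, 27, 12, 16 and 30, the numbers summed above:

```python
from collections import deque

bursts = {"P1": 10, "P2": 6, "P3": 2, "P4": 4, "P5": 8}   # assumed burst times
quantum = 5

remaining = dict(bursts)
ready = deque(bursts)              # all processes arrive at time 0, in order
time, finish = 0, {}

while ready:
    p = ready.popleft()
    run = min(quantum, remaining[p])
    time += run
    remaining[p] -= run
    if remaining[p] == 0:
        finish[p] = time           # completion time of p
    else:
        ready.append(p)            # unfinished: back to the end of the queue

# Turnaround = completion - arrival, and every arrival time is 0 here.
turnaround = [finish[p] for p in bursts]
print(finish)                              # {'P1': 26, 'P2': 27, 'P3': 12, ...}
print(sum(turnaround) / len(turnaround))   # 22.2
```

With these assumed bursts the simulation reproduces the 22.2 average, which supports the method of taking each process's finish time minus its arrival time and averaging.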
I was revisiting Operating Systems CPU job scheduling, and suddenly a question popped into my mind: how does the OS know the execution time of a process before it runs? I mean, in scheduling algorithms like SJF (shortest job first), how is the execution time of a process calculated a priori?
From Wikipedia:
Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is not possible to perfectly predict execution time, several methods can be used to estimate the execution time for a job, such as a weighted average of previous execution times.[1]
More on http://en.wikipedia.org/wiki/Shortest_job_next
Also, the OS can compute the total time needed for each task by first calculating its CPI (cycles per instruction). There is a weighted average CPI for each job.
For example, floating point instructions weigh much more than fixed point instructions, meaning they take more time to perform. So a job dealing with fixed point operations like add or increment is perceived to be shorter; hence, under shortest job first, it will be executed before the aforementioned floating point job.
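To make the weighting concrete, here is a small Python sketch. All of the CPI values, instruction counts, and the clock rate are invented example numbers, used only to show how a per-class CPI estimate would rank two jobs:

```python
CLOCK_HZ = 2_000_000_000          # assumed 2 GHz clock

CPI = {                           # assumed average cycles per instruction class
    "fixed_point": 1,             # e.g. add, increment
    "load_store":  2,
    "float":       4,             # floating point is the "heavier" class
}

def estimated_seconds(instruction_mix):
    """instruction_mix maps instruction class -> instruction count."""
    cycles = sum(count * CPI[kind] for kind, count in instruction_mix.items())
    return cycles / CLOCK_HZ

job_a = {"fixed_point": 8_000_000, "load_store": 1_000_000}   # mostly adds
job_b = {"float": 6_000_000, "load_store": 2_000_000}         # FP heavy

print(estimated_seconds(job_a))   # smaller estimate -> runs first under SJF
print(estimated_seconds(job_b))
```

Under these made-up numbers the fixed-point job comes out shorter, so a shortest-job-first scheduler would dispatch it before the floating-point-heavy one.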