Is it possible to calculate AWT and ATA for SJF (shortest job first) based on priority? - operating-system

So far I know that in SJF, the process whose burst time (BT) is lowest is executed first, subject to its arrival time (AT).
But if a priority is also attached, how do I reach the solution?
Process   Arrival Time   Burst Time   Priority
P1        7              3            2
P2        5              2            4
P3        4              5            1
P4        0              4            3
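There is no single standard rule for combining SJF with priorities, so the following is only a sketch under one common classroom assumption: non-preemptive SJF where priority (lower number = higher priority) breaks ties on burst time.

```python
# Sketch: non-preemptive SJF with priority as tie-breaker (assumed rule;
# the assignment may instead want priority first with BT as tie-breaker).

def schedule(processes, key):
    """processes: list of (pid, arrival, burst, priority) tuples.
    key: ranks the ready processes; the smallest one runs next.
    Returns {pid: (waiting_time, turnaround_time)}."""
    remaining = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, result = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                    # CPU idle until the next arrival
            time = remaining[0][1]
            continue
        proc = min(ready, key=key)
        pid, arrival, burst, _prio = proc
        time += burst                    # run to completion (non-preemptive)
        turnaround = time - arrival      # completion time - arrival time
        result[pid] = (turnaround - burst, turnaround)  # waiting = TAT - BT
        remaining.remove(proc)
    return result

procs = [("P1", 7, 3, 2), ("P2", 5, 2, 4), ("P3", 4, 5, 1), ("P4", 0, 4, 3)]
r = schedule(procs, key=lambda p: (p[2], p[3]))  # burst first, then priority
awt = sum(w for w, _ in r.values()) / len(r)     # average waiting time
ata = sum(t for _, t in r.values()) / len(r)     # average turnaround time
# awt = 2.0, ata = 5.5 for this data
```

With this particular table the priorities never actually break a tie, so the schedule (P4, P3, P2, P1) is the same as plain SJF; changing the key to `lambda p: (p[3], p[2])` would schedule by priority first with burst time as the tie-breaker.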

Related

Shortest job first with preemption allowed: anomaly

Consider the following scenario, and take this as a preemptive shortest-job-first scheduling algorithm.
[1]
The problem here is at time 3: p2 has 1 unit of burst time remaining, but p4, which has just become available, has 2 units of burst time. So my question is: why does p2 not continue executing, and why does p4 start? Is this diagram wrong, or have I misunderstood something?
The Gantt chart should be:
Average waiting time should be [(0 + 11) + 0 + 4 + 9] / 4 = 6.
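The process table exists only in the image above, so here is a generic shortest-remaining-time-first (SRTF) sketch with hypothetical data. The key point is the preemption rule: a job takes the CPU only if its remaining time is strictly smaller than everyone else's, and on a tie the running process is kept. Under that rule a p2 with 1 unit left keeps the CPU over a p4 arriving with 2 units, so the posted diagram does look wrong.

```python
def srtf(procs):
    """procs: {pid: (arrival, burst)}. Returns which pid ran on each tick.
    Hypothetical data below; the real table is in the image above."""
    remaining = {pid: burst for pid, (arrival, burst) in procs.items()}
    timeline, time, current = [], 0, None
    while any(remaining.values()):
        ready = [p for p in remaining
                 if procs[p][0] <= time and remaining[p] > 0]
        if not ready:                     # CPU idle until something arrives
            timeline.append(None)
            time += 1
            continue
        best = min(ready, key=lambda p: remaining[p])
        if current in ready and remaining[current] == remaining[best]:
            best = current                # on a tie, keep the running process
        current = best
        remaining[best] -= 1
        timeline.append(best)
        time += 1
    return timeline

# p4 arrives at t=2 with 2 units while p2 has only 1 left, so p2 continues:
timeline = srtf({"p2": (0, 3), "p4": (2, 2)})
# timeline == ["p2", "p2", "p2", "p4", "p4"]
```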

Related to Scheduling: aging

Can anyone guide me on how to implement process aging?
I would like to know which factors the aging factor depends on.
In my view, it should depend on the present priority and the average waiting time.
How do I implement the average waiting time?
Can anyone please give a clear idea of it?
According to the original concept, process aging should depend only on the waiting time of the process: the priority of a process is increased in proportion to the time it has been waiting in the ready queue.
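A minimal sketch of that idea, with an assumed aging step (the increment per tick is a tunable policy choice, not a standard value) and higher numbers meaning higher priority: on every scheduler tick, each process sitting in the ready queue gains priority, so a long-waiting low-priority process eventually overtakes the others.

```python
AGING_STEP = 1   # priority gained per tick of waiting (assumed policy value)

def age(ready_queue, running_pid):
    """Bump the priority of every process that waited through this tick."""
    for proc in ready_queue:
        if proc["pid"] != running_pid:
            proc["waiting"] += 1              # total time spent waiting
            proc["priority"] += AGING_STEP    # aging: raise priority

queue = [{"pid": "A", "priority": 1, "waiting": 0},
         {"pid": "B", "priority": 5, "waiting": 0}]
for _ in range(10):          # ten ticks during which B holds the CPU
    age(queue, "B")
# A's priority has risen from 1 to 11 and now exceeds B's 5
```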
Read the process arrival time, burst time, process id, and priority from a text file, calculate the average waiting time, and show the CPU activity of each cycle like this:
Time 1 P3 arrives
Time 1 P3 runs
Time 2 P5 arrives
Time 2 P3 runs
Time 10 P1 arrives
Time 10 P3 runs
Time 13 P3 finishes
Time 13 P5 runs
Time 16 P4 arrives
Time 16 P5 runs
Time 53 P5 finishes
Time 53 P1 runs
Time 82 P1 finishes
Time 82 P4 runs
Time 112 P4 finishes
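A sketch of the file-reading step; the whitespace-separated column order (pid, arrival time, burst time, priority) is an assumption about the file format, so adjust the unpacking to match the actual file.

```python
def read_processes(path):
    """Parse one process per line: pid arrival burst priority (assumed order)."""
    procs = []
    with open(path) as f:
        for line in f:
            if not line.strip():          # skip blank lines
                continue
            pid, arrival, burst, prio = line.split()
            procs.append((pid, int(arrival), int(burst), int(prio)))
    return procs
```

The returned tuples can then be fed into whichever scheduling simulation produces the event trace above.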

Calculate the time for executing instructions with a pipeline

Suppose that one instruction requires 10 clock cycles from the fetch stage to the write-back stage, and we want to calculate the time required to execute 1,000,000 instructions. Each clock cycle takes 2 ns.
(a) Calculate the time required.
The answer says 1,000,009 * 2 ns, where the extra 9 clock cycles are for filling the pipeline. Why is this? I thought that since an instruction fetch happens on every clock cycle, it would be 1,000,000 * 2 ns.
I1:  1 2 3 4 5 6 7 8 9 10
I2:     1 2 3 4 5 6 7 8 9 10
I3:        1 2 3 4 5 6 7 8 9 10
Let's consider these three instructions. The first instruction takes 10 clock cycles, but each of the following two completes just 1 clock cycle after its predecessor. Likewise, the remaining 999,999 instructions take only 999,999 additional clock cycles, so 1,000,000 instructions take 10 + 999,999 = 1,000,009 clock cycles.
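The same arithmetic as a small function: the first instruction needs the full pipeline depth, and each later instruction retires one cycle after its predecessor.

```python
def pipeline_time_ns(n_instructions, stages, cycle_ns):
    # first instruction: `stages` cycles; each of the rest adds 1 cycle
    cycles = stages + (n_instructions - 1)
    return cycles * cycle_ns

# 1,000,000 instructions on a 10-stage pipeline with a 2 ns cycle:
# (10 + 999,999) * 2 ns = 2,000,018 ns
total = pipeline_time_ns(1_000_000, 10, 2)
```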

Round Robin Scheduling : What happens when all jobs arrive at the same time?

Problem:
Five batch jobs, A through E, arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. Determine the mean process turnaround time. Ignore process-switching overhead. For round robin scheduling, assume that the system is multiprogramming and that each job gets its fair share of the CPU. All jobs are completely CPU bound.
Solution #1: The following solution comes from this page:
For round robin, during the first 10 minutes, each job gets 1/5 of the
CPU. At the end of the 10 minutes, C finishes. During the next 8
minutes, each job gets 1/4 of the CPU, after which time D finishes.
Then each of the three remaining jobs get 1/3 of the CPU for 6
minutes, until B finishes and so on. The finishing times for the five
jobs are 10, 18, 24, 28, 30, for an average of 22 minutes.
Solution #2: The following solution comes from Cornell University here, and it is different (this one makes more sense to me):
Remember that the turnaround time is the amount of time that elapses
between the job arriving and the job completing. Since we assume that
all jobs arrive at time 0, the turnaround time will simply be the
time that they complete. (a) Round Robin: The table below gives a
break down of which jobs will be processed during each time quantum.
A * indicates that the job completes during that quantum.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
A B C D E A B C* D E A B D E A B D* E A B E A B* E A E A E* A A*
The results are different: in the first one C finishes after 10 minutes, for example, whereas in the second one C finishes after 8 minutes.
Which one is correct, and why? I'm confused. Thanks in advance!
Q1: I believe the "fair share" requirement means you can assume the time is evenly divided among the running processes, so the particular order won't matter. You could also think of this as the quantum being so small that any variation introduced by a particular ordering is too small to worry about.
Q2: From the above, assuming the time is evenly divided, it takes 10 minutes for each of the five processes to get 2 minutes of CPU, at which point C is done.
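That "evenly divided" reading is processor sharing: with m jobs left, each gets 1/m of the CPU, so the job with the least remaining work finishes first. Simulating it segment by segment reproduces the finishing times 10, 18, 24, 28, 30 (mean 22) from Solution #1.

```python
def fair_share_finish_times(bursts):
    """bursts: {job: minutes of work}. Returns {job: finishing time}
    under idealized processor sharing (quantum -> 0 round robin)."""
    finish, elapsed = {}, 0
    remaining = dict(bursts)
    while remaining:
        m = len(remaining)
        shortest = min(remaining, key=remaining.get)
        share = remaining[shortest]        # CPU time each job gets this segment
        elapsed += share * m               # wall time until `shortest` finishes
        for job in list(remaining):
            remaining[job] -= share
        del remaining[shortest]
        finish[shortest] = elapsed
    return finish

times = fair_share_finish_times({"A": 10, "B": 6, "C": 2, "D": 4, "E": 8})
# times == {"C": 10, "D": 18, "B": 24, "E": 28, "A": 30}; mean 22
```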

Round Robin Scheduling : Two different solutions - How is that possible?

(The problem statement and both solutions are identical to the previous question and are not repeated here.)
The problems are different. The first problem does not specify a time quantum, so you have to assume the quantum is very small compared to a minute. The second problem clearly specifies a one minute scheduler quantum.
The mystery with the second solution is why it assumes the tasks run in letter order. I can only assume this is an assumption made throughout the course, so students would be expected to make it here too.
In fact, there is no such thing as a single 'correct' RR algorithm. RR is a family of algorithms based on the common concept of scheduling several tasks in circular order. Implementations may vary (for example, you may take task priorities into account or discard them, or set the priority as a function of task length, or whatever else).
So the answer is: both algorithms are correct; they are just different.
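Under the Cornell reading (a one-minute quantum with jobs cycled in letter order, both assumptions of that solution), a direct simulation reproduces the completion times in its table.

```python
from collections import deque

def rr_completion(jobs, quantum=1):
    """jobs: {name: running time}, cycled in the given order.
    Returns {name: completion time} under round robin."""
    queue = deque(jobs.items())           # (name, remaining work)
    time, done = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)           # run one quantum (or less to finish)
        time += run
        rem -= run
        if rem == 0:
            done[name] = time
        else:
            queue.append((name, rem))     # back to the end of the queue
    return done

done = rr_completion({"A": 10, "B": 6, "C": 2, "D": 4, "E": 8})
# done == {"C": 8, "D": 17, "B": 23, "E": 28, "A": 30}, matching the table
```

Note how C now finishes at minute 8 rather than 10, which is exactly the discrepancy the question asks about: it comes from the quantum size and ordering assumptions, not from an error in either solution.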