Round Robin Scheduling : Two different solutions - How is that possible? - operating-system

Problem :
Five batch jobs A through E arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. Determine the mean process turnaround time. Ignore process switching overhead. For Round Robin Scheduling, assume that the system is multiprogramming and that each job gets its fair share of the CPU. All jobs are completely CPU bound.
Solution #1: The following solution comes from this page:
For round robin, during the first 10 minutes, each job gets 1/5 of the
CPU. At the end of the 10 minutes, C finishes. During the next 8
minutes, each job gets 1/4 of the CPU, after which time D finishes.
Then each of the three remaining jobs get 1/3 of the CPU for 6
minutes, until B finishes and so on. The finishing times for the five
jobs are 10, 18, 24, 28, 30, for an average of 22 minutes.
Solution #2: The following solution comes from Cornell University (it can be found here) and is obviously different from the previous one, even though the problem is given in exactly the same form (this solution, by the way, makes more sense to me):
Remember that the turnaround time is the amount of time that elapses
between the job arriving and the job completing. Since we assume that
all jobs arrive at time 0, the turnaround time will simply be the time
that they complete. (a) Round Robin: The table below gives a breakdown of which jobs will be processed during each time quantum. A * indicates that the job completes during that quantum.
 1:A    2:B    3:C    4:D    5:E    6:A    7:B    8:C*   9:D   10:E
11:A   12:B   13:D   14:E   15:A   16:B   17:D*  18:E   19:A   20:B
21:E   22:A   23:B*  24:E   25:A   26:E   27:A   28:E*  29:A   30:A*
The results are different: In the first one C finishes after 10 minutes, for example, whereas in the second one C finishes after 8 minutes.
Which one is the correct one, and why? I'm confused... Thanks in advance!

The problems are different. The first problem does not specify a time quantum, so you have to assume the quantum is very small compared to a minute. The second problem clearly specifies a one minute scheduler quantum.
The mystery with the second solution is why it assumes the tasks run in letter order. I can only assume that this is an assumption made throughout the course, and so students would be expected to know to make it here.

In fact, there is no such thing as a 'correct' RR algorithm. RR is merely a family of algorithms, based on the common concept of scheduling several tasks in a circular order. Implementations may vary (for example, you may consider task priorities or you may discard them, or you may manually set the priority as a function of task length or whatever else).
So the answer is: both algorithms seem to be correct; they are just different.
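To make the difference concrete, here is a minimal Python sketch (my own, not taken from either course) that runs the same five jobs through round robin in alphabetical order. With a one-minute quantum it reproduces the Cornell finishing times (C at 8, mean turnaround 21.2); with a near-zero quantum it converges on the fair-share figures of Solution #1 (10, 18, 24, 28, 30, mean 22):

from collections import deque

def round_robin(bursts, quantum):
    # All jobs arrive at t=0 and are queued in the given (alphabetical) order.
    remaining = dict(bursts)
    queue = deque(remaining)
    t, finish = 0.0, {}
    while queue:
        job = queue.popleft()
        run = min(quantum, remaining[job])   # run one quantum (or less, to finish)
        t += run
        remaining[job] -= run
        if remaining[job] > 1e-9:
            queue.append(job)                # not done: back of the queue
        else:
            finish[job] = t                  # record the finishing time
    return finish

jobs = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}   # minutes

for q in (1, 0.001):                         # 1-minute quantum vs. a near-zero one
    f = round_robin(jobs, q)
    times = {j: round(v, 2) for j, v in sorted(f.items())}
    print(f"quantum={q}: {times}  mean={sum(f.values()) / len(f):.2f}")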

Related

Anylogic: How to block a line by a probability?

So I'm modelling a production line (simple, with 5 processes, which I modelled as Services). I'm simulating one month, and during this month my line stops approximately 50 times (due to machine breakdowns). Each stop lasts between 3 and 60 min, with an average of 12 min (following a triangular distribution). How could I implement this in the model? I'm trying to create an event but can't figure out what type of trigger I should use.
Have your services require a resource. If they are already seizing a resource like labor, that is ok, they can require more than one. On the resourcePool, there is an area called "Shifts, breaks, failures, maintenance..." Check "Failures/repairs:" and enter your downtime distribution there.
If you want to use a triangular distribution, you need min/MODE/max, not min/AVERAGE/max. If you really wanted an average of 12 minutes with a minimum of 3 and a maximum of 60, then this is not a triangular distribution: there is no mode that would give you an average of 12.
The average of a triangular distribution, where X is the mode, is:
(3 + X + 60) / 3 = 12
Solving gives X = 36 - 63 = -27, so the mode would have to be negative - not possible, since there can't be a negative delay time.
Look at using a different distribution. The exponential distribution is often used for time between failures (or Poisson for failures per hour).
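A quick numeric check of that claim (just the textbook formula for the mean of a triangular distribution, nothing AnyLogic-specific):

# mean of Triangular(min, mode, max) = (min + mode + max) / 3
lo, hi = 3, 60
print((lo + lo + hi) / 3)   # 22.0 -- even with the mode at its minimum,
                            # the mean can never get down to 12
# Solving (3 + mode + 60) / 3 == 12 gives mode = 36 - 63 = -27,
# which lies outside [3, 60], so no valid triangular distribution fits.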

Cyclic Executive (CE) scheduling

I am writing pseudocode for a CE scheduling algorithm. By the looks of it, task E is never going to run. Can anyone see where I'm going wrong? Am I choosing the correct interrupt time of 25 msec for this cyclic executive schedule?
Task   Period (msec)   Exec Time (msec)
A      25              10
B      25               5
C      50               5
D      50               5
E      100              2
while (true)
    wait_for_int()    // waits for the 25 ms timer interrupt
    taskA()           // frame 1: A+B+C+D = 10+5+5+5 = 25 ms, no slack left
    taskB()
    taskC()
    taskD()
    wait_for_int()
    taskA()           // frame 2: A+B = 15 ms
    taskB()
    wait_for_int()
    taskA()           // frame 3: A+B+C+D = 25 ms, no slack left
    taskB()
    taskC()
    taskD()
    wait_for_int()
    taskA()           // frame 4: A+B = 15 ms -- E is never scheduled
    taskB()
endloop;
You are going wrong by thinking that all five tasks need to run in the same 25 millisecond period. That's not the case. All five tasks need to run every 100 milliseconds, and some tasks need to run more than once in that 100 millisecond period, but never do all five tasks need to run in the same 25 millisecond period.
For example, tasks C and D run every 50 milliseconds. But they don't have to run in the same 25 millisecond phase. They can run out of phase by 25 milliseconds. If you divide the 100 millisecond period into 25 millisecond phases then at most you need to run only four tasks in any given phase.
(If you break the 100 milliseconds into smaller phases, such as 5 milliseconds, then you might be able to design it such that no two tasks ever need to run in the same phase.)
Read this article, Multi-rate Main Loop Tasking, for a detailed explanation of what you're trying to do along with a great example.
You need to interleave C and D so that E can fit into one of the 25 ms frames:
Frame start   0ms    25ms   50ms   75ms
---------------------------------------
               A      A      A      A
               B      B      B      B
               C      D      C      D
               -      -      -      E
---------------------------------------
Exec Time     20ms   20ms   20ms   22ms
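As a sanity check, here is a small Python sketch (mine, not part of the original answer) that walks the four 25 ms minor frames of this schedule and confirms every frame fits in its slot while each task still meets its period (A and B every frame, C and D every other frame, E once per 100 ms major cycle):

exec_ms = {"A": 10, "B": 5, "C": 5, "D": 5, "E": 2}

frames = [                  # one entry per 25 ms minor frame
    ["A", "B", "C"],        # frame starting at t = 0 ms
    ["A", "B", "D"],        # t = 25 ms
    ["A", "B", "C"],        # t = 50 ms
    ["A", "B", "D", "E"],   # t = 75 ms
]

for i, frame in enumerate(frames):
    load = sum(exec_ms[task] for task in frame)
    assert load <= 25, f"frame {i} overruns its 25 ms slot"
    print(f"frame at {25 * i:2d} ms: {' + '.join(frame)} = {load} ms")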

Detect when 2 recurring (time triggered) scripts will sync [Theoretical]

This is a somewhat theoretical question. I have 2 CRON jobs that run at staggered intervals, one every 13 minutes and the other every 15 minutes.
I know it's very easy to stop them running at the same time with locks/stops and so on.
However, it got me thinking in a theoretical sense: how could the synchronisations in time, i.e. the moments when they both run in the same minute, be visualised?
So far it's actually a pretty interesting piece of logic, as it's a case of converting the 13-minute and 15-minute cycles into a comparable 24-hour format and then detecting whether any of them match. This bit isn't too hard, and the basic logic I have sort of got in this jsFiddle (very ugly and janky/long way around, but it sort of works): http://jsfiddle.net/wigster/kdo5bwk9/
I can't quite visualise how/when these synchronisations will occur, though (I know they WILL occur, I just don't know whether they are days/weeks/months apart, etc.).
This may possibly be a question I should ask on https://math.stackexchange.com/ but StackOverflow is my frequent, so I'll try here first.
As I said, this is not for any real world application, simply a maths logic that has intrigued me today.
TL;DR: 2 "things" running every 13 minutes and every 15 minutes. At first it may seem simple: find times divisible by both 13 and 15. E.g., we know they will both run at 9:30 eventually. But HOW OFTEN / how regularly will the 13-minute cycle hit 9:30 at the same time the 15-minute cycle hits 9:30, given that one is likely to be ahead of or behind the other most of the time?
#MrWigster
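For what it's worth, the underlying math is just a least common multiple. Here is a minimal Python sketch, assuming both jobs fire together at t = 0 and keep strict 13- and 15-minute periods: they coincide every lcm(13, 15) = 195 minutes, i.e. every 3 h 15 min, because 13 and 15 are coprime. (And since gcd(13, 15) = 1, even jobs that start with an integer-minute offset will eventually align.)

from math import lcm   # Python 3.9+; earlier versions: a * b // gcd(a, b)

step_a, step_b = 13, 15
meet = lcm(step_a, step_b)
print(meet)                              # 195 minutes = 3 h 15 min
print([k * meet for k in range(1, 5)])   # first coincidences: [195, 390, 585, 780]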

Shortest Job First Scheduling

Suppose that the following processes arrive for execution at the times indicated. Each process will run for the listed amount of time.
Process   Arrival Time (ms)   Burst Time (ms)
A         0                   5
B         3                   5
C         5                   3
D         7                   2
I want to draw Gantt chart and calculate average waiting time for preemptive Shortest Job First Scheduling.
Solution
http://imgur.com/fP8u61C
The average waiting time is 2 ms.
Please just tell me if this is correct.
The step where I have doubt is at 3 ms, when process B arrives: will the scheduler complete process A or start process B?
Yes, your answer is correct. In fact the problem as posed is ambiguous, but both possibilities give the same answer.
First, the ambiguity: Shortest Job First scheduling is not usually considered preemptive. The preemptive variant is called Shortest Remaining Time First scheduling (see for instance the Shortest Job Next entry on Wikipedia). However, your exercise states "preemptive Shortest Job First scheduling", and that's ambiguous...
Second, however, the only time when there might be a difference between these two scheduling policies is, as you mentioned, at t=3, when both A and B are eligible. If the scheduling is non-preemptive, of course A continues to execute. If it's preemptive, we must consider the remaining times: A has 2 ms left while B still has its whole 5... so A still gets the CPU.
Finally, the waiting times are: A -> 0 ms, B -> 7 ms, C -> 0 ms, D -> 1 ms, the average of which is indeed 2 ms.
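To double-check those numbers, here is a small Shortest Remaining Time First simulation in Python (mine, not part of the original answer) that replays the four processes millisecond by millisecond:

procs = {"A": (0, 5), "B": (3, 5), "C": (5, 3), "D": (7, 2)}  # (arrival, burst) in ms

remaining = {p: burst for p, (arrival, burst) in procs.items()}
finish, t = {}, 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:
        t += 1
        continue
    p = min(ready, key=lambda x: (remaining[x], x))  # shortest remaining time wins
    remaining[p] -= 1                                # run it for one millisecond
    t += 1
    if remaining[p] == 0:
        del remaining[p]
        finish[p] = t

for p, (arrival, burst) in sorted(procs.items()):
    print(f"{p}: finish={finish[p]:2d} ms, waiting={finish[p] - arrival - burst} ms")
avg = sum(finish[p] - a - b for p, (a, b) in procs.items()) / len(procs)
print("average waiting time:", avg)    # 2.0 ms

Note that at t=3 the simulation compares A's 2 remaining ms against B's 5 and keeps A running, exactly as described above.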
You probably have to do your homework on your own.
Show what you have tried, and say what your questions and issues are.
Don't wait for a complete, ready-made solution!

Round Robin Scheduling : What happens when all jobs arrive at the same time?

(The problem statement and the two quoted solutions are identical to those in the first question at the top of this page.)
Q1: I believe that the "fair share" requirement means you can assume the time is evenly divided amongst running processes, and thus the particular order won't matter. You could also think of this as the quantum being so low that any variation introduced by a particular ordering is too small to worry about.
Q2: From the above, assuming the time is evenly divided, it will take 10 minutes for every process to receive 2 minutes of CPU time, at which point C will be done.