Can anyone explain how to implement process aging?
I would like to know what factors this aging depends on.
In my view, it should depend on the present priority and the average waiting time.
How do I compute the average waiting time?
Can anyone give a clear explanation of this?
According to the original concept, process aging should depend only on the waiting time of the process: the priority of a process is increased in proportion to the time it has spent waiting in the ready queue.
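As a rough illustration only (a minimal sketch, not taken from any particular OS), the scheduler can track how long each ready process has waited and bump its priority once the wait crosses a threshold. The struct fields, the AGING_THRESHOLD constant, and the tick-driven hooks below are all assumptions:

/* Hypothetical process descriptor; the field names are assumptions. */
struct proc {
    int priority;       /* larger value = higher priority (assumed convention) */
    int base_priority;  /* priority assigned when the process became ready     */
    int wait_ticks;     /* ticks spent in the ready queue since it last ran    */
};

#define AGING_THRESHOLD 10   /* assumed: boost priority every 10 ticks of waiting */
#define MAX_PRIORITY    31   /* assumed cap */

/* Called once per scheduler tick for every process sitting in the ready queue. */
void age_ready_process(struct proc *p)
{
    p->wait_ticks++;
    if (p->wait_ticks % AGING_THRESHOLD == 0 && p->priority < MAX_PRIORITY)
        p->priority++;                  /* the longer it waits, the higher it climbs */
}

/* When the process is finally dispatched, reset its aging state. */
void on_dispatch(struct proc *p)
{
    p->wait_ticks = 0;
    p->priority = p->base_priority;     /* assumed policy: fall back to the base priority */
}

The reset policy in on_dispatch() is also an assumption; some schedulers let the boosted priority decay gradually instead of snapping back to the base value.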
Read the process arrival time, burst time, process ID, and priority from a text file, calculate the average waiting time, and show the CPU activity of each cycle, like this:
Time 1 P3 arrives
Time 1 P3 runs
Time 2 P5 arrives
Time 2 P3 runs
Time 10 P1 arrives
Time 10 P3 runs
Time 13 P3 finishes
Time 13 P5 runs
Time 16 P4 arrives
Time 16 P5 runs
Time 53 P5 finishes
Time 53 P1 runs
Time 82 P1 finishes
Time 82 P4 runs
Time 112 P4 finishes
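Not a full solution, but a minimal sketch in C of the input and averaging part, under some assumptions: the file name procs.txt, one whitespace-separated line per process in the form <pid> <arrival> <burst> <priority> (with the pid stored as a plain number), and non-preemptive FCFS order, which is what the sample trace above follows. It prints only when each process runs and finishes, not the full per-cycle event list:

#include <stdio.h>
#include <stdlib.h>

struct proc { int pid, arrival, burst, priority; };   /* priority is read but unused in this FCFS sketch */

static int by_arrival(const void *a, const void *b)
{
    return ((const struct proc *)a)->arrival - ((const struct proc *)b)->arrival;
}

int main(void)
{
    struct proc p[64];
    int n = 0;
    FILE *f = fopen("procs.txt", "r");                 /* assumed file name */
    if (!f) { perror("procs.txt"); return 1; }
    while (n < 64 && fscanf(f, "%d %d %d %d",
                            &p[n].pid, &p[n].arrival, &p[n].burst, &p[n].priority) == 4)
        n++;
    fclose(f);

    qsort(p, n, sizeof p[0], by_arrival);              /* FCFS: serve in arrival order */

    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        if (clock < p[i].arrival)
            clock = p[i].arrival;                      /* CPU idles until the next arrival */
        printf("Time %d P%d runs\n", clock, p[i].pid);
        total_wait += clock - p[i].arrival;            /* time spent waiting in the ready queue */
        clock += p[i].burst;
        printf("Time %d P%d finishes\n", clock, p[i].pid);
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

The waiting time here is simply start time minus arrival time; a priority-based or preemptive scheduler would need a different simulation loop.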
I have searched the internet for examples of CPU scheduling algorithms, and I have never seen any examples where processes have the same arrival time.
Is it possible to make the processes have the same arrival time?
For example:
Algorithm: Round Robin
Process   Arrival Time   Burst Time
P1        3              4
P2        1              5
P3        1              3
Quantum = 1
What would the Gantt chart look like?
Is it possible to make the processes have the same arrival time?
Yes, for normal definitions of "same time" (e.g. excluding "in the same Planck time quantum"), it's possible for processes to have the same arrival time.
For example, imagine 100 tasks sleeping until midnight. When midnight occurs, a timer IRQ handler processes the list of tasks waiting to wake up and wakes all 100 at the "same" time.
Now, for this example you could argue that "same time" should be stricter: the timer IRQ handler processes the list of tasks sequentially and adds them to the scheduler's queues sequentially, so it's only "almost at the same time". Even then, it's still possible to have N CPUs running in parallel (with a different timer for each CPU) that happen to wake up to N tasks at the same time.
Of course, multiple processes can have the same arrival time, i.e. the time at which they arrive and request the CPU. It is then the scheduler's responsibility to handle and schedule them according to the appropriate scheduling algorithm.
When the arrival times of two or more processes are the same, round-robin scheduling breaks the tie with an FCFS (first come, first served) approach. Here, with round-robin scheduling and quantum = 1, we have:
Gantt Chart
At time 0, no process has arrived.
At time 1, we have P2 and P3, with P2 first; after one quantum, RR executes P3.
At time 3, all three processes are present, in the order P2, P3, P1, so the RR algorithm keeps switching between them until they complete their burst times.
All of them have finished executing by time 13.
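To make that concrete, here is a minimal round-robin sketch in C for this example. The tie-breaking conventions it uses (processes with equal arrival times are enqueued in their listed order, and a preempted process is re-queued before a process arriving at that same instant) are assumptions that differ between textbooks, but either way everything finishes at time 13:

#include <stdio.h>

#define N 3

int main(void)
{
    int pid[N]     = {1, 2, 3};
    int arrival[N] = {3, 1, 1};
    int burst[N]   = {4, 5, 3};
    int remaining[N], queue[64], head = 0, tail = 0, admitted[N] = {0};
    const int quantum = 1;
    int time = 0, done = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        /* admit everything that has arrived by the current time, in listed order */
        for (int i = 0; i < N; i++)
            if (!admitted[i] && arrival[i] <= time) { queue[tail++] = i; admitted[i] = 1; }

        if (head == tail) { time++; continue; }        /* nothing ready: CPU idles */

        int cur = queue[head++];
        int slice = remaining[cur] < quantum ? remaining[cur] : quantum;
        printf("Time %2d-%2d: P%d\n", time, time + slice, pid[cur]);
        time += slice;
        remaining[cur] -= slice;

        if (remaining[cur] > 0) queue[tail++] = cur;   /* preempted: back of the queue */
        else done++;
    }
    return 0;
}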
I am writing pseudocode for a cyclic executive (CE) scheduling algorithm. By the looks of it, task E is never going to run. Can anyone see where I'm going wrong? Am I choosing the correct interrupt period of 25 msec for this cyclic executive schedule?
Task   Period p (msec)   Exec Time (msec)
A           25                 10
B           25                  5
C           50                  5
D           50                  5
E          100                  2
while(true)
wait_for_int (waits 25ms)
taskA()
taskB()
taskC()
taskD()
wait_for_int (waits 25ms)
taskA()
taskB()
wait_for_int (waits 25ms)
taskA()
taskB()
taskC()
taskD()
wait_for_int (waits 25ms)
taskA()
taskB()
endloop;
You are going wrong by thinking that all five tasks need to run in the same 25 millisecond period. That's not the case. All five tasks need to run every 100 milliseconds, and some tasks need to run more than once in that 100 millisecond period, but never do all five tasks need to run in the same 25 millisecond period.
For example, tasks C and D run every 50 milliseconds. But they don't have to run in the same 25 millisecond phase. They can run out of phase by 25 milliseconds. If you divide the 100 millisecond period into 25 millisecond phases then at most you need to run only four tasks in any given phase.
(If you break the 100 milliseconds into smaller phases, such as 5 milliseconds, then you might be able to design it such that no two tasks ever need to run in the same phase.)
Read this article, Multi-rate Main Loop Tasking, for a detailed explanation of what you're trying to do along with a great example.
You need to interleave C and D so that E can fit into one of the 25 ms frames:
Frame start   0ms    25ms   50ms   75ms
----------------------------------------
              A      A      A      A
              B      B      B      B
              C      D      C      D
              -      -      -      E
----------------------------------------
Exec Time     20ms   20ms   20ms   22ms
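Translated back into the shape of the question's pseudocode, a C sketch of that interleaved schedule could look like the following; taskA()..taskE() and wait_for_int() (assumed to block until the next 25 ms timer interrupt) are the question's own placeholders, declared here only so the sketch is self-contained:

extern void wait_for_int(void);
extern void taskA(void), taskB(void), taskC(void), taskD(void), taskE(void);

void cyclic_executive(void)
{
    for (;;) {                     /* one iteration = one 100 ms major cycle  */
        wait_for_int();            /* frame at  0 ms: A, B, C     -> 20 ms    */
        taskA(); taskB(); taskC();

        wait_for_int();            /* frame at 25 ms: A, B, D     -> 20 ms    */
        taskA(); taskB(); taskD();

        wait_for_int();            /* frame at 50 ms: A, B, C     -> 20 ms    */
        taskA(); taskB(); taskC();

        wait_for_int();            /* frame at 75 ms: A, B, D, E  -> 22 ms    */
        taskA(); taskB(); taskD(); taskE();
    }
}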
Suppose I have a multilevel feedback queue with two round-robin queues having time quanta of 1 sec and 2 sec respectively.
Now consider a situation where two processes, P1 and P2, are in the second (lower-priority) queue. P1 gets scheduled, and after 1 second (that is, before P1's time quantum has expired) process P3 arrives in the top queue.
Since the top queue has higher priority, P1 is preempted and P3 is scheduled. After P3 completes, back in the second queue, is P1 resumed to finish its remaining quantum, or is P2 scheduled with a fresh timer?
If there are two processes with the following data, what should the Gantt chart look like? (SRTF scheduling)
Process   Arrival   Burst
P1        0         17
P2        1         16
So will process P1 be completed first and then P2 start executing, or will P1 have to wait for 16 milliseconds?
I feel the tie can be broken either by choosing the process that arrived earlier or by choosing the process with the longer burst. In this case, either approach means P1 is completed first.
It's going to choose P1, because at time 0 P2 didn't exist yet.
P1's arrival time is 0, so it starts first.
At the next step their remaining times are equal, but since the processor is already working on P1, it keeps working on it until interruption or termination.
In this case the scheduler sees P2 at time 1 and checks the remaining times. As both remaining times are the same, it puts the new process, P2, in the queue for the next execution (after P1's completion).
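For what it's worth, here is a minimal SRTF sketch in C of this two-process example; the "keep the currently running process on a tie" rule is an assumption that matches the reasoning above, so it prints P1 running from 0 to 17 and P2 from 17 to 33:

#include <stdio.h>

#define N 2

int main(void)
{
    int pid[N] = {1, 2}, arrival[N] = {0, 1}, remaining[N] = {17, 16};
    int time = 0, finished = 0, running = -1;

    while (finished < N) {
        int best = -1;
        for (int i = 0; i < N; i++) {
            if (arrival[i] > time || remaining[i] == 0) continue;
            if (best == -1 || remaining[i] < remaining[best]) best = i;
            /* equal remaining time: prefer the process that is already running */
            else if (remaining[i] == remaining[best] && i == running) best = i;
        }
        if (best == -1) { time++; continue; }          /* nothing has arrived yet */
        if (best != running)
            printf("Time %2d: P%d runs\n", time, pid[best]);
        running = best;
        remaining[best]--;
        time++;
        if (remaining[best] == 0) {
            printf("Time %2d: P%d finishes\n", time, pid[best]);
            finished++;
            running = -1;
        }
    }
    return 0;
}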
This was an exam question I could not solve, even after reading up on response time.
I thought the answers should be 220 and 120.
The effectiveness of RR scheduling depends on two factors: the choice of q, the time quantum, and the scheduling overhead s. If a system contains n processes and each request by a process consumes exactly q seconds, the response time (rt) for a request is rt = n(q + s). This means a response is produced after each process spends its whole CPU burst and the scheduler switches to the next process (i.e. after q + s per process).
Assume that an OS contains 10 identical processes that were initiated at the same time. Each process contains 15 identical requests, and each request consumes 20 msec of CPU time. A request is followed by an I/O operation that consumes 10 sec. The system consumes 2 msec in CPU scheduling. Calculate the average response time of the first requests issued by each process for the following two cases:
(i) the time quantum is 20 msec.
(ii) the time quantum is 10 msec.
Note that I'm assuming you meant 10ms instead of 10s for the I/O wait time, and that nothing can run on-CPU while an I/O is in progress. In real operating systems, the latter assumption is not true.
Each process should take time 15 requests * (20ms CPU + 10ms I/O)/request = 450ms.
Then, divide by the time quantum to get the number of scheduling delays, and add that to 450ms:
450ms / 20ms = 22.5 but actually it should be 23 because you can't get a partial reschedule. This gives the answer 450ms + 2ms/reschedule * 23 reschedules = 496ms.
450ms / 10ms = 45. This gives the answer 450ms + 2ms/reschedule * 45 reschedules = 540ms.
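To double-check that arithmetic, here is a small C snippet under the same assumptions (per-process work of 15 * (20 ms + 10 ms) = 450 ms, 2 ms of overhead per reschedule, and ceil(work / quantum) reschedules); the helper name is made up and the model simply restates the calculation above:

#include <stdio.h>
#include <math.h>

/* Restates the calculation above: total work plus one scheduling overhead
   per (possibly partial) quantum. */
static double total_time_ms(double work_ms, double quantum_ms, double overhead_ms)
{
    double reschedules = ceil(work_ms / quantum_ms);   /* no partial reschedules */
    return work_ms + overhead_ms * reschedules;
}

int main(void)
{
    double work = 15 * (20 + 10);                                 /* 450 ms per process */
    printf("q = 20 ms: %.0f ms\n", total_time_ms(work, 20, 2));   /* prints 496 ms */
    printf("q = 10 ms: %.0f ms\n", total_time_ms(work, 10, 2));   /* prints 540 ms */
    return 0;
}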