Calculating turnaround and total wait time for FCFS and SJF

I was given an assignment in which I have to calculate both the turnaround and total wait time based on:
Process                                  P1     P2     P3
Time of arrival                          0      2      8
Total execution time                     10     5      27
Time needed for IO operations            1.5    2      1.1
IO operations waiting and processing     5      5      3.5
Priority                                 2      1      4
I found some info online saying that turnaround time can be calculated as turnaround time = completion time - arrival time, and that total wait time can be calculated as total wait time = turnaround time - total execution time, when you're only given the arrival time and the total execution time (the CPU burst, I think?).
What would the formulas above look like in this case, given that I have some extra info like the time needed for IO operations and the IO operations waiting and processing time?
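For what it's worth, here is a minimal Python sketch of those formulas under non-preemptive FCFS. It folds the IO columns into each process's service time, which is only one possible reading of the assignment, not something the problem statement confirms:

    # A sketch assuming non-preemptive FCFS and that IO time does not
    # overlap with other processes' execution (an assumption, not given).
    processes = [
        # (name, arrival, cpu_burst, io_time, io_wait_and_processing)
        ("P1", 0, 10, 1.5, 5.0),
        ("P2", 2, 5, 2.0, 5.0),
        ("P3", 8, 27, 1.1, 3.5),
    ]

    clock = 0.0
    for name, arrival, burst, io, io_wait in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)        # the CPU may idle until the process arrives
        service = burst + io + io_wait     # assumed total service time
        completion = clock + service
        turnaround = completion - arrival  # turnaround = completion - arrival
        waiting = turnaround - service     # wait = turnaround - service time
        print(name, completion, turnaround, waiting)
        clock = completion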


Is it possible that you have the same arrival time in CPU Scheduling?

I have searched the internet for examples of the algorithms in CPU scheduling, and I have never seen any examples with the same arrival time.
Is it possible to make the processes have the same arrival time?
For example:
Algorithm: Round Robin
Process    Arrival Time    Burst Time
P1         3               4
P2         1               5
P3         1               3
Quantum = 1
What would the Gantt chart look like?
Is it possible to make the processes have the same arrival time?
Yes, for normal definitions of "same time" (e.g. excluding "in the same Planck time quantum") it's possible for processes to have the same arrival time.
For example, imagine that 100 tasks sleep until midnight. When midnight occurs, a timer IRQ handler processes a list of tasks waiting to wake up and wakes all 100 tasks at the "same" time.
Now, for this example you could say that "same time" is stricter; and that the timer IRQ handler processes the list of tasks sequentially and adds them to scheduler's queues sequentially, and that it's only "almost at the same time". In this case it's still possible to have N CPUs running in parallel (with a different timer for each CPU) that happen to wake (up to) N tasks at the same time.
Of course, multiple processes can have the same arrival time, i.e. the time at which they arrive asking for the CPU to execute them. It is the scheduler's responsibility to handle them and order them according to the appropriate scheduling algorithm.
When the arrival times of two or more processes are the same, RR scheduling falls back on a FCFS (first come, first served) approach. Here, with Round Robin scheduling and quantum = 1, we have:
Gantt chart:
At time 0, no process has arrived, so the CPU idles.
At time 1, we have P2 and P3, with P2 first; after one quantum, RR executes P3.
At time 3, we have all three processes, in the order P2, P3, P1, so the RR algorithm keeps switching between them until they complete their execution (burst) times.
All three processes finish at time 13.
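A small Python sketch of this schedule (my own illustration, not from the answer above): Round Robin with quantum = 1, breaking arrival-time ties in FCFS order. It confirms that everything completes at time 13:

    from collections import deque

    procs = {"P1": (3, 4), "P2": (1, 5), "P3": (1, 3)}  # name: (arrival, burst)
    remaining = {name: burst for name, (arrival, burst) in procs.items()}
    arrivals = sorted(procs.items(), key=lambda kv: kv[1][0])  # FCFS order on ties

    ready, time, i, gantt = deque(), 0, 0, []
    while any(remaining.values()):
        while i < len(arrivals) and arrivals[i][1][0] <= time:
            ready.append(arrivals[i][0])  # admit processes that have arrived
            i += 1
        if not ready:
            time += 1                     # CPU idles until the next arrival
            continue
        name = ready.popleft()
        gantt.append((time, name))
        remaining[name] -= 1              # run for one quantum of 1 time unit
        time += 1
        # arrivals during this quantum queue up before the preempted process
        while i < len(arrivals) and arrivals[i][1][0] <= time:
            ready.append(arrivals[i][0])
            i += 1
        if remaining[name] > 0:
            ready.append(name)

    print(gantt)  # 12 one-unit slices starting at time 1, so the last ends at 13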

Regarding Queues

New to Stack Overflow, but I had a quick question about queues. I'm not entirely sure how to phrase the question, but here it goes:
Say you had a drive-through: there is an order queue and a pickup queue. Normally you order and pay at the order queue and then, once done, switch to the pickup queue. A customer arrives on average every 3 minutes, the average time in the order queue is 1 minute, and the average time in the pickup queue is 1 minute.
In a similar situation, you have an order queue, a pay queue, and a pickup queue. Now the average wait time for the order queue is half a minute, the average wait time for the pay queue is half a minute, and the average wait time for the pickup queue is still 1 minute, with a customer again arriving on average every 3 minutes.
My questions are:
1) Would you expect the total average wait time for a customer to be higher in case 1 or 2?
2) How would you expect the queues to differ if the average time for a customer to enter the store is increased or decreased?
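One way to explore both questions is a quick tandem-queue simulation. The sketch below is entirely my own set of assumptions, none of which the question specifies: exponential inter-arrival and service times, the per-queue minutes read as mean service times, and "total time" measured as time in the system (waiting plus service):

    import random

    def tandem_time_in_system(mean_interarrival, mean_services,
                              n_customers=100_000, seed=1):
        rng = random.Random(seed)
        free_at = [0.0] * len(mean_services)  # when each server is next free
        t = total = 0.0
        for _ in range(n_customers):
            t += rng.expovariate(1 / mean_interarrival)  # next customer arrives
            done = t
            for i, mean_s in enumerate(mean_services):
                start = max(done, free_at[i])            # queue if server is busy
                done = start + rng.expovariate(1 / mean_s)
                free_at[i] = done
            total += done - t                            # time in the system
        return total / n_customers

    print(tandem_time_in_system(3.0, [1.0, 1.0]))       # case 1: order, pickup
    print(tandem_time_in_system(3.0, [0.5, 0.5, 1.0]))  # case 2: order, pay, pickup
    print(tandem_time_in_system(2.0, [1.0, 1.0]))       # faster arrivals

Under these assumptions each stage behaves like an M/M/1 queue with mean time in system 1/(service rate - arrival rate), so case 2 should come out somewhat lower (roughly 2.7 minutes versus 3.0), and for question 2 the waits grow sharply as the inter-arrival time shrinks toward the slowest stage's 1-minute service time.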

Execution Time of a program

Assuming that the CPI of a program is 1.5 and the clock period is 500 ns, what is the execution time?
I think that the execution time is the time the program takes to execute one instruction, like a latency.
It is impossible to know the execution time of a program just by looking at the CPI and the clock period. CPI gives the average number of cycles it takes to execute an instruction; whether you execute 1 billion instructions or 1 million, you can still have the same CPI. So, without knowing the number of instructions executed, there is no way to infer the execution time. If you had the number of instructions, the execution time would be:
t_execution = (clock period) × CPI × (number of executed instructions)
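As a quick worked example, plugging in a hypothetical count of one million instructions (a number the question does not give):

    clock_period = 500e-9     # 500 ns
    cpi = 1.5
    instructions = 1_000_000  # assumed; not given in the question
    print(clock_period * cpi * instructions)  # 0.75 seconds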

Round Robin Scheduling with arrival time and priority level

I have this Round Robin problem, and I was wondering whether my solution is correct. The lower the number, the higher the priority.
The Table is:
Process         p0      p1      p2
Arrival Time    3 ms    0 ms    1 ms
Burst Time      3 ms    25 ms   7 ms
Priority        1       7       5
With a time quantum of 5 ms.
This is my Gantt chart:
| p1 | p2 | p0 |   p2   |           p1           |
0    1    3    6        11                       35
My understanding is that, if the scheduler is preemptive and uses priority, then whenever a process with a higher priority enters the ready queue while a lower-priority process is running, the running process is preempted. Is my chart correct?
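One way to check the chart is a per-millisecond simulation of preemptive priority scheduling (a sketch of my own, not the poster's code). Since all three priorities are distinct, the 5 ms quantum never has to break a tie between equal-priority processes here:

    procs = {"p0": (3, 3, 1), "p1": (0, 25, 7), "p2": (1, 7, 5)}  # arrival, burst, priority
    remaining = {name: burst for name, (arrival, burst, prio) in procs.items()}

    time, timeline = 0, []
    while any(remaining.values()):
        ready = [name for name, (arrival, burst, prio) in procs.items()
                 if arrival <= time and remaining[name] > 0]
        if not ready:
            time += 1                 # CPU idles until something arrives
            continue
        running = min(ready, key=lambda name: procs[name][2])  # lowest number wins
        if not timeline or timeline[-1][1] != running:
            timeline.append((time, running))  # record each context switch
        remaining[running] -= 1
        time += 1

    print(timeline)

This prints [(0, 'p1'), (1, 'p2'), (3, 'p0'), (6, 'p2'), (11, 'p1')] with everything done at 35, which matches the chart above.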

Calculation of response time in operating system

This was an exam question I could not solve, even after reading up on response time.
I thought that the answer should be 220 and 120.
The effectiveness of RR scheduling depends on two factors: the choice of q, the time quantum, and the scheduling overhead s. If a system contains n processes and each request by a process consumes exactly q seconds, the response time (rt) for a request is rt = n(q + s). This means that a response is generated after the process spends its whole CPU burst and the scheduler switches to the next process (after q + s).
Assume that an OS contains 10 identical processes that were initiated at the same time. Each process contains 15 identical requests, and each request consumes 20 msec of CPU time. A request is followed by an I/O operation that consumes 10 sec. The system consumes 2 msec in CPU scheduling. Calculate the average response time of the first requests issued by each process for the following two cases:
(i) the time quantum is 20 msec.
(ii) the time quantum is 10 msec.
Note that I'm assuming you meant 10ms instead of 10s for the I/O wait time, and that nothing can run on-CPU while an I/O is in progress. In real operating systems, the latter assumption is not true.
Each process should take time 15 requests * (20ms CPU + 10ms I/O)/request = 450ms.
Then, divide by the time quantum to get the number of scheduling delays, and add that to 450ms:
450ms / 20ms = 22.5, but it should be 23 because you can't have a partial reschedule. This gives the answer 450ms + 2ms/reschedule × 23 reschedules = 496ms.
450ms / 10ms = 45. This gives the answer 450ms + 2ms/reschedule * 45 reschedules = 540ms.
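The same arithmetic in a few lines of Python, just reproducing the method above:

    import math

    per_process = 15 * (20 + 10)  # 450 ms of CPU plus assumed 10 ms I/O per request
    for quantum in (20, 10):
        reschedules = math.ceil(per_process / quantum)  # partial reschedules round up
        print(quantum, per_process + 2 * reschedules)   # 496 ms and 540 ms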