Calculation of response time in operating system

This was an exam question I could not solve, even after searching about response time.
I thought the answers should be 220 and 120.
Effectiveness of RR scheduling depends on two factors: the choice of q, the time quantum, and the scheduling overhead s. If a system contains n processes and each request by a process consumes exactly q seconds, the response time (rt) for a request is rt = n(q + s). This means that the response is produced only after the process spends its whole CPU burst and the CPU is handed to the next process (i.e., after q + s).
Assume that an OS contains 10 identical processes that were initiated at the same time. Each process contains 15 identical requests, and each request consumes 20 msec of CPU time. A request is followed by an I/O operation that consumes 10 sec. The system consumes 2 msec in CPU scheduling. Calculate the average response time of the first requests issued by each process for the following two cases:
(i) the time quantum is 20 msec.
(ii) the time quantum is 10 msec.

Note that I'm assuming you meant 10ms instead of 10s for the I/O wait time, and that nothing can run on-CPU while an I/O is in progress. In real operating systems, the latter assumption is not true.
Each process should take time 15 requests * (20ms CPU + 10ms I/O)/request = 450ms.
Then, divide by the time quantum to get the number of scheduling delays, and add that to 450ms:
450ms / 20ms = 22.5, which rounds up to 23 because you can't have a partial reschedule. This gives the answer 450ms + 2ms/reschedule * 23 reschedules = 496ms.
450ms / 10ms = 45. This gives the answer 450ms + 2ms/reschedule * 45 reschedules = 540ms.
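A minimal Python sketch of that arithmetic, under the same assumptions (10 ms I/O per request, nothing overlaps the I/O):

    import math

    requests = 15
    cpu_ms, io_ms = 20, 10
    overhead_ms = 2                                    # scheduling overhead per reschedule

    work_ms = requests * (cpu_ms + io_ms)              # 450 ms of CPU + I/O per process
    for quantum_ms in (20, 10):
        reschedules = math.ceil(work_ms / quantum_ms)  # a partial reschedule rounds up
        total_ms = work_ms + overhead_ms * reschedules
        print(f"q = {quantum_ms} ms: {reschedules} reschedules, {total_ms} ms")
    # q = 20 ms: 23 reschedules, 496 ms
    # q = 10 ms: 45 reschedules, 540 ms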

Related

What fraction of the CPU time is wasted? (Modern Operating Systems, 4th ed)

It's my first post here.
I'm currently learning Modern Operating Systems and I'm stuck on this question: A computer system has enough room to hold five programs in its main memory. These programs are idle waiting for I/O half of the time. What fraction of the CPU time is wasted?
The answer is 1/32, but why?
The sentence "These programs are idle waiting for I/O half of the time" is ambiguous. Let's look at a few different ways of interpreting this sentence and see if they match the expected answer:
a) "Each of the 5 programs spends 50% of the total time waiting for IO". In this case, while one program is waiting for IO the CPU could be being used by other programs; and all programs combined could use 100% of CPU time with no time wasted. In fact, you'd be able to use 100% of CPU time with only 2 programs (the 1st program uses the CPU while the 2nd program waits for IO, then the 2nd program uses the CPU while the 1st task waits for IO, then ...). This can't be the intended meaning of "These programs are idle waiting for I/O half of the time" because the answer (possibly zero CPU time wasted) doesn't match the expected answer.
b) "All of the programs are idle waiting for I/O at the same time, for half the time". This can't be the intended meaning of the question because the answer would obviously be "50% of CPU time is wasted" and doesn't match the expected answer.
c) "Each program spends half of the time available to it waiting for IO". In this case, the first program has 100% of CPU time available to it but spends 50% of the time using the CPU and waits for IO for the other 50% of the time, leaving 50% of CPU time available for the next program; then the 2nd program uses 50% of the remaining CPU time (25% of total time) using the CPU and 50% of the remaining CPU time (25% of total time) waiting for IO, leaving 25% of CPU time available for the next program; then the third program uses 50% of the remaining CPU time (12.5% of total time) using the CPU and 50% of the remaining CPU time (12.5% of total time) waiting for IO, leaving 12.5% of CPU time available to the next programs, then...
In this case, the remaining time is halved by each program, so you get a "negative power of 2" sequence (1/2, 1/4, 1/8, 1/16, 1/32) that arrives at an answer that matches the expected answer.
Because we get the right answer for this interpretation, we can assume that this is what "These programs are idle waiting for I/O half of the time" was supposed to mean.
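A tiny Python sketch of interpretation (c): each program burns half of what is left, so just halve the remaining CPU time five times:

    remaining = 1.0
    for program in range(1, 6):
        remaining /= 2     # half goes to this program's CPU use; the I/O-wait
                           # half is what's available to the next program
        print(f"after program {program}: {remaining} of CPU time still unused")
    # after program 5: 0.03125 = 1/32 of CPU time is wasted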

CPU utilization calculation

I've read in many places that a simple and decent way to get the % of CPU utilization is by this formula:
CPU utilization = 1 - p^n
where:
p - the fraction of time a process spends blocked (waiting for I/O)
n - number of processes
But I can't find an explanation for it. It seems to have to do with statistics, but I can't wrap my head around it.
My starting point is: if I have 2 processes with 50% wait time, then the formula would yield 1 - 1/4 = 75% CPU utilization. But my broken logic begs the question: if one process is blocked on I/O and the other is swapped in to run while the first is blocked (whatever the burst is), that means that while one waits, the second runs and their wait times overlap. Isn't that 100% CPU utilization? I think this is true only when the first half of the programs is guaranteed to run without needing IO.
Question is: How is that formula taking into account every other possibility?
You need to think in terms of probabilities. If the probability of each of the processes being idle (waiting for IO) is 0.5, then the probability of the CPU being idle is the probability of all of the processes being idle at the same time. That is 0.5 * 0.5 = 0.25, and so the probability the CPU is doing work is 1 - 0.25 = 0.75 = 75%.
CPU utilisation is given as 1 minus the probability of the CPU being in the idle state, and the CPU remains idle only when all the processes loaded in main memory are blocked on I/O at the same time. So if n processes each have a wait time of 50%, the probability that all of them are blocked on I/O simultaneously is 0.5^n, which gives utilisation = 1 - 0.5^n.
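A minimal sketch of the formula, assuming the processes block independently of one another:

    def cpu_utilization(p, n):
        # the CPU sits idle only when all n processes are blocked at once,
        # which (assuming independence) happens with probability p**n
        return 1 - p ** n

    print(cpu_utilization(0.5, 2))   # 0.75    -> the 75% worked out above
    print(cpu_utilization(0.5, 5))   # 0.96875 -> 1 - 1/32, matching the previous question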

Operating Systems Virtual Memory

I am a student taking an Operating Systems course for the first time. I have a doubt about the performance-degradation calculation for demand paging. In the Silberschatz book on operating systems, the following lines appear.
"If we take an average page-fault service time of 8 milliseconds and a
memory-access time of 200 nanoseconds, then the effective access time in
nanoseconds is
effective access time = (1 - p) x (200) + p x (8 milliseconds)
= (1 - p) x 200 + p x 8,000,000
= 200 + 7,999,800 x p.
We see, then, that the effective access time is directly proportional to the
page-fault rate. If one access out of 1,000 causes a page fault, the effective
access time is 8.2 microseconds. The computer will be slowed down by a factor
of 40 because of demand paging! "
How did they calculate the slowdown here? Are 'performance degradation' and slowdown the same thing?
This whole thing is nonsensical. It assumes a fixed page-fault rate P, which is not realistic in itself. That rate is the fraction of memory accesses that result in a page fault.
1 - P is the fraction of memory accesses that do not result in a page fault.
T = (1 - P) x 200ns + P x 8ms is then the average time of a memory access.
Expanded:
T = 200ns + P x (8ms - 200ns)
T = 200ns + P x 7,999,800ns
The slowdown they quote is simply the effective access time divided by the plain memory-access time: with P = 1/1000, T = 8199.8ns, and 8199.8 / 200 ≈ 41, which the book rounds to a factor of 40. 'Performance degradation' and slowdown mean the same thing here.
The whole thing is rather silly.
All you really need to know is that a nanosecond is a billionth of a second and a millisecond is a thousandth of a second.
The units alone differ by a factor of a million between the access time in memory (nanoseconds) and on disk (milliseconds); with the actual figures, 8ms / 200ns = 40,000.
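A quick Python check of those numbers:

    mem_ns = 200
    fault_ns = 8_000_000      # 8 ms page-fault service time, in nanoseconds
    p = 1 / 1000              # one access in 1,000 page-faults

    eat_ns = (1 - p) * mem_ns + p * fault_ns
    print(eat_ns)             # 8199.8 ns, i.e. about 8.2 microseconds
    print(eat_ns / mem_ns)    # ~41: the "slowed down by a factor of 40"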

Interrupt time in DMA operation

I'm facing difficulty with the following question:
Consider a disk drive with the following specifications.
16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle stealing mode whereby whenever 1 byte word is ready it is sent to memory; similarly for writing, the disk interface reads a 4 byte word from the memory in each DMA cycle. Memory Cycle time is 40 ns. The maximum percentage of time that the CPU gets blocked during DMA operation is?
The solution to this question, provided on the only site that has it, is:
Revolutions Per Min = 3000 RPM
or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50
= 6553600 ............. (1)
Interrupt = 6553600 takes 0.2621 sec
Percentage Gain = (0.2621/1)*100
= 26 %
I have understood up to (1).
Can anybody explain to me where 0.2621 comes from? How is the interrupt time calculated? Please help.
Reversing from the numbers you've given: it's 6553600 * 40ns that gives 0.2621 sec.
One quite obvious problem is that the comments in the calculations are somewhat wrong. It's not
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50 <- WRONG
The numbers are 512K / 4 * 50, so it's in bytes. How could that be called 'number of tracks'? Reading a full track takes 1 full rotation, so the number of tracks readable in 1 second is 50, as there are 50 RPS.
However, the total bytes readable in 1s is then just 512K * 50 since 512K is the amount of data on the track.
But then it is further divided by 4..
So, I guess, the actual comments should be:
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
Interrupts per second = (2^19/2^2) * 50 = 6553600 (*)
Interrupt triggers one memory op, so then:
total wasted: 6553600 * 40ns = 0.2621 sec.
However, I don't really like how the 'number of interrupts per second' is calculated. I currently don't see/feel/guess how or why it's just Bytes/4.
The only VAGUE explanation of that "divide it by 4" I can think of is:
At each byte written to the controller's memory, an event is triggered. However, the DMA controller can read only packets of 4 bytes, so the hardware DMA controller must wait until there are at least 4 bytes ready to be read. Only then does the DMA kick in and halt the bus (or part of it) for the duration of the one memory cycle needed to copy the data. While the bus is frozen, the processor may have to wait. It doesn't need to, since it can keep doing its own ops and work on cache, but if it tries touching the memory, it will need to wait until the DMA finishes.
However, I don't like a few things in this "explanation". I cannot guarantee you that it is valid. It really depends on what architecture you are analyzing and how the DMA/CPU/BUS are organized.
The only mistake is that it's not
no. of tracks read
It's actually the no. of interrupts that occurred (the no. of times the DMA came up with its data; that's how many times the CPU will be blocked).
But again, I don't know why it's multiplied by 50. Probably because of the 1 second, but I wish to solve this without multiplying by 50.
My Solution:-
Here, in 1 rotation the interface can read 512 KB of data. 1 rotation time = 0.02 sec, so one byte's preparation time = 39.1 nsec, and for 4 B it takes 156.4 nsec. Memory cycle time = 40 ns. So the % of time the CPU gets blocked = 40/(40+156.4) = 0.2036 ≈ 20%. But in the answer booklet the options are given as A) 10 B) 25 C) 40 D) 50. Tell me if I'm doing something wrong?
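For comparison, a small Python sketch of both calculations above (the quoted 26% and this 20% figure); note it takes the track size as 2^19 bytes, so the per-byte time comes out slightly different from the 39.1 ns used above:

    bytes_per_track = 2 ** 19          # 512 KB per track (512 sectors x 1 KB)
    rps = 3000 / 60                    # 50 revolutions per second
    mem_cycle_ns = 40

    # Quoted solution: one DMA memory cycle per 4-byte word, over one second
    dma_cycles = bytes_per_track // 4 * 50          # 6,553,600 cycles/second
    print(dma_cycles * mem_cycle_ns / 1e9)          # 0.2621 s blocked per second -> ~26%

    # Last answer: one memory cycle vs. one 4-byte preparation time
    byte_prep_ns = (1 / rps) * 1e9 / bytes_per_track    # ~38.1 ns per byte
    print(mem_cycle_ns / (mem_cycle_ns + 4 * byte_prep_ns))   # ~0.21, close to the ~20% above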

Calculating Interrupt Data Rate

I'm currently learning about interrupts but don't understand how you
calculate the data rate for the question below. I have the answers but
I have no idea how you get there. If someone could please explain to
me how it is calculated it would be really appreciated.
Here is the question...
This question concerns the use of interrupts to handle the input and
storage in memory of data arriving at an input interface, and the
consideration of the data rates that can be achieved using this
mechanism. In this particular question, the arrival of each new data
item triggers an interrupt request to input and store the data item in
a queue in memory. The question is about calculating the maximum data
rate achievable in this scenario.
You are first required to calculate the time to respond to an
interrupt from the interface, run the interrupt service routine (ISR)
and return to the interrupted program. From this and the number of data
bits input on each interrupt, you are required to calculate the
maximum data rate, in bits per second, that can be handled. Below you
are given: the number of clock cycles the CPU requires to respond to
the interrupt and switch to the ISR, the number of instructions
executed by the ISR, the average number of clock cycles executed per
instruction in the ISR, the number of bits in the data item input on
each interrupt, and the clock frequency. [You can assume that the
CPU can be immediately interrupted again as soon as the ISR completes,
but not before this.]
clock cycles to respond to interrupt = 15
instructions executed in ISR= 70
average clock cycles per instruction = 5
number of bits per data item = 32
clock frequency = 10MHz
Questions
a) What is the time in microseconds to respond to an interrupt from
the interface, run the interrupt service routine (ISR) and return to
the interrupted program?
b)What is the maximum data rate in Kbits/second?
Answers
a) 36.5 - I understand this
b) 876.7 - ????
Because each ISR takes 36.5 us, the absolute maximum number of ISRs that can happen in a second is 1 / 36.5us = 27,397.2603.
In each ISR, 32 bits of data are processed.
Therefore, 27,397.2603 * 32 = 876,712.33 bits processed per second, i.e. about 876.7 Kbits/second.
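The same steps in a short Python sketch:

    clock_hz = 10_000_000
    cycles = 15 + 70 * 5            # respond to the interrupt, then run the ISR
    t_isr_s = cycles / clock_hz
    print(t_isr_s * 1e6)            # 36.5 us               (answer a)

    isrs_per_second = 1 / t_isr_s   # ~27,397.26 ISRs/second
    print(isrs_per_second * 32 / 1000)   # ~876.7 Kbits/s   (answer b)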