How does a long-term scheduler decide which job is I/O-bound and which one is CPU-bound?
I heard that by using CPU bursts we can distinguish between I/O-bound and CPU-bound jobs, but how is a CPU burst calculated without actually running the program?
Generally, the CPU scheduler assigns time slices to processes/threads and switches between them whenever a) the time slice has run out or b) the process/thread blocks for I/O.
An I/O-bound job will block for I/O very often, while a process/thread that always uses its full time slice can be assumed to be CPU-bound. So by distinguishing whether a process/thread blocks at the end of its time slice or by calling some wait_for_io_completion() function, you can effectively characterize those types of processes.
Note that in real life things get more complicated, because most applications are neither purely I/O-bound nor purely CPU-bound but switch roles all the time. This is why scheduling is about heuristics and not about correct solutions, because you cannot (always) predict the future.
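As a rough illustration of that heuristic, a scheduler could keep a short history of how each process's recent time slices ended and classify the process from that. The following Python sketch is purely illustrative; the function name, the "io_block"/"preempted" labels, and the 50% threshold are assumptions, not any real kernel's API:

```python
from collections import deque

# Hypothetical sketch: classify a process by how its recent time slices ended.
# "io_block" means it blocked for I/O before the slice expired; "preempted"
# means it used up the full slice.

def classify(recent_slice_endings, io_threshold=0.5):
    """Return 'io-bound' or 'cpu-bound' from a window of slice outcomes."""
    if not recent_slice_endings:
        return "unknown"
    io_ratio = sum(1 for e in recent_slice_endings if e == "io_block") / len(recent_slice_endings)
    return "io-bound" if io_ratio >= io_threshold else "cpu-bound"

# Example: a process that blocked for I/O in 8 of its last 10 slices
history = deque(["io_block"] * 8 + ["preempted"] * 2, maxlen=10)
print(classify(history))  # -> io-bound
```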
A CPU-bound process uses more of its time doing computations than an I/O-bound process does.
I/O-BOUND PROCESS:
An I/O-bound process spends more time doing I/O than computations; it has many short CPU bursts.
CPU-BOUND PROCESS:
A CPU-bound process spends more time doing computations; it has few, very long CPU bursts.
Let's take this processor as an example: a CPU with 2 cores and 4 threads (2 threads per core).
From what I've read, such a CPU has 2 physical cores but can process 4 threads simultaneously through hyper-threading. In reality, one physical core can truly run only one thread at a time, but with hyper-threading the CPU exploits idle stages in the pipeline to make progress on another thread.
Now enter Kubernetes, with Prometheus and Grafana, and their CPU resource unit of measurement - the millicore/millicpu. So they virtually slice a core into 1000 millicores.
Taking hyper-threading into account, I can't understand how those millicores are calculated under the hood.
How can a process, for example, use 100 millicores (a tenth of a core)? How is this technically possible?
PS: I accidentally found a really descriptive explanation here: Multi threading with Millicores in Kubernetes
This gets very complicated. Kubernetes doesn't actually manage this itself; it just provides a layer on top of the underlying container runtime (Docker, containerd, etc.). When you configure a container to use 100 millicores, Kubernetes hands that down to the underlying container runtime, and the runtime deals with it. Once you go down to this level, you have to start looking at the Linux kernel and how it does CPU scheduling and rate limiting with cgroups, which becomes incredibly interesting and complicated.

In a nutshell, though: Linux CFS bandwidth control is the thing that manages how much CPU a process (container) can use. By setting the quota and period parameters of the scheduler, you control how much CPU is used: how long a process can run before being paused, and how often it gets to run. As you correctly identified, you can't use only a tenth of a core, but you can run for only a tenth of the time, and by doing that you use only a tenth of the core over time.
For example:
If I set quota to 250ms and period to 250ms, that tells the kernel that this cgroup can use 250ms of CPU cycle time every 250ms, which means it can use 100% of the CPU.
If I set quota to 500ms and keep the period at 250ms, that tells the kernel that this cgroup can use 500ms of CPU cycle time every 250ms, which means it can use 200% of the CPU (2 cores).
If I set quota to 125ms and keep the period at 250ms, that tells the kernel that this cgroup can use 125ms of CPU cycle time every 250ms, which means it can use 50% of the CPU.
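To tie the millicore numbers back to quota and period, here is a minimal Python sketch of the arithmetic. The 100ms period matches the common cfs_period_us default, but the helper name and the rounding are assumptions for illustration; the real translation happens inside the container runtime and the kernel:

```python
# Hypothetical sketch: map a CPU limit in millicores onto CFS bandwidth-control
# parameters (quota microseconds of CPU time per period).

CFS_PERIOD_US = 100_000  # 100ms, the usual default cfs_period_us

def millicores_to_quota_us(millicores, period_us=CFS_PERIOD_US):
    """Quota (microseconds of CPU time per period) for a millicore limit."""
    # 1000 millicores == one full core == a quota equal to the period
    return int(period_us * millicores / 1000)

for m in (100, 250, 1000, 2000):
    q = millicores_to_quota_us(m)
    print(f"{m:>4} millicores -> quota {q:>6} us every {CFS_PERIOD_US} us "
          f"({q / CFS_PERIOD_US:.0%} of one core over time)")
```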
This is a very brief explanation. Here is some further reading:
https://blog.krybot.com/a?ID=00750-cfae57ed-c7dd-45a2-9dfa-09d42b7bd2d7
https://www.kernel.org/doc/html/latest/scheduler/sched-bwc.html
Recently I came across the statement that
In SJF IO bound jobs get priority over CPU bound jobs.
I found this statement on page 4 of this slide and also on page 3 of this slide. I decided to attach the corresponding pictures below, in case the links break in the future.
But I am having difficulty understanding the above, and it seems rather counter-intuitive to me. My argument is as follows:
I assume a CPU-bound process is one with a higher CPU burst : (CPU burst + I/O burst) ratio, and I assume a process is I/O-bound if it has a higher I/O burst : (CPU burst + I/O burst) ratio. I base this on the textbook "Operating System Concepts" by Galvin et al., from which the excerpt below is taken:
An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.
This, I guess, agrees with what the professor says here.
Based on this I came up with the following examples:
Suppose I have two jobs
JOB 1: CPU BURST = 10 units; IO BURST = 100 units
JOB 2: CPU BURST = 100 units; IO BURST = 10 units...
SJF shall schedule JOB 1 first, which is I/O-bound...
———————————————————————————
Suppose I have two other jobs
JOB 3: CPU BURST = 10 units; IO BURST = 1 unit
JOB 4: CPU BURST = 100 units; IO BURST = 200 units...
SJF shall schedule JOB 3 first, which is CPU-bound...
From the above examples I do not find any correlation showing that SJF gives priority to I/O-bound jobs.
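For concreteness, here is a tiny Python sketch of the comparison SJF makes as I understand it: it orders jobs only by their next CPU burst, which is why it happens to pick the I/O-bound job in the first pair and the CPU-bound job in the second. The job figures are taken from the examples above; everything else is just illustration:

```python
# SJF orders jobs by the length of their next CPU burst and ignores I/O bursts.
jobs = [
    {"name": "JOB 1", "cpu_burst": 10,  "io_burst": 100},
    {"name": "JOB 2", "cpu_burst": 100, "io_burst": 10},
    {"name": "JOB 3", "cpu_burst": 10,  "io_burst": 1},
    {"name": "JOB 4", "cpu_burst": 100, "io_burst": 200},
]

def bound(job):
    """Label a job by whether it spends more time on I/O or on the CPU."""
    return "I/O-bound" if job["io_burst"] > job["cpu_burst"] else "CPU-bound"

for job in sorted(jobs, key=lambda j: j["cpu_burst"]):
    print(f'{job["name"]}: next CPU burst {job["cpu_burst"]:>3} -> {bound(job)}')
```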
It's my first post here.
I'm currently learning Modern Operating Systems and I'm stuck on this question: A computer system has enough room to hold five programs in its main memory. These programs are idle waiting for I/O half of the time. What fraction of the CPU time is wasted?
The answer is 1/32, but why?
The sentence "These programs are idle waiting for I/O half of the time" is ambiguous. Let's look at a few different ways of interpreting this sentence and see if they match the expected answer:
a) "Each of the 5 programs spends 50% of the total time waiting for I/O". In this case, while one program is waiting for I/O the CPU could be used by other programs, and all programs combined could use 100% of CPU time with no time wasted. In fact, you'd be able to use 100% of CPU time with only 2 programs (the 1st program uses the CPU while the 2nd program waits for I/O, then the 2nd program uses the CPU while the 1st program waits for I/O, then ...). This can't be the intended meaning of "These programs are idle waiting for I/O half of the time" because the answer (possibly zero CPU time wasted) doesn't match the expected answer.
b) "All of the programs are idle waiting for I/O at the same time, for half the time". This can't be the intended meaning of the question because the answer would obviously be "50% of CPU time is wasted" and doesn't match the expected answer.
c) "Each program spends half of the time available to it waiting for I/O". In this case, the first program has 100% of CPU time available to it, but it spends 50% of the time using the CPU and waits for I/O the other 50% of the time, leaving 50% of CPU time available for the next program. Then the 2nd program spends 50% of the remaining CPU time (25% of total time) using the CPU and 50% of the remaining CPU time (25% of total time) waiting for I/O, leaving 25% of CPU time available for the next program. Then the 3rd program spends 50% of the remaining CPU time (12.5% of total time) using the CPU and 50% of the remaining CPU time (12.5% of total time) waiting for I/O, leaving 12.5% of CPU time available to the next programs, then...
In this case, the remaining time is halved by each program, so you get a "negative power of 2" sequence (1/2, 1/4, 1/8, 1/16, 1/32) that arrives at an answer that matches the expected answer.
Because we get the right answer for this interpretation, we can assume that this is what "These programs are idle waiting for I/O half of the time" was supposed to mean.
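As a quick check of interpretation (c), here is a few-line Python calculation of the halving sequence; it is only an illustration of the arithmetic above:

```python
# Each program uses half of the CPU time still available to it and waits for
# I/O the other half, so the fraction left over is halved at every step.
remaining = 1.0
for program in range(1, 6):
    used = remaining / 2      # time this program spends computing
    remaining -= used         # what is left for the programs after it
    print(f"after program {program}: {remaining} of CPU time still available")

# After the 5th program nothing else can use the leftover time, so it is wasted.
print(f"wasted CPU time = {remaining} = 1/{int(1 / remaining)}")  # 0.03125 = 1/32
```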
What is the degree of multiprogramming in OS?
Is it the number of processes in the ready queue or the number of processes in the memory?
In a multiprogramming-capable system, jobs to be executed are loaded into a pool. Some number of those jobs are loaded into main memory, and one is selected from the pool for execution by the CPU. If at some point the program in progress terminates or requires the services of a peripheral device, the control of the CPU is given to the next job in the pool.
An important concept in multiprogramming is the degree of multiprogramming. The degree of multiprogramming describes the maximum number of processes that a single-processor system can accommodate efficiently.
These are some of the factors affecting the degree of multiprogramming:

The primary factor is the amount of memory available to be allocated to executing processes. If the amount of memory is too limited, the degree of multiprogramming will be limited because fewer processes will fit in memory.

Operating system - the means by which resources are allocated to processes. If the operating system cannot allocate resources to executing processes in a fair and orderly fashion, the system will waste time in reallocation, or process execution could enter a deadlock state as programs wait for allocated resources to be freed by other blocked processes.

Other factors affecting the degree of multiprogramming are program I/O needs, program CPU needs, and memory and disk access speed.
Hope this answers your question. :)
If not, you can read about it in more detail here: http://www.tcnj.edu/~coburn/os
For a system with a single CPU core, there will never be more than one process running at a time, whereas a multicore system can run multiple processes at one time. If there are more processes than cores, excess processes will have to wait until a core is free and can be rescheduled. The number of processes currently in memory is known as the degree of multiprogramming.
Excerpt from: Operating System Concepts, 10th Edition, Abraham Silberschatz
My question is: why do we want the CPU's operation to overlap with I/O processing? I have been thinking about optimization and such, but have yet to arrive at a conclusion.
If anyone is able to answer this question, it will be great. :D
I/O is generally very slow compared to the operating frequency of the CPU.
Suppose you have a 1GHz CPU that's capable of executing one instruction every clock cycle. That means the CPU is able to execute one instruction every nanosecond.
Now let's assume you want to fetch some data from your hard drive. Disk operations often take place on the millisecond scale, and we'll assume your drive is fast enough to fetch the data in only 1ms.
If the CPU just sits around and waits for the disk to fetch the data, it wastes 1 million nanoseconds doing nothing, whereas it could be executing 1 million instructions for another task. When a program does a lot of I/O, those wasted cycles stack up and become noticeable if you let the CPU wait and do nothing. This is why it's a good idea to overlap computation with I/O, so CPU cycles aren't wasted.
This is also why your computer becomes super unresponsive when your main memory is full and the CPU has to page frequently to the disk. Your CPU cannot perform any useful task unless the data it needs has been retrieved from the disk into main memory, so it must sit around and wait for the I/O to complete.
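To make the overlap concrete, here is a small, self-contained Python sketch; the timings and names are made up, and the "I/O" is simulated with a sleep. It starts the I/O in a background thread and keeps computing while the request is in flight instead of blocking:

```python
import threading
import time

def slow_io(result):
    time.sleep(0.01)            # pretend this is a ~10ms disk read
    result["data"] = b"payload"

result = {}
io_thread = threading.Thread(target=slow_io, args=(result,))

start = time.perf_counter()
io_thread.start()               # kick off the I/O, don't wait for it

total = 0
while io_thread.is_alive():     # overlap: keep the CPU busy meanwhile
    total += 1

io_thread.join()
elapsed = time.perf_counter() - start
print(f"did {total} loop iterations during a {elapsed * 1000:.1f} ms I/O wait")
```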