The maximum amount of busy waiting time - operating-system

In a multiprocessor system, busy waiting is sometimes tolerated rather than blocking the process. In such an environment, if the context switch time is S, what is the maximum amount of busy waiting time that may be tolerated when a process gets stuck in a spinlock? Justify/explain.
I am unsure what the maximum amount of busy waiting is in this case. Can anyone help?
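One common textbook line of reasoning (my own addition, not part of the original question): blocking instead of spinning costs a context switch out of the process and, later, a context switch back in, so spinning is only worthwhile while the expected wait is shorter than roughly the cost of those switches. A minimal C sketch of the resulting spin-then-yield pattern, where SPIN_LIMIT is a made-up stand-in for that bound expressed as loop iterations:

```c
#include <stdatomic.h>
#include <sched.h>

#define SPIN_LIMIT 1000   /* assumption: roughly a context-switch's worth of spinning */

void spin_then_yield_lock(atomic_flag *l)
{
    int spins = 0;
    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire)) {
        if (++spins > SPIN_LIMIT) {
            sched_yield();   /* stop busy waiting; give the CPU away instead */
            spins = 0;
        }
    }
}

void spin_then_yield_unlock(atomic_flag *l)
{
    atomic_flag_clear_explicit(l, memory_order_release);
}
```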

Related

How does the round robin CPU scheduling algorithm deal with I/O-bound processes?

I am currently taking a course called The Principles of Operating Systems, and I have been learning about CPU scheduling. I am confused about round robin scheduling for I/O-bound processes. For example, suppose a process uses the CPU for 2 ms and then does I/O for 8 ms. Will the scheduler still assign a quantum to this process when it is doing I/O? Also, when this process is doing I/O, will the scheduler wait for the I/O to complete even when the quantum expires, or will it just start executing the next process? Any help would be appreciated!
Will the scheduler still assign a quantum to this process when it is doing I/O?
Typically each task has a state, maybe one of:
running, currently using CPU
ready to run, waiting to use CPU
blocked, waiting for something (disk IO, a mutex, a time delay, a network packet to arrive, ...)
The scheduler only cares about tasks that are running and ready to run - e.g. it might have a (circular singly linked?) list of tasks that want CPU time, and when a task blocks the task is removed from that list (and then later when whatever the task was waiting for happens and the task is unblocked, the task is put back on the list).
Traditionally, when a task is put back on the list it's put back on the end of the list, so that a task can't repeatedly block briefly to get a new time slice and hog the CPU.
This means that if there are 2 tasks and one blocks, a round robin scheduler might do "task A, task B, task A, task B" while task A is running/ready to run; then switch to "task B, task B, task B, task B" while task A is blocked; then after task A unblocks it'd go back to "task B, task A, task B, task A, ..." (starting with task B because task A was put on the end of the list and not the start of the list).
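To make the "list of tasks that want CPU time" concrete, here is a minimal C sketch of a round-robin ready list (the names, fields and structure are my own invention, and it's shown as a plain singly linked queue rather than the circular variant hinted at above):

```c
#include <stddef.h>

/* Hypothetical task states matching the list above. */
enum state { RUNNING, READY, BLOCKED };

struct task {
    enum state state;
    struct task *next;     /* next task in the ready list */
    /* ... registers, stack pointer, etc. would live here ... */
};

static struct task *ready_head = NULL;   /* front: next task to run */
static struct task *ready_tail = NULL;   /* back: where unblocked tasks go */

/* Put a task on the END of the ready list, so a task can't hog the CPU by
 * blocking briefly just to get a fresh time slice. */
void make_ready(struct task *t)
{
    t->state = READY;
    t->next = NULL;
    if (ready_tail)
        ready_tail->next = t;
    else
        ready_head = t;
    ready_tail = t;
}

/* Take the next task off the front of the ready list (round robin). */
struct task *pick_next(void)
{
    struct task *t = ready_head;
    if (t) {
        ready_head = t->next;
        if (!ready_head)
            ready_tail = NULL;
        t->state = RUNNING;
    }
    return t;
}
```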
The other thing is that tasks literally can't decide to do something that would cause them to block unless they're currently running; which means that whenever a task blocks it doesn't use its whole time slice. For example, if the scheduler is giving out 1 ms time slices then a task may block after using 0.3 ms of its time slice, leaving a remainder of 0.7 ms. For this reason the scheduler needs a timer with higher precision and the length of time slices will be rounded to the precision of the timer IRQ (e.g. if the scheduler is using a timer IRQ that occurs every 0.2 ms; then that remaining 0.7 ms left after one task blocks might be rounded to 0.8 ms leaving a spare 0.1 ms due to rounding, and the next task might actually get 1.1 ms of CPU time instead of 1.0 ms of CPU time because of that "rounding to the timer's precision").
Also; when all tasks are blocked the scheduler's timer can be suppressed/disabled (and the CPUs put into a power saving state) to reduce power consumption by preventing pointless timer IRQs from waking the CPU out of a power saving state; and when only one task can run the scheduler's timer can also be suppressed/disabled (and the task given an "effectively infinite" time slice) to prevent the overhead of pointless timer IRQs from decreasing the performance of the task.
Note 1: Almost all universities ignore reality; starting with the extremely dodgy assumption that it's possible to know how long a task will use CPU time and when it will block or use IO (followed by the assumption that everything happens on nicely "aligned to time slice duration" boundaries).
Note 2: Almost all universities assume that "IO" means the initiating task is blocked; either because the disk controller does the IO while the CPU does other things, or because one or more different task/s use the CPU to do the IO while the initiating task blocks (e.g. your task calls "read()", your task is blocked and a file system task is unblocked, the file system task asks the disk controller's driver to fetch some data, the file system task is blocked and the disk controller driver's task is unblocked, then ...). This isn't strictly true in all cases (but may be true in all cases for some operating systems).
In general time critical I/O will typically be handled by an interrupt handler rather than a round-robin scheduled process.
For example, say you have a UART with no hardware FIFO: a character arriving in its data register must be read before it is overwritten by the next received character. In this case the character might be placed in a software FIFO buffer (a pipe or queue). That buffer would need to be large enough to capture all data received while the receiving process is not running. When the receiving process is scheduled it will receive all buffered data at once.
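A rough sketch of that kind of software FIFO (all names are invented and real UART register access is hardware specific, so treat this as illustrative only):

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_SIZE 256   /* must cover the longest time the receiving process
                           can go without being scheduled */

static volatile uint8_t fifo[FIFO_SIZE];
static volatile unsigned head, tail;    /* head = write index, tail = read index */

/* Called from the UART receive interrupt: grab the data-register value before
 * the next character overwrites it, and stash it in the software FIFO. */
void uart_rx_isr(uint8_t data_register_value)
{
    unsigned next = (head + 1) % FIFO_SIZE;
    if (next != tail) {          /* drop the byte if the FIFO is full */
        fifo[head] = data_register_value;
        head = next;
    }
}

/* Called by the receiving process when it is finally scheduled: drain
 * everything that was buffered while it wasn't running. */
bool uart_read_byte(uint8_t *out)
{
    if (tail == head)
        return false;            /* nothing buffered */
    *out = fifo[tail];
    tail = (tail + 1) % FIFO_SIZE;
    return true;
}
```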
In other cases, I/O may use DMA operations, which occur in parallel to CPU operations. There is still often an interrupt handler involved but it would be a DMA controller interrupt rather than an interrupt from the I/O device.
Non time-critical I/O may simply be polled or asserted in a round-robin process when no precise timing is required.
If an application has a great deal of time-critical I/O and also time-critical data processing, round-robin scheduling may not be appropriate. Real-time operating systems generally use priority-based preemptive scheduling, with round-robin for tasks of equal priority.
The concept that a process either uses the CPU or does I/O, however, makes no sense; a process runs on the CPU whether it is performing I/O or data processing. In fact, for memory-mapped I/O the CPU makes no real distinction.

Scheduling Queue for First Come First Serve Algorithm

I have the above table, and I have to make a Gantt chart for the first come first serve (FCFS) and round-robin (RR) algorithms, plus something called a wait queue, which I really don't know what it is. After googling a bit I think it's the queue that holds the processes that will be executed next? For FCFS I came up with these charts.
Yellow means the process is executing, green means it's waiting for its turn (in the READY state), and red means it's doing I/O. My question is: is this correct? If so, what would the waiting queue be? I'm thinking it will be P3, P1, P3, P1, P0 (in from the right, out from the left), which is just the processes sorted based on yellow, in reverse. Or should it be the blue parts, since the process is in the WAIT state there?
I also have to make a wait time and response time table, with:
response time = start time - arrival time
wait time = time where the process is not in the RUNNING state, i.e. it's in WAIT; thus I counted the green and blue time since the process started executing
I'm pretty sure the response time is correct; I'm doubting the latter.
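As a sanity check of those two definitions, here is a tiny C program using made-up numbers (not the table from the question, which isn't reproduced here) that computes response time and "time not spent RUNNING" from arrival, start, completion and CPU-burst times:

```c
#include <stdio.h>

/* Hypothetical data, NOT the table from the question: arrival time, first
 * time the process got the CPU, completion time, and CPU time actually used. */
struct proc { const char *name; int arrival, start, completion, burst; };

int main(void)
{
    struct proc p[] = {
        { "P0", 0, 0, 9, 5 },
        { "P1", 1, 5, 12, 4 },
        { "P2", 3, 9, 15, 3 },
    };

    for (unsigned i = 0; i < sizeof p / sizeof p[0]; i++) {
        int response = p[i].start - p[i].arrival;       /* as defined above */
        int turnaround = p[i].completion - p[i].arrival;
        /* "Not RUNNING" time = everything between arrival and completion that
           wasn't spent on the CPU; this lumps READY and WAIT together, and
           whether I/O time should count is exactly the doubt raised above. */
        int waiting = turnaround - p[i].burst;
        printf("%s: response=%d waiting=%d\n", p[i].name, response, waiting);
    }
    return 0;
}
```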
The last thing is: "at the end of a quantum, the current running process is suspended (interrupted) if and only if the process queue is not empty". Since this statement has the word quantum, I'm assuming it's only valid for round-robin scheduling? If so, please elaborate on what it means. I made sense of it like this: if the quantum expires, the currently running process will be interrupted if and only if there's another process waiting to be executed (i.e. if we only have one process, and say it runs for 6 units of time with quantum = 3, there's no need to run it for 3 units, make it wait another 3 units, and then run it again; the proper answer would be that the process runs from t = 0 to t = 6 non-stop).
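For what it's worth, that reading matches how a round-robin quantum check is usually described. A hypothetical sketch of the rule (the helper functions are invented and only declared so the fragment compiles):

```c
#include <stdbool.h>

/* Invented helpers, declared only so the sketch compiles. */
extern bool ready_queue_empty(void);
extern void restart_quantum_timer(void);
extern void switch_to_next(void);

/* Hypothetical handler run when the quantum timer fires. */
void on_quantum_expired(void)
{
    if (ready_queue_empty()) {
        /* Only one runnable process: it keeps the CPU, like the
           "runs from t=0 to t=6 non-stop" example above. */
        restart_quantum_timer();
    } else {
        /* Another process is waiting: suspend the current one, move it to
           the back of the ready queue, and dispatch the next. */
        switch_to_next();
    }
}
```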

Can a process ask for x amount of time but take y amount instead?

Suppose I am running a set of processes and they ask for these burst times: 3, 5, and 2 respectively, with the total expected execution time being 10 time units.
Is it possible for one of the processes to take up more than what it asked for? For example, even though it asked for 3 it took 11 instead, because it was waiting on the user to enter some input, so the total execution time turns out to be 18.
This was all done with a non-preemptive CPU scheduler.
The reality is that software has no idea how long anything will take - my CPU runs at a different "nominal speed" to your CPU, both our CPUs keep changing their speed for power management reasons, and the speed of software executed by both our CPUs is affected by things like what other CPUs are doing (especially for SMT/hyper-threading) and what other devices happen to be doing at the time (their effect on caches, shared RAM bandwidth, etc); and software can't predict the future (e.g. guess when an IRQ will occur and take some time and upset the cache contents, guess when a read from memory will take 10 times longer because there was a single bit error that ECC needed to correct, guess when the CPU will get hot and reduce its speed to avoid melting, etc). It is possible to record things like "start time, burst time and end time" as it happens (to generate historical data from the past that can be analysed) but typically these things are only seen in fabricated academic exercises that have nothing to do with reality.
Note: I'm not saying fabricated academic exercises are bad - it's a useful tool to help learn basic theory before moving on to more advanced (and more realistic) theory.
Instead; for a non-preemptive scheduler, tasks don't try to tell the scheduler how much time they think they might take - the task can't know this information and the scheduler can't do anything with that information (e.g. a non-preemptive scheduler can't preempt the task when it takes longer than it guessed it might take). For a non-preemptive scheduler; a task simply runs until it calls a kernel function that waits for something (e.g. read() that waits for data from disk or network, sleep() that waits for time to pass, etc) and when that happens the kernel function that was called ends up telling the scheduler that the task is waiting and doesn't need the CPU, and the scheduler finds a different task to run that can use the CPU; and if the task never calls a kernel function that waits for something then the task runs "forever".
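A toy illustration of that "runs until it gives up the CPU itself" behaviour (entirely made up, not how any particular kernel implements it): each task is a function that only returns control when it "blocks" or finishes, and the scheduler can do nothing until then.

```c
#include <stdio.h>
#include <stdbool.h>

/* Returning from a task's step function is the stand-in for calling a
 * blocking kernel function such as read() or sleep(); the scheduler cannot
 * take the CPU back any other way. */
enum result { TASK_BLOCKED, TASK_DONE };

struct task {
    const char *name;
    enum result (*step)(void);
    bool finished;
};

static int a_steps;
static enum result work_a(void)
{
    if (++a_steps < 3) {
        puts("A: computing, then blocking on (pretend) I/O");
        return TASK_BLOCKED;      /* gives the CPU back voluntarily */
    }
    puts("A: done");
    return TASK_DONE;
}

static enum result work_b(void)
{
    puts("B: finished all of its work in one go");
    return TASK_DONE;
}

int main(void)
{
    struct task tasks[] = { { "A", work_a, false }, { "B", work_b, false } };
    int remaining = 2;

    /* Non-preemptive loop: control only returns here when a task blocks or
       finishes; a task that never does either would run forever. */
    while (remaining > 0) {
        for (unsigned i = 0; i < 2; i++) {
            if (tasks[i].finished)
                continue;
            if (tasks[i].step() == TASK_DONE) {
                tasks[i].finished = true;
                remaining--;
            }
        }
    }
    return 0;
}
```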
Of course "the task runs forever" can be bad (not just for malicious code that deliberately hogs all CPU time as a denial of service attack, but also for normal tasks that have bugs), which is why (almost?) nobody uses non-preemptive schedulers. For example; if one (lower priority) task is doing a lot of heavy processing (e.g. spending hours generating a photo-realistic picture using ray tracing techniques) and another (higher priority) task stops waiting (e.g. because it was waiting for the user to press a key and the user did press a key) then you want the higher priority task to preempt the lower priority task "immediately" (e.g. because most users don't like it when it takes hours for software to respond to their actions).

What is scheduler latency?

This seems to be a basic question, but I couldn't find an answer anywhere by googling it.
As far as I understand, scheduler latency is the time incurred in making the task runnable again. I mean, if there are 100 processes, namely 1, 2, etc., and they are executed, let's say, in order starting from 1, then the latency is the time until process 1 is executed again. That means the latency is the waiting time of the process as well as the time it spends in the runqueue ready to execute.
Or
have I misunderstood the whole point, and scheduler latency is nothing but the context switching time between processes?
Scheduling latency is the time that the system is unproductive because of scheduling tasks. It is system latency incurred because the system has to spend time scheduling.
Specifically it consists of 2 elements:
The delay between a task waking up and actually running (the 'context switching time')
Time spent making scheduler decisions (the actual job of the scheduler, which consumes resources that cannot be used by real tasks anymore)
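One way to get a feel for the first component is to ask for a very short sleep and measure how much longer it actually took; the overshoot is roughly timer granularity plus the delay between being woken (made runnable) and actually running. A POSIX sketch (my own, not from the answer):

```c
#include <stdio.h>
#include <time.h>

/* Nanoseconds between two timespecs. */
static long long ns_between(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct timespec req = { 0, 1000000 };   /* ask to sleep for 1 ms */

    for (int i = 0; i < 5; i++) {
        struct timespec before, after;
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);
        /* The overshoot beyond 1 ms is roughly timer granularity plus the
           delay between being marked runnable and actually running. */
        printf("asked for 1.000 ms, got %.3f ms\n",
               ns_between(before, after) / 1e6);
    }
    return 0;
}
```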

Types of Scheduling algorithms

I understand that CPU scheduling algorithms are classified into
Interactive - Round Robin, Priority scheduling
Batch Scheduling - FCFS, SJF
But I can't understand the reason behind the names Interactive and Batch Scheduling.
Why are algorithms like RR called interactive and those like FCFS called batch scheduling?
Thanks in advance...
The idea of Batch Scheduling is that there will be no change in the schedule during runtime: a process is scheduled to do an operation on data, and it runs until the process is finished. In 'interactive' scheduling, a new process could be launched while another process is running, and so time would be allocated for that process as well as the other. In batch scheduling the schedule is determined at the beginning of the operation.
Example of priority (interactive) scheduling:
Process A has a high priority, and process B has a low priority. Process A runs until it requires some input from the user. While A is waiting, the CPU gives some time to process B. Once the input for A has been gathered, process B is swapped out and process A is given the CPU, due to its higher priority.
Example of batch (FCFS) scheduling:
Process A and process B are processes to be scheduled. Process A is given to the CPU first, so B will not receive any time until A finishes running. Even if A pauses for user input, B will not run (and the CPU time while waiting for input is effectively wasted).
Of course, as with everything this low-level, it's not entirely that simple: to gain the illusion of multi-tasking, time is generally divided up between processes even when nothing is waiting for I/O. In priority scheduling, this may mean that more time slices are given to A than B while both are running so that A executes quicker. Both interactive and batch scheduling have their pros and cons: while interactive scheduling gives a quicker response time to the user and divides time up more 'fairly', an overhead is incurred due to how long a 'context switch' takes, which is the time taken for the processor to switch from working on process A to process B.
Interactive scheduling policies assign a time-slice to each process. Once the time-slice is over, the process is swapped out even if it has not yet terminated. It can also be said that scheduling of this kind is preemptive.
Batch scheduling policies, instead, are non-preemptive. Once a process is in the Running state, it will not change status until it terminates.
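To make the contrast concrete, here is a hedged sketch of the two kinds of scheduler loop described above; the task type and helper functions are invented and only declared so the fragment compiles:

```c
#include <stdbool.h>

struct task;
extern struct task *dequeue(void);               /* next task, NULL if none left */
extern void enqueue(struct task *t);
extern void run_to_completion(struct task *t);   /* batch: no preemption */
extern bool run_for_one_quantum(struct task *t); /* interactive: true if finished */

/* Batch (e.g. FCFS): each process keeps the CPU until it terminates. */
void batch_scheduler(void)
{
    struct task *t;
    while ((t = dequeue()) != NULL)
        run_to_completion(t);
}

/* Interactive (e.g. round robin): preempt at the end of every time slice
 * and move the unfinished process to the back of the queue. */
void interactive_scheduler(void)
{
    struct task *t;
    while ((t = dequeue()) != NULL) {
        if (!run_for_one_quantum(t))
            enqueue(t);   /* not finished: it will get another slice later */
    }
}
```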