What is the difference between preemptive and non-preemptive scheduling? - operating-system

I'm new to these scheduling terms, and I haven't yet become comfortable with identifying preemptive vs. non-preemptive scheduling.

Preemptive scheduling means that the scheduler (such as an OS kernel) can interrupt a running task at any time, schedule something else, and resume the task later.
Non-preemptive scheduling requires tasks to cooperate by yielding control back to the scheduler at reasonable intervals (even when they are not yet done with their work).
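As a toy illustration of the non-preemptive case, here is a minimal cooperative scheduler sketch in C. Everything in it (the task count, the work amounts, the `schedule` and `run_chunk` names) is invented for the example; the point is just that the scheduler regains control only because each task voluntarily returns.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal cooperative (non-preemptive) scheduling sketch: each task does
 * one unit of work and then returns, which is its "yield". The scheduler
 * regains control only because the tasks cooperate; a task that never
 * returned would hang the whole system. All names are illustrative. */

#define NTASKS 2

static int progress[NTASKS];               /* work units completed */
static const int goal[NTASKS] = {3, 5};    /* work units required */

static void run_chunk(int id) {
    progress[id]++;   /* do one unit of work, then return (yield) */
}

/* Round-robin over the tasks until all have finished; returns the
 * number of chunks dispatched. */
int schedule(void) {
    int dispatches = 0;
    bool any_left = true;
    while (any_left) {
        any_left = false;
        for (int id = 0; id < NTASKS; id++) {
            if (progress[id] < goal[id]) {
                run_chunk(id);   /* control comes back only on return */
                dispatches++;
                any_left = true;
            }
        }
    }
    return dispatches;
}
```

A preemptive scheduler, by contrast, would not rely on `run_chunk` returning: a timer interrupt would take control away in the middle of a task.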

Related

Starvation vs Convoy Effect

Is the only difference between starvation and the convoy effect that the convoy effect is mainly defined on FCFS scheduling algorithms, while starvation is defined on priority-based scheduling?
I researched both effects but couldn't find a comparison. This is based on operating-systems theory which I learned for my college degree.
Starvation and convoys can occur under both algorithms. The simplest, starvation, can be simulated by a task entering this loop (I hope it isn't UB):
while (1) {
}
In FCFS, this task will never surrender the CPU, so all tasks behind it will starve. In a priority-based system, the same task will starve every task of lower priority.
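The priority-based case can be sketched as a toy simulation (the task indices, tick count, and `pick`/`simulate` names are all made up for the illustration): as long as the spinning high-priority task stays runnable, a strict-priority pick never reaches the lower-priority task.

```c
#include <assert.h>

/* Toy strict-priority scheduler demonstrating starvation: task 0 models
 * the `while (1)` spinner, so it is runnable on every tick and task 1
 * is never scheduled. All names and numbers here are illustrative. */

#define NTASKS 2

static int runnable[NTASKS] = {1, 1};  /* both tasks always want the CPU */
static int runs[NTASKS];               /* how often each task was picked */

/* Strict priority: lower index = higher priority; pick first runnable. */
static int pick(void) {
    for (int id = 0; id < NTASKS; id++)
        if (runnable[id])
            return id;
    return -1;
}

void simulate(int ticks) {
    for (int t = 0; t < ticks; t++) {
        int id = pick();
        if (id >= 0)
            runs[id]++;   /* task 0 never blocks, so it wins every tick */
    }
}
```

After any number of ticks, every run went to task 0 and task 1 made no progress at all; that is starvation in its purest form.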
Convoys can be more generally recognized as a resource-contention problem: one task has the resource (the CPU), and other tasks have to wait until it is done with it. In a priority-based system, this manifests as priority inversion, where a high-priority task is blocked because it needs a resource owned by a lower-priority task. There are ways to mitigate this, including priority inheritance and priority-ceiling protocols. Absent these mechanisms, tasks contending for a resource will form a convoy much as in FCFS; unlike FCFS, tasks not contending for the resource are free to execute at will.
The aspirations of responsiveness, throughput and fairness are often at odds, which is partly why we don't have a true solution to scheduling problems.

Why is scheduling threads between CPU cores expensive?

Some articles refer to a technique called core affinity, which binds a thread to a core and is said to decrease the cost of scheduling threads between cores. Hence my question:
Why does the operating system take more time when scheduling threads between cores?
You're probably misinterpreting something you read. It's not the actual scheduling that's slow, it's that a task will run slower when it moves to a new core because private per-core caches will be cold on that new core.
(And worse than that, dirty lines left in the old core's caches require write-back before the new core can read them.)
In most OSes, it's not so much that a task is "scheduled to a core", as that the kernel running on each core grabs the highest-priority task that's currently runnable, subject to restrictions from the affinity mask. (The scheduler function on this core will only consider tasks whose affinity mask matches this core.)
There is no single-threaded master-control program that decides what each core should be doing; the scheduler in normal kernels is a cooperative multi-threaded algorithm.
It's mostly not the actual cost of CPU time in the kernel's scheduler function, it's that the task runs slower on a new core.
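On Linux, the affinity mask mentioned above can be set with `sched_setaffinity(2)`. A sketch, assuming a Linux system (the `pin_to_cpu0` helper name is invented for this example):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>

/* Sketch of core affinity on Linux: shrink the calling thread's affinity
 * mask to CPU 0 only, so the per-core scheduler function on every other
 * core will no longer consider this thread. Assumes a Linux system; the
 * pin_to_cpu0 helper name is invented for this example. */

int pin_to_cpu0(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);    /* allow CPU 0 only */
    /* pid 0 means "the calling thread" */
    return sched_setaffinity(0, sizeof(set), &set);
}
```

After this call, the kernel running on any core other than CPU 0 will skip this thread, and the thread's working set stays warm in CPU 0's private caches.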

Preemptive & Nonpreemptive Kernel vs. Preemptive & Nonpreemptive Scheduling

I'm struggling to understand the difference between preemptive and nonpreemptive kernels, and preemptive and nonpreemptive scheduling.
From Operating System Concepts (Ninth Edition), Silberschatz, Galvin and Gagne:
A preemptive kernel is where the kernel allows a process to be removed and replaced while it is running in kernel mode.
A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. - This seems to me to be the exact same description as that of the nonpreemptive kernel.
Preemptive scheduling occurs in these two situations (from the same book):
When a process switches from the running state to the ready state (for example, when an interrupt occurs)
When a process switches from the waiting state to the ready state (for example, at completion of I/O)
The book simply states that there is a choice in this scenario, but I'm not sure what the choice is. Is the choice whether the same process in the ready queue continues to run, or whether a different process from the ready queue is selected to run?
Basically, a clear clarification on these 4 terms is what I'm looking for.
Thank you!
The problem you face is that these terms have no standard meaning. I suspect that your book is using them from the point of view of some specific operating system (which one? I couldn't say). If you have searched the internet, you have certainly found conflicting explanations.
For example, Preemptive scheduling can mean:
Scheduling that will interrupt a running process that does not yield the CPU.
Scheduling that will interrupt a running process before its quantum has expired.
Your book apparently has yet another definition; I cannot tell its meaning from the excerpt. It is entirely possible that the book is just confusing on this point (as it apparently is on so many others). One problem is that process states are system dependent, so defining the term in terms of process states is quite confusing.
This part of its definition makes sense:
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
The preemptive part of the definition makes no sense.
In the case of the term preemptive kernel, that usage is pretty standard, and the description you give is fairly normal. That said, the book's statement should be a bit more refined, because every process has to be removed while in kernel mode. Normally, one would say something along the lines of "In a non-preemptive kernel, a process cannot be removed when it has entered kernel mode through an exception."
A preemptive kernel is essential for real-time processing.
So you ask:
This seems to me to be the exact same description as that of the nonpreemptive kernel.
You have four theoretical combinations:
Preemptive scheduling, preemptive kernel:
The operating system can forcibly switch processes at nearly any time.
Non-preemptive scheduling, preemptive kernel:
This combination does not exist.
Non-preemptive scheduling, non-preemptive kernel:
The process has to explicitly yield to allow the operating system to switch to another process.
Preemptive scheduling, non-preemptive kernel:
The operating system can forcibly switch processes except when the process is executing in kernel mode to process an exception (there may be circumstances where the process cannot be switched while handling an interrupt as well).

Round Robin scheduling and IO

I'm unsure how Round Robin scheduling works with I/O operations. I've learned that CPU-bound processes are favoured by Round Robin scheduling, but what happens if a process finishes its time slice early?
Say we neglect the dispatching overhead and a process finishes its time slice early. Will the scheduler schedule another process if it is CPU-bound? Or will the current process start its I/O operation and, since that isn't CPU-bound, immediately switch to another (CPU-bound) process afterwards? And if CPU-bound processes are favoured, will the scheduler run ALL CPU-bound processes until they are finished, and only afterwards schedule the I/O processes?
Please help me understand.
There are two distinct schedulers: the CPU (process/thread ...) scheduler, and the I/O scheduler(s).
CPU schedulers typically employ some hybrid algorithms, because they certainly do regularly encounter both pre-emption and processes which voluntarily give up part of their time-slice. They must service higher-priority work quickly, while not "starving" anyone. (A study of the current Linux scheduler is most interesting. There have been several.)
CPU schedulers identify processes as being either "primarily 'I/O-bound'" or "primarily 'CPU-bound'" at this particular time, knowing that their characteristics can and do change. If your process repeatedly consumes full time slices, it is seen as CPU-bound.
I/O schedulers seek to order and re-order the I/O request queues for maximum efficiency. For instance, to keep the read/write head of a physical disk-drive moving efficiently in a single direction. (The two components of disk-drive delay are "seek time" and "rotational latency," with "seek time" being by-far the worst of the two. Per contra, solid-state drives have very different timing.) I/O-schedulers also have to be aware of the channels (disk interface cards, cabling, etc.) that provide access to each device: they can't simply watch what any one drive is doing. As with the CPU-scheduler, requests must be efficiently handled but never "starved." Linux's I/O-schedulers are also readily available for your study.
"Pure round-robin," as a scheduling discipline, simply means that all requests have equal priority and will be serviced sequentially in the order that they were originally submitted. Very pretty birds though they are, you rarely encounter Pure Robins in real life.
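The question's scenario can be sketched as a toy round-robin time slice (the quantum, the `Proc` fields, and the numbers in the test are all invented for the illustration): a process that blocks for I/O gives back the remainder of its quantum immediately, and the dispatcher moves on to the next runnable process.

```c
#include <assert.h>

/* Toy round-robin slice: a process that blocks for I/O gives up the
 * rest of its quantum immediately, and the dispatcher simply moves on
 * to the next runnable process. The quantum and Proc fields are
 * invented for this illustration. */

#define QUANTUM 4

typedef struct {
    int remaining;   /* total CPU ticks of work left */
    int io_after;    /* blocks for I/O after this many ticks; 0 = pure CPU */
} Proc;

/* Run one time slice; returns the ticks actually used, which may be
 * less than QUANTUM if the process blocks early. In that case the rest
 * of the slice is simply handed to whoever is next in the ready queue. */
int run_slice(Proc *p) {
    int used = 0;
    while (used < QUANTUM && p->remaining > 0) {
        p->remaining--;
        used++;
        if (p->io_after && used == p->io_after)
            break;   /* blocked on I/O: yields the slice early */
    }
    return used;
}
```

A CPU-bound process (`io_after = 0`) burns the full quantum; an I/O-bound one returns after a tick or two, and the scheduler immediately dispatches the next process rather than idling through the unused part of the slice.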

Which is more efficient preemptive or nonpreemptive scheduler?

I am just learning about preemptive and nonpreemptive schedulers, so I was wondering: which is more efficient, a preemptive or a nonpreemptive scheduler? Or are they equally efficient? Or are they each specialized for one task and efficient in their own way?
Use a non-preemptive scheduler if you want I/O and inter-thread comms to be slower than Ruby running on an abacus.
Use a preemptive scheduler if you want to be saddled with locks, queues, mutexes and semaphores.
[I've also heard that there are positive characteristics too, but you'll have to Google for that, since googling your exact title results in: 'About 55,900 results']