Which is more efficient: a preemptive or non-preemptive scheduler? - operating-system

I am just learning about preemptive and non-preemptive schedulers, so I was wondering: which is more efficient, a preemptive or a non-preemptive scheduler? Are they equally efficient? Or are they each specialized for particular tasks and efficient in their own way?

Use a non-preemptive scheduler if you want I/O and inter-thread comms to be slower than Ruby running on an abacus.
Use a preemptive scheduler if you want to be saddled with locks, queues, mutexes and semaphores.
[I've also heard that there are positive characteristics too, but you'll have to Google for those, since googling your exact title results in: 'About 55,900 results']

Related

What is the difference between preemptive and non-preemptive scheduling?

I'm new to these scheduling terms, and I haven't yet become comfortable with identifying preemptive versus non-preemptive scheduling.
Preemptive scheduling means that the scheduler (like an OS kernel) can interrupt the running tasks at any time, schedule something else and resume them later.
Non-preemptive scheduling requires the tasks to cooperate by yielding control back to the scheduler in reasonable intervals (even when they are not done with their work yet).
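To make the difference concrete, here is a minimal sketch in C (with hypothetical task names) of a non-preemptive round-robin loop: the scheduler only regains control when a task voluntarily returns, so a task that never returns would hang everything.

#include <stdio.h>

/* A task is a function that does one small step of work and then
   returns; returning is how it "yields" to the scheduler.
   It returns 0 when it has no more work to do. */
typedef int (*task_fn)(void);

static int counter_a = 0, counter_b = 0;

static int task_a(void) { counter_a++; return counter_a < 3; }
static int task_b(void) { counter_b++; return counter_b < 5; }

int main(void) {
    task_fn tasks[] = { task_a, task_b };
    int live[] = { 1, 1 };
    int remaining = 2;

    /* Non-preemptive round-robin: this loop only regains control when
       a task returns. A task stuck in while(1){} would hang the whole
       scheduler, because nothing can take the CPU away from it. */
    while (remaining > 0) {
        for (int i = 0; i < 2; i++) {
            if (live[i] && !tasks[i]()) {
                live[i] = 0;
                remaining--;
            }
        }
    }
    printf("task_a ran %d steps, task_b ran %d steps\n", counter_a, counter_b);
    return 0;
}

A preemptive scheduler removes that dependence on good behavior: a timer interrupt lets the kernel take the CPU back at any time.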

Starvation vs Convoy Effect

Is the only difference between starvation and the convoy effect that the convoy effect is mainly defined for FCFS scheduling algorithms while starvation applies to priority-based scheduling?
I researched both effects but couldn't find a comparison. This is based on operating systems theory, which I learned for my college degree.
Starvation and convoys can occur under both algorithms. The simplest, starvation, can be simulated by a task entering this loop (I hope it isn't undefined behavior):
while (1) {
    /* busy-wait forever: never blocks, never yields the CPU */
}
In FCFS, this task will never surrender the CPU, so every task behind it will starve. In a priority-based system, this same task will starve every task of a lower priority.
Convoys are more generally a resource-contention problem: one task holds a resource (the CPU), and other tasks have to wait until it is done with it. In a priority-based system this manifests as priority inversion, where a high-priority task is blocked because it needs a resource owned by a lower-priority task. There are ways to mitigate this, including priority inheritance and priority-ceiling protocols. Absent these mechanisms, tasks contending for a resource form a convoy much as in FCFS; unlike FCFS, tasks not contending for the resource are free to execute at will.
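To sketch the mitigation mentioned above: POSIX threads expose priority inheritance as an optional mutex protocol. This is an illustration, not a full program; it assumes a platform that supports PTHREAD_PRIO_INHERIT (compile with -pthread).

#include <pthread.h>
#include <stdio.h>

/* With the priority-inheritance protocol, a low-priority thread that
   holds this mutex is temporarily boosted to the priority of the
   highest-priority thread blocked on it, bounding priority inversion. */
int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;

    pthread_mutexattr_init(&attr);
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
        fprintf(stderr, "priority inheritance not supported here\n");
        return 1;
    }
    pthread_mutex_init(&lock, &attr);

    /* ... threads of different priorities would lock/unlock here ... */

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}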
The aspirations of responsiveness, throughput and fairness are often at odds, which is partly why we don't have a true solution to scheduling problems.

Can single-processor systems handle multi-level queue scheduling?

I know that in asymmetric multiprocessing one processor can make all the scheduling decisions whilst the others execute user code only. But is it possible for a single-processor system to allow for multi-level queue scheduling? And why?
Certainly a single processor system can use multi-level queue scheduling (MLQS). The MLQS algorithm is used to decide which process to run next when a processor becomes available. The algorithm doesn't require that there be more than one processor in the system. As a matter of fact, the algorithm is most efficient if there is only one processor. In a multi-processor system the data structure would need some sort of locking to prevent it from being corrupted.
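To illustrate, here is a hypothetical single-processor sketch in C of the MLQS "pick next" decision: several ready queues, and the highest-priority non-empty queue wins. No locking appears, because on one processor only one scheduling context ever touches the structure.

#include <stdio.h>

#define LEVELS 3   /* number of queues; 0 is the highest priority */
#define QCAP   8   /* capacity of each ring buffer */

typedef struct {
    int pids[QCAP];
    int head, tail;   /* simple ring buffer per level */
} queue_t;

static queue_t ready[LEVELS];

static void enqueue(int level, int pid) {
    ready[level].pids[ready[level].tail++ % QCAP] = pid;
}

/* Scan from the highest-priority queue down; first non-empty wins. */
static int pick_next(void) {
    for (int l = 0; l < LEVELS; l++)
        if (ready[l].head != ready[l].tail)
            return ready[l].pids[ready[l].head++ % QCAP];
    return -1;   /* nothing runnable: the CPU would idle */
}

int main(void) {
    enqueue(2, 301);   /* batch job, lowest queue */
    enqueue(0, 100);   /* interactive job, highest queue */
    printf("next: %d\n", pick_next());   /* 100 runs first */
    printf("next: %d\n", pick_next());   /* then 301 */
    return 0;
}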

Trouble understanding preemptive kernel

How does a preemptive kernel lead to race conditions? If a process is preempted, i.e. kicked out in the middle of its critical section, what happens? From my understanding, a race condition is when several processes try to access and manipulate resources concurrently, right? I have trouble grasping the concept.
A preemptive kernel can start and stop threads at any point. This means that threads that don't carefully coordinate their accesses through locks and critical sections end up in race conditions.
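A minimal sketch of such a race, using POSIX threads (compile with -pthread): two threads increment a shared counter, and because counter++ is a read-modify-write sequence, a preemption between the read and the write loses an update.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* pthread_mutex_lock(&lock);   -- uncomment to fix the race */
        counter++;                      /* racy read-modify-write */
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; without the lock the result is usually less,
       because updates from one thread overwrite the other's. */
    printf("counter = %ld\n", counter);
    return 0;
}

Uncommenting the mutex calls turns the increment into a critical section, which is exactly the coordination described above.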
The other form of multithreading is cooperative multithreading, where threads can be stopped only at points where they explicitly offer to yield the processor. This helps prevent race conditions because threads are not interrupted at random unexpected points in their processing.
The downside of cooperative multithreading is that a thread written not to yield can hog the processor, and this is why most modern operating systems use preemptive multithreading rather than cooperative multithreading.

Application-level scheduling

As far as I know, Windows uses a round-robin scheduler that distributes time slices to each runnable thread.
This means that if an application/process has multiple threads, it gets a larger share of the computational resources than other applications with fewer threads.
Now one could imagine an operating system scheduler that assigns an equal share of the computational resources to each application, with that share then distributed among all the threads of that application. The result would be that no application could affect other applications just because it has more threads.
Now my questions:
What is such scheduling called? I need a term so I can search for research papers regarding such scheduling.
Do operating systems exist which use such scheduling?
I think it's some variation of "fair" scheduling; the usual term in the literature is "fair-share scheduling".
I expect that you will need to use synonyms for "application"; for example, they may be called "tasks" or "processes" instead.
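To see why thread count stops mattering under such a scheme, here is a small arithmetic sketch in C (the applications and thread counts are made up): each application receives 1/N of the CPU, and that share is then split evenly among its threads.

#include <stdio.h>

int main(void) {
    int threads_per_app[] = { 1, 4, 16 };   /* three hypothetical apps */
    int num_apps = 3;

    for (int a = 0; a < num_apps; a++) {
        double app_share    = 1.0 / num_apps;                 /* equal per app */
        double thread_share = app_share / threads_per_app[a]; /* split inside */
        printf("app %d: %2d threads, app share %.3f, per-thread share %.4f\n",
               a, threads_per_app[a], app_share, thread_share);
    }
    /* Every app gets 0.333 of the CPU no matter how many threads it
       spawns; more threads only make each of its own threads slower. */
    return 0;
}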