Trouble understanding preemptive kernel - operating-system

How does a preemptive kernel lead to race conditions? If a process is preempted, it still isn't kicked out of its critical section, is it? From my understanding, a race condition is when several processes try to access and manipulate a resource concurrently. I have trouble grasping the concept.

A preemptive kernel can start and stop threads at any point. This means that threads that don't carefully coordinate their accesses through locks and critical sections can end up in race conditions.
The other form of multithreading is cooperative multithreading, where a thread can be stopped only at points where it explicitly offers to yield the processor. This helps prevent race conditions, because threads are not interrupted at random, unexpected points in their processing.
The downside of cooperative multithreading is that a thread written not to yield can hog the processor, which is why most modern operating systems use preemptive rather than cooperative multithreading.
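To make this concrete, here is a minimal sketch of the race, assuming C and POSIX threads (the answer names no particular language or API). Two threads increment a shared counter without a lock; a preemptive kernel can switch threads between the load, add, and store that make up counter++, so updates are lost:

    /* Two threads race on a shared counter. Compile with: gcc -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                  /* shared, unprotected */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            /* Racy read-modify-write. The fix is to bracket it:
             *   pthread_mutex_lock(&lock);
             *   counter++;
             *   pthread_mutex_unlock(&lock); */
            counter++;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);   /* almost always < 2000000 */
        return 0;
    }

Wrapping the increment in the commented lock/unlock pair turns it into a proper critical section and restores the expected total of 2000000.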

Related

What is the relation between threads and concurrency?

Concurrency means the ability to make progress on more than one task at a time.
But where does threading fit into it?
What's the relation between threading and concurrency?
What is the link between these two that will clear up the confusion?
Threads are one way to achieve concurrency. Concurrency can be achieved at many levels and in many ways. Here are some of them, from low level to high level, to give you a rough idea:
CPU pipelines: at the hardware level, multiple instructions execute in parallel, each at a different stage of the pipeline.
Duplicated ALU and FPU units: a processor contains several arithmetic-logic units and floating-point units that can execute instructions in parallel.
Vectorized instructions: single instructions that operate on multiple data items.
Hyperthreading/SMT: duplication of the execution context within a core, so one core can run two instruction streams.
Threads: streams of instructions that can be executed in parallel (see the sketch after this list).
Processes: you run both a browser and a word processor on your system.
Tasks: a higher abstraction over threads and asynchronous work.
Multiple computers: run your program on several machines.
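As a rough illustration of the threads level, here is a sketch, again assuming C and POSIX threads, in which four threads each sum one slice of an array concurrently; the names (sum_slice, struct slice) are made up for the example:

    /* Four threads each sum one slice of an array concurrently.
     * Compile with: gcc -pthread sum.c */
    #include <pthread.h>
    #include <stdio.h>

    #define N        4000000
    #define NTHREADS 4

    static int data[N];

    struct slice { int lo, hi; long sum; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        for (int i = s->lo; i < s->hi; i++)
            s->sum += data[i];            /* each thread owns its slice */
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1;

        pthread_t tid[NTHREADS];
        struct slice part[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            part[t].lo  = t * (N / NTHREADS);
            part[t].hi  = (t + 1) * (N / NTHREADS);
            part[t].sum = 0;
            pthread_create(&tid[t], NULL, sum_slice, &part[t]);
        }

        long total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);   /* no race: join before reading */
            total += part[t].sum;
        }
        printf("total = %ld\n", total);   /* prints 4000000 */
        return 0;
    }

Because each thread writes only to its own slice, no locking is needed; the main thread reads each partial sum only after joining that thread.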
I'm new here, but I don't really understand the downvotes. Could someone explain them to me? Is it just because this question has (likely) been answered before, or because it's considered obvious?
Now that that's out of the way...
Nothing executing on the CPU really belongs to a "process" as such; what the CPU actually runs are threads, scheduled and entirely managed by the kernel using a variety of algorithms to reach the expected performance for any given application. The CPU can only run n threads simultaneously, where n equals cores * hyperthreads. In most cases hyperthreads will be 2, so you double the core count to get the logical CPU count. What this really means is that instead of, say, 4 threads running at once, the CPU can support up to 8.
So how can the OS have hundreds of threads at any given time? The kernel uses a variety of checks, such as how frequently and how long a thread sleeps, to assign it a priority. Whenever the CPU triggers a timer interrupt, the OS swaps threads out appropriately if they've reached their allotted time slice, based on the priority the OS has determined for them.
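The "cores * hyperthreads" count mentioned above can be queried at run time. A small sketch, assuming a POSIX-like system where sysconf(_SC_NPROCESSORS_ONLN) is available (common on Linux and friends, though not strictly standard):

    /* Query the logical CPU count at run time. On a 4-core CPU with
     * 2-way SMT this typically reports 8. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long logical = sysconf(_SC_NPROCESSORS_ONLN);
        printf("logical CPUs: %ld\n", logical);
        return 0;
    }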

Preemptive & Nonpreemptive Kernel vs. Preemptive & Nonpreemptive Scheduling

I'm struggling to understand the difference between preemptive and nonpreemptive kernels, and between preemptive and nonpreemptive scheduling.
From Operating System Concepts (Ninth Edition), Silberschatz, Galvin and Gagne:
A preemptive kernel is where the kernel allows a process to be removed and replaced while it is running in kernel mode.
A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. - This seems to me to be the exact same description as that of the nonpreemptive kernel.
Preemptive scheduling occurs in these two situations (from the same book):
When a process switches from the running state to the ready state (for example, when an interrupt occurs)
When a process switches from the waiting state to the ready state (for example, at completion of I/O)
The book simply states that there is a choice in this scenario, but I'm not sure what the choice is. Is the choice whether the same process from the ready queue continues to run, or whether a different process from the ready queue is selected to run?
Basically, I'm looking for a clear clarification of these four terms.
Thank you!
The problem you face is that these terms have no standard meaning. I suspect that your book is using them from the point of view of some specific operating system (which one? I couldn't say). If you have searched the internet, you have certainly found conflicting explanations.
For example, preemptive scheduling can mean:
Scheduling that will interrupt a running process that does not yield the CPU.
Scheduling that will interrupt a running process before its quantum has expired.
Your book apparently has yet another definition. I cannot tell the meaning from the excerpt. It is entirely possible that the book is just confusing on this point (as it apparently is on so many others). Note also that process states are system dependent, so defining the term using process states is itself confusing.
This part of its definition makes sense:
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
The preemptive part of the definition makes no sense.
In the case of the term preemptive kernel, the usage is pretty standard, and the description of it you give is fairly normal. That said, the book's statement should be a bit more refined, because every process has to be removed while it is in kernel mode. Normally, one would say something along the lines of: "In a non-preemptive kernel, a process cannot be removed when it has entered kernel mode through an exception."
A preemptive kernel is essential for real-time processing.
So you ask:
This seems to me to be the exact same description as that of the nonpreemptive kernel.
You have four theoretical combinations:
Preemptive scheduling, preemptive kernel: the operating system can forcibly switch processes at nearly any time.
Non-preemptive scheduling, preemptive kernel: this combination does not exist.
Non-preemptive scheduling, non-preemptive kernel: the process has to explicitly yield to allow the operating system to switch to another process.
Preemptive scheduling, non-preemptive kernel: the operating system can forcibly switch processes except when the process is executing in kernel mode to process an exception (there may be circumstances where the process cannot be switched while handling an interrupt as well).
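The third combination, where a task runs until it explicitly yields, can be sketched in user space with the obsolescent but widely available <ucontext.h> API. This toy C example is illustrative only; the names (task, main_ctx) are invented, and nothing in it can be preempted by the other context:

    /* A task that runs only until it explicitly yields back to the
     * "scheduler" in main(). No preemption happens anywhere here. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    static void task(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("task: step %d, yielding\n", i);
            swapcontext(&task_ctx, &main_ctx);   /* explicit yield */
        }
    }

    int main(void)
    {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp   = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link          = &main_ctx;   /* where to go if task returns */
        makecontext(&task_ctx, task, 0);

        for (int i = 0; i < 3; i++) {
            printf("scheduler: resuming task\n");
            swapcontext(&main_ctx, &task_ctx);   /* runs until the task yields */
        }
        return 0;
    }

If task() were written without the swapcontext call, main() would never run again: exactly the "thread written not to yield can hog the processor" problem described earlier.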

How often should deadlock detection be done?

If deadlocks are unlikely to occur in the system, but processes frequently request resources, what are the main logical reasons for running the deadlock-detection algorithm only when needed, rather than in a continuous loop that tests whether the deadlock condition holds?
This is highly system dependent. Quality operating systems incorporate general-purpose locking mechanisms as system services that detect deadlocks. Deadlock checks are normally instigated when a process requests a lock, not through a continuous loop.
Many quick-and-dirty systems have no deadlock detection at all.
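As a sketch of the check-on-lock-request approach, here is a toy wait-for-graph test in C. The representation (a waits_for array where each process waits on at most one other, and the function name would_deadlock) is invented for the example and far simpler than anything a real kernel uses:

    /* When a process requests a lock, follow the chain of waiters
     * starting from the current holder; if the chain leads back to
     * the requester, granting the wait would close a cycle. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NPROC 4

    /* waits_for[p] == q means process p is blocked waiting on process q;
     * -1 means p is not waiting. With each process waiting on at most
     * one other, the cycle check is simple pointer chasing. */
    static int waits_for[NPROC] = { -1, -1, -1, -1 };

    static bool would_deadlock(int requester, int holder)
    {
        for (int p = holder; p != -1; p = waits_for[p])
            if (p == requester)
                return true;    /* requester already in holder's chain */
        return false;
    }

    int main(void)
    {
        waits_for[1] = 2;       /* P1 waits on P2 */
        waits_for[2] = 3;       /* P2 waits on P3 */

        /* P3 requests a lock held by P1: the chain is P1 -> P2 -> P3,
         * so letting P3 wait on P1 would complete a cycle. */
        printf("P3 waiting on P1 deadlocks: %s\n",
               would_deadlock(3, 1) ? "yes" : "no");   /* yes */
        printf("P0 waiting on P1 deadlocks: %s\n",
               would_deadlock(0, 1) ? "yes" : "no");   /* no  */
        return 0;
    }

Running the check only at lock-request time means its cost is paid once per contended request, instead of continuously burning CPU on a system where deadlocks are rare.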

What are some of the advantages and disadvantages of user mode and kernel mode

In an Operating System, threads are typically handled in user mode or kernel mode. What are some of the advantages and disadvantages of each?
User-mode threads are scheduled in user mode by something in the process, and the process itself is the only thing handled by the kernel scheduler.
That means your process gets a certain amount of grunt from the CPU, and you have to share it amongst all your user-mode threads.
Simple case: you have two processes, one with a single thread and one with a hundred threads.
With a simplistic kernel scheduling policy, the thread in the single-thread process gets 50% of the CPU, and each thread in the hundred-thread process gets 0.5%.
With kernel-mode threads, the kernel itself manages your threads and schedules them independently. Using the same simplistic scheduler, each thread would get just a touch under 1% of the CPU (101 threads sharing 100% of the CPU).
In an Operating System, threads are typically handled in user mode or kernel mode.
Typically threads are handled in kernel mode.
What are some of the advantages and disadvantages of each?
In theory, the advantage of handling threads in user mode is that it avoids the cost of switching to/from the kernel when a thread needs to wait for something (which can be relatively expensive, as it involves privilege-level switches). In practice this "advantage" often doesn't materialize, because the thread has to switch to the kernel anyway to ask for whatever it would be waiting on (e.g., switching to the kernel to ask it to read data from a file, and then returning to user space to block/wait, instead of just blocking/waiting in the kernel while you're already there). Mostly, it only helps when the kernel isn't involved at all, which really only happens when user-space threads communicate with, or share locks with, other threads in the same process.
The advantage of handling threads in the kernel is that the kernel can support thread priorities properly. For example, if you have two processes that each have a very high priority thread and a very low priority thread, the kernel can make sure CPU time is given to the high-priority threads when possible (including preempting low-priority threads when a high-priority thread unblocks), because it knows about all threads. User space can't do this: one process doesn't know about threads belonging to a different process, so user threading will get it wrong and ruin performance (one process giving CPU time to its own very low priority thread while a very high priority thread belonging to a different process needs the CPU and doesn't get it).
The other advantage of handling threads in the kernel is that (especially for systems with multiple CPUs) the kernel has access to better information and can make smarter scheduling decisions. This includes balancing the load (from any number of processes) across all CPUs while taking into account "CPU topology" (NUMA, SMT, etc.; possibly including heterogeneous CPUs, e.g. "big.LITTLE" arrangements), and making trade-offs between thread priorities, CPU temperatures, and power consumption (e.g., if one of the CPUs is getting too hot, reduce that CPU's clock speed to let it cool down and use it for low-priority threads, so that the performance of high-priority threads isn't affected).
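To illustrate the priority point, here is a sketch, assuming C, POSIX threads, and a system that supports SCHED_FIFO (setting real-time priorities typically requires elevated privileges), of handing thread priorities directly to the kernel scheduler:

    /* Hand real-time priorities straight to the kernel scheduler.
     * SCHED_FIFO usually requires elevated privileges; the failure
     * path is reported rather than hidden. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *work(void *arg)
    {
        printf("%s-priority thread running\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t hi, lo;
        pthread_create(&hi, NULL, work, "high");
        pthread_create(&lo, NULL, work, "low");

        struct sched_param hp = { .sched_priority = sched_get_priority_max(SCHED_FIFO) };
        struct sched_param lp = { .sched_priority = sched_get_priority_min(SCHED_FIFO) };

        /* Because the kernel schedules these threads itself, it can
         * preempt the low-priority one whenever the high-priority one
         * becomes runnable, even across different processes. */
        if (pthread_setschedparam(hi, SCHED_FIFO, &hp) != 0)
            printf("SCHED_FIFO not permitted (try with privileges)\n");
        pthread_setschedparam(lo, SCHED_FIFO, &lp);

        pthread_join(hi, NULL);
        pthread_join(lo, NULL);
        return 0;
    }

A purely user-mode scheduler has no equivalent of pthread_setschedparam's cross-process effect: it can only reorder threads within its own process.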

Is there a critical-section race condition in the case of non-preemptive kernels?

I am reading from the book "Operating System Concepts", which says that
a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time
I want to ask
Is this true only for a single processor? Because if it is a multiprocessor system, then it could have multiple processes running concurrently, which could be accessing the same kernel data.
Is there any need (if it is even possible) to use semaphores in a nonpreemptive kernel system?