Does First-Come, First-Served (FCFS) scheduling avoid deadlock?

Is it guaranteed that by using FCFS scheduling, the 'system' will not be in deadlock?
Thanks in advance!

The four conditions for a deadlock are:
Mutual exclusion: a resource can be held by only one process at a time; it cannot be shared. This holds irrespective of the scheduling algorithm.
Hold and wait: a process can wait for additional resources while holding on to one resource. This is possible with any scheduling algorithm.
No preemption: a resource cannot be forcibly taken away from the process holding it. FCFS is non-preemptive with respect to the CPU, and it does nothing to preempt resources either.
Circular wait: each process waits for a resource held by the next process in a circular chain. This, again, is independent of the scheduling algorithm.
Hence, FCFS does not guarantee that the system will be deadlock-free. If all four conditions hold simultaneously, a deadlock occurs.
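As a minimal sketch (not from the original answer; POSIX threads are used purely for illustration): two threads acquire two locks in opposite order, and no CPU-scheduling policy, FCFS included, prevents the resulting circular wait.

    /* Minimal illustration: two threads lock two mutexes in opposite order.
     * If each thread grabs its first lock before the other releases, all four
     * deadlock conditions hold, no matter which scheduler ordered the threads. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&res_a);   /* holds A ...                        */
        pthread_mutex_lock(&res_b);   /* ... and waits for B (hold and wait) */
        pthread_mutex_unlock(&res_b);
        pthread_mutex_unlock(&res_a);
        return NULL;
    }

    static void *worker2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&res_b);   /* holds B ...                        */
        pthread_mutex_lock(&res_a);   /* ... and waits for A (circular wait) */
        pthread_mutex_unlock(&res_a);
        pthread_mutex_unlock(&res_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker1, NULL);
        pthread_create(&t2, NULL, worker2, NULL);
        pthread_join(t1, NULL);       /* may never return if the deadlock hits */
        pthread_join(t2, NULL);
        puts("finished without deadlock (the interleaving was lucky)");
        return 0;
    }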

Deadlocks are caused by resource locking, not scheduling order. FCFS doesn’t guarantee that your threads will always grab resources in sequence, so the answer to your question is no.

Related

Deadlock situations

In a given set of processes, suppose some of them can execute while the rest cannot, because the resources they are requesting are held by other processes. Do we call such a situation a deadlock?
A deadlock situation occurs when none of the processes' requests can be fulfilled. Each process is in a circular wait, waiting for resources held by other processes.
The necessary conditions for deadlock are:
Mutual exclusion
Hold and wait
No pre-emption
Circular wait
Here, some processes can execute while the rest cannot; the blocked processes are waiting for resources, but those resources may still be released once the running processes finish, so there is not necessarily a circular wait.
Hence this situation cannot, by itself, clearly be called a deadlock.
You can go through the Operating System Concepts textbook by Abraham Silberschatz (Wiley), 'The Dinosaur Book'.

Conditions for a deadlock to happen

Deadlock-
A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
For a deadlock to happen, all four of these conditions must hold simultaneously:
Mutual exclusion
Hold and wait
No preemption
Circular wait
We apply a deadlock detection algorithm to check whether the system is in deadlock or not. But if any of the above criteria fails (for example, no preemption fails, so some resource is released), the system becomes deadlock-free. So what I think is: if the deadlock detection algorithm finds the state to be unsafe, and all four of the above criteria hold true simultaneously, then we can say the system is in deadlock.
An unsafe state may or may not lead to deadlock.
But an unsafe state in which all four conditions hold simultaneously must result in deadlock.
Am I thinking right?
I have another question: how can we say a deadlock has definitely occurred, when the next moment some process might release its resources and get rid of the deadlock?
Am I thinking right?
Yes, you are correct.
See this link to see why an unsafe state may not lead to deadlock.
I have another question: how can we say a deadlock has definitely occurred, when the next moment some process might release its resources and get rid of the deadlock?
Say a deadlock has occurred. All the processes causing the deadlock are waiting for some resource to be acquired. Because of "no preemption", no such process will be preempted and forced to release its resources. And because of the "hold and wait" property, a process that needs more resources to continue will not give up or release whatever it is holding now; it will wait until its required resources are available. Once there is a deadlock, nothing can happen (there cannot be any progress) until you break one of the above conditions. Breaking a condition allows some other process to meet its requirements, which ensures progress and completion.
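As a rough sketch of what a detection pass can look like (the single-request wait-for model and all names here are my own, not from the answer): build a wait-for graph and search for a cycle; a cycle among blocked processes is exactly the circular wait described above.

    /* Sketch of deadlock detection on a wait-for graph, assuming each process
     * has at most one outstanding request (one outgoing edge per node).
     * Process count and edges are made up for illustration. */
    #include <stdio.h>

    #define NPROC 4

    int main(void)
    {
        /* waits_for[i] = j means process i waits for a resource held by j;
         * -1 means the process is not blocked. Here P0 -> P1 -> P2 -> P0. */
        int waits_for[NPROC] = { 1, 2, 0, -1 };
        int color[NPROC] = { 0 };   /* 0 = unvisited, 1 = on current walk, 2 = done */

        for (int start = 0; start < NPROC; ++start) {
            int p = start;
            while (p != -1 && color[p] == 0) {
                color[p] = 1;                 /* mark node on the current walk */
                p = waits_for[p];
            }
            if (p != -1 && color[p] == 1)     /* walked back into the current path */
                printf("deadlock detected involving process %d\n", p);
            /* retire the nodes visited on this walk */
            for (int q = start; q != -1 && color[q] == 1; q = waits_for[q])
                color[q] = 2;
        }
        return 0;
    }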

semaphore priority inversion

Why do RTOSes not provide any mechanism to prevent priority inversion for semaphores, even though one exists for mutexes?
Do semaphores not need priority-inversion protection?
The same situation happens both on uC/OS and GreenHills RTOS.
Thanks in advance.
Priority inversion occurs when a low-priority task owns a semaphore, and a high-priority task is forced to wait on the semaphore until the low-priority task releases it. If, prior to releasing the semaphore, the low-priority task is preempted by one or more mid-priority tasks, then unbounded priority inversion has occurred, because the delay of the high-priority task is no longer predictable. This defeats Deadline Monotonic Analysis (DMA) because it is not possible to predict if the high-priority task will meet its deadline.
Sharing a critical resource between high- and low-priority tasks is not a desirable design practice. It is better to share a resource only among equal-priority tasks or to limit resource accesses to a single resource server task. Examples are a print server task and a file server task. We have long advocated this practice. However, with the layering of increasingly diverse and complicated middleware onto RTOSs, it is becoming impractical to enforce such simple strategies. Hence, in the interest of safety, it is best to implement some method of preventing unbounded priority inversion.
See the full article at http://www.smxrtos.com/articles/techppr/mutex.htm
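For mutexes, the usual remedy is priority inheritance (or a priority ceiling). A minimal sketch, assuming a POSIX target rather than uC/OS or GreenHills, of creating a priority-inheritance mutex:

    /* Sketch (POSIX, not uC/OS or GreenHills): create a mutex with the
     * priority-inheritance protocol, so a low-priority owner is boosted to
     * the priority of the highest-priority waiter, bounding the inversion. */
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_mutex_t lock;
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* PTHREAD_PRIO_INHERIT: the owner inherits the waiter's priority.
         * (PTHREAD_PRIO_PROTECT would use a priority ceiling instead.) */
        if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
            fprintf(stderr, "priority inheritance not supported on this target\n");
            return 1;
        }
        pthread_mutex_init(&lock, &attr);

        /* ... tasks sharing the resource would lock/unlock 'lock' as usual ... */
        pthread_mutex_lock(&lock);
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }

The usual explanation for why semaphores don't get the same treatment is that a counting semaphore has no notion of an owner: the kernel cannot tell which task to boost, because any task may legitimately post it. A mutex, by contrast, is released only by the task that locked it, so the kernel knows exactly whose priority to raise.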

Which process puts it in the waiting queue?

Assume we are using semaphores to provide mutual exclusion and one process is executing in its critical section. If another process then tries to enter the critical region, will it be put into the waiting queue?
My doubt is: which process puts this process in the waiting queue?
Thanks in advance,
In a typical operating system this is handled by the kernel, not by a process. The kernel keeps track of which critical regions exist and which processes are occupying them. The scheduler is also part of the kernel, so it is the scheduler that puts the process into a waiting state (or, to be more precise, a blocked state).
When a thread/process/task requests a mutual exclusion object, it makes a system call into the kernel, where mutual exclusion objects are handled. If the object is not available at that moment, the kernel puts the thread/process/task in the waiting/blocked queue and selects another one to run.
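As a concrete sketch (POSIX semaphores and the task names here are assumptions for illustration, not from the answer): the second task blocks inside sem_wait() and the kernel wakes it when the holder calls sem_post().

    /* Sketch with POSIX semaphores: a task that calls sem_wait() while the
     * semaphore is 0 is blocked by the kernel, not by another process, and
     * is woken when sem_post() releases the critical section. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t mutex;   /* binary semaphore guarding the critical section */

    static void *task(void *name)
    {
        sem_wait(&mutex);                 /* blocks here if another task is inside */
        printf("%s entered the critical section\n", (char *)name);
        sleep(1);                         /* pretend to do some work */
        printf("%s leaving the critical section\n", (char *)name);
        sem_post(&mutex);                 /* kernel wakes one blocked waiter, if any */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);           /* initial value 1 => mutual exclusion */
        pthread_create(&t1, NULL, task, "task 1");
        pthread_create(&t2, NULL, task, "task 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&mutex);
        return 0;
    }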

What is round-robin scheduling?

In a multitasking operating system context, sometimes you hear the term round-robin scheduling. What does it refer to?
What other kind of scheduling is there?
Round Robin Scheduling
If you are a host in a party of 100 guests, round-robin scheduling would mean that you spend 1 minute (a fixed amount) per guest. You go through each guest one-by-one, and after 100 minutes, you would have spent 1 minute with each guest. More on Wikipedia.
There are many other types of scheduling, such as priority-based (i.e. most important people first), first-come-first-served, earliest-deadline-first (i.e. person leaving earliest first), etc. You can start off by googling for scheduling algorithms or by checking out scheduling on Wikipedia.
Timeslicing is inherent to any round-robin scheduling system in practice, AFAIK.
I disagree with InSciTek Jeff's implication that the following is round-robin scheduling:
That is, each task at the same priority in the round-robin rotation can be allowed to run until it reaches a resource-blocking condition before yielding to the next task in the rotation.
I do not see how this could be considered round-robin. This is actually preemptive scheduling. However, it is possible to have a scheduling algorithm which has elements of both round-robin and preemptive scheduling, which VxWorks does if round-robin scheduling and preemption are both enabled (round-robin is disabled by default). The way to enable round-robin scheduling is to provide a non-zero value in kernelTimeSlice.
I do agree with this statement:
Therefore, while timeslicing based scheduling implies round-robin scheduling, round-robin scheduling does not require equal time based timeslicing.
You are right that it doesn't require equal time. Preemption can muck with that. And actually in VxWorks, if a task is preempted during round-robin scheduling, when the task gets control again it will execute for the rest of the time it was allocated.
Edit directed at InSciTek Jeff (I don't have comment privileges)
Yes, I was referring to task locking/interrupt disabling, although I obviously didn't express that very well. You preempted me (ha!) with your second comment. I hope to debate the more salient point, that you believe round-robin scheduling can exist without time slicing. Or did you just mean equal time based time slicing? I disagree with the former, but agree with the latter. I am eager to learn. Thanks.
Edit2 directed at Jeff:
Round-robin can exist without timeslicing. That is exactly what happens in VxWorks when kernelTimeSlice is disabled (zero).
I disagree with this statement. See section 2.2.3 of this document, under the heading Round-Robin Scheduling.
Round-robin scheduling uses time slicing to achieve fair allocation of the CPU to all tasks with the same priority. Each task, in a group of tasks with the same priority, executes for a defined interval or time slice. Round-robin scheduling is enabled by calling kernelTimeSlice( ), which takes a parameter for a time slice, or interval. [...] If round-robin scheduling is enabled, and preemption is enabled for the executing task, the system tick handler increments the task's time-slice count.
Timeslicing is inherent in round-robin scheduling. Otherwise you are relying on a task to give up CPU control, which round-robin scheduling is intended to solve.
The answers here and even the Wikipedia article describe round-robin scheduling as inherently including periodic timeslicing. While this is very common, I believe that round-robin scheduling and timeslicing are not exactly the same thing. Certainly, for timeslicing to make sense, round-robin scheduling is implied when rotating to each task; however, you can do round-robin scheduling without having timeslicing. That is, each task at the same priority in the round-robin rotation can be allowed to run until it reaches a resource-blocking condition, and only then does the next task in the rotation run. In other words, when equal-priority tasks exist, the rescheduling points are not time-preemptive.
The above idea is actually realized specifically in the case of Wind River's VxWorks kernel. Within their priority scheme, tasks of each priority run round robin but do not timeslice without specifically enabling that feature in the kernel. The reason for this flexibility is to avoid the overhead of timeslicing tasks that are already known to run into a block within a well bounded time.
Therefore, while timeslicing based scheduling implies round-robin scheduling, round-robin scheduling does not require equal time based timeslicing.
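A tiny sketch of that reading (entirely made up, not VxWorks code): the scheduler just rotates through equal-priority tasks, and each task keeps the CPU until it blocks or finishes; there is no timer-driven slice.

    /* Sketch of round-robin rotation without timeslicing: cycle through
     * equal-priority tasks, each keeping the CPU until it voluntarily
     * yields (e.g. blocks on a resource). Task bodies, the per-turn work
     * amounts, and the totals are made up for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NTASKS 3

    /* Runs one turn of a task until it "blocks"; returns false when done. */
    static bool task_step(int id, int *work_left)
    {
        int chunk = id + 1;                   /* unequal CPU time per turn */
        if (chunk > *work_left)
            chunk = *work_left;
        printf("task %d runs for %d units, then blocks\n", id, chunk);
        *work_left -= chunk;
        return *work_left > 0;                /* still has work after waking up? */
    }

    int main(void)
    {
        int work_left[NTASKS] = { 4, 6, 3 };  /* made-up total work per task */
        bool runnable[NTASKS] = { true, true, true };
        int remaining = NTASKS;

        /* Round-robin rotation: just take the next task in circular order. */
        for (int t = 0; remaining > 0; t = (t + 1) % NTASKS) {
            if (!runnable[t])
                continue;
            if (!task_step(t, &work_left[t])) {
                printf("task %d finished\n", t);
                runnable[t] = false;
                --remaining;
            }
        }
        return 0;
    }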
An opinion. It seems that we are intertwining two mechanisms into one. Assuming only the OP's original assertion "In a multitasking operating system context" then
1 - A round robin scheduler always schedules the next item in a circular queue.
2 - How the scheduler regains control to perform the scheduling is separate and unrelated.
I don't disagree that the most prevalent method for 2 is time-slicing / yield waiting for resource, but as has been noted there are others. If I am not mistaken the first Mac's didn't utilize time-slicing, they used voluntary yield / yield waiting for resource (20+ year old brain cells can be wrong sometimes;).
Round robin is a simple scheduling algorithm where time is divided evenly among jobs without priority.
For example, if you have 5 processes running, each process will be allowed to run for 1/5 of a unit of time before another process is allowed to run. Round robin is typically easy to implement in an OS.
Actually, you are confusing preemptive scheduling with round robin. In fact, RR is a form of preemptive scheduling.
Round-robin scheduling is based on time sharing using a quantum (the maximum time the CPU gives to any process in one go). There are multiple processes in a queue (each requiring a different amount of time to complete, a.k.a. its burst time), and the CPU has to serve them all, so it keeps switching between processes to give every process an equal share of time based on the quantum value. This type of scheduling is known as round-robin scheduling.
Check out this simple video to understand round-robin scheduling easily: https://www.youtube.com/watch?v=9hw-_qJ55K4
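To make the quantum idea concrete, here is a tiny simulation (the burst times and the quantum value are made up for illustration): each pass of the loop gives every unfinished process at most one quantum and prints when each one completes.

    /* Tiny round-robin simulation: made-up burst times and a quantum of
     * 2 time units. Each pass gives every unfinished process at most one
     * quantum, which is exactly the rotation described above. */
    #include <stdio.h>

    #define NPROC   3
    #define QUANTUM 2

    int main(void)
    {
        int burst[NPROC] = { 5, 3, 8 };   /* remaining CPU time per process */
        int finished = 0, clock = 0;

        while (finished < NPROC) {
            for (int p = 0; p < NPROC; ++p) {
                if (burst[p] == 0)
                    continue;                             /* already done */
                int slice = burst[p] < QUANTUM ? burst[p] : QUANTUM;
                clock += slice;                           /* run p for one slice */
                burst[p] -= slice;
                if (burst[p] == 0) {
                    printf("P%d completes at time %d\n", p, clock);
                    ++finished;
                }
            }
        }
        return 0;
    }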