Is it possible to have priority inversion with two processes? - operating-system

The usual example gives three processes, but shouldn't it be possible with only two processes?
Let us assume we have two processes, p3 and p1, where p3's priority is lower than p1's. p3 is currently in its critical section, using a resource that p1 will need. p1 arrives and preempts p3, but p3 was holding a resource that p1 needs to run.
Isn't this an example of priority inversion with 2 processes?

No, it's not. p1 will just block when it tries to acquire the resource, which will allow p3 to run again, finish using the resource, and relinquish it, thereby unblocking p1.
Wikipedia's example of a priority inversion is a good reference that describes why three tasks are required.
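To see the two-process case concretely, here is a small sketch using Python threads as stand-ins for processes (the thread names, sleep durations, and the `events` list are all illustrative; real OS priorities are not modelled). The point is only that p1 blocks on the resource, which lets p3 run to completion and release it:

```python
import threading
import time

resource = threading.Lock()
events = []

def p3_low():
    with resource:
        events.append("p3 enters critical section")
        time.sleep(0.1)          # p3 is "preempted" while holding the resource
        events.append("p3 releases resource")

def p1_high():
    time.sleep(0.05)             # p1 arrives while p3 still holds the resource
    events.append("p1 blocks on resource")
    with resource:               # blocking here is what lets p3 finish
        events.append("p1 acquires resource")

t3 = threading.Thread(target=p3_low)
t1 = threading.Thread(target=p1_high)
t3.start(); t1.start()
t3.join(); t1.join()
print(events)
```

There is no unbounded inversion here: p1's block directly hands the CPU back to p3, which finishes and unblocks p1. A third, medium-priority process is needed to keep p3 from running while p1 waits.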


A Semaphore example

Suppose we have two semaphores S and Q, both initialized to 1:

    P0              P1
    Wait(S)         Wait(S)
    Wait(Q)         Wait(Q)
    …               …
    Signal(S)       Signal(Q)
    Signal(S)       Signal(S)
What are the unwanted situation(s) that can happen here? Also, what is the effect of the second Signal(S) call made by process P0 on these situations?
A lot of this depends on the implementation of the semaphore. Is it a counting semaphore or binary? If counting, does it have a maximum or is it unlimited?
A bad situation that can occur is if P0 executes first, Q is acquired but never released, so P1 will wait on Q forever.
Another bad situation can occur if P0 executes first and P1 executes between the two Signal(S) calls in P0 (ignoring the Wait(Q) issue for now). P1 is released and assumes it has exclusive access to S, but then P0 executes the second Signal(S), breaking the assumption of mutual exclusion.
The effect of the second Signal(S) depends on the semaphore. If it's binary, then it will be re-signaled if it was acquired between the first and second Signal(S) calls. If nothing was acquired, it will do nothing.
If it's a counting semaphore, then its counter will be incremented, or any additional waiting threads will be allowed to proceed. If it has a maximum value, then this value won't be exceeded.
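The first bad situation can be sketched with Python's counting `threading.Semaphore` (the timeout exists only so the demo terminates; a real P1 would block on Q forever):

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)

def p0():
    S.acquire()
    Q.acquire()
    # ... critical section ...
    S.release()
    S.release()      # second Signal(S); note Q is never released

def p1(result):
    S.acquire()
    # P1 waits here: Q was acquired by P0 and never signaled.
    got_q = Q.acquire(timeout=1.0)   # timeout only so the demo terminates
    result.append(got_q)
    if got_q:
        Q.release()
    S.release()

result = []
t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1, args=(result,))
t0.start(); t0.join()    # run P0 to completion first
t1.start(); t1.join()
print(result[0])         # False: P1 never obtains Q
```

Note that because Python's `Semaphore` is an unbounded counting semaphore, the second `S.release()` simply bumps the counter to 2; a `BoundedSemaphore` would instead raise an error, matching the binary-semaphore discussion above.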

Resource Constrained Project Scheduling such that tasks are scheduled based on highest priority

This is regarding a Resource Constrained Project Scheduling Problem (RCPSP). This involves scheduling certain tasks in time windows on machines subject to availability of manpower. This is set up in the form of an Integer Program. I'm using a uniform discrete time representation.
The decision variables are x_it: x_it = 1 if activity i is scheduled to start at a discrete time point t.
Every task has a priority associated with it due to external reasons. To illustrate the goal, consider 3 tasks - p1,p2,p3 with priorities 3,3,4. (two priority levels - 3,4) The requirement is this - if sufficient manpower is available to schedule p1 & p2 or p3 alone, p3 must be chosen even though p1+p2 > p3. I'm looking for a way to implement this logic using decision variables x_it.
I've tried implementing my requirement in the following manner: assign a new priority (P) to each task: P1 = 3, P2 = 3, P3 = 7. Essentially this involves scaling each priority level so that no combination of lower-priority tasks can outweigh it, and setting the objective function to "maximize P_i*x_it".
The problem with this approach is that while scaling for a large set of tasks (~300 tasks) and multiple priority levels (20 levels), the new priority values quickly become huge numbers (~10^17).
Is there a more robust way to implement this requirement within the Integer Programming paradigm?
One way would be to solve lexicographically, one priority level at a time:
1. Solve for jobs with the highest priority (say priority 1). Let the number of jobs scheduled be n1.
2. Add the constraint: number of scheduled jobs with priority 1 = n1.
3. Solve for jobs with priorities 1 and 2. Let the number of scheduled jobs with priority 2 be n2.
4. Add the constraint: number of scheduled jobs with priority 2 = n2.
5. Continue down the remaining priority levels.
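The lexicographic idea can be illustrated without any particular MIP solver. In the toy sketch below (the task names, manpower values, and brute-force search are invented for illustration, not a real RCPSP model), we pick, among feasible subsets, the one that schedules the most tasks at the highest priority level, then the next level, and so on. With p1, p2 at priority 3, p3 at priority 4, and capacity for only one of the two options, p3 wins even though p1 + p2 would schedule more tasks:

```python
from itertools import combinations

# Tasks: (name, priority, manpower). Capacity is the total manpower available.
tasks = [("p1", 3, 1), ("p2", 3, 1), ("p3", 4, 2)]
capacity = 2

def feasible(subset):
    return sum(m for _, _, m in subset) <= capacity

def lex_key(subset):
    # Count scheduled tasks per priority level, highest level first, so that
    # comparison is lexicographic: more priority-4 tasks always beats any
    # number of priority-3 tasks.
    levels = sorted({p for _, p, _ in tasks}, reverse=True)
    return tuple(sum(1 for _, p, _ in subset if p == lvl) for lvl in levels)

best = max(
    (set(c) for r in range(len(tasks) + 1) for c in combinations(tasks, r)
     if feasible(c)),
    key=lex_key,
)
print(sorted(name for name, _, _ in best))   # ['p3']
```

In a real MIP this comparison is what the sequential solve-and-fix scheme above achieves without the huge scaled weights: no single objective ever needs coefficients like 10^17.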

Assumptions taken in Rate Monotonic Scheduling Algorithm?

I'm taking a Real-Time Systems course, and my class is stuck on some assumptions in Section 4 of the Liu and Layland paper on Rate-Monotonic Scheduling that we cannot fully understand:
If floor(T2/T1) is the number of times that Task1 interferes with Task2, why is the function applied to T2/T1 floor and not ceil?
Also, there are these equations (shown as an image in the original post; in the paper's notation they are C1 + C2 <= T1 (1), and floor(T2/T1)*C1 + C2 <= floor(T2/T1)*T1 (2)):
According to the paper, and as we can clearly see in the image, equation (1) is a necessary condition but not a sufficient one, while equation (2) is a sufficient condition. This makes sense to me, but why do the authors then state this conclusion:
In other words, whenever the T1 < T2 and C1,C2 are such that the task scheduling is feasible with Task2 at higher priority than Task1, it is also feasible with Task1 at higher priority than Task2 (but opposite is not true)[...] Specifically, tasks with higher request rates will have higher priorities.
If the second equation, where Task2 has the highest priority, is a sufficient condition, why can we assume that the task scheduling would also be feasible if Task1 had the highest priority instead of Task2?
I hope I explained myself well; please feel free to tell me if I've misunderstood the article's statements.
EDIT:
As requested, here is a short explanation of the terms used in the article and in this question.
In the article, T1 refers to the Period (and Deadline) of Task1,
and T2 to the Period (and Deadline) of Task2.
C1 and C2 refer to the run-times of Task1 and Task2, respectively.
First of all, it's too bad you didn't include what the T's and C's mean. I had to read the article, but it was an interesting read, so thanks.
It's pretty straightforward. The first equation bounds the demand within the period T2: the time needed to execute task 2 (C2), plus C1 multiplied by the number of times task 1 is requested during T2 and can fully execute (hence floor(T2/T1)), must fit. If task 1 is requested, it executes no matter what, since in this case it is the highest-priority task; task 2 cannot execute while task 1 is executing.
Consider the equation with concrete values; let's use the values suggested in the article: T2 = 5, T1 = 2, C1 = C2 = 1. floor(T2/T1) gives the number of requests of task 1 that occur, and can fully complete, during task 2's period: floor(5/2) = 2. If it were ceiling(T2/T1), the result would be 3, which is obviously wrong: the third request does start executing (as depicted in Figure 2), but its period does not end within T2. To see the difference more clearly, consider the same case but extend task 1's execution time C1 to 1.5. Below is the timeline for such a system, which is feasible and fulfills the equation. If we used ceiling(T2/T1), the equation would not be fulfilled, yet you can clearly see below that the system is fine. The key is to count not the number of times the higher-priority task is requested, but the number of times its full period fits in the period of the lower-priority task.
Tasks timeline. Image made using Excel. One "column" is 0.5 unit of time
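You can also convince yourself numerically by simulating the C1 = 1.5 example over the hyperperiod LCM(T1, T2) = 10 (this sketch is mine, not from the paper; time advances in 0.5-unit ticks, matching the figure's columns):

```python
from math import ceil, floor

# Two-task preemptive fixed-priority simulation:
# T1 = 2, C1 = 1.5 (task 1, higher priority); T2 = 5, C2 = 1.
T1, C1 = 2.0, 1.5
T2, C2 = 5.0, 1.0
dt, hyperperiod = 0.5, 10.0      # LCM(2, 5) = 10

rem1 = rem2 = 0.0
feasible = True
t = 0.0
while t < hyperperiod:
    if t % T1 == 0:              # new task-1 request
        if rem1 > 0:             # previous request unfinished: deadline missed
            feasible = False
        rem1 = C1
    if t % T2 == 0:              # new task-2 request
        if rem2 > 0:
            feasible = False
        rem2 = C2
    if rem1 > 0:                 # task 1 always preempts task 2
        rem1 -= dt
    elif rem2 > 0:
        rem2 -= dt
    t += dt
feasible = feasible and rem1 == 0 and rem2 == 0

print(feasible)                          # True: the schedule is feasible
print(floor(T2 / T1) * C1 + C2)          # 4.0, within floor(T2/T1) * T1 = 4.0
print(ceil(T2 / T1) * C1 + C2)           # 5.5: the ceiling version would fail
```

The simulation confirms the answer above: the system meets all deadlines, the floor-based bound holds (4.0), and the ceiling-based bound would wrongly declare it infeasible (5.5).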
That's the first part of my answer; I need some more time to address the other part of your question and will post an update soon.
Anyway, thanks for linking to an interesting article.

Shortest Remaining Time First Query

If there are two processes with the following data, what should the Gantt chart be (SRTF scheduling)?

    Process  Arrival  Burst
    P1       0        17
    P2       1        16
So will process P1 be completed first and then P2 will start executing, or will P1 have to wait for 16 milliseconds?
I feel the conflict can be resolved either by choosing the process that arrived earlier or by choosing the process with the longest burst. In this case, either approach completes P1 first.
It's going to choose P1 because at time 0, P2 didn't exist yet.
P1 has arrival time 0, so it starts first. At the next step their remaining times are equal, but since the processor is already working on P1, it will keep working on it until interruption or termination.
In this case the scheduler sees P2 at time 1 and compares remaining times. As both remaining times are the same, it puts the new process, P2, in the queue for the next execution (after P1's completion).
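The answers above can be checked with a short simulation (my own sketch; the tie-breaking rule of keeping the currently running process is encoded in the sort key):

```python
# Minimal SRTF (preemptive shortest-remaining-time-first) simulation.
processes = {"P1": {"arrival": 0, "burst": 17},
             "P2": {"arrival": 1, "burst": 16}}
remaining = {p: info["burst"] for p, info in processes.items()}
t, running, gantt = 0, None, []

while any(r > 0 for r in remaining.values()):
    ready = [p for p, info in processes.items()
             if info["arrival"] <= t and remaining[p] > 0]
    # Pick the shortest remaining time; on a tie, prefer the running process.
    best = min(ready, key=lambda p: (remaining[p], p != running))
    if best != running:
        gantt.append((t, best))     # context switch: record (time, process)
        running = best
    remaining[running] -= 1
    t += 1

print(gantt)   # [(0, 'P1'), (17, 'P2')]: P1 runs to completion, then P2
```

At t = 1 both processes have 16 units remaining, so the tie-break keeps P1 running; the resulting Gantt chart is P1 from 0 to 17, then P2 from 17 to 33.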

Existence of a 0- and 1-valent configurations in the proof of FLP impossibility result

In the known paper Impossibility of Distributed Consensus with one Faulty Process (JACM85), FLP (Fisher, Lynch and Paterson) proved the surprising result that no completely asynchronous consensus protocol can tolerate even a single unannounced process death.
In Lemma 3, after showing that D contains both 0-valent and 1-valent configurations, it says:
Call two configurations neighbors if one results from the other in a single step. By an easy induction, there exist neighbors C₀, C₁ ∈ C such that Dᵢ = e(Cᵢ) is i-valent, i = 0, 1.
I can follow the whole proof except when they claim the existence of such C₀ and C₁. Could you please give me some hints?
D (the set of possible configurations after applying e to elements of C) contains both 0-valent and 1-valent configurations (and is assumed to contain no bivalent configurations).
That is, e maps every element in C to either a 0-valent or a 1-valent configuration. By the definition of C, there is a root element connected to every other element by a series of "neighbour" relationships, so there must be a boundary point: an element of C that leads to a 0-valent configuration after e is a neighbour of an element that leads to a 1-valent configuration after e.
I once went down the path of reading all these papers, only to discover it's a complete waste of time.
The result is not surprising at all.
The paper you mention, "Impossibility of Distributed Consensus with One Faulty Process", is a long list of complex mathematical proofs that simply equate to:
1) Consensus is a deterministic state.
2) One (or more) faulty systems within an environment make that environment non-deterministic.
3) No deterministic state, action, or outcome can ever be reached within a non-deterministic environment.
The end. No further thought is required.
This is how it works in the real world outside of academia.
If you wish for agents to reach consensus, then synchronous (timing-model) approximation constructs have to be added to make the environment deterministic within a given set of constraints: for example, simple constructs like timeouts, Ack/Nack, handshakes, and witnesses, or far more complex ones.
The closer you wish to get to a synchronous deterministic model, the more complex the constructs become; a hypothetical fully synchronous model would have infinitely complex constructs. Bear in mind also that a fully deterministic synchronous model can never be achieved in a non-trivial distributed system, because in any non-trivial, dynamic, multivariate system with a variable initial state there exists an infinite number of possible states, actions, and outcomes at any point in time (cf. chaos theory).
Consider the complexity of a construct for detecting dropped TCP packets caused by buffer-overflow errors in a router at hop number 21, and then the complexity of detecting the same buffer-overflow error dropping the detection signal from the construct itself.
Define a mapping f such that f(C) = 0 if e(C) is 0-valent, and f(C) = 1 if e(C) is 1-valent.
Since D is assumed to contain no bivalent configurations, e(C) is never bivalent, so f(C) is well defined and can only be 0 or 1.
Arrange the configurations accessible from the initial bivalent configuration in a tree. There must be two neighbours C0, C1 in the tree with f(C0) != f(C1); otherwise all f(C) would be equal, which would mean D contains only 0-valent or only 1-valent configurations, contradicting the fact that it contains both.
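The last step is just a discrete intermediate-value argument, which can be sketched in a few lines (the path and its f-values below are invented for illustration): walking along any chain of neighbouring configurations whose f-values start at 0 and end at 1, some adjacent pair must differ.

```python
def find_boundary(values):
    """values[i] = f(C_i) along a path of neighbouring configurations.

    Returns the first index i with values[i] != values[i+1]; such an i must
    exist whenever the endpoints differ.
    """
    for i in range(len(values) - 1):
        if values[i] != values[i + 1]:
            return i
    raise ValueError("no boundary: all values along the path are equal")

path = [0, 0, 0, 1, 1]      # hypothetical f-values along a neighbour path in C
i = find_boundary(path)
print(i)                    # 2: C_2 and C_3 are the neighbours sought
```

The pair (C_i, C_{i+1}) found this way is exactly the neighbouring pair (C0, C1) whose e-images D0 = e(C0) and D1 = e(C1) are 0-valent and 1-valent, as claimed in Lemma 3.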