Pre-emptive scheduling algorithm - operating-system

Can a first come first serve algorithm with priority levels be described as a pre-emptive scheduling algorithm?

Traditionally, the first come first serve (FCFS) algorithm was used in batch scheduling.
In most cases there is no pure form of "preemptive scheduling"; rather, preemption is combined with other policies such as round robin, shortest job first, etc. So yes, there may be implementations where the first come first serve algorithm is used along with preemptive scheduling.

Absolutely not. The original First Come First Served is a non-preemptive scheduling strategy. I don't know whether there is any alternative/revised version of this algorithm that can be implemented as a preemptive FCFS. You can find this in "Operating System Concepts" by Abraham Silberschatz et al.

FCFS doesn't consider priorities. If you need to consider priorities, then you have to go with Priority Scheduling, which is an extended version of FCFS. When a newly arrived higher-priority process is allowed to displace the running one, it becomes a preemptive scheduling method.
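To make the distinction concrete, here is a minimal sketch (my own illustration, not taken from Silberschatz) of the decision a preemptive priority scheduler makes; the struct and field names are hypothetical. Note how FCFS survives inside it as the tie-breaker among equal priorities.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical process descriptor; field names are illustrative only. */
struct proc {
    int priority;       /* larger value = higher priority (an assumption) */
    long arrival_time;  /* used to break ties FCFS-style */
};

/* Preemptive rule: a newly ready process displaces the running one
 * only if it has strictly higher priority. With equal priorities the
 * running process keeps the CPU, so equal-priority work runs FCFS. */
static bool should_preempt(const struct proc *running, const struct proc *arrived)
{
    return arrived->priority > running->priority;
}

/* Pick the next process: highest priority first, FCFS among equals. */
static struct proc *pick_next(struct proc *ready[], size_t n)
{
    struct proc *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (best == NULL ||
            ready[i]->priority > best->priority ||
            (ready[i]->priority == best->priority &&
             ready[i]->arrival_time < best->arrival_time))
            best = ready[i];
    }
    return best;
}

int main(void)
{
    struct proc a = { .priority = 1, .arrival_time = 0 };  /* running  */
    struct proc b = { .priority = 3, .arrival_time = 5 };  /* new arrival */
    struct proc *ready[] = { &a, &b };

    if (should_preempt(&a, &b))
        printf("preempt: switch to the higher-priority arrival\n");

    struct proc *next = pick_next(ready, 2);
    printf("next to run has priority %d\n", next->priority);
    return 0;
}
```

If should_preempt() always returned false, the same code would degenerate into non-preemptive priority scheduling, which shows where the preemption actually enters the picture.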

Related

In Paxos, why can't we use random backoff to avoid collision?

I understand that the heart of the Paxos consensus algorithm is that there is only one "majority" in any given set of nodes; therefore, if a proposer gets accepted by a majority, there cannot be another majority that accepts a different value, given that any acceptor can only accept a single value.
So the simplest "happy path" of a consensus algorithm is just for any proposer to ping a majority of acceptors and see if it can get them to accept its value, and if so, we're done.
The collision comes when concurrent proposers lead to a case where no majority of nodes agrees on a value. This can be demonstrated with the simplest case of 3 nodes: every node tries to get 2 nodes to accept its value, but due to concurrency every node ends up getting only itself to "accept" the value, and therefore no majority agrees on anything.
The Paxos algorithm goes on to introduce a 2-phase protocol to solve this problem.
But why can't we just simply back off a random amount of time and retry, until eventually one proposer succeeds in grabbing a majority opinion? This can be demonstrated to succeed eventually, since every proposer will back off a random amount of time if it fails to grab a majority.
I understand that this is not going to be ideal in terms of performance. But let's get performance out of the way first and only look at the correctness. Is there anything I'm missing here? Is this a correct (basic) consensus algorithm at all?
The designer of Paxos is a mathematician first, and he leaves the engineering to others.
As such, Paxos is designed for the general case to prove consensus is always safe, irrespective of any message delays or colliding back-offs.
And now the sad part. The FLP impossibility result is a proof that any system with this guarantee may run into an infinite loop.
Raft is also designed with this guarantee and thus the same mathematical flaw.
But, the author of Raft also made design choices to specialize Paxos so that an engineer could read the description and make a well-functioning system.
One of these design choices is the well-used trick of exponential random backoff to get around the FLP result in a practical way. This trick does not take away the mathematical possibility of an infinite loop, but does make its likelihood extremely, ridiculously, very small.
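For illustration only, here is a minimal sketch of that exponential random backoff trick wrapped around a proposer's retry loop; try_propose() is a made-up stand-in for one full round of whatever proposal protocol you run, and the window sizes are arbitrary.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Toy stand-in for one attempt at getting a majority of acceptors to
 * take a value: here it just succeeds with probability 1/3. In a real
 * system this would run one round of the actual proposal protocol. */
static bool try_propose(int value)
{
    (void)value;
    return (rand() % 3) == 0;
}

/* Retry with exponential backoff plus random jitter. Each failed
 * attempt doubles the backoff window, and the actual sleep is a
 * random point inside that window, so colliding proposers tend to
 * spread out instead of colliding again in lockstep. */
static void propose_with_backoff(int value)
{
    useconds_t window_us = 1000;              /* initial window: 1 ms  */
    const useconds_t max_window_us = 1000000; /* cap the window at 1 s */
    int attempts = 1;

    while (!try_propose(value)) {
        usleep((useconds_t)rand() % window_us);
        if (window_us < max_window_us)
            window_us *= 2;                   /* exponential growth */
        attempts++;
    }
    printf("proposal accepted after %d attempt(s)\n", attempts);
}

int main(void)
{
    srand((unsigned)time(NULL));
    propose_with_backoff(42);
    return 0;
}
```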
You can tack on this trick to Paxos itself, and get the same benefit (and as a professional Paxos maintainer, believe me we do), but then it is not Pure Paxos.
Just to reiterate, the Paxos protocol was designed to be in its most basic form SO THAT mathematicians could prove broad statements about consensus. Any practical implementation details are left to the engineers.
Here is a case where a liveness issue in Raft caused a 6-hour outage: https://decentralizedthoughts.github.io/2020-12-12-raft-liveness-full-omission/.
Note 1: Yes, I said that the Raft author specialized Paxos. Raft can be mapped onto the more general Vertical Paxos model, which in turn can be mapped onto the Paxos model. As can any system that implements consensus.
Note 2: I have worked with Lamport a few times. He is well aware of these engineering tricks, and he assumes everyone else is, too. Thus he focuses on the math of the problem in his papers, and not the engineering.
The logic you are describing is how leader election is implemented in Raft:
when there is no leader (or the leader goes offline), every node will have a random delay
after the random delay, the node will contact every other node and propose "let me be the leader"
if the node gets the majority of votes, then the node considers itself the leader: which is equivalent to saying "the cluster reached consensus on who is the leader"
if the node did not get the majority, then after a timeout and a random delay, the node will attempt again
Raft also has the concept of a term, but at a high level, the randomized waits are the feature which helps to reach consensus faster.
Answering your questions "why can't we..." - we can, it will be a different protocol.
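As a rough single-process sketch (mine, not code from the Raft paper or any implementation), the randomized wait plus majority check could look like this; request_vote() is a hypothetical stand-in for the vote RPC to one peer, and the 150-300 ms range follows the example interval given in the Raft paper.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NUM_NODES 5

/* Hypothetical stand-in for the "vote for me" RPC to one peer:
 * here each peer simply grants its vote with probability 1/2. */
static bool request_vote(int peer, int term)
{
    (void)peer; (void)term;
    return (rand() % 2) == 0;
}

/* One election attempt: start a new term, vote for yourself, then ask
 * every other node for its vote. Leadership needs a strict majority. */
static bool run_election(int *term)
{
    int votes = 1;                      /* a candidate votes for itself   */
    (*term)++;                          /* each attempt starts a new term */
    for (int peer = 1; peer < NUM_NODES; peer++)
        if (request_vote(peer, *term))
            votes++;
    return votes > NUM_NODES / 2;       /* strict majority wins */
}

int main(void)
{
    int term = 0;
    srand((unsigned)time(NULL));

    while (!run_election(&term)) {
        /* Randomized election timeout, e.g. 150-300 ms, so that two
         * candidates who split the vote are unlikely to retry in sync. */
        int timeout_ms = 150 + rand() % 151;
        printf("term %d: split vote, backing off %d ms\n", term, timeout_ms);
        /* (the actual sleep is omitted in this single-process sketch) */
    }
    printf("term %d: won the election, acting as leader\n", term);
    return 0;
}
```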

Does prior-CPU-usage-time dependent scheduling algorithm exist in Operating System?

I'd like to know if there is a scheduling algorithm where the priority of a process gets higher if its prior CPU usage time is small. It is similar to the Weighted Round Robin algorithm, but the priority depends on 'prior-CPU-usage-time'. Thanks.
Have a look at the multilevel feedback/round-robin scheduling of the old UNIXes, e.g. 3.xBSD. There, you have aging that depends on the CPU usage.
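As an illustration of that idea, here is a simplified sketch in the spirit of the classic BSD decay scheduler; the constants and the halving decay are illustrative, not the exact values any particular UNIX used.

```c
#include <stdio.h>

/* Lower number = better priority in this sketch. */
#define PRIO_BASE 50

struct task {
    int recent_cpu;  /* ticks of CPU consumed recently */
    int nice;        /* user-adjustable weight */
    int priority;    /* recomputed from recent_cpu and nice */
};

/* Called on every clock tick for the running task: using the CPU
 * raises recent_cpu, which will worsen the task's priority. */
static void on_tick(struct task *t)
{
    t->recent_cpu++;
}

/* Called periodically (e.g. once per second) for every task:
 * recent_cpu decays, so tasks that have not run for a while age
 * back toward a better priority. */
static void recompute(struct task *t)
{
    t->recent_cpu /= 2;                                   /* decay */
    t->priority = PRIO_BASE + t->recent_cpu / 4 + 2 * t->nice;
}

int main(void)
{
    struct task cpu_hog = { 0, 0, 0 };
    struct task sleeper = { 0, 0, 0 };

    for (int tick = 0; tick < 100; tick++)
        on_tick(&cpu_hog);            /* only the hog burns CPU time */

    recompute(&cpu_hog);
    recompute(&sleeper);
    printf("cpu_hog priority: %d, sleeper priority: %d (lower is better)\n",
           cpu_hog.priority, sleeper.priority);
    return 0;
}
```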

Flynn's Bottleneck - maximum speedup 2

According to Flynn's Bottleneck, the speedup due to instruction level parallelism (ILP) can be at best 2. Why is it so?
That version of Flynn's Bottleneck originates in Detection and Parallel Execution of Independent Instructions where the authors empirically conclude that ILP for most programs is less than 2. That was 1970 technology and that was an empirical conclusion. You can contrast it with Fisher's Optimism which said there was lots of ILP out there and proposed trace scheduling and VLIW to exploit it.
So the literal answer to your question is because that's what they measured within basic blocks back then.
The "ILP less than 2" meaning isn't really used anymore because superscalars and better compilers have blown past the number 2. So instead, over time Flynn's Bottleneck has come to mean "you cannot retire more than you fetch", which stems from his earlier paper Some Computer Organizations and Their Effectiveness.
The execution bandwidth of a system is usually referred to as being the maximum number of operations that can be performed per unit time by the execution area. Notice that due to bottlenecks in issuing instructions, for example, the execution bandwidth is usually substantially in excess of the maximum performance of a system.

RTC vs PIT for scheduler

My professor said that it is recommended to use the PIT instead of the RTC to implement an epoch-based round-robin scheduler. He didn't really mention any concrete reasons, and I can't think of any either. Any thoughts?
I personally would use the PIT (if you can only choose between these two; modern OSes use the HPET, IIRC).
One, it can generate interrupts at a higher frequency (although I question whether preempting a process within milliseconds is beneficial);
two, it has a higher priority on the PIC chip, which means it can't be interrupted by other IRQs.
Personally I use the PIT for the scheduler and the RTC timer for wall clock time keeping.
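For reference, this is roughly how channel 0 of the 8253/8254 PIT is programmed for a periodic scheduler tick. It is a bare-metal sketch: the outb helper is GCC-style x86 inline assembly and only makes sense in kernel (ring 0) code, and the 100-1000 Hz remark reflects common practice rather than a requirement.

```c
#include <stdint.h>

/* Port-I/O helper wrapping the x86 `outb` instruction (GCC inline
 * assembly). Only usable with I/O privilege, i.e. in kernel code. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

#define PIT_BASE_HZ   1193182u  /* input clock of the 8253/8254 PIT */
#define PIT_CHANNEL0  0x40      /* channel 0 data port */
#define PIT_COMMAND   0x43      /* mode/command register */

/* Program PIT channel 0 to fire a periodic IRQ0 at roughly `hz`.
 * Command byte 0x36 = channel 0, lobyte/hibyte access, mode 3
 * (square wave), binary counting. A 100-1000 Hz tick is a typical
 * choice for a round-robin scheduler's time slice. */
void pit_set_frequency(uint32_t hz)
{
    uint16_t divisor = (uint16_t)(PIT_BASE_HZ / hz);

    outb(PIT_COMMAND, 0x36);
    outb(PIT_CHANNEL0, (uint8_t)(divisor & 0xFF));        /* low byte  */
    outb(PIT_CHANNEL0, (uint8_t)((divisor >> 8) & 0xFF)); /* high byte */
}
```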
The RTC can be changed (it is, after all, a normal "clock"), meaning its values can't be trusted from an OS perspective. It might also not have the resolution and/or precision needed for OS scheduler interrupts.
While this doesn't answer the question directly, here are some further insights into choosing the preemption timer.
On modern systems (i586+; I am not sure whether the i486's external local APIC (LAPIC) had a timer) you should use neither, because you always get the local APIC timer, which is per-core. What's more, using either the PIT or the RTC for timer interrupts is already obsolete.
The LAPIC timer is usually used for preemption on modern systems, while the HPET is used for high-precision events. On systems with an HPET there is usually no physical PIT; also, the first two comparators of the HPET are capable of replacing the PIT and RTC interrupt sources, which is the simplest possible configuration for them and is preferred in most cases.
PITs are faster. An RTC typically can't interrupt faster than about 8 kHz and is most commonly configured to interrupt at 1 Hz (once a second).
The PIT has an interrupt function.
The PIT also has a higher resolution than the Real-Time Clock.

Difference between Latency and Jitter in Operating-Systems

When discussing criteria for operating systems, I keep hearing about interrupt latency and OS jitter. Now I ask myself: what is the difference between these two?
In my opinion, the interrupt latency is the delay from the occurrence of an interrupt until the interrupt service routine (ISR) is entered.
In contrast, jitter is how much the moment of entering the ISR varies over time.
Is this the same as what you think?
Your understanding is basically correct.
Latency = Delay between an event happening in the real world and code responding to the event.
Jitter = Differences in Latencies between two or more events.
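A toy numeric example (with made-up timestamps) may help: compute the latency of each event, then report the spread of those latencies as the jitter. Other jitter definitions, such as the standard deviation of the latencies, are also common.

```c
#include <stdio.h>
#include <stddef.h>

/* Latency of one event = handled_time - event_time.
 * Jitter here is reported as the spread (max - min) of those
 * latencies. Times are in microseconds, purely illustrative. */
int main(void)
{
    long event_us[]   = { 1000, 2000, 3000, 4000 }; /* when the interrupt fired */
    long handled_us[] = { 1012, 2015, 3011, 4030 }; /* when the ISR ran         */
    size_t n = sizeof event_us / sizeof event_us[0];

    long min_lat = 0, max_lat = 0;
    for (size_t i = 0; i < n; i++) {
        long lat = handled_us[i] - event_us[i];
        if (i == 0 || lat < min_lat) min_lat = lat;
        if (i == 0 || lat > max_lat) max_lat = lat;
        printf("event %zu: latency %ld us\n", i, lat);
    }
    printf("jitter (max - min latency): %ld us\n", max_lat - min_lat);
    return 0;
}
```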
In the realm of clustered computing, especially when dealing with massive scale-out solutions, there are cases where work distributed across many systems (and many, many processor cores) needs to complete in fairly predictable time-frames. An operating system, and the software stack being leveraged, can introduce some variability in the run-times of these "chunks" of work. This variability is often referred to as "OS jitter".
Interrupt latency, as you said, is the time between the interrupt signal and entry into the interrupt handler.
The two concepts are orthogonal to each other. However, in practice, more interrupts generally imply more OS jitter.