I know that in Asymmetric multiprocessing one processor can make all the scheduling decisions whilst the others execute user code only. But is it possible for a single-processor system to allow for multi-level queue scheduling? And why?
Certainly a single processor system can use multi-level queue scheduling (MLQS). The MLQS algorithm is used to decide which process to run next when a processor becomes available. The algorithm doesn't require that there be more than one processor in the system. As a matter of fact, the algorithm is most efficient if there is only one processor. In a multi-processor system the data structure would need some sort of locking to prevent it from being corrupted.
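For illustration, here is a minimal sketch (my own, not from the original answer) of what a multi-level queue might look like on a single processor: an array of FIFO queues indexed by priority level, where the scheduler always dispatches from the highest-priority non-empty level. All names and sizes are made up, and on a multiprocessor this structure would additionally need the locking mentioned above.

```c
#include <stddef.h>

#define LEVELS 3            /* e.g. 0 = system, 1 = interactive, 2 = batch */
#define QUEUE_CAP 64

/* A fixed-size FIFO of process IDs for one priority level. */
struct fifo {
    int pids[QUEUE_CAP];
    size_t head, tail, count;
};

/* The whole multi-level queue: one FIFO per priority level. */
struct mlq {
    struct fifo level[LEVELS];
};

/* Add a runnable process to the queue for its priority level. */
static void mlq_enqueue(struct mlq *q, int level, int pid)
{
    struct fifo *f = &q->level[level];
    if (f->count == QUEUE_CAP)
        return;                             /* full; a real kernel would not just drop it */
    f->pids[f->tail] = pid;
    f->tail = (f->tail + 1) % QUEUE_CAP;
    f->count++;
}

/* Pick the next process to run: scan from the highest-priority level down. */
static int mlq_pick_next(struct mlq *q)
{
    for (int i = 0; i < LEVELS; i++) {
        struct fifo *f = &q->level[i];
        if (f->count > 0) {
            int pid = f->pids[f->head];
            f->head = (f->head + 1) % QUEUE_CAP;
            f->count--;
            return pid;
        }
    }
    return -1;                              /* nothing is ready to run */
}
```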
Related
Is it the operating system that delegates jobs to the cores?
What specific algorithm or mechanism decides which CPU core the next task will be assigned to?
Correct, it is the operating system's responsibility to designate tasks for the CPU to complete, regardless of how many cores it has. It does this via a scheduling algorithm, which decides in what order tasks/processes should be executed. In a symmetric multiprocessing environment, the OS views each core as an independent, identical CPU and therefore schedules them individually. When several cores are available, there are a couple of important things to keep in mind:
1. Load balancing - For maximum performance, each core should be performing roughly the same amount of work.
2. Affinity - Because of caching, it is best (in terms of performance) for processes to complete the entirety of their execution on just one processor.
These things need to be kept in mind along with the traditional scheduling considerations of priority, fairness, etc. Obviously, this topic is far too large for just one post to handle, so here are some resources that go into further detail:
https://www.tutorialspoint.com/operating_system/os_process_scheduling_algorithms.htm
https://www.geeksforgeeks.org/multiple-processor-scheduling-in-operating-system/
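As a rough sketch of the load-balancing point above (my illustration, not taken from the linked resources), a naive scheduler might simply place each newly runnable task on the core whose run queue is currently shortest:

```c
#include <stddef.h>

#define NCORES 4

/* Hypothetical per-core run-queue lengths, updated by the scheduler. */
static size_t runq_len[NCORES];

/* Place a newly runnable task on the core with the shortest queue. */
static int pick_core_for_new_task(void)
{
    int best = 0;
    for (int i = 1; i < NCORES; i++)
        if (runq_len[i] < runq_len[best])
            best = i;
    runq_len[best]++;            /* the task now sits on that core's queue */
    return best;
}
```

Real schedulers weigh this against affinity, so a task is not bounced between cores (and their caches) just because another queue is momentarily shorter.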
As part of my research I need to provide the reader with a comprehensive introduction to distributed systems. I am currently struggling to properly define a number of concepts that recur in the literature on distributed systems and transactions. These are (a) nodes, (b) processes, (c) transactions, and (d) operations. I could really use some help in understanding how they relate, as I seem to continuously mix up nodes with processes and transactions with operations. Any input is appreciated!
I have already tried to grasp these concepts by researching the following literature:
Distributed Systems: Concepts and Design (G. Coulouris et al.)
A brief introduction to distributed systems (A.S. Tanenbaum)
I'm not sure exactly what ambiguity you perceive in these terms, which makes it harder to give the right answer. They have the same meaning in distributed systems terminology as in any other area of computing.
To be more concrete.
A node is usually "a machine" which runs one or more processes. A process executes operations. Operations may be grouped into a transaction (the transaction is composed of operations).
I quickly searched the resources you referred to, and they say:
A computing element, which we will generally refer to as a node, can
be either a hardware device or a software process.
A node runs processes. But the node itself can be real hardware (a machine) or it can be a virtual machine, which is itself a process running on some machine (real hardware).
From the distributed-systems perspective you don't care what the node is in reality (real hardware or virtual software); it is a "container" for running processes.
A process is "a runtime": it processes something. It can process numbers, data, messages... The chunks of work that are processed inside the process are operations. E.g., you save data to a database, and you do it as an operation.
A transaction defines a unit of work which consists of several operations, and it gives you guarantees over those operations. Which guarantees those are depends on the model you use. If you think about ACID transactions (as defined in the 1983 paper "Principles of Transaction-Oriented Database Recovery"), then you are guaranteed that either all operations are successfully processed or none of them is (Atomicity), that consistency is maintained (Consistency), that parallel transactions do not interfere (Isolation), and that the transaction's outcome is persistent (Durability).
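To make the relationship concrete, here is a toy sketch (my own illustration, not from the cited literature) in which an operation is the smallest unit of work a process executes, and a transaction is a group of operations with all-or-nothing behaviour:

```c
#include <stdbool.h>
#include <stddef.h>

/* An "operation" is the smallest chunk of work a process executes.    */
/* Here it is modelled as a step that can succeed or fail, plus an     */
/* undo step used when the enclosing transaction has to roll back.     */
struct operation {
    bool (*apply)(void);
    void (*undo)(void);
};

/* A "transaction" groups several operations and, in this toy model,   */
/* provides the all-or-nothing guarantee (the A in ACID): either every */
/* operation applies, or the ones already applied are undone.          */
static bool run_transaction(struct operation *ops, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!ops[i].apply()) {
            while (i-- > 0)           /* undo already-applied operations */
                ops[i].undo();
            return false;             /* transaction aborted */
        }
    }
    return true;                      /* transaction committed */
}
```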
I'm learning about synchronization and now I'm confused about the definition of an atomic operation. Through searching, I could only find out that an atomic operation is an uninterruptible operation.
Then, won't atomic operations only be valid for a uniprocessor system, since on a multiprocessor system many operations can run simultaneously?
This link explains it pretty much perfectly (emphasis mine):
On multiprocessor systems, ensuring atomicity exists is a little
harder. It is still possible to use a lock (e.g. a spinlock) the same
as on single processor systems, but merely using a single instruction
or disabling interrupts will not guarantee atomic access. You must
also ensure that no other processor or core in the system attempts to
access the data you are working with. The easiest way to achieve this
is to ensure that the instructions you are using assert the 'LOCK'
signal on the bus, which prevents any other processor in the system
from accessing the memory at the same time. On x86 processors, some
instructions automatically lock the bus (e.g. 'XCHG') while others
require you to specify a 'LOCK' prefix to the instruction to achieve
this (e.g. 'CMPXCHG', which you should write as 'LOCK CMPXCHG op1,
op2').
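As a concrete illustration (mine, not from the linked page), C11 atomics give you exactly these LOCK-prefixed instructions on x86 without writing assembly:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int counter;

/* Atomic increment: on x86-64, compilers typically emit `lock xadd`   */
/* (or `lock add`), so the update is safe even when other cores touch  */
/* the same variable at the same time.                                 */
void increment(void)
{
    atomic_fetch_add(&counter, 1);
}

/* Compare-and-swap: set `counter` to `new_val` only if it still holds */
/* `expected`; this compiles to `lock cmpxchg` on x86.                 */
bool set_if_equal(int expected, int new_val)
{
    return atomic_compare_exchange_strong(&counter, &expected, new_val);
}
```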
I have a dual-core Intel processor and would like to use one core for processing certain commands like SATA writes and another for reads. How do we do it? Can this be controlled from the application (with multiple threads), or would this require a change in the kernel to ensure the reads/writes don't get processed by the 'wrong' core?
This will be pretty much totally up to your operating system, which you haven't specified.
Some may offer thread affinity to try and keep one thread on the same execution engine (be that a core or a CPU), but that's only for threads. If two threads both write to disk, then they may well do so on different engines.
If you want that sort of low-level control, it's probably best to do it at the kernel level.
My question to you would be "Why?". A great deal of performance tuning goes into OS kernels, and they generally know better than any application how to do this low-level stuff efficiently.
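That said, if you do want to experiment from the application side, Linux lets you pin individual threads to cores. A minimal sketch, assuming a Linux/glibc target; note that it only controls where your thread runs, not where the kernel actually performs the block I/O:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to a single core (Linux/glibc only). */
static int pin_self_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    int err = pin_self_to_core(0);    /* e.g. a "writer" thread on core 0 */
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
    else
        printf("thread pinned to core 0\n");
    return 0;
}
```

Even with the thread pinned, the point above stands: the kernel's own I/O work and interrupt handling can still run on either core.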
Can a shared ready queue limit the scalability of a multiprocessor system?
Simply put, most definitely. Read on for some discussion.
Tuning a service is an art form, or at least requires benchmarking (and the space of concepts you would need to benchmark is huge). I believe it depends on factors such as the following (this is not exhaustive):
how much time an item picked up from the ready queue takes to process,
how many worker threads there are,
how many producers there are, and how often they produce, and
what type of wait primitives you are using: spin-locks or kernel waits (the latter being slower).
So if items are produced often, the number of threads is large, and the processing time is low, the data structure could be locked for large windows, thus causing thrashing.
Other factors may include the data structure used and how long it is locked for; e.g., if you use a linked list to manage such a queue, the add and remove operations take constant time, while a priority queue (heap) takes a few more operations on average when items are added.
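A minimal sketch of the bottleneck being described (my illustration, not a benchmark): with a single shared ready queue, every worker on every CPU must take the same lock to pop an item, so the lock hold time multiplied by the pop rate bounds the whole system.

```c
#include <pthread.h>
#include <stddef.h>

#define CAP 1024

/* A single ready queue shared by all CPUs: every dequeue serialises  */
/* on one mutex, which is exactly the contention point discussed.     */
struct shared_queue {
    pthread_mutex_t lock;
    int items[CAP];
    size_t head, tail, count;
};

static struct shared_queue readyq = { .lock = PTHREAD_MUTEX_INITIALIZER };

static int sq_pop(void)
{
    int item = -1;

    pthread_mutex_lock(&readyq.lock);    /* every CPU contends here */
    if (readyq.count > 0) {
        item = readyq.items[readyq.head];
        readyq.head = (readyq.head + 1) % CAP;
        readyq.count--;
    }
    pthread_mutex_unlock(&readyq.lock);
    return item;                          /* -1 means the queue was empty */
}
```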
If your system is for business processing, you could take this question out of the picture by just using:
A process-based architecture: spawn multiple producer/consumer processes and use the file system for communication, or
A non-preemptive, cooperative threading programming language such as Stackless Python, Lua, or Erlang.
Also note: synchronization primitives cause floods of inter-processor cache-coherence traffic, which is not good, and therefore they should be used sparingly.
The discussion could go on to fill a Ph.D dissertation :D
A per-CPU ready queue is a natural choice for the data structure. This is because most operating systems will try to keep a process on the same CPU, for many reasons you can google for. What does that imply? If a thread is ready and another CPU is idling, the OS will not quickly migrate the thread to the other CPU; load balancing only kicks in over the long run.
Had the situation been different, that is, if keeping thread-CPU affinity were not a design goal and thread migration were frequent, then keeping separate per-CPU run queues would be costly.
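For contrast with the shared-queue sketch earlier, here is a minimal per-CPU layout (names are mine): each CPU pops from its own queue and therefore only takes its own lock on the common path.

```c
#include <pthread.h>
#include <stddef.h>

#define NCPUS 4
#define CAP   256

/* One run queue per CPU; a CPU scheduling its own work takes only    */
/* its local lock, so CPUs rarely contend with one another.           */
struct runqueue {
    pthread_mutex_t lock;
    int tasks[CAP];
    size_t head, tail, count;
};

static struct runqueue rq[NCPUS];

static void rq_init(void)
{
    for (int i = 0; i < NCPUS; i++)
        pthread_mutex_init(&rq[i].lock, NULL);
}

/* Fast path: CPU `cpu` picks the next task from its own queue only. */
static int pick_next(int cpu)
{
    struct runqueue *q = &rq[cpu];
    int task = -1;

    pthread_mutex_lock(&q->lock);     /* local lock, no cross-CPU contention */
    if (q->count > 0) {
        task = q->tasks[q->head];
        q->head = (q->head + 1) % CAP;
        q->count--;
    }
    pthread_mutex_unlock(&q->lock);
    return task;                       /* -1 means this CPU's queue is empty */
}
```

When a CPU's own queue is empty (pick_next returns -1), a real scheduler would either idle or steal work from another CPU's queue; that slower path is where the migration and load balancing mentioned above happen.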