Which data structure is used for the ready queue in an operating system?

To implement the round-robin scheduling algorithm, a circular queue is generally considered the best data structure.
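For illustration, here is a minimal sketch of such a circular queue in C. The pcb_t type and MAX_PROCS bound are assumptions for the example, not taken from any particular kernel:

```c
#include <stddef.h>

#define MAX_PROCS 64

typedef struct { int pid; } pcb_t;  /* placeholder process control block */

typedef struct {
    pcb_t *slots[MAX_PROCS];  /* fixed-size ring buffer */
    size_t head, tail, count;
} ready_queue_t;

/* Append a process at the tail; indices wrap around the array. */
static int rq_enqueue(ready_queue_t *q, pcb_t *p) {
    if (q->count == MAX_PROCS) return -1;  /* queue full */
    q->slots[q->tail] = p;
    q->tail = (q->tail + 1) % MAX_PROCS;
    q->count++;
    return 0;
}

/* Remove the process at the head; this one gets the next time slice. */
static pcb_t *rq_dequeue(ready_queue_t *q) {
    if (q->count == 0) return NULL;  /* nothing ready */
    pcb_t *p = q->slots[q->head];
    q->head = (q->head + 1) % MAX_PROCS;
    q->count--;
    return p;
}
```

When a time slice expires, the scheduler would dequeue the next process and re-enqueue the preempted one, which is exactly the wrap-around behavior the circular layout makes cheap.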

Which data structure is used for the ready queue in the Windows operating system?
I don't know but I have found some articles which might contain the answer:
Processes, Threads, and Jobs in the Windows Operating System by Mark E. Russinovich and David A. Solomon - search for "ready queue"
Internals of Windows Thread by Mahesh Bailwal - search for "ready queue"
Google query: site:reactos.org "ready queue"
According to article #1, there are fields in the Processor Region Control Block kernel object (PRCB) called ReadySummary (a 32-bit bitmask), DeferredReadyListHead (a singly linked list), and DispatcherReadyListHead (an array of 32 list entries) which implement the ready queue.
You can use the source code of the ReactOS operating system (article #3) to learn more about the Windows behavior.
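As a rough sketch of how those fields could fit together, assuming a doubly linked list_head type and an illustrative helper (these are not the actual Windows definitions):

```c
#include <stdint.h>

typedef struct list_head {
    struct list_head *next, *prev;
} list_head;

typedef struct {
    uint32_t  ReadySummary;                 /* bit i set => priority-i list is non-empty */
    list_head DispatcherReadyListHead[32];  /* one FIFO list per priority level 0..31 */
} prcb_sketch_t;

/* Selecting the next thread is then a bit scan for the highest set bit,
 * followed by removing the first entry of that priority's list. */
static int highest_ready_priority(const prcb_sketch_t *prcb) {
    uint32_t summary = prcb->ReadySummary;
    if (summary == 0) return -1;            /* no ready threads on this CPU */
    int pri = 31;
    while (!(summary & (1u << pri)))        /* real kernels use one bit-scan instruction */
        pri--;
    return pri;
}
```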

Which data structure is used for the ready queue in an operating system?
This depends on the operating system. In most modern operating systems the scheduler uses some algorithm designed to be as fair as possible: round robin, FIFO, CFS (which keeps runnable tasks in a red-black tree), or something else. The ready queue may also be split into priority levels.
When a process is created or unblocked, it can be appended to the back of the list/queue. Alternatively, it may be left in place and simply skipped for as long as it is blocked.
What is used also depends on whether preemption is in use, which it is by default in general-purpose operating systems (Linux/Windows/macOS).
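As a toy illustration of the CFS idea mentioned above: keep runnable tasks ordered by the CPU time they have consumed (vruntime) and always pick the one that has run the least. Linux uses a red-black tree for O(log n) insertion; the sorted list below is only to keep the sketch short, and the names are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct task {
    uint64_t     vruntime;  /* weighted CPU time consumed so far */
    struct task *next;
} task_t;

/* Insert in ascending vruntime order (the red-black tree's job in Linux). */
static void enqueue_task(task_t **rq, task_t *t) {
    while (*rq && (*rq)->vruntime <= t->vruntime)
        rq = &(*rq)->next;
    t->next = *rq;
    *rq = t;
}

/* Pick the task that has run the least: the head of the ordered structure. */
static task_t *pick_next_task(task_t **rq) {
    task_t *t = *rq;
    if (t) *rq = t->next;
    return t;
}
```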

The round-robin scheduler chooses processes on a first-come, first-served basis: the first to arrive is the first to be executed. Each process executes for a short period called a time slice. When the time slice expires, a timer interrupt occurs; the process at the front of the queue is chosen to execute next, and the interrupted process is placed at the end of the queue. This FIFO behavior makes a FIFO queue the natural structure for the ready queue, as in the sketch below.
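A minimal sketch of that timer-interrupt path, assuming hypothetical fifo_push/fifo_pop/switch_to helpers and a `current` pointer (none of these come from a real kernel):

```c
typedef struct proc proc_t;

extern proc_t *fifo_pop(void);           /* remove the process at the front */
extern void    fifo_push(proc_t *p);     /* append a process at the back */
extern void    switch_to(proc_t *next);  /* restore next's saved context */

static proc_t *current;                  /* the process holding the CPU */

/* Invoked from the timer interrupt once the time slice has expired. */
void on_timeslice_expired(void) {
    fifo_push(current);                  /* interrupted process goes to the end */
    current = fifo_pop();                /* first in the queue runs next */
    switch_to(current);
}
```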

Related

Micro scheduler for real-time kernel in embedded C applications?

I am working with time-critical applications where microseconds count. I am interested in a more convenient way to develop my applications using a non-bare-metal approach (some kind of framework or base foundation common to all my projects).
Operating systems considered real-time, such as RTX, Xenomai, Micrium or VxWorks, are not really real-time under my terms (or under the terms of electronics engineers). So I prefer to distinguish between soft-real-time and hard-real-time applications. A hard-real-time application has an acceptable jitter of less than 100 ns and a heartbeat of 100..500 microseconds (tick timer).
After lots of reading about operating systems, I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed per tick. Therefore tasks usually take much more than one tick to complete, and this is the case with most available operating systems and microkernels.
For my applications, a typical task has a duration of 10..100 microseconds, with a few exceptions that can last more than one tick. So no real-time operating system can fulfill my requirements. That is the reason why other engineers still do not consider operating systems or micro/nano kernels: the way they work is too far from their needs. I still want to struggle a bit, and in my case I now realize I have to consider a new category of operating system that I have never heard about (and that may not exist yet). Let's call this category a nano-kernel or sub-tick scheduler.
In such a dreamed-of kernel I would find:
2 types of tasks:
Preemptive tasks (that run in their own memory space)
Non-preemptive tasks (that run in kernel space and must complete in less than one tick)
A deterministic kernel scheduler (fixed duration after the ISR, to reach the theoretical zero jitter)
Ability to run multiple tasks per tick
For a better understanding of what I am looking for, I made the figure below, which represents the two types of kernels. The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt. Task 1 was started with a maximum execution time value, so it needs 2 ticks to complete. Task 2 is set with low priority, so it consumes the remaining time of each tick until completion. Task 3 is non-preemptive, so it operates in kernel space, which saves some precious context-switch time.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard-real-time applications such as motion control, where a typical cycle would last no more than 50 to 500 microseconds. By analyzing my needs I landed on a different topology for my scheduler, where multiple tasks can be executed under the same tick interrupt. Obviously I am not the only one with this kind of need, and my problem might simply be a kind of X-Y problem. Said differently, I may not actually be looking for what I really need.
After this (pretty) long introduction I can formulate my question:
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
Sorry, but first of all, let me say that your entire post is completely wrong and shows a complete lack of understanding of how a preemptive RTOS works.
After lots of reading about operating systems, I realized that the typical tick time is 1 to 10 milliseconds and only one task can be executed per tick.
This is completely wrong.
In reality, the tick frequency in an RTOS determines only two things:
resolution of timeouts, sleeps and so on,
context switches due to round-robin scheduling (where two or more threads with the same priority are "runnable" at the same time for a long period of time).
During a single tick - which typically lasts 1-10 ms, but you can usually configure that to be whatever you like - the scheduler can do hundreds or thousands of context switches. Or none. When an event arrives and wakes up a thread with a sufficiently high priority, the context switch will happen immediately, not at the next tick. An event can be originated by a thread (posting a semaphore, sending a message to another thread, ...), by an interrupt (posting a semaphore, sending a message to a queue, ...) or by the scheduler (an expired timeout or things like that).
There are also RTOSes with no system tick - these are called "tickless". There you can have timeout resolution in the range of nanoseconds.
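To make the "immediately, not at the next tick" point concrete, here is a hedged FreeRTOS-style sketch: an ISR gives a semaphore and requests a context switch on the way out, so the waiting high-priority task runs right away. The event source and task names are assumptions for the example:

```c
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t event_sem;

/* High-priority task: blocks on the semaphore, runs as soon as it is given. */
static void worker_task(void *arg) {
    (void)arg;
    for (;;) {
        xSemaphoreTake(event_sem, portMAX_DELAY);  /* sleep until the event */
        /* handle the event here, well before the next tick */
    }
}

/* Interrupt handler for some event source (the name is illustrative). */
void event_isr(void) {
    BaseType_t woken = pdFALSE;
    xSemaphoreGiveFromISR(event_sem, &woken);
    portYIELD_FROM_ISR(woken);  /* switch to worker_task now if it became ready */
}

/* Setup, run once at startup before the scheduler starts. */
void setup(void) {
    event_sem = xSemaphoreCreateBinary();
    xTaskCreate(worker_task, "worker", 256, NULL,
                configMAX_PRIORITIES - 1, NULL);  /* highest priority */
}
```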
That is the reason why other engineers still do not consider operating systems or micro/nano kernels: the way they work is too far from their needs.
Actually, this is a reason why these "engineers" should read something instead of pretending to know everything and seeking "innovative" solutions to non-existent problems. This is completely wrong.
The first representation is the traditional kernel. A task executes at each tick and it may interrupt the kernel with a system call that invokes a full context switch.
This is not a feature of an RTOS, but of the way you wrote your application - if a high-priority task is constantly doing something, then lower-priority tasks will NOT get any chance to run. But this is just because you assigned the wrong priorities.
Unless you use a cooperative RTOS - but if you have such high requirements, why would you do that?
The second diagram shows a sub-tick kernel scheduler where multiple tasks may share the same tick interrupt.
This is exactly how EVERY preemptive RTOS works.
Available operating systems such as RTOS, RTAI, VxWorks or µC/OS are not fully real-time and are not suitable for embedded hard-real-time applications such as motion control, where a typical cycle would last no more than 50 to 500 microseconds.
Completely wrong. In every known RTOS it is not a problem to get a response time down to single microseconds (1-3 us) with a chip that has a clock in the range of 100 MHz. So you actually can run "jobs" as short as 10 us without too much overhead. You can even have "jobs" as short as 10 ns, but then the overhead will be pretty high...
What could be a good existing architecture or framework that can fulfill my requirements other than a naive bare-metal approach where everything is written sequentially around one master interrupt? If this kind of framework/design pattern exists what would it be called?
This pattern is called a preemptive RTOS. Do note that threads in an RTOS are NOT executed in the "tick interrupt". They are executed in a standard "thread" context, and the tick interrupt is only used to switch the context from one thread to another.
What you described in your post is a "cooperative" RTOS, which does NOT preempt threads. You use that in systems with extremely limited resources and low timing requirements. In every other case you use a preemptive RTOS, which is capable of handling events immediately.

How did the apples fall for threads to be conceived?

I was going through the following lecture notes on OS:
http://williamstallings.com/Extras/OS-Notes/h2.html
What I could draw from them is this: "A process is a stream of execution, i.e. basically a sequence of statements, and so is a thread. However, the register states of one process are independent of the register states of another process, but the register states of another thread can be accessed inside a thread. For every process at least one thread is allotted or dedicated; when a process is started, the OS activities for that process are taken over by the thread (or a thread)."
What was the rationale behind conceiving the idea of threads? When the OS is running a particular process, why do we need an intermediary like a thread between them?
"However, the register states of one process are independent of the register states of another process, but the register states of another thread can be accessed inside a thread."
Can the above statement be taken to mean that in the code for a process we cannot access the register states of another process, but in the code for a thread we can access the register states of another thread?
(The above question substitutes "process" and "thread" with their definitions as sequences of statements.)
P.S.: The title of the question is a metaphoric one. Please forgive me if it misleads in any way. :P Could I take the liberty to broaden the question and ask: if the processor generates a thread for every process, what does it write in the code for a thread? (What does the code for a thread look like?)
Terminology - for a system with virtual memory, threads share the same virtual memory address space, while each process has its own address space. Processes can share physical memory by having a portion of that memory mapped into their virtual address spaces (but the virtual address in each process may be different even though it is the same physical memory block).
Early (1960s) instances of multiprocessing were mainframes that ran multiple processes which usually did not communicate with each other. Most of this activity was for batch-oriented jobs, with a stream of jobs to be run, often from a punched-card reader or, in more advanced situations, from remote job entry sites: other computers with a few peripherals (card readers, tape drives, line printers, ...) that communicated with the mainframe to run jobs. There were also time-sharing applications, similar to servers, except that in many cases relatively dumb terminals were used to communicate with the mainframe. By the 1970s, APL/SV (A Programming Language / Shared Variables) was a time-sharing application / programming language that could share variables between users.
For multi-process / multi-threaded operating systems, the device drivers operate from a queue of requests (such as file reads or writes). Adding a request to a device driver queue is done in a manner similar to a context switch, so there won't be conflicts between process or thread I/O requests. Some peripherals, such as mainframe, SCSI, or ... disk drives, also operated from an internal queue and could process I/O requests out of order to reduce random-access overhead.
The basic problem that drove threads was: how can an application handle multiple tasks at the same time, and do it in a system-independent manner?
In classical Unix, a process could only do one thing at a time. If you needed to handle multiple things, you kicked off multiple processes.
In the old RSX and VMS systems (and Windows under the covers), programmers relied on software interrupts. A process could queue I/O requests to multiple devices and receive a software interrupt when a request completed, thus allowing the application to do multiple things at once.
Another approach to the multiple-things-at-once problem was to use event queues (Windows, the X Window System).
The Ada programming language was the first (and still really the only) mainstream programming language to support threads (tasks) as a system-independent way to handle these kinds of problems. DoD compliance mandates drove the creation of threads.
Originally, threads were implemented through libraries ("user threads", the "many-to-one model"). With the rise of multiprocessor systems, there was increased demand for threads that could execute in parallel on different processors. This drove the creation of kernel threads in operating systems. (Many operating systems still do not support kernel threads.)

Difference between Job Queue, Input Queue and Ready Queue?

Could someone explain what exactly the function of each of the 3 queues is and how they differ from each other? It would be great if you could also tell me where exactly each queue resides (i.e. main memory or disk). Thanks!
Edit: I want to know their function with respect to their use for queueing processes in UNIX-based operating systems.
Jobs and their queues are abstract concepts with many different implementations (see Wikipedia: Job queue and Wikipedia: Job scheduler) which then define their meaning. Input queue and ready queue fall into the same "abstract" category.
For example: the Windows AT command can schedule and execute jobs in the form of arbitrary OS shell commands, and its job queue resides in the Windows Registry (see Wikipedia: Windows Registry), which lives on disk but for performance reasons is also cached in main memory. See http://ss64.com/nt/at.html for more details.

Which process puts a process in the waiting queue?

Assume we are using semaphores to provide mutual exclusion and one process is executing in its critical section. If another process then arrives to use the critical region, would it be put into the waiting queue?
My doubt is: which process puts this process in the waiting queue?
Thanks in advance,
In a typical operating system this is handled by the kernel, not by a process. The kernel keeps track of what critical regions exist and which processes are occupying them. Also, in a typical operating system the scheduler is part of the kernel, so it is the scheduler that will put the process in a waiting state (or, to be more precise, more likely a blocked state).
When a thread/process/task requests a mutual exclusion object, it makes a system call into the kernel, where mutual exclusion objects are handled. If the object is not available at that moment, the kernel puts the thread/process/task in the waiting/blocked queue and elects another one to run, as in the sketch below.
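A simplified sketch of that kernel-side path (the types and helpers are illustrative, not any real kernel's API):

```c
typedef struct task task_t;

typedef struct {
    int     count;       /* > 0 means the semaphore is available */
    task_t *wait_queue;  /* tasks blocked on this semaphore */
} ksem_t;

extern task_t *current_task;                        /* task making the system call */
extern void enqueue_waiter(task_t **q, task_t *t);  /* append to the wait queue */
extern void set_blocked(task_t *t);                 /* mark as not runnable */
extern void schedule(void);                         /* elect another runnable task */

/* Entered via a system call from the requesting task. */
void ksem_wait(ksem_t *s) {
    if (s->count > 0) {
        s->count--;  /* semaphore free: take it and return to the caller */
        return;
    }
    set_blocked(current_task);  /* the kernel, not another process, blocks it */
    enqueue_waiter(&s->wait_queue, current_task);
    schedule();                 /* pick someone else to run */
}
```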

Multicores and multithreads

How is process-based multitasking achieved by using multi-threading in each process?
For example, consider an operating system running two background processes, each of which internally supports multi-threading. Now, how does time slicing happen between and inside these processes, and how does time slicing happen between threads?
The scheduler typically works at the thread level. In simplest terms the scheduler gives each runnable thread its timeslice in turn.
So a process with two threads will get twice as much CPU time as a process with one thread.
From:
http://msdn.microsoft.com/en-us/library/ms684259(VS.85).aspx
"A multitasking operating system divides the available processor time among the processes or threads that need it. The system is designed for preemptive multitasking; it allocates a processor time slice to each thread it executes. The currently executing thread is suspended when its time slice elapses, allowing another thread to run. When the system switches from one thread to another, it saves the context of the preempted thread and restores the saved context of the next thread in the queue.
The length of the time slice depends on the operating system and the processor. Because each time slice is small (approximately 20 milliseconds), multiple threads appear to be executing at the same time. This is actually the case on multiprocessor systems, where the executable threads are distributed among the available processors. However, you must use caution when using multiple threads in an application, because system performance can decrease if there are too many threads."
Also check out this link for when to use multi-tasking. The sketch below illustrates the thread-level scheduling described above.
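As a toy sketch of the point that the run queue holds threads rather than processes (all types and helpers below are illustrative), a process with two runnable threads simply has two entries and therefore collects two slices per pass:

```c
typedef struct thread {
    int            tid;
    int            pid;   /* owning process; the scheduler ignores it */
    struct thread *next;  /* circular list of runnable threads */
} thread_t;

extern void run_for_one_slice(thread_t *t);  /* assumed dispatch helper */

/* One pass around the circular run queue: every runnable thread gets a
 * time slice in turn, regardless of which process owns it. */
void schedule_round(thread_t *head) {
    thread_t *t = head;
    do {
        run_for_one_slice(t);
        t = t->next;
    } while (t != head);
}
```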
The operating system decides when and for how long each thread executes. For Microsoft operating systems, there is no way to determine or predict which thread in which process will execute next. Each thread also has a priority that it runs at. Higher-priority threads tend to get more time than lower-priority ones. This priority can be changed by the user or by a program. See this link for more info.
"Now, how does time slicing happen between and inside these processes, and how does time slicing happen between threads?"
That's entirely up to the operating system to decide, really. A really basic OS might not do time slicing at all, and just let each process run through to completion on a first-come, first-served basis.
However, most modern operating systems will use some flavor of scheduling algorithm to decide which thread gets to execute on which core and for how long, and perform the context-switching necessary to save and restore per-thread state when swapping out one thread for another.
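For a feel of the "save and restore per-thread state" step, here is a small POSIX sketch using <ucontext.h>. Real kernels do this in assembly at interrupt time, but swapcontext shows the same idea of storing one execution context and loading another:

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];  /* private stack for the second context */

static void task_body(void) {
    puts("task: running on its own stack");
    /* returning resumes uc_link, i.e. main_ctx */
}

int main(void) {
    getcontext(&task_ctx);                   /* initialize the context */
    task_ctx.uc_stack.ss_sp   = task_stack;  /* give it its own stack */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;   /* where to go when task_body returns */
    makecontext(&task_ctx, task_body, 0);

    puts("main: switching out");
    swapcontext(&main_ctx, &task_ctx);       /* save main's registers, load task's */
    puts("main: switched back in");
    return 0;
}
```

Running it prints the three lines in order: main switches out, the task runs on its own stack, and main resumes where it left off.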