I cannot seem to find any info on this question, so I thought I'd ask here.
(No reply here: https://lists.zephyrproject.org/pipermail/zephyr-devel/2017-June/007743.html)
When a driver (e.g. SPI or UART) is invoked through
FreeRTOS using the vendor HAL, there
are two options for waiting upon completion:
1) Interrupt
2) busy-waiting
My question is this:
If the driver is invoked using busy-waiting, does FreeRTOS then have any knowledge of the busy-waiting (occurring in the HAL driver)? Does the task still get a time slot allocated (for doing the busy-waiting)? Is this
how it works? (Presuming the FreeRTOS scheduler is preemptive.)
Now in Zephyr (and probably Mynewt), I can see that when the driver is called, Zephyr keeps track of the calling thread, which is then suspended (blocked state) until the transfer finishes. The driver's interrupt routine then puts the calling thread back into the run queue when it is ready to proceed. This way no cycles are wasted. Have I understood this correctly?
Thanks
Anders
I don't understand your question. In FreeRTOS, if a driver is implemented to perform a busy wait (i.e. the driver has no knowledge of the multithreading, so it is not event driven and instead uses a busy wait that consumes all CPU time), then the RTOS scheduler has no idea that is happening, so it will schedule the task just as it would any other task. Therefore, if the task is the highest-priority ready-state task it will use all the CPU time, and if there are other tasks of equal priority, it will share the CPU time with those tasks.
On the other hand, if the driver is written to make use of an RTOS (be that Zephyr, FreeRTOS, or any other) then it can use the RTOS primitives to create a much more efficient event-driven execution pattern. I can't see how the different schedulers you mention would behave any differently in this respect. For example, how can Zephyr know that a task it didn't know the application writer was going to create was going to call a library function it had no previous knowledge of, and that the library function was going to use a busy wait?
Related
I came across the statement that processes ready for execution in the ready queue are given control of the CPU by the scheduler. The scheduler selects a process based on its scheduling algorithm, gives the selected process control of the CPU, and later preempts it if the scheduling is preemptive. What I would like to know is this: if the CPU's processing unit is busy executing a process, who exactly preempts and schedules the processes while the processing unit is not available?
Now I want to share my thoughts about the OS
(and I'm sorry my English is not very fluent).
What do you think about the OS? Do you think it's 'active'?
No. In my opinion, the OS is just a pile of inert code in memory,
and much of this code consists of interrupt handler functions (we just call this code the 'kernel source code').
OK. Now the CPU is executing process A, and suddenly an interrupt occurs. This interrupt may occur because of the timer tick or because of a read system call; in any case, an interrupt occurs. The CPU then jumps to the corresponding interrupt handler function (the jump happens because the CPU hardware is designed that way). As said previously, this interrupt handler function is part of the OS kernel source code.
The CPU will execute this code. And what will this code do? This code will schedule, and the CPU will then execute whatever it selects.
Everything happens in the context of a process (Linux calls these lightweight processes, but it's the same idea).
Process scheduling generally occurs either as part of a system service call or as part of an interrupt.
In the case of a system service call, the process may determine it cannot execute so it invokes the scheduler to change the context to a new process.
The OS will schedule timer interrupts in which it can do scheduling. Scheduling can also occur in other types of interrupts. Interrupts are handled in the context of the current process.
Having just finished a book on computer architecture, I find myself not completely clear on where the scheduler is running.
What I'm looking to have clarified is where the scheduler runs - does it have its own core assigned to run it and nothing else, or is the "scheduler" in fact just a more abstract algorithm that is implemented in every thread being executed - e.g. upon preemption of a thread, a swithToFrom() command is run?
I don't need specifics according to windows x/linux x/mac os x, just in general.
No, the scheduler does not run on its own core. In fact, multi-threading was common long before multi-core CPUs were.
The best way to see how scheduler code interacts with thread code is to start with a simple, cooperative, single-core example.
Suppose thread A is running and thread B is waiting on an event. thread A posts that event, which causes thread B to become runnable. The event logic has to call the scheduler, and, for the purposes of this example, we assume that it decides to switch to thread B. At this point in time the call stack will look something like this:
thread_A_main()
post_event(...)
scheduler(...)
switch_threads(threadA, threadB)
switch_threads will save the CPU state on the stack, save thread A's stack pointer, and load the CPU stack pointer with the value of thread B's stack pointer. It will then load the rest of the CPU state from the stack, where the stack is now stack B. At this point, the call stack has become
thread_B_main()
wait_on_event(...)
scheduler(...)
switch_threads(threadB, threadC)
In other words, thread B has now woken up in the state it was in when it previously yielded control to thread C. When switch_threads() returns, it returns control to thread B.
These kinds of manipulations of the stack pointer usually require some hand-coded assembler.
Add Interrupts
Thread B is running and a timer interrupts occurs. The call stack is now
thread_B_main()
foo() //something thread B was up to
interrupt_shell
timer_isr()
interrupt_shell is a special function. It is not called. It is preemptively invoked by the hardware. foo() did not call interrupt_shell, so when interrupt_shell returns control to foo(), it must restore the CPU state exactly. This is different from a normal function, which returns leaving the CPU state according to calling conventions. Since interrupt_shell follows different rules to those stated by the calling conventions, it too must be written in assembler.
The main job of interrupt_shell is to identify the source of the interrupt and call the appropriate interrupt service routine (ISR) which in this case is timer_isr(), then control is returned to the running thread.
Add preemptive thread switches
Suppose timer_isr() decides that it's time for a time slice and that thread D is to be given some CPU time:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
timer_isr()
scheduler()
Now, scheduler() can't call switch_threads() at this point because we are in interrupt context. However, it can be called soon after, usually as the last thing interrupt_shell does. This leaves the thread B stack saved in this state
thread_B_main()
foo() //something thread B was up to
interrupt_shell
switch_threads(threadB, threadD)
Add Deferred Service Routines
Some OSes do not allow you to do complex logic like scheduling from within ISRs. One solution is to use a deferred service routine (DSR), which runs at a higher priority than threads but lower than interrupts. DSRs are used so that, while scheduler() still needs to be protected from being preempted by DSRs, ISRs can be executed without a problem. This reduces the number of places where the kernel has to mask (switch off) interrupts to keep its logic consistent.
I once ported some software from an OS that had DSRs to one that didn't. The simple solution to this was to create a "DSR thread" that ran higher priority than all other threads. The "DSR thread" simply replaces the DSR dispatcher that the other OS used.
Add traps
You may have observed in the examples I've given so far, we are calling the scheduler from both thread and interrupt contexts. There are two ways in and two ways out. It looks a bit weird but it does work. However, moving forward, we may want to isolate our thread code from our Kernel code, and we do this with traps. Here is the event posting redone with traps
thread_A_main()
post_event(...)
user_space_scheduler(...)
trap()
interrupt_shell
kernel_space_scheduler(...)
switch_threads(threadA, threadB)
A trap causes an interrupt or an interrupt-like event. On the ARM CPU they are known as "software interrupts" and this is a good description.
Now all calls to switch_threads() begin and end in interrupt context, which, incidentally, usually happens in a special CPU mode. This is a step towards privilege separation.
As you can see, scheduling wasn't built in a day. You could go on:
Add a memory mapper
Add processes
Add multiple Cores
Add hyperthreading
Add virtualization
Happy reading!
Each core is separately running the kernel, and cooperates with other cores by reading / writing shared memory. One of the shared data structures maintained by the kernel is the list of tasks that are ready to run, and are just waiting for a timeslice to run in.
The kernel's process / thread scheduler runs on the core that needs to figure out what to do next. It's a distributed algorithm with no single decision-making thread.
Scheduling doesn't work by figuring out what task should run on which other CPU. It works by figuring out what this CPU should do now, based on which tasks are ready to run. This happens whenever a thread uses up its timeslice, or makes a system call that blocks. In Linux, even the kernel itself is pre-emptible, so a high-priority task can be run even in the middle of a system call that takes a lot of CPU time to handle. (e.g. checking the permissions on all the parent directories in an open("/a/b/c/d/e/f/g/h/file", ...), if they're hot in VFS cache so it doesn't block, just uses a lot of CPU time).
I'm not sure if this is done by having the directory-walking loop in (a function called by) open() "manually" call schedule() to see if the current thread should be pre-empted or not. Or maybe tasks waking up will have set some kind of hardware timer to fire an interrupt, and the kernel in general is pre-emptible if compiled with CONFIG_PREEMPT.
There's an inter-processor interrupt mechanism to ask another core to schedule something on itself, so the above description is an over-simplification. (e.g. for Linux run_on to support RCU sync points, and TLB shootdowns when a thread on another core uses munmap). But it's true that there isn't one "master control program"; generally the kernel on each core decides what that core should be running. (By running the same schedule() function on a shared data-structure of tasks that are ready to run.)
The scheduler's decision-making is not always as simple as taking the task at the front of the queue: a good scheduler will try to avoid bouncing a thread from one core to another (because its data will be hot in the caches of the core it was last running on, if that was recent). So to avoid cache thrashing, a scheduler algorithm might choose not to run a ready task on the current core if it was just running on a different core, instead leaving it for that other core to get to later. That way a brief interrupt-handler or blocking system call wouldn't result in a CPU migration.
This is especially important in a NUMA system, where running on the "wrong" core will be slower long-term, even once the caches populate.
There are three types of general schedulers:
Job scheduler also known as the Long term scheduler.
Short term scheduler also known as the CPU scheduler.
Medium term scheduler, mostly used to swap jobs so there can be non-blocking calls. This is usually for avoiding having too many I/O jobs, or too few.
In an operating systems book there is a nice automaton of the states these schedulers move jobs to and from. The job scheduler moves things from the job queue to the ready queue, and the CPU scheduler moves things from the ready queue to the running state. The scheduling algorithm is just like any other software: it must be run on a CPU/core, and it is most likely part of the kernel somewhere.
It doesn't make sense for the scheduler itself to be preempted. The jobs inside the queues can be preempted when running, for I/O, etc. No, the kernel does not have to schedule itself to run: it simply gets CPU time without scheduling itself. And yes, its data is most likely in RAM; I'm not sure if it is worth storing in the CPU cache.
Since idle tasks are generally used to safely consume CPU time that is not required by other software, what would happen if there was no idle task? Would the RTOS just automatically create one? Also, what other purpose do idle tasks serve other than consuming time?
what would happen if there was no idle task? Would the RTOS just automatically create one?
I doubt any RTOS would do that. If there were no idle task, then the list of runnable tasks could become empty and the scheduler would probably crash. Generally, the single most important reason for the idle thread's existence is to make the list of runnable tasks "never empty". This simplifies the code of the scheduler.
Also, what other purpose do idle tasks serve other than consuming time?
In some systems idle task can perform some low priority activities (for example some garbage collection). It can also switch the core to low-power mode, especially on embedded devices. In that case when the idle task is run it means that there is nothing more to do, so the core can be stopped and wait for the next event (hardware interrupt or timeout) without using too much power. When the next event arrives the core is awakened by hardware and the event is processed. Either some "normal" thread will start running, or - if there is still nothing more to do - idle thread will resume and again switch to low-power mode.
If the CPU clock is running, instructions must be executed; if there were no idle task, then your OS is broken. The idle loop is an intrinsic part of the RTOS, not a user task, so the RTOS does not need to "create one automatically".
A low-priority user task that never yields will prevent the idle loop from running, which is not necessarily a good thing. Such a task is not the same thing as the idle loop. For one thing, any CPU-usage tools the RTOS supports would report 100% usage all the time if such a task existed - execution of the idle loop is not included in CPU usage, because the CPU is always ready to respond to any interrupt event when idle; the loop never causes any ready task to be delayed.
The idle task, or "idle loop", is typically just that: an empty loop that the program counter is set to when there is nothing else to do. In some architectures the loop may include a "wait-for-interrupt" (WFI) instruction that stops core execution (stops clocking the core) to reduce power consumption. Since any context switch necessarily requires an interrupt to occur, if WFI is supported the processor can simply stop in this loop.
Some RTOSes support user hooks for the idle loop: low-priority run-to-completion functions that can operate in the background in the idle loop's context.
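A typical idle loop of the kind described above might look like the sketch below on an ARM Cortex-M class core (a bare-metal sketch, not standalone-runnable; prv_background_work() stands for a hypothetical idle hook, and __WFI() is the CMSIS wait-for-interrupt intrinsic):

```c
/* Hypothetical idle loop sketch - runs only when nothing else is ready. */
static void prv_idle_loop(void) {
    for (;;) {
        prv_background_work();  /* hypothetical low-priority cleanup hook */
        __WFI();                /* stop the core clock until an interrupt */
    }
}
```

Any interrupt wakes the core out of __WFI(); if the ISR makes a task ready, the scheduler switches to it, otherwise control falls back into the loop and the core stops again.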
what other purpose do idle tasks serve other than consuming time?
Most commonly, it does two things:
1. Garbage(resource) collection or cleaning
2. Initiate steps to reduce power consumption
I have this question related to embedded systems, this expression which i found in a source file of a dispatcher:
What is a multi-rate non preemptive OS / Dispatcher ?
I know a little about dispatchers and non-preemptive RTOSes based on my research, but I didn't find the combined expression anywhere.
What I can understand is that the dispatcher is the entity responsible for adding a process/thread to the run queue. Non-preemptive means that once a task begins to run, it cannot be stopped by another task until it finishes, and multi-rate means that the dispatcher will keep running tasks, like a while(1) loop.
Any help will be appreciated, thanks
Note: the multi-rate tag doesn't exist yet on SO, so it's not mentioned :p
This article provides a great explanation and example of a multi-rate non-preemptive scheduler: Multi-Rate Main Loop Tasking
To summarize, imagine a scheduler or main loop that calls a series of functions that each represent a different task. Non-preemptive means that a task cannot preempt another task but that each task yields (returns) back to the scheduler (main loop) so that the scheduler can run another task. Multi-rate means that the scheduler can call each task function at a different periodic rate. In other words, not every task function is called every time through the main loop and some task functions are called more often than others.
Who schedules the scheduler?
Which is the first task created, and how is it created? Isn't some resource or memory required for it? Isn't it a chicken-and-egg scenario?
Isn't the scheduler a task? Does it get the CPU at the end of each time slice to check which task needs to be given the CPU next?
Are there any good links that make a person think about and understand these concepts deeply, rather than spilling out some theory that needs to be learned by heart?
The scheduler is scheduled by
an (external) event such as an interrupt, (disk done, mouse click, timer tick)
or an internal event (such as the completion of a thread, the signalling by a thread that it needs to wait for something, or the signalling of a thread that it has released a resource, or a trap caused by a thread doing something illegal like division by zero)
In short, it is triggered by any event that might require that the set of tasks to be run and/or the priorities of those tasks to be reevaluated. The scheduler decides which task(s) run next, and passes control to the next task.
Typically, this "scheduling" of the scheduler is caused by the code associated with a hardware interrupt, or code associated with a system call.
While you can think of the scheduler as being a real thread, in practice it doesn't need to be implemented that way, because it executes with higher priority than any other task. Sophisticated OSes may in fact set aside a special thread that is the scheduler, and mark it busy when the scheduler gets control. That makes it pretty, but the bogus thread isn't scheduled by the scheduler.
One can have multiple schedulers: the highest priority one (e.g., the one we just described), and other schedulers which really are threads, and are run like other user tasks. Such lower priority schedulers tend to be used to manage actions which occur at much longer intervals, such as background jobs.
It is usually invoked periodically by a timed CPU interrupt.