FreeRTOS task never gets swapped out

According to the FreeRTOS task scheduling documentation, the kernel can swap out a task even if that task is currently executing and hasn't called any blocking function. So once the kernel gets the clock tick interrupt and executes its ISR, it can schedule another task to run afterwards.
On my system with FreeRTOS, I launch 5 tasks, each programmed to delay itself at some point, so I can see all the tasks being swapped in and out, and each task executes at some point. But if I enter an infinite loop inside a task, that task NEVER gets swapped out.
How is that possible?

Firstly, you need to ensure that configUSE_TIME_SLICING is set to 1. This enables the round-robin scheduler, which allows the scheduler to do what you are expecting.
Also, it will only switch away from the spinning task to another task of equal or higher priority; a lower priority task will never run while the loop spins.
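As a minimal sketch (assuming a standard port; configUSE_PREEMPTION must also be 1, which is the default), two equal-priority busy loops will time-slice on every tick, while anything at a lower priority starves:

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"

    /* In FreeRTOSConfig.h, both of these must be 1 for the tick
       interrupt to round-robin between ready tasks of equal priority:
         #define configUSE_PREEMPTION    1
         #define configUSE_TIME_SLICING  1 */

    /* A busy-looping task: never blocks, never yields. */
    static void vBusyTask(void *pvParameters)
    {
        volatile uint32_t ulCounter = 0;
        (void)pvParameters;
        for (;;) {
            ulCounter++;
        }
    }

    int main(void)
    {
        /* Two instances at the SAME priority: the tick interrupt swaps
           them in and out. A task at a lower priority than these would
           never get the CPU while they spin. */
        xTaskCreate(vBusyTask, "busy1", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(vBusyTask, "busy2", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        vTaskStartScheduler(); /* does not return if the kernel starts */
        for (;;);
    }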


Is a context switch needed for the short-term scheduler to run?

My understanding is that the short-term scheduler is a module in the kernel (a process in itself, I guess?). It is run frequently to check and decide whether it should preempt the running process (perhaps because of SJF scheduling and a shorter job has arrived).
If that is correct, my intuition suggests that for the short-term scheduler to run, a context switch has to happen:
Save the state of the running process
Load the new process (the short-term scheduler)
Let it decide which process to run (let's say next_process)
next_process is allocated the CPU and thus its PCB is loaded.
However I don't think this is correct, judging from what my teacher has taught us.
How and why am I wrong?
How can the short-term scheduler run without a context switch happening for it?
Let's start by assuming a task has a state that is one of:
"currently running". If there are 8 CPUs then a maximum of 8 tasks can be currently running on a CPU at the same time.
"ready to run". If there are 20 tasks and 8 CPUs, then there may be 12 tasks that are ready to run on a CPU.
"blocked". This is waiting for IO (disk, network, keyboard, ...), waiting to acquire a mutex, waiting for time to pass (e.g. sleep()), etc. Note that this includes things the task isn't aware of (e.g. fetching data from swap space because the task tried to access data that isn't actually in memory).
Sometimes a task will do something (call a kernel function like read(), sleep(), pthread_mutex_lock(), etc.; or access data that isn't in memory) that causes the task to switch from the "currently running" state to the "blocked" state. When this happens, some other part of the kernel (e.g. the virtual file system layer, virtual memory management, ...) will tell the scheduler that the currently running task has blocked (and needs to be put into the "blocked" state); and the scheduler will have to find something else for the CPU to do, which will be either finding another task for the CPU to run (and switching that task from "ready to run" to "currently running") or putting the CPU into a power saving state (because there are no tasks for the CPU to run).
Sometimes something that a task was waiting for occurs (e.g. the user presses a key, a mutex is released, data arrives from swap space, etc.). When this happens, some other part of the kernel (e.g. the virtual file system layer, virtual memory management, ...) will tell the scheduler that the task needs to leave the "blocked" state. The scheduler then has to decide whether the task will go from "blocked" to "ready to run" (and the tasks that were using CPUs will continue using them), or from "blocked" to "currently running" (which will either cause a currently running task to be preempted and go from "currently running" to "ready to run", or cause a previously idle CPU to be taken out of a power saving state). Note that in a well designed OS this decision will depend on things like task priorities (e.g. if a high priority task unblocks it preempts a low priority task, but if a low priority task unblocks it doesn't preempt a high priority task).
On modern systems these 2 things (tasks entering and leaving the "blocked" state) are responsible for most task switches.
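To make that unblock decision concrete, here is a minimal sketch; every name in it (unblock_task(), preempt(), and so on -- the same hypothetical names used further below) is illustrative, not from any real kernel:

    #include <stdbool.h>

    typedef enum { RUNNING, READY, BLOCKED } task_state_t;

    typedef struct task {
        task_state_t state;
        int priority;                 /* higher number = higher priority */
    } task_t;

    /* Assumed helpers, left as stubs for the sketch. */
    extern bool    any_cpu_idle(void);
    extern void    wake_idle_cpu(task_t *t);   /* leave power saving state */
    extern task_t *lowest_priority_running_task(void);
    extern void    preempt(task_t *victim, task_t *winner);
    extern void    ready_queue_insert(task_t *t);

    /* Called by other kernel subsystems (VFS, virtual memory, ...) when
       the thing a task was waiting for has happened. */
    void unblock_task(task_t *t)
    {
        if (any_cpu_idle()) {
            t->state = RUNNING;           /* "blocked" -> "currently running" */
            wake_idle_cpu(t);
            return;
        }
        task_t *victim = lowest_priority_running_task();
        if (t->priority > victim->priority) {
            victim->state = READY;        /* high priority preempts low */
            ready_queue_insert(victim);
            t->state = RUNNING;
            preempt(victim, t);
        } else {
            t->state = READY;             /* low priority just waits */
            ready_queue_insert(t);
        }
    }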
Other things that can cause task switches are:
a task terminates itself or crashes. This is mostly the same as a task blocking (some other part of the kernel informs the scheduler and the scheduler has to find something else for the CPU to do).
a new task is created. This is mostly the same as a task unblocking (some other part of the kernel informs the scheduler and the scheduler decides if the new task will preempt a currently running task or cause a CPU to be taken out of a power saving state).
the scheduler is frequently switching between 2 or more tasks to create the illusion that they're all running at the same time (time multiplexing). On a well designed modern system this only ever happens when there are more tasks at the same priority than there are available CPUs and those tasks don't block often enough, which is extremely rare. In some cases (e.g. the "earliest deadline first" scheduling algorithm in a real-time system) this might be impossible.
My understanding is that the short-term scheduler is a module in the kernel (a process in itself, I guess?)
The scheduler is typically implemented as a set of functions that other parts of the kernel call - e.g. maybe a block_current_task(reason) function (where the scheduler might have to decide which other task to switch to) and an unblock_task(taskID) function (where, if the scheduler decides the unblocked task should preempt a currently running task, it already knows which task it wants to switch to). These functions may call an even lower level function to do the actual context switch (e.g. a switch_to_task(taskID)), where that lower level function may:
do time accounting (work out how much time has passed since last time, and use that to update statistics so that people can know things like how much CPU time each task has consumed, how much time a CPU has been idle, etc).
if there was a previously running task (if the CPU wasn't previously idle), change the previously running task's state from "currently running" to something else ("ready to run" or "blocked").
if there was a previously running task, save the previously running task's "CPU state" (register contents, etc.) somewhere (e.g. in some kind of structure).
change the state of the next task to "currently running" (regardless of what the next task's state was previously).
load the next task's "CPU state" (register contents, etc) from somewhere.
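Continuing the same hypothetical names, here is a sketch of that lower-level switch_to_task() following the steps above; save_cpu_state()/load_cpu_state() stand in for the architecture-specific assembly:

    #include <stdint.h>
    #include <stddef.h>

    typedef enum { RUNNING, READY, BLOCKED } task_state_t;
    typedef struct { uintptr_t sp; /* plus general registers, flags, ... */ } cpu_state_t;

    typedef struct task {
        task_state_t state;
        cpu_state_t  saved;           /* the task's "CPU state" lives here */
        uint64_t     cpu_time_used;   /* for the time accounting step */
    } task_t;

    extern void     save_cpu_state(cpu_state_t *s);        /* arch-specific asm */
    extern void     load_cpu_state(const cpu_state_t *s);  /* arch-specific asm */
    extern uint64_t read_timestamp(void);

    static task_t  *current;          /* NULL if this CPU was idle */
    static uint64_t last_switch;

    void switch_to_task(task_t *next)
    {
        /* Time accounting: bill the elapsed time to the outgoing task. */
        uint64_t now = read_timestamp();
        if (current != NULL) {
            current->cpu_time_used += now - last_switch;
            /* The caller already set current->state to READY or BLOCKED;
               save its registers so it can be resumed later. */
            save_cpu_state(&current->saved);
        }
        last_switch = now;

        /* Make 'next' the running task and restore its registers.
           load_cpu_state() does not return here: it resumes 'next'
           wherever it last left off. */
        next->state = RUNNING;
        current = next;
        load_cpu_state(&next->saved);
    }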
How can the short-term scheduler run without a context switch happening for it?
The scheduler is just a group of functions in the kernel (and not a process).

Delay Celery task based on condition

Is there any way to delay a Celery task from running based on a condition? Before it moves from scheduled to active, I would like to perform a quick check to see whether my machine can run the task, based on the arguments provided and my machine's state at the time. If it can't, it should halt the scheduled queue and wait until the condition is satisfied.
I've looked at the following options, but they didn't seem to cut it:
Celery's Signals: the closest thing I could find is task_prerun(), but regardless of what I put in there, the task still gets run, and it doesn't stop the other scheduled tasks from running. There's also worker_ready(), but that doesn't look at the upcoming task's arguments to do the check.
Database Lock (also here as well): I can have each of the tasks start running normally and then do the check at the beginning of the task's run. But if I set a periodic interval to check whether the condition is met, I lose the order of the active queue, since the condition can be met at any point and any one of the many active tasks will be able to continue. This is where the database lock comes in, and it is so far the most feasible solution: I take a lock every time I do the check, and if the condition is not met, it stays locked. When the condition is finally met, I release the lock for the next item in the queue, preserving the queue's original order.
I find it surprising that Celery doesn't have this functionality to specify if/when the next item in the scheduled queue is ready to be run.

What is a multi-rate non-preemptive OS?

I have a question related to embedded systems, about an expression I found in the source file of a dispatcher:
What is a multi-rate non-preemptive OS / dispatcher?
I know a little about dispatchers, non-preemptive systems, and RTOSes from my research, but I didn't find the combined expression anywhere.
What I can understand is that the dispatcher is the entity responsible for adding a process/thread to the run queue. Non-preemptive means that once a task begins to run it cannot be stopped by another task until it finishes, and multi-rate means that the dispatcher will keep running tasks like a while(1) loop.
Any help will be appreciated, thanks
Note: the multi-rate tag doesn't exist yet on SO so it's not mentioned :p
This article provides a great explanation and example of a multi-rate non-preemptive scheduler: Multi-Rate Main Loop Tasking
To summarize, imagine a scheduler or main loop that calls a series of functions that each represent a different task. Non-preemptive means that a task cannot preempt another task but that each task yields (returns) back to the scheduler (main loop) so that the scheduler can run another task. Multi-rate means that the scheduler can call each task function at a different periodic rate. In other words, not every task function is called every time through the main loop and some task functions are called more often than others.
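Here is a minimal sketch of that idea (the task names and rates are made up; the tick counter is assumed to be incremented by a 1 ms timer ISR that isn't shown):

    #include <stdint.h>

    /* Application tasks: plain functions that run to completion. */
    extern void read_sensors(void);
    extern void run_control(void);
    extern void update_display(void);

    volatile uint32_t g_ticks;   /* incremented by a 1 ms timer ISR */

    int main(void)
    {
        uint32_t last = 0;
        for (;;) {
            if (g_ticks == last)
                continue;                  /* wait for the next tick */
            last = g_ticks;

            read_sensors();                /* every tick: 1 kHz */
            if (last % 10u == 0u)
                run_control();             /* every 10th tick: 100 Hz */
            if (last % 100u == 0u)
                update_display();          /* every 100th tick: 10 Hz */
        }
    }

Note that each task function must return quickly: nothing can preempt it, so a task that hogs the loop delays every other rate.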

Switching from high priority task to low priority task in uCOS II

I'm new to RTOS (uCOS II) and learning it by reading the book written by the uCOS author. I have a doubt that I'm unable to find the answer to.
In uCOS the task with the highest priority is given the CPU, as per the scheduling algorithm. So suppose I write a uCOS application that creates two tasks: one with high priority (Prio = 1, for example) and the other with low priority (Prio = 9, for example).
If, for example, the highest priority task is waiting for an event, then the scheduler should start executing the next highest priority task? If that's correct, then what part of the code switches from the high priority task to the low priority one?
The three architecture-dependent pieces of code are:
1. Interrupt level context switch
2. Start highest priority task ready to run
3. Task level context switch
In case 1, after serving the interrupt, the scheduler returns to the highest priority task. Case 2 is invoked when we start the OS with OSStart().
Case 3 happens whenever a higher priority task is made ready; it is invoked by the timer interrupt.
Now, where exactly, or how exactly, does the scheduler assign the CPU to a lower priority task, given that the high priority task is waiting?
Thanks
Another way to consider your question is to ask yourself how the high priority task got into the waiting state in the first place. The answer to both questions is that the high priority task calls an RTOS routine such as GetEvent(). (I don't know whether that is a real uCOS-II routine -- I'm just generalizing.) The RTOS routine puts the high priority task into the waiting state (i.e. blocked), and then the RTOS scheduler finds the next highest priority task that is ready to run and switches to that task's context. The RTOS will have several blocking functions that allow for a task context switch, for example when you read from a queue or mailbox, or when you wait for a semaphore or mutex.
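For instance, using the real uC/OS-II services (a sketch; stack sizes and priorities are arbitrary), OSSemPend() is the point where the priority-1 task blocks and the scheduler hands the CPU to the priority-9 task:

    #include "ucos_ii.h"

    #define TASK_STK_SIZE 128u
    static OS_STK HighStk[TASK_STK_SIZE], LowStk[TASK_STK_SIZE];
    static OS_EVENT *Sem;

    /* Priority 1: OSSemPend() moves this task to the waiting state, and
       the scheduler switches to the next ready task. */
    static void HighTask(void *pdata)
    {
        INT8U err;
        (void)pdata;
        for (;;) {
            OSSemPend(Sem, 0u, &err);  /* block until the event arrives */
            /* ... handle the event ... */
        }
    }

    /* Priority 9: runs only while HighTask is pending. */
    static void LowTask(void *pdata)
    {
        (void)pdata;
        for (;;) {
            /* ... background work ... */
            OSTimeDly(10u);            /* give up the CPU for 10 ticks */
        }
    }

    int main(void)
    {
        OSInit();
        Sem = OSSemCreate(0u);
        /* Top-of-stack addresses assume a downward-growing stack
           (OS_STK_GROWTH == 1), which is the common case. */
        OSTaskCreate(HighTask, NULL, &HighStk[TASK_STK_SIZE - 1u], 1u);
        OSTaskCreate(LowTask,  NULL, &LowStk[TASK_STK_SIZE - 1u], 9u);
        OSStart();                     /* never returns */
        return 0;
    }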
The scheduler runs whenever a scheduling event occurs. In your example, that occurs when the high priority task calls the event wait. In general, OS calls that may block or yield cause the scheduler to run. The scheduler also runs on exit from ISRs, including the OS timer ISR.
In general, when the scheduler performs a context switch, it copies the current processor core registers to the outgoing task's control block, and copies the stored register values for the task being switched to into the processor registers, with the stack pointer and program counter copied last. The change to the program counter causes execution to continue in the new task, with the task's own stack, in the state it was in when it last blocked or was preempted. Preemption can occur when a scheduling event in an ISR causes a higher priority task to become ready.
The thing about uC/OS-II is that it is described in intricate detail in Jean Labrosse's book. The general principles of RTOSes, with examples using uC/OS-II, are also covered in this online course by Jack Ganssle.
The interrupt-level context switch is used for preemption: for example, when a low priority task is running and a high priority task needs to run (an OSTimeDly() timeout, say), the interrupt-level context switch pauses the low priority task and switches to the high priority one.
For a high-to-low priority switch, the high priority task must give up the CPU by calling a service that in turn invokes OS_Sched().

Who schedules the scheduler in an OS - isn't it a chicken-and-egg scenario?

Who schedules the scheduler?
Which is the first task created, and how is this first task created? Isn't any resource or memory required for it? Isn't it like a chicken-and-egg scenario?
Isn't the scheduler a task? Does it get the CPU at the end of each time slice to check which task should be given the CPU next?
Are there any good links that make a person think about and deeply understand all these concepts, rather than spilling out theory that has to be learned by heart?
The scheduler is scheduled by
an (external) event, such as an interrupt (disk done, mouse click, timer tick),
or an internal event (such as the completion of a thread, a thread signalling that it needs to wait for something, a thread signalling that it has released a resource, or a trap caused by a thread doing something illegal like division by zero).
In short, it is triggered by any event that might require the set of tasks to be run, or the priorities of those tasks, to be reevaluated. The scheduler decides which task(s) run next, and passes control to the next task.
Typically, this "scheduling" of the scheduler is caused by the code associated with a hardware interrupt, or code associated with a system call.
While you can think of the scheduler as a real thread, in practice it doesn't need to be implemented that way, because it is executed with higher priority than any other task. Sophisticated OSes may in fact set aside a special thread that is the scheduler, and mark it busy when the scheduler gets control. That makes it pretty, but the bogus thread isn't scheduled by the scheduler.
One can have multiple schedulers: the highest priority one (e.g., the one we just described), and other schedulers which really are threads, and are run like other user tasks. Such lower priority schedulers tend to be used to manage actions which occur at much longer intervals, such as background jobs.
It is usually invoked periodically by a timed CPU interrupt.
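Tying that together with the event-driven answer above, a minimal sketch: the scheduler is just a function that the event handlers call, so nothing has to "schedule" it. Every name here is hypothetical, not from any real kernel:

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { RUNNING, READY, BLOCKED } state_t;
    typedef struct task { state_t state; uint64_t wakeup_tick; } task_t;

    static volatile uint64_t system_ticks;
    static task_t *current;                   /* task running on this CPU */

    extern void schedule(void);               /* picks the next task and switches */
    extern bool wake_sleepers(uint64_t now);  /* unblocks tasks whose sleep expired */

    /* Event 1: the periodic timer interrupt. The ISR simply calls the
       scheduler when something may have become ready to run. */
    void timer_tick_isr(void)
    {
        uint64_t now = ++system_ticks;
        if (wake_sleepers(now))
            schedule();                       /* a plain function call */
    }

    /* Event 2: a blocking system call, e.g. sleep(). The running task
       blocks itself and then calls the scheduler directly. */
    void sys_sleep(uint64_t ticks)
    {
        current->wakeup_tick = system_ticks + ticks;
        current->state = BLOCKED;
        schedule();                           /* again, just a function call */
    }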