In an operating systems subject I'm taking this semester, we were asked this question:
What are the techniques that can be used to protect critical sections?
I tried searching online but couldn't find anything.
Could anyone please briefly explain critical sections and the techniques used to protect them?
First of all, a critical section applies only to parallel execution: it is a piece of code that must not be executed by more than one thread/process at a given time.
A problem occurs when two or more threads or processes want to write to the same location at once,
which can potentially leave the data in an incorrect state or cause a deadlock.
Even a piece of code as innocent-looking as i += 1 has to be protected in a parallel world -- you have to remember that the execution of a thread or process can be suspended at any time by the OS.
The basic synchronization mechanisms are mutexes and monitors.
With semaphores one can limit access to resources.
a) A process must first declare its intention to enter
the critical section by raising a flag.
b) Next, the critical section is entered and upon
leaving, the flag is lowered.
c) If the process is suspended after raising the flag
but before it is able to enter the critical section,
the other process will see the raised flag and will not
enter until the flag is lowered.
Refer to Galvin et al., Operating System Concepts, 8th edition, Chapter 6, Section 6.9, page 257. It says, "If two critical sections are instead executed concurrently, the result is equivalent to their sequential execution in some unknown order. Although this property is useful in many application domains, in many cases we would like to make sure that a critical section forms a single logical unit of work that either is performed in its entirety or is not performed at all." When is that property useful? Please explain, thanks in advance!
The property is useful (because it increases potential parallelism) when the order that the critical sections are executed is irrelevant.
For a more complex example, let's say you have a thread fetching the next block from a file, a thread compressing the current block, and a thread sending the previously compressed block to a network connection.
In this case there are obvious constraints (you can't compress the current block while it's still being fetched, and you can't send the compressed block to a network connection until it's finished being compressed), but there are also obvious opportunities for parallelism where the order is irrelevant (you don't care if the next block is fetched before or after or while the current block is compressed, you don't care if the current block is compressed before or after or while the previously compressed block is being sent to network, and you don't care if the next block is fetched before or after or while the previously compressed block is being sent to network).
I'm developing a real-time system with FreeRTOS on an STM3240G board.
The system contains several different tasks (GUI, KB, ModBus, Ctrl, etc.).
The tasks have different priorities.
The GUI seems to display a little slowly.
So I used profiler software to see what is going on between the different tasks
during a run. This profiler shows me which task was running at each moment (microsecond resolution) and which interrupts had arrived.
The profiler also lets me "mark" different locations in the code so I know
when execution was there. So I ran the program and made a recording.
I looked at the recording and saw that (for example) the Ctrl task sat between two
lines of code for 15 milliseconds (this duration varies); there was no
task change and no interrupt arrived, and after this time the system continued normally from that point, according to the recording and my marks.
I tried disabling different interrupts, without any success.
Has anyone any idea what it could be?
On the eval board, there is a MIPI connector that supports ETM trace - a considerable luxury/advantage over other development boards!
If you also have one of the more expensive debug adapters that also support ETM trace (like for example, uTrace or J-Trace or
ULINKpro or I-jet Trace), you should use it to trace the entire control flow without having to instrument tasks and ISRs.
Otherwise, you should re-check if really every IRQ handler has been instrumented (credits to #RealtimeRik, who pointed this out) at a low-enough level so that the profiler can really spot it.
Especially if you are using third-party libraries, it may be that some IRQs are serviced by handlers whose code you (or the profiler) don't have.
As I have had to learn this the hard way myself, I suggest you review the NVIC settings carefully to check whether there is an ISR you haven't been aware of.
Another question is how the profiler internally works.
If it is based on ETM/TPIU or ITM/SWO tracing, see above.
If it creates and counts a huge number of snapshots, there might be some systematic effect that prevents snapshots from being taken in a particular part of the software:
Could it be that a non-maskable interrupt or exception handler is running in a way that cannot be interrupted by the mechanism that collects snapshots?
Could it be that the timing of the control task correlates (by some coincidence) to a timing signal used for snapshots?
What happens if you insert some time-consuming extra code in front of the unexpected "profiling gap" (e.g., some hundreds or thousands of NOPs)?
I was reading Critical Section Problem from Operating System Concepts by Peter B. Galvin.
According to it
1) Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
And
2) Bounded waiting: There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
I don't understand what the author is trying to say in either case.
Could you please help me understand by giving a proper example related to these definitions?
Thank you.
First, let me introduce some terminology. A critical section (CS) is a sequence of instructions that can be executed by at most one process at the same time. When using critical sections, the code can be broken down into the following sections:
// Some arbitrary code (such as initialization).
EnterCriticalSection(cs);
// The code that constitutes the CS.
// Only one process can be executing this code at the same time.
LeaveCriticalSection(cs);
// Some arbitrary code. This is called the remainder section.
The first section contains some code such as initialization code. We don't have a name for that section. The second section is the code that tries to enter the CS. The third section is the CS itself. The fourth section is the code that leaves the critical section. The fifth and last section is called the remainder section, which can contain any code. Note that the CS itself can differ between processes (consider, for example, a process that receives requests from clients and inserts them into a queue, and another process that takes requests off the queue and processes them).
To make sure that an implementation of critical sections works properly, there are three conditions that must be satisfied. You mentioned two of them (which I will explain next). The third is mutual exclusion which is obviously vital. It's worth noting that mutual exclusion applies only to the CS and the leave section. However, the other three sections are not exclusive.
The first condition is progress. The purpose of this condition is to make sure that either some process is currently in the CS and doing some work or, if there was at least one process that wants to enter the CS, it will and then do some work. In both cases, some work is getting done and therefore all processes are making progress overall.
Progress: If no process is executing in its critical section and
some processes wish to enter their critical sections, then only those
processes that are not executing in their remainder section can
participate in deciding which will enter its critical section next,
and this selection cannot be postponed indefinitely.
Let's understand this definition sentence by sentence.
If no process is executing in its critical section
If there is a process executing in its critical section (even though not stated explicitly, this includes the leave section as well), then this means that some work is getting done. So we are making progress. Otherwise, if this was not the case...
and some processes wish to enter their critical sections
If no process wants to enter their critical sections, then there is no more work to do. Otherwise, if there is at least one process that wishes to enter its critical section...
then only those processes that are not executing in their remainder section
This means we are talking about those processes that are executing in either of the first two sections (remember, no process is executing in its critical section or the leave section)...
can participate in deciding which will enter its critical section next,
Since there is at least one process that wishes to enter its CS, somehow we must choose one of them to enter its CS. But who's going to make this decision? Those processes that have already requested permission to enter their critical sections have the right to participate in making this decision. In addition, those processes that may wish to enter their CSs but have not yet requested permission to do so (meaning they are executing in the first section) also have the right to participate in making this decision.
and this selection cannot be postponed indefinitely.
This states that it will take a limited amount of time to select a process to enter its CS. In particular, no deadlock or livelock will occur. So after this limited amount of time, a process will enter its CS and do some work, thereby making progress.
Now I will explain the last condition, namely bounded waiting. The purpose of this condition is to make sure that every process gets the chance to actually enter its critical section so that no process starves forever. However, please note that neither this condition nor progress guarantees fairness. An implementation of a CS doesn't have to be fair.
Bounded waiting: There exists a bound, or limit, on the number of
times other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section and
before that request is granted.
Let's understand this definition sentence by sentence, starting from the last one.
after a process has made a request to enter its critical section and
before that request is granted.
In other words, if there is a process that has requested to enter its CS but has not yet entered it. Let's call this process P.
There exists a bound, or limit, on the number of
times other processes are allowed to enter their critical sections
While P is waiting to enter its CS, other processes may be waiting as well and some process is executing in its CS. When it leaves its CS, some other process has to be selected to enter the CS which may or may not be P. Suppose a process other than P was selected. This situation might happen again and again. That is, other processes are getting the chance to enter their CSs but never P. Note that progress is being made, but by other processes, not by P. The problem is that P is not getting the chance to do any work. To prevent starvation, there must be a guarantee that P will eventually enter its CS. For this to happen, the number of times other processes enter their CSs must be limited. In this case, P will definitely get the chance to enter its CS.
I would like to mention that the definition of a CS can be generalized so that at most N processes are executing in their critical sections where N is any positive integer. There are also variants of reader-writer critical sections.
Mutual exclusion
No two processes can be inside the critical section at the same time; only one process can be in the critical section at any point in time.
Progress
No process running outside the critical section should block another interested process from entering the critical section when the critical section is in fact free.
For example, progress is violated if P1 (which is running outside the critical section) blocks P2 from entering the critical section while the critical section is in fact free.
Bounded waiting
No process should have to wait forever to enter the critical section;
there should be a bound on the number of chances other processes get to enter before it.
If bounded waiting is not satisfied, then starvation is possible.
Note
No assumptions may be made about hardware or processing speed.
Overall, a solution to the critical section problem must satisfy three conditions:
Mutual Exclusion: Exclusive access of each process to the shared memory. Only one process can be in its critical section at any given time.
Progress: If no process is in its critical section, and if one or more threads want to execute their critical section then any one of these threads must be allowed to get into its critical section.
Bounded Waiting: After a process makes a request for getting into its critical section, there is a limit for how many other processes can get into their critical section, before this process's request is granted. So after the limit is reached, system must grant the process permission to get into its critical section. The purpose of this condition is to make sure that every process gets the chance to actually enter its critical section so that no process starves forever.
Requirements for deciding whether a synchronization solution is correct:
1) Mutual exclusion: at any point in time, only one process may be inside the critical section.
2) Progress: a process that is outside the critical section and does not want to enter it must not stop another interested process from entering its critical section. If a process can stop another interested process while the critical section is free, progress is not guaranteed; otherwise it is.
3) Bounded waiting: the waiting time of a process outside the critical section must be limited.
4) Architectural neutrality: no assumptions are made about the hardware.
(In simple words)
Bounded waiting is violated when a single process gets the turn to enter the critical section every time, even though other processes are also interested in entering.
I have been learning OS, in which it is written that there are two types of processes:
1) CPU-bound processes
2) I/O-bound processes
and elsewhere it's:
1) Independent processes
2) Cooperative processes
The same goes for threads:
1) Single Level Thread.
2) Multilevel Thread.
and
1)User Level Thread
2)Kernel Level Thread.
Now my confusion is: if someone asks me about the types of processes and threads, which of the above should I tell them?
Kindly make my concept clear.
I shall remain thankful to you!
Processes are categorized in two ways. The first one you mentioned is an event-specific categorization, and the second is based on their nature. If someone asks you, you should ask for clarification as to which categorization he/she wants. If none is given, state the first (default) categorization, as shown below:
Event-specific categorization of processes
a) CPU Bound Process: Processes that spend the majority of their time simply using the CPU (doing calculations).
b) I/O Bound Process: Processes that are associated with input/output-based activity like reading from files, etc.
Category of processes based on their nature
a) Independent Process: a process that does not need any external factor to get triggered.
b) Cooperating Process: a process that works on the occurrence of an event and whose outcome affects some other part of the system.
Threads, on the other hand, have only one classification based on their nature (single-level threads and multi-level threads).
In modern operating systems, threads also operate at two levels: system (kernel) threads and user-level threads. This is generally not treated as a classification, though some texts freely do so; that is a misuse.
If you've further doubts, leave a comment below.
Basically there are two types of processes:
Independent processes.
Cooperating processes.
During execution, a process is usually a mix of CPU-bound and I/O-bound phases.
CPU bound: time the process spends in the processor performing computation.
I/O bound: time in which the process performs input/output operations, e.g., taking input from the keyboard or displaying output on the monitor.
What is a Process?
A process is a program in execution. A process is not the same as the program code, but a lot more than it. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include hardware state, memory, CPU, etc.
Process memory is divided into four sections for efficient working :
The Text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.
The Data section is made up of the global and static variables, allocated and initialized prior to executing main.
The Heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local variables when they are declared.
Category of process:
1. Independent/isolated/competing.
2. Dependent/cooperating/concurrent.
1. Independent: the execution of one process does not affect the execution of other processes; there is nothing shared between them.
2. Dependent: processes may share things such as buffer variables or resources (CPU, printer).
If processes can share anything, then the execution of one process can affect, or be affected by, the execution of another.
As far as I know, inside an interrupt handler there is no need for a synchronization technique: the interrupt handler cannot run concurrently with itself; in short, preemption is disabled in an ISR. However, I have a doubt regarding tasklets. As far as I know, tasklets run in interrupt context. Thus, in my opinion, there is no need for a spin lock inside a tasklet function. However, I am not sure about it. Can somebody please explain? Thanks for your replies.
If data is shared between the top half and the bottom half, then use a lock. Simple rules for locking: locks are meant to protect data, not code.
1. What to protect?
2. Why to protect?
3. How to protect?
Two tasklets of the same type do not ever run simultaneously. Thus, there is no need to protect data used only within a single type of tasklet. If the data is shared between two different tasklets, however, you must obtain a normal spin lock before accessing the data in the bottom half. You do not need to disable bottom halves because a tasklet never preempts another running tasklet on the same processor.
For synchronization between code running in process context (A) and code running in softirq context (B) we need to use special locking primitives. We must use spinlock operations augmented with deactivation of bottom-half handlers on the current processor in (A), and in (B) only basic spinlock operations. Using spinlocks makes sure that we don't have races between multiple CPUs, while deactivating the softirqs makes sure that we don't deadlock if the softirq is scheduled on the same CPU where we already acquired a spinlock. (c) Kernel docs