Difference between busy waiting and spin lock in OS? - operating-system

Can anyone give a detailed explanation of the difference between busy waiting and a spin lock in an OS (operating system)?
Please give the explanation in layman's terms.

Simply put:
Busy waiting is a technique in which a process repeatedly checks to see if a condition is true (from Wikipedia).
A spinlock uses this technique to repeatedly check whether a lock is available.
These two SO answers explain nicely what a spinlock is and when one should use it:
https://stackoverflow.com/a/1957464/6098812
https://stackoverflow.com/a/1456261/6098812

Related

FreeRTOS vs Zephyr/Mynewt task blocked state

I cannot seem to find any info on this question, so I thought I'd ask here.
(No reply here: https://lists.zephyrproject.org/pipermail/zephyr-devel/2017-June/007743.html)
When a driver (e.g. SPI or UART) is invoked through FreeRTOS using the vendor HAL, there are two options for waiting upon completion:
1) Interrupt
2) Busy-waiting
My question is this:
If the driver is invoked using busy-waiting, does FreeRTOS then have any knowledge of the busy-waiting (occurring in the HAL driver)? Does the task still get a time slot allocated (for doing the busy-waiting)? Is this how it works? (Presuming the FreeRTOS task runs under a preemptive scheduler.)
Now in Zephyr (and probably Mynewt), I can see that when the driver is called, Zephyr keeps track of the calling task, which is then suspended (blocked state) until finished. The driver's interrupt routine then puts the calling thread back into the run queue when it is ready to proceed. This way no cycles are wasted. Is this correctly understood?
Thanks
Anders
I don't understand your question. In FreeRTOS, if a driver is implemented to perform a busy wait (i.e. the driver has no knowledge of the multithreading, so it is not event driven and instead uses a busy wait that consumes all CPU time), then the RTOS scheduler has no idea that this is happening, so it will schedule the task just as it would any other task. Therefore, if the task is the highest-priority ready-state task, it will use all the CPU time, and if there are other tasks of equal priority, it will share the CPU time with those tasks.
On the other hand, if the driver is written to make use of an RTOS (be that Zephyr, FreeRTOS, or any other), then it can make use of the RTOS primitives to create a much more efficient event-driven execution pattern. I can't see how the different schedulers you mention would behave any differently in this respect. For example, how can Zephyr know that a task it didn't know the application writer was going to create was going to call a library function it had no previous knowledge of, and that the library function was going to use a busy wait?

When to use MCS lock

I have been reading about MCS locks, which I feel are pretty cool. Now that I know how they are implemented, the next question is when to use them. Below are my thoughts; please feel free to add items to the list.
1) Ideally they should be used when there are more than 2 threads we want to synchronise.
2) MCS locks reduce the number of cache lines that have to be invalidated. In the worst case, the cache lines of 2 CPUs are invalidated.
Is there anything else I'm missing?
Also, can MCS be used to implement a mutex instead of a spinlock?
Code will benefit from using an MCS lock when there is high lock contention, i.e., many threads attempting to acquire the lock simultaneously. When using a simple spin lock, all threads poll a single shared variable, whereas MCS forms a queue of waiting threads such that each thread polls on its predecessor in the queue. Hence the cache-coherency traffic is much lighter, since waiting is performed "locally".
Using MCS to implement a mutex doesn't really make sense.
With a mutex, waiting threads are usually queued by the OS and de-scheduled, so there is no polling whatsoever. For example, check out pthread's mutex implementation.
I think the other answer by #CodeMoneky1 doesn't really address "Also, can MCS be used to implement a mutex instead of a spinlock?"
A mutex is implemented using a spinlock + counter + wait queue. Here the spinlock is usually a Test&Set primitive, or Peterson's solution. I would actually agree that MCS could be an alternative. The reason it is not used is probably that the gain is limited: after all, the scope of the spinlock used inside a mutex is much smaller.

Is it required to use spin_lock inside tasklets?

As far as I know, within an interrupt handler there is no need for a synchronization technique: the same interrupt handler cannot run concurrently with itself. In short, preemption is disabled in the ISR. However, I have a doubt regarding tasklets. As per my knowledge, tasklets run in interrupt context. Thus, in my opinion, there is no need for a spin lock inside a tasklet function routine. However, I am not sure about it. Can somebody please explain? Thanks for your replies.
If data is shared between the top half and the bottom half, then go for a lock. Simple rule for locking: locks are meant to protect data, not code.
1. What to protect?
2. Why to protect?
3. How to protect?
Two tasklets of the same type do not ever run simultaneously. Thus, there is no need to protect data used only within a single type of tasklet. If the data is shared between two different tasklets, however, you must obtain a normal spin lock before accessing the data in the bottom half. You do not need to disable bottom halves because a tasklet never preempts another running tasklet on the same processor.
For synchronization between code running in process context (A) and code running in softirq context (B) we need to use special locking primitives. We must use spinlock operations augmented with deactivation of bottom-half handlers on the current processor in (A), and in (B) only basic spinlock operations. Using spinlocks makes sure that we don't have races between multiple CPUs, while deactivating the softirqs makes sure that we don't deadlock if the softirq is scheduled on the same CPU where we already acquired a spinlock. (c) Kernel docs
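Those rules can be sketched as a kernel-style fragment (illustrative only, with made-up names; real code would live in a kernel module and initialize the lock with `DEFINE_SPINLOCK`):

```c
/* Data shared between process context and a tasklet. */
static spinlock_t my_lock;
static int shared_count;

/* Process context (A): take the lock AND disable bottom halves.
 * Otherwise the tasklet could run on this CPU while we hold the
 * lock and deadlock spinning on it. */
void update_from_process_context(void)
{
        spin_lock_bh(&my_lock);
        shared_count++;
        spin_unlock_bh(&my_lock);
}

/* Tasklet context (B): a plain spin lock is enough, since a tasklet
 * is never preempted by process context on the same CPU. */
void my_tasklet_fn(struct tasklet_struct *t)
{
        spin_lock(&my_lock);
        shared_count++;
        spin_unlock(&my_lock);
}
```

Note that two tasklets of the same type never need this between themselves; the lock is only for data shared with a different tasklet or with process context.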

Multiple I/O Completion Ports

Can I create multiple I/O completion ports in a single application? I mean, hold two or more CreateIoCompletionPort handles, each with their own CompletionKeys? My application has 2 IOCP classes with their own client structures, starting from index 0. I use these indexes in the CompletionKey, so I believe that at some point this is causing a conflict, because my application ends up in a deadlock without any logical reason. I triple-checked for any deadlock situation, and running in debug mode did not help!
Yes. You can create as many IOCPs as you like*.
I expect you have a bug in your code or a standard 'deadlock caused by lock inversions'.
Can you break into the app in the debugger when it has deadlocked and see what the threads are doing?
(* subject to the usual resource limitations, memory, etc).

techniques that can be used to protect critical sections

In an operating systems subject I'm taking this semester, we were asked this question:
What are the techniques that can be used to protect critical sections?
I tried searching online but couldn't find anything.
Could anyone please briefly explain critical sections and the techniques used to protect them?
First of all, critical sections only matter for parallel execution; a critical section is a piece of code that must not be executed by more than one thread/process at a given time.
The problem occurs when two or more threads or processes want to write to the same location at once,
which can potentially leave the data in an incorrect state or cause a deadlock.
Even a piece of code as innocent-looking as i += 1 has to be protected in a parallel world: you have to remember that execution of a thread or process can be suspended at any time by the OS.
The basic synchronization mechanisms are mutexes and monitors.
With semaphores one can limit access to resources.
a) A process must first declare its intention to enter the critical section by raising a flag.
b) Next, the critical section is entered and, upon leaving, the flag is lowered.
c) If the process is suspended after raising the flag but before it is able to enter the critical section, the other process will see the raised flag and not enter until the flag is lowered.