When to use MCS lock

I have been reading about MCS locks, which I think are pretty cool. Now that I know how they're implemented, the next question is when to use them. Below are my thoughts; please feel free to add items to the list.
1) Ideally they should be used when there are more than two threads to synchronise.
2) An MCS lock reduces the number of cache lines that have to be invalidated. In the worst case, the cache lines of only two CPUs are invalidated.
Is there anything else I'm missing?
Also, can MCS be used to implement a mutex instead of a spinlock?

Code will benefit from using an MCS lock when there is high lock contention, i.e., many threads attempting to acquire the lock simultaneously. With a simple spinlock, all threads poll a single shared variable, whereas MCS forms a queue of waiting threads, such that each thread polls a flag in its own queue node, which its predecessor updates when handing over the lock. Cache-coherency traffic is therefore much lighter, since each thread waits "locally".
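For concreteness, here is a minimal sketch of an MCS lock in C11 atomics. The names (mcs_node, mcs_lock, and so on) are mine, not from any particular implementation; treat it as an illustration of the queue, not production code.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;
} mcs_node;

typedef struct {
    _Atomic(mcs_node *) tail;
} mcs_lock;

void mcs_acquire(mcs_lock *lock, mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    mcs_node *prev = atomic_exchange(&lock->tail, me);  /* join the tail of the queue */
    if (prev != NULL) {
        atomic_store(&prev->next, me);                  /* link behind our predecessor */
        while (atomic_load(&me->locked))
            ;                                           /* spin on our own node only */
    }
}

void mcs_release(mcs_lock *lock, mcs_node *me) {
    mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        mcs_node *expected = me;
        if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
            return;                                     /* no one was waiting */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;                                           /* successor is mid-enqueue */
    }
    atomic_store(&succ->locked, false);                 /* hand the lock to the next node */
}

Each waiter spins on the locked flag in its own node, so a release touches at most the cache lines of the releasing CPU and its successor, which is exactly point 2) above.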
Using MCS to implement a mutex doesn't really make sense.
With a mutex, waiting threads are usually queued by the OS and descheduled, so there is no polling whatsoever. For example, check out pthread's mutex implementation.

I think the other answer by #CodeMoneky1 doesn't really address "Also, can MCS be used to implement a mutex instead of a spinlock?"
A mutex is typically implemented using a spinlock + counter + wait queue. Here the spinlock is usually a Test&Set primitive, or Peterson's solution. I would actually agree that MCS could be an alternative. The reason it is not used is probably that the gain is limited: after all, the scope of the spinlock used inside a mutex is much smaller.
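To make that concrete, here is a rough sketch of such a mutex. spinlock_t, queue_t, thread_t, park() and unpark() are hypothetical placeholders for the OS primitives (e.g. a futex-style sleep/wake); the point is only that the internal spinlock is held for a handful of instructions, so an MCS queue buys little here.

typedef struct {
    spinlock_t guard;    /* internal lock with a very small critical section */
    int        held;
    queue_t    waiters;  /* threads sleeping on this mutex */
} mutex_t;

void mutex_lock(mutex_t *m, thread_t *self) {
    spin_lock(&m->guard);
    if (!m->held) {
        m->held = 1;
        spin_unlock(&m->guard);
        return;
    }
    queue_push(&m->waiters, self);   /* enqueue ourselves... */
    spin_unlock(&m->guard);
    park(self);                      /* ...and let the OS deschedule us: no polling */
}

void mutex_unlock(mutex_t *m) {
    spin_lock(&m->guard);
    thread_t *next = queue_pop(&m->waiters);
    if (next == NULL)
        m->held = 0;
    else
        unpark(next);                /* hand the mutex to one sleeping waiter */
    spin_unlock(&m->guard);
}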


Where and why deadlock?

I have 2 concurrent queues:
let concurrentQueue = DispatchQueue(label: "test.concurrent", attributes: .concurrent)
let syncQueue = DispatchQueue(label: "test.sync", attributes: .concurrent)
And code:
for index in 1...65 {
    concurrentQueue.async {
        self.syncQueue.async(flags: .barrier) {
            print("WRITE \(index)")
        }
        self.syncQueue.sync {
            print("READ \(index)")
        }
    }
}
Outputs:
WRITE 1
READ 1
Why, where, and how does it deadlock?
With an iteration count below 65, everything is fine.
This pattern (async writes with barrier, concurrent reads) is known as the “reader-writer” pattern. This particular multithreaded synchronization mechanism can deadlock in thread explosion scenarios.
In short, it deadlocks because:
You have “thread explosion”;
You have exhausted the worker thread pool, which only has 64 threads;
Your dispatched item has two potentially blocking calls, not only the sync, which obviously can block, but also the concurrent async (see next point); and
When you hit a dispatch, if there is not an available worker thread in the pool, it will wait until one is made available (even if dispatching asynchronously).
The key observation is that one should simply avoid unbridled thread explosion. Generally we reach for tools such as GCD's concurrentPerform (a parallel for loop which is constrained to the maximum number of CPU cores), operation queues (which can be controlled through judicious maxConcurrentOperationCount setting) or Swift concurrency (use its cooperative thread pool to control degree of concurrency, actors for synchronization, etc.).
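For illustration, here is what the original example might look like rebuilt on dispatch_apply, the C-level counterpart of Swift's concurrentPerform (the queue label and the 65-iteration count are taken from the question; this is a sketch, not a drop-in fix, and needs clang on an Apple platform for the blocks syntax):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t syncQueue =
        dispatch_queue_create("test.sync", DISPATCH_QUEUE_CONCURRENT);

    /* dispatch_apply bounds the in-flight parallelism to roughly the core
       count, so the 64-thread worker pool is never exhausted, no matter
       how many iterations we ask for. */
    dispatch_apply(65, dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^(size_t index) {
        dispatch_barrier_async(syncQueue, ^{ printf("WRITE %zu\n", index); });
        dispatch_sync(syncQueue, ^{ printf("READ %zu\n", index); });
    });
    return 0;
}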
While the reader-writer pattern has intuitive appeal, in practice it simply introduces complexity (synchronizing a multithreaded environment with yet another multithreaded mechanism, both of which are constrained by surprisingly small GCD worker thread pools) without many practical benefits. Benchmark it and you will see that it is negligibly faster than a simple serial GCD queue, and relatively much slower than lock-based approaches.
The < 65 observation immediately makes me think you're hitting the queue width limit, which is undocumented (for good reason), but widely understood to be 64. A while back, I wrote a very detailed answer about this queue width limit over here. You should check it out.
I do have some relevant ideas I can share:
The first thing I would recommend would be replacing the print() calls with something that does not trigger I/O. You could create a numReads variable and a numWrites variable, and then use something like an atomic compare and swap operation to increment them, then read them after the loop completes and make sure they're what you expected. See Swift Atomics over here. You could also just dispatch the print operations (async) to the main queue. That would also eliminate any I/O problems.
I'll also note here that by introducing a barrier block on every single iteration of the outer for loop, you're effectively making that queue serial, but with a non-deterministic order. At the very least, you're creating a ton of contention for it, which is suboptimal. Reader/Writer locks tend to make the most sense when there are many more reads than writes, but your code here has a 1:1 ratio of reads to writes. If that is what your real use case looks like, you should just use a serial queue (or a mutex) since it would achieve the same effect, but without the contention.
At the risk of being "that guy", can you maybe elaborate on what you're trying to accomplish here? If you're just trying to prove to yourself that reader/writer locks emulated using barrier blocks work, I can assure you that they do.

Swift Queues/Concurrency and Locking

I usually use serial queues as a locking mechanism to make sure that one resource can be accessed by many different threads without problems. But I have seen cases where other devs use concurrent queues, with or even without semaphores (I saw IBM/Swift on Linux using a concurrent queue with a semaphore).
Are there any advantages/disadvantages? I would believe that just using serial queues would correctly protect the resource without wasting time on semaphores.
On the other hand, what happens when the CPU is busy? If I remember correctly, a serial queue is not necessarily executed on the same thread/same CPU, right?
That would be the only explanation I can think of: a concurrent queue would be able to share the workload across all available threads/CPUs, assuring thread-safe access through the semaphore.
Using a concurrent queue without a semaphore would not be safe, right?
Concurrent queues with semaphores give you more granularity as to what conditions require locking. You can have most of the functions be executed in parallel, with only the mutually exclusive regions (the critical regions) requiring locking.
However, this can be equally simulated with a concurrent queue whose critical regions are dispatched to a serial queue, to ensure mutual exclusion.
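A minimal sketch of that arrangement using GCD's C API (the queue labels, the group, and the counter are invented for the example; compile with clang on an Apple platform):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t work  = dispatch_queue_create("example.work",  DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t guard = dispatch_queue_create("example.guard", DISPATCH_QUEUE_SERIAL);
    dispatch_group_t group = dispatch_group_create();
    __block int counter = 0;

    for (int i = 0; i < 100; i++) {
        dispatch_group_async(group, work, ^{
            /* ...parallel, non-shared work runs here... */
            dispatch_sync(guard, ^{ counter++; });  /* only the critical region is serialized */
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    printf("counter = %d\n", counter);  /* always 100 */
    return 0;
}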
I would believe that just using serial queues would correctly block the resource without wasting time for semaphores.
Serial queues also need semaphores internally, since mutation of the queue itself must be synchronized. However, the serial queue tucks this under the rug and protects you from the many easy-to-make mistakes associated with manual semaphore use.
Using a concurrent queue without a semaphore would not be safe, right?
Nope, it would not be safe.

bad use cases of scala.concurrent.blocking?

With reference to the third point in this accepted answer, are there any cases for which it would be pointless or bad to use blocking for a long-running computation, whether CPU- or IO-bound, that is being executed 'within' a Future?
It depends on the ExecutionContext your Future is being executed in.
Pointless:
If the ExecutionContext is not a BlockContext, then using blocking will be pointless. That is, it would use the DefaultBlockContext, which simply executes the code without any special handling. It probably wouldn't add that much overhead, but pointless nonetheless.
Bad:
Scala's ExecutionContext.Implicits.global is made to spawn new threads in its ForkJoinPool when the thread pool is about to be exhausted - that is, if it knows this is going to happen via blocking. This can be bad if you're spawning lots of threads. If you're queuing up a lot of work in a short span of time, the global context will happily expand until gridlock. #dk14's answer explains this in more depth, but the gist is that it can be a performance killer, as managed blocking can quickly become unmanageable.
The main purpose of blocking is to avoid deadlocks within thread pools, so it is tangentially related to performance in the sense that reaching a deadlock would be worse than spawning a few more threads. However, it is definitely not a magical performance enhancer.
I've written more about blocking in particular in this answer.
From my practice, blocking + ForkJoinPool may lead to continuous and uncontrollable creation of threads if you have a lot of messages to process and each one requires long blocking (which also means it holds some memory during that time). The ForkJoinPool creates a new thread to compensate for the "managed blocked" one, regardless of MaxThreadCount; say hello to hundreds of threads in VisualVM. And it almost kills backpressure, as there is always a place for a task in the pool's queue (if your backpressure is based on ThreadPoolExecutor's policies). Performance is then killed by both new-thread allocation and garbage collection.
So:
It's good when the message rate is not much higher than 1/blocking_time, as it allows you to use the full power of threads. Some smart backpressure might help to slow down incoming messages.
It's pointless if a task actually uses your CPU during blocking{} (no locks), as it will just increase the thread count beyond the count of real cores in the system.
And it's bad for any other case - you should use a separate fixed thread pool (and maybe polling) instead.
P.S. blocking is hidden inside Await.result, so it's not always obvious. In our project, someone simply did such an Await inside some underlying worker actor.

Read-Write lock with GCD

My application makes heavy use of GCD, and almost everything is split up in small tasks handled by dispatches. However, the underlying data model is mostly read and only occasionally written.
I currently use locks to prevent changes to the critical data structures while reading. But after looking into locks some more today, I found NSConditionLock and some page about read-write locks. The latter is exactly what I need.
I found this implementation: http://cocoaheads.byu.edu/wiki/locks . My question is, will this implementation work with GCD, seeing that it uses PThreads?
It will still work. pthreads is the threading API which underlies all of the other thread-using APIs on Mac OS X. (Under that there are Mach thread activations, but that's SPI, not API.) Anyway, the pthreads locks don't really require that you use pthreads threads.
However, GCD offers a better alternative as of iOS 5: dispatch_barrier_async(). Basically, you have a private concurrent queue. You submit all read operations to it in the normal fashion. You submit write operations to it using the barrier routines. Ta-da! Read-write locking.
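A bare-bones sketch of that pattern, using GCD's C API for clarity (the queue label, sharedModel, and the function names are mine; compile with clang on an Apple platform):

#include <dispatch/dispatch.h>
#include <stdio.h>

static dispatch_queue_t rwQueue;
static int sharedModel;

static int readModel(void) {
    __block int value;
    dispatch_sync(rwQueue, ^{ value = sharedModel; });   /* readers run concurrently */
    return value;
}

static void writeModel(int newValue) {
    /* The barrier waits for in-flight readers, runs alone, then lets readers resume. */
    dispatch_barrier_async(rwQueue, ^{ sharedModel = newValue; });
}

int main(void) {
    rwQueue = dispatch_queue_create("com.example.model", DISPATCH_QUEUE_CONCURRENT);
    writeModel(42);
    printf("%d\n", readModel());  /* prints 42: the sync read queues behind the barrier */
    return 0;
}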
You can learn more about this if you have access to the WWDC 2011 session video for Session 210 - Mastering Grand Central Dispatch.
You might also want to consider maintaining a serial queue for all read/write operations. You can then dispatch_sync() writes to that queue to ensure that changes to the data model are applied promptly and dispatch_async() all the reads to make sure you maintain nice performance in the app.
Since you have a single serial queue on which all the reads and writes take place, you ensure that no reads can happen during a write. This is far less costly than a lock, but it means you cannot execute multiple 'read' operations simultaneously. This is unlikely to cause a problem for most applications.
Using dispatch_barrier_async() might mean that writes you make take an arbitrary amount of time to actually be committed since all the pre-existing tasks in the queue have to be completed before your barrier block executes.

Difference between binary semaphore and mutex

Is there any difference between a binary semaphore and mutex or are they essentially the same?
They are NOT the same thing. They are used for different purposes!
While both types of semaphores have a full/empty state and use the same API, their usage is very different.
Mutual Exclusion Semaphores
Mutual Exclusion semaphores are used to protect shared resources (data structure, file, etc.).
A Mutex semaphore is "owned" by the task that takes it. If Task B attempts to semGive a mutex currently held by Task A, Task B's call will return an error and fail.
Mutexes always use the following sequence:
- SemTake
- Critical Section
- SemGive
Here is a simple example:
Thread A                      Thread B
Take Mutex
access data
...                           Take Mutex  <== Will block
...
Give Mutex                    access data <== Unblocks
...
                              Give Mutex
Binary Semaphore
Binary semaphores address a totally different question:
Task B is pended waiting for something to happen (a sensor being tripped for example).
Sensor Trips and an Interrupt Service Routine runs. It needs to notify a task of the trip.
Task B should run and take appropriate actions for the sensor trip. Then go back to waiting.
Task A                        Task B
...                           Take BinSemaphore  <== wait for something
Do Something Noteworthy
Give BinSemaphore             do something       <== unblocks
Note that with a binary semaphore, it is OK for B to take the semaphore and A to give it.
Again, a binary semaphore is NOT protecting a resource from access. The act of Giving and Taking a semaphore are fundamentally decoupled.
It typically makes little sense for the same task to do both the give and the take on the same binary semaphore.
A mutex can be released only by the thread that had acquired it.
A binary semaphore can be signaled by any thread (or process).
So semaphores are more suitable for some synchronization problems like producer-consumer.
On Windows, binary semaphores are more like event objects than mutexes.
The Toilet example is an enjoyable analogy:
Mutex:
Is a key to a toilet. One person can have the key - occupy the toilet - at a time. When finished, the person gives (frees) the key to the next person in the queue.
Officially: "Mutexes are typically used to serialise access to a section of re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section." Ref: Symbian Developer Library
(A mutex is really a semaphore with value 1.)
Semaphore:
Is the number of free identical toilet keys. For example, say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at the beginning (all four toilets are free), then the count value is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the toilet, the semaphore is increased to 1 (one free key) and given to the next person in the queue.
Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)." Ref: Symbian Developer Library
Nice articles on the topic:
MUTEX VS. SEMAPHORES – PART 1: SEMAPHORES
MUTEX VS. SEMAPHORES – PART 2: THE MUTEX
MUTEX VS. SEMAPHORES – PART 3 (FINAL PART): MUTUAL EXCLUSION PROBLEMS
From part 2:
The mutex is similar to the principles of the binary semaphore with one significant difference: the principle of ownership. Ownership is the simple concept that when a task locks (acquires) a mutex, only it can unlock (release) it. If a task tries to unlock a mutex it hasn't locked (thus doesn't own) then an error condition is encountered and, most importantly, the mutex is not unlocked. If the mutual exclusion object doesn't have ownership then, irrelevant of what it is called, it is not a mutex.
Since none of the above answers cleared up the confusion, here is one which cleared mine.
Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (which can be a thread or a process, depending on the OS abstraction) can acquire the mutex. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex).
A semaphore is a signaling mechanism ("I am done, you can carry on" kind of signal). For example, if you are listening to songs (assume that is one task) on your mobile and at the same time your friend calls you, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
Source: http://www.geeksforgeeks.org/mutex-vs-semaphore/
Their synchronization semantics are very different:
mutexes allow serialization of access to a given resource, i.e., multiple threads wait for the lock one at a time, and, as said previously, the thread owns the lock until it is done: only this particular thread can unlock it.
a binary semaphore is a counter with values 0 and 1: a task blocks on it until any task does a sem_post. The semaphore advertises that a resource is available, and it provides the mechanism to wait until it is signaled as being available.
As such, one can see a mutex as a token passed from task to task and a semaphore as a traffic light (it signals to someone that they may proceed).
At a theoretical level, they are no different semantically. You can implement a mutex using semaphores or vice versa (see here for an example). In practice, the implementations are different and they offer slightly different services.
The practical difference (in terms of the system services surrounding them) is that the implementation of a mutex is aimed at being a more lightweight synchronisation mechanism. In Oracle-speak, mutexes are known as latches and semaphores are known as waits.
At the lowest level, they use some sort of atomic test and set mechanism. This reads the current value of a memory location, computes some sort of conditional and writes out a value at that location in a single instruction that cannot be interrupted. This means that you can acquire a mutex and test to see if anyone else had it before you.
A typical mutex implementation has a process or thread executing the test-and-set instruction and evaluating whether anything else had set the mutex. A key point here is that there is no interaction with the scheduler, so we have no idea (and don't care) who has set the lock. Then we either give up our time slice and attempt it again when the task is re-scheduled or execute a spin-lock. A spin lock is an algorithm like:
Count down from 5000:
i. Execute the test-and-set instruction
ii. If the mutex is clear, we have acquired it in the previous instruction, so we can exit the loop
iii. When we get to zero, give up our time slice.
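In C11, this entire loop collapses to a few lines around atomic_flag, which is exactly such a test-and-set cell (a sketch; the 5000 count is kept from the pseudocode above, and sched_yield() stands in for "give up our time slice"):

#include <stdatomic.h>
#include <sched.h>   /* sched_yield() */

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    for (;;) {
        for (int i = 0; i < 5000; i++) {
            /* atomic test-and-set: returns the previous value of the flag */
            if (!atomic_flag_test_and_set(&lock_flag))
                return;              /* flag was clear; we now hold the lock */
        }
        sched_yield();               /* give up our time slice, then retry */
    }
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);   /* set the mutex value back to 'clear' */
}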
When we have finished executing our protected code (known as a critical section) we just set the mutex value to zero or whatever means 'clear.' If multiple tasks are attempting to acquire the mutex then the next task that happens to be scheduled after the mutex is released will get access to the resource. Typically you would use mutexes to control a synchronised resource where exclusive access is only needed for very short periods of time, normally to make an update to a shared data structure.
A semaphore is a synchronised data structure (typically using a mutex) that has a count and some system call wrappers that interact with the scheduler in a bit more depth than the mutex libraries would. Semaphores are incremented and decremented and used to block tasks until something else is ready. See Producer/Consumer Problem for a simple example of this. Semaphores are initialised to some value - a binary semaphore is just a special case where the semaphore is initialised to 1. Posting to a semaphore has the effect of waking up a waiting process.
A basic semaphore algorithm looks like:
(somewhere in the program startup)
Initialise the semaphore to its start-up value.
Acquiring a semaphore
i. (synchronised) Attempt to decrement the semaphore value
ii. If the value would be less than zero, put the task on the tail of the list of tasks waiting on the semaphore and give up the time slice.
Posting a semaphore
i. (synchronised) Increment the semaphore value
ii. If the value is greater or equal to the amount requested in the post at the front of the queue, take that task off the queue and make it runnable.
iii. Repeat (ii) for all tasks until the posted value is exhausted or there are no more tasks waiting.
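As a sketch of that algorithm, here is a counting semaphore built from a pthread mutex and condition variable (the semaphore_t type and function names are invented to avoid clashing with POSIX's own sem_t, which of course already exists):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int             count;
} semaphore_t;

void sem_init_custom(semaphore_t *s, int value) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = value;                        /* binary semaphore: value == 1 */
}

void sem_wait_custom(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                    /* would go below zero: block */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void sem_post_custom(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);        /* wake one waiting task */
    pthread_mutex_unlock(&s->lock);
}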
In the case of a binary semaphore the main practical difference between the two is the nature of the system services surrounding the actual data structure.
EDIT: As evan has rightly pointed out, spinlocks will slow down a single processor machine. You would only use a spinlock on a multi-processor box because on a single processor the process holding the mutex will never reset it while another task is running. Spinlocks are only useful on multi-processor architectures.
Though mutexes & semaphores are both used as synchronization primitives, there is a big difference between them.
In the case of a mutex, only the thread that locked or acquired the mutex can unlock it.
In the case of a semaphore, a thread waiting on the semaphore can be signaled by a different thread.
Some operating systems support using mutexes & semaphores between processes; typical usage is to create them in shared memory.
Mutex: Suppose we have a critical section that thread T1 wants to access; it follows the steps below.
T1:
Lock
Use critical section
Unlock
Binary semaphore: It works based on wait and signal. wait(s) decreases the value of "s" by one; usually "s" is initialized to 1. signal(s) increases the value of "s" by one. A value of 1 means no one is using the critical section; a value of 0 means the critical section is in use.
Suppose thread T2 is using the critical section; it follows the steps below.
T2:
wait(s)   // initially s is 1; after calling wait its value is decreased to 0
Use critical section
signal(s) // now s is increased back to 1
The main difference between a mutex and a binary semaphore: with a mutex, the thread that locked the critical section must be the one to unlock it; no other thread can. With a binary semaphore, if one thread locks the critical section using wait(s), the value of "s" becomes 0 and no one can access it until "s" becomes 1; but if some other thread calls signal(s), then "s" becomes 1 and another thread is allowed to use the critical section.
Hence, with a binary semaphore, a thread doesn't have ownership.
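A small POSIX demonstration of that last point (the names t2_main and s are mine; note the comment in main):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t s;  /* binary semaphore: count starts at 1 */

static void *t2_main(void *arg) {
    sem_wait(&s);                    /* s: 1 -> 0, T2 enters the critical section */
    puts("T2 inside critical section");
    sem_post(&s);                    /* s: 0 -> 1, T2 leaves */
    return NULL;
}

int main(void) {
    pthread_t t2;
    sem_init(&s, 0, 1);
    pthread_create(&t2, NULL, t2_main, NULL);
    pthread_join(t2, NULL);
    /* Crucially, main could also have called sem_post(&s) to "unlock" on T2's
       behalf: a semaphore carries no notion of an owner. */
    sem_destroy(&s);
    return 0;
}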
On Windows, there are two differences between mutexes and binary semaphores:
A mutex can only be released by the thread which has ownership, i.e. the thread which previously called the Wait function (or which took ownership when creating it). A semaphore can be released by any thread.
A thread can call a wait function repeatedly on a mutex without blocking. However, if you call a wait function twice on a binary semaphore without releasing the semaphore in between, the thread will block.
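The second point is easy to demonstrate with a few lines of Win32 (a sketch, single-threaded on purpose; trying the same double wait on a binary semaphore would block forever on the second call):

#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE m = CreateMutex(NULL, FALSE, NULL);

    WaitForSingleObject(m, INFINITE);  /* acquire */
    WaitForSingleObject(m, INFINITE);  /* same thread: succeeds again, recursion count 2 */

    ReleaseMutex(m);                   /* must release once per successful wait */
    ReleaseMutex(m);

    puts("re-entrant waits on a mutex do not block the owner");
    CloseHandle(m);
    return 0;
}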
Myth:
A couple of articles say that a "binary semaphore and mutex are the same", or that a "semaphore with value 1 is a mutex", but the basic difference is that a mutex can be released only by the thread that acquired it, while you can signal a semaphore from any other thread.
Key Points:
•A thread can acquire more than one lock (mutex).
•A mutex can be locked more than once only if it is a recursive mutex; here, the lock and unlock calls for the mutex should match in number.
•If a thread that has already locked a (non-recursive) mutex tries to lock it again, it will enter the waiting list of that mutex, which results in a deadlock.
•Binary semaphores and mutexes are similar but not the same.
•A mutex is a costly operation due to the protection protocols associated with it.
•The main aim of a mutex is to achieve atomic access to, or a lock on, a resource.
Mutexes are used for "Locking Mechanisms": one process at a time can use a shared resource,
whereas
semaphores are used for "Signaling Mechanisms",
like "I am done, now you can carry on".
You obviously use a mutex to prevent data in one thread from being accessed by another thread at the same time. Assume that you have just called lock() and are in the middle of accessing the data. This means that you don't expect any other thread (or another instance of the same thread code) to access the same data, locked by the same mutex. That is, if the same thread code is executing on a different thread instance and hits the lock, the lock() should block the control flow there. The same applies to a thread that uses different thread code but accesses the same data and is locked by the same mutex. In this case, you are still in the process of accessing the data and you may take, say, another 15 seconds to reach the mutex unlock (so that the other thread blocked in the mutex lock would unblock and be allowed to access the data). Would you, at any cost, allow yet another thread to simply unlock the same mutex, and in turn allow the thread that is already waiting (blocked) in the mutex lock to unblock and access the data? Hopefully you see what I am getting at here.
As per the agreed-upon universal definition:
with a "mutex" this can't happen. No other thread can unlock the lock in your thread.
with a "binary semaphore" this can happen. Any other thread can unlock the lock in your thread.
So, if you are very particular about using a binary semaphore instead of a mutex, then you should be very careful in "scoping" the locks and unlocks. I mean that every control flow that hits a lock should also hit an unlock call; also, there shouldn't ever be a "first unlock", rather it should always be "first lock".
A Mutex controls access to a single shared resource. It provides operations to acquire() access to that resource and release() it when done.
A Semaphore controls access to a shared pool of resources. It provides operations to Wait() until one of the resources in the pool becomes available, and Signal() when it is given back to the pool.
When the number of resources a semaphore protects is greater than 1, it is called a counting semaphore. When it controls one resource, it is called a boolean (binary) semaphore. A boolean semaphore is equivalent to a mutex.
Thus a Semaphore is a higher level abstraction than Mutex. A Mutex can be implemented using a Semaphore but not the other way around.
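A sketch of the first direction: a mutex built on a binary semaphore, with ownership tracked so that only the locker may unlock. The sem_mutex_t type and its functions are hypothetical, and this is an illustration rather than a robust implementation (e.g. it does not handle an unlock on a never-locked mutex):

#include <semaphore.h>
#include <pthread.h>

typedef struct {
    sem_t     sem;    /* binary semaphore: count starts at 1 */
    pthread_t owner;  /* meaningful only while the mutex is held */
} sem_mutex_t;

void sem_mutex_init(sem_mutex_t *m) {
    sem_init(&m->sem, 0, 1);
}

void sem_mutex_lock(sem_mutex_t *m) {
    sem_wait(&m->sem);             /* take the single token */
    m->owner = pthread_self();     /* record ownership */
}

int sem_mutex_unlock(sem_mutex_t *m) {
    if (!pthread_equal(m->owner, pthread_self()))
        return -1;                 /* reject a non-owner unlock: the mutex property */
    sem_post(&m->sem);             /* return the token */
    return 0;
}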
The modified question is: what is the difference between a mutex and a "binary" semaphore in "Linux"?
Ans: The differences are as follows -
i) Scope - The scope of a mutex is within the process address space that created it, and it is used for synchronization of threads. A semaphore, by contrast, can be used across process space and hence can be used for interprocess synchronization.
ii) A mutex is lightweight and faster than a semaphore. A futex is even faster.
iii) A mutex can be acquired by the same thread successfully multiple times (if it is a recursive mutex), with the condition that it should release it the same number of times; another thread trying to acquire it will block. In the case of a semaphore, if the same process tries to acquire it again, it blocks, as it can be acquired only once.
Difference between a binary semaphore and a mutex:
OWNERSHIP:
A semaphore can be signalled (posted) even by a non-owner. It means you can simply post from any other thread, even though you are not the owner.
A semaphore is public property within a process; it can simply be posted by a non-owner thread.
Please mark this difference in BOLD letters; it means a lot.
A mutex works by blocking on a critical region, but a semaphore works on a count.
http://www.geeksforgeeks.org/archives/9102 discusses this in detail.
A mutex is a locking mechanism used to synchronize access to a resource.
A semaphore is a signaling mechanism.
It is up to the programmer whether he/she wants to use a binary semaphore in place of a mutex.
Apart from the fact that mutexes have an owner, the two objects may be optimized for different usage. Mutexes are designed to be held only for a short time; violating this can cause poor performance and unfair scheduling. For example, a running thread may be permitted to acquire a mutex, even though another thread is already blocked on it. Semaphores may provide more fairness, or fairness can be forced using several condition variables.
In Windows the difference is as below.
MUTEX: the process that successfully executes wait has to execute signal, and vice versa. BINARY SEMAPHORE: different processes can execute the wait or signal operation on a semaphore.
While a binary semaphore may be used as a mutex, a mutex is a more specific use-case, in that only the process that locked the mutex is supposed to unlock it. This ownership constraint makes it possible to provide protection against:
Accidental release
Recursive Deadlock
Task Death Deadlock
These constraints are not always present because they degrade the speed. During the development of your code, you can enable these checks temporarily.
e.g. you can enable the error-check attribute on your mutex. Error-checking mutexes return EDEADLK if you try to lock the same one twice and EPERM if you unlock a mutex that isn't yours.
pthread_mutex_t mutex;
pthread_mutexattr_t attr;

pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK_NP);
pthread_mutex_init(&mutex, &attr);
Once initialised we can place these checks in our code like this:
if (pthread_mutex_unlock(&mutex) == EPERM)
    printf("Unlock failed: mutex not owned by this thread\n");
The concept was clear to me after going over the above posts, but there were some lingering questions, so I wrote this small piece of code.
When we try to give a semaphore without taking it, it goes through; but when we try to give a mutex without taking it, it fails. I tested this on a Windows platform. Enable USE_MUTEX to run the same code using a MUTEX.
#include <stdio.h>
#include <windows.h>

#define xUSE_MUTEX 1    /* the leading 'x' leaves the mutex path disabled; rename to USE_MUTEX to enable it */
#define MAX_SEM_COUNT 1

DWORD WINAPI Thread_no_1(LPVOID lpParam);
DWORD WINAPI Thread_no_2(LPVOID lpParam);

HANDLE Handle_Of_Thread_1 = 0;
HANDLE Handle_Of_Thread_2 = 0;
int Data_Of_Thread_1 = 1;
int Data_Of_Thread_2 = 2;
HANDLE ghMutex = NULL;
HANDLE ghSemaphore = NULL;

int main(void)
{
#ifdef USE_MUTEX
    ghMutex = CreateMutex(NULL, FALSE, NULL);
    if (ghMutex == NULL)
    {
        printf("CreateMutex error: %d\n", GetLastError());
        return 1;
    }
#else
    // Create a semaphore with initial and max counts of MAX_SEM_COUNT
    ghSemaphore = CreateSemaphore(NULL, MAX_SEM_COUNT, MAX_SEM_COUNT, NULL);
    if (ghSemaphore == NULL)
    {
        printf("CreateSemaphore error: %d\n", GetLastError());
        return 1;
    }
#endif

    // Create thread 1.
    Handle_Of_Thread_1 = CreateThread(NULL, 0, Thread_no_1, &Data_Of_Thread_1, 0, NULL);
    if (Handle_Of_Thread_1 == NULL)
    {
        printf("Create first thread problem\n");
        return 1;
    }

    /* sleep for 5 seconds */
    Sleep(5 * 1000);

    /* Create thread 2 */
    Handle_Of_Thread_2 = CreateThread(NULL, 0, Thread_no_2, &Data_Of_Thread_2, 0, NULL);
    if (Handle_Of_Thread_2 == NULL)
    {
        printf("Create second thread problem\n");
        return 1;
    }

    // Sleep for 20 seconds
    Sleep(20 * 1000);
    printf("Out of the program\n");
    return 0;
}

int my_critical_section_code(HANDLE thread_handle)
{
#ifdef USE_MUTEX
    if (thread_handle == Handle_Of_Thread_1)
    {
        /* get the lock */
        WaitForSingleObject(ghMutex, INFINITE);
        printf("Thread 1 holding the mutex\n");
    }
#else
    /* get the semaphore */
    if (thread_handle == Handle_Of_Thread_1)
    {
        WaitForSingleObject(ghSemaphore, INFINITE);
        printf("Thread 1 holding semaphore\n");
    }
#endif

    if (thread_handle == Handle_Of_Thread_1)
    {
        /* sleep for 10 seconds */
        Sleep(10 * 1000);
#ifdef USE_MUTEX
        printf("Thread 1 about to release mutex\n");
#else
        printf("Thread 1 about to release semaphore\n");
#endif
    }
    else
    {
        /* sleep for 3 seconds */
        Sleep(3 * 1000);
    }

    /* Thread 2 reaches this release without ever having done a Wait:
       the semaphore release goes through, the mutex release fails. */
#ifdef USE_MUTEX
    /* release the lock */
    if (!ReleaseMutex(ghMutex))
    {
        printf("Release Mutex error in thread %d: error # %d\n",
               (thread_handle == Handle_Of_Thread_1 ? 1 : 2), GetLastError());
    }
#else
    if (!ReleaseSemaphore(ghSemaphore, 1, NULL))
    {
        printf("ReleaseSemaphore error in thread %d: error # %d\n",
               (thread_handle == Handle_Of_Thread_1 ? 1 : 2), GetLastError());
    }
#endif
    return 0;
}

DWORD WINAPI Thread_no_1(LPVOID lpParam)
{
    my_critical_section_code(Handle_Of_Thread_1);
    return 0;
}

DWORD WINAPI Thread_no_2(LPVOID lpParam)
{
    my_critical_section_code(Handle_Of_Thread_2);
    return 0;
}
The very fact that a semaphore lets you signal "it is done using a resource", even though it never owned the resource, makes me think there is a very loose coupling between owning and signaling in the case of semaphores.
The only difference is:
1. Mutex: lock and unlock are under the ownership of the thread that locked the mutex.
2. Semaphore: no ownership, i.e., if one thread calls semwait(s), any other thread can call sempost(s) to remove the lock.
A mutex is used to protect sensitive code and data; a semaphore is used for synchronization. You can also, in practice, use a semaphore to protect sensitive code, but there is a risk that another thread releases the protection via the V operation. So the main difference between a binary semaphore and a mutex is ownership. For instance, with the toilet analogy: a mutex is like one person entering the toilet and locking the door - nobody else can enter until that person gets out - while a binary semaphore is like one person entering the toilet and locking the door, but someone else being able to enter by asking the administrator to open the door, which is absurd.
I think most of the answers here are confusing, especially those saying that a mutex can be released only by the process that holds it while a semaphore can be signaled by any process. That line is somewhat vague where semaphores are concerned. To understand, we should know that there are two kinds of semaphores: the counting semaphore and the binary semaphore. A counting semaphore handles access to n resources, where n is defined before use. Each semaphore has a count variable that keeps count of the number of resources in use; initially, it is set to n. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a release() operation (incrementing the count). When the count becomes 0, all the resources are in use; after that, a process waits until the count becomes greater than 0. Now here is the catch: only the processes holding a resource can increase the count; a process waiting on the semaphore checks again, and when it sees a resource available, it decreases the count again. So in the case of a binary semaphore, only the process holding the semaphore can increase the count, and the count remains zero until it stops using the semaphore and increases the count, at which point another process gets the chance to access the semaphore.
The main difference between a binary semaphore and a mutex is that a semaphore is a signaling mechanism while a mutex is a locking mechanism, but a binary semaphore appears to function like a mutex, which creates confusion; they are different concepts, suited to different kinds of work.
The answer may depend on the target OS. For example, at least one RTOS implementation I'm familiar with will allow multiple sequential "get" operations against a single OS mutex, so long as they're all from within the same thread context. The multiple gets must be replaced by an equal number of puts before another thread will be allowed to get the mutex. This differs from binary semaphores, for which only a single get is allowed at a time, regardless of thread contexts.
The idea behind this type of mutex is that you protect an object by only allowing a single context to modify the data at a time. Even if the thread gets the mutex and then calls a function that further modifies the object (and gets/puts the protector mutex around its own operations), the operations should still be safe because they're all happening under a single thread.
{
    mutexGet();     // Other threads can no longer get the mutex.

    // Make changes to the protected object.
    // ...

    objectModify(); // Also gets/puts the mutex. Only allowed from this thread context.

    // Make more changes to the protected object.
    // ...

    mutexPut();     // Finally allows other threads to get the mutex.
}
Of course, when using this feature, you must be certain that all accesses within a single thread really are safe!
I'm not sure how common this approach is, or whether it applies outside of the systems with which I'm familiar. For an example of this kind of mutex, see the ThreadX RTOS.
Mutexes have ownership, unlike semaphores. Although any thread within the scope of a mutex can get an unlocked mutex and lock access to the same critical section of code, only the thread that locked a mutex should unlock it.
As many folks here have mentioned, a mutex is used to protect a critical piece of code (AKA a critical section). You acquire the mutex (lock), enter the critical section, and release the mutex (unlock), all in the same thread.
When using a semaphore, you can make a thread wait on the semaphore (say thread A) until another thread (say thread B) completes whatever task, and then sets the semaphore for thread A to stop waiting and continue its task.
MUTEX
Until recently, the only sleeping lock in the kernel was the semaphore. Most users of semaphores instantiated a semaphore with a count of one and treated them as a mutual exclusion lock - a sleeping version of the spin-lock. Unfortunately, semaphores are rather generic and do not impose any usage constraints. This makes them useful for managing exclusive access in obscure situations, such as complicated dances between the kernel and userspace. But it also means that simpler locking is harder to do, and the lack of enforced rules makes any sort of automated debugging or constraint enforcement impossible. Seeking a simpler sleeping lock, the kernel developers introduced the mutex. Yes, as you are now accustomed to, that is a confusing name. Let's clarify. The term "mutex" is a generic name to refer to any sleeping lock that enforces mutual exclusion, such as a semaphore with a usage count of one. In recent Linux kernels, the proper noun "mutex" is now also a specific type of sleeping lock that implements mutual exclusion. That is, a mutex is a mutex.
The simplicity and efficiency of the mutex come from the additional constraints it imposes on its users over and above what the semaphore requires. Unlike a semaphore, which implements the most basic of behaviour in accordance with Dijkstra's original design, the mutex has a stricter, narrower use case:
- Only one task can hold the mutex at a time. That is, the usage count on a mutex is always one.
- Whoever locked a mutex must unlock it. That is, you cannot lock a mutex in one context and then unlock it in another. This means that the mutex isn't suitable for more complicated synchronizations between kernel and user-space. Most use cases, however, cleanly lock and unlock from the same context.
- Recursive locks and unlocks are not allowed. That is, you cannot recursively acquire the same mutex, and you cannot unlock an unlocked mutex.
- A process cannot exit while holding a mutex.
- A mutex cannot be acquired by an interrupt handler or bottom half, even with mutex_trylock().
- A mutex can be managed only via the official API: it must be initialized via the methods described in this section and cannot be copied, hand initialized, or reinitialized.
[1] Linux Kernel Development, Third Edition, Robert Love
Mutexes and binary semaphores look like they serve the same use, but in reality they are different.
In the case of a mutex, only the thread that locked it can unlock it. If any other thread comes to lock it, it will wait.
In the case of a semaphore, that's not the case. A semaphore is not tied to a particular thread ID.