I am trying to find out how to ensure that a mutex is acquired fairly (evenly) by each thread (POSIX threads on Linux).
In my program there is a global queue protected by its own mutex. A couple of writer threads each push one element at a time into the queue, and a single reader thread pulls out a group of elements each time. The result is that the queue always grows larger than its intended limit.
So my question is: how can I ensure that the mutex is acquired fairly by every thread? Any comments will be appreciated!
I am assuming the scenario of two writer threads, one reader thread and a common buffering queue with some buffer limit.
There are a couple of ways of doing this.
Create the reader thread with a higher priority than the writer threads. That way, whenever the lock is released by either writer thread, it will be acquired immediately by the reader thread if the reader is waiting in the scheduler queue along with the other writer thread.
Use a globally synchronized flag and a threshold for the reading and writing conditions (say the queue limit is 10 elements: once that maximum count is reached, the flag lets only the reader thread be scheduled for a certain number of times, and is then released so things work normally again). This keeps the queue from growing past the limit; a sketch of this idea follows below.
Hope you understand both the points.
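For the second point, here is a minimal sketch of the bounded-queue idea (the names and the limit of 10 are just for illustration, not from the question): one mutex plus two condition variables make the writers block while the queue is full, so the queue can no longer grow past the limit no matter how the scheduler interleaves the threads. Compile with -pthread.

    #include <pthread.h>

    #define QUEUE_LIMIT 10

    static int buf[QUEUE_LIMIT];
    static int head = 0, tail = 0, count = 0;   /* ring-buffer state */
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    /* Called by each writer thread, one element at a time. */
    void enqueue(int value)
    {
        pthread_mutex_lock(&lock);
        while (count == QUEUE_LIMIT)            /* full: wait instead of growing */
            pthread_cond_wait(&not_full, &lock);
        buf[tail] = value;
        tail = (tail + 1) % QUEUE_LIMIT;
        count++;
        pthread_cond_signal(&not_empty);        /* wake the reader if it is waiting */
        pthread_mutex_unlock(&lock);
    }

    /* Called by the single reader thread. */
    int dequeue(void)
    {
        pthread_mutex_lock(&lock);
        while (count == 0)                      /* empty: wait for a writer */
            pthread_cond_wait(&not_empty, &lock);
        int value = buf[head];
        head = (head + 1) % QUEUE_LIMIT;
        count--;
        pthread_cond_broadcast(&not_full);      /* let blocked writers continue */
        pthread_mutex_unlock(&lock);
        return value;
    }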
I was wondering whether ProcessorContext.schedule is thread-safe, so that I can spawn a new thread to execute the Punctuator callback.
Also, if a consumer consumes just one partition but we set num.stream.threads=2, does this automatically spawn a new thread for the scheduler?
After trying it a bit, I found the answer may be "no".
Then what is the recommended way to make spawning a new thread for the scheduler thread-safe?
Registering a punctuation will not spawn a new thread. The number of threads used is determined by the num.stream.threads configuration only. Hence, if you register a punctuation, it is executed on the same thread as the topology, and thus it is thread-safe.
If you configure more threads than available input topic partitions, some threads would not get any work assigned, and thus, they would not execute any punctuations.
Suppose multiple async tasks running on a serial queue access the same shared resource; is there any chance we might face a race condition?
Following the comment I added, this is taken from the Apple documentation; the last sentence is the part you are looking for.
Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue. Serial queues are often used to synchronize access to a specific resource.
If you are using a concurrent queue instead you could have a race condition. You can prevent it using dispatch barriers, for example. See Grand Central Dispatch In-Depth: Part 1/2 for more details.
For NSOperation and NSOperationQueue the same applies: an NSOperationQueue can be made serial by setting maxConcurrentOperationCount to 1. In addition, by using dependencies between operations, you can synchronize access to a shared resource.
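If it helps to see the serial-queue guarantee concretely, here is a small sketch against the C libdispatch API (the queue label and the counter are invented for the example). All increments are submitted asynchronously, but because the queue is serial they run one at a time, so the shared counter cannot be corrupted by a race:

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static long counter = 0;                      /* the shared resource */

    static void increment(void *ctx) { counter++; }
    static void report(void *ctx)    { printf("counter = %ld\n", counter); }

    int main(void)
    {
        dispatch_queue_t q =
            dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

        for (int i = 0; i < 1000; i++)
            dispatch_async_f(q, NULL, increment); /* async submit, serial execution */

        /* Runs after everything already queued, so it prints 1000. On a
           concurrent queue you would need dispatch_barrier_async_f around
           the writes to get an equivalent guarantee. */
        dispatch_sync_f(q, NULL, report);
        return 0;
    }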
No, you cannot run into a race condition (see what I did there) when running async tasks on a serial queue, because the type of queue determines how the tasks are executed, while synchrony versus asynchrony only affects the responsiveness of your application while an expensive task completes.
The reason it is easy to run into a race condition on a concurrent queue is that a concurrent queue allows tasks to execute at the same time, so different threads may be "racing" to perform an action and in actuality overwrite the previous thread's work because they are performing the same action. Whereas on a serial queue, tasks are executed one at a time, so two threads can't race to complete a task because everything happens in sequential order. Hope that helps!
Should we use a thread pool for long-running threads or start our own threads? Is there some design pattern?
Unfortunately, it depends. There is no hard and fast rule saying that you should always employ thread pools.
Thread pools offer two main things:
Delegated creation/reuse of threads.
Back-pressure
IMO, it's the back-pressure property that's interesting, but often the most poorly understood. Your machine runs on a limited set of resources. If you have (say) 8 CPU cores and they are all busy working, you would like to signal in some way that adding more work (submitting more tasks) isn't going to help, at least not in terms of latency.
This is the reason java.util.concurrent.ExecutorService implementations allow you to specify a java.util.concurrent.BlockingQueue of your choice. When this queue fills up, submitting threads feel the back-pressure: depending on the saturation policy, further submissions block, run on the caller, or are rejected until the pool has worked through the tasks in progress.
Whether or not to have long-running threads inside the thread pool depends on what they are doing. If a thread is constantly busy (meaning it will never complete), it will permanently occupy a slot in the thread pool, which is kind of pointless.
Regarding delegated creation/reuse of threads: maybe you could have two pools, one for long-running tasks and one for other tasks. Or perhaps a long-running thread pool with a single slot; this will prevent two long-running tasks from running at the same time, provided that is what you want.
As you can see, there is no single good answer. It really boils down to what you are trying to achieve and how you want to use the resources at hand.
I know what a binary semaphore is: it is a flag that is set to 1 by the ISR of an interrupt.
But what is a semaphore when we are using a pre-emptive kernel, say FreeRTOS? Is it the same as a binary semaphore?
it is a flag that is set to 1 by the ISR of an interrupt.
That is neither a complete nor accurate description of a semaphore. What you have described is merely a flag. A semaphore is a synchronisation object; there are three forms provided by a typical RTOS:
Binary Semaphore
Counting Semaphore
Mutual Exclusion Semaphore (Mutex)
In the case of a binary semaphore, there are two operations: give and take. A task taking a semaphore will block (i.e. suspend execution and allow other lower- or equal-priority threads to run) until some other thread or interrupt handler gives the semaphore. Binary semaphores are used to signal between threads and from ISRs to threads. They are often used to implement deferred interrupt handlers, so that the ISR itself can be very short and the handler can benefit from RTOS mechanisms that are not allowed in an ISR (anything that blocks or suspends execution).
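As a rough illustration of that deferred-handler pattern, here is a sketch using the FreeRTOS binary-semaphore API (the task, ISR, and semaphore names are invented for the example; the real ISR entry point depends on the port):

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t xIrqSemaphore;   /* signalled by the ISR */

    /* Deferred interrupt handler: runs as an ordinary task, so it may use
       any RTOS call, block, or take as long as it needs. */
    static void vHandlerTask(void *pvParameters)
    {
        for (;;) {
            /* Sleep (no polling) until the ISR gives the semaphore. */
            if (xSemaphoreTake(xIrqSemaphore, portMAX_DELAY) == pdTRUE) {
                /* ...process the event at task level... */
            }
        }
    }

    /* The real ISR stays as short as possible: it only gives the semaphore. */
    void vExampleISR(void)
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        xSemaphoreGiveFromISR(xIrqSemaphore, &xHigherPriorityTaskWoken);

        /* Switch straight to the handler task on exit if it is now the
           highest-priority ready task. */
        portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
    }

    void vSetupDeferredHandler(void)
    {
        xIrqSemaphore = xSemaphoreCreateBinary();
        xTaskCreate(vHandlerTask, "IrqHandler", configMINIMAL_STACK_SIZE,
                    NULL, 3, NULL);           /* any priority above idle works */
    }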
Multiple threads may block on a single semaphore, but only one of those tasks will wake and take the semaphore. Some RTOSs have a flush operation (VxWorks, for example) that puts all threads waiting on a semaphore in the ready state simultaneously - in which case they will run according to the priority scheduling scheme.
A Counting Semaphore is similar to a Binary Semaphore, except that it can be given multiple times, and tasks may take the semaphore without blocking until the count is zero.
A Mutex is used for resource locking. It is possible to use a binary semaphore for this, but a mutex provides features that make this safer. The operations on a mutex are lock and unlock. When a thread locks a mutex, and another task attempts to lock the same mutex, the second (and any subsequent) task blocks until the first task unlocks it. This can be used to prevent more than one thread accessing a resource (memory or I/O) simultaneously. A thread may lock a mutex multiple times; a count is maintained, so that it must be unlocked an equal number of times before the lock is released. This allows a thread to nest locks.
A special feature of a mutex is that if the thread holding the lock has a lower priority than a task requesting the lock, then the lower-priority task is boosted to the priority of the higher one. This prevents a priority inversion, in which a middle-priority task could preempt the low-priority task holding the lock, increasing the length of time the higher-priority task must wait and thus rendering the scheduling non-deterministic.
The above descriptions are typical; specific RTOS implementations may differ. For example, FreeRTOS distinguishes between a mutex and a recursive mutex, the latter supporting the nestability feature, while the former is marginally more efficient where nesting is not needed.
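As a hedged sketch, resource locking with a FreeRTOS mutex might look like this (the counter and function names are invented; a recursive mutex would instead be created with xSemaphoreCreateRecursiveMutex() and used through xSemaphoreTakeRecursive()/xSemaphoreGiveRecursive() so the same task can nest the lock):

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xResourceMutex;  /* xSemaphoreCreateMutex() at start-up */
    static int iSharedCounter;                /* the resource being protected */

    void vTouchResource(void)
    {
        /* "Lock": block until the mutex is free; priority inheritance
           applies automatically while we hold it. */
        if (xSemaphoreTake(xResourceMutex, portMAX_DELAY) == pdTRUE) {
            iSharedCounter++;                 /* exclusive access */
            xSemaphoreGive(xResourceMutex);   /* "unlock" */
        }
    }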
Semaphores are not just flags, or counts. They support send and wait operations. A user-space thread can wait on a semaphore without unnecessary and unwanted polling and be made ready/running 'immediately' when another thread, or an appropriately-designed driver/ISR, sends a unit.
By 'appropriately-designed driver/ISR', I mean one that can perform a send() operation and then exit via the OS scheduler whenever it needs to set a waiting thread ready/running.
Such a mechanism is vitally important on preemptive kernels because it allows them to achieve very good I/O performance without wasting time, CPU cycles and memory bandwidth on polling. Non-preemptive systems are hopelessly slow, latency-ridden and wasteful at I/O, which is why they are essentially no longer used and why we put up with all the synchronization/locking/queueing issues.
In a multi-threaded process, if one thread is busy on I/O, will the entire process be blocked?
AFAIK, it totally depends on how the programmer manages the threads inside the program.
If another thread exists that is not waiting on I/O, the processor will not sit idle and will start executing that thread. However, if the process is split into threads such that one thread waits for the result of the other, then the entire process will be blocked.
Please comment if more information needs to be added.
Is there any other explanation?
If the process has only one thread, then yes.
If the process has multiple threads, then normally no, provided the operating system supports multithreading at the kernel level.
This question can also be addressed in terms of the underlying implementation of user threads. There are different multithreading models; in order to implement user threads, they have to be mapped to kernel threads:
Many-to-One: Many user threads to one kernel thread
One-to-One: Each user thread is assigned to a kernel thread.
Many-to-Many: Many user threads are multiplexed onto a (smaller or equal) number of kernel threads.
In the many-to-one case, a single blocking operation (system call) within one thread can block the whole process. This disadvantage is not present in the one-to-one model.
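To see the one-to-one behaviour concretely, here is a quick POSIX sketch (purely illustrative; compile with -pthread): one thread blocks in read() on stdin while the main thread keeps running, so the blocking system call stalls only the thread that issued it.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *blocking_io(void *arg)
    {
        char buf[64];
        /* Blocks this thread (only) until something arrives on stdin. */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        printf("I/O thread woke up, read %zd bytes\n", n);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, blocking_io, NULL);

        /* The rest of the process keeps running while the I/O thread is blocked. */
        for (int i = 0; i < 5; i++) {
            printf("main thread still running (%d)\n", i);
            sleep(1);
        }
        pthread_join(t, NULL);
        return 0;
    }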