Mutex in RTOSes in this specific situation

Consider the following code:
/*----------------------------------------------------------------------------
First Thread
*---------------------------------------------------------------------------*/
void Thread1 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread1_Functions;
        osMutexRelease(mutex);
    }
}
/*----------------------------------------------------------------------------
Second Thread
*---------------------------------------------------------------------------*/
void Thread2 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread2_Functions;
        osMutexRelease(mutex);
    }
}
As far as I've noticed from RTOS scheduling, the RTOS assigns a specific time slice to each task, and after that time is over it switches to another task.
Within that time slice, the task's infinite loop may repeat several times before the slice ends.
Assume the task body finishes in less than half of its time slice; then it has time to run once more within the same slice.
After releasing the mutex in the last line, Task1 will then acquire the mutex again before Task2 can. Am I right?
Now assume the timer tick occurs while the MCU is running Thread1_Functions for the second time. Task2 can't run, because the mutex is owned by Task1, so the RTOS runs Task1 again. And if the timer tick happens to land inside Thread1_Functions every time, then Task2 never gets a chance to run. Am I right?

First, let me clear up the scheduling method you described. You said, "RTOS assign a specific time to each task and after this time is over, it switches to the other task." This scheduling method is commonly called "time slicing", and not every RTOS uses it all the time. Time slicing may be used for tasks that have the same priority (or if the RTOS does not support task priorities). But if the tasks have different priorities, then the scheduler will not use time slicing and will instead schedule according to task priority.
But let's assume that the two tasks in your example have the same priority and the scheduler is time-slicing.
1. Thread1 runs and gets the mutex.
2. Thread1's time slice expires and the scheduler switches to Thread2.
3. Thread2 attempts to get the mutex but blocks, since Thread1 already owns the mutex.
4. The scheduler switches back to Thread1, since Thread2 is blocked.
5. Thread1 releases the mutex.
When the mutex is released, the scheduler should switch to any higher priority task that is waiting for the mutex. But since Thread2 is the same priority, let's assume the scheduler does not switch and Thread1 continues to run within its time slice.
6. Thread1 attempts to get the mutex again.
In your scenario Thread1 successfully gets the mutex again, and this could result in Thread2 never being able to run. To prevent this from happening, the mutex service should prioritize requests for the mutex. Mutex requests from higher priority tasks receive higher priority, and mutex requests from equal priority tasks should be first come, first served. In other words, the mutex service should put requests from equal priority tasks into a queue. Remember, Thread2 already has a pending request for the mutex (step 3 above). So when Thread1 attempts to get the mutex again (step 6), Thread1's request should be queued behind the earlier request from Thread2. And when Thread1's second request for the mutex gets queued behind the request from Thread2, the scheduler should block Thread1 and switch to Thread2, giving the mutex to Thread2.
Update: The above is just an idea for how an unspecified RTOS might handle the situation in order to avoid starving Thread2. You didn't mention a specific RTOS until your comment below. I don't know whether Keil RTX works like I described above. And now I'm wondering what your question really is.
Are you asking what Keil RTX will do in this situation? I'm not sure. You'll have to look at the code for osMutexRelease() to see whether it switches to a task of the same priority. Also look at osMutexWait() to see how it orders waiting tasks of the same priority.
Or are you stating that Keil RTX allows Thread2 to starve in this situation, and asking how to fix it? To fix it, you could call osThreadYield() after releasing the mutex, like this:
void Thread1 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread1_Functions;
        osMutexRelease(mutex);
        osThreadYield();   /* let an equal-priority task (Thread2) run */
    }
}

ISR execution in a non-preemptive system

In a non-preemptive system, after an ISR finishes execution, will the interrupted task continue execution even if a higher-priority task was activated?
This answer is specific to FreeRTOS, and may not be relevant to other RTOS'es.
FreeRTOS is preemptive by default. However, it can also be configured to be non-preemptive by the config option in FreeRTOSConfig.h
#define configUSE_PREEMPTION 0
Normally, a return from ISR does not trigger a context switch. But in preemptive systems it's often desirable, so in most FreeRTOS examples, you see portYIELD_FROM_ISR(xHigherPriorityTaskWoken); at the end of the ISR, which triggers a context switch if xHigherPriorityTaskWoken is pdTRUE.
xHigherPriorityTaskWoken is initialized to pdFALSE at the start of the ISR (manually, by the user), and operations which can cause a context switch, such as vTaskNotifyGiveFromISR(), xQueueSendToBackFromISR(), etc., take it as an argument and set it to pdTRUE if a context switch is required after the system call.
In a non-preemptive configuration, you simply pass NULL to such system calls instead of xHigherPriorityTaskWoken, and do not call portYIELD_FROM_ISR() at the end of the ISR. In this case, even if a higher priority task is awakened by the ISR, execution returns to the currently running task and remains there until this task yields or makes a blocking system call.
You can mix the ISR yield mechanism with either preemption setting. For example, you can force a context switch (preemption) from an ISR even when configUSE_PREEMPTION is 0, but this may cause problems if the interrupted/preempted task doesn't expect it to happen, so I don't recommend it.

Mutating conditional in DispatchQueue.main.asyncAfter behavior

I have a question regarding the behaviour of DispatchQueue, particularly how asyncAfter behaves if you use a conditional on some published var that might change before the completion handler runs.
Let's say that when the DispatchQueue call is made, viewModel.someBool is true, but sometime during these 3.5 seconds a function that takes quite some time is called and sets viewModel.someBool to false. Will the DispatchQueue always wait until all previous code is done executing, or is there any scenario in which the completion handler can run "in between" some other block of code's execution? All code is being run on the main thread, but I am still uncertain whether this could cause bugs.
DispatchQueue.main.asyncAfter(deadline: .now() + 3.5) {
    if viewModel.someBool {
        // do something
    }
}
Will the DispatchQueue always wait until all previous code is done executing, or is there any scenario in which the completion handler can run "in between" some other block of codes execution?
It depends on whether it's a serial queue or a concurrent queue. A serial queue will finish task1 before starting task2; a concurrent queue may run them in parallel. The main queue is serial, so you are good there, but...
You said yourself, "sometime during these 3.5 seconds a function is called that sets viewModel.someBool to false". What if that "some time" is 1 nanosecond after the delayed task was picked up and started running? Now your function that changes viewModel.someBool to false has to wait for your delayed task to complete.
So either this should be OK for your code (which is preferable, since such a strong dependency on order, especially in UI, usually means some design issues), or you need to guarantee the order in your code.
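If you do need to guarantee the order, one option is to make the delayed work cancellable. Here is a minimal sketch, reusing your viewModel.someBool and the standard DispatchWorkItem API:

let work = DispatchWorkItem {
    if viewModel.someBool {
        // do something
    }
}
DispatchQueue.main.asyncAfter(deadline: .now() + 3.5, execute: work)

// Later, when the state changes and the delayed action no longer applies:
work.cancel()   // a work item cancelled before it starts running is simply skipped

That way the function that flips someBool can also cancel the pending work, so the race described above disappears.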

Swift: Does DispatchQueue.global(qos: .userInitiated).async lock the main thread?

I am currently trying to resolve a 0x8BADF00D watchdog termination.
faultingThread is 0, which I suppose is the main thread. However, I don't think the bulk of the work I am doing actually happens on the main thread.
In a function that does run on the main thread I do
DispatchQueue.global(qos: .userInitiated).async {
    // ... heavy work
    // and after the work is done I do
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        // ... displaying heavy work
    }
}
Is there an obvious fault in my logic? I thought the .userInitiated would leave the main thread, especially on .async.
"exception" : {"codes":"0x0000000000000000, 0x0000000000000000","rawCodes":[0,0],"type":"EXC_CRASH","signal":"SIGKILL"},
"termination" : {"flags":6,"code":2343432205,"namespace":"FRONTBOARD","reasons":["<RBSTerminateContext| domain:10 code:0x8BADF00D explanation:scene-update watchdog transgression: application<com.test>:579 exhausted real (wall clock) time allowance of 10.00 seconds","ProcessVisibility: Background","ProcessState: Running","WatchdogEvent: scene-update","WatchdogVisibility: Background","WatchdogCPUStatistics: (",""Elapsed total CPU time (seconds): 21.740 (user 21.740, system 0.000), 99% CPU",",""Elapsed application CPU time (seconds): 9.832, 45% CPU"",") reportType:CrashLog maxTerminationResistance:Interactive>"]},
"faultingThread" : 0,
The pattern here is correct (though a less aggressive QoS would be well advised). Your watchdog problem, where the main thread was blocked, rests elsewhere. But dispatching expensive work to a background queue, and then dispatching UI/model updates back to the main queue, is the correct procedure.
A word of caution: this general pattern can block the main thread in certain scenarios. Specifically, if you have thread explosion (e.g., more than 64 tasks dispatched to that global queue), it can block or deadlock. GCD has a very limited thread pool (64 at this point), and if you exceed this, subsequent attempts to dispatch to that queue will block until the queue frees up. This is generally only a problem where you are using locks, semaphores, or otherwise waiting. Unfortunately, there is not enough in your code snippet for us to diagnose the problem in this particular case.
So, if you have thread explosion, you should refactor to constrain the maximum degree of concurrency. Operation queues have maxConcurrentOperationCount to facilitate that (see the sketch below). GCD has concurrentPerform, which will not exceed the maximum number of threads permitted to run at any given time. The Swift 5.5 cooperative thread pool also constrains parallelism to a reasonable limit. Combine has "max publishers" for controlling the degree of concurrency. In the past, we might have used semaphores with a non-zero initial value to constrain the degree of concurrency (though nowadays we would tend to use one of the aforementioned contemporary solutions). There are lots of approaches that mitigate thread explosion.
All of this assumes that the problem rests with thread explosion (which is the likely culprit if the nested async dispatches are blocking). If you do not have thread explosion, then you simply have something else, completely unrelated, blocking the main thread.
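For illustration, a minimal sketch of the first of those mitigations, constraining concurrency with an OperationQueue (the sleep is a stand-in for whatever heavy work you are actually doing):

import Foundation

let workQueue = OperationQueue()
workQueue.maxConcurrentOperationCount = 4   // cap how many operations run at once

for item in 0..<100 {
    workQueue.addOperation {
        // stand-in for the heavy work; at most 4 of these run concurrently,
        // so the global-queue thread pool can never be exhausted
        Thread.sleep(forTimeInterval: 0.1)
        print("finished item \(item)")
    }
}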
QoS only determines the priority at which the system schedules tasks for execution.
You can always check a current thread with Thread.current:
DispatchQueue.global(qos: .userInitiated).async {
    print(Thread.current.description)
}
Outputs:
<NSThread: 0x6000005bde00>{number = 7, name = (null)}
The main thread has number = 1 and name = main, so your code runs on a background thread and you can do heavy work on it.

priority of Dispatch Queues in Swift 3

I have read the tutorial about GCD and Dispatch Queue in Swift 3
But I'm really confused about the order of synchronous and asynchronous execution, and about the main queue versus background queues.
I know that if we use sync then we execute things one after the previous one, and if we use async then we can use QoS to set their priority, but what about this case?
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")
    let queue2 = DispatchQueue(label: "com.appcoda.myqueue2")

    for i in 1000..<1005 {
        print(i)
    }

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }

    queue2.sync {
        for i in 100..<105 {
            print(i)
        }
    }
}
The outcome shows that we ignore the asynchronous execution. I know queue2 should be completed before queue1 since it's synchronous execution, but why do we ignore the asynchronous execution, and
what is the actual difference between async, sync, and the so-called main queue?
You say:
The outcome shows that we ignore the asynchronous execution. ...
No, it just means that you didn't give the asynchronously dispatched code enough time to get started.
I know queue2 should be completed before queue1 since it's synchronous execution ...
First, queue2 might not complete before queue1. It just happens to. Make queue2 do something much slower (e.g. loop through a few thousand iterations rather than just five) and you'll see that queue1 can actually run concurrently with respect to what's on queue2. It just takes a few milliseconds to get going and the stuff on your very simple queue2 is finishing before the stuff on queue1 gets a chance to start.
Second, this behavior is not technically because it's synchronous execution. It's just that async takes a few milliseconds to get its stuff running on some worker thread, whereas the synchronous call, because of optimizations I won't bore you with, gets started more quickly.
but why we ignore the asynchronous execution ...
We don't "ignore" it. It just takes a few milliseconds to get started.
and what is the actual difference between async, sync and so-called main queue?
"Async" merely means that the current thread may carry on and not wait for the dispatched code to run on some other thread. "Sync" means that the current thread should wait for the dispatched code to finish.
The "main thread" is a different topic and simply refers to the primary thread that is created for driving your UI. In practice, the main thread is where most of your code runs, basically running everything except that which you manually dispatch to some background queue (or code that is dispatched there for you, e.g. URLSession completion handlers).
sync and async behave differently even on the same thread/queue. To see the difference, run this code:
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished")   // prints immediately; async returns without waiting

    queue1.sync {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished")   // prints only after the sync block has completed
}
The main queue is the thread the entire user interface (UI) runs on.
First of all, I prefer the term "delayed" rather than "ignored" for the code execution, because all the code in your question is executed.
QoS is an enum: the first case is the highest priority and the last is the lowest. When you don't specify a priority you get a queue with default priority, and default sits in the middle:
userInteractive
userInitiated
default
utility
background
unspecified
That said, you have two synchronous for-in loops and one asynchronous one, and the order of execution follows from the position of the loops and the kind of dispatch (sync/async), because here we have three different queues. (Following the tutorial you linked, queuesWithQoS() could be launched in viewDidAppear, so we can suppose it runs on the main queue.)
The code shows the creation of two queues with default priority, so the sequence of execution will be:
1. the for-in loop with 1000..<1005 on the main queue
2. the synchronous queue2 with default priority
3. the asynchronous queue1 (not ignored, simply delayed) with default priority
The main queue always has the highest priority; it is where all the UI instructions are executed.
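If you want to see the priorities in action rather than relying on the default, pass a QoS when creating the queues; a small sketch (the labels here are made up):

let low = DispatchQueue(label: "com.example.low", qos: .background)
let high = DispatchQueue(label: "com.example.high", qos: .userInitiated)

low.async { print("background work") }        // scheduled at the lowest priority
high.async { print("user-initiated work") }   // favored by the scheduler under load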

Grand Central Dispatch async vs sync [duplicate]

This question already has answers here: Difference between DispatchQueue.main.async and DispatchQueue.main.sync (4 answers)
I'm reading the docs on dispatch queues for GCD, and they say the queues are FIFO, so I am wondering what effect this has on async/sync dispatches.
From my understanding, async executes things in the order it receives them, while sync executes them serially.
But when you write your GCD code, you decide the order in which things happen, so as long as you know what's going on in your code, you should know the order in which things execute.
My questions are: where is the benefit of async here? Am I missing something in my understanding of these two things?
The first answer isn't quite complete, unfortunately. Yes, sync will block and async will not, however there are additional semantics to take into account. Calling dispatch_sync() will also cause your code to wait until each and every pending item on that queue has finished executing, also making it a synchronization point for said work. dispatch_async() will simply submit the work to the queue and return immediately, after which it will be executed "at some point" and you need to track completion of that work in some other way (usually by nesting one dispatch_async inside another dispatch_async - see the man page for example).
sync means the function WILL BLOCK the current thread until it has completed, async means it will be handled in the background and the function WILL NOT BLOCK the current thread.
If you want serial execution of blocks, check out the creation of a serial dispatch queue. For example:
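A quick sketch (the label is made up): a queue created with DispatchQueue(label:) is serial by default, so blocks submitted to it run one at a time, in FIFO order:

let serial = DispatchQueue(label: "com.example.serial")   // serial by default

serial.async { print("first") }
serial.async { print("second") }   // starts only after "first" has finished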
From the man page:
FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion.
See dispatch_semaphore_create(3) for more information about dispatch semaphores. The actual implementation of the dispatch_sync() function may be optimized and differ from the above description.
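To make that description concrete, here is a sketch (with made-up names) of a sync call built from async plus a semaphore; the real dispatch_sync() is optimized and does not literally work this way:

import Dispatch

func mySync(on queue: DispatchQueue, _ block: @escaping () -> Void) {
    let done = DispatchSemaphore(value: 0)
    queue.async {
        block()         // run the block on the target queue
        done.signal()   // mark it complete
    }
    done.wait()         // block the caller until the block has finished
}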
Tasks can be performed synchronously or asynchronously.
A synchronous function returns control to the current queue only after the task is finished; it blocks the queue and waits until the task is done.
An asynchronous function returns control to the current queue right after the task has been sent for execution on a different queue; it doesn't wait until the task is finished, and it doesn't block the queue.
Also, only with asynchronous dispatch can you add a delay, e.g. asyncAfter(deadline: .now() + 10) { ... }.