I am currently trying to resolve a 0x8BADF00D (watchdog) crash.
faultingThread is 0, which I take to be the main thread. However, I don't think the bulk of the work I am doing actually happens on the main thread.
In a function that does run on the main thread, I do:
DispatchQueue.global(qos: .userInitiated).async {
    // ... heavy work
    // and after the work is done:
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        // ... displaying heavy work
    }
}
Is there an obvious fault in my logic? I thought .userInitiated work would stay off the main thread, especially when dispatched with .async.
"exception" : {"codes":"0x0000000000000000, 0x0000000000000000","rawCodes":[0,0],"type":"EXC_CRASH","signal":"SIGKILL"},
"termination" : {"flags":6,"code":2343432205,"namespace":"FRONTBOARD","reasons":["<RBSTerminateContext| domain:10 code:0x8BADF00D explanation:scene-update watchdog transgression: application<com.test>:579 exhausted real (wall clock) time allowance of 10.00 seconds","ProcessVisibility: Background","ProcessState: Running","WatchdogEvent: scene-update","WatchdogVisibility: Background","WatchdogCPUStatistics: (",""Elapsed total CPU time (seconds): 21.740 (user 21.740, system 0.000), 99% CPU",",""Elapsed application CPU time (seconds): 9.832, 45% CPU"",") reportType:CrashLog maxTerminationResistance:Interactive>"]},
"faultingThread" : 0,
The pattern here is correct (though a lower QoS, such as .utility, would be well advised for long-running work). Your watchdog problem, where the main thread was blocked, lies elsewhere. But dispatching expensive processing to a background queue, and then dispatching UI/model updates back to the main queue, is the correct procedure.
A word of caution: this general pattern can block the main thread in certain scenarios. Specifically, if you have thread explosion (e.g., more than 64 tasks dispatched to that global queue), you can block or deadlock. GCD has a very limited worker thread pool (64 at this point), and if you exceed that, subsequent attempts to dispatch to that queue will block until a worker thread is freed up. This is generally only a problem where you are using locks, semaphores, or otherwise waiting. Unfortunately, there is not enough in your code snippet for us to diagnose the problem in this particular case.
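For illustration, a minimal sketch of how thread explosion can arise (the loop bound and the sleep are placeholders, not your code): each of these tasks blocks a GCD worker thread, and once the pool is exhausted, later dispatches cannot start until earlier ones finish.
import Foundation

for i in 0 ..< 100 {
    DispatchQueue.global(qos: .userInitiated).async {
        Thread.sleep(forTimeInterval: 5) // simulates blocking work that holds a worker thread
        print("task \(i) done")
    }
}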
So, if you have thread explosion, you should refactor to constrain the maximum degree of concurrency. Operation queues have maxConcurrentOperationCount to facilitate that. GCD has concurrentPerform, which will not exceed the maximum number of threads permitted to run at any given time. The Swift 5.5 cooperative thread pool also constrains parallelism to a reasonable limit. Combine has "max publishers" for controlling the degree of concurrency. In the past, we might have used non-zero semaphores to constrain the degree of concurrency (though nowadays we would tend to use one of the aforementioned contemporary solutions). There are lots of approaches that mitigate thread explosion.
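As one example, a minimal sketch of the operation queue approach (the queue, the limit of four, and the loop bound are illustrative, not from your code):
import Foundation

let processingQueue = OperationQueue()
processingQueue.maxConcurrentOperationCount = 4 // never more than four tasks in flight

for index in 0 ..< 100 {
    processingQueue.addOperation {
        print("processing item \(index)") // placeholder for the heavy work
    }
}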
All of this assumes that the problem rests with thread explosion (which is the likely culprit if the nested async dispatches are blocking). If you do not have thread explosion, then you simply have something else, completely unrelated, blocking the main thread.
QoS only determines the priority at which the system schedules tasks for execution; it does not change which queue or thread the work runs on.
You can always check the current thread with Thread.current:
DispatchQueue.global(qos: .userInitiated).async {
    print(Thread.current.description)
}
Outputs:
<NSThread: 0x6000005bde00>{number = 7, name = (null)}
The main thread has number = 1 and name = main, so your code here runs on a background thread, and you can do heavy work on it.
Related
There are many examples of how to force code to run on the main thread. I have the opposite need, for unit-testing purposes: I want to exclude the main thread from taking on some work.
Why do I need this: I would like to test that a particular function runs correctly, specifically when it is simultaneously called from two or more background threads (while the main thread is free and available). This is because the function itself makes use of the main thread, while the typical use case is that it will be called from background threads, possibly concurrently.
I already have one test case where one of the calling threads is the main thread. But I also want to test the case where the main thread is not busy, when two or more other threads call the function.
The problem is that there seems to be no way to leave the main thread free. For instance, even when the queue is defined with qos: .background, the main thread was still used:
private let queue = DispatchQueue(label: "bgqueue", qos: .background, attributes: .concurrent)

let iterations = 10
DispatchQueue.concurrentPerform(iterations: iterations) { _ in
    queue.async {
        callMyFunction() // still may run on main thread, even with 1-2 iterations
    }
}
The only approach I can think of is blocking all threads at the same spot (using a CountDownLatch, for example), and then proceeding to the function from all threads but main:
let latch = CountDownLatch(count: iterations)
DispatchQueue.concurrentPerform(iterations: iterations) { _ in
    latch.countdown()
    latch.await()
    if !Thread.isMainThread {
        callMyFunction()
    }
}
The problems here are that (1) I have to make sure iterations is less than the number of available threads, and (2) it feels wrong to block the main thread.
So is there any better solution? How can I exclude the main thread from picking up DispatchQueue work within unit tests?
As long as you're not dispatching to the main queue (or some other queue that uses the main queue as its “target”), async will never use the main thread. There are some interesting optimizations with sync which affect which thread is used (which isn’t relevant here), but not for async. The asynchronously dispatched task will run on a worker thread associated with that QoS, not the main thread.
So, go ahead and insert a thread assertion in your test to make sure it’s not on the main thread, and I think you’ll find that it is fine:
XCTAssertFalse(Thread.isMainThread, "Shouldn't be on main thread")
That having been said, you allude to the fact that the "function itself makes use of main thread". If that is the case, you can easily deadlock if your test blocks the main thread while waiting for the called functions, which themselves need the main thread to finish their processing.
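For example, a minimal sketch of the deadlock to avoid (someAsyncMethod stands in for your function, and is assumed to dispatch work back to DispatchQueue.main internally):
let semaphore = DispatchSemaphore(value: 0)

someAsyncMethod {
    semaphore.signal()
}
semaphore.wait() // if this line runs on the main thread, the main-queue work inside someAsyncMethod can never run: deadlock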
You can often avoid these sorts of unit test deadlocks by using “expectations”.
func testWillNotDeadlock() throws {
    let expectation = self.expectation(description: "testWillNotDeadlock")

    someAsyncMethod {
        expectation.fulfill()
    }

    waitForExpectations(timeout: 100)
}
In this case, although someAsyncMethod may have used the main thread, we didn't deadlock because we made sure the test used expectations rather than blocking the current thread.
So, bottom line, if you're testing some asynchronous methods that must use the main thread, make sure that the tests don't block, but rather that they use expectations.
There are, admittedly, other sources of potential problems. For example, you are calling async from within your concurrentPerform iterations. You need to be careful with that pattern, because you can easily exhaust all of the worker threads with this "thread explosion". The attempt to introduce concurrentPerform (which is often used to solve thread explosion problems) won't work if the code in the closure is calling async.
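A minimal sketch of the alternative, assuming callMyFunction() is the function under test from your snippet: do the work directly inside concurrentPerform rather than dispatching async from its closure. Note that concurrentPerform uses the calling thread as one of its workers, so hop to a global queue first to keep the main thread out of the pool:
DispatchQueue.global(qos: .userInitiated).async {
    DispatchQueue.concurrentPerform(iterations: 10) { _ in
        callMyFunction() // runs on worker threads only, and the degree of concurrency stays bounded
    }
}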
Considering this trivial piece of code
DispatchQueue.global().async {
    print("2")
}
print("1")
we can say that the output will be as follows:
1
2
Are there any circumstances under which the order of execution will be different (disregarding the kind of queue used)? And can such a different order be forced manually?
You said:
we can say that the output will be as follows ...
No, at best you can only say that the output will often/frequently be in that order, but it is not guaranteed to be so.
The code snippet is dispatching code asynchronously to a global, concurrent queue, which will run it on a separate worker thread, and, in the absence of any synchronization, you have a classic race condition between the current thread and that worker thread. You have no guarantee of the sequence of these print statements, though, in practice, you will frequently see 1 before 2.
This is one of the common questions at tech interviews, and for some reason interviewers expect the conclusion that the order of execution is constant here. Since that was not aligned with my understanding, I decided to clarify.
Your understanding is correct, that the sequence of these two print statements is definitely not guaranteed.
Are there any circumstances under which order of execution will be different
A couple of thoughts:
By adjusting queue priorities, for example, you can change the likelihood that 1 will appear before 2. But, again, not guaranteed.
There are a variety of mechanisms to guarantee the order.
You can use a serial queue ... I gather you didn't want to consider using another/different queue, but it is generally the right solution, so any discussion on this topic would be incomplete without that scenario (see the sketch after this list);
You can use dispatch group ... you can notify on the global queue when the current queue satisfies the group;
You can use dispatch semaphores ... semaphores are a classic answer to the question, but IMHO semaphores should be used sparingly, as it's so easy to make mistakes ... plus blocking threads is never a good idea;
For the sake of completeness, we should mention that you really can use any synchronization mechanism, such as locks.
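For instance, a minimal sketch of the serial-queue approach (the queue label is arbitrary): a serial queue runs its tasks one at a time in FIFO order, so "1" is guaranteed to print before "2".
let serialQueue = DispatchQueue(label: "com.example.serial")

serialQueue.async {
    print("1")
}
serialQueue.async {
    print("2")
}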
I did a quick test too. Normally I do a UI update after the code execution on the global queue is complete, and I would usually put it at the end of the code block (step 2).
But today I suddenly found that even if I put that main-queue code at the beginning of the global-queue block (step 1), it is still executed after all of the global-queue code has completed.
DispatchQueue.global().async {
    // step 1
    DispatchQueue.main.async {
        print("1. on the main thread")
    }

    // code on the global queue
    print("1. off the main thread")
    print("2. off the main thread")

    // step 2
    DispatchQueue.main.async {
        print("2. on the main thread")
    }
}
Here is the output:
1. off the main thread
2. off the main thread
1. on the main thread
2. on the main thread
I have read the tutorial about GCD and dispatch queues in Swift 3, but I'm really confused about the order of synchronous and asynchronous execution, and about the main queue versus background queues.
I know that if we use sync then we execute tasks one after another, and that if we use async then we can use QoS to set their priority, but how about this case?
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")
    let queue2 = DispatchQueue(label: "com.appcoda.myqueue2")

    for i in 1000..<1005 {
        print(i)
    }

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }

    queue2.sync {
        for i in 100..<105 {
            print(i)
        }
    }
}
The outcome shows that we ignore the asynchronous execution. I know queue2 should be completed before queue1 since it's synchronous execution, but why do we ignore the asynchronous execution, and
what is the actual difference between async, sync, and the so-called main queue?
You say:
The outcome shows that we ignore the asynchronous execution. ...
No, it just means that you didn't give the asynchronously dispatched code enough time to get started.
I know queue2 should be completed before queue1 since it's synchronous execution ...
First, queue2 might not complete before queue1. It just happens to. Make queue2 do something much slower (e.g. loop through a few thousand iterations rather than just five) and you'll see that queue1 can actually run concurrently with respect to what's on queue2. It just takes a few milliseconds to get going and the stuff on your very simple queue2 is finishing before the stuff on queue1 gets a chance to start.
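For example, a sketch of that experiment (the labels match your snippet; the loop bound is arbitrary): give queue2 enough work and you will generally see the two queues' output interleave.
let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")
let queue2 = DispatchQueue(label: "com.appcoda.myqueue2")

queue1.async {
    for i in 0..<5 { print("queue1:", i) }
}
queue2.sync {
    for i in 0..<5_000 { print("queue2:", i) } // enough work for queue1 to get started
}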
Second, this behavior is not technically because it's synchronous execution. It's just that async takes a few milliseconds to get its work running on some worker thread, whereas the synchronous call, because of optimizations that I won't bore you with, gets started more quickly.
but why do we ignore the asynchronous execution ...
We don't "ignore" it. It just takes a few milliseconds to get started.
and what is the actual difference between async, sync, and the so-called main queue?
"Async" merely means that the current thread may carry on and not wait for the dispatched code to run on some other thread. "Sync" means that the current thread should wait for the dispatched code to finish.
The "main thread" is a different topic and simply refers to the primary thread that is created for driving your UI. In practice, the main thread is where most of your code runs, basically running everything except that which you manually dispatch to some background queue (or code that is dispatched there for you, e.g. URLSession completion handlers).
sync and async determine whether the current thread waits for the dispatched work to finish. To see the difference, please run this code:
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished") // prints immediately, without waiting for the async loop above

    queue1.sync {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished") // prints only after the sync loop has completed
}
The main queue is the queue on which the entire user interface (UI) runs.
First of all, I prefer the term "delayed" rather than "ignored" for the code execution, because all of the code in your question is executed.
QoS is an enum; the first case means the highest priority, the last one the lowest. When you don't specify any priority you get a queue with default priority, which is in the middle:
userInteractive
userInitiated
default
utility
background
unspecified
That said, you have two synchronous for-in loops and one asynchronous one. The order of execution follows from the position of the loops in the code and the kind of dispatch (sync/async), because here we have three different queues (following the instructions at your link, queuesWithQoS() could be launched in viewDidAppear, so we can assume it runs on the main queue).
The code shows the creation of two queues with default priority, so the sequence of execution will be:
the for-in loop with 1000..<1005 in the main queue
the synchronous queue2 with default priority
the asynchronous queue1 (not ignored, simply delayed) with default priority
The main queue always has the highest priority; it is where all the UI instructions are executed.
Recently I came across this problem, which is pretty similar to the first readers-writers problem:
Implementing an N process barrier using semaphores
I am trying to modify it to make sure that it can be reused and works correctly.
n = the number of threads
count = 0
mutex = Semaphore(1)
barrier = Semaphore(0)

mutex.wait()
count = count + 1
if (count == n) { barrier.signal() }
mutex.signal()

barrier.wait()

mutex.wait()
count = count - 1
barrier.signal()
if (count == 0) { barrier.wait() }
mutex.signal()
Is this correct?
I'm wondering whether there are some mistakes I didn't detect.
Your pseudocode correctly returns barrier back to its initial state. A minor suggestion: replace
barrier.signal()
if (count == 0) { barrier.wait() }
with the (IMHO) more readable

if (count != 0) { barrier.signal() } // if anyone is still pending on the barrier, release them
However, there may be pitfalls in the way the barrier is reused. The described barrier has two phases:
Stop each thread until all of them are pending.
Resume each thread until all of them are running.
There is no protection against mixing the phases: some threads are being resumed while others have already hit the first phase again. For example, say you have a bunch of threads which do some work and then sync up on the barrier. Each thread body would be:
while (!exit_condition) {
    do_some_stuff();
    pend_barrier(); // implementation from current question
}
The programmer expects that the number of calls to do_some_stuff() will be the same for all threads. What may (or may not) happen, depending on timing, is this: the first thread released from the barrier finishes its do_some_stuff() calculations before all threads have left the barrier, so it re-enters the barrier for a second time. As a result it will be released along with the other threads in the current barrier-release iteration, and will make (at least) one more call to do_some_stuff() than they do.
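For reference, here is a minimal Swift sketch of the suggested variant, built on DispatchSemaphore (the class and method names are illustrative, and this sketch deliberately keeps the reuse pitfall described above):
import Foundation

final class ReusableBarrier {
    private let n: Int
    private var count = 0
    private let mutex = DispatchSemaphore(value: 1)   // protects `count`
    private let barrier = DispatchSemaphore(value: 0) // pends arriving threads

    init(threadCount: Int) {
        n = threadCount
    }

    func arrive() {
        mutex.wait()
        count += 1
        if count == n { barrier.signal() } // last thread to arrive opens the barrier
        mutex.signal()

        barrier.wait() // every thread pends here until the barrier opens

        mutex.wait()
        count -= 1
        if count != 0 { barrier.signal() } // cascade: release the next pending thread
        mutex.signal()
    }
}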
Consider the following code:
/*----------------------------------------------------------------------------
First Thread
*---------------------------------------------------------------------------*/
void Thread1 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread1_Functions;
        osMutexRelease(mutex);
    }
}
/*----------------------------------------------------------------------------
Second Thread
*---------------------------------------------------------------------------*/
void Thread2 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread2_Functions;
        osMutexRelease(mutex);
    }
}
As far as I've seen of an RTOS's scheduling, the RTOS assigns a specific time slice to each task, and when that time is over, it switches to the other task.
Within that time slice, the task's infinite loop may repeat several times until the time slice is finished.
Assume the task finishes in less than half of its time slice; then it has time to run through the loop once again.
In that case, after releasing the mutex on the last line, it will acquire the mutex again before task2 can. Am I right?
And assume the timer tick occurs while the MCU is running Thread1_Functions for the second time; then task2 can't run because the mutex is owned by task1, so the RTOS runs task1 again. If the timer tick happens to occur inside Thread1_Functions every time, then task2 has no chance of running. Am I right?
First, let me clear up the scheduling method that you described. You said, "RTOS assign a specific time to each task and after this time is over, it switches to the other task." This scheduling method is commonly called "time slicing". Not all RTOSes necessarily use this method all the time. Time slicing may be used for tasks that have the same priority (or if the RTOS does not support task priorities). But if the tasks have different priorities, then the scheduler will not use time slicing and will instead schedule according to task priority.
But let's assume that the two tasks in your example have the same priority and the scheduler is time-slicing.
Thread1 runs and gets the mutex.
Thread1's time slice expires and the scheduler switches to Thread2.
Thread2 attempts to get the mutex but blocks since Thread1 already owns the mutex.
The scheduler switches back to Thread1 since Thread2 is blocked.
Thread1 releases the mutex.
When the mutex is released, the scheduler should switch to any higher priority task that is waiting for the mutex. But since Thread2 is the same priority, let's assume the scheduler does not switch and Thread1 continues to run within its time slice.
Thread1 attempts to get the mutex again.
In your scenario Thread1 successfully gets the mutex again, and this could result in Thread2 never being able to run. In order to prevent this from happening, the mutex service should prioritize requests for the mutex. Mutex requests from higher-priority tasks receive higher priority, and mutex requests from equal-priority tasks should be served first come, first served. In other words, the mutex service should put requests from equal-priority tasks into a queue. Remember, Thread2 already has a pending request for the mutex (step 3 above). So when Thread1 attempts to get the mutex again (step 6), Thread1's request should be queued behind the earlier request from Thread2. And when Thread1's second request for the mutex gets queued behind the request from Thread2, the scheduler should block Thread1 and switch to Thread2, giving the mutex to Thread2.
Update: The above is just an idea for how an unspecified RTOS might handle the situation in order to avoid starving Thread2. You didn't mention a specific RTOS until your comment below. I don't know whether Keil RTX works like I described above. And now I'm wondering what your question really is.
Are you asking what will Keil RTX do in this situation? I'm not sure. You'll have to look at the code for osMutexRelease() to see whether it switches to a task with the same priority. Also look at osMutexWait() to see how it prioritizes tasks of the same priority.
Or are you stating that Keil RTX allows Thread2 to starve in this situation, and asking how to fix it? To fix this situation you could call osThreadYield() after releasing the mutex. Like this:
void Thread1 (void const *argument)
{
    for (;;)
    {
        osMutexWait(mutex, osWaitForever);
        Thread1_Functions;
        osMutexRelease(mutex);
        osThreadYield();
    }
}