I'm having trouble with the pattern below. I need to synchronously wait for an initial async request, whose completion block in turn makes a series of async calls, and each of those calls needs to wait for the previous one to finish before it can start.
The code below fires all of the nested requests at once, which is not what I want.
let semaphore = DispatchSemaphore(value: 0)
self.getThings { (things) -> Void in
    for thing in things {
        self.getSomething { (somevalue) -> Void in
        }
    }
    semaphore.signal()
}
semaphore.wait()
So, what I've tried is to add another semaphore inside the for loop, but this has the effect that the nested requests are never carried out - it waits indefinitely for the semaphore2 signal, which never comes. How do I fix this?
let semaphore = DispatchSemaphore(value: 0)
self.getThings { (things) -> Void in
    for thing in things {
        let semaphore2 = DispatchSemaphore(value: 0)
        self.getSomething { (somevalue) -> Void in
            semaphore2.signal()
        }
        semaphore2.wait()
    }
    semaphore.signal()
}
semaphore.wait()
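The question doesn't say which queue this runs on or where getThings/getSomething deliver their completions, so the following is only a minimal sketch of the usual fix, under the assumption that the callbacks arrive on some queue other than the one doing the waiting (for example, the main queue). The idea is to do all of the waiting on a private background queue so the semaphores never block the queue that delivers the callbacks; Thing is a placeholder element type.

let workerQueue = DispatchQueue(label: "serial.requests") // hypothetical private queue
workerQueue.async {
    // 1. Block the worker (not the callback queue) until the initial request finishes.
    var allThings: [Thing] = []
    let outer = DispatchSemaphore(value: 0)
    self.getThings { things in
        allThings = things
        outer.signal()
    }
    outer.wait()

    // 2. Run the nested requests strictly one after another.
    for thing in allThings {
        let inner = DispatchSemaphore(value: 0)
        self.getSomething { somevalue in
            print("got \(somevalue) for \(thing)") // stand-in for real handling
            inner.signal()
        }
        inner.wait()
    }
}

If semaphore2.wait() in the original code is executed on the same queue that getSomething uses to call back, that callback can never run and the wait never ends, which matches the indefinite hang described above.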
I'm trying to wait synchronously for AVAssetExportSession.exportAsynchronously to complete, and here's what I've got so far:
func foo() {
    let semaphore = DispatchSemaphore(value: 0)
    session.exportAsynchronously {
        defer {
            semaphore.signal()
        }
        // ...
    }
    semaphore.wait()
}
I was wondering, is there any possibility that blocking the current thread while exportAsynchronously is running will cause a deadlock?
In other words, is exportAsynchronously guaranteed to run on another DispatchQueue?
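Nothing in the question says which thread foo() is called on, so here is a purely defensive sketch (the precondition and the 600-second timeout are assumptions for the example, not part of any AVFoundation guarantee): keep the blocking wait off the main thread and bound it with a timeout.

func foo() {
    precondition(!Thread.isMainThread, "don't block the main thread waiting for an export")
    let semaphore = DispatchSemaphore(value: 0)
    session.exportAsynchronously {
        defer { semaphore.signal() }
        // ...
    }
    // Bound the wait so a stalled export can't block this thread forever.
    if semaphore.wait(timeout: .now() + 600) == .timedOut {
        session.cancelExport() // give up rather than hang indefinitely
    }
}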
I'd like to efficiently implement this behaviour:
A function can be asked to run by the user. Because this function is also called repeatedly and automatically by a timer, I'd like to make sure the function returns immediately whenever it is already running.
In pseudo code:
var isRunning = false

func process() {
    guard isRunning == false else { return }
    isRunning = true
    defer {
        isRunning = false
    }
    // doing the job
}
I am aware of the semaphore concept:
let isRunning = DispatchSemaphore(value: 1)

func process() {
    // But this blocks and then passes through, rather than returning
    // immediately, when another call is already in progress (i.e. the count is zero).
    isRunning.wait()
    defer {
        isRunning.signal()
    }
    // doing the job
}
How would you implement this behaviour with a semaphore, or with any other solution?
You can use wait(timeout:) with a timeout value of now() to test the semaphore. If the semaphore count is zero then this returns .timedOut, otherwise it returns .success (and decreases the semaphore count).
let isRunning = DispatchSemaphore(value: 1)

func process() {
    guard isRunning.wait(timeout: .now()) == .success else {
        return // Still processing
    }
    defer {
        isRunning.signal()
    }
    // doing the job
}
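A minimal usage sketch under the question's setup; the Worker type, the timer interval, and the sleep standing in for the real job are all invented for the example:

import Foundation

final class Worker {
    private let isRunning = DispatchSemaphore(value: 1)

    func process() {
        guard isRunning.wait(timeout: .now()) == .success else {
            return // a previous call is still doing the job
        }
        defer { isRunning.signal() }
        Thread.sleep(forTimeInterval: 2) // stand-in for the real work
    }
}

let worker = Worker()

// The automatic, repeated calls from a timer...
let timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { _ in
    DispatchQueue.global().async { worker.process() }
}

// ...and a user-triggered call: if a timer-driven call is still running,
// this one returns immediately instead of queueing up behind it.
DispatchQueue.global().async { worker.process() }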
I'm reviewing some Alamofire sample Retrier code:
func should(_ manager: SessionManager, retry request: Request, with error: Error, completion: @escaping RequestRetryCompletion) {
    lock.lock() ; defer { lock.unlock() }

    if let response = request.task.response as? HTTPURLResponse, response.statusCode == 401 {
        requestsToRetry.append(completion)

        if !isRefreshing {
            refreshTokens { [weak self] succeeded, accessToken, refreshToken in
                guard let strongSelf = self else { return }
                strongSelf.lock.lock() ; defer { strongSelf.lock.unlock() }
                ...
            }
        }
    } else {
        completion(false, 0.0)
    }
}
I don't follow how you can have lock.lock() on the first line of the function and then also have the same call, strongSelf.lock.lock(), within the closure passed to refreshTokens.
If the first lock is not released until the end of the should method, when the deferred unlock is executed, then how does the second strongSelf.lock.lock() successfully execute while the first lock is held?
The trailing closure of refreshTokens, where this second lock()/unlock() pair appears, runs asynchronously. This is because the closure is @escaping and is called from within a responseJSON handler inside the refreshTokens routine. So the should method will have performed its deferred unlock by the time the closure of refreshTokens is actually called.
Having said that, this isn't the most elegant code I've seen: the utility of the lock is unclear, and the risk of deadlocking depends heavily on the implementation details of other routines. It looks like it's OK here, but I don't blame you for raising an eyebrow at it.
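A stripped-down illustration of that timing; the names and the one-second delay are invented for the demo and have nothing to do with Alamofire itself:

import Foundation

let lock = NSLock()

func refreshTokens(completion: @escaping (Bool) -> Void) {
    // Simulates a network call: the completion runs later, on another queue.
    DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
        completion(true)
    }
}

func should() {
    lock.lock(); defer { lock.unlock() } // released when should() returns
    refreshTokens { succeeded in
        // By the time this runs, should() has already returned and unlocked,
        // so taking the lock again here does not deadlock.
        lock.lock(); defer { lock.unlock() }
        print("refreshed:", succeeded)
    }
    // should() returns here; the deferred unlock fires before the closure runs.
}

should()
RunLoop.main.run(until: Date().addingTimeInterval(2)) // keep the demo process alive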
I am currently playing around with Grand Central Dispatch and discovered a class called DispatchWorkItem. The documentation seems a little incomplete, so I am not sure I'm using it the right way. I created the following snippet and expected something different: I expected that the item would be cancelled after calling cancel on it, but the iteration continues for some reason. Any ideas what I am doing wrong? The code seems fine to me.
@IBAction func testDispatchItems() {
    let queue = DispatchQueue.global(qos: .userInitiated)

    let item = DispatchWorkItem { [weak self] in
        for i in 0...10000000 {
            print(i)
            self?.heavyWork()
        }
    }

    queue.async(execute: item)

    queue.asyncAfter(deadline: .now() + 2) {
        item.cancel()
    }
}
GCD does not perform preemptive cancellations. So, to stop a work item that has already started, you have to test for cancellation yourself. In Swift, cancel the DispatchWorkItem; in Objective-C, call dispatch_block_cancel on the block you created with dispatch_block_create. You can then test whether it was cancelled with isCancelled in Swift (dispatch_block_testcancel in Objective-C).
func testDispatchItems() {
    let queue = DispatchQueue.global()
    var item: DispatchWorkItem?

    // create work item
    item = DispatchWorkItem { [weak self] in
        for i in 0 ... 10_000_000 {
            if item?.isCancelled ?? true { break }
            print(i)
            self?.heavyWork()
        }
        item = nil // resolve strong reference cycle of the `DispatchWorkItem`
    }

    // start it
    queue.async(execute: item!)

    // after five seconds, stop it if it hasn't already
    queue.asyncAfter(deadline: .now() + 5) {
        item?.cancel()
        item = nil
    }
}
Or, in Objective-C:
- (void)testDispatchItem {
    dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

    static dispatch_block_t block = nil; // either static or property

    __weak typeof(self) weakSelf = self;
    block = dispatch_block_create(0, ^{
        for (long i = 0; i < 10000000; i++) {
            if (dispatch_block_testcancel(block)) { break; }
            NSLog(@"%ld", i);
            [weakSelf heavyWork];
        }
        block = nil;
    });

    // start it
    dispatch_async(queue, block);

    // after five seconds, stop it if it hasn't already
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        if (block) { dispatch_block_cancel(block); }
    });
}
There is no asynchronous API where calling a "Cancel" method will interrupt a running operation. In every case, a "Cancel" method only records that cancellation was requested so the operation can find out about it; the operation must check for this from time to time and stop doing more work by itself.
I don't know the API in question, but typically it would be something like
for i in 0...10000000 {
    if self?.cancelled ?? true {
        break
    }
    print(i)
    self?.heavyWork()
}
DispatchWorkItem without DispatchQueue
let workItem = DispatchWorkItem {
    // write your code here
}

workItem.cancel() // to stop it
DispatchWorkItem with DispatchQueue
let workItem = DispatchWorkItem {
    // write your code here
}

DispatchQueue.main.async(execute: workItem)

workItem.cancel() // to stop it
Execute
workItem.perform() // to execute the work item synchronously on the current thread
workItem.wait()    // to block the current thread until the work item finishes
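Putting those pieces together, a minimal runnable sketch; the queue label and the one-second sleep are invented for the demo:

import Foundation

let workItem = DispatchWorkItem {
    Thread.sleep(forTimeInterval: 1) // stand-in for real work
    print("work item finished")
}

let queue = DispatchQueue(label: "demo.queue") // hypothetical queue
queue.async(execute: workItem)

// wait() blocks until the item has run (or was cancelled before it started).
// Calling cancel() after the closure has begun does not interrupt it; the
// closure itself has to check isCancelled, as shown in the answers above.
workItem.wait()
print("cancelled:", workItem.isCancelled)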
I want to enhance the code below: when I click the "submitData" button, the added code should cancel the completion handler.
func returnUserData(completion: (result: String) -> Void) {
    for index in 1...10000 {
        print("\(index) times 5 is \(index * 5)")
    }
    completion(result: "END")
}

func test() {
    self.returnUserData({ (result) -> () in
        print("OK")
    })
}

@IBAction func submitData(sender: AnyObject) {
    self.performSegueWithIdentifier("TestView", sender: self)
}
Can you tell me how to do this?
You can use an NSOperation subclass for this. Put your calculation inside the main method, but periodically check cancelled and, if it's set, break out of the calculation.
For example:
class TimeConsumingOperation: NSOperation {
    var completion: (String) -> ()

    init(completion: (String) -> ()) {
        self.completion = completion
        super.init()
    }

    override func main() {
        for index in 1...100_000 {
            print("\(index) times 5 is \(index * 5)")
            if cancelled { break }
        }

        if cancelled {
            completion("cancelled")
        } else {
            completion("finished successfully")
        }
    }
}
Then you can add the operation to an operation queue:
let queue = NSOperationQueue()
let operation = TimeConsumingOperation { (result) -> () in
    print(result)
}
queue.addOperation(operation)
And, you can cancel that whenever you want:
operation.cancel()
This is, admittedly, a fairly contrived example, but it shows how you can cancel a time-consuming calculation.
Many asynchronous patterns have their own built-in cancellation logic, eliminating the need for the overhead of an NSOperation subclass. If you are trying to cancel something that already supports cancellation (e.g. NSURLSession, CLGeocoder, etc.), you don't have to go through this work. But if you're really trying to cancel your own algorithm, an NSOperation subclass handles it quite gracefully.
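For instance, a data task's built-in cancellation looks like this (sketched with the modern URLSession spelling; the URL is a placeholder):

import Foundation

let url = URL(string: "https://example.com/large-file")! // placeholder URL
let task = URLSession.shared.dataTask(with: url) { data, response, error in
    if let error = error as NSError?, error.code == NSURLErrorCancelled {
        print("request was cancelled")
        return
    }
    print("received \(data?.count ?? 0) bytes")
}
task.resume()

// Later, e.g. from a button handler: no subclassing needed,
// the task stops itself and the completion receives NSURLErrorCancelled.
task.cancel()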