proper use of Alamofire queue - swift

Here is the scenario: everything works, but I get hung up on the main queue. I have:
1. a singleton class to manage the API connection. Everything works (execution time aside....)
2. a number of view controllers calling GET APIs via the above singleton class to get the data
3. I normally call the above from either viewDidLoad or viewWillAppear
4. they all work BUT ....
5. if I call a couple of API methods implemented with Alamofire.request() with a closure (well, I need to know when it is time to reload!), one of the two gets hung waiting for the default (main) queue to give it a thread, and it can take up to 20 seconds
6. if I call only one, do my thing and then call a POST API, this latter one ends up in the same situation as (5); it takes a long time to grab a slot in the default queue.
I am not specifying a queue in Alamofire.request(), and it sounds to me like I should, so I tried it. I added a custom concurrent queue to my singleton API class and tried adding that to my Alamofire.request() ..... and that did absolutely nothing. Help please, I must be missing something obvious?!
Here is my singleton API manager (excerpt) class:
class APIManager {
    // bunch of stuff here
    static let sharedInstance = APIManager()
    // more stuff here
    let queue = DispatchQueue(label: "com.teammate.response-queue", qos: .utility, attributes: [.concurrent])
    // more stuff

    func loadSports(completion: @escaping (Error?) -> Void) {
        let parameters: [String: Any?] = [:]
        let headers = getAuthenticationHeaders()
        let url = api_server + api_sport_list

        Alamofire.request(url, method: .get, parameters: parameters, encoding: JSONEncoding.default, headers: headers).responseString(queue: queue) { response in
            if let json = response.result.value {
                if let r = JSONDeserializer<Response<[Sport]>>.deserializeFrom(json: json) {
                    if r.status == 200 {
                        switch r.content {
                        case let content as [Sport]:
                            self.sports = content
                            NSLog("sports loaded")
                            completion(nil)
                        default:
                            NSLog("could not read the sport list payload!")
                            completion(GenericError.reportError("could not read sports payload"))
                        }
                    } else {
                        NSLog("sports not obtained, error %d %@", r.status, r.error)
                        completion(GenericError.reportError(r.error))
                    }
                }
            }
        }
    }
    // more stuff
}
And this is how I call the methods from APIManager once I get the singleton:
api.loadSports { error in
    if error != nil {
        // something bad happened, more code here to handle the error
    } else {
        self.someViewThingy.reloadData()
    }
}
Again, it all works; it is just that if I make multiple Alamofire calls from the same UIViewController, the first is fast and every other call sits forever waiting to get a spot in the queue and run.

UI updates must happen on the main queue, so moving this stuff to a concurrent queue is only going to introduce problems. In fact, if you change the completion handler queue to your concurrent queue and neglect to dispatch UI updates back to the main queue, it is going to make the app look much slower than it really is.
I actually suspect you misunderstand the purpose of the queue parameter of responseString. It doesn't control how the requests are processed (they already happen concurrently with respect to the main queue), but merely which queue the completion handler will run on.
So, a couple of thoughts:
1. If you're going to use your own queue, make sure to dispatch UI updates to the main queue.
2. If you're going to use your own queue and you're going to update your model, make sure to synchronize those updates with any interaction you might be doing on the main queue. Either create a synchronization queue for that or, easier, dispatch all model updates back to the main queue (see the sketch after this list).
3. I see nothing here that justifies the overhead and hassle of running the completion handler on anything other than the main queue. If you don't supply a queue to responseString, it will use the main queue for the completion handlers (but won't block anything, either), and that solves the prior two issues.
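For illustration, here is a sketch of points 1 and 2 above applied to the loadSports method from the question. It reuses the question's own types (JSONDeserializer, Response, Sport, GenericError), so treat it as a structural sketch rather than drop-in code:
Alamofire.request(url, method: .get, parameters: parameters, encoding: JSONEncoding.default, headers: headers).responseString(queue: queue) { response in
    // Parsing can stay on the custom background queue...
    guard let json = response.result.value,
          let r = JSONDeserializer<Response<[Sport]>>.deserializeFrom(json: json),
          r.status == 200,
          let content = r.content as? [Sport] else {
        DispatchQueue.main.async { completion(GenericError.reportError("could not load sports")) }
        return
    }
    // ...but the model update and the completion that triggers reloadData() hop back to main.
    DispatchQueue.main.async {
        self.sports = content
        completion(nil)
    }
}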

Related

Is there a way to enforce serial scheduling of async/await calls similar to a GCD serial queue?

Using Swift's new async/await functionality, I want to emulate the scheduling behavior of a serial queue (similar to how one might use a DispatchQueue or OperationQueue in the past).
Simplifying my use case a bit, I have a series of async tasks I want to fire off from a call-site and get a callback when they complete, but by design I want to execute only one task at a time (each task depends on the previous task completing).
Today this is implemented by placing Operations onto an OperationQueue with maxConcurrentOperationCount = 1, as well as using the dependency functionality of Operation when appropriate. I've built an async/await wrapper around the existing closure-based entry points using await withCheckedContinuation, but I'm trying to figure out how to migrate this entire approach to the new system.
Is that possible? Does it even make sense or am I fundamentally going against the intent of the new async/await concurrency system?
I've dug some into using Actors but as far as I can tell there's no way to truly force/expect serial execution with that approach.
--
More context - This is contained within a networking library where each Operation today is for a new request. The Operation does some request pre-processing (think authentication / token refreshing if applicable), then fires off the request and moves on to the next Operation, thus avoiding duplicate authentication pre-processing when it is not required. Each Operation doesn't technically know that it depends on prior operations but the OperationQueue's scheduling enforces the serial execution.
Adding sample code below:
// Old entry point
func execute(request: CustomRequestType, completion: ((Result<CustomResponseType, Error>) -> Void)? = nil) {
    let operation = BlockOperation {
        // do preprocessing and ultimately generate a URLRequest
        // We have a URLSession instance reference in this context called session
        let dataTask = session.dataTask(with: urlRequest) { data, urlResponse, error in
            completion?(/* Call to a function which processes the response and creates the Result type */)
        }
        dataTask.resume()
    }
    // queue is an OperationQueue with maxConcurrentOperationCount = 1 defined elsewhere
    queue.addOperation(operation)
}

// New entry point which currently just wraps the old entry point
func execute(request: CustomRequestType) async -> Result<CustomResponseType, Error> {
    await withCheckedContinuation { continuation in
        execute(request: request) { (result: Result<CustomResponseType, Error>) in
            continuation.resume(returning: result)
        }
    }
}
A few observations:
For the sake of clarity, your operation queue implementation does not "[enforce] the serial execution" of the network requests. Your operations only wrap the preparation of those requests, not the performance of those requests (i.e. the operation completes immediately and does not wait for the request to finish). So, for example, if your authentication is one network request and the second request requires that to finish before proceeding, this BlockOperation sort of implementation is not the right solution.
Generally, if using operation queues to manage network requests, you would wrap the whole network request and response in a custom, asynchronous Operation subclass (and not a BlockOperation), at which point you can use operation queue dependencies and/or maxConcurrentOperationCount. See https://stackoverflow.com/a/57247869/1271826 if you want to see what an Operation subclass wrapping a network request looks like. But it is moot, as you should probably just use async-await nowadays.
You said:
I could essentially skip the queue entirely and replace each Operation with an async method on an actor to accomplish the same thing?
No. Actors can ensure sequential execution of synchronous methods (those without await calls, and in those cases, you would not want an async qualifier on the method itself).
But if your method is truly asynchronous, then, no, the actor will not ensure sequential execution. Actors are designed for reentrancy. See SE-0306 - Actors » Actor reentrancy.
If you want subsequent network requests to await the completion of the authentication request, you could save the Task of the authentication request. Then subsequent requests could await that task:
actor NetworkManager {
    let session: URLSession = ...
    var loginTask: Task<Bool, Error>?

    func login() async throws -> Bool {
        loginTask = Task { () -> Bool in
            let _ = try await loginNetworkRequest()
            return true
        }
        return try await loginTask!.value
    }

    func someOtherRequest(with value: String) async throws -> Foo {
        let isLoggedIn = try await loginTask?.value ?? false
        guard isLoggedIn else {
            throw URLError(.userAuthenticationRequired)
        }
        return try await foo(for: createRequest(with: value))
    }
}
Perhaps this is unrelated, but if you are introducing async-await, I would advise against withCheckedContinuation. Obviously, on iOS 15 (or macOS 12) and later, I would use the new async URLSession methods. If you need to go back to iOS 13, for example, I would use withTaskCancellationHandler and withCheckedThrowingContinuation. See https://stackoverflow.com/a/70416311/1271826.
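For reference, a minimal sketch of the iOS 15+ route, simplified to URLRequest in and Data out (mapping to your CustomRequestType/CustomResponseType is up to you):
// On iOS 15+/macOS 12+, URLSession exposes async methods directly, so no
// continuation wrapper is needed and Task cancellation is propagated for you.
func execute(request: URLRequest) async -> Result<Data, Error> {
    do {
        let (data, _) = try await URLSession.shared.data(for: request)
        return .success(data)
    } catch {
        return .failure(error)
    }
}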

How do I wait for a download to complete before continuing?

I have this block of code. It fetches data from the API and adds it to a locationDetails array, which is part of a singleton.
private func DownloadLocationDetails(placeID: String) {
    let request = AF.request(GoogleAPI.shared.getLocationDetailsLink(placeID: placeID))
    request.responseJSON { (data) in
        guard let detail = try? JSONDecoder().decode(LocationDetailsBase.self, from: data.data!),
              let result = detail.result else {
            print("Something went wrong fetching nearby locations.")
            return
        }
        DownloadManager.shared.locationDetails.append(result)
    }
}
This next block of code is the one in question. I'm creating a caching system of sorts that only downloads new information and retains any old information, both to save calls to the API and for performance. The line DownloadLocationDetails(placeID: placeID) is a problem for me because, while waiting for the download to complete, it will keep looping and making unnecessary API calls. How do I effectively manage this?
func GetLocationDetail(placeID: String) -> LocationDetail {
    for location in locationDetails {
        if location.place_id == placeID { return location }
    }
    DownloadLocationDetails(placeID: placeID)
    return GetLocationDetail(placeID: placeID)
}
I expect this GetLocationDetail(....) to be called whenever a user interacts with an interface object, so how do I also ensure that the view that calls this is properly notified that the download is complete?
I attempted using a closure, but I can't get it to return the way I want it to. I have a property on the singleton that I want to set to this value so that it can be accessed globally. I am also considering using GCD, but I'm not sure how to structure that.
Generally the pattern for something like this is to store the request object you created in DownloadLocationDetails so you can check whether one is already active before making another call. If you only want to support one at a time, it's as simple as keeping a bare reference to the request object, but you could also keep a dictionary of request objects keyed off the placeID (and you probably want to think about a maximum request count and queuing up additional requests).
Then the trick is to get notified when the given request object completes. There are a couple of ways you could do this, such as keeping a list of callbacks to invoke when it completes, but the easiest would probably be to refactor the code a bit so that you always update your UI when the request completes, something like:
private func DownloadLocationDetails(placeID: String) {
    let request = AF.request(GoogleAPI.shared.getLocationDetailsLink(placeID: placeID))
    request.responseJSON { (data) in
        guard let detail = try? JSONDecoder().decode(LocationDetailsBase.self, from: data.data!),
              let result = detail.result else {
            print("Something went wrong fetching nearby locations.")
            return
        }
        DownloadManager.shared.locationDetails.append(result)
        // Notify the UI to refresh for placeID
    }
}
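The idea from the first paragraph (keep track of the in-flight download per placeID and queue up callbacks to run when it finishes) could look roughly like the sketch below. DetailsCache and its members are invented names, not the asker's types, and it assumes all calls happen on the main thread:
import Foundation

// Stub standing in for the app's real model type.
struct LocationDetail {
    let place_id: String
}

final class DetailsCache {
    static let shared = DetailsCache()

    private var details: [String: LocationDetail] = [:]              // finished downloads
    private var pending: [String: [(LocationDetail?) -> Void]] = [:] // callbacks waiting on an in-flight download

    func locationDetail(for placeID: String, completion: @escaping (LocationDetail?) -> Void) {
        // 1. Already cached: return immediately.
        if let cached = details[placeID] {
            completion(cached)
            return
        }
        // 2. A download for this placeID is already in flight: just queue the callback.
        if pending[placeID] != nil {
            pending[placeID]?.append(completion)
            return
        }
        // 3. Otherwise start exactly one download.
        pending[placeID] = [completion]
        fetchDetails(placeID: placeID) { [weak self] result in
            guard let self = self else { return }
            if let result = result { self.details[placeID] = result }
            // Flush every callback that queued up while the download was running.
            self.pending.removeValue(forKey: placeID)?.forEach { $0(result) }
        }
    }

    // Placeholder for the actual Alamofire call in DownloadLocationDetails.
    private func fetchDetails(placeID: String, completion: @escaping (LocationDetail?) -> Void) {
        DispatchQueue.main.async { completion(nil) }
    }
}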

Race condition in unit tests

I'm currently testing a number of classes that do network stuff like REST API calls, and a Realm database is mutated in the process. When I run all the different tests I have at once, race conditions appear (but of course, when I run them one by one, they all pass). How can I reliably make the tests pass?
I have tried to call the mentioned functions in a GCD block like this:
DispatchQueue.main.async {
self.function.start()
}
One of my tests is still failing, so I guess the above didn't work. I have enabled Thread Sanitizer, and it reports, from time to time, that race conditions appear.
I can't post code, so I'm looking for conceptual solutions.
Typically this calls for some form of dependency injection: an internally exposed var for the DispatchQueue, a default argument in the function that takes the queue, or a constructor argument. You just need some way to pass in a test queue that dispatches the event when you need it to.
DispatchQueue.main.async will schedule the block asynchronously with respect to the callee on the main queue, and therefore isn't guaranteed to have run by the time you make an assertion.
Example (disclaimer: I'm typing from memory so it might not compile but it gives the idea):
// In test code.
// (DispatchQueue can't be conformed to directly, so assume a small protocol, e.g.
// `Dispatching`, that declares async(block:) and that DispatchQueue is extended to adopt.)
struct TestQueue: Dispatching {
    // make sure to implement other necessary protocol methods
    func async(block: @escaping () -> Void) {
        // you can even have some different behavior for when to execute the block.
        // also you can pass XCTestExpectations to this TestQueue to be fulfilled if necessary.
        block()
    }
}
// In source code. In test, pass the TestQueue as the first argument
func doSomething(queue: Dispatching = DispatchQueue.main, completion: @escaping () -> Void) {
    queue.async(block: completion)
}
Other methods of testing async and eliminating race conditions revolve around craftily fulfilling an XCTestExpectation.
If you have access to the completion block that is eventually invoked:
// In source
class Subject {
    func doSomethingAsync(completion: @escaping () -> Void) {
        ...
    }
}
// In test
func testDoSomethingAsync() {
    let subject = Subject()
    let expect = expectation(description: "does something async")
    subject.doSomethingAsync {
        expect.fulfill()
    }
    wait(for: [expect], timeout: 1.0)
    // assert something here
    // or the wait may be good enough as it will fail if not fulfilled
}
If you don't have access to the completion block, it usually means finding a way to inject or subclass a test double that you can set an XCTestExpectation on, and which will eventually fulfill the expectation when the async work has completed.
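For example, a sketch of that injection approach (every type here is invented for illustration):
import XCTest

// Hypothetical protocol the subject depends on instead of a concrete networking class.
protocol NetworkSession {
    func load(_ url: URL, completion: @escaping (Data?) -> Void)
}

// The class under test, simplified.
final class ProfileLoader {
    private let session: NetworkSession
    private(set) var lastData: Data?

    init(session: NetworkSession) { self.session = session }

    func refresh() {
        session.load(URL(string: "https://example.com/profile")!) { [weak self] data in
            self?.lastData = data
        }
    }
}

// Test double: completes synchronously with canned data and fulfills an expectation.
final class SpySession: NetworkSession {
    var loadExpectation: XCTestExpectation?

    func load(_ url: URL, completion: @escaping (Data?) -> Void) {
        completion(Data())
        loadExpectation?.fulfill()
    }
}

final class ProfileLoaderTests: XCTestCase {
    func testRefreshStoresData() {
        let spy = SpySession()
        spy.loadExpectation = expectation(description: "session was asked to load")

        let loader = ProfileLoader(session: spy)
        loader.refresh()

        wait(for: [spy.loadExpectation!], timeout: 1.0)
        XCTAssertNotNil(loader.lastData)
    }
}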

Realm notifications registration while in write transaction

I understand that you cannot register a Realm .observe block on an object or collection if the Realm is in a write transaction.
This is easier to manage if everything is happening on the main thread; however, I run into this exception often because I prefer to hand my JSON parsing off to a background thread. That works great because I don't have to bog down the main thread, and with Realm's beautiful notification system I can get notified of all modifications as long as I have already registered to listen for those changes.
Right now, if I am about to add an observation block I check to make sure my Realm is not in a write transaction like this:
guard let realm = try? Realm(), !realm.isInWriteTransaction else {
    return
}
self.myToken = myRealmObject.observe({ [weak self] (change) in
    // Do whatever
})
This successfully guards against the exception. However, I never get a chance to re-register this token unless I get a little creative.
Does the Realm team have any code examples/suggestions for a better pattern to avoid this exception? Any tricks I'm missing to successfully register the token?
In addition to the standard function, I use an extension on Results to avoid this in general. The issue popped up when our data load grew bigger and bigger.
While we are now rewriting our observe logic, this extension is an interim solution to avoid the crashes in the first place.
The idea is simple: when currently in a write transaction, wait briefly and try again.
import Foundation
import RealmSwift

extension Results {
    public func safeObserve(on queue: DispatchQueue? = nil,
                            _ block: @escaping (RealmSwift.RealmCollectionChange<RealmSwift.Results<Element>>) -> Void)
        -> RealmSwift.NotificationToken {
        // If currently in a write transaction, wait briefly and call it again
        if self.realm?.isInWriteTransaction ?? false {
            DispatchQueue.global().sync {
                Thread.sleep(forTimeInterval: 0.1) // Better to have some delay than a crash, hm?
            }
            return safeObserve(on: queue, block)
        }
        // Aight, we can proceed to call Realm's observe function
        else {
            return self.observe(on: queue, block)
        }
    }
}
Then call it like
realmResult.safeObserve({ [weak self] (_: RealmCollectionChange<Results<AbaPOI>>) in
    // Do anything
})
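If the busy-wait feels too hacky, a non-blocking variant of the same idea is to retry on the next main-queue pass and hand the token back through a callback. This is only a sketch (deferredObserve is not a Realm API, and it assumes registration happens on the main thread, as Realm notification registration normally does):
import Foundation
import RealmSwift

extension Results {
    /// Sketch: defer the registration until the current write transaction has been
    /// committed, then deliver the NotificationToken through the callback.
    public func deferredObserve(
        _ block: @escaping (RealmCollectionChange<Results<Element>>) -> Void,
        onToken: @escaping (NotificationToken) -> Void
    ) {
        if self.realm?.isInWriteTransaction ?? false {
            // Still inside a write transaction: try again on the next run-loop pass.
            DispatchQueue.main.async {
                self.deferredObserve(block, onToken: onToken)
            }
        } else {
            onToken(self.observe(block))
        }
    }
}
Called like:
realmResult.deferredObserve({ change in
    // Do anything
}, onToken: { token in
    self.myToken = token // keep the token alive somewhere
})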

Why is my NSOperation not cancelling?

I have this code to add an NSOperation instance to a queue:
let operation = NSBlockOperation()
operation.addExecutionBlock({
    self.asyncMethod() { (result, error) in
        if operation.cancelled {
            return
        }
        // etc
    }
})
operationQueue.addOperation(operation)
When the user leaves the view that triggered the above code, I cancel the operation by doing
operationQueue.cancelAllOperations()
When testing cancellation, I'm 100% sure cancel is executing before the async method returns, so I expect operation.cancelled to be true. Unfortunately this is not happening and I can't figure out why.
I'm executing the cancellation in viewWillDisappear.
EDIT
asyncMethod contains a network operation that runs on a different thread; that's why the callback is there: to handle the network operation's result. The network operation is performed deep in the class hierarchy, but I want to handle NSOperations at the root level.
From Apple's NSOperation documentation:
Calling the cancel method of this object sets the value of this property to YES. Once canceled, an operation must move to the finished state.
Canceling an operation does not actively stop the receiver's code from executing. An operation object is responsible for calling this method periodically and stopping itself if the method returns YES.
You should always check the value of this property before doing any work towards accomplishing the operation's task, which typically means checking it at the beginning of your custom main method. It is possible for an operation to be cancelled before it begins executing or at any time while it is executing. Therefore, checking the value at the beginning of your main method (and periodically throughout that method) lets you exit as quickly as possible when an operation is cancelled.
import Foundation

let operation1 = NSBlockOperation()
let operation2 = NSBlockOperation()
let queue = NSOperationQueue()

operation1.addExecutionBlock { () -> Void in
    repeat {
        usleep(10000)
        print(".", terminator: "")
    } while !operation1.cancelled
}
operation2.addExecutionBlock { () -> Void in
    repeat {
        usleep(15000)
        print("-", terminator: "")
    } while !operation2.cancelled
}

queue.addOperation(operation1)
queue.addOperation(operation2)

sleep(1)
queue.cancelAllOperations()
Try this simple example in a playground.
If it is really important to run other asynchronous code, try this:
operation.addExecutionBlock({
    if operation.cancelled {
        return
    }
    self.asyncMethod() { (result, error) in
        // etc
    }
})
It's because you are doing the work wrong: you cancel the operation after it has already executed.
Check this code; the blocks are executed on a background thread. The first operation is cancelled before it starts executing, so it is removed from the queue.
Swift 4
let operationQueue = OperationQueue()
operationQueue.qualityOfService = .background

let ob1 = BlockOperation {
    print("ExecutionBlock 1. Executed!")
}
let ob2 = BlockOperation {
    print("ExecutionBlock 2. Executed!")
}

operationQueue.addOperation(ob1)
operationQueue.addOperation(ob2)
ob1.cancel()

// ExecutionBlock 2. Executed!
Swift 2
let operationQueue = NSOperationQueue()
operationQueue.qualityOfService = .Background

let ob1 = NSBlockOperation()
ob1.addExecutionBlock {
    print("ExecutionBlock 1. Executed!")
}
let ob2 = NSBlockOperation()
ob2.addExecutionBlock {
    print("ExecutionBlock 2. Executed!")
}

operationQueue.addOperation(ob1)
operationQueue.addOperation(ob2)
ob1.cancel()

// ExecutionBlock 2. Executed!
The Operation does not wait for your asyncMethod to finish; it completes as soon as its block returns, because you wrap your asynchronous network call inside a synchronous block.
NSOperation is designed to give you more advanced async handling than just calling performSelectorInBackground. It is meant to move complex and long-running work to the background without blocking the main thread. A good article on typical NSOperation usage can be found here:
http://www.raywenderlich.com/19788/how-to-use-nsoperations-and-nsoperationqueues
For your particular use case, it does not make sense to use an NSOperation here; instead you should just cancel your running network request, for example along the lines of the sketch below.
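A rough sketch of that approach, assuming the network layer can be changed to hand back the URLSessionTask it starts (NetworkClient and asyncMethod below are stand-ins, not the asker's actual code):
import UIKit

// Stand-in network layer; the only change from the question's asyncMethod is that
// it returns the URLSessionTask it starts so callers can cancel it.
final class NetworkClient {
    static let shared = NetworkClient()

    @discardableResult
    func asyncMethod(completion: @escaping (Data?, Error?) -> Void) -> URLSessionTask {
        let task = URLSession.shared.dataTask(with: URL(string: "https://example.com/api")!) { data, _, error in
            completion(data, error)
        }
        task.resume()
        return task
    }
}

final class SomeViewController: UIViewController {
    private var activeTask: URLSessionTask?

    override func viewDidLoad() {
        super.viewDidLoad()
        activeTask = NetworkClient.shared.asyncMethod { [weak self] data, error in
            _ = self // handle the response here
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Cancelling the task stops the actual network work, which
        // operationQueue.cancelAllOperations() never did.
        activeTask?.cancel()
        activeTask = nil
    }
}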
It does not make sense to put an asynchronous function into a block with NSBlockOperation. What you probably want is a proper subclass of NSOperation as a concurrent operation which executes an asynchronous workload. Subclassing NSOperation correctly is, however, not as easy as it should be.
You may take a look at this reusable subclass for NSOperation for an example implementation.
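A heavily simplified sketch of such a concurrent subclass (no locking around the state flags, which a production implementation would need; see the linked answer for a complete version):
import Foundation

// Sketch: a concurrent Operation subclass that wraps a network request, so that
// cancellation and queue dependencies cover the whole request, not just the code
// that starts it.
final class NetworkOperation: Operation {
    private let url: URL
    private var task: URLSessionDataTask?

    private var _executing = false
    private var _finished = false

    init(url: URL) {
        self.url = url
        super.init()
    }

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        guard !isCancelled else {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        task = URLSession.shared.dataTask(with: url) { [weak self] data, response, error in
            // Handle data / error here, then mark the operation finished.
            self?.finish()
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel() // actually stops the in-flight request
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}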
I am not 100% sure what you are looking for, but maybe what you need is to pass the operation as a parameter into asyncMethod() and test for the cancelled state in there?
operation.addExecutionBlock({
    asyncMethod(operation) { (result, error) in
        // Result code
    }
})
operationQueue.addOperation(operation)

func asyncMethod(operation: NSBlockOperation, fun: ((Any, Any) -> Void)) {
    // Do stuff...
    if operation.cancelled {
        // Do something...
        return // <- Or whatever makes sense
    }
}