Best practice with asynchronous functions in Swift & Combine

I'm converting my Swift app to use Combine as well as async/await and I'm trying to understand what's the best way to handle interactions between asynchronous functions and the main thread.
Here's an asynchronous function that loads a user:
class AccountManager {
    static func fetchOrLoadUser() async throws -> AppUser {
        if let user = AppUser.current.value {
            return user
        }
        let syncUser = try await loadUser()
        let user = try AppUser(syncUser: syncUser)
        AppUser.current.value = user // [warning]: "Publishing changes from background threads is not allowed"
        return user
    }
}
And a class:
class AppUser {
    static var current = CurrentValueSubject<AppUser?, Never>(nil)
    // ...
}
Note: I chose to use CurrentValueSubject because it allows me to both (1) read this value synchronously whenever I need it and (2) subscribe for changes.
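For example, both usages are short (a minimal sketch):
let user = AppUser.current.value // (1) synchronous read
let cancellable = AppUser.current.sink { user in
    // (2) react to changes
}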
Now, on the line marked above I get the warning Publishing changes from background threads is not allowed, which I understand. I see different ways to solve this issue:
1. Mark the whole AccountManager class as @MainActor
Since most of the work done in asynchronous functions is to wait for network results, I'm wondering if there is an issue with simply running everything on the main thread. Would that cause performance issues or not?
2. Wrap the offending line in DispatchQueue.main.sync
Is that a reasonable solution, or would that cause threading problems like deadlocks?
3. Use DispatchGroup with enter(), leave() and wait()
Like in this answer. Is there any difference at all from solution #2? This solution needs more lines of code, so I'd rather avoid it if possible; I prefer clean code.

You can wrap the call in an await MainActor.run { } block. I think this is the most Swifty way of doing that.
You should not use Dispatch mechanisms while using Swift Concurrency, even though I think DispatchQueue.main.async { } is safe to use here.
The @MainActor attribute is safe, but it shouldn't be used on an ObservableObject and could potentially slow down the UI if CPU-bound code runs in a method of the annotated type.
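A minimal sketch of the MainActor.run option applied to the original function:
static func fetchOrLoadUser() async throws -> AppUser {
    if let user = AppUser.current.value {
        return user
    }
    let syncUser = try await loadUser()
    let user = try AppUser(syncUser: syncUser)
    await MainActor.run {
        AppUser.current.value = user // publish on the main thread
    }
    return user
}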

Related

Why does a Task within a @MainActor not block the UI?

Today I refactored a ViewModel for a SwiftUI view to structured concurrency. It fires a network request and, when the request comes back, updates a @Published property to update the UI. Since I use a Task to perform the network request, I have to get back to the MainActor to update my property, and I was exploring different ways to do that. One straightforward way was to use MainActor.run inside my Task, which works just fine. I then tried to use @MainActor, and don't quite understand the behaviour here.
A bit simplified, my ViewModel would look somewhat like this:
class ContentViewModel: ObservableObject {
    @Published var showLoadingIndicator = false

    @MainActor func reload() {
        showLoadingIndicator = true
        Task {
            try await doNetworkRequest()
            showLoadingIndicator = false
        }
    }

    @MainActor func someOtherMethod() {
        // does UI work
    }
}
I would have expected this to not work properly.
First, I expected SwiftUI to complain that showLoadingIndicator = false happens off the main thread. It didn't. So I put in a breakpoint, and it seems even the Task within a @MainActor method is run on the main thread. Why that is is maybe a question for another day; I think I haven't quite figured out Task yet. For now, let's accept this.
So then I would have expected the UI to be blocked during my networkRequest - after all, it is run on the main thread. But this is not the case either. The network request runs, and the UI stays responsive during that. Even a call to another method on the main actor (e.g. someOtherMethod) works completely fine.
Even running something like Task.sleep() within doNetworkRequest will STILL work completely fine. This is great, but I would like to understand why.
My questions:
a) Am I right in assuming a Task within a MainActor does not block the UI? Why?
b) Is this a sensible approach, or can I run into trouble by using @MainActor for dispatching asynchronous work like this?
await is a yield point in Swift. It's where the current Task releases the queue and allows something else to run. So at this line:
try await doNetworkRequest()
your Task will let go of the main queue, and let something else be scheduled. It won't block the queue waiting for it to finish.
This means that after the await returns, it's possible that other code has been run by the main actor, so you can't trust the values of properties or other preconditions you've cached before the await.
Currently there's no simple, built-in way to say "block this actor until this finishes." Actors are reentrant.
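For instance, applying this to the fetchOrLoadUser example from the first question, a cached precondition should be re-checked after the await (a sketch):
@MainActor
static func fetchOrLoadUser() async throws -> AppUser {
    if let user = AppUser.current.value {
        return user
    }
    let syncUser = try await loadUser() // suspension point: other main-actor code can run here
    // Re-check: another task may have set the user while this one was suspended.
    if let user = AppUser.current.value {
        return user
    }
    let user = try AppUser(syncUser: syncUser)
    AppUser.current.value = user
    return user
}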

GCD serial queue-like approach using the Swift async/await API?

I am adopting the new async/await Swift API. Things work great.
Currently in my apps I am using GCD serial queues as a pipeline to enforce tasks to happen serially.
For manipulating data of some critical models in the app I use a serial queue accessible from anywhere in the app:
let modelQueue = DispatchQueue(label: "com.myapp.updatemodelqueue")
Anytime a model needs to modify some model data I dispatch to that queue:
modelQueue.async {
    // ... model updates
}
With the new async/await changes I am making, I still want to force actual model updates to happen serially. So, for example, when I import new model data from the server I want to make sure the updates happen serially.
For example, I may have a call in my model that looks like this:
func updateCarModel() async {
    let data = await getModelFromServer()
    modelQueue.async {
        // update model
    }
}
Writing the function with that pattern, however, would not wait for the model updates to complete, because of the modelQueue.async. I do not want to use modelQueue.sync, to avoid deadlocks.
So then after watching WWDC videos and looking at documentation I implemented this way, leveraging withCheckedContinuation:
func updateCarModel() async {
    let data = await getModelFromServer()
    await withCheckedContinuation { continuation in
        modelQueue.async {
            // update model
            continuation.resume()
        }
    }
}
However, to my understanding withCheckedContinuation is really meant to let us incrementally transition to the new async/await Swift API, so it does not seem to be what I should use as a final approach.
I then looked into actor, but I am not sure how it would allow me to serialize model work around the app the way I did with the static queue shown above.
So, how can I keep enforcing that model updates around the app happen serially, like I used to, while fully adopting the new async/await Swift API and without using withCheckedContinuation?
By making the model an actor, Swift synchronizes access to its shared mutable state. If the model is written like this:
actor Model {
    var data = Data()

    func updateModel(newData: Data) {
        data = newData
    }
}
The updateModel function here is synchronous: its execution is uninterrupted once it's invoked. Because Model is an actor, Swift requires you to treat calls from the outside as if they were asynchronous function calls. You'd have to await, which results in suspension of your active task.
If you wanted to make updateModel async, the code within it would still run synchronously unless you explicitly suspend it by calling await. The order in which multiple updateModel calls execute is not deterministic, but as long as you don't suspend within updateModel, they are guaranteed to execute serially. In that case, there is no point in making updateModel async.
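For example, call sites outside the actor simply await the method, and the runtime serializes access (a minimal sketch, reusing the question's getModelFromServer() helper):
let model = Model()

func updateCarModel() async {
    let newData = await getModelFromServer()
    await model.updateModel(newData: newData) // serialized by the actor
}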
If your update code is synchronous, you can make your model an actor type to synchronize access. Actors in Swift behave similarly to a serial DispatchQueue: they perform only one task at a time, in the order of submission. However, current Swift actors are re-entrant, which means that if you call any async method, the actor suspends the current task until the async function completes and proceeds to process other submitted tasks in the meantime.
If your update code is asynchronous, using an actor might introduce data races. To avoid this, you can wait for non-reentrant actor support in Swift, or you can try this workaround TaskQueue I have created to synchronize between asynchronous tasks, or use other synchronization mechanisms I have also created.
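A sketch of that re-entrancy pitfall, as a hypothetical async variant of the actor above:
actor Model {
    var data = Data()

    // The await below is a suspension point inside the actor, so two
    // concurrent updateModel() calls can interleave: whichever network
    // response arrives last wins, regardless of the order of the calls.
    func updateModel() async {
        let newData = await getModelFromServer()
        data = newData
    }
}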

Is there a library similar to ReentrantLock in Java for Swift?

I am currently working on a function that can be used by multiple threads. The issue is that the function needs to complete first and store the result in the cache. In the meantime, other threads could be calling this function, and I would need them to wait until it is completed. We were able to accomplish this in Java using ReentrantLock; is there a similar library in Swift? I saw that NSRecursiveLock approaches what we are trying to do; however, we want to keep it Swift-only. I have also seen multiple articles, such as this one, that talk about using GCD, but I believe that is for something similar yet different: https://medium.com/@prasanna.aithal/multi-threading-in-ios-using-swift-82f3601f171c
Thank you in advance.
Recursion with locking is always a bit of a pain point. A clean solution would be to refactor your function that requires the lock into an external API that acquires the lock and forwards to an internal API that doesn't. Internally don't call the external API.
A simple example might be something like this (this is almost Swift code - parameters and actual work implementations need to be filled in)
extension DispatchSemaphore
{
    func withLock<R>(_ block: () throws -> R) rethrows -> R
    {
        wait()
        defer { signal() }
        return try block()
    }
}
let myLock = DispatchSemaphore(value: 1)

func recursiveLockingFunction(parameters)
{
    func nonLockingFunc(parameters) {
        if /* some terminating case goes here */ {
            // Do the terminating case
            return
        }
        // Do whatever you need to do to handle the partial problem
        // and reduce the parameters
        nonLockingFunc(reducedParameters)
    }
    myLock.withLock { nonLockingFunc(parameters) }
}
Whether this will work for you depends on your design, but it should work if the only problem is that the function you want to lock is recursive. And it only uses GCD (DispatchSemaphore) to achieve it.
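For illustration, a hypothetical concrete instance of this pattern (the countdown names are mine, not part of the original answer):
let countdownLock = DispatchSemaphore(value: 1)

func lockedCountdown(from n: Int) {
    // The inner function recurses freely; it never touches the lock.
    func countdown(_ n: Int) {
        guard n > 0 else { return }
        print(n)
        countdown(n - 1)
    }
    // Only the outer entry point acquires the lock, so there is no
    // recursive wait() and no deadlock.
    countdownLock.withLock { countdown(n) }
}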

How can I freeze Kotlin objects created in Swift?

I’m using Kotlin-Native with native-mt coroutine support and the Ktor library. I have several suspended functions that take in an object built using a builder pattern. I understand I need to call the suspended functions on the main/UI thread. However, I can’t guarantee that the builder objects will be created on that thread. My understanding is they would need to be frozen before being sent to the main thread to be called with the suspended function. Is that correct?
For instance, this would fail because the query object hasn’t been frozen:
func loadData() {
    DispatchQueue.global(qos: .background).async {
        let query = CustomerQuery().emails(value: ["customer@gmail.com"])
        self.fetchCustomersAndDoSomething(query: query)
    }
}

func fetchCustomersAndDoSomething(query: CustomerQuery) {
    DispatchQueue.main.async {
        self.mylibrary.getCustomers(query: query) { response, err in
            // do something with response
        }
    }
}
If that’s true, am I correct that I would need to add a method to every such object in order to ‘freeze’ it, since the freeze() Kotlin function from Freezing.kt doesn’t seem to be accessible from the Swift code importing my library? This is further complicated by the fact that freezing only applies to the iOS code, as the Android code doesn't need it.
Is there a simpler way to pass in Kotlin objects created by Swift to a suspended function, without requiring that those objects be created on the main thread?
In the Kotlin/Native world, whenever you share objects between threads you have to make sure they are frozen (immutable), unless you plan on making them @ThreadLocal. Android is an exception, since the JVM is not that strict and lets you share mutable objects between threads.
You have two options:
Either expose a freeze() function and use that
freeze() every incoming object in your shared code
Also, if you don't freeze, you'll probably bump into an IncorrectDereferenceException, which means you are trying to share mutable/non-frozen state.
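From the Swift side, the first option might look like this sketch, assuming the shared Kotlin module exposes a hypothetical doFreeze() wrapper around freeze() (freeze() itself isn't visible to Swift):
func loadData() {
    DispatchQueue.global(qos: .background).async {
        let query = CustomerQuery().emails(value: ["customer@gmail.com"])
        query.doFreeze() // hypothetical Kotlin-exposed wrapper around freeze()
        self.fetchCustomersAndDoSomething(query: query) // now safe to cross threads
    }
}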
You don't freeze Swift classes. If CustomerQuery is a Kotlin class, you would need to freeze that.
However, you only need to call suspend functions on the main thread if you rely on the auto-generated Objective-C interface from the Kotlin compiler. We generally recommend not doing that, because you can't control the lifecycle, but that's a whole different discussion.

How to handle multiple network calls in Alamofire

I need to call 2 APIs in a view controller to fetch some data from the server. I want them to start at the same time, but the next step should only be triggered when both of them have returned (no matter whether each succeeded or failed).
I can come up with 2 solutions:
1. Chain them together: call api1, then call api2 in api1's result handler, and wait for api2's result.
2. Set 2 Bool indicator variables and create a check function; if both indicators are true, do the next step. In each API's result handler, set the corresponding indicator variable, then call the check function to decide whether it's good to go.
The first one is not efficient enough, and I can't say the second one is an elegant solution. Does Alamofire have something like combined signals in ReactiveCocoa? Or is there any better solution?
Your assessment is 100% correct. At the moment, the two options you laid out are really the only possible approaches. I agree with you that your second option is much better than the first given your use case.
If you wish to combine ReactiveCocoa with Alamofire, then that's certainly possible, but hasn't been done yet to my knowledge. You could also investigate whether PromiseKit would be able to offer some assistance, but it hasn't been glued together with Alamofire yet either. Trying to combine either of these libraries with the Alamofire response serializers will not be a trivial task by any means.
Switching gears a bit, I don't really think ReactiveCocoa or PromiseKit are very well suited for your use case since you aren't chaining service calls, you are running them in parallel. Additionally, you still need to run all your parsing logic and determine whether each one succeeded or failed and then update your application accordingly. What I'm getting at is that Option 2 is going to be your best bet by far unless you want to go to all the effort of combining PromiseKit or ReactiveCocoa with Alamofire's response serializers.
Here's what I would suggest to keep things less complicated.
import Foundation
import Alamofire

class ParallelServiceCaller {
    var firstServiceCallComplete = false
    var secondServiceCallComplete = false

    func startServiceCalls() {
        let firstRequest = Alamofire.request(.GET, "http://httpbin.org/get", parameters: ["first": "request"])
        firstRequest.responseString { request, response, dataString, error in
            self.firstServiceCallComplete = true
            self.handleServiceCallCompletion()
        }

        let secondRequest = Alamofire.request(.GET, "http://httpbin.org/get", parameters: ["second": "request"])
        secondRequest.responseString { request, response, dataString, error in
            self.secondServiceCallComplete = true
            self.handleServiceCallCompletion()
        }
    }

    private func handleServiceCallCompletion() {
        if self.firstServiceCallComplete && self.secondServiceCallComplete {
            // Handle the fact that you're finished
        }
    }
}
The implementation is really clean and simple to follow. While I understand your desire to get rid of the completion flags and callback function, the other options such as ReactiveCocoa and/or PromiseKit are still going to have additional logic as well and may end up making things more complicated.
Another possible option is to use dispatch groups and semaphores; that really adds complexity, but it could get you much closer to a ReactiveCocoa or PromiseKit styled approach.
I hope that helps shed some light.
DispatchGroup would be a good option for handling multiple requests in parallel when the next step depends on all of them:
func loadData() {
    let dispatchGroup = DispatchGroup()

    // loadDataRequest1/loadDataRequest2 are your own request functions;
    // each is assumed to take a completion handler that fires when the
    // response has come back.
    dispatchGroup.enter()
    loadDataRequest1 {
        // Save your response
        dispatchGroup.leave()
    }

    dispatchGroup.enter()
    loadDataRequest2 {
        // Save your response
        dispatchGroup.leave()
    }

    dispatchGroup.notify(queue: .main) { [weak self] in
        // Process your responses
    }
}
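For completeness, under Swift Concurrency the same start-together/finish-together shape can be expressed with async let; a sketch, assuming hypothetical async throws wrappers fetchDataRequest1() and fetchDataRequest2() exist for the two calls:
func loadData() async throws {
    // Both requests start immediately and run in parallel.
    async let first = fetchDataRequest1()
    async let second = fetchDataRequest2()

    // Suspends until both results are in.
    let (firstResult, secondResult) = try await (first, second)
    // Process firstResult and secondResult
}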