I would like to wrap a simple callback so that it can be used as a Combine Publisher, specifically the NSPersistentContainer.loadPersistentStores callback, so I can publish when the container is ready to go.
func createPersistentContainer(name: String) -> AnyPublisher<NSPersistentContainer, Error> {
// What goes here?
// Happy path: send output NSPersistentContainer; send completion.
// Not happy path: send failure Error; send completion.
}
For instance, what would the internals of the createPersistentContainer function given above look like, to enable me to do something like this in my AppDelegate?
final class AppDelegate: UIResponder, UIApplicationDelegate {
let container = createPersistentContainer(name: "DeadlyBattery")
.assertNoFailure()
.eraseToAnyPublisher()
// ...
}
Mostly this boils down to, how do you wrap a callback in a Publisher?
As a previous poster, @Ryan, pointed out, the solution is to use the Future publisher.
The problem with using only Future, though, is that it is eager, which means that it starts executing its promise closure at the moment of creation, not when it is subscribed to. The answer to that challenge is to wrap it in the Deferred publisher:
func createPersistentContainer(name: String) -> AnyPublisher<NSPersistentContainer, Error> {
return Deferred {
Future<NSPersistentContainer, Error> { promise in
let container = NSPersistentContainer(name: name)
container.loadPersistentStores { _, error in
if let error = error {
promise(.failure(error))
} else {
promise(.success(container))
}
}
}
}.eraseToAnyPublisher()
}
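For example, a minimal usage sketch (reusing the question's "DeadlyBattery" name); because of Deferred, nothing starts loading until something actually subscribes:
// Usage sketch: the container only starts loading once there is a subscriber.
// Keep the cancellable alive for as long as you need the result.
let cancellable = createPersistentContainer(name: "DeadlyBattery")
    .sink(receiveCompletion: { completion in print("completion:", completion) },
          receiveValue: { container in print("container ready:", container) })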
It seems that Combine's Future is the correct tool for the job.
func createPersistentContainer(name: String) -> AnyPublisher<NSPersistentContainer, Error> {
let future = Future<NSPersistentContainer, Error> { promise in
let container = NSPersistentContainer(name: name)
container.loadPersistentStores { _, error in
if let error = error {
promise(.failure(error))
} else {
promise(.success(container))
}
}
}
return AnyPublisher(future)
}
NSPersistentContainer is just a convenience wrapper around a Core Data stack; you would be better off subscribing at the source:
NotificationCenter.default.publisher(for: .NSPersistentStoreCoordinatorStoresDidChange)
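For example, a bare-bones sketch of subscribing at that level; this only signals that the coordinator's stores changed, and mapping that back to a ready-to-use container is left to you:
// Sketch: emit () whenever the persistent store coordinator's stores change.
let storesChanged = NotificationCenter.default
    .publisher(for: .NSPersistentStoreCoordinatorStoresDidChange)
    .map { _ in () }
    .eraseToAnyPublisher()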
Picture an image loading function with a closure completion. Let's say it returns a token ID that you can use to cancel the asynchronous operation if needed.
@discardableResult
func loadImage(url: URL, completion: @escaping (Result<UIImage, Error>) -> Void) -> UUID? {
if let image = loadedImages[url] {
completion(.success(image))
return nil
}
let id = UUID()
let task = URLSession.shared.dataTask(with: url) { data, response, error in
defer {
self.requests.removeValue(forKey: id)
}
if let data, let image = UIImage(data: data) {
DispatchQueue.main.async {
self.loadedImages[url] = image
completion(.success(image))
}
return
}
if let error = error as? NSError, error.code == NSURLErrorCancelled {
return
}
//TODO: Handle response errors
print(response as Any)
completion(.failure(.loadingError))
}
task.resume()
requests[id] = task
return id
}
func cancelRequest(id: UUID) {
requests[id]?.cancel()
requests.removeValue(forKey: id)
print("ImageLoader: cancelling request")
}
How would we accomplish this (elegantly) with swift concurrency? Is it even possible or practical?
I haven't done much testing on this, but I believe this is what you're looking for. It allows you to simply await an image load, but you can cancel using the URL from somewhere else. It also merges near-simultaneous requests for the same URL so you don't re-download something you're in the middle of.
actor Loader {
private var tasks: [URL: Task<UIImage, Error>] = [:]
func loadImage(url: URL) async throws -> UIImage {
if let imageTask = tasks[url] {
return try await imageTask.value
}
let task = Task {
// Rather than removing here, you could skip that and this would become a
// cache of results. Making that correct would take more work than the
// question asks for, so I won't go into it
defer { tasks.removeValue(forKey: url) }
let data = try await URLSession.shared.data(from: url).0
guard let image = UIImage(data: data) else {
throw DecodingError.dataCorrupted(.init(codingPath: [],
debugDescription: "Invalid image"))
}
return image
}
tasks[url] = task
return try await task.value
}
func cancelRequest(url: URL) {
// Remove, and cancel if it's removed
tasks.removeValue(forKey: url)?.cancel()
print("ImageLoader: cancelling request")
}
}
Calling it looks like:
let image = try await loader.loadImage(url: url)
And you can cancel a request if it's still pending using:
loader.cancelRequest(url: url)
A key lesson here is that it is very natural to access task.value multiple times. If the task has already completed, then it will just return immediately.
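For instance, in this illustrative sketch the second await returns immediately because the task has already finished:
let task = Task { try await loader.loadImage(url: url) }
let first = try await task.value   // suspends until the image is loaded
let second = try await task.value  // the task is already done, so this returns right away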
Return a task in a tuple or other structure.
In the cases where you don't care about the ID, do this:
try await imageTask(url: url).task.value
private var requests: [UUID: Task<UIImage, Swift.Error>] = [:]
func imageTask(url: URL) -> (id: UUID?, task: Task<UIImage, Swift.Error>) {
switch loadedImages[url] {
case let image?: return (id: nil, task: .init { image } )
case nil:
let id = UUID()
let task = Task {
defer { requests[id] = nil }
guard let image = UIImage(data: try await URLSession.shared.data(from: url).0)
else { throw Error.loadingError }
try Task.checkCancellation()
Task { @MainActor in loadedImages[url] = image }
return image
}
requests[id] = task
return (id: id, task: task)
}
}
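And a possible counterpart for when you do care about the ID (a sketch; it assumes the call site lives in the same type, since requests is private):
let (id, task) = imageTask(url: url)
let image = try await task.value
// Later, from somewhere else in the type, if the load should be abandoned:
if let id { requests[id]?.cancel() }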
Is it even possible or practical?
Yes to both.
As I say in a comment, I think you may be missing the fact that a Task is an object you can retain and later cancel. Thus, if you create an architecture where you apply an ID to a task as you ask for the task to start, you can use that same ID to cancel that task before it has returned.
Here's a simple demonstration. I've deliberately written it as Playground code (though I actually developed it in an iOS project).
First, here is a general TimeConsumer class that wraps a single time-consuming Task. We can ask for the task to be created and started, but because we retain the task, we can also cancel it midstream. It happens that my task doesn't return a value, but that's neither here nor there; it could if we wanted.
class TimeConsumer {
var current: Task<(), Error>?
func consume(seconds: Int) async throws {
let task = Task {
try await Task.sleep(for: .seconds(seconds))
}
current = task
_ = await task.result
}
func cancel() {
current?.cancel()
}
}
Now then. In front of my TimeConsumer I'll put a TaskVendor actor. A TimeConsumer represents just one time-consuming task, but a TaskVendor has the ability to maintain multiple time-consuming tasks, identifying each task with an identifier.
actor TaskVendor {
private var tasks = [UUID: TimeConsumer?]()
func giveMeATokenPlease() -> UUID {
let uuid = UUID()
tasks[uuid] = nil
return uuid
}
func beginTheTask(uuid: UUID) async throws {
let consumer = TimeConsumer()
tasks[uuid] = consumer
try await consumer.consume(seconds: 10)
tasks[uuid] = nil
}
func cancel(uuid: UUID) {
tasks[uuid]??.cancel()
tasks[uuid] = nil
}
}
That's all there is to it! Observe how TaskVendor is configured. I can do three things: I can ask for a token (really my actual TaskVendor needn't bother doing this, but I wanted to centralize everything for generality); I can start the task with that token; and, optionally, I can cancel the task with that token.
So here's a simple test harness. Here we go!
let vendor = TaskVendor()
func test() async throws {
let uuid = await vendor.giveMeATokenPlease()
print("start")
Task {
try await Task.sleep(for: .seconds(2))
print("cancel?")
// await vendor.cancel(uuid: uuid)
}
try await vendor.beginTheTask(uuid: uuid)
print("finish")
}
Task {
try await test()
}
What you will see in the console is:
start
[two seconds later] cancel?
[eight seconds after that] finish
We didn't cancel anything; the word "cancel?" signals the place where our test might cancel, but we didn't, because I wanted to prove to you that this is working as we expect: it takes a total of 10 seconds between "start" and "finish", so sure enough, we are consuming the expected time fully.
Now uncomment the await vendor.cancel line. What you will see now is:
start
[two seconds later] cancel?
[immediately!] finish
We did it! We made a cancellable task vendor.
I'm including one possible answer to the question, for the benefit of others. I'll leave the question in place in case someone has another take on it.
The only way I know of for a 'one-shot' async method to return a token before returning the async result is to add an inout argument:
func loadImage(url: URL, token: inout UUID?) async -> Result<UIImage, Error> {
token = UUID()
//...
}
Which we may call like this:
var requestToken: UUID? = nil
let result = await imageLoader.loadImage(url: url, token: &requestToken)
However, this approach and the two-shot solution by @matt both seem fussy from the API design standpoint. Of course, as he suggests, this leads to a bigger question: how do we implement cancellation with Swift concurrency (ideally without too much overhead)? And indeed, using tasks and wrapper objects seems unavoidable, but it certainly seems like a lot of legwork for a fairly simple pattern.
I am trying to organize and design a Combine-based API framework for communicating and reacting to OBS Studio (live-streaming software), using obs-websocket. I've written my own Combine Subject-wrapped Publisher for WebSocket communication, and am now using that to talk with OBS. I've created Publishers for sending request messages to OBS, as well as listening for event messages emitted by OBS. I've also been developing this alongside an actual SwiftUI app that uses the new framework.
While developing the app, I found that there are some complex combinations of requests and event listeners that I would need to use around the app. So, as part of the framework, I built out a few Publishers that merge results from an initial request and event listeners for changes to those properties. Testing these out, they worked great, so I committed to git and attempted to implement them in my app code. There, however, they didn't work consistently. I realized that using the same Publishers in multiple places of the app created duplicate Publishers (because almost all Publishers are structs/value types).
So, I found an article about implementing the .share() operator (https://www.swiftbysundell.com/articles/using-combines-share-operator-to-avoid-duplicate-work/) and I tried it out. Specifically, I set up a system for storing the different publishers that could be recalled while still active. Some of them are keyed by relevant values (like keying by the URL in the article), but others are just single values, as there wouldn't be more than one of that publisher at a time. That worked fine.
class PublisherStore {
typealias ResponsePublisher = AnyPublisher<OBSRequestResponse, Error>
var responsePublishers = [String: ResponsePublisher]()
typealias BatchResponsePublisher = AnyPublisher<OpDataTypes.RequestBatchResponse, Error>
var batchResponsePublishers = [String: BatchResponsePublisher]()
typealias EventPublisher = AnyPublisher<OBSEvent, Error>
var eventPublishers = [OBSEvents.AllTypes: EventPublisher]()
var eventGroupPublishers = [String: EventPublisher]()
var anyOpCode: AnyPublisher<UntypedMessage, Error>? = nil
var anyOpCodeData: AnyPublisher<OBSOpData, Error>? = nil
var allMessagesOfType = [OBSEnums.OpCode: AnyPublisher<OBSOpData, Error>]()
var studioModeState: AnyPublisher<Bool, Error>? = nil
var currentSceneNamePair: AnyPublisher<SceneNamePair, Error>? = nil
var sceneList: AnyPublisher<[OBSRequests.Subtypes.Scene], Error>? = nil
var sceneItemList = [String: AnyPublisher<[OBSRequests.Subtypes.SceneItem], Error>]()
var activeSceneItemList: AnyPublisher<[OBSRequests.Subtypes.SceneItem], Error>? = nil
var sceneItemState = [String: AnyPublisher<SceneItemStatePair, Error>]()
}
Where I started running into issues is in implementing the final part of the article: adding a custom DispatchQueue. What's been confusing me is the placement of the subscribe(on:)/receive(on:) operators, and which ones should use DispatchQueue.main vs. my internal custom queue. Here's what I have in my primary chain that calls one of the custom merged Publishers:
try connectToOBS()
.handleEvents(receiveOutput: { _ in print("Main thread outside before?:", Thread.isMainThread) })
// <1>
.tryFlatMap { _ in try studioModeStatePublisher() } // <- custom Publisher
// <2>
.handleEvents(receiveOutput: { _ in print("Main thread outside after?:", Thread.isMainThread) })
.output(in: 0..<4)
.sink(receiveCompletion: { print("Sink completion:", $0); expectation1.fulfill() },
receiveValue: { _ in })
.store(in: &observers)
Should I be placing .receive(on: DispatchQueue.main) at <1> or <2>? When I put it at <2> or leave it out, I don't get any printouts past the custom publisher. If I put it at <1>, it works, but is that the right way to do it? Here is the code for the custom publisher (sorry for it being a bit messy):
public func getStudioModeStateOnce() throws -> AnyPublisher<Bool, Error> {
return try sendRequest(OBSRequests.GetStudioModeEnabled())
.map(\.studioModeEnabled)
// If error is thrown because studio mode is not active, replace that error with false
.catch { error -> AnyPublisher<Bool, Error> in
guard case Errors.requestResponseNotSuccess(let status) = error,
status.code == .studioModeNotActive else { return Fail(error: error).eraseToAnyPublisher() }
return Just(false)
.setFailureType(to: Failure.self)
.eraseToAnyPublisher()
}
.eraseToAnyPublisher()
}
public func studioModeStatePublisher() throws -> AnyPublisher<Bool, Error> {
// <3>
if let pub = publishers.studioModeState {
return pub
}
// Get initial value
let pub = try getStudioModeStateOnce()
// Merge with listener for future values
.merge(with: try listenForEvent(OBSEvents.StudioModeStateChanged.self, firstOnly: false)
.map(\.studioModeEnabled))
.removeDuplicates()
.receive(on: sessionQueue) // <- this being what Sundell's article suggested.
.handleEvents(receiveCompletion: { [weak self] _ in
self?.publishers.studioModeState = nil
})
.share()
.eraseToAnyPublisher()
// <4>
publishers.studioModeState = pub
return pub
}
Calls at <3> and <4> probably need to be done on the sessionQueue. What would be the best practice for that? The deepest level of publisher that this relies on looks like this:
try self.wsPublisher.send(msg, encodingMode: self.encodingProtocol)
// <5>
.eraseToAnyPublisher()
Should I put .receive(on: sessionQueue) at <5>? Even when the tests work, I'm not sure if I'm doing it right. Sorry for such a long thing, but I tried to add as much detail as possible and tried to bold my questions. Any and all help would be welcomed, and I'd be happy to provide any extra details if needed. Thanks!
Edit: I've realized that it was actually subscribing to the stored Publisher, and it would receive values if the value changed in OBS post-subscription. But receiving the most recent value upon subscription is still the issue. I replaced .share() with the .shareReplay idea from this article: (https://www.onswiftwings.com/posts/share-replay-operator/). Again, it works in my testing (including delays in subscription), but still doesn't receive the most recent value when used in my SwiftUI app. Anyone have any ideas?
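For readers following along, here is a rough sketch of the "share and replay the latest value" idea using only stock Combine operators (the linked article builds a custom operator instead; upstream here stands in for the merged publisher above):
func sharedWithReplay(_ upstream: AnyPublisher<Bool, Error>) -> AnyPublisher<Bool, Error> {
    upstream
        .map(Optional.some)                                      // wrap values so the subject can start out empty
        .multicast(subject: CurrentValueSubject<Bool?, Error>(nil))
        .autoconnect()                                           // connect on the first subscriber
        .compactMap { $0 }                                       // drop the initial nil placeholder
        .eraseToAnyPublisher()
}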
So I have here an async method defined on an actor type. The issue I'm having is that people isn't being mutated at all, mainly due to, from what I understand, concurrency issues. From what I think I know, the completion handler is executed concurrently on the same thread and so cannot mutate people.
Nonetheless, my knowledge in this area is pretty foggy, so any suggestions to solve it, and maybe a better explanation as to why this is happening, would be great! I've thought of a few things, but I'm still new to concurrency.
func getAllPeople() async -> [PersonModelProtocol] {
let decoder = JSONDecoder()
var people: [Person] = []
let dataTask = session.dataTask(with: request!) { data, response, error in
do {
let newPeople = try! decoder.decode([Person].self, from: data!)
people = newPeople
} catch {
print(error)
}
}
dataTask.resume()
return people
}
If you do want to use async/await you have to use the appropriate API.
func getAllPeople() async throws -> [Person] {
let (data, _ ) = try await session.data(for: request!)
return try JSONDecoder().decode([Person].self, from: data)
}
In a synchronous context you cannot return data from a completion handler anyway.
vadian is right, but it's better expressed as a computed property nowadays.
var allPeople: [PersonModelProtocol] {
get async throws {
try JSONDecoder().decode(
[Person].self,
from: await session.data(for: request!).0
)
}
}
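The call site then reads like any other async throwing expression, e.g.:
let people = try await allPeople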
And also, your code does mutate people, but it returns the empty value before the mutation happens.
I'm using a dataTaskPublisher to fetch some data:
func downloadData(_ req: URLRequest) {
self.cancelToken = dataTaskPublisher(for: req).sink { /* ... */ }
}
If the function is called while a request is already in progress, I would like to return early.
Currently I either:
1. Set the cancelToken to nil in the sink or
2. Create and manage an isDownloading variable.
Is there a built-in way to check if the dataTaskPublisher is running (and optionally its progress)?
I mostly agree with @Rob; however, you can check the state of the data task initiated by DataTaskPublisher, and its progress, by using URLSession methods:
func downloadData(_ req: URLRequest) {
URLSession.shared.getAllTasks { (tasks) in
let task = tasks.first(where: { (task) -> Bool in
return task.originalRequest?.url == req.url
})
switch task {
case .some(let task) where task.state == .running:
print("progress:", Double(task.countOfBytesReceived) / Double(task.countOfBytesExpectedToReceive))
return
default:
self.cancelToken = URLSession.shared.dataTaskPublisher(for: req).sink { /* ... */ }
}
}
}
Regarding getting progress from this publisher, looking at the source code, we can see that it’s just doing a completion-block-based data task (i.e. dataTask(with:completionHandler:)). But the resulting URLSessionTask is a private property of the private Subscription used by DataTaskPublisher. Bottom line, this publisher doesn’t provide any mechanism to monitor progress of the underlying task.
As Denis pointed out, you are limited to querying the URLSession for information about its tasks.
I have a recursive, async function that queries Google Drive for a file ID using the REST api and a completion handler:
func queryForFileId(query: GTLRDriveQuery_FilesList,
handler: @escaping FileIdCompletionHandler) {
service.executeQuery(query) { ticket, data, error in
if let error = error {
handler(nil, error)
} else {
let list = data as! GTLRDrive_FileList
if let pageToken = list.nextPageToken {
query.pageToken = pageToken
self.queryForFileId(query: query, handler: handler)
} else if let id = list.files?.first?.identifier {
handler(id, nil)
} else {
handler(nil, nil) // no file found
}
}
}
}
Here, query is set up to return the nextPageToken and files(id) fields, service is an instance of GTLRDriveService, and FileIdCompletionHandler is just a typealias:
typealias FileIdCompletionHandler = (String?, Error?) -> Void
I've read how to convert async functions into promises (as in this thread) but I don't see how that can be applied to a recursive, async function. I guess I can just wrap the entire method as a Promise:
private func fileIdPromise(query: GTLRDriveQuery_FilesList) -> Promise<String?> {
return Promise { fulfill, reject in
queryForFileId(query: query) { id, error in
if let error = error {
reject(error)
} else {
fulfill(id)
}
}
}
}
However, I was hoping for something a little more direct:
private func queryForFileId2(query: GTLRDriveQuery_FilesList) -> Promise<String?> {
return Promise { fulfill, reject in
service.executeQuery(query) { ticket, data, error in
if let error = error {
reject(error)
} else {
let list = data as! GTLRDrive_FileList
if let pageToken = list.nextPageToken {
query.pageToken = pageToken
// WHAT DO I DO HERE?
} else if let id = list.files?.first?.identifier {
fulfill(id)
} else {
fulfill(nil) // no file found
}
}
}
}
}
So: what would I do when I need to make another async call to executeQuery?
If you want to satisfy a recursive set of promises, at your "WHAT DO I DO HERE?" line you'd create a new promise.then {...}.catch {...} pattern, calling fulfill in the then clause and reject in the catch clause. Obviously, if no recursive call was needed, you'd just fulfill directly.
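Applied directly to the queryForFileId2 sketch above, that recursive branch might look roughly like this (same fulfill/reject-style promise assumed):
if let pageToken = list.nextPageToken {
    query.pageToken = pageToken
    // Recurse, and forward the eventual result to this promise's fulfill/reject.
    self.queryForFileId2(query: query).then { id in
        fulfill(id)
    }.catch { error in
        reject(error)
    }
}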
I don't know the Google API and you didn't share your code for satisfying a promise for a list of files, so I'll have to keep this answer a bit generic. Let's assume you had some retrieveTokens routine that returned a promise that is satisfied only when all of the promises for all of the files are done. Let's imagine that the top-level call was something like:
retrieveTokens(for: files).then { tokens in
print(tokens)
}.catch { error in
print(error)
}
You'd then have a retrieveTokens that returns a promise that is satisfied only when the promises for the individual files were satisfied. If you were dealing with a simple array of File objects, you might do something like:
func retrieveTokens(for files: [File]) -> Promise<[Any]> {
var fileGenerator = files.makeIterator()
let generator = AnyIterator<Promise<Any>> {
guard let file = fileGenerator.next() else { return nil }
return self.retrieveToken(for: file)
}
return when(fulfilled: generator, concurrently: 1)
}
(I know this isn't what yours looks like, but I need this framework to show my answer to your question below. But it’s useful to encapsulate this “return all promises at a given level” in a single function, as it allows you to keep the recursive code somewhat elegant, without repeating code.)
Then the routine that returns a promise for an individual file would see if a recursive set of promises needed to be returned, and put its fulfill inside the then clause of that new recursively created promise:
func retrieveToken(for file: File) -> Promise<Any> {
return Promise<Any> { fulfill, reject in
service.determineToken(for: file) { token, error in
// if any error, reject
guard let token = token, error == nil else {
reject(error ?? FileError.someError)
return
}
// if I don't have to make recursive call, `fulfill` immediately.
// in my example, I'm going to see if there are subfiles, and if not, `fulfill` immediately.
guard let subfiles = file.subfiles else {
fulfill(token)
return
}
// if I got here, there are subfiles and I'm going to start recursive set of promises
self.retrieveTokens(for: subfiles).then { tokens in
fulfill(tokens)
}.catch { error in
reject(error)
}
}
}
}
Again, I know that the above isn't a direct answer to your question (as I'm not familiar with Google Drive API nor how you did your top level promise logic). So, in my example, I created model objects sufficient for the purposes of the demonstration.
But hopefully it's enough to illustrate the idea behind a recursive set of promises.