As I port some Objective-C code to Swift, I'm trying to better understand the new Combine framework and how I can use it to re-create a common design pattern.
In this case, the design pattern is a single object (Manager, Service, etc) that any number of "clients" can register with as a delegate to receive callbacks. It's a basic 1:Many pattern using delegates.
Combine looks ideal for this, but the sample code is a bit thin. Below is a working example but I'm not sure if it's correct or being used as intended. In particular, I'm curious about reference cycles between the objects.
class Service {
    let tweets = PassthroughSubject<String, Never>()

    func start() {
        // Simulate the need to send updates.
        DispatchQueue.global(qos: .utility).async {
            while true {
                self.sendTweet()
                usleep(100000)
            }
        }
    }

    func sendTweet() {
        tweets.send("Message \(Date().timeIntervalSince1970)")
    }
}
class Client : Subscriber {
    typealias Input = String
    typealias Failure = Never

    let service: Service
    var subscription: Subscription?

    init(service: Service) {
        self.service = service
        // Is this a retain cycle?
        // Is this thread-safe?
        self.service.tweets.subscribe(self)
    }

    func receive(subscription: Subscription) {
        print("Received subscription: \(subscription)")
        self.subscription = subscription
        self.subscription?.request(.unlimited)
    }

    func receive(_ input: String) -> Subscribers.Demand {
        print("Received tweet: \(input)")
        return .unlimited
    }

    func receive(completion: Subscribers.Completion<Never>) {
        print("Received completion")
    }
}
// Dependency injection is used a lot throughout the
// application in a similar fashion to this:
let service = Service()
let client = Client(service:service)
// In the real world, the service is started when
// the application is launched and clients come-and-go.
service.start()
Output is:
Received subscription: PassthroughSubject
Received tweet: Message 1560371698.300811
Received tweet: Message 1560371698.4087949
Received tweet: Message 1560371698.578027
...
Is this even remotely close to how Combine was intended to be used?
Let's check it! The simplest way is to add a deinit to both classes and limit the lifetime of the service.
class Service {
    let tweets = PassthroughSubject<String, Never>()

    func start() {
        // Simulate the need to send updates.
        DispatchQueue.global(qos: .utility).async {
            (0 ... 3).forEach { _ in
                self.sendTweet()
                usleep(100000)
            }
        }
    }

    func sendTweet() {
        tweets.send("Message \(Date().timeIntervalSince1970)")
    }

    deinit {
        print("server deinit")
    }
}
Now it is easy to check that
do {
    let service = Service()
    //_ = Client(service:service)
    // In the real world, the service is started when
    // the application is launched and clients come-and-go.
    service.start()
}
finishes as expected:
server deinit
Modify it to subscribe a client:
do {
    let service = Service()
    _ = Client(service: service)
    service.start()
}
and you immediately see the result:
Received subscription: PassthroughSubject
Received tweet: Message 1580816649.7355099
Received tweet: Message 1580816649.8548698
Received tweet: Message 1580816650.001649
Received tweet: Message 1580816650.102639
There is a memory cycle (the Service never deinits), as you expected :-)
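If you do keep a custom subscriber, one way to break that cycle is to cancel and release the stored subscription once the client is done with it. A minimal sketch, reusing the Client from the question (the stop() helper is just an illustrative name):
// Sketch: cancelling the stored subscription makes the subject drop its
// reference to the Client, so both Client and Service can eventually deinit.
extension Client {
    func stop() {
        subscription?.cancel()
        subscription = nil
    }
}

do {
    let service = Service()
    let client = Client(service: service)
    service.start()
    // ...later, when the client is no longer interested:
    client.stop()
}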
Generally, there is a very low probability that you need your own Subscriber implementation.
First, modify the service so the client knows when no more messages will arrive:
func start() {
    // Simulate the need to send updates.
    DispatchQueue.global(qos: .utility).async {
        // send some tweets
        (0 ... 3).forEach { _ in
            self.sendTweet()
            usleep(100000)
        }
        // and send "finished"
        self.tweets.send(completion: .finished)
    }
}
Next, use the built-in subscriber on your publisher by invoking its .sink method. .sink returns an AnyCancellable (a reference type), which you have to store somewhere.
var cancelable: AnyCancellable?

do {
    let service = Service()
    service.start()

    // client
    cancelable = service.tweets.sink { (s) in
        print(s)
    }
}
Now everything works as expected ...
Message 1580818277.2908669
Message 1580818277.4674711
Message 1580818277.641886
server deinit
But what about cancelable? Let's check it!
var cancelable: AnyCancellable?

do {
    let service = Service()
    service.start()

    // client
    cancelable = service.tweets.sink { (s) in
        print(s)
    }
}

DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    print(cancelable)
}
It prints:
Message 1580819227.5750608
Message 1580819227.763901
Message 1580819227.9366078
Message 1580819228.072041
server deinit
Optional(Combine.AnyCancellable)
So you have to release it "manually" if you don't need it anymore. .sink is there again!
var cancelable: AnyCancellable?

do {
    let service = Service()
    service.start()

    // client
    cancelable = service.tweets.sink(receiveCompletion: { (completion) in
        print(completion)
        // this informs the publisher to "unsubscribe" (not necessary in this scenario)
        cancelable?.cancel()
        // and we can release our cancelable
        cancelable = nil
    }, receiveValue: { (message) in
        print(message)
    })
}

DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    print(cancelable)
}
And the result:
Message 1580819683.462331
Message 1580819683.638145
Message 1580819683.74383
finished
server deinit
nil
Combine has almost everything you need in a real-world application; the trouble is a lack of documentation, but a lot of sources are available on the internet.
A custom Combine Subscriber should also conform to the Cancellable protocol, which provides a method to forward cancellation to the Subscription object received from the Publisher. That way you do not have to expose the Subscription property. According to the docs:
If you create a custom Subscriber, the publisher sends a Subscription object when you first subscribe to it. Store this subscription, and then call its cancel() method when you want to cancel publishing. When you create a custom subscriber, you should implement the Cancellable protocol, and have your cancel() implementation forward the call to the stored subscription.
https://developer.apple.com/documentation/combine/receiving_and_handling_events_with_combine
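For example, the Client from the first question could adopt Cancellable along these lines (a sketch based on that guidance, not the original poster's code):
final class Client: Subscriber, Cancellable {
    typealias Input = String
    typealias Failure = Never

    private var subscription: Subscription?

    func receive(subscription: Subscription) {
        self.subscription = subscription
        subscription.request(.unlimited)
    }

    func receive(_ input: String) -> Subscribers.Demand {
        print("Received tweet: \(input)")
        return .none
    }

    func receive(completion: Subscribers.Completion<Never>) {
        subscription = nil
    }

    // Cancellable: forward cancellation to the stored subscription.
    func cancel() {
        subscription?.cancel()
        subscription = nil
    }
}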
I am converting some code into Combine in order to get familiar with it. I am doing fine with the easy stuff, but here it gets a little trickier. I am trying to report to the user when incoming GPS data is accurate and also whether it's stale.
So I have
let locationPublisher = PassthroughSubject<CLLocation, Never>()
private var cancellableSet: Set<AnyCancellable> = []
var gpsStatus: GPSStatus = .red // enum
var statusTimer: Timer?
and in init I have
locationPublisher
    .map(gpsStatus(from:)) // maps CLLocation to GPSStatus enum
    .assign(to: \.gpsStatus, on: self)
    .store(in: &cancellableSet)

locationPublisher
    .sink { [weak self] location in
        self?.statusTimer?.invalidate()
        self?.setStatusTimer()
    }
    .store(in: &cancellableSet)

setStatusTimer()
Here is the setStatusTimer function
func setStatusTimer() {
    statusTimer = Timer.scheduledTimer(withTimeInterval: 20, repeats: false) { @MainActor _ in
        self.updateGPSStatus(.red)
    }
}
Is there a more "Combine" way of doing this? I know there are Timer.TimerPublishers, but I'm not sure how to incorporate them.
My tendency is to think there is some kind of combineLatest with one input being the GPS status publisher and the other being some kind of publisher that fires if the upstream publisher hasn't fired for x seconds.
Thanks!
This is a bit tricky. You don't need a TimerPublisher but you can use the timeout operator. The tricky part is that timeout will create a publisher that stops publishing when it times out. The question is "how do you start again". To do that, you can use the catch operator.
The solution looks like this:
import UIKit
import Combine

enum GPSStatus {
    // Karma Chameleon...
    case red
    case gold
    case green
}

func gpsStatus(from location: String) -> GPSStatus {
    switch location {
    case "ok":
        return .green
    default:
        return .gold
    }
}

class UnnamedLocationThingy {
    let locationPublisher = PassthroughSubject<String, Never>()

    var gpsStatus: GPSStatus = .red {
        didSet { print("set the new value to \(String(describing: gpsStatus))") }
    }

    private enum LocationError: Error {
        case timeout
    }

    private var statusProvider: AnyCancellable?

    init() {
        statusProvider = makeStatusProvider()
            .assign(to: \.gpsStatus, on: self)
    }

    func makeStatusProvider() -> AnyPublisher<GPSStatus, Never> {
        return locationPublisher
            .map(gpsStatus(from:)) // maps CLLocation to GPSStatus enum
            .setFailureType(to: LocationError.self)
            .timeout(.seconds(2), scheduler: DispatchQueue.main) {
                return LocationError.timeout
            }
            .catch { _ in
                self.gpsStatus = .red
                return self.makeStatusProvider()
            }
            .eraseToAnyPublisher()
    }
}

let thingy = UnnamedLocationThingy()

Task {
    for delay in [1, 1, 1, 3, 1, 1, 3, 1] {
        try await Task.sleep(for: .seconds(delay))
        thingy.locationPublisher.send("ok")
    }
}
The heart of it is the makeStatusProvider function. This function creates a publisher that converts published locations to GPSStatus values as long as they come in within a time limit. But if one doesn't come fast enough, it times out. I set up the timeout operator with a customError: handler so that it doesn't just terminate the publisher, but sends an error. I can catch that error in the catch operator and substitute a new publisher to replace the old one. The new publisher I substitute is a brand new publisher created by makeStatusProvider which, as we've just seen, is a publisher that converts published locations into GPSStatus values until it hits a timeout...
It's a form of recursion.
I've decorated your code with enough stuff to make a Playground and added a bit of code at the end to exercise the functionality.
I've always thought .share(replay: 1, scope: .forever) shares the single upstream subscription no matter how many downstream subscribers there are.
However, I've just discovered that if the count of the downstream subscriptions drops to zero, it stops "sharing" and releases the subscription on the upstream (because refCount() is used under the hood). So when a new downstream subscription happens, it has to re-subscribe on the upstream. In the following example:
let sut = Observable<Int>
    .create { promise in
        print("create")
        promise.onNext(0)
        return Disposables.create()
    }
    .share(replay: 1, scope: .forever)
sut.subscribe().dispose()
sut.subscribe().dispose()
I would expect create to be printed just once, but it gets printed twice. And if I remove the .dispose() calls, just once.
How do I set up the chain where the upstream is guaranteed to be subscribed at most once?
The goal you describe implies you should be using multicast (or one of the operators that use it, like publish(), replay(_:) or replayAll()) and not share...
let sut = Observable<Int>
    .create { observer in
        print("create")
        observer.onNext(0)
        return Disposables.create()
    }
    .replay(1)

let disposable = sut.connect() // subscription will stay alive until dispose() is called on this disposable...

sut.debug("one").subscribe().dispose()
sut.debug("two").subscribe().dispose()
To understand the difference between .forever and .whileConnected, read the documentation in the "ShareReplayScope.swift" file. Both are refcounted, but the difference is in how re-subscription operators are handled. Here is some test code to show the difference...
class SandboxTests: XCTestCase {
    var scheduler: TestScheduler!
    var observable: Observable<String>!

    override func setUp() {
        super.setUp()
        scheduler = TestScheduler(initialClock: 0)
        // creates an observable that will error on the first subscription, then call `.onNext("A")` on the second.
        observable = scheduler.createObservable(timeline: "-#-A")
    }

    func testWhileConnected() {
        // this shows that re-subscription gets through the while connected share to the source observable
        let result = scheduler.start { [observable] in
            observable!
                .share(scope: .whileConnected)
                .retry(2)
        }

        XCTAssertEqual(result.events, [
            .next(202, "A")
        ])
    }

    func testForever() {
        // however re-subscription doesn't get through on a forever share
        let result = scheduler.start { [observable] in
            observable!
                .share(scope: .forever)
                .retry(2)
        }

        XCTAssertEqual(result.events, [
            .error(201, NSError(domain: "Test Domain", code: -1))
        ])
    }
}
I am not sure why .share(replay: 1, scope: .forever) is not giving the behaviour you want (I also think it should work like you describe) but what about this other way without share?
// You will subscribe to this and not directly on sut (maybe hiding Subject interface to avoid onNext calls from observers)
let subject = ReplaySubject<Int>.create(bufferSize: 1)
let sut = Observable<Int>.create { obs in
    print("Performing work ...")
    obs.onNext(0)
    return Disposables.create()
}
// This subscription is hidden, happens only once and stays alive forever
sut.subscribe(subject)
// Observers subscribe to the public stream
subject.subscribe().dispose()
subject.subscribe().dispose()
I didn't like the leaking Disposable in the suggested solutions, so I came up with the following:
extension ObservableType {
    func shareReplayForever() -> Observable<Element> {
        let relay = BehaviorRelay<Element?>(value: nil)
        let disposeBag = DisposeBag()

        var subscribeOnce: () -> Void = {
            self.bind(to: relay).disposed(by: disposeBag)
        }

        return relay
            .compactMap { $0 }
            .do(onSubscribe: {
                subscribeOnce()
                subscribeOnce = { }
            }, onDispose: {
                _ = disposeBag
            })
    }
}
The trick is to capture the disposeBag in a downstream closure, onDispose. As long as any code is holding a reference to the downstream observable (and is thus capable of subscribing in the future), the disposeBag stays alive. However, it does get disposed when all the downstream observables are deallocated (no one downstream can subscribe anymore, so we can release the upstream).
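A quick usage sketch against the sut example from the question (assuming RxSwift and RxRelay are imported):
let sut = Observable<Int>
    .create { observer in
        print("create")
        observer.onNext(0)
        return Disposables.create()
    }
    .shareReplayForever()

sut.subscribe().dispose()
sut.subscribe().dispose() // "create" is printed only once; the second subscriber gets the replayed 0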
I have a situation where my code needs to make one network call to fetch a bunch of items, but while waiting for those to come down, another network call might fetch an update to those items. I'd love to be able to enqueue those secondary results until the first one has finished. Is there a way to accomplish that with Combine?
Importantly, I am not able to wait before making the second request. It’s actually a connection to a websocket that gets made at the same time as the first request, and the updates come over the websocket outside of my control.
Update
After examining Matt’s thorough book on Combine, I settled on .prepend(). But as Matt warned me in the comments, .prepend() doesn’t even subscribe to the other publisher until after the first one completes. This means I miss any signals sent prior to that. What I need is a Subject that enqueues values, but perhaps that’s not so hard to make. Anyway, this is where I got:
Initially I was going to use .append(), but I realized with .prepend() I could avoid keeping a reference to one of the publishers. So here’s a simplified version of what I’ve got. There might be syntax errors in this, as I’ve whittled it down from my (employer’s) code.
There’s the ItemFeed, which handles fetching a list of items and simultaneously handling item update events. The latter can arrive before the initial list of items, and thus must be sequenced via Combine to arrive after it. I attempt to do this by prepending the initial items source to the update PassthroughSubject.
Below that is an XCTestCase that simulates a lengthy initial item load, and adds an update before that load can complete. It attempts to subscribe to changes to the list of items, and tries to test that the first update is the initial 63 items, and the subsequent update is for 64 items (in this case, “update” results in adding an item).
Unfortunately, while the initial list is published, the update never arrives. I also tried removing the .output(at:) operators, but the two sinks are only called once.
After the test case sets up the delayed "fetch" and subscribes to changes in feed.items, it calls feed.handleItemUpdatedEvent. This calls ItemFeed.updateItems.send(_:), but unfortunately that is lost to oblivion.
class
ItemFeed
{
    typealias InitialItemsSource = Deferred<Future<[[String : Any]], Error>>

    let updateItems = PassthroughSubject<[Item], Error>()
    var funnel : AnyCancellable?
    @Published var items = [Item]()

    init(initialItemSource inSource: InitialItemsSource)
    {
        // Passthrough subject each time items are updated…
        var pub = self.updateItems.eraseToAnyPublisher()

        // Prepend the initial items we need to fetch…
        let initialItems = inSource.tryMap { try $0.map { try Item(object: $0) } }
        pub = pub.prepend(initialItems).eraseToAnyPublisher()

        // Sink on the funnel to add or update to self.items…
        self.funnel =
            pub.sink { inCompletion in
                // Handle errors
            }
            receiveValue: { inItems in
                self.update(items: inItems)
            }
    }

    func handleItemUpdatedEvent(_ inItem: Item) {
        self.updateItems.send([inItem])
    }

    func update(items inItems: [Item]) {
        // Update or add inItems to self.items
    }
}
class
ItemFeedTests : XCTestCase
{
    func
    testShouldUpdateItems()
        throws
    {
        // Set up a mock source of items…
        let source = fetchItems(named: "items", delay: 3.0) // 63 items
        let expectation = XCTestExpectation(description: "testShouldUpdateItems")
        expectation.expectedFulfillmentCount = 2

        let feed = ItemFeed(initialItemSource: source)

        let sub1 = feed.$items
            .output(at: 0)
            .receive(on: DispatchQueue.main)
            .sink { inItems in
                expectation.fulfill()
                debugLog("Got first items: \(inItems.count)")
                XCTAssertEqual(inItems.count, 63)
            }

        let sub2 = feed.$items
            .output(at: 1)
            .receive(on: DispatchQueue.main)
            .sink { inItems in
                expectation.fulfill()
                debugLog("Got second items: \(inItems.count)")
                XCTAssertEqual(inItems.count, 64)
            }

        // Send an update right away…
        let item = try loadItem(named: "Item3")
        feed.handleItemUpdatedEvent(item)
        XCTAssertEqual(feed.items.count, 0) // Should be no items yet

        // Wait for stuff to complete…
        wait(for: [expectation], timeout: 10.0)

        sub1.cancel() // Not necessary, but silence the compiler warning
        sub2.cancel()
    }
}
The accepted answer didn't actually have any code, but I was able to figure out a solution. I ended up creating a custom publisher that subscribed to the "gate publisher" and created a subscription that creates a sink for the upstream publisher. I buffer the values from upstream and emit gate publisher values based on demand until it completes, then I switch to sending the buffer downstream based on demand. The tricky part is keeping track of the upstream / gate publisher and sending demand to the right one.
After a fair bit of trial and error, I found a solution. I created a custom Publisher and Subscription that immediately subscribes to its upstream publisher and begins enqueuing elements (up to some specifiable capacity). It then waits for a subscriber to come along, provides that subscriber with all the values received up until then, and then continues providing values.
I then use this in conjunction with .prepend() like so:
extension
Publisher
{
    func
    enqueue<P>(gatedBy inGate: P, capacity inCapacity: Int = .max)
        -> AnyPublisher<Self.Output, Self.Failure>
        where
            P : Publisher,
            P.Output == Output,
            P.Failure == Failure
    {
        let qp = Publishers.Queueing(upstream: self, capacity: inCapacity)
        let r = qp.prepend(inGate).eraseToAnyPublisher()
        return r
    }
}
And this is how you use it…
func
testShouldReturnAllItemsInOrder()
{
    let gate = PassthroughSubject<Int, Never>()
    let stream = PassthroughSubject<Int, Never>()

    var results = [Int]()
    let sub = stream.enqueue(gatedBy: gate)
        .sink
        { inElement in
            debugLog("element: \(inElement)")
            results.append(inElement)
        }

    stream.send(3)
    stream.send(4)
    stream.send(5)
    XCTAssertEqual(results.count, 0)

    gate.send(1)
    gate.send(2)
    gate.send(completion: .finished)
    XCTAssertEqual(results.count, 5)
    XCTAssertEqual(results, [1,2,3,4,5])

    sub.cancel()
}
This prints what you would expect:
element: 1
element: 2
element: 3
element: 4
element: 5
It works well because creating the .enqueue(gatedBy:) operator creates the queuing publisher qp, which immediately subscribes to stream and enqueues any values it sends. It then calls .prepend() on qp, which first subscribes to gate, and waits for it to complete. When it finishes, it then subscribes to qp, which immediately provides it with all the enqueued values, and then continues to provide it with values from the upstream publisher.
Here’s the code I finally ended up with.
//
// QueuingPublisher.swift
// Latency: Zero, LLC
//
// Created by Rick Mann on 2021-06-03.
//
import Combine
import Foundation
extension
Publishers
{
final
class
Queueing<Upstream: Publisher>: Publisher
{
typealias Output = Upstream.Output
typealias Failure = Upstream.Failure
private let upstream : Upstream
private let capacity : Int
private var queue : [Output] = [Output]()
private var subscription : QueueingSubscription<Queueing<Upstream>, Upstream>?
fileprivate var completion : Subscribers.Completion<Failure>? = nil
init(upstream inUpstream: Upstream, capacity inCapacity: Int)
{
self.upstream = inUpstream
self.capacity = inCapacity
// Subscribe to the upstream right away so we can start
// enqueueing values…
let sink = AnySubscriber { $0.request(.unlimited) }
receiveValue:
{ [weak self] (inValue: Output) -> Subscribers.Demand in
self?.relay(inValue)
return .none
}
receiveCompletion:
{ [weak self] (inCompletion: Subscribers.Completion<Failure>) in
self?.completion = inCompletion
self?.subscription?.complete(with: inCompletion)
}
inUpstream.subscribe(sink)
}
func
receive<S: Subscriber>(subscriber inSubscriber: S)
where
Failure == S.Failure,
Output == S.Input
{
let subscription = QueueingSubscription(publisher: self, subscriber: inSubscriber)
self.subscription = subscription
inSubscriber.receive(subscription: subscription)
}
/**
Return up to inDemand values.
*/
func
request(_ inDemand: Subscribers.Demand)
-> [Output]
{
let count = inDemand.min(self.queue.count)
let elements = Array(self.queue[..<count])
self.queue.removeFirst(count)
return elements
}
private
func
relay(_ inValue: Output)
{
// TODO: The Wenderlich example code checks to see if the upstream has completed,
// but I feel like we want to send all the values we've gotten first?
// Save the new value…
self.queue.append(inValue)
// Discard the oldest if we’re over capacity…
if self.queue.count > self.capacity
{
self.queue.removeFirst()
}
// Send the buffer to our subscriber…
self.subscription?.dataAvailable()
}
final
class
QueueingSubscription<QP, Upstream> : Subscription
where
QP : Queueing<Upstream>
{
typealias Output = Upstream.Output
typealias Failure = Upstream.Failure
let publisher : QP
var subscriber : AnySubscriber<Output,Failure>? = nil
private var demand : Subscribers.Demand = .none
init<S>(publisher inP: QP,
subscriber inS: S)
where
S: Subscriber,
Failure == S.Failure,
Output == S.Input
{
self.publisher = inP
self.subscriber = AnySubscriber(inS)
}
func
request(_ inDemand: Subscribers.Demand)
{
self.demand += inDemand
emitAsNeeded()
}
func
cancel()
{
complete(with: .finished)
}
/**
Called by our publisher to let us know new
data has arrived.
*/
func
dataAvailable()
{
emitAsNeeded()
}
private
func
emitAsNeeded()
{
guard let subscriber = self.subscriber else { return }
let newValues = self.publisher.request(self.demand)
self.demand -= newValues.count
newValues.forEach
{
let nextDemand = subscriber.receive($0)
self.demand += nextDemand
}
if let completion = self.publisher.completion
{
complete(with: completion)
}
}
fileprivate
func
complete(with inCompletion: Subscribers.Completion<Failure>)
{
guard let subscriber = self.subscriber else { return }
self.subscriber = nil
subscriber.receive(completion: inCompletion)
}
}
}
} // extension Publishers
extension
Publisher
{
func
enqueue<P>(gatedBy inGate: P, capacity inCapacity: Int = .max)
-> AnyPublisher<Self.Output, Self.Failure>
where
P : Publisher,
P.Output == Output,
P.Failure == Failure
{
let qp = Publishers.Queueing(upstream: self, capacity: inCapacity)
let r = qp.prepend(inGate).eraseToAnyPublisher()
return r
}
}
extension
Subscribers.Demand
{
func
min(_ inValue: Int)
-> Int
{
if self == .unlimited
{
return inValue
}
return Swift.min(self.max!, inValue)
}
}
Context
I'm developing a Mac app. In this app, I want to run a websocket server. To do this, I'm using Swift NIO and Websocket-Kit. My full setup is below.
Question
All of the documentation for Websocket-Kit and SwiftNIO is geared towards creating a single server-side process that starts up when you launch it from the command line and then runs indefinitely.
In my app, I must be able to start the websocket server and then shut it down and restart it on demand, without re-launching my application. The code below does that, but I would like confirmation of two things:
In the test() function, I send some text to all connected clients. I am unsure if this is thread-safe and correct. Can I store the WebSocket instances as I'm doing here and message them from the main thread of my application?
Am I shutting down the websocket server correctly? The result of the call to serverBootstrap(group:)[...].bind(host:port:).wait() creates a Channel and then waits infinitely. When I call shutdownGracefully() on the associated EventLoopGroup, is that server cleaned up correctly? (I can confirm that port 5759 is free again after this shutdown, so I'm guessing everything is cleaned up?)
Thanks for the input; it's tough to find examples of using SwiftNIO and Websocket-Kit inside an application.
Code
import Foundation
import NIO
import NIOHTTP1
import NIOWebSocket
import WebSocketKit
@objc class WebsocketServer: NSObject
{
private var queue: DispatchQueue?
private var eventLoopGroup: MultiThreadedEventLoopGroup?
private var websocketClients: [WebSocket] = []
@objc func startServer()
{
queue = DispatchQueue.init(label: "socketServer")
queue?.async
{
let upgradePipelineHandler: (Channel, HTTPRequestHead) -> EventLoopFuture<Void> = { channel, req in
WebSocket.server(on: channel) { ws in
ws.send("You have connected to WebSocket")
DispatchQueue.main.async {
self.websocketClients.append(ws)
print("websocketClients after connection: \(self.websocketClients)")
}
ws.onText { ws, string in
print("received")
ws.send(string.trimmingCharacters(in: .whitespacesAndNewlines).reversed())
}
ws.onBinary { ws, buffer in
print(buffer)
}
ws.onClose.whenSuccess { value in
print("onClose")
DispatchQueue.main.async
{
self.websocketClients.removeAll { (socketToTest) -> Bool in
return socketToTest === ws
}
print("websocketClients after close: \(self.websocketClients)")
}
}
}
}
self.eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 2)
let port: Int = 5759
let promise = self.eventLoopGroup!.next().makePromise(of: String.self)
let server = try? ServerBootstrap(group: self.eventLoopGroup!)
// Specify backlog and enable SO_REUSEADDR for the server itself
.serverChannelOption(ChannelOptions.backlog, value: 256)
.serverChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
.childChannelInitializer { channel in
let webSocket = NIOWebSocketServerUpgrader(
shouldUpgrade: { channel, req in
return channel.eventLoop.makeSucceededFuture([:])
},
upgradePipelineHandler: upgradePipelineHandler
)
return channel.pipeline.configureHTTPServerPipeline(
withServerUpgrade: (
upgraders: [webSocket],
completionHandler: { ctx in
// complete
})
)
}.bind(host: "0.0.0.0", port: port).wait()
_ = try! promise.futureResult.wait()
}
}
///
/// Send a message to connected clients, then shut down the server.
///
@objc func test()
{
self.websocketClients.forEach { (ws) in
ws.eventLoop.execute {
ws.send("This is a message being sent to all websockets.")
}
}
stopServer()
}
@objc func stopServer()
{
self.websocketClients.forEach { (ws) in
try? ws.eventLoop.submit { () -> Void in
print("closing websocket: \(ws)")
_ = ws.close()
}.wait() // Block until complete so we don't shut down the eventLoop before all clients get closed.
}
eventLoopGroup?.shutdownGracefully(queue: .main, { (error: Error?) in
print("Eventloop shutdown now complete.")
self.eventLoopGroup = nil
self.queue = nil
})
}
}
In the test() function, I send some text to all connected clients. I am unsure if this is thread-safe and correct. Can I store the WebSocket instances as I'm doing here and message them from the main thread of my application?
Exactly as you're doing here, yes, that should be safe. ws.eventLoop.execute will execute that block on the event loop thread belonging to that WebSocket connection. This will be safe.
When I call shutdownGracefully() on the associated EventLoopGroup, is that server cleaned up correctly? (I can confirm that port 5759 is free again after this shutdown, so I'm guessing everything is cleaned up?)
Yes. shutdownGracefully forces all connections and listening sockets closed.
I have this publisher and subscribers (example code):
import Combine
let publisher = PassthroughSubject<ComplexStructOrClass, Never>()
let sub1 = publisher.sink { (someString) in
    // Async work...
}

let sub2 = publisher.sink { (someString) in
    // Async work, but it has to wait until sub1 has finished his work
}
So the publisher constant has two subscribers. When I use the send method on the publisher constant, it should send the value first to sub1, and after sub1 has finished processing (with a callback or something like that), the publisher should notify sub2.
In the comments it's stated that Combine is made for this. What publisher do I need to use? A PassthroughSubject may be the wrong choice.
Use case
I need to publish values throughout the lifetime of my app to a dynamic number of subscribers, for a few different publishers (I hope I can make a protocol). So a subscriber can be added to and removed from a publisher at any given time. A subscriber looks as follows:
It has an NSPersistentContainer.
A callback should be made by the publisher when a new value has arrived. That process looks like this:
1. The publisher creates a backgroundContext from the subscriber's container, because it knows the subscriber has a container.
2. The publisher sends the context along with the new published value to the subscriber.
3. The publisher waits until it receives a callback from the subscriber. The subscriber shouldn't save the context, but the publisher must hold a reference to the context. The subscriber calls back with an enum, which has an ok case and some error cases.
4. When a subscriber calls back with an error case, the publisher must roll back the contexts it created for each subscriber.
5. When a subscriber calls back with the ok case, the publisher repeats steps 1 to 5 for every subscriber.
6. This step is only reached when no subscriber returned an error case or there are no subscribers. The publisher saves all the contexts that were created for the subscribers.
Current code, no Combine
This is some code without using Combine:
// My publisher
protocol NotiPublisher {
// Type of message to send
associatedtype Notification
// List of subscribers for this publisher
static var listeners: Set<AnyNotiPublisher<Notification>> { get set }
}
// My subscriber
protocol NotificationListener: Hashable {
associatedtype NotificationType
var container: NSPersistentContainer { get }
// Identifier used to find this subscriber in the list of 'listeners' in the publisher
var identifier: Int32 { get }
var notify: ((_ notification: NotificationType, _ context: NSManagedObjectContext, @escaping CompletionHandlerAck) -> ()) { get }
}
// Type erased version of the NotificationListener and some convience methods here, can add them if desired
// In an extension of NotiPublisher, this method is here
static func notify(queue: DispatchQueue, notification: Notification, completionHandler: @escaping CompletionHandlerAck) throws {
let dispatchGroup = DispatchGroup()
var completionBlocks = [SomeCompletionHandler]()
var contexts = [NSManagedObjectContext]()
var didLoop = false
for listener in listeners {
if didLoop {
dispatchGroup.wait()
} else {
didLoop = true
}
dispatchGroup.enter()
listener.container.performBackgroundTask { (context) in
contexts.append(context)
listener.notify(notification, context, { (completion) in
completionBlocks.append(completion)
dispatchGroup.leave()
})
}
}
dispatchGroup.notify(queue: queue) {
let err = completionBlocks.first(where: { element in
// Check if an error has occurred
})
if err == nil {
for context in contexts {
context.performAndWait {
try! context.save()
}
}
}
completionHandler(err ?? .ok(true))
}
}
This is pretty complex code, and I am wondering if I can make use of the power of Combine to make it more readable.
I wrote the following to chain async operations from a publisher using flatMap, which allows you to return another publisher. I'm not a fan, and it might not meet your need to dynamically change the subs, but it might help someone:
let somePublisher = Just(12)

let anyCancellable = somePublisher.flatMap { num in
    // We can return a publisher from flatMap, so lets return a Future one because we want to do some async work
    return Future<Int, Never>({ promise in
        // do an async thing using dispatch
        DispatchQueue.main.asyncAfter(deadline: .now() + 3, execute: {
            print("flat map finished processing the number \(num)")
            // now just pass on the value
            promise(.success(num))
        })
    })
}.flatMap { num in
    // We can return a publisher from flatMap, so lets return a Future one because we want to do some async work
    return Future<Int, Never>({ promise in
        // do an async thing using dispatch
        DispatchQueue.main.asyncAfter(deadline: .now() + 3, execute: {
            print("second flat map finished processing the number \(num)")
            // now just pass on the value
            promise(.success(num))
        })
    })
}.sink { num in
    print("This sink runs after the async work in the flatMap/Future publishers")
}
You can try to use a serial OperationQueue to receive values. It seems it will wait until one sink completes before calling another.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1

let cancellable = publisher
    .receive(on: queue)
    .sink { _ in }
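Applied to the question's setup, that would look roughly like this (a sketch using String in place of ComplexStructOrClass; note this only serializes the synchronous work inside each sink closure, it does not wait for any asynchronous work those closures kick off):
import Combine
import Foundation

let publisher = PassthroughSubject<String, Never>()

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1

var cancellables = Set<AnyCancellable>()

publisher
    .receive(on: queue)
    .sink { value in
        // sub1's work
        print("sub1 handled \(value)")
    }
    .store(in: &cancellables)

publisher
    .receive(on: queue)
    .sink { value in
        // sub2's work; this closure starts only after sub1's closure has returned
        print("sub2 handled \(value)")
    }
    .store(in: &cancellables)

publisher.send("some value")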