Siesta handling multiple requests - swift

I have a loop where I POST requests to the server:
for (traineeId, points) in traineePointsDict {
    // create a new point
    let parameters: NSDictionary = [
        "traineeId": "\(traineeId)",
        "numPoints": points,
        "exerciseId": "\(exerciseId)"
    ]

    DataManager.sharedInstance.api.points.request(.POST, json: parameters).success { data in
        if data.json["success"].int == 1 {
            self.pointCreated()
        } else {
            self.pointFailToCreate()
        }
    }.failure { error in
        self.pointFailToCreate()
    }
}
The problem is that for some reason the last request fails and I am guessing that this is due to posting too many requests to the server at the same time.
Is there a way to chain these requests so they wait for the one before to complete before executing the next?
I have been looking at PromiseKit, but I don't really know how to implement this and I am looking for a quick solution.

Siesta does not control how requests are queued or how many requests run concurrently. You have two choices:
control it on the app side, or
control it in the network layer.
I’d investigate option 2 first. It gives you less fine-grained control, but it gives you more robust options on the cheap and is less prone to mistakes. If you are using URLSession as your networking layer (which is Siesta’s default), then investigate whether the HTTPMaximumConnectionsPerHost property of URLSessionConfiguration does what you need. (Here are some examples of passing custom configuration to Siesta.)
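For example, something along these lines (a sketch, assuming a current Siesta release, whose Service initializer accepts a URLSessionConfiguration as its networking provider; the base URL is a placeholder):

import Siesta

// Limit URLSession to a single connection per host, so simultaneous
// requests queue up serially in the network layer.
let configuration = URLSessionConfiguration.ephemeral
configuration.httpMaximumConnectionsPerHost = 1

let api = Service(
    baseURL: "https://api.example.com",  // placeholder
    networking: configuration)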
If that doesn’t work for you, a simple version of #1 is to use a completion handler to chain the requests:
func chainRequests(_ queue: [ThingsToRequest]) {
    guard let thing = queue.first else { return }
    let params = makeParamsFor(thing)
    resource.request(.POST, json: params)
        .onSuccess { _ in
            // closure 1: handle success
        }
        .onFailure { _ in
            // closure 2: handle failure
        }
        .onCompletion { _ in
            chainRequests(Array(queue.dropFirst()))  // slice converted back to an array
        }
}
Note that you can attach multiple overlapping handlers to the same request, and they’re called in the order you attached them. Note also that Siesta guarantees that the completion block is always called, no matter the outcome. This means that each request will result in calls to either closures 1 & 3 or closures 2 & 3. That’s why this approach works.
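To apply this to the original loop, the queue can be built up front from the dictionary and the chain kicked off once (a sketch; ThingsToRequest and its fields are hypothetical stand-ins matching the function above):

// Hypothetical element type for the queue.
struct ThingsToRequest {
    let traineeId: String
    let points: Int
    let exerciseId: String
}

let queue = traineePointsDict.map {
    ThingsToRequest(traineeId: "\($0.key)", points: $0.value, exerciseId: "\(exerciseId)")
}
chainRequests(queue)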

Related

How to cancel a tokio tcp connecting gracefully?

When we connect to a remote host via tcp, it can be a time-consuming operation. And while waiting for a connection, the user may cancel the operation at any time.
When connecting using async tokio, TcpStream::connect() returns a future that resolves to a Result<TcpStream, io::Error>; assume it is called tcps_ft.
There are two parts: the normal logic of the program, which should call .await on tcps_ft, and the UI of the program, where the user wants to call drop(tcps_ft) if he clicks the cancel button. But this seems impossible to do, because both calls consume tcps_ft.
#[tokio::test]
async fn test_cancel_by_drop() {
    let addr = "192.168.1.100:8080";
    let tcps_ft = TcpStream::connect(addr);
    let mut tcps = tcps_ft.await.unwrap();

    // simulate the user's operation
    let cancel_jh = tokio::spawn(async move {
        tokio::time::sleep(Duration::from_millis(100)).await;
        drop(tcps_ft); // does not compile: tcps_ft was moved by the .await above
    });

    // simulate the user's program
    tcps.shutdown().await;
    cancel_jh.await;
}
So I considered spawning the connect as a task; after all, JoinHandle::abort() does not consume the task's atjh: JoinHandle. But I still can't both await atjh and keep it available to abort: .await consumes the variable, so abort() could only be called synchronously before the await, never later from another task.
#[tokio::test]
async fn test_cancel_by_abort() {
    let addr = "192.168.1.100:8080";
    let atjh = tokio::spawn(async move { TcpStream::connect(addr).await.unwrap() });

    // simulate the user's operation
    let cancel_jh = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(100)).await;
        atjh.abort();
    });

    // simulate the user's program
    let mut tcps = atjh.await.unwrap(); // does not compile: atjh was moved into the task above
    tcps.shutdown().await;
    cancel_jh.await;
}
Of course, a less direct way is to use callback functions: in my asynchronous connection task, when connect().await returns, the user's callback function is called to notify the user to call atjh.await.
But that reintroduces callbacks, and I know async/await itself was designed to solve the callback-hell problem.
Further, for user-supplied asynchronous callback functions the compiler may impose very many requirements, such as implementing Send and avoiding cross-thread safety issues. This is certainly not something async code should have to deal with.
How can I cancel this asynchronous connection process gracefully? Is there a suggested pattern for handling it?
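One pattern worth sketching here (an addition, not from the original post): race the connect against a cancellation signal with tokio::select!. Whichever branch loses is dropped, and dropping the pending connect future cancels it, so nothing is consumed prematurely:

use std::io;
use std::time::Duration;
use tokio::net::TcpStream;
use tokio::sync::oneshot;

// Returns Some(Ok(stream)) on success, Some(Err(..)) on connect failure,
// or None if the user cancelled first.
async fn connect_cancellable(
    addr: &str,
    cancel: oneshot::Receiver<()>,
) -> Option<io::Result<TcpStream>> {
    tokio::select! {
        res = TcpStream::connect(addr) => Some(res),
        _ = cancel => None, // the connect future is dropped here, cancelling it
    }
}

#[tokio::main]
async fn main() {
    let (cancel_tx, cancel_rx) = oneshot::channel();

    // Simulate the user's cancel click after 100 ms.
    tokio::spawn(async move {
        tokio::time::sleep(Duration::from_millis(100)).await;
        let _ = cancel_tx.send(());
    });

    match connect_cancellable("192.168.1.100:8080", cancel_rx).await {
        Some(Ok(_stream)) => println!("connected"),
        Some(Err(e)) => println!("connect failed: {e}"),
        None => println!("cancelled by user"),
    }
}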

F# async: parent/child cancellation?

So here we go: given a Confluent.Kafka IConsumer<'key, 'value>, the code below wraps it in a dedicated async computation expression and consumes for as long as cancellation hasn't been requested. It also defends itself against OperationCanceledException and runs the finally block to ensure graceful termination of the consumer.
let private consumeUntilCancelled callback (consumer: IConsumer<'key, 'value>) =
    async {
        let! ct = Async.CancellationToken
        try
            try
                while not ct.IsCancellationRequested do
                    let consumeResult = consumer.Consume(ct)
                    if not consumeResult.IsPartitionEOF then do! (callback consumeResult)
            with
            | :? OperationCanceledException -> return ()
        finally
            consumer.Close()
            consumer.Dispose()
    }
Question #1: is this code correct or am I abusing the async?
So far so good. In my app I have to deal with lots of consumers that must all shut down together. So, assuming that consumers: seq<Async<unit>> represents them, the following code is what I came up with:
async {
    for consumer in consumers do
        do! (Async.StartChild consumer |> Async.Ignore)
}
I expect this code to attach the children to the parent's cancellation context, so that once the parent is cancelled, the children are cancelled as well.
Question #2: is my finally block guaranteed to run even when a child gets cancelled?
I have a few observations about your code:
Your use of Async.StartChild is correct - all child computations will inherit the same cancellation token and they will all get cancelled when the main token is cancelled.
The async workflow can be cancelled after you call consumer.Consume(ct) and before you call callback. I'm not sure what this means for your specific problem, but if it removes some data from a queue, the data could be lost before it is processed. If that's an issue, then I think you'll need to make callback non-asynchronous, or invoke it differently.
In your consumeUntilCancelled function, you do not need to explicitly check ct.IsCancellationRequested. The async workflow does this automatically on every do! and let!, so you can replace the check with a plain while loop.
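Applied to the function above, that simplification might look like this (a sketch using the names from the question; the OperationCanceledException handler is kept because the blocking Consume(ct) call can throw it directly):

let private consumeUntilCancelled callback (consumer: IConsumer<'key, 'value>) =
    async {
        let! ct = Async.CancellationToken
        try
            try
                // No explicit IsCancellationRequested test: the workflow
                // checks the token at every do!/let! on its own.
                while true do
                    let consumeResult = consumer.Consume(ct)
                    if not consumeResult.IsPartitionEOF then do! (callback consumeResult)
            with
            | :? OperationCanceledException -> return ()
        finally
            consumer.Close()
            consumer.Dispose()
    }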
Here is a minimal stand-alone demo:
let consume s = async {
    try
        while true do
            do! Async.Sleep 1000
            printfn "%s did work" s
    finally
        printfn "%s finalized" s }

let work =
    async {
        for c in ["A"; "B"; "C"; "D"] do
            do! Async.StartChild (consume c) |> Async.Ignore }
Now we create the computation with a cancellation token:
// Run this in F# interactive
let ct = new System.Threading.CancellationTokenSource()
Async.Start(work, ct.Token)
// Run this sometime later
ct.Cancel()
Once you call ct.Cancel, all the finally blocks will be called and all the loops will stop.

Get specific .childrenCount from Firebase

In my new app (Project Control, on the iOS App Store ;)) I want users to take part in development decisions. For this I have added a path in my Firebase database called "claps". I would like to show the number of claps in my TableView for the different concepts. I have tried the following:
self.posts.append(Post(title: post_title, des: post_description, info: "\(post_date) - \(post_user) - \(post_claps) 👏", claps: Int(post_claps)))

for var item in self.posts {
    g.ref.child("concepts").child(item.title).queryOrdered(byChild: "claps").observe(.childAdded) { (snapshotClaps: DataSnapshot!) in
        item.claps = Int(snapshotClaps.childrenCount)
    }
}

DispatchQueue.main.async() {
    self.tableView.reloadData()
}
However, it does not yet show the right number; it is one level off. I don't know how to make the reference more specific so that I really get only what's under claps.
This is my database: [screenshot of the Firebase tree not reproduced here]
Currently my output is 5 but it should be 4. You can see it is observing one "layer" too early. Help will be appreciated, improvements too :)
UPDATE:
Through testing I could determine that the problem is in the reference. The 5 comes from the five children of the top-level "Journal" node. My problem is that I can't get any deeper into the structure because I don't have a specific String for .child().
Since you're observing the .childAdded event, your closure gets called for each matching child node. If you want to count the number of matching child nodes, you'll want to observe the .value event, which ensures your closure gets called for all matching nodes at once.
Something like:
g.ref.child("concepts").child(item.title).observe(.value) { (snapshotClaps: DataSnapshot!) in
item.claps = Int(snapshotClaps.childrenCount)
}
Note that I also removed the orderBy clause, since that has no useful meaning if all you use is the count.
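If the count should include only what's under claps (rather than all children of the concept node), the reference could also point one level deeper, e.g. (an assumption based on the structure described in the question's update):

g.ref.child("concepts").child(item.title).child("claps").observe(.value) { snapshot in
    item.claps = Int(snapshot.childrenCount)  // counts only the nodes under "claps"
}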
Create an array and let Firebase populate it, or do something like:

g.ref.child("concepts").child(item.title).observe(.value) { (snapshotClaps: DataSnapshot!) in
    item.claps = Int(snapshotClaps.childrenCount)
}

Observing .value makes sure your closure receives all of its matching nodes.
There are a couple of great solutions here, but the issue with reading a node by .value is that it reads in everything under that node.
While that's fine for nodes with a limited amount of data, it would overwhelm the device when the node contains a lot of data.
So another option is to leverage the fact that Firebase fires all .childAdded events before .value events. That way, we can use a .value event as a trigger that all child nodes have been read.
Here's a function that uses .childAdded to iterate over and count all of the users in the users node. There's also a .value observer that reads in just the last node, removes the .childAdded observer, and passes the count back to the calling function via a completion handler. Remember that even though we attach both observers, the .childAdded events will all fire before the .value event.
func countUsers(completion: @escaping (Int) -> Void) {
    var count = 0
    let usersRef = self.ref.child("users")
    usersRef.observe(.childAdded, with: { snapshot in
        count += 1
    })
    let query = usersRef.queryOrderedByKey().queryLimited(toLast: 1)
    query.observeSingleEvent(of: .value, with: { snapshot in
        usersRef.removeAllObservers()
        completion(count)
    })
}
To call the function, here's the code:
func getUserCount() {
    self.countUsers(completion: { userCount in
        print("number of users: \(userCount)")
    })
}

process Swift DispatchQueue without affecting resource

I have a Swift DispatchQueue that receives data at 60 fps.
However, depending on the phone or the amount of data received, the computation becomes too expensive to process at 60 fps. In practice, it is fine to process only half of the data, or as much as the available computation resources allow.
let queue = DispatchQueue(label: "com.test.dataprocessing")

func processData(data: SomeData) {
    queue.async {
        // data processing
    }
}
Does DispatchQueue somehow allow me to drop some data when resources are limited? Currently it is affecting the main UI of SceneKit. Or is there something better suited than DispatchQueue for this type of task?
There are a couple of possible approaches:
The simple solution is to keep track of your own Bool as to whether your task is in progress or not, and when you have more data, only process it if there's not one already running:
private var inProgress = false
private let processQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".process") // declared here so the example is self-contained
private let syncQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".sync.progress") // for reasons beyond the scope of this question, reader-writer with concurrent queue is not appropriate here

func processData(data: SomeData) {
    let isAlreadyRunning = syncQueue.sync { () -> Bool in
        if self.inProgress { return true }
        self.inProgress = true
        return false
    }
    if isAlreadyRunning { return }

    processQueue.async {
        defer {
            self.syncQueue.async { self.inProgress = false }
        }
        // process `data`
    }
}
All of that syncQueue stuff is to make sure that I have thread-safe access to the inProgress property. But don't get lost in those details; use whatever synchronization mechanism you want (e.g. a lock or whatever). All we want to make sure is that we have thread-safe access to the Bool status flag.
Focus on the basic idea, that we'll keep track of a Bool flag to know whether the processing queue is still tied up processing the prior set of SomeData. If it is busy, return immediately and don't process this new data. Otherwise, go ahead and process it.
While the above approach is conceptually simple, it won't offer great performance. For example, if your processing of data always takes 0.02 seconds (50 times per second) and your input data is coming in at a rate of 60 times per second, you'll end up getting 30 of them processed per second.
A more sophisticated approach is to use a GCD user data source, something that says "run the following closure when the destination queue is free". The beauty of these dispatch user data sources is that they coalesce events together, which makes them useful for decoupling the speed of the inputs from the speed at which they are processed.
So, you first create a data source that simply indicates what should be done when data comes in:
private var dataToProcess: SomeData?
private lazy var source = DispatchSource.makeUserDataAddSource(queue: processQueue)

func configure() {
    source.setEventHandler { [unowned self] in
        guard let data = self.syncQueue.sync(execute: { self.dataToProcess }) else { return }
        // process `data`
    }
    source.resume()
}
So, when there's data to process, we update our synchronized dataToProcess property and then tell the data source that there is something to process:
func processData(data: SomeData) {
    syncQueue.async { self.dataToProcess = data }
    source.add(data: 1)
}
Again, just like the previous example, we're using syncQueue to synchronize our access to a property across multiple threads. But this time we're synchronizing dataToProcess rather than the inProgress state variable used in the first example. The idea is the same, though: we must be careful to synchronize our interaction with a property across multiple threads.
Anyway, using this pattern in the above scenario (input coming in at 60 fps, whereas processing can only handle 50 per second), the resulting performance is much closer to the theoretical max of 50 fps (I got between 42 and 48 fps, depending upon the queue priority), rather than 30 fps.
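Putting the pieces together, the call site might look like this (a sketch; frameArrived is a hypothetical stand-in for whatever delivers data at 60 fps):

override func viewDidLoad() {
    super.viewDidLoad()
    configure() // install the dispatch source's event handler once
}

// Hypothetical entry point, called at ~60 fps by the capture pipeline.
func frameArrived(_ data: SomeData) {
    processData(data: data) // coalesced automatically while processing is busy
}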
The latter approach can conceivably process more frames (or whatever you're handling) per second, and it results in less idle time on the processing queue. The image below (not reproduced here) graphically illustrated how the two alternatives compare: in the former approach, you lose every other frame of data, whereas the latter approach only loses a frame when two separate sets of input data arrive before the processing queue becomes free and are coalesced into a single call to the dispatch source.

RxJava2 Single take different route based on the item

I have the following code:
Single<Response<User>> single = service.registerUser();

single
    .subscribeOn(Schedulers.io())
    .observeOn(Schedulers.computation())
    .map(Response::body)
    .flatMap(parentsRepository::writeUser)
    .observeOn(AndroidSchedulers.mainThread())
    .flatMap(parentsRepository::getUser)
Where the parentsRepository is a repo wraping my realm database. The problems come when the server returns validation errors, however. So somewhere in my stream i want to have the equivalent of
if (response.code() == 201) {
    // CONTINUE STREAM USING THE LOGIC THAT HANDLES SUCCESS
} else if (response.code() == 400) {
    // CONTINUE STREAM USING LOGIC TO HANDLE THE VALIDATION ERRORS
}
A solution I have previously implemented is as follows:
Observable<Response<User>> observable_from_api =
    service.attemptLogin(username, password)
        .share();

observable_from_api
    .filter(response -> response.code() == HttpStatus.HTTP_STATUS_200_OK)
    // ... handle logic for success

observable_from_api
    .filter(response -> response.code() == HttpStatus.HTTP_STATUS_400_BAD_REQUEST)
    // ... handle logic for validation errors
I don't like this solution for several reasons. The main one is that it just does not seem right. The second is that the .share() method is only available on an Observable object; since my network operation emits only one response, I would much rather use Single instead, but the .share() method is not available there.
Excuse me if this is a duplicate question; I have done some digging around and only found the solution mentioned above. I want to either see the optimal solution or be told explicitly that this is in fact it.
I think you need to define which kind of data you want your consumer to receive. I assume you want the consumer to receive a User object.
These are the signatures of the method that you should create:
Single<User> handleSuccess(Response<User> response)
Single<User> handleError(Response<User> response)
And then you create your stream this way:
service.registerUser()
    .flatMap(response -> {
        if (response.isSuccessful()) {
            return handleSuccess(response);
        } else {
            return handleError(response);
        }
    })
    .subscribe(user -> logd("user: " + user.name));
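For completeness, the two handler methods might look something like this (a sketch: the success path reuses the repository calls from the question's stream, and ApiException is a hypothetical error type):

Single<User> handleSuccess(Response<User> response) {
    // Persist the returned user, then re-read it, as in the original stream.
    return parentsRepository.writeUser(response.body())
            .flatMap(parentsRepository::getUser);
}

Single<User> handleError(Response<User> response) {
    // Surface validation failures as a Single error (ApiException is hypothetical).
    return Single.error(new ApiException(response.code(), response.message()));
}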