Swift async/await in a for loop

I am scratching my head over the new async/await pattern in Swift 5.5 announced at WWDC 2021; there seems to be a lot of learning involved, and it is not as easy to grasp as it is made out to be. I just saw this for loop, for instance, in a WWDC video:
for await id in staticImageIDsURL.lines {
    let thumbnail = await fetchThumbnail(for: id)
    collage.add(thumbnail)
}
let result = await collage.draw()
As I understand it, every iteration of the for loop will suspend the loop until fetchThumbnail() finishes running (probably on a different thread). My questions:
What is the objective of await id in the for loop line? What if we wrote the for loop as follows, without await?
for id in staticImageIDsURL.lines {
}
Does the for loop above always ensure that images are added to the collage in sequential order, and not in a random order depending on which thumbnails are fetched first? Because in the classic completion-handler way of writing code, ensuring sequential order in an array requires adding some more logic to the code.

The await id means that getting an id element from staticImageIDsURL.lines is an asynchronous operation in itself.
for await id in staticImageIDsURL.lines
This operation has to complete before we enter the for loop's body for that iteration. You should read the AsyncSequence docs to learn more, or watch the Meet AsyncSequence session from WWDC 2021.
For each iteration, you wait for the current fetch to complete when you write this:
let thumbnail = await fetchThumbnail(for: id)
This line suspends the function each time a new fetch call is initiated, so the thumbnail calls are guaranteed to complete sequentially. These calls NEVER happen in parallel; the first has to complete before the second one is initiated.
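To make the two suspension points explicit, here is roughly what the for await loop desugars to, assuming (as the simplified snippet does) a non-throwing AsyncSequence; the iterator variable is only for illustration:

// Sketch only: manual iteration over a non-throwing AsyncSequence
var iterator = staticImageIDsURL.lines.makeAsyncIterator()
while let id = await iterator.next() {              // suspension point 1: wait for the next id
    let thumbnail = await fetchThumbnail(for: id)   // suspension point 2: wait for this fetch
    collage.add(thumbnail)                          // thumbnails are therefore added strictly in order
}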

rxdart: Get buffered elements on stream subscription cancel

I'm using a rxdart ZipStream within my app to combine two streams of incoming bluetooth data. Those streams are used along with "bufferCount" to collect 500 elements each before emitting. Everything works fine so far, but if the stream subscription gets cancelled at some point, there might be a number of elements in those buffers that are omitted after that. I could wait for a "buffer cycle" to complete before cancelling the stream subscription, but as this might take some time depending on the configured sample rate, I wonder if there is a solution to get those buffers as they are even if the number of elements might be less than 500.
Here is some simplified code for explanation:
subscription = ZipStream.zip2(
  streamA.bufferCount(500),
  streamB.bufferCount(500),
  (streamABuffer, streamBBuffer) {
    return ...;
  },
).listen((data) {
  ...
});
Thanks in advance!
So, for anyone wondering: since bufferCount is implemented with BufferCountStreamTransformer, which extends BackpressureStreamTransformer, there is a dispatchOnClose property that defaults to true. That means that when the underlying stream whose elements are being buffered is closed, the remaining elements in the buffer are emitted as a final event. This also applies to the example above. My mistake was closing the stream and cancelling the stream subscription immediately. By awaiting the stream's close and only cancelling the stream subscription afterwards, everything works as expected.
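A minimal sketch of that ordering, assuming the incoming Bluetooth data is pushed through StreamControllers (the controller names here are hypothetical):

// Close the sources first, then cancel, so the partial buffers are flushed.
await controllerA.close();   // dispatchOnClose lets bufferCount emit what is left (< 500 elements)
await controllerB.close();
await subscription.cancel(); // cancel only after the close events have been delivered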

Silence between tracks in just_audio

I want to have a variable length pause between tracks in a playlist created with a just_audio AudioPlayer instance (there is a background track which I want playing during this interval). Something to the effect of:
_voiceAudioPlayer.currentIndexStream.listen((event) {
  _voiceAudioPlayer.pause();
  Future.delayed(const Duration(seconds: 4), () => _voiceAudioPlayer.play());
});
This throws an error:
"Unhandled Exception: Bad state: Cannot fire new event. Controller is already firing an event"
Is there a clean way to do this? I'm considering inserting silent MP3s as every other track in the playlist, but I feel there ought to be a better way.
This error happens because currentIndexStream is a "sync" broadcast stream, so you can't trigger another state change event while the current event is being processed (i.e. in the same cycle of the event loop). But you can get around that by scheduling a microtask to happen after the current cycle:
_voiceAudioPlayer.currentIndexStream.listen((index) {
  scheduleMicrotask(() async {
    _voiceAudioPlayer.pause();
    await Future.delayed(Duration(seconds: 4));
    _voiceAudioPlayer.play();
  });
});
Still, I wouldn't depend on this callback being executed soon enough due to the gapless nature of just_audio's playlists. That is, the next audio track will begin playing immediately, so you're bound to hear at least a fraction of the next item's audio before the pause happens.
There is an open feature request for a SilenceAudioSource which could be inserted into a playlist (you can vote for that issue by clicking the thumbs-up button if you'd like it to be implemented). The silent audio file you proposed is actually the simplest alternative to SilenceAudioSource.
Otherwise, another approach would be to not use the gapless playlists feature at all (since you don't need the gapless feature anyway), and just implement your own logic to advance the queue:
final queue = [source1, source2, source3, ...];
for (var source in queue) {
  await _voiceAudioPlayer.setAudioSource(source);
  await _voiceAudioPlayer.play();
  await Future.delayed(Duration(seconds: 4));
}
The above example does not handle pause/resume logic, but it is just to show that it is possible for you to take the playlist logic into your own hands if you don't require the gapless feature.

Async/Await/then in Dart/Flutter

I have a flutter application where I am using the SQFLITE plugin to fetch data from SQLite DB. Here I am facing a weird problem. As per my understanding, we use either async/await or then() function for async programming.
Here I have a db.query() method which runs some SQL queries to fetch data from the DB. After this function fetches the data, we do some further processing in the .then() function. However, with this approach I was facing some issues. From where I am calling this getExpensesByFundId(int fundId) function, it doesn't seem to fetch the data properly. It's supposed to return a Future<List<Expense>> object, which is then converted to a List<Expense> when the data is available. But when I call it, it doesn't work.
However, I did some experimentation and added the "await" keyword in front of the db.query() function, and somehow it just started to work fine. Can you explain why adding the await keyword solves this issue? I thought that when using the .then() function, we don't need the await keyword.
Here are my codes:
Future<List<Expense>> getExpensesByFundId(int fundId) async {
  Database db = await database;
  List<Expense> expenseList = [];
  // The await in the line below is what I'm talking about
  await db.query(expTable, where: '$expTable.$expFundId = $fundId')
      .then((List<Map<String, dynamic>> expList) {
    expList.forEach((Map<String, dynamic> expMap) {
      expenseList.add(Expense.fromMap(expMap));
    });
  });
  return expenseList;
}
In simple words:
await is meant to interrupt the process flow until the async method has finished.
then however does not interrupt the process flow (meaning the next instructions will be executed) but enables you to run code when the async method is finished.
In your example, you cannot achieve what you want when you use then because the code is not 'waiting' and the return statement is processed and thus returns an empty list.
When you add the await, you explicitly say: 'don't go further until my Future is completed' (namely the then part).
You could write your code as follows to achieve the same result using only await:
Future<List<Expense>> getExpensesByFundId(int fundId) async {
  Database db = await database;
  List<Expense> expenseList = [];
  List<Map<String, dynamic>> expList =
      await db.query(expTable, where: '$expTable.$expFundId = $fundId');
  expList.forEach((Map<String, dynamic> expMap) {
    expenseList.add(Expense.fromMap(expMap));
  });
  return expenseList;
}
You could also choose to use only the then part, but you need to ensure that you call getExpensesByFundId properly afterwards:
Future<List<Expense>> getExpensesByFundId(int fundId) async {
  Database db = await database;
  List<Expense> expenseList = [];
  return db.query(expTable, where: '$expTable.$expFundId = $fundId')
      .then((List<Map<String, dynamic>> expList) {
    expList.forEach((Map<String, dynamic> expMap) {
      expenseList.add(Expense.fromMap(expMap));
    });
    return expenseList; // return the populated list so the outer Future completes with it
  });
}
// call either with an await
List<Expense> list = await getExpensesByFundId(1);
// or with a then (knowing that this will not interrupt the process flow; the next instruction runs immediately)
getExpensesByFundId(1).then((List<Expense> l) { /*...*/ });
Adding to the above answers:
A Flutter application may look like straight step-by-step execution of code, but that is not how it works.
Many events are triggered over the lifecycle of an application, such as click events and timers, so some work has to be scheduled to run later instead of blocking the code that is currently executing.
How this scheduled work is executed: there are two queues,
Microtask Queue
Event Queue
The microtask queue runs code that is not triggered by an external event (a click, a timer, etc.); it can contain both sync and async work.
The event queue is processed when an external event occurs in the application, such as a click; that block of work is then executed inside the event loop.
Note: at any given point, the microtask queue is drained first; only then can the event queue run (see the sketch below).
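As an illustration of that ordering (not from the original answer), a minimal Dart snippet showing microtasks running before queued events:

import 'dart:async';

void main() {
  Future(() => print('event queue'));          // scheduled on the event queue
  scheduleMicrotask(() => print('microtask')); // scheduled on the microtask queue
  print('synchronous');                        // plain synchronous code runs first
}
// Output order: synchronous, microtask, event queue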
When you mark a function async so that you can use await, the logic is simply to make the function wait until your data has been retrieved before showing it.
Example: 1) the user taps a follow button; 2) the data is first stored in the database, then a Future-returning function is used to retrieve it; 3) that data is moved into a variable and then shown on screen; 4) the variable is displayed, e.g. the count increments in your following/profile.
then, in turn, runs the code one step at a time: store the data in a variable, and then move on to the next step.
Example: if I tap the follow button, the next function is not allowed to run until the data has been stored in the variable; once one task is complete, it moves on to the next.
Like the question author, I was experimenting with this in a social-media Flutter app, and this is my understanding. I hope it helps.
A follow-up question on the answer above: await is meant to interrupt the process flow until the async method has finished, whereas then does not interrupt the process flow but enables you to run code when the async method is finished. So, I am asking about the difference between top-down and bottom-up process flow in programming.

Sharing cold and hot observables

I'm confused by the behavior of a shared stream that is created using Rx.Observable.just.
For example:
var log = function(x) { console.log(x); };

var cold = Rx.Observable
  .just({ foo: 'cold' });

cold.subscribe(log); // <-- Logs three times
cold.subscribe(log);
cold.subscribe(log);

var coldShare = Rx.Observable
  .just({ foo: 'cold share' })
  .share();

coldShare.subscribe(log); // <-- Only logs once
coldShare.subscribe(log);
coldShare.subscribe(log);
Both streams emit only one event, yet the un-shared one logs once per subscription (three times) while the shared one logs only once. Why is this?
I need to "fork" a stream but share its value (and then combine the forked streams).
How can I share the value of a stream but also subscribe to it multiple times?
I realize that this is probably related to the concept of "cold" and "hot" observables. However:
Is the stream created by Rx.Observable.just() cold or hot?
How is one supposed to determine the answer to the previous question?
Is the stream created by Rx.Observable.just() cold or hot?
Cold.
How is one supposed to determine the answer to the previous question?
I guess the documentation is the only guide.
How can I share the value of a stream but also subscribe to it multiple times?
You are looking for the idea of a connectable observable. For example:
var log = function(x) { console.log(x); };

var coldShare = Rx.Observable
  .just({ foo: 'cold share' })
  .publish();

coldShare.subscribe(log); // Does nothing yet
coldShare.subscribe(log); // Does nothing yet
coldShare.subscribe(log); // Does nothing yet
coldShare.connect();      // Emits one value to its three subscribers (logs three times)
The example above logs three times. Using publish and connect, you essentially "pause" the observable until the call to connect.
See also:
How do I share an observable with publish and connect?
Are there 'hot' and 'cold' operators?
I don't understand your first question, but about the last one, since I have had trouble with it too:
The RxJS implementation of Observables/Observers is based on the observer pattern, which is similar to the good old callback mechanism.
To exemplify, here is the basic form of creating an observable (taken from the doc at https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/create.md)
var source = Rx.Observable.create(function (observer) {
  observer.onNext(42);
  observer.onCompleted();

  // Note that this is optional, you do not have to return this if you require no cleanup
  return function () {
    console.log('disposed');
  };
});
Rx.Observable.create takes as its argument a function (call it factory_fn) which receives an observer. Your values are generated by a computation of your choice in the body of factory_fn, and because you have the observer as a parameter you can push the generated values whenever you see fit. BUT factory_fn is not executed at that point; it is only registered (like a callback would be). It will be called every time there is a subscribe(observer) on the related observable (i.e. the one returned by Rx.Observable.create(factory_fn)).
Once the subscription is made (i.e. the creation callback has been called), values flow to your observer according to the logic in the factory function, and it remains that way until your observable completes or the observer unsubscribes (supposing you did implement an action to cancel the value flow as the return value of factory_fn).
What that basically means is that, by default, Rx.Observables are cold.
My conclusion, after using the library quite a bit, is that unless it is duly documented, the only way to know FOR SURE the temperature of an observable is to eye the source code. Or add a side effect somewhere, subscribe twice, and see whether the side effect happens twice or only once (which is what you did). That, or ask on Stack Overflow.
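As an illustration of that side-effect test (a hypothetical probe, not from the original answer):

var probe = Rx.Observable.create(function (observer) {
  console.log('side effect: factory ran'); // logged once per subscription when cold
  observer.onNext(1);
  observer.onCompleted();
});

probe.subscribe(function (x) {});
probe.subscribe(function (x) {});
// "side effect: factory ran" is printed twice, so probe is cold.
// With probe.publish() and a single connect(), it would be printed only once.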
For instance, Rx.fromEvent produces hot observables, as you can see from the last line of its code (return new EventObservable(element, eventName, selector).publish().refCount();). (Code here: https://github.com/Reactive-Extensions/RxJS/blob/master/src/core/linq/observable/fromevent.js). The publish operator is among those operators which turn a cold observable into a hot one. How that works is out of scope, so I won't detail it here.
But Rx.DOM.fromWebSocket does not produce hot observables (https://github.com/Reactive-Extensions/RxJS-DOM/blob/master/src/dom/websocket.js). Cf. How to buffer stream using fromWebSocket Subject.
Confusion often comes, I think, from the fact that we conflate the actual source (say, a stream of button clicks) with its representation (the Rx.Observable). It is unfortunate when that happens, but what we imagine as hot sources can end up being represented by a cold Rx.Observable.
So, yes, Rx.Observable.just creates cold observables.

Play 1.2.3 framework - Right way to commit transaction

We have an HTTP endpoint that takes a long time to run and can also be called concurrently by users. As part of this request, we update the model inside a synchronized block so that other (possibly concurrent) requests pick up that change.
E.g.
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to the user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
// Long running operation continues here. It can involve further changes to instance "m"
//Long running operation continues here. It can involve further changes to instance "m"
The reason for the synchronized block is to ensure that even concurrent requests pick up the latest status. However, the underlying JPA does not commit my changes (m.save()) until the request is complete. Since this is a long-running request, I do not want to wait until the request is complete, and I still want to ensure that other callers are notified of the change in status. I tried calling "m.em().flush(); JPA.em().getTransaction().commit();" after m.save(), but that makes the transaction unavailable for the subsequent actions in the same request. Can I just call "JPA.em().getTransaction().begin();" afterwards and let Play handle the transaction from then on? If not, what is the best way to handle this use case?
UPDATE:
Based on the response, I modified my code as follows:
MyModel m = null;
synchronized (lockObject) {
    m = MyModel.findById(id);
    if (m.status == PENDING) {
        m.status = ACTIVE;
    } else {
        // render a response back to the user that the operation is not allowed
    }
    m.save(); // Is not expected to be called unless we set m.status = ACTIVE
}
new MyModelUpdateJob(m.id).now();
And in my job, I have the following:
public void doJob() {
    MyModel m = MyModel.findById(id);
    System.out.println(m.status); // This still prints the old status, as if m.save() had no effect...
}
What am I missing?
Put your update code in a job and call
new MyModelUpdateJob(id).now().get();
so the update will be done in another transaction that is committed at the end of the job.
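A minimal sketch of such a job under the Play 1.x Job API; the constructor, field, and id type are assumptions for illustration, and ACTIVE stands for whatever status constant the model already uses:

import play.jobs.Job;

public class MyModelUpdateJob extends Job {

    private final Long id; // assumed id type

    public MyModelUpdateJob(Long id) {
        this.id = id;
    }

    @Override
    public void doJob() {
        // Runs in its own JPA transaction, which Play commits when doJob() returns.
        MyModel m = MyModel.findById(id);
        m.status = ACTIVE;
        m.save();
    }
}

Calling new MyModelUpdateJob(id).now().get() from the request then blocks until that separate transaction has committed, so other requests can see the new status.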
Ouch, as soon as you add more Play servers, you will be in trouble. You may want to play with optimistic locking in your example, or (and I advise against it) pessimistic locking... ick.
HOWEVER, looking at your code, maybe read the article Building on Quicksand. I am not sure you need a synchronized block in that case at all... try to go after being idempotent.
In your case, if user 1 and user 2 both call that method while it is PENDING, then it goes to ACTIVE either way (idempotent).
Whether user 1 or user 2 wins, the result is just like having the synchronized block anyway.
I am sure, however, that you have a more complex scenario not shown here, BUT READ that article Building on Quicksand, as it really changes the traditional way of thinking and is how Google, Amazon, and other very large-scale systems operate.
Another option for distributed transactions across Play servers is ZooKeeper, which the big NoSQL guys use, BUT only as a last resort ;) ;)
later,
Dean