This is the dartdoc & declaration of FocusOnKeyCallback:
/// Signature of a callback used by [Focus.onKey] and [FocusScope.onKey]
/// to receive key events.
///
/// The [node] is the node that received the event.
typedef FocusOnKeyCallback = bool Function(FocusNode node, RawKeyEvent event);
The callback has a return type of bool, but it is not clear how that value is used. Would void have worked here?
(originally asked on Flutter's GitHub: https://github.com/flutter/flutter/issues/45367)
It returns bool to indicate whether the event was handled by any of the focus nodes; if no node handles it, the debug assert just logs that fact. Take a look at the _handleRawKeyEvent method of the FocusManager class:
...
bool handled = false;
for (FocusNode node in <FocusNode>[_primaryFocus, ..._primaryFocus.ancestors]) {
  if (node.onKey != null && node.onKey(node, event)) {
    assert(_focusDebug('Node $node handled key event $event.'));
    handled = true;
    break;
  }
}
if (!handled) {
  assert(_focusDebug('Key event not handled by anyone: $event.'));
}
...
So, essentially, returning true from onKey stops further propagation of the key event.
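To make the contract concrete, here is a small framework-free sketch of the same pattern (the names KeyHandler and dispatch are illustrative, not Flutter's): each handler returns true to claim the event, and the dispatcher stops walking the chain as soon as one does.
typedef KeyHandler = bool Function(String key);

bool dispatch(String key, List<KeyHandler> chain) {
  for (final handler in chain) {
    if (handler(key)) {
      return true; // a handler claimed the event: stop propagation
    }
  }
  return false; // nobody handled it
}

void main() {
  final handled = dispatch('Escape', [
    (key) => false,            // innermost node passes the event on
    (key) => key == 'Escape',  // an ancestor claims Escape
  ]);
  print(handled); // true
}
Returning void instead would give the dispatcher no way to know when to stop, so every ancestor would see every event.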
I am relatively new to Flutter. I am creating a timer app using bloc. In the timer you are supposed to have a session, then a break, and so on. To track which type of session to start (a break or a session), I am using a boolean value isBreak which I track in the SessionsBloc.
Here is the definition of the SessionsState:
part of 'sessions_bloc.dart';
abstract class SessionsState extends Equatable {
  final bool isBreak;
  final int sessionCount;
  const SessionsState(this.isBreak, this.sessionCount);

  @override
  List<Object> get props => [];
}

class SessionsInitial extends SessionsState {
  const SessionsInitial(super.isBreak, super.sessionCount);
}

class SessionTrackingState extends SessionsState {
  const SessionTrackingState(super.isBreak, super.sessionCount);
}
I then use a BlocListener to check for the TimerFinishedState from another bloc, TimerBloc, after which I add an event, SessionTrackingEvent, that is supposed to change the aforementioned boolean value.
Here is the code for the listener:
listener: (context, state) {
  Task currentTask =
      BlocProvider.of<TasksBloc>(context).state.currentTask;
  bool isTimeBoxed = currentTask.isTimeBoxed;
  int sessionDuration;
  int breakDuration;
  if (state is TimerCompleteState) {
    //Get the SessionsBloc state
    final sessionState = BlocProvider.of<SessionsBloc>(context).state;
    //Get the current value of the isBreak boolean value
    bool isBreak = sessionState.isBreak;
    int sessionCount = sessionState.sessionCount;
    //Print statements: Can't Debug properly yet:(
    print(sessionState.isBreak);
    print(sessionState.sessionCount);
    if (isTimeBoxed) {
      sessionDuration = currentTask.sessionTimeBox!;
      breakDuration = currentTask.breakTimeBox ?? 2;
      // sessionCount = HiveDb().getTaskSessionCount(currentTask.taskName);
    } else {
      sessionDuration = 5;
      breakDuration = 3;
    }
    if (isBreak) {
      //Set timer with duration time
      BlocProvider.of<TimerBloc>(context)
          .add(InitializeTimerEvent(duration: sessionDuration));
      //Add Event to track session count and next countdown if break or session
      BlocProvider.of<SessionsBloc>(context).add(SessionTrackingEvent(
        isBreak: isBreak,
        sessionCount: sessionCount,
      ));
    } else {
      //Add event to reset timer
      BlocProvider.of<TimerBloc>(context)
          .add(InitializeTimerEvent(duration: breakDuration));
      //Emit a state that notifies Button Bloc that it's a break and deactivate repeat button.
      BlocProvider.of<TimerBloc>(context)
          .add(OnBreakEvent(duration: breakDuration));
      //Add Event to track session count and next countdown if break or session
      BlocProvider.of<SessionsBloc>(context).add(SessionTrackingEvent(
        isBreak: isBreak,
        sessionCount: sessionCount += 1,
      ));
    }
  }
},
Finally, in the SessionsBloc, I only have the super constructor, which initializes the boolean value to false, and one event handler that is supposed to change it as appropriate.
class SessionsBloc extends Bloc<SessionsEvent, SessionsState> {
  SessionsBloc() : super(const SessionsInitial(false, 0)) {
    on<SessionTrackingEvent>((event, emit) {
      emit(SessionTrackingState(
          event.isBreak ? false : true, event.sessionCount));
    });
  }
}
The expected result is that for each SessionTrackingEvent added, the boolean should be toggled to the opposite value. However, what actually happens is that it works the first time, turning the initialized value of false to true, and from there it just stays the same. Here is a screenshot of my print statement which outputs the value of isBreak after every call to SessionTrackingEvent.
I have tried changing the variables from final because I thought maybe it was a Flutter constraint about reassigning variables.
I have tried moving the reading of the bloc state value into the build method, outside of the listener, because I thought maybe it doesn't read the value as frequently.
What could be the problem, what might be preventing the value from changing as appropriate?
You forgot to pass your SessionsState properties into the props list, so the Bloc can't differentiate between old and new states.
abstract class SessionsState extends Equatable {
  final bool isBreak;
  final int sessionCount;
  const SessionsState(this.isBreak, this.sessionCount);

  @override
  List<Object> get props => [isBreak, sessionCount]; // your props should go here like this
}
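To see why the empty props list makes the bloc skip your emits, here is a minimal, self-contained sketch (using package:equatable; the class names are hypothetical): Bloc compares each newly emitted state with the current one, and with an empty props list every SessionTrackingState is equal to every other, so only the first emit after SessionsInitial gets through.
import 'package:equatable/equatable.dart';

class StateWithoutProps extends Equatable {
  final bool isBreak;
  const StateWithoutProps(this.isBreak);

  @override
  List<Object> get props => []; // isBreak is ignored by ==
}

class StateWithProps extends Equatable {
  final bool isBreak;
  const StateWithProps(this.isBreak);

  @override
  List<Object> get props => [isBreak]; // isBreak participates in ==
}

void main() {
  print(const StateWithoutProps(true) == const StateWithoutProps(false)); // true (!)
  print(const StateWithProps(true) == const StateWithProps(false));       // false
}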
I want to batch many futures into a single request that triggers either when a maximum batch size is reached, or a maximum time since the earliest future was received is reached.
Motivation
In flutter, I have many UI elements which need to display the result of a future, dependent on the data in the UI element.
For instance, I have a widget for a place, and a sub-widget which displays how long it will take to walk to that place. To compute how long it will take to walk, I issue a request to the Google Maps API to get the travel time to the place.
It is more efficient and cost-effective to batch all these API requests into a batch API request. So if there are 100 requests made instantaneously by the widgets, then the futures could be proxied through a single provider, which batches the futures into a single request to Google, and unpacks the result from Google into all the individual requests.
The provider needs to know when to stop waiting for more futures and when to actually issue the request, which should be controllable by the maximum "batch" size (i.e., # of travel time requests), or the maximum amount of time you are willing to wait for batching to take place.
The desired API would be something like:
// Client gives this to tell provider how to compute batch result.
abstract class BatchComputer<K, V> {
  Future<List<V>> compute(List<K> batchedInputs);
}

// Batching library returns an object with this interface
// so that the client can submit inputs to be completed by the batch provider.
abstract class BatchingFutureProvider<K, V> {
  Future<V> submit(K inputValue);
}

// How do you implement this in Dart???
BatchingFutureProvider<K, V> create<K, V>(
  BatchComputer<K, V> computer,
  int maxBatchSize,
  Duration maxWaitDuration,
);
Does Dart (or a pub package) already provide this batching functionality, and if not, how would you implement the create function above?
This sounds perfectly reasonable, but also very specialized.
You need a way to represent a query, to combine these queries into a single super-query, and to split the super-result into individual results afterwards, which is what your BatchComputer does. Then you need a queue which you can flush through that under some conditions.
One thing that is clear is that you will need to use Completers for the results because you always need that when you want to return a future before you have the value or future to complete it with.
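Just to illustrate the Completer idiom in isolation (a trivial, self-contained sketch, separate from the batching code below): the future is handed out immediately and completed later, once the batched response arrives.
import 'dart:async';

void main() {
  final completer = Completer<int>();
  completer.future.then(print); // the caller gets a Future<int> right away
  completer.complete(42);       // the provider fills it in later
}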
The approach I would choose would be:
import "dart:async";
/// A batch of requests to be handled together.
///
/// Collects [Request]s until the pending requests are flushed.
/// Requests can be flushed by calling [flush] or by configuring
/// the batch to automatically flush when reaching certain
/// thresholds.
class BatchRequest<Request, Response> {
final int _maxRequests;
final Duration _maxDelay;
final Future<List<Response>> Function(List<Request>) _compute;
Timer _timeout;
List<Request> _pendingRequests;
List<Completer<Response>> _responseCompleters;
/// Creates a batcher of [Request]s.
///
/// Batches requests until calling [flush]. At that point, the
/// [batchCompute] function gets the list of pending requests,
/// and it should respond with a list of [Response]s.
/// The response to a request in the argument list
/// should be at the same index in the response list,
/// and as such, the response list must have the same number
/// of responses as there were requests.
///
/// If [maxRequestsPerBatch] is supplied, requests are automatically
/// flushed whenever there are that many requests pending.
///
/// If [maxDelay] is supplied, requests are automatically flushed
/// when the oldest request has been pending for that long.
/// As such, the [maxDelay] is not the maximal time before a request
/// is answered, just how long sending the request may be delayed.
BatchRequest(Future<List<Response>> Function(List<Request>) batchCompute,
{int maxRequestsPerBatch, Duration maxDelay})
: _compute = batchCompute,
_maxRequests = maxRequestsPerBatch,
_maxDelay = maxDelay;
/// Add a request to the batch.
///
/// The request is stored until the requests are flushed,
/// then the returned future is completed with the result (or error)
/// received from handling the requests.
Future<Response> addRequest(Request request) {
var completer = Completer<Response>();
(_pendingRequests ??= []).add(request);
(_responseCompleters ??= []).add(completer);
if (_pendingRequests.length == _maxRequests) {
_flush();
} else if (_timeout == null && _maxDelay != null) {
_timeout = Timer(_maxDelay, _flush);
}
return completer.future;
}
/// Flush any pending requests immediately.
void flush() {
_flush();
}
void _flush() {
if (_pendingRequests == null) {
assert(_timeout == null);
assert(_responseCompleters == null);
return;
}
if (_timeout != null) {
_timeout.cancel();
_timeout = null;
}
var requests = _pendingRequests;
var completers = _responseCompleters;
_pendingRequests = null;
_responseCompleters = null;
_compute(requests).then((List<Response> results) {
if (results.length != completers.length) {
throw StateError("Wrong number of results. "
"Expected ${completers.length}, got ${results.length}");
}
for (int i = 0; i < results.length; i++) {
completers[i].complete(results[i]);
}
}).catchError((error, stack) {
for (var completer in completers) {
completer.completeError(error, stack);
}
});
}
}
You can use that as, for example:
void main() async {
var b = BatchRequest<int, int>(_compute,
maxRequestsPerBatch: 5, maxDelay: Duration(seconds: 1));
var sw = Stopwatch()..start();
for (int i = 0; i < 8; i++) {
b.addRequest(i).then((r) {
print("${sw.elapsedMilliseconds.toString().padLeft(4)}: $i -> $r");
});
}
}
Future<List<int>> _compute(List<int> args) =>
Future.value([for (var x in args) x + 1]);
See https://pub.dev/packages/batching_future/versions/0.0.2
I have almost exactly the same answer as @lrn, but I have put some effort into making the main line synchronous, and added some documentation.
/// Exposes [createBatcher] which batches computation requests until either
/// a max batch size or max wait duration is reached.
///
import 'dart:async';
import 'dart:collection';
import 'package:quiver/iterables.dart';
import 'package:synchronized/synchronized.dart';
/// Converts input type [K] to output type [V] for every item in
/// [batchedInputs]. There must be exactly one item in output list for every
/// item in input list, and assumes that input[i] => output[i].
abstract class BatchComputer<K, V> {
const BatchComputer();
Future<List<V>> compute(List<K> batchedInputs);
}
/// Interface to submit (possible) batched computation requests.
abstract class BatchingFutureProvider<K, V> {
Future<V> submit(K inputValue);
}
/// Returns a batcher which computes transformations in batch using [computer].
/// The batcher will wait to compute until [maxWaitDuration] is reached since
/// the first item in the current batch is received, or [maxBatchSize] items
/// are in the current batch, whichever happens first.
/// If [maxBatchSize] or [maxWaitDuration] is null, then the triggering
/// condition is ignored, but at least one condition must be supplied.
///
/// Warning: If [maxWaitDuration] is not supplied, then it is possible that
/// a partial batch will never finish computing.
BatchingFutureProvider<K, V> createBatcher<K, V>(BatchComputer<K, V> computer,
{int maxBatchSize, Duration maxWaitDuration}) {
if (!((maxBatchSize != null || maxWaitDuration != null) &&
(maxWaitDuration == null || maxWaitDuration.inMilliseconds > 0) &&
(maxBatchSize == null || maxBatchSize > 0))) {
throw ArgumentError(
"At least one of {maxBatchSize, maxWaitDuration} must be specified and be positive values");
}
return _Impl(computer, maxBatchSize, maxWaitDuration);
}
// Holds the input value and the future to complete it.
class _Payload<K, V> {
final K k;
final Completer<V> completer;
_Payload(this.k, this.completer);
}
enum _ExecuteCommand { EXECUTE }
/// Implements [createBatcher].
class _Impl<K, V> implements BatchingFutureProvider<K, V> {
/// Queues computation requests.
final controller = StreamController<dynamic>();
/// Queues the input values with their futures to complete.
final queue = Queue<_Payload>();
/// Locks access to [listen] to make queue-processing single-threaded.
final lock = Lock();
/// [maxWaitDuration] timer, as a stored reference to cancel early if needed.
Timer timer;
/// Performs the input->output batch transformation.
final BatchComputer computer;
/// See [createBatcher].
final int maxBatchSize;
/// See [createBatcher].
final Duration maxWaitDuration;
_Impl(this.computer, this.maxBatchSize, this.maxWaitDuration) {
controller.stream.listen(listen);
}
void dispose() {
controller.close();
}
@override
Future<V> submit(K inputValue) {
final completer = Completer<V>();
controller.add(_Payload(inputValue, completer));
return completer.future;
}
// Synchronous event-processing logic.
void listen(dynamic event) async {
await lock.synchronized(() {
if (event.runtimeType == _ExecuteCommand) {
if (timer?.isActive ?? true) {
// The timer got reset, so ignore this old request.
// The current timer needs to be inactive and non-null
// for the execution to be legitimate.
return;
}
execute();
} else {
addPayload(event as _Payload);
}
return;
});
}
void addPayload(_Payload _payload) {
if (queue.isEmpty && maxWaitDuration != null) {
// This is the first item of the batch.
// Trigger the timer so we are guaranteed to start computing
// this batch before [maxWaitDuration].
timer = Timer(maxWaitDuration, triggerTimer);
}
queue.add(_payload);
if (maxBatchSize != null && queue.length >= maxBatchSize) {
execute();
return;
}
}
void execute() async {
timer?.cancel();
if (queue.isEmpty) {
return;
}
final results = await computer.compute(List<K>.of(queue.map((p) => p.k)));
for (var pair in zip<Object>([queue, results])) {
(pair[0] as _Payload).completer.complete(pair[1] as V);
}
queue.clear();
}
void triggerTimer() {
listen(_ExecuteCommand.EXECUTE);
}
}
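For completeness, here is a hypothetical usage sketch of createBatcher, mirroring the first answer's demo (AddOneComputer is an illustrative name, not part of any package):
class AddOneComputer extends BatchComputer<int, int> {
  const AddOneComputer();

  @override
  Future<List<int>> compute(List<int> batchedInputs) =>
      Future.value([for (final x in batchedInputs) x + 1]);
}

void main() {
  final batcher = createBatcher<int, int>(const AddOneComputer(),
      maxBatchSize: 5, maxWaitDuration: const Duration(seconds: 1));
  for (var i = 0; i < 8; i++) {
    batcher.submit(i).then((r) => print('$i -> $r'));
  }
}
With those settings, the first five submissions should flush as soon as the batch fills up, and the remaining three should flush about a second later when the timer fires.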
When writing a UI in GTK, it's generally preferable to handle reading of files, etc. in an async method. Things such as list boxes are generally bound to a ListModel, with the items in the ListBox updated in accordance with the items_changed signal.
So if I have some class that implements ListModel and has an add function, and some FileReader that holds a reference to said ListModel, and I call add from an async function, how do I make that, in essence, trigger items_changed so that GTK updates accordingly?
I've tried list.items_changed.connect(message("Items changed!")); but it never triggers.
I saw this: How can one update GTK+ UI in Vala from a long operation without blocking the UI
But in that example, it's just the button label that is changed; no signal is actually triggered.
EDIT: (Code sample added at the request of @Michael Gratton)
//Disclaimer: everything here is still very much a work in progress, and will, as soon as I'm confident that what I have is not total crap, be released under some GPL or other open license.
//Note: for the sake of readability, I adopted the C# naming convention for interfaces, namely, putting a capital 'I' in front of them, a decision I do not feel quite as confident in as I did earlier.
//Note: the calls to message(..) were put in here to help with debugging
public class AsyncFileContext : Object{
private int64 offset;
private bool start_read;
private bool read_to_end;
private Factories.IVCardFactory factory;
private File file;
private FileMonitor monitor;
private Gee.Set<IVCard> vcard_buffer;
private IObservableSet<IVCard> _vCards;
public IObservableSet<IVCard> vCards {
owned get{
return this._vCards;
}
}
construct{
//We want to start fileops at the beginning of the file
this.offset = (int64)0;
this.start_read = true;
this.read_to_end = false;
this.vcard_buffer = new Gee.HashSet<IVCard>();
this.factory = new Factories.GenericVCardFactory();
}
public void add_vcard(IVCard card){
//TODO: implement
}
public AsyncFileContext(IObservableSet<IVCard> vcards, string path){
this._vCards = vcards;
this._vCards = IObservableSet.wrap_set<IVCard>(new Gee.HashSet<IVCard>());
this.file = File.new_for_path(path);
this.monitor = file.monitor_file(FileMonitorFlags.NONE, null);
message("1");
//TODO: add connect
this.monitor.changed.connect((file, otherfile, event) => {
if(event != FileMonitorEvent.DELETED){
bool changes_done = event == FileMonitorEvent.CHANGES_DONE_HINT;
Idle.add(() => {
read_file_async.begin(changes_done);
return false;
});
}
});
message("2");
//We don't know that changes are done yet
//TODO: Consider carefully how you want this to work when it is NOT called from an event
Idle.add(() => {
read_file_async.begin(false);
return false;
});
}
//Changes done should only be true if the FileMonitorEvent that triggers the call was CHANGES_DONE_HINT
private async void read_file_async(bool changes_done) throws IOError{
if(!this.start_read){
return;
}
this.start_read = false;
var dis = new DataInputStream(yield file.read_async());
message("3");
//If we've been reading this file, and there's then a change, we assume we need to continue where we let off
//TODO: assert that the offset isn't at the very end of the file, if so reset to 0 so we can reread the file
if(offset > 0){
dis.seek(offset, SeekType.SET);
}
string line;
int vcards_added = 0;
while((line = yield dis.read_line_async()) != null){
message("position: %s".printf(dis.tell().to_string()));
this.offset = dis.tell();
message("4");
message(line);
//if the line is empty, we want to jump to next line, and ignore the input here entirely
if(line.chomp().chug() == ""){
continue;
}
this.factory.add_line(line);
if(factory.vcard_ready){
message("creating...");
this.vcard_buffer.add(factory.create());
vcards_added++;
//If we've read-in and created an entire vcard, it's time to yield
message("Yielding...");
Idle.add(() => {
_vCards.add_all(vcard_buffer);
vcard_buffer.remove_all(_vCards);
return false;
});
Idle.add(read_file_async.callback);
yield;
message("Resuming");
}
}
//IF we expect there will be no more writing, or if we expect that we read ALL the vcards, and did not add any, it's time to go back and read through the whole thing again.
if(changes_done){ //|| vcards_added == 0){
this.offset = 0;
}
this.start_read = true;
}
}
//The main idea in this class is to just bind the IObservableCollection's item_added, item_removed and cleared signals to the items_changed signal of the ListModel. IObservableCollection is a class I have implemented that merely wraps Gee.Collection; it is unit-tested and works as intended.
public class VCardListModel : ListModel, Object{
private Gee.List<IVCard> vcard_list;
private IObservableCollection<IVCard> vcard_collection;
public VCardListModel(IObservableCollection<IVCard> vcard_collection){
this.vcard_collection = vcard_collection;
this.vcard_list = new Gee.ArrayList<IVCard>.wrap(vcard_collection.to_array());
this.vcard_collection.item_added.connect((vcard) => {
vcard_list.add(vcard);
int pos = vcard_list.index_of(vcard);
items_changed(pos, 0, 1);
});
this.vcard_collection.item_removed.connect((vcard) => {
int pos = vcard_list.index_of(vcard);
vcard_list.remove(vcard);
items_changed(pos, 1, 0);
});
this.vcard_collection.cleared.connect(() => {
items_changed(0, vcard_list.size, 0);
vcard_list.clear();
});
}
public Object? get_item(uint position){
if((vcard_list.size - 1) < position){
return null;
}
return this.vcard_list.get((int)position);
}
public Type get_item_type(){
return Type.from_name("VikingvCardIVCard");
}
public uint get_n_items(){
return (uint)this.vcard_list.size;
}
public Object? get_object(uint position){
return this.get_item((int)position);
}
}
//The IObservableCollection parsed to this classes constructor, is the one from the AsyncFileContext
public class ContactList : Gtk.ListBox{
private ListModel list_model;
public ContactList(IObservableCollection<IVCard> ivcards){
this.list_model = new VCardListModel(ivcards);
bind_model(this.list_model, create_row_func);
list_model.items_changed.connect(() => {
message("Items Changed!");
base.show_all();
});
}
private Gtk.Widget create_row_func(Object item){
return new ContactRow((IVCard)item);
}
}
Here's the way I 'solved' it.
I'm not particularly proud of this solution, but there are a couple of awful things about the Gtk ListBox, one of them being (and this might really be more of a ListModel issue) that if the ListBox is bound to a ListModel, it will NOT be sortable via the sort method, which to me is a dealbreaker. I've solved it by making a class which is basically a List wrapper with an 'added' signal and a 'remove' signal. Upon adding an element to the list, the added signal is wired up so that it creates a new Row object and adds it to the list box. That way, data is controlled in a manner similar to ListModel binding. I cannot make it work without calling the show_all method, though.
private IObservableCollection<IVCard> _ivcards;
public IObservableCollection<IVCard> ivcards {
get{
return _ivcards;
}
set{
this._ivcards = value;
foreach(var card in this._ivcards){
base.prepend(new ContactRow(card));
}
this._ivcards.item_added.connect((item) => {
base.add(new ContactRow(item));
base.show_all();
});
base.show_all();
}
}
Even though this is by no means the best code I've come up with, it works very well.
I'm new to WPF, implementing an application using ReactiveUI.
I have one button and added a command handler for it.
I want to call the method only when canExecute is true.
In the view model, I have defined it like this:
public bool canExecute
{
    get { return _canExecute; }
    set { _canExecute = value; }
}

void Bind()
{
    AddRecord = new ReactiveCommand(_canExecute);
    AddRecord.Subscribe(x =>
    {
        AddR();
    });
}

void AddR()
{
}
But it's not working. How do I convert it into a System.IObservable?
As @jomtois mentions, you need to fix your declaration of CanExecute:
bool canExecute;
public bool CanExecute {
    get { return canExecute; }
    set { this.RaiseAndSetIfChanged(ref canExecute, value); }
}
Then, you can write:
AddRecord = new ReactiveCommand(this.WhenAnyValue(x => x.CanExecute));
Why go to all this effort? This makes it so that when CanExecute changes, the ReactiveCommand automatically enables / disables. However, this design is pretty imperative; I wouldn't create a CanExecute boolean, I'd think about how I can combine properties related to my ViewModel that have a semantic meaning.
Now I know properties do not support async/await for good reasons. But sometimes you need to kick off some additional background processing from a property setter - a good example is data binding in a MVVM scenario.
In my case, I have a property that is bound to the SelectedItem of a ListView. Of course I immediately set the new value to the backing field and the main work of the property is done. But the change of the selected item in the UI needs also to trigger a REST service call to get some new data based on the now selected item.
So I need to call an async method. I can't await it, obviously, but I also do not want to fire and forget the call as I could miss exceptions during the async processing.
Now my take is the following:
private Feed selectedFeed;
public Feed SelectedFeed
{
get
{
return this.selectedFeed;
}
set
{
if (this.selectedFeed != value)
{
this.selectedFeed = value;
RaisePropertyChanged();
Task task = GetFeedArticles(value.Id);
task.ContinueWith(t =>
{
if (t.Status != TaskStatus.RanToCompletion)
{
MessengerInstance.Send<string>("Error description", "DisplayErrorNotification");
}
});
}
}
}
OK, so besides the fact that I could move the handling out of the setter into a synchronous method, is this the correct way to handle such a scenario? Is there a better, less cluttered solution that I'm not seeing?
I would be very interested to see some other takes on this problem. I'm a bit surprised that I wasn't able to find any other discussion of this concrete topic, as it seems very common to me in MVVM apps that make heavy use of data binding.
I have a NotifyTaskCompletion type in my AsyncEx library that is essentially an INotifyPropertyChanged wrapper for Task/Task<T>. AFAIK there is very little information currently available on async combined with MVVM, so let me know if you find any other approaches.
Anyway, the NotifyTaskCompletion approach works best if your tasks return their results. I.e., from your current code sample it looks like GetFeedArticles is setting data-bound properties as a side effect instead of returning the articles. If you make this return Task<T> instead, you can end up with code like this:
private Feed selectedFeed;
public Feed SelectedFeed
{
get
{
return this.selectedFeed;
}
set
{
if (this.selectedFeed == value)
return;
this.selectedFeed = value;
RaisePropertyChanged();
Articles = NotifyTaskCompletion.Create(GetFeedArticlesAsync(value.Id));
}
}
private INotifyTaskCompletion<List<Article>> articles;
public INotifyTaskCompletion<List<Article>> Articles
{
get { return this.articles; }
set
{
if (this.articles == value)
return;
this.articles = value;
RaisePropertyChanged();
}
}
private async Task<List<Article>> GetFeedArticlesAsync(int id)
{
...
}
Then your databinding can use Articles.Result to get to the resulting collection (which is null until GetFeedArticlesAsync completes). You can use NotifyTaskCompletion "out of the box" to data-bind to errors as well (e.g., Articles.ErrorMessage) and it has a few boolean convenience properties (IsSuccessfullyCompleted, IsFaulted) to handle visibility toggles.
Note that this will correctly handle operations completing out of order. Since Articles actually represents the asynchronous operation itself (instead of the results directly), it is updated immediately when a new operation is started. So you'll never see out-of-date results.
You don't have to use data binding for your error handling. You can make whatever semantics you want by modifying the GetFeedArticlesAsync; for example, to handle exceptions by passing them to your MessengerInstance:
private async Task<List<Article>> GetFeedArticlesAsync(int id)
{
try
{
...
}
catch (Exception ex)
{
MessengerInstance.Send<string>("Error description", "DisplayErrorNotification");
return null;
}
}
Similarly, there's no notion of automatic cancellation built-in, but again it's easy to add to GetFeedArticlesAsync:
private CancellationTokenSource getFeedArticlesCts;
private async Task<List<Article>> GetFeedArticlesAsync(int id)
{
if (getFeedArticlesCts != null)
getFeedArticlesCts.Cancel();
using (getFeedArticlesCts = new CancellationTokenSource())
{
...
}
}
This is an area of current development, so please do make improvements or API suggestions!
public class AsyncRunner
{
    public static void Run(Task task, Action<Task> onError = null)
    {
        if (onError == null)
        {
            // Empty continuation scheduled only when the task faults.
            // Note: the lambda must take a single Task parameter; a two-parameter
            // lambda would bind to the (Action<Task, object>, object state) overload,
            // and OnlyOnFaulted would be passed as the state instead of as options.
            task.ContinueWith(t => { }, TaskContinuationOptions.OnlyOnFaulted);
        }
        else
        {
            task.ContinueWith(onError, TaskContinuationOptions.OnlyOnFaulted);
        }
    }
}
Usage within the property:
private NavigationMenuItem _selectedMenuItem;
public NavigationMenuItem SelectedMenuItem
{
    get { return _selectedMenuItem; }
    set
    {
        _selectedMenuItem = value;
        AsyncRunner.Run(NavigateToMenuAsync(_selectedMenuItem));
    }
}
private async Task NavigateToMenuAsync(NavigationMenuItem newNavigationMenu)
{
//call async tasks...
}