.NET Rx - ReplaySubject buffer size not working

I've been using .NET Reactive Extensions to observe log events as they come in. I'm currently using a class that implements IObservable<LogEvent> and uses a ReplaySubject internally to store the logs; that way I can filter and replay the logs (for example: show me all the Error logs, or show me all the Verbose logs) without losing the logs I've buffered.
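For illustration, a filtered replay looks something like this (a hedged sketch; the Level property and LogLevel enum are assumed names, not necessarily the real shape of LogEvent):
// Re-subscribe with a filter; the ReplaySubject replays its buffer.
this.logEventObservable
    .Where(evt => evt.Level == LogLevel.Error)
    .Subscribe(evt => this.InsertNewLogEvent(evt));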
The problem is, even though I've set a buffer size on the subject:
this.subject = new ReplaySubject<LogEvent>(10);
The memory usage of my program goes through the roof when I use OnNext to add to the observable collection on an infinite loop:
internal void WatchForNewEvents()
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            dynamic parameters = new ExpandoObject();
            // TODO: Add parameters for getting specific log events
            if (this.logEventRepository.GetManyHasNewResults(parameters))
            {
                foreach (var recentEvent in this.logEventRepository.GetMany(parameters))
                {
                    this.subject.OnNext(recentEvent);
                }
            }
            // Commented this out for now to really see the memory go up
            // Thread.Sleep(1000);
        }
    });
}
Does the buffer size on ReplaySubject not work? It doesn't seem to be clearing the buffer when the buffer size is reached. Any help much appreciated!
UPDATE:
I add subscribers like this (Is this wrong?):
public IDisposable Subscribe(IObserver<LogEvent> observer)
{
    return this.subject.Subscribe(observer);
}
...which is called like:
// Inserts into UI ListView
this.logEventObservable.Subscribe(evt => this.InsertNewLogEvent(evt));

I'm not sure if this is the definitive answer, but I suspect that you're hitting an issue because of concurrency around the scheduler you're using. The constructor you're calling on ReplaySubject looks like this:
public ReplaySubject(int bufferSize)
    : this(bufferSize, TimeSpan.MaxValue, Scheduler.CurrentThread)
{ }
The Scheduler.CurrentThread worries me. Try changing it to Scheduler.ThreadPool and see if that helps.
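If the scheduler is the culprit, you can pass it explicitly through the two-argument overload (a hedged sketch; Scheduler.ThreadPool is the Rx v1-era property, later System.Reactive versions expose ThreadPoolScheduler.Instance instead):
// Explicitly choose the scheduler the subject replays on.
this.subject = new ReplaySubject<LogEvent>(10, Scheduler.ThreadPool);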
Also, as a side note, you seem to be mixing Rx with the TPL and old-fashioned thread sleeping. It's usually best to avoid doing that. You could change your WatchForNewEvents code to look like this:
dynamic parameters = new ExpandoObject();
var newEvents =
    from n in Observable.Interval(TimeSpan.FromSeconds(1.0))
    where this.logEventRepository.GetManyHasNewResults(parameters)
    from recentEvent in this.logEventRepository.GetMany(parameters).ToObservable()
    select recentEvent;
newEvents.Subscribe(this.subject);
That's a nice compact Rx-y way of doing things.
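One detail worth keeping: Subscribe returns an IDisposable, so you can stop the polling cleanly when the watcher shuts down (the watchSubscription field here is my own, hypothetical name):
// Keep the subscription handle so the polling can be torn down later.
this.watchSubscription = newEvents.Subscribe(this.subject);
// ...and when the watcher is disposed:
this.watchSubscription.Dispose();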

Bound variable in ViewModel is not updating the displayed value

I was trying to create a countdown timer in a ViewModel, but I didn't find any method to do that, so I tried to do it with Task.Delay and a while loop; it ends after the first Task.Delay. Do you know any other way to do that, or how to repair this one?
public PageViewModel()
{
    MethodName();
}

public async void MethodName()
{
    CountSeconds = 10;
    while (CountSeconds > 0)
    {
        await Task.Delay(1000);
        CountSeconds--;
    }
}
The reason you can't see the other values is related to context: you are trying to run async code in a non-async context.
There are several ways to solve this problem; which one to choose is up to you and depends on your needs. One is to await a Task-returning method:
await MethodName();

async Task MethodName()
{
    CountSeconds = 10;
    while (CountSeconds > 0)
    {
        await Task.Delay(1000);
        CountSeconds--;
    }
}
Another way is to create various tasks and execute them; there are methods that can help you with that.
And as Rand Random mentioned, this isn't about MAUI, it's about understanding async programming itself, so it would be useful for you to read more about it.
You can use Dispatcher.StartTimer() (available in the DispatcherExtensions class) to create a function that will execute every x seconds/minutes/hours (depending on what you're setting) using the device's clock.
To access the Application's Dispatcher from any class, use the following line:
var dispatcher = Application.Current.Dispatcher;
Since there is no documentation available yet for MAUI, you can read the Device.StartTimer() documentation from Xamarin, which acts exactly the same.
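Putting that together, a countdown like the one in the question could look like this (a hedged sketch; PageViewModel and CountSeconds come from the question, and CountSeconds is assumed to be a bindable property that raises change notifications):
using System;
using Microsoft.Maui.Controls;
using Microsoft.Maui.Dispatching;

public partial class PageViewModel
{
    // Assumed to be backed by INotifyPropertyChanged so the UI updates.
    public int CountSeconds { get; set; }

    public void StartCountdown()
    {
        CountSeconds = 10;
        // The callback fires every second; returning false stops the timer.
        Application.Current.Dispatcher.StartTimer(TimeSpan.FromSeconds(1), () =>
        {
            CountSeconds--;
            return CountSeconds > 0; // keep ticking until the countdown reaches zero
        });
    }
}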

Recommended data flow (import/export)

I have a Flutter application which (simply put) lists some data on various screens, where it can be modified. My current data approach works, but I feel it is not a best practice or optimal.
Currently, when an object is saved, it is converted to JSON (using dart:convert) and stored in a file on the device (using dart:io), overwriting the file if it exists. Every screen that needs to display these objects reads the file to get them. Every time there is a change that needs to be saved, everything is exported (overwritten) again and then imported again for display.
The reason I chose JSON over S is because I want to add a web portion later. Is this approach of reading/writing a best practice? I feel this much reading/writing of all the data for most screens could cause performance issues.
Any advice is appreciated!
This is a possible way to keep data in memory and write it to disk when changes are made to your data model/settings.
I use RxDart myself. You don't need it per se, although it does make life easier. I'll simplify the examples below so you get to know the concept and can apply it to your own needs.
Let's say you keep track of data in your settings class:
@JsonSerializable()
class Settings {
  String someData1;
  String someData2;
  // json serializable functions
}
You need a "handler"[1] or something similar that manages changes made to your Settings and also reads/writes the data:
class SettingsHandler {
  Settings _settings;
  final StreamController<Settings> _settingsController = BehaviorSubject<Settings>();
  final StreamController<String> _data1Controller = BehaviorSubject<String>();

  StreamSink<String> get data1Input => _data1Controller.sink;
  Observable<String> get data1Output => Observable(_data1Controller.stream);

  Future<Settings> _readFromDisk() async {
    // do your thing
  }

  Future<Settings> _writeToDisk(Settings settings) async {
    // do your thing
  }

  Future<void> init() async {
    // read from disk
    _settings = await _readFromDisk();
    _settingsController.sink.add(_settings);

    // initialize data
    data1Input.add(_settings.someData1);
    data1Output
        .skip(1) // skip because we just added our initialization data above
        .listen((value) =>
            // we must propagate through the update function,
            // otherwise nothing gets written to disk
            update((settings) => settings.someData1 = value));

    // when changes are made, the settings stream is notified
    // so everything can be written to disk
    Observable(_settingsController.stream)
        // save settings every 2.5 seconds when changes occur
        .debounceTime(const Duration(milliseconds: 2500))
        // get the changes and write to disk
        .listen((settings) => _writeToDisk(settings));
  }

  // this function is crucial: it applies changes via [adjustFunc]
  // and then propagates them into the settings stream
  void update(void Function(Settings input) adjustFunc) {
    adjustFunc(_settings);
    _settingsController.sink.add(_settings);
  }
}
So now you can do something like:
var handler = SettingsHandler();
await handler.init();

// this
handler.data1Input.add('NewData');

// or this
handler.update((settings) {
  settings.someData1 = 'NewData';
});
Remember, this code only shows how the concept can work. You need to change it for your situation. You could also decide to not expose data1Input or the update(...) function, this is up to your own design.
[1] I personally use BloC; your situation might require a different approach.

Binding.scala: Strategy to avoid too many dom-tree updates

In my project scala-adapters I display log entries that are sent over a websocket.
As I have no control over how many entries are sent, I am looking for a strategy to keep the screen from freezing.
I created a ScalaFiddle to simulate that: https://scalafiddle.io/sf/kzr28tq
This function with these parameters works perfectly:
setInterval(1000) { // note the absence of () =>
  entries.value += (0 to 100).map(_.toString).mkString("")
}
If the interval gets smaller and the String longer, the screen freezes, e.g. with:
setInterval(100) { // note the absence of () =>
  entries.value += (0 to 10000).map(_.toString).mkString("")
}
Is there a solution to solve that on the client side - or do I have to solve that on the server side?
You can try:
@dom
def render = {
  <div>
    {
      for (entry <- entries) yield {
        entryDiv(entry).bind
      }
    }
  </div>
}
The problem is that you bind entries too early. Binding.scala does its magic via a CPS transform: every .bind triggers re-evaluation of all the code after it, so you should bind a variable as late as possible.
And for Vars, use a for comprehension instead of calling bind directly, to avoid updating the whole list. When you use += to modify the contents of Vars, Binding.scala "patches" the list internally, but if you call .bind on the Vars instance directly to get the whole list, the framework cannot do any optimization for you.
Here is the updated ScalaFiddle: https://scalafiddle.io/sf/kzr28tq/3

Zookeeper barrier implementation

I am trying to implement a barrier in Zookeeper. My implementation works all of the time when there is a small number of nodes that need to join to pass the barrier. However, when I test my implementation with 100+ nodes needing to join the barrier, around 1% of the time one of the nodes seems to miss the last watcher event and never rechecks whether the number of children of the barrier node has changed.
I even synchronized the process method on the watcher, but that did not change anything. Below is the code for my process method, and the logic that checks whether it needs to move forward.
Watcher process:
public BarrierWatcher(FastBarrier fastBarrier) {
    this.ofb = fastBarrier;
}

@Override
public synchronized void process(WatchedEvent event) {
    synchronized (ofb) {
        ofb.notify();
    }
}
Logic to control barrier mechanism:
BarrierWatcher bw = new BarrierWatcher(this);
List<String> memberList = zk.getChildren(barrierPath, bw);
synchronized (this) {
    while (memberList.size() < numOfMembers) {
        this.wait(1000);
        memberList = zk.getChildren(barrierPath, bw);
    }
}
Instead of just calling this.wait(), I added this.wait(1000) for the rare failure occurrence. With 1000 in place it always passes the barrier once all nodes have joined. I was sure that synchronizing the process method would fix this, but it hasn't. Does anyone have any experience with this, or any ideas what I might be doing wrong?
You can compare your implementation with netflix-curator, where a distributed barrier is already implemented.

Handling errors in an observable sequence using Rx

Is there a way to have an observable sequence resume execution with the next element in the sequence if an error occurs?
From this post it looks like you need to specify a new observable sequence in Catch() to resume execution, but what if you just need to continue processing with the next element in the sequence instead? Is there a way to achieve this?
UPDATE:
The scenario is as follows:
I have a bunch of elements that I need to process. The processing is made up of a bunch of steps, and I have decomposed the steps into tasks that I would like to compose.
I followed the guidelines for ToObservable() posted here to convert my tasks to observables for composition.
So basically I'm doing something like this:
foreach (var element in collection)
{
    var result = from aResult in DoAAsync(element).ToObservable()
                 from bResult in DoBAsync(aResult).ToObservable()
                 from cResult in DoCAsync(bResult).ToObservable()
                 select cResult;
    result.Subscribe(/* register OnNext and OnError handlers here */);
}
Or I could do something like this:
var result =
    from element in collection.ToObservable()
    from aResult in DoAAsync(element).ToObservable()
    from bResult in DoBAsync(aResult).ToObservable()
    from cResult in DoCAsync(bResult).ToObservable()
    select cResult;
What is the best way here to continue processing the other elements even if, let's say, the processing of one of the elements throws an exception? Ideally I would like to be able to log the error and move on.
Both James & Richard made some good points, but I don't think they have given you the best method for solving your problem.
James suggested using .Catch(Observable.Never<Unit>()). He was wrong when he said that "will ... allow the stream to continue" because once you hit an exception the stream must end - that is what Richard pointed out when he mentioned the contract between observers and observables.
Also, using Never in this way will cause your observables to never complete.
The short answer is that .Catch(Observable.Empty<Unit>()) is the correct way to change a sequence from one that ends with an error to one that ends with completion.
You've hit on the right idea of using SelectMany to process each value of the source collection so that you can catch each exception, but you're left with a couple of issues.
You're using tasks (TPL) just to turn a function call into an observable. This forces your observable to use task pool threads which means that the SelectMany statement will likely produce values in a non-deterministic order.
Also, you hide the actual calls that process your data, making refactoring and maintenance harder.
I think you're better off creating an extension method that allows the exceptions to be skipped. Here it is:
public static IObservable<R> SelectAndSkipOnException<T, R>(
    this IObservable<T> source, Func<T, R> selector)
{
    return source
        .Select(t => Observable.Start(() => selector(t)).Catch(Observable.Empty<R>()))
        .Merge();
}
With this method you can now simply do this:
var result =
    collection.ToObservable()
        .SelectAndSkipOnException(t =>
        {
            var a = DoA(t);
            var b = DoB(a);
            var c = DoC(b);
            return c;
        });
This code is much simpler, but it hides the exception(s). If you want to hang on to the exceptions while letting your sequence continue then you need to do some extra funkiness. Adding a couple of overloads to the Materialize extension method works to keep the errors.
public static IObservable<Notification<R>> Materialize<T, R>(
    this IObservable<T> source, Func<T, R> selector)
{
    return source.Select(t => Notification.CreateOnNext(t)).Materialize(selector);
}

public static IObservable<Notification<R>> Materialize<T, R>(
    this IObservable<Notification<T>> source, Func<T, R> selector)
{
    Func<Notification<T>, Notification<R>> f = nt =>
    {
        if (nt.Kind == NotificationKind.OnNext)
        {
            try
            {
                return Notification.CreateOnNext<R>(selector(nt.Value));
            }
            catch (Exception ex)
            {
                // Stash the failing value and selector so the subscriber can inspect them.
                ex.Data["Value"] = nt.Value;
                ex.Data["Selector"] = selector;
                return Notification.CreateOnError<R>(ex);
            }
        }
        else if (nt.Kind == NotificationKind.OnError)
        {
            return Notification.CreateOnError<R>(nt.Exception);
        }
        else
        {
            return Notification.CreateOnCompleted<R>();
        }
    };
    return source.Select(nt => f(nt));
}
These methods allow you to write this:
var result =
    collection
        .ToObservable()
        .Materialize(t =>
        {
            var a = DoA(t);
            var b = DoB(a);
            var c = DoC(b);
            return c;
        })
        .Do(nt =>
        {
            if (nt.Kind == NotificationKind.OnError)
            {
                /* Process the error in `nt.Exception` */
            }
        })
        .Where(nt => nt.Kind != NotificationKind.OnError)
        .Dematerialize();
You can even chain these Materialize methods and use ex.Data["Value"] & ex.Data["Selector"] to recover the value and the selector function that threw the error.
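For example, the Do step above could log the offending element, recovered from the exception's Data dictionary (a hedged sketch):
.Do(nt =>
{
    if (nt.Kind == NotificationKind.OnError)
    {
        // Recover the input element that threw from the Data bag filled in Materialize.
        var failedValue = nt.Exception.Data["Value"];
        Console.WriteLine("Processing failed for " + failedValue + ": " + nt.Exception.Message);
    }
})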
I hope this helps.
The contract between IObservable and IObserver is OnNext* (OnCompleted|OnError)?, which is upheld by all operators, even if not by the source.
Your only choice is to re-subscribe to the source using Retry, but if the source returns the same IObservable instance for every subscription, you won't see any new values.
Could you supply more information on your scenario? Maybe there is another way of looking at it.
Edit: Based on your updated feedback, it sounds like you just need Catch:
var result =
    from element in collection.ToObservable()
    from aResult in DoAAsync(element).ToObservable().Log().Catch(Observable.Empty<TA>())
    from bResult in DoBAsync(aResult).ToObservable().Log().Catch(Observable.Empty<TB>())
    from cResult in DoCAsync(bResult).ToObservable().Log().Catch(Observable.Empty<TC>())
    select cResult;
This replaces an error with an Empty, which would not trigger the next part of the sequence (since the query uses SelectMany under the hood).
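Note that Log() in the query above isn't a built-in Rx operator; a minimal sketch of what it might look like (my assumption: a Do-based side-effect logger that records the error before Catch swallows it):
public static IObservable<T> Log<T>(this IObservable<T> source)
{
    // Pass values through unchanged; write errors out before Catch replaces them.
    return source.Do(
        _ => { },
        ex => Console.WriteLine("Error: " + ex));
}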