Merging a changing collection of observables - System.Reactive

We have a class Thing that implements IObservable<Thing>. In another class, there is a collection of Things, and that class needs to react to updates from all those observables in a unified manner. The obvious way to do that is Observable.Merge(), and that generally works; however, when the collection changes, we also need to subscribe to any new Things in our merged subscription (and in theory unsubscribe from the removed ones, but that seems less problematic - they just won't produce any updates anymore).
We currently achieve that by recreating the subscription on every change of the collection, but that seems suboptimal, both in processing overhead and because updates from any of the Things are missed in the brief window between discarding the old subscription and creating the new one (which has proven to be an issue in practice, especially as we also need to Buffer() the subscription for a short amount of time, and the buffered items are lost when the subscription is disposed).
What is the proper way of merging a changing collection of observables like this?

If you have an IObservable<IObservable<T>>, then calling Merge on it will also include the children of parents that arrive later, if you catch my drift. The trick is converting the ObservableCollection<IObservable<Thing>> to an IObservable<IObservable<Thing>>.
If you have ReactiveUI running around, and are ok to use it, then you could convert the ObservableCollection<IObservable<Thing>> to a ReactiveCollection<IObservable<Thing>>. ReactiveCollection inherits from ObservableCollection, and also implements IObservable.
If ReactiveUI is out of the question (which I'm guessing it is because you're already using a Caliburn Micro collection), then you can convert using ObservableCollection's events:
ObservableCollection<IObservable<Thing>> observableCollection = new ObservableCollection<IObservable<Thing>>();
IObservable<IObservable<Thing>> oCollectionObservable = Observable.FromEventPattern<NotifyCollectionChangedEventHandler, NotifyCollectionChangedEventArgs>(
        h => observableCollection.CollectionChanged += h,
        h => observableCollection.CollectionChanged -= h
    )
    // NewItems is null for Remove/Reset changes, so filter those out before flattening.
    .Where(ep => ep.EventArgs.NewItems != null)
    .SelectMany(ep => ep.EventArgs.NewItems.Cast<IObservable<Thing>>());
Here's some sample code demonstrating use:
oCollectionObservable
    .Merge()
    .Subscribe(t => Console.WriteLine($"Received Thing {{Id = {t.Id}}}"));

var firstObservable = Observable.Range(1, 5)
    .Select(i => new Thing { Id = i })
    .Concat(
        Observable.Range(8, 5)
            .Select(i => new Thing { Id = i })
            .Delay(TimeSpan.FromSeconds(2))
    );
observableCollection.Add(firstObservable);

var subject = new Subject<Thing>();
observableCollection.Add(subject);
subject.OnNext(new Thing { Id = 6 });
subject.OnNext(new Thing { Id = 7 });
Using the following class:
public class Thing
{
    public int Id { get; set; }
}
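The snippet above only handles additions (NewItems). If you also want the merged stream to stop receiving updates from Things that are removed from the collection, one option is to end each inner observable with TakeUntil when that same instance shows up in OldItems. This is a sketch, not part of the answer above; it reuses observableCollection and the usings from the earlier snippets:

var collectionChanges = Observable
    .FromEventPattern<NotifyCollectionChangedEventHandler, NotifyCollectionChangedEventArgs>(
        h => observableCollection.CollectionChanged += h,
        h => observableCollection.CollectionChanged -= h)
    .Select(ep => ep.EventArgs);

// Fires once for every observable that is removed from the collection.
var removals = collectionChanges
    .Where(e => e.OldItems != null)
    .SelectMany(e => e.OldItems.Cast<IObservable<Thing>>());

// Each added observable is merged, but only until the same instance is removed again.
IObservable<IObservable<Thing>> added = collectionChanges
    .Where(e => e.NewItems != null)
    .SelectMany(e => e.NewItems.Cast<IObservable<Thing>>())
    .Select(inner => inner.TakeUntil(removals.Where(removed => ReferenceEquals(removed, inner))));

added.Merge().Subscribe(t => Console.WriteLine($"Received Thing {{Id = {t.Id}}}"));

Because the merged subscription never has to be recreated when the collection changes, no buffered updates are lost.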

Related

EF Core IgnoreQueryFilters behavior on multiple subsequent queries

I have a problem with IgnoreQueryFilters.
I've implemented soft delete using HasQueryFilter (in OnModelCreating I apply the global query filter to every entity that implements a particular interface).
The problem is that if I run a query twice in the same request:
the first time also asking for the "IsDeleted = true" entities (so including IgnoreQueryFilters),
and the second time asking only for the "IsDeleted = false" ones (so not including IgnoreQueryFilters),
the second time I still get the "deleted" entities as well.
I think this happens because when I run the query the second time, the entities are already tracked by the context and I get them instead of the correct results.
Here is how I built the method for including/excluding the deleted entities:
// this is my repo pattern implementation
public class MyEntityRepo
{
    ....

    public async Task<List<MyEntity>> GetEntityByUserId(int userId, bool ignoreQueryFilter = false)
    {
        var query = context.blabla
            .Include(c => c.blabla2)
            .Where(c => c.ApplicationUserId == userId);

        if (ignoreQueryFilter)
        {
            query = query.IgnoreQueryFilters();
        }

        var result = await query.ToListAsync();
        return result;
    }
}
Now if I call it this way in a service:
public async Task<List<MyEntity>> MyServiceMethod()
{
    ...
    var includeDeleted = await myEntityRepo.GetEntityByUserId(1, true);
    // Here I need to sync with other data, and for that I also need the deleted entities
    foreach (var e in includeDeleted)
    {
        // do something
    }
    await context.SaveChangesAsync();

    // Now that my data is correctly synced I have to get the data again, this time excluding the deleted entities
    // and it fails
    var excludeDeleted = await myEntityRepo.GetEntityByUserId(1, false);
    return excludeDeleted;
}
The only way I found to solve the problem is to do something like context.ChangeTracker.Clear() before the second call to myEntityRepo.GetEntityByUserId, but is this the right way to go?
Since the real method is a bit more complex and can be reused elsewhere, I'm not sure calling Clear is a good idea, because tomorrow it might be called inside a bigger method and cause unexpected problems.
What's the best practice when I need to get data both with and without the query filter?
Is it OK to clear the change tracker?
If yes, when is the best time to clear it? Inside GetEntityByUserId whenever I ignored the filters (for consistency), or afterwards in the caller, whenever I run into a problem like this one?
I've also thought about removing the global query filter and replacing it with repository methods that include or exclude deleted entities... yes, I'd have to remember to always filter them out, but it feels more practical.
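For illustration, a minimal sketch of the workaround described above, using the same names as the question's code; the AsNoTracking variant at the end is an assumption, not something from the question:

var includeDeleted = await myEntityRepo.GetEntityByUserId(1, ignoreQueryFilter: true);
// ... sync work that needs the deleted entities ...
await context.SaveChangesAsync();

context.ChangeTracker.Clear(); // drop the soft-deleted entities that are still tracked

var excludeDeleted = await myEntityRepo.GetEntityByUserId(1, ignoreQueryFilter: false);

// Alternative for read-only queries (assumption): materialize results without tracking,
// so previously tracked instances are not reused for the filtered query:
// var result = await query.AsNoTracking().ToListAsync();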

Understanding RxJava: Differences between Runnable callback

I'm trying to understand RxJava and I'm sure this question is nonsense... I have this code using RxJava:
public Observable<T> getData(int id) {
    if (dataAlreadyLoaded()) {
        return Observable.create(new Observable.OnSubscribe<T>() {
            @Override
            public void call(Subscriber<? super T> subscriber) {
                T data = getDataFromMemory(id);
                subscriber.onNext(data);
            }
        });
    }
    return Observable.create(new Observable.OnSubscribe<T>() {
        @Override
        public void call(Subscriber<? super T> subscriber) {
            T data = getDataFromRemoteService(id);
            subscriber.onNext(data);
        }
    });
}
And, for instance, I could use it this way:
Action1<String> action = new Action1<String>() {
    @Override
    public void call(String s) {
        // Do something with s
    }
};
getData(3).subscribe(action);
and this another with callback that implements Runnable:
public void getData(int id, MyClassRunnable callback) {
    if (dataAlreadyLoaded()) {
        T data = getDataFromMemory(id);
        callback.setData(data);
        callback.run();
    } else {
        T data = getDataFromRemoteService(id);
        callback.setData(data);
        callback.run();
    }
}
And I would use it this way:
getData(3, new MyClassRunnable()); //Do something in run method
What are the differences? Why is the first one better?
The question is not about the framework itself but the paradigm. I'm trying to understand the use cases of reactive.
I appreciate any help. Thanks.
First of all, your RxJava version is much more complex than it needs to be. Here's a much simpler version:
public Observable<T> getData(int id) {
    return Observable.fromCallable(() ->
        dataAlreadyLoaded() ? getDataFromMemory(id) : getDataFromRemoteService(id)
    );
}
Regardless, the problem you present is so trivial that there is no discernible difference between the two solutions. It's like asking which one is better for assigning integer values - var = var + 1 or var++. In this particular case they are identical, but when using assignment there are many more possibilities (adding values other than one, subtracting, multiplying, dividing, taking into account other variables, etc).
So what is it you can do with reactive? I like the summary on reactivex's website:
Easily create event streams or data streams. For a single piece of data this isn't so important, but when you have a stream of data the paradigm makes a lot more sense.
Compose and transform streams with query-like operators. In your above example there are no operators and a single stream. Operators let you transform data in handy ways, and combining multiple callbacks is much harder than combining multiple Observables.
Subscribe to any observable stream to perform side effects. You're only listening to a single event. Reactive is well-suited for listening to multiple events. It's also great for things like error handling - you can create a long sequence of events, but any errors are forwarded to the eventual subscriber.
Let's look at a more concrete example with a bit more intrigue: validating an email and password. You've got two text fields and a button. You want the button to become enabled once there is an email (let's say matching .*@.*) and a password (of at least 8 characters) entered.
I've got two Observables that represent whatever the user has currently entered into the text fields:
Observable<String> email = /* you figure this out */;
Observable<String> password = /* and this, too */;
For validating each input, I can map the input String to true or false.
Observable<Boolean> validEmail = email.map(str -> str.matches(".*@.*"));
Observable<Boolean> validPw = password.map(str -> str.length() >= 8);
Then I can combine them to determine if I should enable the button or not:
Observable.combineLatest(validEmail, validPw, (b1, b2) -> b1 && b2)
    .subscribe(enableButton -> { /* enable button based on the boolean */ });
Now, every time the user types something new into either text field, the button's state gets updated. I've set up the logic so that the button simply reacts to the state of the text fields.
This simple example doesn't show it all, but it shows how things get a lot more interesting after you get past a simple subscription. Obviously, you can do this without the reactive paradigm, but it's simpler with reactive operators.

Using Reactive Extensions to stream model changes

I am working on a server component which is responsible for caching models in memory and then stream any changes to interested clients.
When the first client requests a model (well, a model key - each model has a key to identify it), the model will be created (along with any subscriptions to downstream systems) and then sent to the client, followed by a stream of updates (generated by downstream systems). Any subsequent clients should get this cached (updated) model, again with the stream of updates. When the last client unsubscribes from the model, the downstream subscriptions should be destroyed and the cached model discarded.
Could anyone point me in the right direction as regards how Rx could help here? I guess what isn't clear to me at the moment is how I synchronize the state (of the object) and the stream of changes. Would I have two separate IObservables for the model and the updates?
Update: here's what I have so far:
Model model = null;
return Observable.Create((IObserver<ModelUpdate> observer) =>
    {
        model = _modelFactory.GetModel(key);
        _backendThing.Subscribe(model, observer.OnNext);
        return Disposable.Create(() =>
        {
            _backendThing.Unsubscribe(model);
        });
    })
    .Do(u => model.MergeUpdate(u))
    .Buffer(_bufferLength)
    .Select(inp => new ModelEvent(inp))
    .Publish()
    .RefCount()
    .StartWith(new ModelEvent(model));
If I understood the problem correctly, there are Models coming in dynamically, and at any point in your application's lifetime the number of Models is unknown.
For that purpose an IObservable<IEnumerable<Model>> looks like the way to go. Each time a new Model is added or an existing one removed, the updated IEnumerable<Model> would be streamed. Ideally it would preserve the existing objects rather than recreating every Model on each update, unless there is a good reason to do so.
As for updates to each Model object's state, such as a field or property value changing, I would look into Paul Betts' ReactiveUI project; it has something called ReactiveObject. ReactiveObject helps you get change notifications easily, but that library is mainly designed for WPF MVVM applications.
Here is how a Model's state update would go with ReactiveObject
public class Model : ReactiveObject
{
    int _currentPressure;

    public int CurrentPressure
    {
        get { return _currentPressure; }
        set { this.RaiseAndSetIfChanged(ref _currentPressure, value); }
    }
}
Now, anywhere you have a Model object in your application, you can easily get an Observable that gives you updates about the object's pressure, using the When or WhenAny extension methods.
You could, however, skip ReactiveUI and simply expose an IObservable that fires whenever a state change occurs.
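A minimal sketch of that ReactiveUI-free approach (an assumption, not code from the answer): the model pushes itself through a Subject whenever its state changes.

using System;
using System.Reactive.Subjects;

public class Model
{
    private readonly Subject<Model> _changes = new Subject<Model>();
    private int _currentPressure;

    // Clients subscribe to this to observe state changes on the object.
    public IObservable<Model> Changes => _changes;

    public int CurrentPressure
    {
        get { return _currentPressure; }
        set
        {
            if (_currentPressure == value) return;
            _currentPressure = value;
            _changes.OnNext(this); // push the updated model to subscribers
        }
    }
}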
Something like this may work, though your requirements aren't exactly clear to me.
private static readonly ConcurrentDictionary<Key, IObservable<Model>> cache = new...
...
public IObservable<Model> GetModel(Key key)
{
    return cache.GetOrAdd(key, CreateModelWithUpdates);
}

private IObservable<Model> CreateModelWithUpdates(Key key)
{
    return Observable.Using(() => new Model(key), model => GetUpdates(model).StartWith(model))
        .Publish((Model)null)
        .RefCount()
        .Where(model => model != null);
}
private IObservable<Model> GetUpdates(Model model) { ... }
...
public class Model : IDisposable
{
...
}
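For what it's worth, a usage sketch under those assumptions (key being whatever Key instance the caller already has): all clients asking for the same key share one published stream, and when the last client unsubscribes, RefCount disconnects and Observable.Using disposes the Model, tearing down its downstream work.

var modelStream = GetModel(key);

var client1 = modelStream.Subscribe(m => Console.WriteLine("client 1 update"));
var client2 = modelStream.Subscribe(m => Console.WriteLine("client 2 update")); // shares the same Model

client1.Dispose();
client2.Dispose(); // last unsubscribe: the Model is disposed and updates stop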

querying in EntityObject

I have a partial class TaxReportItem with a partial method OnActualVolumeChanging(double value):
public partial class TaxReportItem
{
    partial void OnActualVolumeChanging(double value)
    {
        if (Tax != null)
        {
            Payment = value * Tax.TaxRate;
        }
    }
}
In this method I want to get the whole collection of TaxReportItems present in the context (something like this):
partial void OnActualVolumeChanging(double value)
{
    var sum = 0.0;
    if (Tax != null)
    {
        Payment = value * Tax.TaxRate;
        foreach (var taxReportItem in ????)
        {
            sum += taxReportItem.Sum;
        }
    }
}
How can I achieve this?
This is actually quite hard, because you should never need this. If you need it, the design of your entity object is wrong and it is doing something that should be done elsewhere. A single TaxReportItem should never need to know about other tax report items and load them from the database, unless the other items depend on it (they form an aggregate). In that case you should have a navigation property to the dependent items on the principal one.
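To illustrate the navigation-property idea, one common shape is a principal entity holding the items, so an item reaches its siblings through navigation properties instead of the context. This is a sketch with invented names (the question doesn't show the rest of the model):

// Hypothetical principal entity holding the items.
public partial class TaxReport
{
    public virtual ICollection<TaxReportItem> Items { get; set; }
}

public partial class TaxReportItem
{
    // Navigation back to the principal entity (assumed to exist in the model).
    public virtual TaxReport Report { get; set; }

    partial void OnActualVolumeChanging(double value)
    {
        if (Tax == null) return;
        Payment = value * Tax.TaxRate;

        // Sibling items are reached through the aggregate, not through the context.
        // (requires using System.Linq; the sum is then used as in the question)
        var sum = Report.Items.Sum(i => i.Sum);
    }
}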
To follow up on your question: if you need to load other items, you must have an instance of the context to do that. You can either get the instance that was used to load the current item and use it to load the other entities (a bad solution), or create a new instance of the context and use that (an even worse solution).
As you can see from the linked article, getting the current context instance from the entity itself is not easy, and it has some prerequisites, which brings us back to the first paragraph. It is hard because it is the wrong approach.

Wicket - Wrapped collection Model "transformation"

I have a domain object which has a collection of primitive values, which represent the primary keys of another domain object ("Person").
I have a Wicket component that takes IModel<List<Person>>, and allows you to view, remove, and add Persons to the list.
I would like to write a wrapper which implements IModel<List<Person>>, but which is backed by a PropertyModel<List<Long>> from the original domain object.
View-only is easy (Scala syntax for brevity):
class PersonModel(wrappedModel: IModel[List[Long]]) extends LoadableDetachableModel[List[Person]] {
  @SpringBean var dao: PersonDao = _

  override def load(): List[Person] = {
    // Load a Person for each id in the wrapped model
    wrappedModel.getObject().map { id: Long =>
      dao.getPerson(id)
    }
  }
}
But how might I write this to allow for adding and removing from the original List of Longs?
Or is a Model not the best place to do this translation?
Thanks!
You can do something like this:
class PersonModel extends Model<List<Person>> {
    private transient List<Person> cache;
    private IModel<List<String>> idModel;

    public PersonModel( IModel<List<String>> idModel ) {
        this.idModel = idModel;
    }

    public List<Person> getObject() {
        if ( cache == null ) {
            cache = convertIdsToPersons( idModel.getObject() );
        }
        return cache;
    }

    public void setObject( List<Person> ob ) {
        cache = null;
        idModel.setObject( convertPersonsToIds( ob ) );
    }
}
This isn't very good code but it shows the general idea. One thing you need to consider is how this whole thing will be serialised between requests, you might be better off extending LoadableDetachableModel instead.
Another thing is the cache: it's there to avoid having to convert the list every time getObject() is called within a request. You may or may not need it in practice (depends on a lot of factors, including the speed of the conversion), but if you use it, it means that if something else is modifying the underlying collection, the changes may not be picked up by this model.
I'm not quite sure I understand your question and I don't understand the syntax of Scala.
But, to remove an entity from a list, you can provide a link that simply removes it using your dao. You must be using a repeater to populate your Person list so each repeater entry will have its own Model which can be passed to the deletion link.
Take a look at this Wicket example that uses a link with a repeater to select a contact. You just need to adapt it to delete your Person instead of selecting it.
As for modifying the original list of Longs, you can use the ListView.removeLink() method to get a link component that removes an entry from the backing list.