Using Reactive Extensions to monitor an IEnumerable

I'm connecting to an object that asynchronously loads a collection of objects into an IEnumerable. At the time I connect, the IEnumerable may already have items in its collection, and it may add items during the lifetime of the application that I need to be notified of as they occur. As an example, it could be a bank account containing a list of bank transactions.
The challenge is this: I want to combine the processing of the initial values in the IEnumerable with the processing of any new additions. These are currently two separate processes. I would like to eliminate the use of INotifyCollectionChanged entirely.
I can modify the backend holding the IEnumerable. It does not need to remain as an IEnumerable if a solution to this question exists otherwise.

I would suggest that the object should not expose an IEnumerable, since that only represents the values available when it is enumerated; in your case you need something that can also deliver additional items in the future.
The best way to model this would be to use a ReplaySubject<T> instead of an IEnumerable. Below is an example that demonstrates a situation similar to yours:
//Function to generate the subject with future values
public static ReplaySubject<int> GetSubject()
{
    var r = new ReplaySubject<int>();
    r.OnNext(1); r.OnNext(2); r.OnNext(3);

    //Task to generate future values
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            Thread.Sleep(3000);
            r.OnNext(DateTime.Now.Second);
        }
    });

    return r;
}
Consuming code:
var sub = GetSubject();
sub.Subscribe(Console.WriteLine);
Every time anyone subscribes to sub, they will receive all the values that have been published to the subject so far, as well as any new values the subject generates in the future.

You can use the Defer/Replay operators.
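To expand on that: in Rx.NET, Replay (usually combined with RefCount) turns a cold observable into a shared sequence that caches what has already been emitted and replays it to late subscribers, similar in effect to the ReplaySubject above, while Defer simply postpones creating the source until someone subscribes. A minimal sketch, assuming the System.Reactive package; Observable.Interval merely stands in for whatever source produces your transactions:
using System;
using System.Reactive.Linq;

class Program
{
    static void Main()
    {
        // Replay() returns a connectable observable that caches every item;
        // RefCount() connects it on the first subscription and shares that
        // single underlying subscription with everyone else.
        var shared = Observable
            .Interval(TimeSpan.FromSeconds(1))
            .Replay()
            .RefCount();

        using (shared.Subscribe(x => Console.WriteLine($"early: {x}")))
        {
            Console.ReadLine();

            // A late subscriber immediately receives everything emitted so far,
            // then keeps receiving live values.
            using (shared.Subscribe(x => Console.WriteLine($"late: {x}")))
            {
                Console.ReadLine();
            }
        }
    }
}
Replay() with no arguments buffers the full history; Replay(bufferSize) or a time-based overload bounds the cache if you only need recent items.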

Service Fabric, determine if specific actor exists

We are using Azure Service Fabric and are using actors to model specific devices, using the id of the device as the ActorId. Service Fabric will instantiate a new actor instance when we request an actor for a given id if it is not already instantiated, but I cannot seem to find an API that allows me to query whether a specific device id already has an instantiated actor.
I understand that there might be some distributed/timing issues in obtaining the point-in-time truth but for our specific purpose, we do not need a hard realtime answer to this but can settle for a best guess. We would just like to, in theory, contact the current primary for the specific partition resolved by the ActorId and get back whether or not the device has an instantiated actor.
Ideally it is a fast/performant call, essentially faster than e.g. instantiating the actor and calling a method to understand if it has been initialized correctly and is not just an "empty" actor.
You can use the ActorServiceProxy to iterate through the information for a specific partition but that does not seem to be a very performant way of obtaining the information.
Anyone with insights into this?
The only official way to check whether an actor has previously been activated in any service partition is to use the ActorServiceProxy query, as described here:
IActorService actorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyApp/MyService"), partitionKey);

ContinuationToken continuationToken = null;
ActorInformation actor = null;

do
{
    PagedResult<ActorInformation> page =
        await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);

    actor = page.Items.FirstOrDefault(x => x.ActorId == idToFind);
    continuationToken = page.ContinuationToken;
}
while (actor == null && continuationToken != null);

// actor != null means the id has been activated at some point.
By the nature of SF Actors, they are virtual, which means they always "exist" even if you have never activated them before; this makes the check a bit harder to do.
As you said, it is not performant to query all actors, so here are other workarounds you could try:
1. Store the IDs in a Reliable Dictionary elsewhere: every time an actor is activated, raise an event and insert the ActorId into the dictionary if it is not there yet.
   - You can use the OnActivateAsync() actor event to notify of the creation, or
   - you can use the custom actor factory in the ActorService to register actor activations.
   - You can store the dictionary in another actor, or in another StatefulService.
2. Create a property in the actor that is set by the actor itself when it is activated (see the sketch after this list).
   - OnActivateAsync() checks whether this property has been set before.
   - If it is not set yet, you set it and keep a non-persisted flag saying the actor is new.
   - Whenever you interact with the actor, you clear that flag to indicate it is not new anymore.
   - On the next activation, the property will already be set, and nothing should happen.
3. Create a custom IActorStateProvider to do the same as option 2, but one level underneath the actor instead of inside it. Honestly, I think it is a fair bit of work and would only be handy if you had to do the same for many actor types; options 1 and 2 are much easier.
4. Do as Peter Bons suggested and store the ActorId outside the ActorService, for example in a DB. I would only suggest this option if you have to check it from outside the cluster.
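To illustrate option 2, here is a minimal sketch; the DeviceActor and IDeviceActor names are placeholders rather than anything from the question. The actor writes an "IsKnown" flag the first time it is activated, and the result of TryAddStateAsync tells it whether this activation was the first one:
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IDeviceActor : IActor { }

internal class DeviceActor : Actor, IDeviceActor
{
    public DeviceActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }

    // Non-persisted flag: true only for the activation that created the state.
    public bool IsNewActivation { get; private set; }

    protected override async Task OnActivateAsync()
    {
        // TryAddStateAsync only writes the value if "IsKnown" is not there yet,
        // and returns true in exactly that case, i.e. for a brand-new actor.
        IsNewActivation = await this.StateManager.TryAddStateAsync("IsKnown", true);
    }
}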
The following snippet can help you if you want to manage these events outside the actor.
private static void Main()
{
    try
    {
        ActorRuntime.RegisterActorAsync<NetCoreActorService>(
            (context, actorType) => new ActorService(context, actorType,
                new Func<ActorService, ActorId, ActorBase>((actorService, actorId) =>
                {
                    RegisterActor(actorId); //The custom method to register the actor if new
                    return (ActorBase)Activator.CreateInstance(actorType.ImplementationType, actorService, actorId);
                })
            )).GetAwaiter().GetResult();

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception e)
    {
        ActorEventSource.Current.ActorHostInitializationFailed(e.ToString());
        throw;
    }
}

private static void RegisterActor(ActorId actorId)
{
    //Here you will put the logic to register the actor creation elsewhere
}
Alternatively, you could create a stateful DeviceActorStatusActor which would be notified (called) by DeviceActor as soon as it's created. (Share the ActorId for correlation.)
Depending on your needs you can also register multiple Actors with the same status-tracking actor.
You'll have great performance and near real-time information.
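A rough sketch of that idea; the IDeviceActorStatusActor interface, its methods, and the single "device-status" ActorId used here are illustrative assumptions, not an existing API:
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;
using Microsoft.ServiceFabric.Actors.Runtime;

// Hypothetical status-tracking contract implemented by DeviceActorStatusActor.
public interface IDeviceActorStatusActor : IActor
{
    Task ReportActivatedAsync(string deviceId);
    Task<bool> IsKnownAsync(string deviceId);
}

public interface IDeviceActor : IActor { }

internal class DeviceActor : Actor, IDeviceActor
{
    public DeviceActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }

    protected override Task OnActivateAsync()
    {
        // Notify the status-tracking actor, sharing our ActorId for correlation.
        var statusActor = ActorProxy.Create<IDeviceActorStatusActor>(
            new ActorId("device-status"), "MyApp", "DeviceActorStatusActorService");

        return statusActor.ReportActivatedAsync(this.Id.ToString());
    }
}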

Get Context instance from DbContextPool (EF Core 2.0) to use it in Task

Entity Framework Core 2.0 introduces DbContext pooling.
In my code I do a lot of work in Tasks because I run some independent, heavy operations against the database.
My old approach was:
Task.Run(() =>
{
    AppDbContext c = new AppDbContext(this.config);
    // ... run the heavy queries with c ...
});
How can I get an instance from the EF Core 2.0 DbContext pool instead?
Edited:
I am using DI: public CategoryController(AppDbContext context, ...
The reason for doing this is to make the REST API method execute more quickly.
For example, I think this should complete faster:
List<AppUser> users;
List<DbGroup> groups;

Task task1 = Task.Run(async () =>
{
    users = await ContextFromConnectionPool.Users.Where(t => t.Id == 1).ToListAsync();
});

Task task2 = Task.Run(async () =>
{
    groups = await ContextFromConnectionPool.Groups.Where(t => t.Id == 1).ToListAsync();
});

var tags = await this.context.Tags.ToListAsync();
Task.WaitAll(task1, task2);
//process all 3 results
than this:
List<AppUser> users = await this.context.Users.Where(t => t.Id == 1).ToListAsync();
List<DbGroup> groups = await this.context.Groups.Where(t => t.Id == 1).ToListAsync();
var tags = await this.context.Tags.ToListAsync();
//process all 3 results
In the second example, the second query executes only after the first one completes.
If every query takes 150 ms, the first example finishes in roughly 150 ms, but the second takes roughly 450 ms. Am I right?
The only problem is how to get a context from the pool in the first approach.
The DbContext pooling feature of ASP.NET Core 2.0 and Entity Framework Core 2.0 does not in any way prevent you from running the time-consuming queries at once. The whole point of pooling is to reuse context instances across requests instead of creating a new instance every time a request comes in; sometimes that helps, and sometimes it does not. Now, for your question, there are two pathways:
Let the framework pool the context in the Startup class and then reuse those instances wherever you need them. You can capture them inside actions and in any other private or local functions that you have.
Do not use DI or context pooling, and instead keep doing what you were doing. Note that in that snippet you were not resolving the context through DI, so there is no need to register it in the Startup class, but you must then take care of creating and disposing the instance yourself.
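For the first pathway, the pool itself is configured in ConfigureServices. A minimal sketch, assuming SQL Server, a connection string named "Default", and an AppDbContext whose constructor takes DbContextOptions<AppDbContext> (pooled contexts require that constructor shape rather than an arbitrary config object):
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // AddDbContextPool reuses AppDbContext instances across requests
        // instead of newing one up for every request.
        services.AddDbContextPool<AppDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("Default")));
    }
}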
The second approach is not suitable (and not a good approach anyway) for many reasons. If you want to go with the first approach, change your controller to accept the database context through its constructor, such as:
public class YourController : Controller
{
    public AppDbContext c { get; set; }

    public YourController(AppDbContext c)
    {
        this.c = c;
    }
}
Now that you have that, you can use this c variable inside your tasks and run the time-consuming queries inside those functions (although the Task.Run wrapper by itself adds little). You can do this:
Task.Run(() =>
{
// Use c here.
});
Just remember a few points:
Build your query first and then call ToListAsync(); plain ToList() may not be suitable here, so prefer ToListAsync() and await it to fetch the data asynchronously.
Your query only gets executed on the database server when you call ToList or a similar function.
While running tasks in parallel, you must also handle cases where your query might violate database policies, such as data-integrity constraints. It is always a best practice to catch those exceptions.
In your case, as a better practice, you might want to consider wrapping your code inside a using block:
Task.Run(() =>
{
    using (var context = new AppDbContext(this.config))
    {
        // use context here.
    }
});
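One caveat: a single DbContext instance is not thread-safe, so queries running in parallel really need one context per task. If the context is registered with AddDbContextPool, every DI scope hands out a pooled instance, so a sketch along these lines (assuming the controller also receives an IServiceScopeFactory, stored here in a hypothetical scopeFactory field; requires the Microsoft.Extensions.DependencyInjection and Microsoft.EntityFrameworkCore usings) keeps the pool in play while giving each task its own context:
var usersTask = Task.Run(async () =>
{
    // Each task creates its own scope and therefore its own pooled context.
    using (var scope = this.scopeFactory.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        return await db.Users.Where(u => u.Id == 1).ToListAsync();
    }
});

var groupsTask = Task.Run(async () =>
{
    using (var scope = this.scopeFactory.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        return await db.Groups.Where(g => g.Id == 1).ToListAsync();
    }
});

// The injected request-scoped context is still fine for the third query.
var tags = await this.context.Tags.ToListAsync();
await Task.WhenAll(usersTask, groupsTask);

var users = usersTask.Result;
var groups = groupsTask.Result;
// process all 3 results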
This is the best help I can offer, since you have not shared 1) your reason for not using DI, 2) a sample of your query (why not use LINQ to build the query and then execute it on the server?), and 3) any sample code to work with. I hope this gives you an idea of why you should consider using DI and the instances it returns.

RxJava Relay vs Subjects

I'm trying to understand the purpose of this library by Jake Wharton:
https://github.com/JakeWharton/RxRelay
Basically: A Subject except without the ability to call onComplete or
onError. Subjects are stateful in a damaging way: when they receive an
onComplete or onError they no longer become usable for moving data.
I get the idea, and it's a valid use case, but the above seems easy to achieve just by using the existing subjects:
1. Don't forward error/completion events to the subject:
`observable.subscribe({ subject.onNext(it) }, { log error / throw exception },{ ... })`
2. Don't expose the subject; make your method signature return an observable instead:
fun asObservable(): Observable<I> { return subject }
I'm obviously missing something here and I'm very curious what it is!
class MyPublishRelay<I> : Consumer<I> {
    private val subject: Subject<I> = PublishSubject.create<I>()

    override fun accept(intent: I) = subject.onNext(intent)

    fun subscribe(): Disposable = subject.subscribe()
    fun subscribe(c: Consumer<in I>): Disposable = subject.subscribe(c)
    //.. OTHER SUBSCRIBE OVERLOADS
}
subscribe has overloads, and people usually get used to the subscribe(Consumer) overload. Then they use subjects, and suddenly onComplete is also invoked. RxRelay saves users from themselves when they don't think about the difference between subscribe(Consumer) and subscribe(Observer).
Don't forward errors/completions events to the subject:
Indeed, but based on our experience with beginners, they often don't think about this or even know about the available methods to consider.
Don't expose the subject, make your method signature return an observable instead.
If you need a way to send items into the subject, this doesn't work. The purpose is to use the subject to multicast items, sometimes from another Observable. If you are in full control of the emissions through the Subject, you should have the decency not to call onComplete and not to let anything else do it either.
Subjects have far more overhead because they have to track and handle
terminal event states. Relays are stateless aside from subscription
management.
- Jake Wharton
(This is from the issue the OP opened on GitHub; I felt it was a more correct answer and wanted to "relay" it here for others to see: https://github.com/JakeWharton/RxRelay/issues/30)
In addition to akarnokd's answer:
In some cases you can't control the flow of data inside the Observable, an example of this is when observing data changes from a database table using Room Database.
If you use Subjects from Kotlin, subject.value (getValue) is nullable, because it returns null once the subject has terminated, so you end up putting "?" or "!!" everywhere in your code even though you know the value will never actually be null.
public T getValue() {
    Object o = value.get();
    if (NotificationLite.isComplete(o) || NotificationLite.isError(o)) {
        return null;
    }
    return NotificationLite.getValue(o);
}

How to pass a variable along when chaining observables?

I'm pretty new to RxJava, and whenever I have a case where I need to pass return data from one observable down the chain until the call to subscribe, I have trouble understanding how to do it the "reactive" way without resorting to workarounds...
For example:
Observable<GameObject> obs1 = func1();
Observable<GameObject> obs2 = func2();
Observable<GameObject> obs3 = func3();
Observable<GameObject> obs4 = func4();
I would like to emit obs1 and obs2, get their results, then emit obs3, then obs4, and end the chain with subscribe while having access to the results of obs1, obs2, obs3 and obs4.
The order of the calls is important, I need obs1 and obs2 to complete before obs3 is executed.
same goes for obs3 and obs4 - I need obs3 to complete before obs4 is executed.
How can I do that?
I know it's a pretty condensed question, but this is one of the most problematic issues when a developer starts to learn RxJava.
Thanks.
You can do it using Observable.zip and simple Observable.map/Observable.flatMap:
Observable.zip(obs1, obs2, (res1, res2) -> {
    // do stuff with res1, res2
    return obs3.flatMap(res3 -> {
        // do stuff with res1, res2, res3
        return obs4.map(res4 -> {
            // do stuff with res1, res2, res3, res4
            return result;
        });
    });
})
// zip's combiner returns an Observable, so flatten it (and actually
// subscribe to obs3/obs4) to get the final result out.
.flatMap(inner -> inner);
This enforces your ordering requirements:
observables 1 and 2
observable 3
observable 4
Since I had the same kind of doubts in mind a while ago, the question seems to be related to how Observables really work.
Let's say you created obs1 and obs2 using something like:
Observable<GameObject> obs1 = Observable.create(...)
Observable<GameObject> obs2 = Observable.create(...)
You have 2 independent and disconnected streams. That's what you want when each of them is supposed to do something like a network request or some intensive background processing, which can take some time.
Now, let's say you want to watch both results and emit a single value out of them when they are ready (you didn't say that explicitly, but it will help you understand how this works). In this case, you can use the zipWith operator, which takes a pair of items, the first from the first Observable and the second from the second Observable, combines them into a single item, and emits it to the next operator in the chain that may be interested in it. zipWith is called on an Observable and expects another Observable as the parameter to be zipped with. It also expects a custom function that knows how to zip the two source items and create a new one out of them.
Observable<CustomObject> obs3 = obs1.zipWith(obs2, new Func2<GameObject, GameObject, CustomObject>() {
    @Override
    public CustomObject call(GameObject firstItem, GameObject secondItem) {
        return new CustomObject(firstItem, secondItem);
    }
});
In this case, the CustomObject is just a pojo. But it could be another long running task, or whatever you need to do with the results from the first two Observable items.
If you want to wait for (or, to observe!) the results coming from obs3 you can plug another Observable at the end, which is supposed to perform another piece of processing.
Observable<FinalResult> obs4 = obs3.map(new Func1<CustomObject, FinalResult>() {
    @Override
    public FinalResult call(CustomObject customObject) {
        return new FinalResult(customObject);
    }
});
The map operator transforms (or maps) one object into another. So you could perform another piece of processing, or any data manipulation, and return a result out of it. Or your FinalResult might be a regular class, like CustomObject, just holding references to the other GameObjects.. you name it.
Depending how you created your Observables, they may not have started to emit any items yet. Until now you were just creating and plugging the data pipes. In order to trigger the first task and make items flow in the stream you need to subscribe to it.
obs4.subscribe();
Wrapping up, you don't really have one single variable passing along the whole chain. You actually create an item in the first Observable, which notifies the second one when it gets ready, and so on. Additionally, each step (observable) transforms the data somehow. So, you have a chain of transformations.
RxJava follows a functional approach, applying higher-order functions (map, zip, filter, reduce) to your data. It's crucial to have this clear. Also, the data is treated as immutable: you don't really change an Observable or mutate your own objects; new instances are created, and the old objects will eventually be garbage collected. So obs1.zip(...) doesn't change obs1; it creates a new Observable instance, which you can assign to a variable.
You can also drop the variable assignments (obs1, obs2, obs3 etc) and just chain all methods together. Everything is strongly typed, so the compiler will not let you plug Observables that don't match each other (output of one should match input of the next).
I hope it gives you some thoughts!

Entity Framework Validation & usage

I'm aware there is an AssociationChanged event, however, this event fires after the association is made. There is no AssociationChanging event. So, if I want to throw an exception for some validation reason, how do I do this and get back to my original value?
Also, I would like to default values for my entity based on information from other entities, but do this only when I know the entity is instantiated for insertion into the database. How do I tell the difference between that and the object being instantiated because it is about to be populated from existing data? Am I supposed to know? Is that considered business logic that should live outside of my entity business logic?
If that's the case, should I be designing controller classes to wrap all these entities? My concern is that if I deliver an entity back, I want the client to get access to the properties, but I want to retain tight control over how they are set, defaulted, validated, and so on. Every example I've seen references the context, which is outside of my entity partial-class validation, right?
BTW, I looked at the EFPocoAdapter and for the life of me cannot determine how to populate lists from within my POCO class... does anyone know how I get to the context from an EFPoco class?
This is in reply to a comment I left. Hopefully this answers your question, Shimmy. Just comment, and I will shorten it or remove it if it doesn't answer your question.
You will need both INotifyPropertyChanging and INotifyPropertyChanged interfaces to be implemented on your class (unless it is something like an entity framework object, which I believe implements these internally).
Before you set a value on the property, you will need to raise the INotifyPropertyChanging.PropertyChanging event, passing the name of the property to the PropertyChangingEventArgs constructor.
After you set the value, you need to raise the INotifyPropertyChanged.PropertyChanged event, again passing the name of the property to the PropertyChangedEventArgs constructor.
Then you have to handle the PropertyChanging and PropertyChanged events. In the PropertyChanging event, you need to cache the value. In the PropertyChanged event, you can compare and throw an exception.
To get the property from the PropertyChanging/PropertyChanged event args, you need to use reflection.
// PropertyName is the key, and the PropertyValue is the value.
private Dictionary<string, object> propertyDict = new Dictionary<string, object>();

// Guards against re-entrancy when a property is rolled back to its old value.
private bool preventRecursion;

// Attach this handler to Foo.PropertyChanging. (In VB a Handles clause could do
// the wiring declaratively, which I like because it is more descriptive.)
public void PropertyChanging(object sender, PropertyChangingEventArgs e)
{
    if (sender == null || preventRecursion)
    {
        return;
    } // End if

    Type senderType = sender.GetType();
    PropertyInfo info = senderType.GetProperty(e.PropertyName);
    object propertyValue = info.GetValue(sender, null);

    // Cache the old value. (Change this so it checks whether e.PropertyName already exists.)
    propertyDict[e.PropertyName] = propertyValue;
} // End PropertyChanging() Event

// Attach this handler to Foo.PropertyChanged.
public void PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    if (sender == null || preventRecursion)
    {
        return;
    } // End if

    Type senderType = sender.GetType();
    PropertyInfo info = senderType.GetProperty(e.PropertyName);
    object propertyValue = info.GetValue(sender, null);

    // Change this so it makes sure e.PropertyName exists.
    object oldValue = propertyDict[e.PropertyName];
    object newValue = propertyValue;

    // No longer needed.
    propertyDict.Remove(e.PropertyName);

    // Replace with your own comparison/validation of oldValue and newValue.
    bool isInvalid = false;

    if (isInvalid)
    {
        try
        {
            preventRecursion = true;
            info.SetValue(sender, oldValue, null); // roll the property back
            throw new Exception("Invalid value; the property was reset.");
        }
        finally
        {
            preventRecursion = false;
        } // End try
    } // End if
} // End PropertyChanged() Event
Notice the preventRecursion boolean declared above these methods? When you reset the property back to its previous value, the events fire again, and that flag is what stops the handlers from recursing.
tl;dr
Now you could define a single event based on INotifyPropertyChanged, but whose event args also carry an object holding the previous value as well as the property name. That would reduce the number of events fired down to one, provide similar functionality, and remain backwards compatible with INotifyPropertyChanged.
But if you want to handle anything before the property gets set (say the property makes an irreversible change, or you need to set up other properties before setting that one, otherwise an exception will be thrown), you won't be able to do that.
Overall, this method is a very old way of doing things. I would take Poker Villian's answer and allow invalid data to be entered, but disallow saving it to the database.
Entity Framework has some excellent support for validation. You add validation to your properties via attributes, and it then takes care of processing those attributes. You can then make a property called IsValid, which calls the Entity Framework specific validation. It also distinguishes between field errors (like typing in the wrong characters or making a string too long) and class errors (like missing data or conflicting keys).
Then you can bind IsValid to the controls' validation, and they will display a red bubble while invalid data is entered. Or you could implement IsValid validation yourself. But if IsValid is false, the SaveChanges step would need to cancel saving.
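As a rough illustration of attribute-based validation feeding an IsValid property, here is a minimal sketch using System.ComponentModel.DataAnnotations; the Customer class and its members are hypothetical and not tied to any particular EF version:
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Hypothetical entity; in a real model these members would live in your
// entity's partial class.
public class Customer
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(0, 120)]
    public int Age { get; set; }

    // Field-level rules come from the attributes above; class-level rules
    // could be added via IValidatableObject.
    public bool IsValid
    {
        get
        {
            var results = new List<ValidationResult>();
            return Validator.TryValidateObject(
                this, new ValidationContext(this), results, validateAllProperties: true);
        }
    }
}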
btw. Treat the code above as descriptive pseudocode rather than production code; I originally sketched it as a mix of VB and C# because VB's Handles clause shows exactly what is being handled, which I find more descriptive than C# alone.
Concerning your first question, I would simply implement the changes to the associations as business logic. For example, if you add a Teacher class with multiple Students, do not add students like this:
aTeacher.Students.Add(new Student)
Instead, create an AddStudent method:
public Student AddNewStudent(string name, string studentID)
{
    Student s = new Student(name, studentID);
    s.Teacher = this; // changes the association
    return s;
}
That way you have full control over when associations are changed. Of course, what prevents another programmer from adding a student directly? On the Student side, you can make the Teacher setter private (and change the constructor to accept a teacher, or similar). On the Teacher side, how do you make the Students collection non-insertable? I'm not certain... maybe by wrapping it in a custom collection that doesn't accept inserts.
Concerning the second part of your question, you could probably use the OnVarNameChanging events. If the EntityState is Added (i.e. the object is new), then you can apply your logic that fetches the real values.
There is also an event that fires when you save changes (SavingChanges) that you could use to determine which objects are new and set some values.
But maybe the simplest solution is to always set the defaults in the constructor and they will get overwritten if the data is loaded from the DB.
Good luck
Create a factory that produces instances for you depending on your needs, like:
public Student GetStudent(string studentName, long studentId)
{
    return new Student(studentName, studentId);
}

public Student GetStudentForDbInsertion(string studentName, long studentId, Teacher teacher)
{
    Student student = GetStudent(studentName, studentId);
    student.Teacher = teacher;
    // Some entity frameworks need the student to be in the teacher's student list,
    // so you might need to add the student to it as well.
    teacher.AddStudent(student);
    return student;
}
It's a serious omission not to have an AssociationChanging event (with args inheriting from CancelEventArgs).
It bothers me very much as well, so I reported it to Microsoft Connect. Please vote here!
And BTW, I also think it is a poor design that PropertyChangingEventArgs doesn't inherit from CancelEventArgs. Cancelling with an exception is not always the elegant solution, and throwing an exception costs more than raising the PropertyChanging event and then checking the returned e.Cancel, an event you are raising anyway.
Also, anyone who insists on the exception approach can still throw from the handler instead of setting e.Cancel to true. Vote here.
To answer part of your question, or to expand on ADB's answer: you can use ObjectStateManager.GetObjectStateEntry to find the state of an entity and write your custom default logic.
SaveChanges is the method on the context that you can use, and SavingChanges is the event that is raised when SaveChanges is called, before the changes are persisted.
You can override SaveChanges and only call base.SaveChanges if you don't want to abort the change
There is also an ObjectMaterialized event on the context.
Between the two, you can put all your validation and creation code in one location, which may be appropriate if the rules are complex and involve the values of other objects, etc.
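A minimal sketch of that idea against the classic ObjectContext API; MyEntities stands in for your generated context type, it assumes the generator emits the usual OnContextCreated partial method (designer-generated contexts do), and the validation itself is left as a placeholder:
using System.Data;           // EntityState
using System.Data.Objects;   // ObjectStateEntry, ObjectContext.SavingChanges

public partial class MyEntities
{
    partial void OnContextCreated()
    {
        // SavingChanges fires when SaveChanges is called, before anything is persisted.
        this.SavingChanges += (sender, e) =>
        {
            foreach (ObjectStateEntry entry in
                     this.ObjectStateManager.GetObjectStateEntries(EntityState.Added))
            {
                // Only newly added objects end up here: apply defaults or run
                // validation, and throw to abort the save if something is invalid.
            }
        };
    }
}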