Restoring data from 1 minute ago for a time-shifted sequence - System.Reactive

This class accumulates values and, at any moment, knows the difference between the current sum and the sum from 1 minute ago. Its client uses it like this: it adds a new value for every incoming data chunk and reads the difference. Now, there's a problem with restoring its state. Suppose the application gets recycled: the data in the pump for the previous minute is lost, and for the first minute after recycling Change will equal 0, so I'd have to wait a minute before I can calculate the difference again. How can I fix this?
public class ChangeEstimator
{
    private int sum;
    private Subject<int> sumPump;
    private IConnectableObservable<int> hotSumPump;

    public int Sum
    {
        get
        {
            return sum;
        }
        private set
        {
            sum = value;
            sumPump.OnNext(value);
        }
    }

    public int Change { get; private set; }

    public void Start()
    {
        sumPump = new Subject<int>();
        hotSumPump = sumPump.Publish();
        var changePeriod = TimeSpan.FromMinutes(1);
        hotSumPump.Delay(changePeriod)
            .Subscribe(value =>
            {
                Change = Sum - value;
            });
        hotSumPump.Connect();
    }

    public void AddNewValue(int newValue)
    {
        Sum += newValue;
    }
}
UPDATE
The code below should explain things further. The client subscribes to a transaction stream and updates the estimator with every new transaction. The client also exposes an IObservable source of snapshots, which pushes snapshots of the data to listeners such as the UI or a database. The problem is that when recycling happens, the UI will show 0 instead of the real Change. If this problem is too specific for Stack Overflow, please forgive me. I was advised to use RabbitMQ to persist the changes. Do you think that could work for this problem?
public class Transaction
{
    public int Price { get; set; }
}

public class AlgorithmResult
{
    public int Change { get; set; }
}

public interface ITransactionProvider
{
    IObservable<Transaction> TransactionStream { get; }
}

public class Client
{
    private ChangeEstimator estimator = new ChangeEstimator();
    private ITransactionProvider transactionProvider;

    public Client(ITransactionProvider transactionProvider)
    {
        this.transactionProvider = transactionProvider;
    }

    public void Init()
    {
        transactionProvider.TransactionStream.Subscribe(t =>
        {
            estimator.AddNewValue(t.Price);
        });
    }

    public IObservable<AlgorithmResult> CreateSnapshotsTimedSource(int periodSeconds)
    {
        return Observable
            .Interval(TimeSpan.FromSeconds(periodSeconds))
            .Select(_ => new AlgorithmResult
            {
                Change = estimator.Change
            });
    }
}

Your application gets restarted and has no memory (pun intended) of its previous life. No Rx tricks (within this application) can help you.
As discussed, you should figure out the business requirements and consider state initialization during startup.
You might want to consider storing the latest state via some I/O sink (a file or a database), or splitting the application into a message sender and a consumer so you can put a queue in between.
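To make the "store and re-initialize" idea concrete, here is a minimal sketch assuming a simple file-based store (the type name, file name and line format are mine, not from the question). Each (timestamp, sum) pair is appended as it is produced; at startup you read back the last minute's worth, and the initial Change can be computed directly as the difference between the newest stored sum and the one closest to a minute old, instead of waiting for the Delay pipeline to warm up:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

public class SumSnapshotStore
{
    private readonly string path = "sums.log";

    // Append each (timestamp, sum) pair as it is produced.
    public void Append(DateTimeOffset timestamp, int sum) =>
        File.AppendAllText(path, $"{timestamp:o};{sum}\n");

    // On startup, return the pairs that fall within the given window.
    public IEnumerable<(DateTimeOffset Timestamp, int Sum)> ReadRecent(TimeSpan window)
    {
        if (!File.Exists(path)) yield break;
        var cutoff = DateTimeOffset.UtcNow - window;
        foreach (var line in File.ReadLines(path))
        {
            var parts = line.Split(';');
            var timestamp = DateTimeOffset.Parse(parts[0], null, DateTimeStyles.RoundtripKind);
            if (timestamp >= cutoff)
                yield return (timestamp, int.Parse(parts[1]));
        }
    }
}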

I have to answer my own question because I got an answer from someone offline and it works in my case. I agree that the correct answer depends on the business logic, and I think I explained mine as clearly as I could.
So, the right way to deal with a possible application recycle here is to move the ChangeEstimator class into an external process and exchange messages with it.
I use AMQP (RabbitMQ) to send messages to the estimator. The key point is that the risk of the external process being closed/recycled is really small compared to that of the web application that contains the rest.
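For completeness, the sending side can look roughly like this with the RabbitMQ.Client package (the queue name and host are illustrative, not from my actual setup):

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Durable queue so queued values also survive a broker restart.
channel.QueueDeclare(queue: "estimator-values", durable: true,
                     exclusive: false, autoDelete: false);

// Called for every incoming value instead of estimator.AddNewValue(...).
void Publish(int value)
{
    var body = Encoding.UTF8.GetBytes(value.ToString());
    channel.BasicPublish(exchange: "", routingKey: "estimator-values",
                         basicProperties: null, body: body);
}

The external estimator process consumes from that queue, feeds its own ChangeEstimator, and publishes the resulting Change back to interested listeners.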

Related

How to implement a State Pattern for Blazor pages using multiple components to build a page?

I have a Blazor page that utilizes multiple components within it - how can I implement a State pattern (ideally per-page) that would be able to handle the current state of a page?
Currently I have all of the state and state-manipulation being done on the page (and via injected Services), but I imagine it would be cleaner to implement a state pattern where each page has some kind of State object which then allows you to manipulate the page and its components in a strict manner.
Ideally the State object would implement INotifyPropertyChanged and be able to have its state updated dynamically, but I also don't hate the idea of having the State object delegate state manipulation to methods on the object, to make sure state isn't just updated one-off from the Blazor page.
I've already tried to implement some kind of MVVM pattern, but that turned into more questions than answers.
I started to create a State object for the current page being worked on, but I'm not sure if I should basically just be putting most of the logic that was on the Blazor page in the State object, or if I should still have some data, but delegating the heavy lifting to the State.
e.g. I have some code that used to be in the OnAfterRenderAsync method on the Blazor page, but I'm in the process of moving basically everything in there to a LoadMatterDetails() method in the State object that handles that. Does this make sense, or should I only really keep object state in the State object, writing to and reading from it when particular pieces of information are available?
public class MatterDetailsState : IMatterDetailsState
{
    private readonly IMatterDetailsService matterDetailsService;
    private readonly NavigationManager navigationManager;

    public bool EditMode { get; private set; } = false;
    public int EditMatterId { get; private set; } = 0;
    public Matter Matter { get; set; } = new();
    public MatterPaymentOptionDetails PaymentDetails { get; set; } = new();
    public List<MatterStatus> MatterStatuses { get; private set; } = new();

    public MatterDetailsState(
        IAppState appState,
        IMatterDetailsService matterDetailsService,
        NavigationManager navigationManager)
    {
        this.matterDetailsService = matterDetailsService;
        this.navigationManager = navigationManager;
    }

    public async Task LoadMatterDetails()
    {
        // Query params handling
        var uri = navigationManager.ToAbsoluteUri(navigationManager.Uri);
        var decryptedUri = HelperFunctions.Decrypt(uri.Query);
        var queryParamFound = QueryHelpers.ParseQuery(decryptedUri).TryGetValue("MatterID", out StringValues uriMatterID);
        if (queryParamFound)
        {
            EditMatterId = Convert.ToInt32(uriMatterID);
            EditMode = !String.IsNullOrEmpty(uriMatterID) && EditMatterId > 0;
        }

        await LoadMatterStatuses();

        if (EditMode)
        {
            Matter = await matterDetailsService.GetMatterByIdAsync(EditMatterId);
            PaymentDetails = await matterDetailsService.GetMatterPaymentInfoByMatterId(EditMatterId);
        }
    }

    private async Task LoadMatterStatuses()
    {
        MatterStatuses = await matterDetailsService.GetAvailableMatterStatusesAsync();
    }
}
Basically, should I move more or less the entire function into the State object, or should only the calls that set Matter and PaymentDetails go through methods on the State object? Not sure what the standard for this is.
I've used Fluxor, which is a Flux/Redux library for Blazor, and have liked it. It holds all your state in an object which you can inject into your component for read access. You then manage state by dispatching actions from your components which are processed by effects or reducers which are essentially methods that process the action and make changes to state. It keeps everything neat, separated and very testable in my experience.
https://github.com/mrpmorris/Fluxor
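To give a rough idea of the shape this takes, here is a minimal sketch using Fluxor's [FeatureState] and [ReducerMethod] attributes (the state and action names here are illustrative, not from the question):

using Fluxor;

// State is an immutable record; Fluxor discovers it via [FeatureState].
[FeatureState]
public record MatterState
{
    public bool Loading { get; init; }
    public Matter? Matter { get; init; }
    private MatterState() { } // parameterless ctor required by Fluxor
}

// Actions are plain message classes dispatched from components.
public record LoadMatterAction(int MatterId);

public static class MatterReducers
{
    // A reducer is a pure function: current state + action -> new state.
    [ReducerMethod]
    public static MatterState OnLoadMatter(MatterState state, LoadMatterAction action)
        => state with { Loading = true };
}

A component then injects IState<MatterState> for read access and IDispatcher to dispatch actions; async work such as the service calls in the question goes into [EffectMethod] handlers that dispatch follow-up actions when the data arrives.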
There isn't a "standard", but applying good coding practices such as the Single Responsibility Principle and clean design principles drives you in a certain direction.
I divide the presentation and UI code into three:
UI - components and UI logic.
State - the data you want to track state on.
Data Management - getting, saving, ....
Each is represented by one or more objects (Data Management is the ViewModel in MVVM).
You can see an example of this in this answer - https://stackoverflow.com/a/75157903/13065781
The problem is then how you create a ViewModel instance that is scoped the same as the form component. You either:
Scope the VM as transient - you can cascade it in the form if sub-components need direct access to it. This is the approach in the referenced example.
Create an instance from the IServiceProvider using ActivatorUtilities and deal with the disposal in the form component.
If the VM implements IDisposable/IAsyncDisposable, you have to do the second.
The following extension class adds two methods to the IServiceProvider that wrap up this functionality.
public static class ServiceUtilities
{
    public static bool TryGetComponentService<TService>(this IServiceProvider serviceProvider, [NotNullWhen(true)] out TService? service) where TService : class
    {
        service = serviceProvider.GetComponentService<TService>();
        return service != null;
    }

    public static TService? GetComponentService<TService>(this IServiceProvider serviceProvider) where TService : class
    {
        var serviceType = serviceProvider.GetService<TService>()?.GetType();
        if (serviceType is null)
            return ActivatorUtilities.CreateInstance<TService>(serviceProvider);

        return ActivatorUtilities.CreateInstance(serviceProvider, serviceType) as TService;
    }
}
Your form then can look something like this:
public partial class UIForm : UIWrapperBase, IAsyncDisposable
{
    [Inject] protected IServiceProvider ServiceProvider { get; set; } = default!;

    public MyEditorPresenter Presenter { get; set; } = default!;

    private IDisposable? _disposable;

    public override Task SetParametersAsync(ParameterView parameters)
    {
        // Overrides the base because we need to set up the Presenter service
        // before any rendering takes place.
        parameters.SetParameterProperties(this);

        if (!initialized)
        {
            // Gets an instance of the Presenter from the service provider.
            this.Presenter = ServiceProvider.GetComponentService<MyEditorPresenter>() ?? default!;
            if (this.Presenter is null)
                throw new NullReferenceException($"No Presenter could be created.");

            _disposable = this.Presenter as IDisposable;
        }

        return base.SetParametersAsync(ParameterView.Empty);
    }

    //....

    public async ValueTask DisposeAsync()
    {
        _disposable?.Dispose();

        if (this.Presenter is IAsyncDisposable asyncDisposable)
            await asyncDisposable.DisposeAsync();
    }
}

One Event Behind when using FubuMVC.ServerSentEvents

We are currently working on implementing a notifications feature for the app we are developing for Windows Azure. We want to inform users when actions they are interested in have taken place (such as the importing and exporting of files and so on). The notifications are specific to each logged-in user.
We have been using ServerSentEvents and we have found that the notification stream is one event behind, so we do not start seeing notifications until the second action has taken place, and that notification is for the first action. In our dev environments this problem always happens, but in Azure it appears to work as expected (sometimes!!!)
We are using the default implementations of the Event Queue and Channel Initialiser. We publish via the EventPublisher.WriteTo method, passing a topic and a server event.
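Roughly, the publishing call looks like this (the publisher field, user id and payload variables here are illustrative):

// Publish a notification to the per-user topic; WriteTo takes the topic
// and the server event, as described above.
_eventPublisher.WriteTo(
    new UserNotificationsTopic { UserId = currentUserId },
    new UserNotificationServerEvent("1", "notification", payload));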
Here is our implementation of Topic:
public class UserNotificationsTopic : Topic
{
    [RouteInput]
    public Guid UserId { get; set; }

    public override bool Equals(object obj)
    {
        return Equals(obj as UserNotificationsTopic);
    }

    public bool Equals(UserNotificationsTopic other)
    {
        if (ReferenceEquals(null, other)) return false;
        return ReferenceEquals(this, other) ||
               UserId.Equals(other.UserId);
    }

    public override int GetHashCode()
    {
        return UserId.GetHashCode();
    }
}
Implementation of our ServerEvent:
public class UserNotificationServerEvent : IServerEvent
{
    public string Id { get; private set; }
    public string Event { get; private set; }
    public int? Retry { get; set; }
    public object Data { get; private set; }

    public UserNotificationServerEvent(string id, string @event, object data)
    {
        this.Id = id;
        this.Event = @event;
        this.Data = data;
    }
}
Any help, suggestions would be appreciated!
Thanks
Scott
I think I have found the solution (with a lot of assistance from my colleague).
While reading about SignalR we found it has a similar problem in Azure (one event behind); the recommended solution was to add urlCompression settings to the Web.config for the Azure Web Role.
I added the settings, which are the defaults for IIS 7.5:
<urlCompression doStaticCompression="true" doDynamicCompression="false"/>
And this appears to have solved the issue! I am not sure why yet, so if anyone can help with that please feel free to add.
Anyway, I just thought I'd share the update.

Registering components in Autofac

I'm new to Autofac (using 2.1.14.854) and I'm still trying to get my head around it.
I have an interface with one or more implementations, and the implementations should be fired in a specific sequence.
For example:
public interface IPipeline
{
    void Execute();
}

public class MyPipeLine_1 : IPipeline
{
    public void Execute() { }
}

public class MyPipeLine_2 : IPipeline
{
    public void Execute() { }
}

foreach (IPipeline pipeline in pipelines)
    pipeline.Execute();
The execution order of the IPipeline implementations should be MyPipeLine_2, MyPipeLine_1, etc.
I have two questions:
1) How do I register all the components in an assembly that implement the IPipeline interface and place them in a list?
2) Can I define the execution order of these components while registering them?
Thanks in advance.
[A quick note: You're using a really old version of Autofac. You may need to update to get the features I'm talking about.]
The first part is easy - Autofac implicitly supports IEnumerable<T>. Just register all the types and resolve:
var builder = new ContainerBuilder();
builder.RegisterType<MyPipeLine_1>().As<IPipeline>();
builder.RegisterType<MyPipeLine_2>().As<IPipeline>();
var container = builder.Build();
var containsAllPipelineComponents = container.Resolve<IEnumerable<IPipeline>>();
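If you'd rather scan an assembly than register each type by hand (the "in an assembly" part of your first question), Autofac's assembly scanning can do the same thing; a minimal sketch, assuming your version supports RegisterAssemblyTypes:

// Register every concrete type in IPipeline's assembly that implements it.
builder.RegisterAssemblyTypes(typeof(IPipeline).Assembly)
       .AssignableTo<IPipeline>()
       .As<IPipeline>();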
It'd be better if you can take it as an IEnumerable<T> rather than a list, but if you have to have a list, you could add a registration for it:
builder
    .Register(c => new List<IPipeline>(c.Resolve<IEnumerable<IPipeline>>()))
    .As<IList<IPipeline>>();
The second part isn't as easy. Autofac doesn't necessarily guarantee the order of the items in the list. If you need to order them, you'll need to put some sort of ordering metadata on them - attributes, properties, something that you can use to order the pipeline after the fact.
Alternatively, if your pipeline has "stages" or "events" where different components are applicable, look at the design of your pipeline and have a different pipeline interface per event. Within the event it shouldn't matter what order each item executes in. (This is similar to how event handlers in .NET work now. You'd want to mimic that behavior - different events for different stages in the overall lifecycle, but within each specific stage the order of execution of handlers doesn't matter.)
An example might look like:
public interface IFirstStage
{
    void Execute();
}

public interface ISecondStage
{
    void Execute();
}

public interface IThirdStage
{
    void Execute();
}

public class PipelineExecutor
{
    public IEnumerable<IFirstStage> FirstHandlers { get; private set; }
    public IEnumerable<ISecondStage> SecondHandlers { get; private set; }
    public IEnumerable<IThirdStage> ThirdHandlers { get; private set; }

    public PipelineExecutor(
        IEnumerable<IFirstStage> first,
        IEnumerable<ISecondStage> second,
        IEnumerable<IThirdStage> third)
    {
        this.FirstHandlers = first;
        this.SecondHandlers = second;
        this.ThirdHandlers = third;
    }

    public void ExecutePipeline()
    {
        this.ExecuteFirst();
        this.ExecuteSecond();
        this.ExecuteThird();
    }

    public void ExecuteFirst()
    {
        foreach (var handler in this.FirstHandlers)
        {
            handler.Execute();
        }
    }

    // ExecuteSecond and ExecuteThird look just
    // like ExecuteFirst, but with the appropriate
    // set of handlers.
}
Then when you register your handlers it's simple:
var builder = new ContainerBuilder();
builder.RegisterType<SomeHandler>().As<IFirstStage>();
builder.RegisterType<OtherHandler>().As<IFirstStage>();
builder.RegisterType<AnotherHandler>().As<ISecondStage>();
// You can have any number of handlers for any stage in the pipeline.
// When you're done, make sure you register the executor, too:
builder.RegisterType<PipelineExecutor>();
And when you need to run the pipeline, resolve and run.
var executor = container.Resolve<PipelineExecutor>();
executor.ExecutePipeline();
This is just like event handlers but not using delegates. You have a fixed order of pipeline "events" or "stages" but the handlers inside each stage aren't guaranteed order.
If you need to modify the pipeline to have more stages, yes, you'll need to modify code. Just like if you had a new event you wanted to expose. However, to add, remove, or change handlers, you just modify your Autofac registrations.
I suggest you use the Metadata feature.
It gives you the ability to define the order at registration time.
Here is an example:
internal class Program
{
    private static void Main(string[] args)
    {
        var builder = new ContainerBuilder();
        var s1 = "First";
        var s2 = "Second";
        var s3 = "Third";

        builder.RegisterInstance(s1).As<string>().WithMetadata<Order>(c => c.For(order => order.OrderNumber, 1));
        builder.RegisterInstance(s2).As<string>().WithMetadata<Order>(c => c.For(order => order.OrderNumber, 2));
        builder.RegisterInstance(s3).As<string>().WithMetadata<Order>(c => c.For(order => order.OrderNumber, 3));

        using (var container = builder.Build())
        {
            var strings = container.Resolve<IEnumerable<Meta<string, Order>>>();
            foreach (var s in strings.OrderBy(meta => meta.Metadata.OrderNumber))
            {
                Console.WriteLine(s.Value);
            }
        }

        Console.ReadKey();
    }

    public class Order
    {
        public int OrderNumber { get; set; }
    }
}

What are the options for creating an object model that needs caching using Entity Framework and PostSharp?

I am working on an internet application with high performance demands, which means that good caching functionality is crucial for our success.
The solution is built with Entity Framework Code First for database access and PostSharp for caching. At the moment the model looks something like this:
public class Article
{
    private readonly IProducerOperator _producerOperator;

    public Article(IProducerOperator producerOperator)
    {
        _producerOperator = producerOperator;
    }

    public int Id { get; set; }
    ...
    public int ProducerId { get; set; }
    public Producer Producer
    {
        get { return _producerOperator.GetProducer(ProducerId); }
    }
}
The operations classes look like this:
public class ArticleOperations : IArticleOperations
{
    private readonly IDataContext _context;

    public ArticleOperations(IDataContext context)
    {
        _context = context;
    }

    [Cache]
    public Article GetArticle(int id)
    {
        var article = _context.Article.Find(id);
        return article;
    }
}

public class ProducerOperations : IProducerOperations
{
    private readonly IDataContext _context;

    public ProducerOperations(IDataContext context)
    {
        _context = context;
    }

    [Cache]
    public Producer GetProducer(int id)
    {
        var producer = _context.Producer.Find(id);
        return producer;
    }
}
I am NOT fond of having dependencies in the business objects, but the argument for it is to get lazy loading from the cache... for the most part. This solution also means that caching is done only once for a producer, in GetProducer. Normally I would not even consider having dependencies there; the objects should be POCOs, nothing more. I would really need some new input on this one. How can I do it instead? Is this the best way?
We also need to resolve the opposite direction: from a producer that is cached we should be able to retrieve all of its articles.
First, I wish to say there are actually some (one?) solutions that use Entity Framework Code First in combination with caching using PostSharp. IdeaBlade has released DevForce Code First, which does exactly this. That kind of framework resolves it all, and we can use Entity Framework as it is supposed to be used, in combination with caching.
But that did not become the solution in this case. We went for complete separation of concerns, meaning the business objects' only concern became holding the data. The operations classes got the responsibility of filling the business objects.
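A rough sketch of what that separation looks like, based on the classes above (the GetArticlesForProducer method name is mine, not from the original code): the business object becomes a plain POCO, and the operations layer resolves the relations in both directions:

public class Article
{
    public int Id { get; set; }
    public int ProducerId { get; set; }
    public Producer Producer { get; set; }   // filled by the operations class, not lazily loaded
}

public class ArticleOperations : IArticleOperations
{
    private readonly IDataContext _context;
    private readonly IProducerOperations _producerOperations;

    public ArticleOperations(IDataContext context, IProducerOperations producerOperations)
    {
        _context = context;
        _producerOperations = producerOperations;
    }

    [Cache]
    public Article GetArticle(int id)
    {
        var article = _context.Article.Find(id);
        // The operations layer fills the relation, so Producer still
        // comes from the (cached) producer operations.
        article.Producer = _producerOperations.GetProducer(article.ProducerId);
        return article;
    }

    // The opposite direction lives here too: all articles for a cached producer.
    [Cache]
    public IList<Article> GetArticlesForProducer(int producerId)
    {
        return _context.Article.Where(a => a.ProducerId == producerId).ToList();
    }
}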

Where should calculated columns live in an MVVM model?

I have a WPF DataGrid displaying Products. It has two fields, price and mass, which are properties of the Product class. I need to show a separate column in the grid named MultipliedValue = price * mass. As per the MVVM pattern, where should I do this?
1) In the model, by making a read-only property.
2) In a converter, so that only my UI will be aware of it?
3) Or in the view model?
Please suggest which option I should choose and why.
Thanks.
I would disregard option #2 from the beginning -- converters should be used only to account for implementation details of the UI, and specifically in MVVM perhaps not even then (as you can do the conversion inside the ViewModel, which is option #3 and more convenient).
Between #1 and #3, in this case IMHO it's best to go with #1 -- the derived price is not something that's only relevant for your UI, and of course the concept of price (and how it is derived) is going to stay fixed throughout your application. Both the UI and your backend may choose to use this property or not.
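As a minimal sketch, option #1 is just a read-only computed property on the model (property names taken from the question):

public class Product
{
    public double Price { get; set; }
    public double Mass { get; set; }

    // The derived value lives on the model, so both UI and backend can use it.
    public double MultipliedValue
    {
        get { return Price * Mass; }
    }
}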
I would argue differently (than Jon). I put in the model only properties that I would like to serialize (say, from the server). Computed properties don't serialize, and hence they aren't in the model.
Recently, my favorite Model/ViewModel paradigm is as follows: Product is a class in the model that has nothing but the simplest getters and setters. ProductVM is a class in the view model which contains a Product and has the additional VM logic. Most importantly, the property changed notification - which in my opinion is also part of the VM and not the model.
// Model:
class Product
{
    public double Price { get; set; }
    public double Mass { get; set; }
}

// View Model:
class ProductVM : INotifyPropertyChanged
{
    Product _product;

    public event PropertyChangedEventHandler PropertyChanged;

    public double Price
    {
        get { return _product.Price; }
        set { _product.Price = value; Raise("Price"); Raise("Total"); }
    }

    public double Mass
    {
        get { return _product.Mass; }
        set { _product.Mass = value; Raise("Mass"); Raise("Total"); }
    }

    public double Total
    {
        get { return Price * Mass; }
    }

    private void Raise(string name)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(name));
        }
    }

    public ProductVM(Product p)
    {
        _product = p;
    }

    public ProductVM()
    {
        // in case you need this
        _product = new Product();
    }
}
Yes, there is a lot of boilerplate here, but once you do all the typing, you'll find this separation between Model and ViewModel very helpful. My 2 cents.
Note: I think Jon's approach is also correct, and his reasons are valid. I don't think there is one answer.