When does it make sense to use separate scoped services in a Blazor Server app?

In my Blazor Server app I want to use a scoped service for variables that should be stored per client for the duration of the session. I will use these variables in my Razor pages, where they should keep their values throughout the session.
On the other side, I also want a scoped service for data that is shown (and changed) in several Razor pages at the same time. In this second service I will define an event so that other screens can subscribe to change notifications.
Question: Can I cover these two concerns with one common service, or should I use two separate services?
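For illustration, a minimal sketch of what two separate scoped services could look like (SessionState and SharedDataService are invented names; in Blazor Server a scoped service lives for the duration of the circuit, i.e. roughly the user's session):

```csharp
// 1) Plain per-session state: just holds values between page navigations.
public class SessionState
{
    public string? SelectedCustomer { get; set; }
    public int PageSize { get; set; } = 25;
}

// 2) Shared, observable data: raises an event so several pages can react.
public class SharedDataService
{
    private string? _status;

    public string? Status
    {
        get => _status;
        set
        {
            _status = value;
            OnChange?.Invoke();   // notify all subscribed components
        }
    }

    public event Action? OnChange;
}

// Registration in Program.cs (or Startup.cs):
// builder.Services.AddScoped<SessionState>();
// builder.Services.AddScoped<SharedDataService>();
```

Keeping the two concerns in separate services means pages that only need plain session state never take a dependency on the event plumbing, which is an argument for two services rather than one combined one.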

Related

Fetching potentially needed data from repository - DDD

We have (roughly) the following architecture:
1. An application service does the infrastructure job: it fetches data from repositories, which are hidden behind interfaces.
2. An object graph is created and passed to the appropriate domain service.
3. The domain service does its thing and raises the appropriate events.
4. The events are handled in different application services, which perform persistence operations (altering repositories, sending e-mails, etc.).
However, the domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example, if Product X is of type Car, we need to know the price of that car model from some external CatalogService (invented example) hidden behind ICatalogService. This operation is potentially expensive (a REST call).
How do we go about this?
Do we pre-fetch all the data in the application service listed as (1), even though we might not need it? Or do we inject the ICatalogService interface into the domain service and fetch the data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
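A minimal sketch of that first pattern, using the invented ICatalogService from the question (Product, ProductType and PricingDomainService are made-up names for illustration):

```csharp
public enum ProductType { Car, Boat }

public record Product(ProductType Type, string Model, decimal BasePrice);

// Domain layer owns the contract; infrastructure implements it.
public interface ICatalogService
{
    decimal GetPrice(string carModel);
}

public class PricingDomainService
{
    private readonly ICatalogService _catalog;

    public PricingDomainService(ICatalogService catalog) => _catalog = catalog;

    public decimal CalculatePrice(Product product)
    {
        // The expensive lookup happens only when the condition holds.
        if (product.Type == ProductType.Car)
            return _catalog.GetPrice(product.Model);

        return product.BasePrice;
    }
}

// Infrastructure layer: the actual REST call lives behind the interface.
// public class RestCatalogService : ICatalogService { ... }
```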
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, since your application code is already coordinating the movement of inputs between the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
How exactly would you signal the application layer?
Simple queries; which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes the result back into the domain model.
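A rough sketch of that pull/push exchange; NeedsCatalogPrice()/ApplyCatalogPrice() are hypothetical members on the aggregate, and the repository/gateway interfaces are invented stand-ins:

```csharp
// Invented interfaces standing in for the real repository and REST client.
public interface IProductRepository
{
    Task<Product> GetAsync(Guid id);
    Task SaveAsync(Product product);
}

public interface ICatalogGateway
{
    Task<decimal> GetPriceAsync(string model);
}

public class PricingApplicationService
{
    private readonly IProductRepository _repository;
    private readonly ICatalogGateway _catalog;

    public PricingApplicationService(IProductRepository repository,
                                     ICatalogGateway catalog)
    {
        _repository = repository;
        _catalog = catalog;
    }

    public async Task PriceProductAsync(Guid productId)
    {
        var product = await _repository.GetAsync(productId);

        // Pull: ask the model whether it needs external information.
        if (product.NeedsCatalogPrice())        // hypothetical query method
        {
            // The application layer performs the expensive REST call...
            var price = await _catalog.GetPriceAsync(product.Model);

            // ...and pushes the result back into the model.
            product.ApplyCatalogPrice(price);   // hypothetical command method
        }

        await _repository.SaveAsync(product);
    }
}
```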
There isn't enough information to give you well-targeted advice. I suspect you need to refactor your domain into further subdomains. It sounds like your domain service has more than one responsibility. Keep the service simple.
In addition, if you have a long-running task, such as a service call that takes a long time, you need to architect the wait away. The most supple design will not keep the consumer waiting: it will return immediately with some sort of result, even if that is simply a periodic status update.
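One hedged way to sketch that "return immediately" idea (all names here are invented; a real system would likely use a proper job queue or message bus instead):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical coordinator: start the slow call in the background and hand
// the caller a job id immediately; the consumer polls for status.
public class SlowOperationCoordinator
{
    private readonly ConcurrentDictionary<Guid, string> _status = new();

    public Guid Start(Func<Task> slowWork)
    {
        var jobId = Guid.NewGuid();
        _status[jobId] = "Running";

        _ = Task.Run(async () =>
        {
            try
            {
                await slowWork();          // e.g. the external REST call
                _status[jobId] = "Done";
            }
            catch
            {
                _status[jobId] = "Failed";
            }
        });

        return jobId;                      // caller is not kept waiting
    }

    public string GetStatus(Guid jobId) =>
        _status.TryGetValue(jobId, out var s) ? s : "Unknown";
}
```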

Assign names to applications in Service Fabric

I have an application in Service Fabric and I'm going to deploy another one.
I wonder if it's possible to assign a different name to each application.
Currently I access the application using the address:
http://sf-spartan.eastus.cloudapp.azure.com
Can access be configured to look like this:
http://application1.sf-spartan.eastus.cloudapp.azure.com
or
http://sf-spartan.eastus.cloudapp.azure.com/application1
Sure, have a look here. Use the ApplicationName argument to define it.
Every application instance you create must in fact have a unique name.
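As a sketch, creating two instances of the same (already provisioned) application type under different names might look like this with the System.Fabric client API; the type name, version and instance names are placeholders:

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var client = new FabricClient();

        // Same application type, two differently named instances.
        await client.ApplicationManager.CreateApplicationAsync(
            new ApplicationDescription(
                new Uri("fabric:/Application1"),   // must be unique
                "MyAppType",                       // placeholder type name
                "1.0.0"));                         // placeholder version

        await client.ApplicationManager.CreateApplicationAsync(
            new ApplicationDescription(
                new Uri("fabric:/Application2"),
                "MyAppType",
                "1.0.0"));
    }
}
```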
You can reach your application instance through its url by using a reverse proxy. (either the built-in one, or a custom one like Traefik)
Usually, the application and service name are part of the url, e.g.:
http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
This does require a web-based communication listener.
Even more info here.

Deploying IdentityServer3 behind a Load Balancer

We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the other items, but are concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple IdentityServer installations serving the same requests (e.g. load balanced), you won't be able to use the various in-memory token stores; otherwise authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by another, nor will user consent be persisted. If you are using IIS, machine-key synchronization is also necessary for tokens to work across all instances.
There's an Entity Framework package available for the token stores; you'll need its operational data services.
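A sketch of wiring that up with the IdentityServer3.EntityFramework package, assuming a connection string that points at a database shared by all load-balanced instances:

```csharp
using IdentityServer3.Core.Configuration;
using IdentityServer3.EntityFramework;

// Point the operational stores (authorization codes, refresh and
// reference tokens, consent) at a database shared by every instance.
var efOptions = new EntityFrameworkServiceOptions
{
    ConnectionString = "IdSvr3Operational"   // placeholder connection string name
};

var factory = new IdentityServerServiceFactory();
factory.RegisterOperationalServices(efOptions);

// ...then register your own user service, client store and scope store
// on the same factory as before.
```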
There's also a very useful guide to going live here.

Where to put my application logic when using Entity Framework and MVVM

Over the last few days I've spent a lot of time creating the architecture for my program, but I still have a problem with it. At the moment it looks like this:
DataLayer: contains my context class (derived from DbContext) and the mapper classes (derived from EntityTypeConfiguration, like JobMap) for the domain objects.
DomainLayer: contains my domain/business objects, like Job or Schedule.
Presentation Layer: contains the *ViewModel and *View classes (I use WPF for the views).
Now to my question: I want to build a scheduling application with some optimization abilities (it is a single-user, single-PC application, so no further decoupling, such as for a web application, is needed). But I have the problem that I don't know where this application logic fits into the architecture.
Consider the following use case: the user clicks a "Start" button on the view, which calls the ViewModel, which in turn calls my scheduling/optimization logic. That logic then gets all the new jobs from the database and creates/updates the current schedule. The ViewModel should then replace the old schedule with the newly created one. Finally, the view shows the generated schedule to the user.
In this case my ViewModel knows about my application logic (because it calls it) and about my domain/business objects (because the logic delivers e.g. a Schedule domain object, which the ViewModel encapsulates).
Is this a correct usage of EF, MVVM and my application logic?
Regards
To start, you'll want to identify which pieces of your application go where, and that's fairly easy to do. Essentially, you have to ask yourself: does this method or class help define my domain? If the answer is yes, put it in the domain layer; if not, put it in the presentation layer.
Here's how you'd look at it in your example:
1. Your presentation layer (PL) receives a message via the Start button.
2. The PL calls the domain and tells it to generate a schedule. This call is probably to a domain service.
3. Your domain service is then in charge of populating the Job domain objects, creating a new Schedule domain object (or modifying an existing one), and returning the Schedule domain object.
4. Your PL then simply displays the returned Schedule.
This might be different if you just wanted to obtain an existing Schedule object. Instead of calling a domain service, you would ask a domain repository to get the existing schedule. The repository would be the way of encapsulating or otherwise obscuring the data layer from your PL and from your Domain.
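To make that concrete, here is a rough sketch; SchedulingService, IJobRepository and Schedule.CreateFrom are invented names standing in for your own domain types:

```csharp
// Domain layer: the service owns the Jobs -> Schedule logic.
public class SchedulingService
{
    private readonly IJobRepository _jobs;   // hides the EF data layer

    public SchedulingService(IJobRepository jobs) => _jobs = jobs;

    public Schedule GenerateSchedule()
    {
        var newJobs = _jobs.GetNewJobs();
        return Schedule.CreateFrom(newJobs);  // hypothetical factory; domain logic stays here
    }
}

// Presentation layer: the ViewModel only coordinates, it doesn't schedule.
public class ScheduleViewModel
{
    private readonly SchedulingService _scheduling;

    public ScheduleViewModel(SchedulingService scheduling)
        => _scheduling = scheduling;

    public Schedule? CurrentSchedule { get; private set; }

    // Bound to the "Start" button's command.
    public void Start() => CurrentSchedule = _scheduling.GenerateSchedule();
}
```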
Now, what you DON'T want to do:
Do not fetch the list of jobs in your PL and then use that list to create the schedule in your ViewModel. That would be business logic that defines your domain.
If Schedules are commonly generated from Jobs, regardless of whether the caller is your MVVM app or a PHP site, then don't add complexity to your PL and domain layer by forcing the PL to first fetch the jobs and pass them back into the domain for a Schedule to be generated. The fact that those two concepts are tied together means the relationship helps define your domain, and thus belongs in the domain layer. An exception might be when both the jobs and the schedule to be modified rely on context from the front end (user input), but even this isn't always an exception.
Do not pass VMs into your domain. Let your ViewModel filter the data and determine what needs to be sent to which domain part.
It's really hard to give precise details about what you should place where, because only you have a clear view of what defines your domain, but here's essentially how I break it down:
Could I change/replace this without affecting how my business/domain works?
If the answer is yes, it does not belong in your domain. Example: you could replace your entire MVVM front end with flat PHP or ASPX pages, and even though it'd be a lot of work and a huge pain, you could do it without affecting how the rest of the business operates.

Automatically Select Per-Thread or Per-Request Lifetime Scopes

I am developing a library that is distributed internally within my company and consumed by a variety of applications. This library must be platform agnostic in that it may be deployed in a web context or even within a console app. I would like to register objects to be per-HTTP-request or per-thread, depending on the context of the application consuming this framework. In StructureMap, I can do this using the Hybrid lifetime. Essentially, if an HttpContext exists then the object will be scoped to that; otherwise ThreadLocalStorage will be used on a per-thread basis. No additional configuration is required for the distributed library or the consuming application. Is this possible using Autofac? Given the wide variability of our developers' skill levels, our goal is to minimize or eliminate any specialized configuration for consumers.
I understand that registrations can be context agnostic using the InstancePerLifetimeScope lifetime, but then consuming applications are required to consume the ASP.NET/WCF/MVC integration binaries in order to bind InstancePerLifetimeScope registrations to an Http Request. Or, for per-thread scopes, the consuming code needs to have the responsibility of creating a lifetime scope per thread.
Any suggestions?
It's easy to implement your own lifetime manager that checks whether HttpContext.Current != null and then delegates to one of the existing managers.
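A sketch of that hybrid dispatch (this shows the general idea only, not Autofac's actual extension points; HttpContext.Current requires classic ASP.NET / System.Web):

```csharp
using System.Collections;
using System.Threading;
using System.Web;   // classic ASP.NET; HttpContext.Current is null elsewhere

// Hypothetical storage: per-request when an HttpContext exists,
// otherwise fall back to thread-local storage.
public class HybridScopeStorage
{
    private static readonly ThreadLocal<Hashtable> ThreadStore =
        new(() => new Hashtable());

    private static IDictionary Current =>
        HttpContext.Current != null
            ? HttpContext.Current.Items     // scoped to the HTTP request
            : ThreadStore.Value!;           // scoped to the current thread

    public static object? Get(object key) => Current[key];

    public static void Set(object key, object value) => Current[key] = value;
}
```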
I would however suggest that each application wire up the appropriate manager itself. An example would be a unit-test scenario where HttpContext might not exist since it's mocked, and you might want to control the lifetime manually for test-specific purposes.