Calling a client-side human service from another client-side human service

I have created two client-side human services.
Now I want to call one client-side human service from the second client-side human service.
Is it possible to do so?

Generally, moving between CSHSs is handled by simply putting both CSHSs in a wrapper service. Service 1 can either set a variable or use multiple endpoints to indicate where to go next; at least one of those options leads to CSHS 2. We expose the wrapper service to the end user, and when they run it they wind up running CSHS 1, the first sub-service in the wrapper service. Some interaction with the presented Coach then takes them to CSHS 2.

You can also launch the second CSHS via its URL: expose the service to the user and call it from a script with window.open().
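A minimal sketch of that script approach, assuming the second CSHS has already been exposed (the URL below is only a placeholder; use the real run URL of the exposed service):

```typescript
// Minimal sketch: launching the second, exposed CSHS from a script block in
// the first service's Coach. The URL is a placeholder.
const secondServiceUrl = "https://bpm.example.com/path/to/exposed-cshs";

// Open the second CSHS in a new browser tab/window.
window.open(secondServiceUrl, "_blank");
```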

Related

Fetching potentially needed data from repository - DDD

We have (roughly) the following architecture:
1. An application service does the infrastructure job: it fetches data from repositories, which are hidden behind interfaces.
2. An object graph is created and passed to the appropriate domain service.
3. The domain service does its thing and raises appropriate events.
4. The events are handled in different application services, which perform persistence-related operations (altering repositories, sending e-mails, etc.).
However, the domain service (3) has become so complex that it requires data from different external APIs, but only if particular conditions are satisfied. For example, if product X is of type Car, we need to know the price of that car model from some external CatalogService (an invented example) hidden behind ICatalogService. This operation is a potentially expensive one (a REST call).
How do we go about this?
A. Do we pre-fetch all the data in the application service listed as (1), even though we might not need it?
B. Do we inject the ICatalogService interface into the given domain service and fetch the data only when needed?
The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside it.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
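A sketch of this first pattern in TypeScript (ICatalogService comes from the question; Product, PricingService, and the catalog URL are illustrative names, not from the original post):

```typescript
// The domain defines the contract it needs; the application/infrastructure
// layer implements it.

// Defined in the domain layer; it knows nothing about REST.
interface ICatalogService {
  getPrice(model: string): Promise<number>;
}

interface Product {
  type: "Car" | "Other";
  model: string;
  basePrice: number;
}

// Domain service: receives the capability and uses it only when needed.
class PricingService {
  constructor(private readonly catalog: ICatalogService) {}

  async calculatePrice(product: Product): Promise<number> {
    if (product.type === "Car") {
      // The potentially expensive lookup happens only for cars.
      return this.catalog.getPrice(product.model);
    }
    return product.basePrice;
  }
}

// Implemented in the application/infrastructure layer.
class RestCatalogService implements ICatalogService {
  async getPrice(model: string): Promise<number> {
    const response = await fetch(`https://catalog.example.com/prices/${model}`);
    const body = (await response.json()) as { price: number };
    return body.price;
  }
}
```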
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you already have a state machine something like this, since your application code is already coordinating the movement of inputs between the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
How exactly would you signal the application layer?
Simple queries; which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
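A sketch of this second pattern, where the domain model exposes what it still needs and the application layer decides how to obtain it (every name here is invented for illustration):

```typescript
// The domain model signals its requirements; the application layer owns the
// expensive call and pushes the result back in.

type Requirement =
  | { kind: "NeedsCatalogPrice"; model: string }
  | { kind: "Done"; price: number };

interface PricingModel {
  nextRequirement(): Requirement;      // application pulls this out of the model
  providePrice(price: number): void;   // application pushes information back in
}

interface CatalogClient {
  getPrice(model: string): Promise<number>;
}

// Application-layer coordinator: a small, explicit state machine.
async function runPricing(model: PricingModel, catalog: CatalogClient): Promise<number> {
  let requirement = model.nextRequirement();
  while (requirement.kind === "NeedsCatalogPrice") {
    const price = await catalog.getPrice(requirement.model); // expensive call owned by the application
    model.providePrice(price);                               // feed the result back to the domain
    requirement = model.nextRequirement();
  }
  return requirement.price;
}
```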
There isn't enough information to give you targeted advice, but I suspect you need to refactor your domain into further subdomains. It sounds like your domain service has far more than one responsibility. Keep the service simple.
In addition, if you have a long-running task, such as a service call that takes a long time, you need to architect it away. The most supple design will not keep the consumer waiting: it will return immediately with some sort of result to the user, even if it's simply a periodic status update.

Assign names to applications within Service Fabric

I have an application in Service Fabric and I'm going to deploy another one.
I wonder if it's possible to assign a different name to each application.
Currently I access the existing application using the address:
http://sf-spartan.eastus.cloudapp.azure.com
Can access be configured to look like this:
http://application1.sf-spartan.eastus.cloudapp.azure.com
or
http://sf-spartan.eastus.cloudapp.azure.com/application1
Sure, have a look here. Use the ApplicationName argument to define it.
Every application instance you create must in fact have a unique name.
You can reach your application instance through its URL by using a reverse proxy (either the built-in one or a custom one such as Traefik).
Usually, the application and service names are part of the URL, e.g.:
http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
This does require a web based communication listener.
Even more info here.
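For illustration, a call through the built-in reverse proxy might look like the sketch below (the cluster, application, service, and API route names are placeholders; the reverse proxy listens on port 19081 by default):

```typescript
// Calling a Service Fabric service through the reverse proxy, which routes
// on /ApplicationName/ServiceName.
const baseUrl = "http://mycluster.eastus.cloudapp.azure.com:19081";

async function getItems(): Promise<unknown> {
  // Application "MyApp", service "MyService", then the service's own route.
  const response = await fetch(`${baseUrl}/MyApp/MyService/api/items`);
  if (!response.ok) {
    throw new Error(`Reverse proxy call failed with status ${response.status}`);
  }
  return response.json();
}
```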

Best place to fetch 3rd party data in ES/CQRS

I have a RegisterUserCommand with some user data.
To be able to register a user with some additional information, I need to connect to a 3rd party, so my questions are:
1) Should the command already have all of that 3rd-party data when it is called?
2) Would it be OK if the command handler connected to the 3rd party and retrieved it?
3) I don't think that my aggregate root should be doing it, but in a sense it is domain logic.
I think that #2 is the best way, but I would like to hear whether I'm going about it wrong or not.
(The actual case is not registering a user, but it does need to fetch data from a remote service/3rd party.)
The issue with (2) is that your domain layer (where the command handler definitely belongs) becomes dependent on an external bounded context. This breaks the onion architecture's inner-layer isolation.
Your first point is basically correct for some cases: if your service layer can fetch this data and send a self-contained command, that is one possible solution.
Another solution is that, instead of invoking a command handler directly, you send a message that starts a process manager. The process manager sends an information-collection request, gets the data back, and then sends a command to your handler with all the information required. Since this happens via asynchronous messaging, you have no synchronous dependency on the third party, and your application will keep working (at least to some extent) even if the third party is down; when the third party comes back up, all queued requests will be processed.
Usually, durable messaging also has some retry capability, which decreases the risk that the request made to the external bounded context will fail.
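A sketch of that process-manager idea, assuming some durable message bus with publish/subscribe (the bus API, topic names, and the third-party lookup are all invented for illustration):

```typescript
interface Bus {
  publish(topic: string, message: unknown): Promise<void>;
  subscribe(topic: string, handler: (message: any) => Promise<void>): void;
}

// 1. The caller publishes an intent instead of invoking the handler directly.
async function requestRegistration(bus: Bus, userData: { email: string }): Promise<void> {
  await bus.publish("RegistrationRequested", userData);
}

// 2. The process manager enriches the intent asynchronously, then issues the
//    real command once all required information is available.
function startRegistrationProcessManager(
  bus: Bus,
  thirdParty: { lookup(email: string): Promise<Record<string, unknown>> }
): void {
  bus.subscribe("RegistrationRequested", async (msg) => {
    const extra = await thirdParty.lookup(msg.email); // retried by the bus if the 3rd party is down
    await bus.publish("RegisterUser", { ...msg, ...extra });
  });
}

// 3. The command handler now receives a self-contained command.
function startRegisterUserHandler(bus: Bus): void {
  bus.subscribe("RegisterUser", async (cmd) => {
    console.log("Registering user with enriched data:", cmd);
  });
}
```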
Your 3rd-party data is not part of your domain but is required by it, so you could have a command that results in a "data requested" event to which an external process subscribes. This process could then gather the required 3rd-party data and package it into another command, which results in another event stating that the data was provided, which in turn causes your query data to be updated.
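A compact sketch of this event-driven variant (event names, handlers, and the bus are illustrative only):

```typescript
interface EventBus {
  publish(type: string, payload: any): Promise<void>;
  subscribe(type: string, handler: (payload: any) => Promise<void>): void;
}

function wireDataEnrichment(
  bus: EventBus,
  thirdParty: { fetchProfile(userId: string): Promise<Record<string, unknown>> }
): void {
  // The original command produced a "UserDataRequested" event; an external
  // process subscribes, gathers the 3rd-party data, and packages it into a
  // follow-up command.
  bus.subscribe("UserDataRequested", async ({ userId }) => {
    const profile = await thirdParty.fetchProfile(userId);
    await bus.publish("ProvideUserData", { userId, profile });
  });

  // Handling ProvideUserData raises "UserDataProvided", which the read side
  // projects into the query model.
  bus.subscribe("UserDataProvided", async ({ userId, profile }) => {
    console.log("Updating query model for", userId, profile);
  });
}
```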

Where to put my application logic when using Entity Framework and MVVM

In the last few days I have spent a lot of time creating the architecture for my program, but I still have a problem with it. At the moment it looks like this:
DataLayer: here reside my context class, which derives from DbContext, and the mapper classes for the domain objects, which derive from EntityTypeConfiguration (e.g. JobMap).
DomainLayer: here reside my domain/business objects, such as Job or Schedule.
Presentation Layer: here I have the *ViewModel and *View classes (I use WPF for the views).
Now to my question: I want to build a scheduling application with some optimization abilities (it is a single-user, single-PC application, so no further decoupling such as a web tier is needed). The problem is that I don't know where this scheduling/optimization part fits into the architecture.
Consider the following use case: the user clicks a "Start" button on the view, which calls the ViewModel, which redirects to my scheduling/optimization application. This app then gets all the new jobs from the database and creates/updates the current schedule. The ViewModel should then replace the old schedule with the newly created one. Finally, the view shows the generated schedule to the user.
In this case my ViewModel knows about my application (because it calls it) and about my domain/business objects (because my app will deliver e.g. a Schedule domain object, which the ViewModel encapsulates).
Is this a correct usage of EF, MVVM, and my application logic?
Regards
To start, you'll want to identify which pieces of your application go where, and that's fairly easy to do. Essentially, you have to ask yourself: does this method or class help define my domain? If the answer is yes, you put it in the domain layer; if not, you put it in presentation.
Here's how you'd look at it in your example:
Your Presentation Layer (PL) receives a message via the Start button.
The PL calls the Domain and tells it to generate a schedule. This call is probably to a domain service.
Your domain service is then in charge of populating the Job domain objects, creating a new Schedule domain object (or modifying an existing one), and returning the Schedule domain object.
Your PL then simply displays the returned Schedule.
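The same flow sketched in code for clarity (the real project would use C# classes with WPF; SchedulingService, JobRepository, and ScheduleViewModel are illustrative names, not prescribed ones):

```typescript
interface Job { id: number; duration: number; }
interface Schedule { jobs: Job[]; }

// Data-layer contract (implemented with EF/DbContext in the real application).
interface JobRepository {
  findNewJobs(): Job[];
}

// Domain layer: owns the rule "a schedule is generated/updated from jobs".
class SchedulingService {
  constructor(private readonly jobs: JobRepository) {}

  generateSchedule(): Schedule {
    const pending = this.jobs.findNewJobs(); // data access stays behind the repository
    // ...optimization logic lives here, in the domain...
    return { jobs: pending };
  }
}

// Presentation layer: the ViewModel only coordinates and displays.
class ScheduleViewModel {
  schedule?: Schedule;

  constructor(private readonly scheduling: SchedulingService) {}

  onStartClicked(): void {
    this.schedule = this.scheduling.generateSchedule(); // no business logic here
  }
}
```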
This might be different if you just wanted to obtain an existing Schedule object. Instead of calling a domain service, you would ask a domain repository to get the existing schedule. The repository would be the way of encapsulating or otherwise obscuring the data layer from your PL and from your Domain.
Now, what you DON'T want to do:
Do not get the list of jobs in your PL, and then use that list of jobs to create the schedule in the controller of your MVVM. This would be business logic that defines your domain.
If Schedules are commonly generated from Jobs, regardless of whether the generation is called from MVVM or a PHP site, then don't add complexity in your PL and domain layer by forcing the PL to first get the jobs and pass them back into the Domain for a Schedule to be generated. The fact that those two concepts are tied to each other means that the relationship helps define your domain, and thus belongs in your domain layer. An exception might be when both the jobs and the schedule to be modified rely on context from the front end (user input), but even this isn't always an exception.
Do not pass in VMs to your domain. Let your controller filter out the data and determine what needs to be sent to which domain part.
It's really hard to give precise detail about what you should place where, because only you have a clear view of what defines your domain, but here's essentially how I break it down:
Could I change/replace this without affecting how my business/domain works?
If the answer is yes, it does not belong in your domain. Example: you could replace your entire MVVM front end with flat PHP or ASPX, and even though it'd be a lot of work and a huge pain, you could do it without affecting how the rest of the business operates.

Fake service mocks for local development

This has happened to me more than once, so I thought someone could give some insight.
I have worked on multiple projects where my project depends on an external service. When I have to run the application locally, I need that service to be up. But sometimes I am coding against the next version of their service, which may not be ready yet.
So the question is: is there already a way to have a mock service up and running that I could configure with some requests and responses?
For example, let's say that I have a local application that needs to make a REST call to some other, external service to obtain some data. Say, for a given user, I need to find all pending shipments, which would come from the other service. But I don't have access to that service.
In order to run my application, I need a working external service, but I don't have access to it in my environment. Is there a better way than having to create a fake service myself?
You should separate the communication concerns from your business logic (something I call an "Edge Component"; see here and here).
For one, it will let you test the business logic by itself. It will also give you the opportunity to rethink the temporal coupling you currently have; e.g. you may want the layer that handles communications to pre-fetch, cache, etc. data from other services, so that you also have more resilient services at run time.
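A sketch of such an edge component, where the external call is hidden behind a small interface so a fake implementation can be swapped in for local development (ShipmentGateway and the pending-shipments example follow the question; the URL and field names are placeholders):

```typescript
interface Shipment { id: string; status: "pending" | "delivered"; }

interface ShipmentGateway {
  findPendingShipments(userId: string): Promise<Shipment[]>;
}

// Real implementation: the only place that knows about the remote REST API.
class HttpShipmentGateway implements ShipmentGateway {
  constructor(private readonly baseUrl: string) {}

  async findPendingShipments(userId: string): Promise<Shipment[]> {
    const res = await fetch(`${this.baseUrl}/users/${userId}/shipments?status=pending`);
    return (await res.json()) as Shipment[];
  }
}

// Fake implementation for local development: canned responses, no network,
// and you can already code against the next version's shape.
class FakeShipmentGateway implements ShipmentGateway {
  async findPendingShipments(userId: string): Promise<Shipment[]> {
    return [{ id: `${userId}-1`, status: "pending" }];
  }
}

// The business logic depends only on the interface, so either can be injected.
async function countPendingShipments(gateway: ShipmentGateway, userId: string): Promise<number> {
  return (await gateway.findPendingShipments(userId)).length;
}
```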