SOA Service Boundary and production support - REST

Our development group is working towards building up a service catalog.
Right now we have two groups: one to sell a product, another to service that product.
We have one particular service that calculates whether the price of a product is profitable. When a sale occurs, the sale can be overridden by a manager. This sale must also be represented in another system that tracks various sales, and the numbers must match. The parameters of profitability also change, and are different from month to month, but a sale may be based on a previous set of parameters.
Right now the sale profitability service only calculates the profit; it is exposed via a RESTful URI.
One group of developers has suggested that the profitability service also support these "manager overrides", as well as a date parameter to calculate based on a previous date. Of course, the sales group of developers disagrees. If the service won't support this, the servicing developers will have to do an ETL between the two systems for each product, instead of just calling the profitability service. Right now, since we do not have a set of services to handle this, production support gets the request and then has to update the one or more systems associated with that given product.
So, one: if a service works for a narrow slice, but an exception-based business process breaks it, does that mean the boundaries of the service are incorrect and need to account for the change in business process?
Two: does adding a date parameter extend the boundary of the service too much, or should it be expected that if the service already has the parameters, it would also have a history of those parameters? At the moment we do not have a service that only stores the parameters, as no one has expressed a need for it.
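To make question two concrete, here is a minimal sketch of what a date parameter could mean in practice: parameter sets carry an effective date, and a calculation is resolved "as of" the sale date. All the names here (ParameterSet, parameters_as_of, the margin_floor threshold) are hypothetical illustrations, not part of our actual service.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ParameterSet:
        effective_from: date   # first day this parameter set applies
        margin_floor: float    # hypothetical profitability threshold

    # Parameter history, newest first; in a real service this is a table.
    PARAMETER_HISTORY = [
        ParameterSet(date(2020, 3, 1), 0.18),
        ParameterSet(date(2020, 2, 1), 0.15),
    ]

    def parameters_as_of(as_of):
        """Return the parameter set that was in force on the given date."""
        for ps in PARAMETER_HISTORY:
            if ps.effective_from <= as_of:
                return ps
        raise LookupError(f"no parameters on record for {as_of}")

    def is_profitable(price, cost, as_of):
        ps = parameters_as_of(as_of)
        return (price - cost) / price >= ps.margin_floor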
If there is any clarification needed before an answer can be given, please let me know.

I think the key here is: how much pain would be incurred by both teams if an ETL were introduced between the two?
Not that I think you're over-analysing this, but if I may: you probably have an opinion that adding a date parameter to the service contract is bad, but you also dislike the idea of the ETL.
Well, strategy aside, I find these days my overriding instinct is to focus less on the technical ins and outs and more on the purely practical.
At the end of the day, ETL is simple, reliable, and relatively pain-free to implement; however, it comes with side effects. The main one is that you'll be coupling changes to your service's DB schema to an outside party, which will limit your options to evolve the service in the future.
On the other hand, allowing consumer demand to dictate service evolution is easy and low-friction, but also a rocky road, as you may become beholden to that consumer at the expense of others.
Another possibility is to deliver the extra parameters to the consumer via a message, rather than through the service itself. This would allow you to keep your service boundary intact and let the consumer hold the necessary parameters locally, as in the sketch below.
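A minimal sketch of that message-based option, assuming a hypothetical publish/subscribe channel and a made-up event name (ProfitabilityParametersUpdated); the consuming side caches the parameters locally and never has to call the profitability service for them:

    import json

    # Hypothetical in-process stand-in for a message broker (a queue or
    # topic in a real system).
    subscribers = []

    def publish(event):
        payload = json.dumps(event)  # simulate serialization over the wire
        for handler in subscribers:
            handler(json.loads(payload))

    # Consuming (servicing) side: cache parameters locally, keyed by month.
    local_parameter_cache = {}

    def on_parameters_published(event):
        local_parameter_cache[event["effective_month"]] = event["parameters"]

    subscribers.append(on_parameters_published)

    # Owning (profitability) side publishes whenever the parameters change.
    publish({
        "type": "ProfitabilityParametersUpdated",  # hypothetical event name
        "effective_month": "2020-02",
        "parameters": {"margin_floor": 0.15},
    })

    assert local_parameter_cache["2020-02"]["margin_floor"] == 0.15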
Apologies if this does not address your question directly.

Related

CQRS projections, joining data from different aggregates via probe commands

In CQRS, when we need to create custom-tailored projections for our read models, we usually prefer denormalized projections (assume we are talking about projecting onto a DB). It is not uncommon for the information needed by the application/UI to come from different aggregates (possibly from different BCs).
Imagine we need a projected table to contain a customer's information together with her full address, and that Customer and Address are different aggregates in our system (possibly in different BCs). Meaning that addresses are generated and maintained independently of customers. Or, in other words, when a new customer is created, there is no guarantee that an AddressCreatedEvent will subsequently be produced by the system; this event may have already been processed prior to the creation of the customer. All we have at the time of CreateCustomerCommand is a UUID of an existing address.
We have several solutions here.
Enrich CreateCustomerCommand and the subsequent CustomerCreatedEvent to contain the full address of the customer (looking this information up on the fly from the UI or the controller). This way the projection handler will just update the table directly upon receiving CustomerCreatedEvent.
Use the addrUuid provided in CustomerCreatedEvent to perform an ad-hoc query in the projection handler to get the missing part of the address information before updating the table.
These are the commonly discussed solutions to this problem. However, as noted by many others, there are problems with each approach. Enriching events can be difficult to justify, as well described by Enrico Massone in this question, for example. Querying other views/projections (a kind of JOIN) will work, but introduces coupling (see the same link).
I would like to describe another method here which, I believe, nicely addresses these concerns. I apologize beforehand for not giving proper credit if this is a known technique; sincerely, I have not seen it described elsewhere (at least not as explicitly).
"A picture speaks a thousand words", as they say:
The idea is that:
We keep CreateCustomerCommand and CustomerCreatedEvent simple, with only the addrUuid attribute (no enriching).
In the API controller we send two commands to the command handler (aggregates). The first one, as usual, is CreateCustomerCommand, to create the customer and project the customer's information together with addrUuid to the table, leaving the other columns (full address, etc.) empty for the time being. (Warning: see Update 1 below; we may have a concurrency issue here and need to issue the probe command from a Saga.)
Right after this, once we have obtained the custUuid of the newly created customer, we issue a special ProbeAddressCommand to the Address aggregate, triggering an AddressProbedEvent which encapsulates the full state of the address together with a special attribute probeInitiatorUuid, which is, of course, our custUuid from the previous command.
The projection handler then acts upon AddressProbedEvent by simply filling in the missing pieces of information in the table, looking up the required row by matching the provided probeInitiatorUuid (i.e. custUuid) and addrUuid.
So we have two phases: create the Customer and probe for the related Address. They are depicted in the diagram as (1) and (2) respectively.
Obviously, we can send as many such "probe" commands (in parallel) as needed by our projection: ProbeBillingCommand, ProbePreferencesCommand, etc., effectively populating or "filling in" the denormalized projection with missing data from each handled "probe" event.
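A minimal sketch of the two projection handlers; the event payloads and the table stand-in are hypothetical, just to show the two phases side by side:

    projection = {}  # stand-in for the projected table: custUuid -> row

    def on_customer_created(event):
        # Phase (1): insert the row with only the UUIDs; the address
        # columns stay empty for the time being.
        projection[event["custUuid"]] = {
            "name": event["name"],
            "addrUuid": event["addrUuid"],
            "street": None,
            "city": None,
        }

    def on_address_probed(event):
        # Phase (2): fill in the missing pieces, matching the row by
        # probeInitiatorUuid (i.e. custUuid) and addrUuid.
        row = projection[event["probeInitiatorUuid"]]
        if row["addrUuid"] == event["addrUuid"]:
            row["street"] = event["street"]
            row["city"] = event["city"]

    on_customer_created({"custUuid": "c-1", "name": "Ada", "addrUuid": "a-9"})
    on_address_probed({"probeInitiatorUuid": "c-1", "addrUuid": "a-9",
                       "street": "1 Main St", "city": "Springfield"})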
The advantage of this method is that we keep the commands/events of the first phase simple (only UUIDs to other aggregates), while avoiding synchronous coupling (joining) of the projections. The whole approach has a nice EDA feel to it.
My question, then, is: is this a known technique? It seems I have not seen it described anywhere... And what can go wrong with this approach?
I would be more than happy to update this question with references to other sources which describe this method.
UPDATE 1:
There is one significant flaw with this approach that I can already see: ProbeAddressCommand cannot be issued before the projection handler has had a chance to process CustomerCreatedEvent. But this is impossible to know from the API gateway (or controller).
The solution would probably involve a Saga, say CustomerAddressJoinProjectionSaga, which will start upon receiving CustomerCreatedEvent and only then issue ProbeAddressCommand. The Saga will end upon registering AddressProbedEvent, or, if many other aggregates are involved in probing, when all such events have been received.
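A rough sketch of such a saga, again with hypothetical names and an in-memory stand-in for the command bus (assumed here to expose a send method):

    class CustomerAddressJoinProjectionSaga:
        """Starts on CustomerCreatedEvent, then (and only then) issues
        ProbeAddressCommand; ends on AddressProbedEvent."""

        def __init__(self, command_bus):
            self.command_bus = command_bus
            self.pending = {}  # custUuid -> addrUuid still awaiting a probe

        def on_customer_created(self, event):
            self.pending[event["custUuid"]] = event["addrUuid"]
            # Safe to probe now: the customer row is known to exist.
            self.command_bus.send({
                "type": "ProbeAddressCommand",
                "addrUuid": event["addrUuid"],
                "probeInitiatorUuid": event["custUuid"],
            })

        def on_address_probed(self, event):
            # The saga instance completes once the probe reply is registered.
            self.pending.pop(event["probeInitiatorUuid"], None)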
So here is the updated diagram.
UPDATE 2:
As noted by Levi Ramsey (see the answer below), my example is rather convoluted with respect to the choice of aggregates. Indeed, Customer and Address are often conceptualized as belonging together (to the same Aggregate Root). So it is a better illustration of the problem to think of something like Student and Course instead, assuming for the sake of simplicity that there is a straightforward relation between the two: a student takes a course. This way it is more obvious that Student and Course are independent aggregates (students and courses can be created and maintained at different times and in different places in the system).
But the question still remains: how can we obtain a projection containing the full information about a student (full name, etc.) and the courses she is registered for (title, credits, the instructor's full name, prerequisites, etc.), all in the same table, if the UI requires it?
A couple of thoughts:
I question why Address needs to be a separate aggregate, much less in a different bounded context, in view of the requirement that customers have an address. If customer addresses are meaningful in some other bounded context (e.g. you want to know which addresses have the most customers), then that context can subscribe to the events from the customer service.
As an alternative, if there's a particularly strong reason to model addresses separately from customers, why not have the read side prospectively listen for events from the address aggregate and store the latest address for a given address UUID, in case a customer ends up with that address? I would expect the reliability per unit of effort of that approach to be somewhat greater.
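A sketch of that alternative: the read side records every address it ever sees, then joins locally when (and if) a matching customer arrives. Event shapes and names are hypothetical.

    addresses_by_uuid = {}  # every address event ever seen, keyed by addrUuid
    projection = {}         # custUuid -> denormalized row

    def on_address_created(event):
        # Store prospectively, even if no customer references this address yet.
        addresses_by_uuid[event["addrUuid"]] = event

    def on_customer_created(event):
        addr = addresses_by_uuid.get(event["addrUuid"], {})
        projection[event["custUuid"]] = {
            "name": event["name"],
            "street": addr.get("street"),  # usually already known by now
            "city": addr.get("city"),
        }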

Arbitrary Groups In REST

I was recently recommended a talk by Jim Webber.
And there was a very interesting point in there.
Jim says that people tend to assume a 1-1 correspondence between rows in the database, domain objects, and resources in a REST service. This makes it hard when you want to transact work across arbitrary groups.
He goes on to point out that if you have, say, 3 users and want to update them, you do it sequentially, which is very poor, because you have to track each request and handle failures if 1 out of the 3 (or however many) updates fails.
He mentioned that the way you should handle this is to make a resource for all 3 of the users. Resources are cheap and infinite (you can make as many as you want), so use them. So create that resource and, in a single operation, PUT their status update.
This is an extremely interesting point to me, as there have been times when I have wanted to perform an operation on multiple things that I considered to be singular.
So here is an example:
Say I have a list of users, say 100. Users are their own thing/resource. I want to pick some number of users out of that list (say 10, at random) and apply 50 points to each of them.
I want to apply these points to users that have no unique connection in the domain; they are just a random group of users, an arbitrary group.
How would I create a REST endpoint/resource, as Jim Webber is implying, to handle this operation?
Now, in my admittedly old frame of mind, I would go about it by making a specific resource like users/points/bulk/ (or something) and passing in a list of user IDs and the points to apply to them. I would never have had the mindset of treating them as a resource; I would have just had a hacky command REST endpoint to perform it.
This point Jim made is really something I had never considered, and it is such a change of mindset that it would really make things cleaner.
Could someone explain this to me and give an example of how it would look?
Thanks
He mentioned that the way you should handle this is to make a resource for all 3 of the users. Resources are cheap and infinite (you can make as many as you want), so use them. So create that resource and, in a single operation, PUT their status update.
...
How would I create a REST endpoint/resource, as Jim Webber is implying, to handle this operation?
The basic rule of thumb here is: how would you do it on the Web? As REST is just a generalization of the interaction model that allowed the Web to grow to its current size, the same concepts that proved successful on the Web can (and should) be used in a REST architecture.
What is a group of resources, actually? If you think about most sports played in teams, such as football, almost all players can be divided into certain groups: players of Team A and players of Team B, or all defensive players, or all attacking players. Each player is a resource in its own right, but each of these groups is a resource as well, since we can give it a name. We can then talk about the group instead of the individual players, which allows us to include all of them in a single, short statement instead of referencing each player individually. A statement such as "Team A beat the crap out of Team B" most likely subsumes that each player on Team A played better than their counterparts on the opposing team.
It is now only a matter of providing clients with the toolset to group resources together. On a typical HTML page you could, for instance, have a table representation of all the active football players of the season across all teams, with a checkbox to select certain players and some control element, such as a submit button, that allows you to create a group from the selected players. The backing HTML form contains not only the actual data set you can select specific players from and a submit button, but also the target URI the request has to be sent to and the request method to use. HTML by default uses application/x-www-form-urlencoded as the representation format to send the data to the server, which knows, depending on the invoked endpoint, the HTTP operation used, and the media type received, how to process the data accordingly.
As a new resource will be created as a consequence of the previous grouping request, the server will respond with a 201 Created response code and a Location HTTP header whose value is a URI pointing to the location where the newly created grouping is accessible. The client may now be redirected to that URI automatically, or it can use the returned URI to invoke further operations on that resource. As the domain model does not (and probably should not) need to match the resource or affordance model, each of the individual player resources as well as the team resource may use the same database entries to present their data to the client. On updating one resource (either an individual player or the team as a whole), other resources may be influenced by this operation as well.
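In code, the interaction could look like the following sketch (Python with the requests library; the API root, the /groups endpoint, and the /points sub-resource are all made up for illustration):

    import requests

    BASE = "https://api.example.org"  # hypothetical API root

    # Step 1: create a resource representing the arbitrary group of users.
    resp = requests.post(f"{BASE}/groups",
                         json={"members": ["/users/1", "/users/7", "/users/42"]})
    assert resp.status_code == 201
    group_uri = resp.headers["Location"]  # e.g. .../groups/123

    # Step 2: operate on the whole group as one resource, in one request.
    requests.put(f"{group_uri}/points", json={"award": 50})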
If you take a look at the definition of PUT in the HTTP specification, you can read something like this:
A PUT request applied to the target resource can have side effects on other resources.
Due to this side effect, it is possible for an update performed via PUT to achieve something similar to a partial update:
Partial content updates are possible by targeting a separately identified resource with state that overlaps a portion of the larger resource, or by using a different method that has been specifically defined for partial updates (for example, the PATCH method defined in RFC5789).
I.e., if you update Player 1 of Team A via PUT, this creates as a side effect a partial update of the state of Team A, as both resources use the same data the data model provides for that particular player.
In order to achieve the same functionality in a REST architecture, as mentioned before, the same concept should be used: provide a client with structured data it can select a subset from, and let it perform operations on that subset, such as creating a new resource for the selected elements. In contrast to the Web, where HTML is dominant, the supported media types may vary drastically in a REST architecture. Here, content-type negotiation is a very important part, as it allows the server to choose the most suitable representation format supported by the client. Instead of proprietary representation formats, standardized formats should be used to increase the likelihood that clients not under your control can interact with your system. While there is an ongoing effort to introduce media types that support clients with feedback in the form of forms similar to the ones used in HTML, there is no de facto standard form representation, except for HTML, that is widely accepted yet. There are a couple of (especially JSON-based) approaches, such as hal-forms, halo+json, Ion, or Hydra, in the works, though, as mentioned, nothing that is really widely used in production.
As your actual intention is to update a bunch of resources atomically, you could also use PATCH here, without the need to create new resources, as PATCH is defined to perform all of its instructions atomically: either all succeed or none at all. In the spec, PATCH is defined similarly to how patching is understood in software engineering: a sequence of instructions applied to a resource to transform it into the desired output. application/json-patch+json is a representation format quite close to that definition, whereas application/merge-patch+json has a totally different take, defining default rules to apply depending on whether the request contains a modified or nullified field value. As the latter representation format can only work on a single resource, the former could be used for a batch update: by targeting the collection resource directly, JSON Pointers can be used to address the respective fields of the sub-resources in that collection.
To avoid data loss via PATCH operations, due to intermediary updates between fetching the most recent state, calculating the necessary steps, and sending the request to the API, an optimistic locking approach should be used; this is achievable via conditional requests, such as If-Match with an ETag.
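Concretely, a conditional batch update via application/json-patch+json against the collection resource might look like this sketch (hypothetical /users collection, assumed to be represented as {"users": [...]}):

    import requests

    BASE = "https://api.example.org"  # hypothetical API root

    # Fetch the collection to learn its current state and version (ETag).
    resp = requests.get(f"{BASE}/users")
    etag = resp.headers["ETag"]

    # RFC 6902 patch document; the JSON Pointers assume the collection is
    # represented as {"users": [...]}, so sub-resources are addressed by index.
    patch = [
        {"op": "replace", "path": "/users/0/points", "value": 50},
        {"op": "replace", "path": "/users/7/points", "value": 50},
    ]

    # If-Match makes the request conditional: the server answers
    # 412 Precondition Failed if the collection changed in the meantime.
    requests.patch(f"{BASE}/users", json=patch,
                   headers={"Content-Type": "application/json-patch+json",
                            "If-Match": etag})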
While patching gives you the capability to apply changes atomically, I feel that grouping resources together, when they naturally form a group as in the player/team example, is more natural and, IMO, better reuses the interaction model proposed by REST.

Microservice Adapter: one adapter for many countries, or one adapter per country? An architectural/deployment decision

Say I have System1, which connects to System2 through an adapter microservice between them.
System1 -> REST calls -> Adapter (converts request/response + some extra logic, like validation) -> System2
System1 is more like a monolith and exists for many countries (but that may change).
The question is: from the perspective of microservice architecture and deployment, should the Adapter be one per country (say Adapter-UK, Adapter-AU, etc.), or should it be a single Adapter that can handle many countries at the same time?
I mean:
To have a single system/adapter service:
Advantage: one code base in one place; about 90% of the adaptation logic is the same across countries. Easy to introduce new changes.
Disadvantage: once we deploy the system and there is a bug, it could affect many countries at the same time. Not safe.
To have separate systems:
Disadvantage: once a generic change has been introduced to one system, it has to be "copy-pasted" into all the other countries/services. Repetitive, not-smart work from a developer's point of view.
Advantage:
Safer to change/deploy.
Q: What is the preferable way from the point of view of microservice architecture?
I would suggest the following:
the adapter covers all countries, in order to maintain a single codebase and improve code reusability
unit and/or integration tests to cope with the bugs
spawn multiple identical instances of the adapter with a load balancer in front
Since Prabhat Mishra asked in the comments:
After two years... (it took some time to understand what I had asked.)
Back then, it was quite critical for me to have a resilient system, i.e. if I changed the code in one adapter I did not want all my countries to go down (it is an SAP enterprise system, millions of clients). I wanted only one country to go down (still millions of clients, but fewer millions :)).
So, for this case I would create many adapters, one per country, BUT I would use some code-generated common solution to create them, like a common jar, so that I would not repeat my infrastructure or communication layers. A scaffolding kind of thing:
country-adapter new "country-1"
then add some country-specific code (without changing any of the generated code: less code to repeat, less to support), as in the sketch below.
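A minimal sketch of that shared-core idea, with hypothetical names: the generated/common part lives in a base class, and each country adapter only adds its specific mapping, so deploying one country cannot break another.

    class BaseAdapter:
        """Common, generated infrastructure: transport, validation, logging.
        Changing this layer is a deliberate, all-countries decision."""

        def handle(self, request):
            self.validate(request)
            converted = self.convert(request)   # country-specific hook
            return self.call_system2(converted)

        def validate(self, request):
            if "payload" not in request:
                raise ValueError("missing payload")

        def convert(self, request):
            raise NotImplementedError  # each country adapter provides this

        def call_system2(self, request):
            return {"status": "ok", "sent": request}  # stand-in for the real call

    class UkAdapter(BaseAdapter):
        # The only country-specific code; deploying this cannot break AU.
        def convert(self, request):
            return {"payload": request["payload"], "vat_scheme": "UK"}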
Otherwise, if you feel safe (e.g. code-reviewing your changes, making sure you do not touch other countries' code), then 1 adapter is OK.
Another solution is to start with 1 adapter and split into more when it becomes critical (when you do not feel safe about the potential damage/cost of a failure).
In general, it seems, it all boils down to the same old problem: WHEN to split a "monolith" into pieces. The answer is always: when being as big as it is starts causing problems. But if you know your system well, you may know in advance that WHEN is now (or not).

Migrating to Bounded Context

I currently have a Web API project which has all the system processing in the same solution. I'm breaking this out into separate solutions so that they can be run independently (e.g. as an Azure WebJob), as I don't want to have to redeploy the Web project if something in the back end has changed.
My issue is that even though I have separated the logic, the pieces are tied together by a single context, so if I make a change in one I will have to redeploy all of them, as the migrations won't match up.
That's why I've been looking at Bounded Contexts and DDD. I'm looking at how to break this up, but I'm having trouble understanding how relationships work.
A lot of the site is administrative (i.e. creating entities, no actual processing), so I was going to split contexts around this, e.g.:
A user adds and maintains currency conversion rates (this is two entities in total).
A user adds and maintains details on how to process payments (note that this is not processing payments; it only holds information about PayPal account details etc.).
So I was splitting the contexts up by this; does this sound reasonable to start with (there are a lot more like this, such as tax bands, charge structures, etc.)?
If this is the way to go, how do I handle relationships between those two contexts? As an example:
A payment method requires a link to an 'active' currency conversion. I understand I can just store this as an ID, but I need to check its state, so I need access to the model.
A currency conversion can only be set to 'Inactive' if there are no payment methods currently using it. Again, this needs access to the other model.
So logically the models need access to each other; how would this be handled across contexts? Can I add navigation properties to a model in a different context? Or should I add it as a separate DbSet and possibly map it using a view?
Thanks
So I was splitting the contexts up by this; does this sound reasonable to start with (there are a lot more like this, such as tax bands, charge structures, etc.)?
"So that they can be ran and deployed independently" may not be a sufficient heuristic to tell when you should split Bounded Contexts. This addresses one aspect of the solution space, but if you haven't looked well enough at the problem space, you'll suffer from a misalignment between BC's and subdomains that can cause a lot of friction. You might end up always deploying a cluster of seemingly unrelated "independently deployable units" together because you didn't realize they talk about the same thing.
Identifying subdomains is the product of distillating your business - separating the big functional areas and defining which parts are your core domain and which are ancillary activities. Each subdomain has its own specific semantics (Ubiquitous Language). In your case, as has been pointed out in the comments, Currency Conversion and Payment Methods might well be part of the same subdomain (Payment?). It does not automatically mean that they should also be in the same BC but it might be a good idea to keep subdomains aligned 1-to-1 with BC's, as additional BC's come at a cost.
Back to deployability, even if it can be one beneficial effect of Bounded Contexts, they are not always so easily translatable in terms of independent units of deployment. Context mapping patterns (Shared Kernel, Customer Supplier, etc.) and BC communication in general can lead to a model, and therefore a part of a codebase, being shared by multiple BC's. Code and API synchronization issues arise that can question a simplistic "deployable free electron" view.
Just because you're using the Bounded Context approach doesn't mean you have to use DDD's tactical patterns (Aggregate Root, invariants, etc.) inside each BC. Using them should be an educated decision to trade solution space complexity off for problem space manageability. If "Currency Conversion can only be set to inactive..." is the only rule pertaining to payment method and currency management in your business, it might not be worth the bother to give that Bounded Context a full-fledged rich domain model. CRUD could be better suited there.
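For the two concrete rules in the question, a CRUD-ish sketch of one common option: keep only the ID across the boundary and check state through the owning context's service interface. All names here are hypothetical, not a prescription.

    class CurrencyConversionService:
        """Owned by the currency context; other contexts see only this
        interface (or, in a distributed setup, an equivalent HTTP API)."""

        def __init__(self):
            self._active = {"ccr-1"}   # active conversion IDs
            self._usage = {}           # conversion ID -> payment methods using it

        def is_active(self, conversion_id):
            return conversion_id in self._active

        def register_usage(self, conversion_id):
            self._usage[conversion_id] = self._usage.get(conversion_id, 0) + 1

        def deactivate(self, conversion_id):
            # A conversion may only go inactive if no payment method uses it.
            if self._usage.get(conversion_id, 0) > 0:
                raise ValueError("still referenced by payment methods")
            self._active.discard(conversion_id)

    def create_payment_method(conversions, conversion_id):
        # The payment context stores only the ID and asks the owning context
        # about state; no navigation property crosses the context boundary.
        if not conversions.is_active(conversion_id):
            raise ValueError("currency conversion is not active")
        conversions.register_usage(conversion_id)
        return {"currency_conversion_id": conversion_id}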

Is this generic REST service a good idea?

I have a question about whether or not a particular REST service design is good.
The background is an in-house monolithic system (I will call it "the main system") dealing with, e.g., customers. Then there are external components that have additional information on persons, which may or may not correspond 1-1 with a customer in the main system.
At present there is no definite specification of what kind of data is, or may be, associated with a person/customer in these external components.
The proposed design I have been presented with is a REST service that exposes an API for the external systems to call, in order to feed the component with this arbitrary data associated with persons.
The idea is that by doing so, the main system will have a single place to go to get the external data for customers/persons.
A proposed requirement of this REST service is that as new types of data are loaded into it by an external component, this data is automatically made accessible by the service, without the service needing to be changed in any way, or redeployed. And "new data" generally means a new type of key-value set. E.g., initially the service might provide data for a customer identified by a customerId. Then an external component decides to post some kind of data associated with an SSN. This should automatically entail that the service can be queried for this data by supplying the SSN in the request.
In order to avoid the need to change/redeploy the service, I'm assuming the solution will have to have a very generic scheme of reference, e.g.
http://url/generic-resource-name/?id=[customerId]&keyType=customerId
There is really nothing in the requirements that limits the data to being associated with a person, only that its key be made up of one value.
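To make that generic scheme concrete, here is roughly what such a service collapses into (a hypothetical Flask app; endpoint and payload shapes made up). Note that it is little more than a remote dictionary, which is part of the smell discussed in the answer below.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # keyType -> key -> arbitrary blob. This is the entire "domain model".
    store = {}

    @app.route("/generic-resource-name/", methods=["POST"])
    def post_data():
        body = request.get_json()
        store.setdefault(body["keyType"], {})[body["id"]] = body["data"]
        return "", 204

    @app.route("/generic-resource-name/", methods=["GET"])
    def get_data():
        key_type, key = request.args["keyType"], request.args["id"]
        return jsonify(store.get(key_type, {}).get(key))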
An example use case sequence could be:
So to the question:
Is it a good idea to implement such a general-purpose service? And how does it square with the principles of REST? The noun the service will operate on will have to be very generic, really nothing short of "resource" or "data", which in itself seems like a smell to me.
So to the question: Is it a good idea to implement such a general-purpose service?
I believe not. You are heading straight into the Inner-platform effect antipattern. You must be very careful, or you might end up like Vision.
Please also read the chapter "The Allure of Distributed Objects" from Fowler's PoEAA book. Just to be careful.