When does SOAP make more sense than REST?

From my understanding of REST, the implicit assumption is that all operations are CRUD operations. Sometimes, though, you are not doing CRUD; you are doing more complex logic. In that case, isn't SOAP more suitable? Or is every operation, no matter how complex, really a series of CRUD operations, so that it should be split up into a set of smaller CRUD operations called one after the other? But doesn't that make the operation you are attempting more cumbersome to write? I am trying to understand when it might make more sense to use SOAP instead of REST.
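For instance (a made-up illustration on my part): transferring funds between two accounts isn't an update of any single record, but I suppose it could be re-modeled as the creation of a new resource, with the complex logic hidden behind it:

POST /transfers
{ "from": "account-1", "to": "account-2", "amount": 100 }

Is that the intended style, or is this the point where SOAP's operation-oriented model is the better fit?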

I work in banking, at one of the biggest banks in Russia.
We use SOA and we have a lot of web services.
We like that, because:
it's simple to decompose the work: one task -> one service
it's simple to manage web services (service orchestration vs. service choreography)
it's pretty simple to localize bugs, because you only have to rewrite one service instead of all the logic (with REST we would rewrite all the logic whenever a bug was found)
it's simple to create a map of the organisation from SOAP web services (we can then join web services together to get another logical unit)


CQRS Read Models & REST API

We are implementing a REST API over our CQRS services. We of course don't want to expose any of our domain to users of the REST APIs.
However, a key tenet of CQRS is that the read models generally correspond to a specific view or screen.
That being the case, it seems logical that the resources in our REST API will map virtually 1:1 to the read/view models from our queries (where the queries return a DTO containing all the data for the view). Technically this exposes a part of our domain (the read models, albeit returned as DTOs), but in this case that seems to be what we want. Any potential downsides to being so closely coupled?
In terms of commands, I have been considering an approach like:
https://www.slideshare.net/fatmuemoo/cqrs-api-v2. There is a slide that indicates that commands are not first class citizens. (See slide 26). By extension, am I correct in assuming that the DTOs returned from my queries will always be the first class citizens, which will then expose the commands that can be executed for that screen?
Thanks
Any potential downsides to being so closely coupled?
You need to be a little bit careful in terms of understanding the direction of your dependencies.
Specifically, if you are trying to integrate with clients that you don't control, then you are going to want to agree upon a contract -- message semantics and schema -- that you cannot change unilaterally.
Which means that the representations are relatively fixed, but you have a lot of freedom in how you implement the production of that representation. You make a promise to the client that they can get a representation of report 12345, and it will have some convenient layout of the information. But whether that representation is something you produce on demand or something that you cache, and how you build it, is entirely up to you.
At this level, you aren't really coupling your clients to your domain model; you are coupling them to your views/reports, which is to say to your data model. And, in the CQRS world, that coupling is to the read model, not the write model.
In terms of commands, I have been considering an approach like...
I'm going to gently suggest that the author, in 2015, didn't have a particularly good understanding of REST by today's standards.
The basic problem here is that the author doesn't recognize that caching is a REST constraint; and the design of our HTTP protocols needs to consider how general purpose components understand cache invalidation.
Normally, for a command (meaning here "a message intended to change the representation of the resource"), you want the target URI of the HTTP request to match the identifier of the primary resource that changes.
POST /foo/123/command
Isn't particularly useful, from the perspective of cache invalidation, if nobody ever sends a GET /foo/123/command request.
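To sketch the alternative (my own illustration; the URI is the same made-up one as above): aim the state-changing request at the resource whose representation actually changes,

POST /foo/123

so that a successful response invalidates any cached representation of /foo/123, which is the resource clients actually GET.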

.NET Core REST API Request/Response best practice

I need some advice on how best to structure the requests and responses for my REST API.
I'm mostly trying to limit myself to CRUD operations on one resource, and I work with one object: for example, if the resource is "book" I end up with the following actions in the controller:
[HttpPost("books")] Book Create(Book book)
[HttpGet("books/{id}")] Book Get(int id)
This is relatively straightforward.
Now for a more complex example. For the creation of a resource, I need to receive a complex object different from my resource and return an object containing the resource plus extra data.
For example, for the Order resource I have the following action in the controller:
[HttpPost("/order")] CreateOrderResponse CreateOrder(CreateOrderRequest createOrderRequest)
Here my action will use the CreateOrderRequest object to build an Order.
Then I would like to return a CreateOrderResponse object which contains the Order but also extra information that the client needs.
I'm not sure this is the best way to go; any advice?
Thanks in advance for your help
I prefer the following:
[HttpPost("/order")] CreateOrderResponse CreateOrder(CreateOrderRequest createOrderRequest)
And here is why:
With this approach, you are able to protect your public API from implementation details. If you expose your domain model directly through your API, you cannot make the same guarantee.
You can also make your validations specific to the request format. In some cases, you might require one subset of your model when creating a record and another subset when editing data. This approach will allow you to handle that scenario as well.
Security. Were you going to add that Book right to a DbContext and save it? Or attach it and update directly? Those would be potential issues from security and data quality perspectives.
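To make that concrete (an illustrative sketch; the endpoint and field names are invented): the request DTO carries only what the client may supply, and the response wraps the created resource plus the extra data the client needs:

POST /order
{ "customerId": 42, "items": [ { "bookId": 7, "quantity": 1 } ] }

201 Created
{ "order": { "id": 1001, "status": "Pending" }, "estimatedDelivery": "2024-06-01" }

Nothing about the persistence model (keys, navigation properties, audit columns) leaks into either message.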
But there are downsides:
This approach is time-consuming. It may not be worth the time invested if you are writing something as a learning exercise or a quick implementation. And it adds complexity. But then, you might run into that complexity anyway when you realize your Book object doesn't suffice in all cases.
You will feel like there is duplicate code in different places. The code may appear to be the same, but the use cases are actually different and may diverge over time. Having a Book parameter will be a liability at that point.

How to retrieve data from another bounded context in DDD?

Initially the app runs on the desktop; however, it will run on the web platform in the future.
There are several bounded contexts in the app, and some of them need to retrieve data from another. I don't know which approach to use for this case.
I thought of using the mediator pattern: bounded context "A" requests data "X", the mediator calls another bounded context "B" and gets the correct data "X", and finally the mediator brings data "X" back to BC "A".
This scenario will change when the app runs on the web, so I've thought of having one microservice request data from another microservice using the mediator pattern too.
Are both approaches sound, or is there a better solution?
Could anyone help me, please?
Thanks a lot!
If you're retrieving data from other bounded contexts through either DB or API calls, your architecture might degenerate into the "death star" pattern, because it introduces unwanted coupling and knowledge into the client context.
A better approach might be to look at event-driven mechanisms like webhooks or message queues as a way of emitting the data you want to share to subscribing context(s). This is good because it reduces coupling between your bounded contexts through data replication across contexts, which results in greater bounded-context independence.
This gives you the feeling of: "Who cares if bounded context B is not available at the moment? Bounded contexts A and C have the data they need inside them, and I can resume syncing later, since my data-update events are recorded on my queue."
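A minimal sketch of that idea in code (plain Scala, with an in-memory bus standing in for a real message queue; all names are invented):

import scala.collection.mutable

object EventDrivenSketch extends App {
  // A domain event owned and published by bounded context B.
  case class CustomerRenamed(customerId: String, newName: String)

  // In-memory stand-in for a real message queue (RabbitMQ, Kafka, etc.).
  object EventBus {
    private val handlers = mutable.Buffer.empty[CustomerRenamed => Unit]
    def subscribe(h: CustomerRenamed => Unit): Unit = handlers += h
    def publish(e: CustomerRenamed): Unit = handlers.foreach(_(e))
  }

  // Context A keeps its own replicated copy of the data it needs,
  // so it never has to call B synchronously.
  object ContextA {
    private val names = mutable.Map.empty[String, String]
    def start(): Unit = EventBus.subscribe(e => names(e.customerId) = e.newName)
    def nameOf(id: String): Option[String] = names.get(id)
  }

  ContextA.start()
  EventBus.publish(CustomerRenamed("c-1", "Acme Ltd"))
  println(ContextA.nameOf("c-1")) // prints Some(Acme Ltd)
}

With a real queue the publish side is asynchronous, so context A is eventually consistent rather than immediately up to date.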
The answer to this question breaks down into two distinct areas:
the logical challenge of communicating between different contexts, where the same data could be used in very different ways. How does one context interpret the meaning of the data?
and the technical challenge of synchronizing data between independent systems. How do we guarantee the correctness of each system's behavior when they both have independent copies of the "same" data?
Logically, a context map is used to define the relationship between any bounded contexts that need to communicate (share data) in any way. The domain models that control the data are only applicable within a single bounded context, so some method for interpreting data from another context is needed. That's where the patterns from Evans' book come into play: customer/supplier, conformist, published language, open host, anti-corruption layer, or (the cop-out pattern) separate ways.
Using a mediator between services can be thought of as an implementation of the anti-corruption layer pattern: the services don't need to speak the same language, because there's an independent buffer between them doing the translation. In a microservice architecture, this could be some kind of integration service between two very different contexts.
From a technical perspective, direct API calls between services in different bounded contexts introduce dependencies between those services, so an event-driven approach like the one Allan mentioned is preferred, assuming your application is okay with the implications of that (eventual consistency of the data). Picking a messaging platform that gives you the guarantees necessary to keep the data in sync is important. Most asynchronous messaging protocols guarantee "at least once" delivery, but ordering of messages and de-duplication of repeats is up to the application.
Sometimes it's simpler to use a synchronous API call, especially if you find yourself doing a lot of request/response type messaging (which can happen if you have services sending command-type messages to each other).
A composite UI is another pattern that allows you to do data integration in the presentation layer, by having each component pull data from the relevant service, then share/combine the data in the UI itself. This can be easier to manage than a tangled web of cross-service API calls in the backend, especially if you use something like an IT/Ops service, NGINX, or MuleSoft's Experience API approach to implement a "backend-for-frontend".
What you need is a DDD pattern for integration. BC "B" is upstream, "A" is downstream. You could go for an OHS (open host service) with a PL (published language) upstream, and an ACL (anti-corruption layer) downstream. In practice this is a REST API upstream and an adapter downstream. Every time A needs data from B, the adapter calls the REST API and adapts the returned information to A's domain model, as sketched below. This would be synchronous. If you want to go for an asynchronous integration, B would publish events with the information to a message queue, and A would listen for those events and get the information from them.
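A minimal sketch of that downstream adapter (plain Scala; the upstream DTO, client, and domain types are all invented for illustration):

// The shape upstream B exposes through its published language (e.g. its REST API's JSON).
case class UpstreamCustomerDto(id: String, fullName: String, status: Int)

// A's own domain model: different shape, different vocabulary.
case class CustomerId(value: String)
case class Customer(id: CustomerId, name: String, active: Boolean)

// The REST call itself, abstracted so the adapter can be tested with a stub.
trait UpstreamClient { def fetchCustomer(id: String): UpstreamCustomerDto }

// The anti-corruption layer: fetches from upstream and translates, so that
// nothing in A's model depends on B's representation.
class CustomerAdapter(client: UpstreamClient) {
  def customer(id: CustomerId): Customer = {
    val dto = client.fetchCustomer(id.value) // an HTTP GET in a real system
    Customer(id, dto.fullName, active = dto.status == 1)
  }
}

If B later changes its representation, only UpstreamCustomerDto and the adapter change; A's domain model stays untouched.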
I want to add a comment about analytics in DDD. Several approaches exist for sending data to analytics:
1) If you have a big enterprise application and you need to collect a lot of statistics from all bounded contexts, it is better to move analytics into a separate service and use a message queue to send the data there.
2) If you have a simple application, separate your analytics from your app in another context and use an event or command to speak with it.

Best workflow for any RESTful operation in web CRUD

As a general rule for any RESTful CRUD operation, I follow these steps:
Validating information on client-side
Sending required information in JSON format to the server (possibly a web service)
Validating information on the server
Doing the operation
Returning JSON as the result of operation
Updating DOM based on server's response
Though this list is general, I think it's the most complete one. The only problem is that I do it for any and every operation. I mean, DRY (don't repeat yourself) tells us to stop repeating things. Is this considered repetition? Or should we always follow these steps?
Well, you can skip validating the data client-side if you wish…
Seriously, those are the necessary minimum for doing a lot of things; you must validate server side to prevent a whole host of potential problems and the other parts are just fundamental. OK, you can skip the sending to the server, but then you're not interacting with a REST service in the first place. You could also skip updating the DOM, but then you're not showing the results. In other words, every step of that sequence serves its own purpose that is independent from the other ones: they're not redundant.
But that doesn't mean that you should ignore DRY. Not at all. Instead, you should factor out as much of that code as possible into a single place so as to keep the number of repetitions to a minimum. (Maybe even find a framework to do some of that for you.)
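As a sketch of that factoring (plain Scala; the types and validation rule are invented for illustration), the shared validate-then-operate pipeline is written once and each operation supplies only the parts that vary:

// One generic pipeline: validate the input, then run the operation on it.
def handle[A, B](input: A)(validate: A => Either[String, A])(operate: A => B): Either[String, B] =
  validate(input).map(operate)

// Each endpoint now declares only its own validation and its own operation.
case class CreateBook(title: String)

val result = handle(CreateBook("Domain-Driven Design")) { c =>
  if (c.title.nonEmpty) Right(c) else Left("title is required")
} { c =>
  s"""{"created": "${c.title}"}""" // stand-in for the real operation + JSON response
}
// result: Right({"created": "Domain-Driven Design"})

The serialization, error formatting, and response handling would all hang off the same pipeline in one place.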

DSL to implement business rules for REST service routing and processing

I am hoping that combinator parsers (http://debasishg.blogspot.com/2008/04/external-dsls-made-easy-with-scala.html) will work for a design to process the routing rules for a REST service implemented with Scalatra (http://tutorialbin.com/tutorials/80408/infoq-scalatra-a-sinatra-like-web-framework-for-scala).
This REST service is to serve as a proxy so external applications can get access to services within the firewall, as it will have additional layers of security that can be customized for the business requirements of each REST service.
So, if a person wants to access their own class schedule, less security is needed than if they want to look at someone else's transcript.
I would like the rules for where to go to actually get the information, how to return it, and what security is needed to be expressed in a DSL.
But the first problem is how to dynamically change the routing rules for the REST service based on a DSL, as I am trying to create a framework that doesn't require a great deal of recompiling to add new rules: you just write the appropriate scripts and let them be processed.
So, can a DSL be implemented using a combinator parser in Scala that will allow JAX-RS (http://download.oracle.com/javaee/6/tutorial/doc/giepu.html) to have dynamically changed routing?
UPDATE:
I haven't designed the language yet, but this is what I am trying to do:
route /transcript using action GET to
http://inside.com/transcript/{firstparam}/2011/{secondparam}
return json encrypt with public key from /mnt/publickey.txt
for /education_cost using action GET combine http://combine.com/SOAP/costeducate with
http://combine.com/education_benefit/2010 with
http://combine.com/education_benefit/2011 return html
These are two possible ideas. In the first, a request for a transcript is routed to a different site, such as one within the firewall, and the data is encrypted and returned.
The second would be more complicated, in that the results of one SOAP and two REST requests would be combined; there would need to be additional commands describing how they are combined, but the idea is to put all of this in files that can be parsed on the fly.
If I used Groovy then new classes could be generated for the routing, which would remove some performance hits, but I think using Scala would be the best bet, even if I took a performance hit.
My hope is to make a framework that is more maintainable, so new routing rules can be written by people who don't know any OOP or functional languages, while the specifications could be written using Specs (http://code.google.com/p/specs/) so that the functional side can be certain that their requirements are tested on a regular basis.
UPDATE 2:
When I start working on a design I may intuitively understand some options, but not know why. Today I realized that the reason Groovy may be a better fit for this is that I could generate the classes for routing using its metaprogramming (http://www.justinspradlin.com/programming/groovy-metaprogramming-adding-behavior-dynamically/), and then use Scala or Groovy to dynamically use the routing that was generated. I am not certain how to get Scala to generate the classes if they don't already exist.
In Groovy, as well as some other languages, as shown here (http://langexplr.blogspot.com/2008/02/handling-call-to-missing-method-in.html), if a method is missing you can dynamically generate it and it will henceforth exist, so it will only be missing once.
It almost seems that I should be mixing Groovy with Java to make this work, but then the result may be that some of the code is in Scala and some in Java, for the routing of REST services.
Splitting the question in two parts:
can a DSL be implemented using the Combinator Parser
Yes. There are things that cannot be implemented using a combinator parser, or even other kinds of parser. For instance, Perl itself cannot be parsed (it must be evaluated). And combinator parsers are also not particularly good for complex languages (such as Scala -- its compiler is not based on combinator parsers), or if you demand top performance (such as the compilers used to compile hundreds of thousands of lines of code).
If, however, you plan to go to such extremes, choosing the parser is not going to be your main problem. For DSLs of average complexity, they'll do just fine.
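For a flavor of what that could look like against the first rule form sketched in the question (my own sketch, using the scala-parser-combinators module; the AST and names are invented, and the encrypt clause is left out):

import scala.util.parsing.combinator.JavaTokenParsers

case class RouteRule(path: String, method: String, target: String, format: String)

object RouteDsl extends JavaTokenParsers {
  def path: Parser[String]   = """/\S+""".r
  def url: Parser[String]    = """https?://\S+""".r
  def method: Parser[String] = "GET" | "POST" | "PUT" | "DELETE"
  def format: Parser[String] = "json" | "html" | "xml"

  // e.g. route /transcript using action GET to http://inside.com/transcript/{a}/2011/{b} return json
  def rule: Parser[RouteRule] =
    "route" ~> path ~ ("using" ~ "action" ~> method) ~ ("to" ~> url) ~ ("return" ~> format) ^^ {
      case p ~ m ~ u ~ f => RouteRule(p, m, u, f)
    }

  def parseRule(line: String): ParseResult[RouteRule] = parseAll(rule, line)
}

The RouteRule values coming out of RouteDsl.parseRule would then drive whatever mechanism the web framework offers for registering routes at runtime.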
that will allow JAX-RS to have dynamically changed routing
Well, I don't know JAX-RS, but if dynamically changed routing can be done with it, then combinator parsers will be able to provide whatever input is needed.
EDIT
Seeing your example, I think parser combinators are certainly enough. From their results, I expect you could dynamically create BlueEyes binders -- I haven't used BlueEyes, so I'm not sure how dynamic they are.
Another alternative would be to go with Lift. Lift's binders are partial functions, and they can be combined in all the usual ways -- f1 orElse f2, f1 andThen f2, etc. I didn't suggest it at first because it is most often used with sessions, but it has a RESTful model which, I think, is stateless.
I don't know Scalatra, so I don't know if it would be adaptable to this or not.