We have different client applications (each built with a different UI and targeted at a different sales channel) that are used to capture orders that ultimately need to be processed by our factory.
At first we decided to offer a single "order" microservice that would be used by all these client applications for business-rule execution and data storage. This microservice would also trigger our backoffice processes such as client profile updates, order analysis, document storage in our electronic vault, invoicing, communications, etc.
The challenge we are facing is that these client applications are developed by teams that are external to ours (we are a backoffice team only). Each team responsible for developing a client application will be able to offer a different UX to its users (some will allow saving orders in an incomplete state, some will allow capturing data using a specific workflow, some will use text fields instead of listboxes for some values, etc.).
This diversity of behaviors from client applications is an issue because our microservice logic will become very complex in order to support all those UI requirements. Moreover, every time a change is made to one of the client applications, we will have to modify our microservice, which is a case of strong coupling.
My questions are: What would be your best advice for managing this issue? Should we let each application capture the data the way it wants (and persist it if needed in its own database) and have them call our microservice only when an order is complete and compliant with our API contract?
Should we keep our idea of having a single "order" microservice for everyone and force each client application to capture the data the same way?
Any other option?
We want to reduce the duplication of data and business rules in our ecosystem, but at the same time we don't want our 'order' microservice to become a mess.
Many thanks for your help.
Moreover, every time a change is made to one of the client applications, we will have to modify our microservice, which is a case of strong coupling.
This rings alarm bells for me. A change to a UI shouldn't require a change to a backend service. (The exception would be if a new feature were being added to a system and the backend service needed to play a part in supporting that feature, but I wouldn't just call that a change to a client.) As you have said, it's strong coupling, and that's something to be avoided in a microservices environment.
Ideally, your service should provide a generic, programmatic API that is flexible enough to support multiple UIs (or other non-UI applications) without having any knowledge of how the UIs work.
It sounds like you have some decisions to make about what responsibilities your service will and won't take on (one possible shape is sketched after this list):
Does it make more sense for your generic orders service to facilitate the storage/retrieval/completion of incomplete orders, or to force its clients to manage this somewhere else?
Does it make more sense for your generic service to provide facilities to assist in the tracking of workflows, or to force the UIs that need that functionality to find it elsewhere?
For clients that want to show list boxes, does it make sense for your generic orders service to provide APIs that aid in populating those boxes?
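To make the first of those decisions concrete, here is a minimal sketch of what a generic orders API with optional draft support could look like. It assumes a Flask service; the routes, fields, and validation rules are illustrative assumptions, not a prescribed contract.

    # Hypothetical sketch: a generic orders service that also stores drafts.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    drafts, orders = {}, {}

    @app.route("/draft-orders", methods=["POST"])
    def save_draft():
        # Incomplete orders are accepted here with minimal validation.
        draft_id = len(drafts) + 1
        drafts[draft_id] = request.get_json()
        return jsonify({"id": draft_id}), 201

    @app.route("/orders", methods=["POST"])
    def submit_order():
        # Full business-rule validation runs only on final submission.
        payload = request.get_json()
        if not payload.get("items"):
            return jsonify({"error": "order is incomplete"}), 422
        order_id = len(orders) + 1
        orders[order_id] = payload
        return jsonify({"id": order_id}), 201

If you decide the service should not manage drafts, the first endpoint simply disappears and that responsibility moves to the clients.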
Should we let each application capture the data the way it wants (and persist it if needed in its own database) and have them call our microservice only when an order is complete and compliant with our API contract?
It really depends on whether you think that's the most sensible way for your service to behave. Something that will play into that is how similar or dissimilar the needs of each UI are. If 4 out of 5 UIs have the same needs, it could well make sense to support those generically in your service. If every single UI behaves differently from the others, putting that functionality in your generic orders service would amount to storing frontend code somewhere it doesn't belong.
It seems like there might also be some organisational considerations to these decisions. If the teams using your service are only frontend teams (i.e. without capacity/skills to build backend services), then someone will still have to build the backend functionality they require.
Should we keep our idea of having a single "order" microservice for everyone and force each client application to capture the data the same way?
Yes to the idea of having a single order service with a generic interface for everyone. With regard to forcing client applications to capture data a certain way: your API will only dictate what they need to do to create an order. You can't (and shouldn't) force anything on them about the way they capture the data before calling your service. They can do it however they like. The questions are really about whether your service supports various models of capture or pushes that responsibility back to the frontend.
What would be your best advice for managing this issue?
Collaborate with the teams that will use the service. Gather as much information as you can about the use cases in which they intend to use it. Discover what is common to the majority and choose which of that you will support. Create a semi-formal spec (e.g. a well-documented OpenAPI definition), share it with the client teams, ask for feedback, and iterate. For the parts of the UIs that aren't common across clients, strongly consider telling those teams they'll need to support those elements of their design themselves, especially if they represent significant work on your end.
I have 2 years in the IT industry. I love to read a lot, but when I go deep into some subjects I see a lot of contradictions in some articles and forums, or terms that are used interchangeably.
I understand the difference between SOAP and REST.
When we want to communicate between backends, we can use either of these two approaches, each with its advantages and disadvantages.
Situation:
Suppose I have an application, monolithic or not, where I have a backend and only a single frontend that consumes it. Usually we create a REST API so that our frontend can consume it, but we would never think about exposing our backend with SOAP (for lots of reasons).
Questions:
1 - Is it okay to say that REST, in addition to allowing us to exchange information between applications (backend to backend), is also useful when exposing services to our frontend, while SOAP is only useful for server-to-server communication?
2 - And finally, if I expose a backend only for one frontend, is it okay to say that we expose a web service, or do we conceptually say that it is a backend for frontend?
Question 1: No, the first question rests on a wrong assumption. In SOAP, XML is the only means of communication, while in REST the most commonly accepted means is JSON, although other formats like XML, PDF, and HTML are possible too. And of course, XML from the server can be converted into the UI's language, and an XML request can be deciphered at the server to produce a response. So it's not OK to say that SOAP is only for server-to-server communication.
2. No. When you have exposed a backend only for consumption by a frontend, you can typically say that it is a backend serving numerous frontend client requests. But IMO a backend for a frontend is a monolithic web application, both bundled in a WAR; in that sense, any UI request can get a response from the backend web service. I hope I was able to clear up your understanding of web services.
I see that in your question there are actually 4 embedded independent topics, and probably because they are always used in conjunction, it is sometimes tough to keep them apart.
I will give a short answer first:
REST and SOAP can both be used for client-server and server-server integrations. But the choice will depend on questions like whether you want server-side or client-side UI technology, whether it is a single-page application or a portal technology, etc.
If you expose a single backend for a single frontend, it's technically a BFF, although the term BFF is usually reserved for the case where you have separate backends for each type of frontend application, e.g. one for mobile, one for web, one for IoT devices, etc.
The long answer clarifies the 4 topics. Let me give it a try by separating them into the four headings below:
1. Backend (Business Layer) vs Backend for Frontend (BFF)
In the classical 3-tier architecture (UI, business layer, database) world, the middle layer that consists of the business and integration logic is mostly referred to as the backend/business layer.
This layer can be separated from the UI/frontend using several different options like APIs (REST/SOAP), RPC, servlet technology, etc. The limitation of this 3-tier architecture is that it is still tightly coupled to one type of user, and use cases are limited to the web/browser. It is not a good choice when you want to reuse the business layer for both web and mobile, as mobile applications are required to be lightweight by principle.
That's where we lean on multi-tier architecture with Backend for Frontend (BFF) as a savior. It's just a methodology to segregate the business layers based on their consumers.
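As a minimal illustration of the BFF idea, here is a sketch of a mobile-specific backend that reshapes a shared business layer for one consumer type. The service names, routes, and fields are assumptions made for the example.

    # Hypothetical mobile BFF: it calls the shared business layer and trims
    # the payload down to what the mobile UI actually renders.
    import requests
    from flask import Flask, jsonify

    mobile_bff = Flask("mobile-bff")

    @mobile_bff.route("/mobile/products/<int:pid>")
    def product_for_mobile(pid):
        product = requests.get(f"http://business-layer/products/{pid}").json()
        # A web BFF might return the full product; mobile stays lightweight.
        return jsonify({"id": product["id"], "name": product["name"]})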
2. Monolith vs SOA vs Microservices (Optimized SOA)
In the monolithic world, all the code components, mostly the UI and the business layer, sit together in a tightly coupled fashion. The simplest example would be a JavaServer Pages (JSP) application with Java as the business layer. These are typically server-side UI technologies.
In Service-Oriented Architecture (SOA), the use cases revolve around leveraging reusable business-layer functions, aka services. Here one has to deal with UI-to-server, inter-service, and server-to-server integration scenarios. It's heavily service-dependent, meaning it's like a spider web of dependent applications.
Microservices are an extension of SOA, but the approach is to keep a resource in focus instead of services, to reduce the spider-web of dependencies. Hence, self-sufficient and standalone service clusters are the basis of a microservices architecture.
3. SOAP vs REST Web Services
SOAP stands for Simple Object Access Protocol, typically used by the business layer to provide user-defined methods/services to manipulate an object. For example, look at the names of the services for accessing a book collection:
To get a book: getABook()
Get the whole list of books: listAllBooks()
Find a book by name: searchABook(String name)
Update a book's details: updateABookDetails()
On the other hand, REST is representational state transfer, which transfers the state of a web resource to the client using the underlying, existing HTTP methods. So the above services for accessing a book collection would look like this (a small runnable sketch follows the list):
To get a book: /book (HTTP GET)
Get the whole list of books: /books (HTTP GET)
Find a book by name: /book?name={search} (HTTP GET)
Update a book's details: /book/{bookId} (HTTP PATCH/PUT)
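For illustration, here is a minimal sketch of those REST endpoints, assuming a Flask app and an in-memory book list (both are assumptions made purely for the example):

    # Minimal sketch of the book-collection REST endpoints listed above.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    books = {1: {"id": 1, "name": "REST in Practice"}}

    @app.route("/books")                      # listAllBooks()
    def list_all_books():
        return jsonify(list(books.values()))

    @app.route("/book")                       # searchABook(name)
    def search_a_book():
        name = request.args.get("name", "").lower()
        return jsonify([b for b in books.values() if name in b["name"].lower()])

    @app.route("/book/<int:book_id>", methods=["PATCH"])   # updateABookDetails()
    def update_a_book(book_id):
        books[book_id].update(request.get_json())
        return jsonify(books[book_id])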
4. How to make a correct choice of architecture?
Spot the diversity of the application's user groups and usages: this will help you understand the platform (web/mobile/IoT/etc.), the nature of the application, and session management.
Determine the estimated/required throughput: this will help you understand the scalability requirements.
Determine how frequently, and by whom, the application will be maintained: this will help you gauge the application and technology complexity, deployment cycles, deployment strategy, appetite for downtime, etc.
In conclusion, always follow the divine rule of KISS: Keep it simple, stupid.
1.)
A web application is for H2M (human-to-machine) communication; a web service is for M2M (machine-to-machine) communication over the web. The interface of a service is more standardized and more structured, so machines can easily use it and parse the messages.
I don't think it matters where your service consumer is; it can run in a browser or it can run on a server. As long as it can communicate with the service over a relatively safe channel, it is OK.
You usually design a service to decouple it from multiple different consumers, so you don't have to deal with the consumer implementations. This makes sense when you have potentially unknown consumers, often programmed by 3rd-party programmers you don't even know or want to know about. You version the service, or at least the messages, to stay compatible with old consumers.
If you have only a single consumer developed by you, then it might be too much extra effort to maintain a service with a quasi-standard interface. You can easily change the code of the consumer when you change the interface of the service, so thinking about interface design, standardization, backward compatibility, etc. does not make much sense, though you can still use REST or SOAP ad hoc without much design. In this case, I think a RESTish CRUD API without hypermedia is the better choice.
2.)
I think both are good, I would say backend in your scenario.
I am unsure how to make use of event-driven architecture in real-world scenarios. Let's say there is a route planning platform consisting of the following back-end services:
user-service (manages user data and roles)
map-data-service (roads & addresses, only modified by admins)
planning-tasks-service (accepts new route-planning tasks, keeps track of background tasks, stores results)
The public website will usually request data from all 3 of those services. map-data-service needs information about user roles on a data-change request. planning-tasks-service needs information about users, as well as about map data, to validate new tasks.
Right now those services would just make a sync request to each other to get the needed data. What would be the best way to translate this basic structure into an event-driven architecture? Can dependencies be reduced by making use of events? How will the public website get the needed data?
Cosmin is 100% correct in that you need something to do some orchestration.
One approach to take, if you have a client that needs data from multiple services, is the Experience API approach.
Clients call the experience API, which performs the orchestration - pulling data from different sources and providing it back to the client. The design of the experience API is heavily, and deliberately, biased towards what the client needs.
Based on the details you've given so far, I can't see anything that cries out for an event-based architecture. The communication between the client and the ExpAPI can be a mix of sync and async, as can the ExpAPI-to-services communication.
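As a sketch of what such an experience API could look like against the three services named in the question (the URLs, routes, and field names are assumptions made for illustration):

    # Hypothetical experience API: one endpoint per client need, orchestrating
    # the three services and shaping the combined result for the website.
    import requests
    from flask import Flask, jsonify

    app = Flask("experience-api")

    @app.route("/experience/planning-overview/<int:user_id>")
    def planning_overview(user_id):
        user = requests.get(f"http://user-service/users/{user_id}").json()
        tasks = requests.get(
            "http://planning-tasks-service/tasks", params={"user": user_id}
        ).json()
        # Deliberately biased toward what this page renders: only task
        # summaries and the user's display name, not the full payloads.
        return jsonify({
            "userName": user.get("name"),
            "tasks": [{"id": t["id"], "state": t.get("state")} for t in tasks],
        })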
And for what it's worth, putting all of that on an API gateway is not a bad idea, in that gateways are designed to host APIs and therefore provide the desirable controls and observability for managing them.
Update based on OP Comment
I was really interested in how an event-driven architecture could reduce dependencies between my microservices, as is often stated
Having components (or systems) talk via events is sort of the asynchronous equivalent of Inversion of Control, in that the event consumers are not tightly coupled to the thing that emits the events. That's how the dependencies are reduced.
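A minimal in-memory sketch of that inversion (the bus and handler names are made up for illustration); the emitter knows only the bus, never its consumers:

    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self._handlers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self._handlers[event_type].append(handler)

        def publish(self, event_type, payload):
            # The publisher holds no reference to any consumer.
            for handler in self._handlers[event_type]:
                handler(payload)

    bus = EventBus()
    # planning-tasks-service reacts to user events without user-service knowing it.
    bus.subscribe("userCreated", lambda u: print("set up task quota for user", u["id"]))
    bus.publish("userCreated", {"id": 42})  # emitted by user-service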
One thing you could do is a little side project, just as a learning exercise: take a snapshot of your code, do a rough-and-ready conversion to event-based, and see how that goes; not so much as an attempt to event-a-cise your solution, but to see what putting events into a real-world solution looks like. If you have the time, of course.
The missing piece in your architecture is the API Gateway, which should be the only entry point into your system, used directly by the public website.
The API Gateway would play the role of an orchestrator: it decides which services to route a request to, and it assembles the final response needed by the frontend.
For scalability purposes, the communication between the API Gateway and individual microservices should be done asynchronously through an event-bus (or message queue).
However, the most important step in creating a scalable event-driven architecture which leverages microservices, is to properly define the bounded contexts of your system and understand the boundaries of each functionality.
Event storming is the first thing you need to do to identify domain events (a change of state in your system), for example 'userCreated', 'userModified', 'locationCreated', 'routeCreated', 'routeCompleted', etc. Then you can define topics that manage these events. Interested parties can consume these events by subscribing to the published events (via topics/channels) and then act accordingly. An implementation of an event-driven architecture is often composed of loosely coupled microservices that communicate asynchronously through a message broker like Apache Kafka. The free EDA book is an excellent resource for learning most things in EDA.
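As a hedged sketch of emitting one of those domain events through Kafka (using the kafka-python package; the topic name, broker address, and payload shape are assumptions):

    import json
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    # user-service publishes the fact; it neither knows nor cares who consumes it.
    producer.send("user-events", {"type": "userCreated", "userId": 42})
    producer.flush()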
Tutorial: Event-driven architecture pattern
We have a Python web app that clients interact with, and that web app directly interacts with a database. We now need to develop an API that merchants will use to get and post data from/to our database in JSON. Should we build the API as part of the web app, meaning each request will pass through our Python web app and then interact with the database, or should it be separate?
Further considerations include scalability and the fact that in the future we'll probably want to develop a mobile app or other services that will also need to communicate with the database. As such, we considered the possibility of building an API as the only point of interaction with the database.
However, we are deep into the development of the Flask web app, and changing it would mean a huge delay in our schedule, so we just wanted to weigh the advantages and disadvantages of both solutions.
These two schemes summarize the options we are considering:
Option 1: a single service (web app plus API) acting as the only point of interaction with the database.
Option 2: the API and the web app as separate services, each interacting with the database directly.
As you said both options have advantages and disadvantages.
Option 1 gives you Separation of Concerns. The logic for interacting with your database is abstracted behind a single service. Changes to the type of database you use or to the schema only require code changes to a single service. For example, say your platform has expanded and you now wish to cache calls to your database. If you have an API, Web App, and Mobile App all communicating directly with the database, they must all be updated to take advantage of the cache. These changes would likely also have to be orchestrated to deploy at the same time. In reality this is going to involve downtime: most often you see this referred to as 'scheduled maintenance'.
However, Option 1 breaks the Single Responsibility Principle. A service should do a single thing and do it well. In Option 1 the service is responsible for both being an interface to the database and rendering HTML for the web app. Changes to the Web App require you to redeploy the service for the API even though the two are not connected.
The advantages and disadvantages for Option 2 are mostly just the opposites of the advantages and disadvantages for Option 1. Multiple services sharing a database can lead to inconsistency in the data, tight coupling (especially in deployment), and debugging being more difficult.
A common design pattern (which I'd recommend) is a slight modification of Option 1. An API sits in front of the database. This is the only service that interacts with the database. This setup should improve your scalability. It's easy to deploy duplicate APIs and then load-balance requests between them. Furthermore, caching database lookups or changing database technology entirely is a (relatively) simple task. Your Web App, or any other services you develop in the future, interact with the API instead of the database. Here you can reap the benefits of Single Responsibility. It is worth noting that with this design every request for your Web App will have to go through two services. However, the benefits of the design arguably outweigh a few extra milliseconds of latency.
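For illustration, here is a minimal sketch of that pattern, with the web app calling a hypothetical internal API instead of opening its own database connection (the host and routes are assumptions):

    import requests
    from flask import Flask

    web_app = Flask("web-app")
    API_BASE = "http://orders-api.internal"  # hypothetical internal API host

    @web_app.route("/orders/<int:order_id>")
    def show_order(order_id):
        # No database driver here: the API service is the sole owner of the
        # schema, so caching or a database swap happens behind one interface.
        order = requests.get(f"{API_BASE}/orders/{order_id}").json()
        return f"Order {order['id']}: {order['status']}"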
One last thing: kudos for thinking about scalability this early on. You may take a hit now if your schedule is delayed but I think you'll be better off in the long term.
I want to make an API using REST which interacts with (stores data in) a database.
While I was reading about some design patterns, I came across the remote facade. The book I was reading mentions that the role of this facade is to translate the coarse-grained methods from the remote calls into fine-grained local calls, and that it should not contain any extra logic. As an explanation, it says that the program should still work without this facade.
Yet I have two questions:
Considering I also have a database, does it make sense to split the general call into specific calls for each attribute? Doesn't it make more sense to just have a general "get data" method that runs one query against the database and converts the result into a usable object, to reduce the number of database calls? So instead of splitting get address into get street, get city, and get zip, make one DB call for all that info.
With all this in mind, and, in my case using golang, how should the project be structured in terms of files and functions?
I will have the main file with all the endpoints from the REST API, calling the controllers that handle these requests.
I will have a set of files that define those controllers. Are these controllers the remote facade? In that case, should those methods contain no logic and just call the equivalent local methods?
Should the local methods call the database directly, or should they use some sort of helper class that accesses the database?
Assuming the answers to all these questions are positive, does the following structure make sense?
Main
Controllers
Domain
Database helper
First and foremost, as Mike Amundsen has stated:
Your data model is not your object model is not your resource model is not your affordance model
Jim Webber said something very similar: by implementing a REST architecture you have an integration model, in the form of the Web, which is governed by HTTP; the other model is the domain model. Resources adapt and project your domain model to the world, though there is no 1:1 mapping between the data in your database and the representations you send out. A typical REST system has many more resources than you have DB entries in your domain model.
With that being said, it is hard to give concrete advice on how you should structure your project, especially in terms of a certain framework you want to use. According to Robert "Uncle Bob" C. Martin, looking at the code structure should tell you something about the intent of the application, not about the framework you use; in his words, architecture is about intent. What you usually see, though, is the default structure imposed by a framework such as Maven, Ruby on Rails, ... For golang you should probably read through certain documentation or blogs, which may or may not give you some ideas.
In terms of accessing the database, you might either follow a microservice architecture where each service maintains its own database, or attempt something like a distributed monolith that acts as one cohesive system and shares the database among all its parts. In case you scale out and a couple of parallel services consume data, e.g. via a message broker, you might need a distributed lock and/or queue to guarantee that the data is not consumed by multiple instances at the same time.
What you should do, however, is design your data layer in a way that scales well. What many developers often forget or underestimate is the benefit they can gain from caching. Links are basically used on the Web to reference from one resource to another, giving the relation some semantic context through the use of well-defined link-relation names. Link relations also allow a server to control its own namespace and change URIs as needed. But URIs are not only pointers to a resource a client can invoke; they are also keys into a cache. Caching can take place at multiple locations: on the server side to avoid costly calculations or lookups, on the client side to avoid sending requests out at all, or on intermediary hops which allow taking pressure off heavily requested servers. Fielding even made caching a constraint that needs to be respected.
Which attributes you should create queries for depends entirely on the use case you attempt to depict. In the case of the address example given, it makes sense to return the address information all at once, as the street or zip code is rarely queried on its own. If the address is part of some user or employee data, it is less clear whether to return that information as part of the user or employee data, or just as a link that should be queried on its own in a further request. What you return may also depend on the capabilities of the media type your client and service agree upon (content-type negotiation).
If you implement something like a grouping, e.g. some football players and certain categories they belong to, such as their teams and whether they are offense or defense players, you might have a Team A resource that includes all of the players as embedded data. Within the DB you could have either a separate table for teams with references to the respective players, or the team could just be a column in the player table. We don't know, and a client usually doesn't care either. From a design perspective you should, however, be aware of the benefits and consequences of embedding all the players at once versus providing links to the respective players, or using a mixed approach of presenting some base data plus a link to learn further details.
The latter approach is probably the most sensible, as it gives a client enough information to determine whether more detailed data is needed or not. If it is, a simple GET request to the provided URI is enough, and it might be served by a cache and thus never reach the actual server at all. The first approach certainly has the disadvantage that it doesn't make optimal use of caching and may return far more data than actually needed. The links-only approach may not provide enough information, forcing the client to perform a follow-up request to learn details about a team member. But as mentioned before, you as the service designer decide which URIs or queries are returned to the client, and can design your system and data model accordingly.
In general, what you do in a REST architecture is provide a client with choices. It is good practice to design the overall interaction flow as a state machine which is traversed by receiving requests and returning responses. As REST uses the same interaction model as the Web, it probably feels most natural to design the whole system as if you were implementing it for the Web, and then apply that design to your REST system.
Whether controllers should contain business logic or not is primarily a matter of opinion. As Jim Webber correctly stated, HTTP, which is the de facto transport layer of REST, is an
application protocol whose application domain is the transfer of documents over a network. That is what HTTP does. It moves documents around. ... HTTP is an application protocol, but it is NOT YOUR application protocol.
He further points out that you have to narrow HTTP into a domain application protocol and trigger business activities as a side effect of moving documents around the network. So it's the side effect of moving documents over the network that triggers your business logic. There is no hard rule on whether to include business logic in your controller or not, but usually you try to keep the business logic in its own layer, e.g. as a service that you invoke from within the controller. That allows you to test the business logic without the controller, and thus without a real HTTP request.
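As a small sketch of that separation (the framework and names are illustrative assumptions, not a prescription):

    from flask import Flask, jsonify, request

    class OrderService:
        # Business logic lives here and is testable without any HTTP request.
        def place_order(self, items):
            if not items:
                raise ValueError("an order needs at least one item")
            return {"status": "placed", "items": items}

    app = Flask(__name__)
    service = OrderService()

    @app.route("/orders", methods=["POST"])
    def create_order():
        # The controller only moves the document; the business activity is
        # triggered as a side effect, inside the service layer.
        body = request.get_json() or {}
        return jsonify(service.place_order(body.get("items", []))), 201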
While this answer can't provide more detailed information, partly due to the broad nature of the question itself, I hope I could shed some light on the areas you should put some thought into, and show that your data model is not necessarily your resource or affordance model.
When should I choose one over the other?
The way I see it, API wrappers are so much simpler to use, but I feel like there's something I'm not seeing, so can you enlighten me?
REST is a software design approach meant to decouple clients from APIs, though it is often misunderstood as a simple URI-design thing because it is usually based on HTTP.
The advantage of using APIs and clients that support REST is clearly that clients are not coupled to any API in particular and are therefore tolerant of changes done on the server side, like moving resources to different endpoints. Just as a browser is able to present the content of a sheer infinite number of web pages, a truly RESTful client should behave identically and be able to communicate with any API that supports REST. It may learn on the fly how to deal with a new content type by looking up processing routines dynamically (similar to plugins in certain applications) or fall back to a default handling (reporting errors or presenting unknown data as plain text).
An API wrapper, by contrast, is often used to create clients that are limited to one particular API. This simplifies development, as the client can contain the specific logic needed for interacting with that API, like encoding/decoding the messages sent to or received from the service, listing all available operations, and similar things. Often the URI endpoints are also either injected through properties or hardcoded into the application. The content type is often limited to XML or JSON, and the rules on how to treat responses are hardcoded into the client directly. All these steps, however, tightly couple the client to the API. In case the API changes (or is enriched by further endpoints), the wrapper has to be updated and shipped to each consumer; otherwise users either won't be able to use the API or won't be able to make use of the latest features.
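To make that coupling visible, here is a sketch of a typical wrapper (the base URL, routes, and response shapes are hypothetical):

    import requests

    class BookApiClient:
        BASE = "https://api.example.com/v1"  # hardcoded endpoint: tight coupling

        def list_books(self):
            # The hardcoded route and assumed JSON shape tie this client
            # to one specific version of the API.
            return requests.get(f"{self.BASE}/books").json()

        def update_book(self, book_id, changes):
            return requests.patch(f"{self.BASE}/book/{book_id}", json=changes).json()

    # If the API moves /books or changes its JSON, every consumer of this
    # wrapper needs an updated release before it works again.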
API wrappers are often tailor-made for the usage of one API and are also often much simpler to implement. However, they also require constant updating when the API itself changes, as the wrapper is incapable of handling these changes itself. REST clients, on the other hand, are far more complicated to develop, as the client somehow has to know (or learn) the semantic meaning of a certain response and has to infer how to act upon the responses it receives. Some parts of this are still an active field of research (at least for automated processing).
As you asked when to use which: in cases where you have to create an all-purpose client, a REST client is for sure the right thing to build, although identifying the correct semantic treatment will be the true challenge for these clients IMO. In cases where you only need to provide a client frontend for customers (or users) and not much change is expected in the API itself, a wrapper may be much easier to implement. However, please don't call such a client RESTful!