I have a monolithic Spring Boot application that exposes a REST API, /getUserList.
Now we are taking the microservice approach, so we extracted some business code and created a /getCountry microservice REST API.
Normally, /getUserList should call /getCountry to get all countries.
For testing purposes, I want to use these two REST APIs on the same server, but separately. What I can think of is to create two separate JARs and let one call the other. I want to know the right way to do this:
Should I make /getCountry a module of /getUserList, or should I create a separate project?
You decided to break your monolith into microservices. You started on the right foot, separating the application logic based on the bounded contexts you defined; in this case you describe a "countries" context and a "user" context.
Microservices are independently deployable, so you are correct: they have to be separate JARs. Your "user" service will call the "countries" service over the network, as is usual in a microservices architecture.
Usually, each microservice has its own repository (project), so that developers who work on a service do not need to familiarize themselves with a large codebase or figure out why code unrelated to their service (bounded context) is there.
That being said, if your project is currently quite small, you can decide to keep these as separate modules (do not import the countries module into the user module) under the same project and break them apart later.
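To make the "call over the network" idea concrete, here is a minimal, framework-free sketch. The `CountriesClient` class and the stub endpoint are made up for illustration; in Spring Boot you would more likely use RestTemplate, WebClient, or a Feign client. It runs both sides in one JVM, which matches the goal of testing the two APIs on the same server but separately:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client the user service would use to reach the countries
// service over HTTP (a stand-in for RestTemplate/Feign in real Spring code).
class CountriesClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    CountriesClient(String baseUrl) { this.baseUrl = baseUrl; }

    String getCountries() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/getCountry"))
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}

public class Demo {
    // Starts a stub countries endpoint on a free port, calls it once,
    // and returns the response body.
    static String fetchFromStub() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/getCountry", exchange -> {
            byte[] body = "[\"France\",\"Japan\"]".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            CountriesClient client = new CountriesClient(
                    "http://localhost:" + server.getAddress().getPort());
            return client.getCountries();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchFromStub()); // prints ["France","Japan"]
    }
}
```

Because the client only knows a base URL, the same calling code works whether the countries service is the stub above, a second JAR on localhost, or a remote deployment.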
I wrote a REST API in Flask using flask-restplus. It lets a frontend UI written in JavaScript interact with a backend written in Python. Some of the endpoints interact with the REST API provided by GitLab, cache the responses, and return data to the client from the cache instead.
Why can such an implementation not be classified as microservices?
Microservices are really a set of guidelines that help you develop scalable, maintainable applications in a fast-changing world. It is nothing more than that. It should not worry you whether someone else calls your system a microservice or not. If you built something and took into consideration how to divide one big application along defined boundaries with loose coupling between them, such that each service does one thing for you and can be built, scaled, and modified independently without breaking contracts, you have followed the basic guidelines.
Microservices are about the whole architecture, not just a couple of services. If you have just one service that provides REST endpoints for some other monolith to consume, then overall it is still a monolith; if it deals with a set of other services instead, then it may be a set of services that can be called microservices.
I am developing a bookstore application based on microservices architecture with Spring and Netflix OSS.
I made a shopping service with all the stuff necessary to buy a book. But I need to integrate it with two services.
One is a shipping service; this is an internal service, connected through a Feign client.
The other is an inventory service; this is an external service, connected through an external library. This is a problem because it is more difficult to mock.
In order to connect to these services from the shopping service, I thought the adapter pattern was a good idea. I made another service, a shopping adapter service, that is used to connect to the other two services. With this architecture, I can test the shopping service by mocking the adapter service.
But now I think that is a bit of an awkward solution.
Do you know the best architecture for connecting to external or internal services?
First, is what I understand correct?
Compound Service --(use)--> shipping service
                 --(use)--> inventory service (this project uses the external library)
If that is right, I don't think it is difficult to mock.
Create an inventory microservice project that wraps the external library.
The compound service doesn't need to care which library is used for inventory. Your inventory microservice project just exposes an endpoint for using the inventory service.
In the microservices world, services are first-class citizens. Microservices expose service endpoints as APIs and abstract away all their realization details. The internal implementation logic, architecture, and technologies, including programming language, database, quality-of-service mechanisms, and more, are completely hidden behind the service API.
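As a sketch of that abstraction (all names here are hypothetical, not from the question's codebase): the compound service depends only on a small interface, and an adapter hides the vendor library behind it, so the vendor library never leaks into the shopping code.

```java
// Hypothetical external SDK class the inventory vendor ships;
// a stand-in for the real external library from the question.
class ThirdPartyInventorySdk {
    int unitsInStock(String isbn) {
        // Imagine a remote call here in the real library.
        return "978-0134685991".equals(isbn) ? 12 : 0;
    }
}

// The port the shopping (compound) service depends on.
// It knows nothing about the vendor library behind it.
interface InventoryClient {
    boolean inStock(String isbn);
}

// Adapter that hides the external library behind the port.
class SdkInventoryClient implements InventoryClient {
    private final ThirdPartyInventorySdk sdk;

    SdkInventoryClient(ThirdPartyInventorySdk sdk) { this.sdk = sdk; }

    @Override
    public boolean inStock(String isbn) {
        return sdk.unitsInStock(isbn) > 0;
    }
}

public class InventoryDemo {
    public static void main(String[] args) {
        InventoryClient client = new SdkInventoryClient(new ThirdPartyInventorySdk());
        System.out.println(client.inStock("978-0134685991")); // prints true
    }
}
```

In tests, you replace `InventoryClient` with a mock or a hand-rolled stub; nothing that depends on the interface needs the vendor library at all.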
Then you can mock the inventory service in your compound service's test code:
import org.mockito.Mockito;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class MessageHandlerTestConfiguration {

    @Bean
    public InventoryClient inventoryClient() {
        return Mockito.mock(InventoryClient.class);
    }
}
I don't think creating another microservice, which you then have to maintain, monitor, and keep resilient and highly available, just to have a kind of facade or adapter, is a good idea. This statement may be proved wrong in some very special cases, but generally, if you don't have a bounded context to maintain, it is not a good idea to create a new microservice.
What I would recommend is calling the shipping service directly, paying attention to the anti-corruption layer pattern, which keeps your own domain's service code clean of other microservices' domain entities.
You can find some more information about the anti-corruption layer here: https://softwareengineering.stackexchange.com/questions/184464/what-is-an-anti-corruption-layer-and-how-is-it-used
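A minimal sketch of such an anti-corruption layer, with field names and status codes assumed purely for illustration: a translator at the boundary converts the shipping service's representation into the shopping domain's own model, so upstream renames or remodeling only ever touch this one class.

```java
// Shape of the data as the shipping service exposes it
// (hypothetical; not the question's actual payload).
record ShippingServiceResponse(String trackingRef, String stateCode) {}

// The shopping domain's own model, free of the other service's vocabulary.
enum DeliveryStatus { PENDING, SHIPPED, DELIVERED }
record Delivery(String trackingNumber, DeliveryStatus status) {}

// The anti-corruption layer: all knowledge of the other service's
// entities is confined to this translator at the boundary.
class ShippingTranslator {
    static Delivery toDomain(ShippingServiceResponse r) {
        DeliveryStatus status = switch (r.stateCode()) {
            case "SHP" -> DeliveryStatus.SHIPPED;
            case "DLV" -> DeliveryStatus.DELIVERED;
            default -> DeliveryStatus.PENDING; // unknown codes degrade safely
        };
        return new Delivery(r.trackingRef(), status);
    }
}

public class AclDemo {
    public static void main(String[] args) {
        Delivery d = ShippingTranslator.toDomain(
                new ShippingServiceResponse("TRK-42", "SHP"));
        System.out.println(d.status()); // prints SHIPPED
    }
}
```

The Feign client can return `ShippingServiceResponse` directly; everything past the translator works only with `Delivery`.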
When the UI is supplied by an Angular web application, the responsibility of the other microservice applications is limited to supplying JSON. But they still need to support REST paths and cannot be a monolithic giant; that is, we can't just package them as JARs and bind them as dependencies of some WAR app. Is there a better way to do this than keeping the WAR applications that are only going to supply JSON as UI-less WARs, just to make sure they are not mixed up?
We are building a Spring Boot based web application which consists of a central API server and multiple remote workers. We tried to embrace the idea of microservices, so each component is built as a separate Spring Boot app. But they share much of the same data schema for the objects handled across components, so currently the JPA model definitions are duplicated in each component's project. Every time something changes, we need to remember to change it everywhere, and this results in poor compatibility between different versions of the components.
So I'm wondering whether this is the best we can do here, or whether there are better ways to manage the components' code in such a scenario.
To be more specific, we use both MySQL and Redis for data storage, and both are accessed by all components. Redis actually serves as one means of data communication between components.
Basically, I started to design my project like this:
Play! Framework for the web GUI (consuming the RESTful service)
Spray Framework for the RESTful service; it connects to the database, processes incoming data, and serves data to the web GUI
Database. Only the service has rights to access it.
Now I'm wondering if it's really the best possible design.
In fact, with Play! I could easily host both the web GUI and the service at once.
That would probably be much easier to test and deploy in simple cases.
In complicated cases, when high performance is needed, I can still run one instance purely for the GUI and a few more just to work as services (even if each of them could still serve the full functionality).
On the other hand, I'm not sure it won't hit performance too hard (the services will be processing a lot of data, not only from the web GUI). Also, isn't it mixing things that I should keep separated?
If I decide to keep them separated, should I allow connecting to the database only through the RESTful service? How do I resolve the problem of the service and the web GUI trying to use different versions of the database? Should I use a versioned REST protocol in that case?
----------------- EDIT------------------
My current system structure looks like this:
But I'm wondering if it wouldn't make sense to simplify it by putting the RESTful service directly inside the Play! GUI web server.
----------------- EDIT 2------------------
Here is the diagram which illustrates my main question.
To say it clearly in other words: would it be bad to connect my service and web GUI and share the model? And why?
Because there are also a few advantages:
less configuration needed between the service and the GUI
no data transfer needed
no need to create a separate access layer (that could be a disadvantage, maybe, but in which case?)
no inconsistencies between the GUI and service models (for example, because of different protocol versions)
easier to test and deploy
no code duplication (normally we would need to duplicate a big part of the model)
That said, here is the diagram:
Why do you need the RESTful service to connect to the database? Most Play! applications access the database directly from the controllers. The Play! philosophy considers accessing your models through a service layer to be an anti-pattern. The service layer could be handy if you intend to share that data with other (non-Play!) applications or external systems outside your control, but otherwise it's better to keep things simple. But you could also simply expose the RESTful interface from the Play! application itself for the other systems.
Play! is about keeping things simple and avoiding the over-engineered nonsense that has plagued Java development in the past.
Well, after a few more hours of thinking about this, I think I found a solution which will satisfy my needs. The goals I want fulfilled are:
The web GUI cannot make direct calls to the database; it needs to use a proper model, which will in turn use some object repository.
It must be possible to test and deploy the whole thing as one package with minimum configuration (at least for the development phase; after that it should be possible to easily switch to a more flexible solution).
There should be no code duplication (i.e. the same code in the service and the web GUI models).
If one approach turns out to be wrong, I need to be able to switch to the other.
What I forgot to say before is that my service will have an embedded cache used to aggregate and process the data, and then commit it to the database in bigger chunks. It's also shown in the diagram.
My class structure will look like this:
models
 |- IElementsRepository.scala
 |- ElementsRepositoryJSON.scala
 |- ElementsRepositoryDB.scala
 |- Element.scala
 |- Service
 |   |- Element.scala
 |- Web
 |   |- Element.scala
controllers
 |- Element.scala
views
 |- Element
     |- index.scala.html
So it's like a normal MVC web app, except that there are separate model classes for the service and the web GUI, inheriting from the main one.
In Element.scala I will have an IElementsRepository object injected using DI (probably Guice).
IElementsRepository has two concrete implementations:
ElementsRepositoryJSON, which allows retrieving data from the service through JSON
ElementsRepositoryDB, which allows retrieving data from the local cache and DB.
This means that, depending on the active DI configuration, both the service and the web GUI can get data from the other service or from local/external storage.
So for early development I can keep everything in one Play! instance and use direct cache and DB access (through ElementsRepositoryDB), and later reconfigure the web GUI to use JSON (through ElementsRepositoryJSON). This also allows me to run the GUI and the service as separate instances if I want. I can even configure the service to use other services as data providers (though for now I don't have such a need).
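The swap described above can be sketched as follows (shown in Java rather than Scala for brevity; the class names mirror the ones above but the bodies are purely illustrative). The model depends only on the repository interface, and choosing a different binding, done by hand here, via Guice in the real app, changes where the data comes from without touching the model:

```java
import java.util.List;

// Port the GUI's model depends on; plays the role of IElementsRepository.
interface ElementsRepository {
    List<String> findAll();
}

// Local-storage implementation (stands in for ElementsRepositoryDB).
class DbElementsRepository implements ElementsRepository {
    @Override
    public List<String> findAll() {
        return List.of("from-db-1", "from-db-2"); // would hit the cache/DB
    }
}

// Remote implementation (stands in for ElementsRepositoryJSON).
class JsonElementsRepository implements ElementsRepository {
    @Override
    public List<String> findAll() {
        return List.of("from-service-1"); // would call the REST service
    }
}

// The model only sees the interface; a DI container such as Guice
// would choose the binding, done manually here for clarity.
class ElementModel {
    private final ElementsRepository repo;

    ElementModel(ElementsRepository repo) { this.repo = repo; }

    int count() { return repo.findAll().size(); }
}

public class RepoSwapDemo {
    public static void main(String[] args) {
        // Early development: one instance, direct DB access.
        System.out.println(new ElementModel(new DbElementsRepository()).count());
        // Later: same model, now fetching over JSON; only the binding changes.
        System.out.println(new ElementModel(new JsonElementsRepository()).count());
    }
}
```

Switching from the single-instance setup to separate GUI and service instances is then a one-line change in the DI configuration, not a change to the model code.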
More or less it will look like this:
Well, I think there's no objectively right or wrong answer here, but I'll offer my opinion: I think the diagram you've provided is exactly right. Your RESTful service is the single point of access for all clients including your web front-end, and I'd say that's the way it should be.
Without saying anything about Play!, Spray or any other web frameworks (or, for that matter, any database servers, HTML templating libraries, JSON parsers or whatever), the obvious rule of thumb is to maintain a strict separation of concerns by keeping implementation details from leaking into your interfaces. Now, you raised two concerns:
Performance: The process of marshalling and unmarshalling objects into JSON representations and serving them over HTTP is plenty fast (compared to JAXB, for example) and well supported by Scala libraries and web frameworks. When you inevitably find performance bottlenecks in a particular component, you can deal with those bottlenecks in isolation.
Testing and Deployment: The fact that the Play! framework shuns servlets does complicate things a bit. Normally I'd suggest for testing/staging, that you just take both the WAR for your front-end stuff and the WAR for your web service, and put them side-by-side in the same servlet container. I've done this in the past using the Maven Cargo plugin, for example. This isn't so straightforward with Play!, but one module I found (and have never used) is the play-cargo module... Point being, do whatever you need to do to keep the layers decoupled and then glue the bits together for testing however you want.
Hope this is useful...