We are embarking on a new project in which multiple microservices will communicate with each other to provide information in a cloud-native system. Our application will be decomposed into services such as Text Cleaner, Entities Extractor, Entities Resolver, and Output Converter. As you can see in the diagram, there is some forking, where the input to one service is required by another service, and so forth.
Only one service is going to be exposed externally; the others will be internal. And we have to provide a synchronous response to clients.
I wanted to check whether someone can guide me toward the best patterns here:
1- Should we have one wrapper class that holds the model classes for all projects, since all of the details are needed in the final Output Converter? Or how should the data flow so that it is assembled in the last microservice? We want to keep the systems loosely coupled and are wondering how to orchestrate this flow without a middle layer that composes all this data.
2- How do we orchestrate this flow? Service mesh? API gateway?
This looks like a workflow-based solution. When so many steps are involved, the only response you can give the consumer is that the request was accepted; the process then starts in the background. You cannot let the consumer wait very long, because they will get a connection timeout.
If all these services are deployed on different servers (which should be the case, per the definition of microservices, for scalability), you can communicate via HTTP or via a messaging solution like JMS; or, if you are deployed on the cloud, providers offer workflow-based services.
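As a rough sketch of that "request accepted" pattern in Spring Boot (the PipelineService orchestrator, the endpoint path, and the use of CompletableFuture are assumptions for illustration, not part of the question's design):

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IngestController {

    // Hypothetical orchestrator that chains Text Cleaner -> Entities
    // Extractor -> Entities Resolver -> Output Converter.
    private final PipelineService pipeline;

    public IngestController(PipelineService pipeline) {
        this.pipeline = pipeline;
    }

    @PostMapping("/documents")
    public ResponseEntity<String> submit(@RequestBody String text) {
        String requestId = UUID.randomUUID().toString();
        // Kick off the multi-step pipeline in the background...
        CompletableFuture.runAsync(() -> pipeline.process(requestId, text));
        // ...and reply immediately so the client never hits a connection
        // timeout. The client can poll for the result using requestId.
        return ResponseEntity.accepted().body(requestId);
    }
}
```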
I have an external service that provides weather data via a RESTful API with authentication.
What would be the best option for consuming those services and sending/inserting the data into a Context Broker?
I was thinking of developing a custom IoT Agent, with a JSON file providing the external RESTful service endpoint and the configuration for the Context Broker.
Is there any other option to achieve the same functionality?
The big question here is whether you need to inject the data into the context broker, or just inform the context broker that such data exists. If you want to treat the weather station as a device, then indeed your proposed architecture makes sense:
Create a cron job that fires periodically
Generate a file in a known format (e.g. JSON) and pass the file to a custom microservice
The microservice interprets the file and runs a batch upsert to send all the data, as measures, into the context broker
An example of this, with a code walkthrough, is discussed in the following webinar.
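For a feel of what that batch upsert looks like, here is a minimal sketch against the NGSI-v2 batch endpoint (/v2/op/update); the broker host, entity id, and payload values are assumptions for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WeatherUpsert {
    public static void main(String[] args) throws Exception {
        // NGSI-v2 batch operation: "append" upserts every listed entity in one call.
        String payload = """
            {
              "actionType": "append",
              "entities": [{
                "id": "urn:ngsi-ld:WeatherObserved:001",
                "type": "WeatherObserved",
                "temperature": { "type": "Number", "value": 21.5 }
              }]
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orion:1026/v2/op/update"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 204 No Content on success
    }
}
```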
The alternative would be to create a microservice which listens on the registration endpoint(s) - NGSI-v2 uses the /v2/op/query batch endpoint for this; for NGSI-LD it is a direct forwarding of the request. In this scenario, the weather-station data remains outside the context broker itself and can be used to augment existing entities. A working example can be found within the FIWARE Tutorials.
Obviously, the route you choose will depend on what you need to do with the data. If you need to subscribe to temperature changes, for example, then it is better to treat the weather station as a device providing context data in the form of measures, and to go for Option 1.
I need to integrate 3-5 existing, ready services that were developed by different teams. That is something like integrating several independent monolithic applications.
The most wanted feature is a central communication component where all requests can be logged (or partially logged, in case they have a big payload), so that if something goes wrong it can quickly be seen which service sent a request, with what payload, and when.
The second task is security: inter-service communication needs to be protected.
I have been researching this topic for several days. This is what I've come up with so far:
Use an Enterprise Service Bus (ESB)
Use a message broker (ActiveMQ, RabbitMQ, Kafka, etc.)
Simply use REST API communication
I've read about ESBs, and I am not sure whether this solution is a good fit.
The problem with an ESB is that it does not merely implement communication between separate, independent services; it also changes the communication itself from a synchronous request-response model to an asynchronous messaging style. Currently, we don't need asynchronous messaging (maybe we will in the future, but not right now).
An ESB does allow request-response communication, but in a very inconvenient and complex way, with generated correlation IDs and a temporary response topic and consumer. Given that, I have doubts whether an ESB, with its complex request-response messaging style, has any advantage over plain old REST API calls (RPC). However, with a REST API it is not possible to have a centralized component that can log all communication between services. A similar problem exists with a message broker, too (because it also involves asynchronous messaging).
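For reference, the correlation-ID dance being described looks roughly like this in plain JMS (the request queue, the 5-second timeout, and the blocking receive are assumptions for illustration):

```java
import java.util.UUID;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class JmsRequestReply {

    public String request(Session session, Queue requestQueue, String body)
            throws JMSException {
        // One temporary queue per requester to receive replies on.
        TemporaryQueue replyQueue = session.createTemporaryQueue();

        TextMessage msg = session.createTextMessage(body);
        msg.setJMSReplyTo(replyQueue);                         // where to answer
        msg.setJMSCorrelationID(UUID.randomUUID().toString()); // which request

        session.createProducer(requestQueue).send(msg);

        // Block until the matching reply arrives, or give up after 5 seconds.
        MessageConsumer consumer = session.createConsumer(replyQueue,
                "JMSCorrelationID = '" + msg.getJMSCorrelationID() + "'");
        TextMessage reply = (TextMessage) consumer.receive(5_000);
        return reply == null ? null : reply.getText();
    }
}
```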
Is there any ready-made solution/pattern for integrating several services (not microservices) with centralized logging and security configuration, and a simple-to-implement synchronous request-response model (with the possibility of adding messaging later, if needed)?
Make your services talk to each other via a proxy that takes care of these concerns. (Back in 2007 I called this an "edge component", but today it is known as the sidecar pattern.)
A common way to do that would be to containerize your services, deploy them to Kubernetes, and use a service mesh (like Consul's, Istio, etc.).
I have an aggregate root with the business logic in a C# project. Also in the solution is a REST Web API project that passes commands/requests to the aggregate root to do work and handle queries. This is my microservice. Now I want some of my events/commands/requests to come off a message queue. I'm considering this:
Put a console app in the solution that listens for messages from a message queue, and reference the aggregate root project in the console app.
Is it a bad pattern to share "microservice business logic" between two services? Because now I have two "services", an API and a console app, doing the work, and I would have to ensure that both services are deployed whenever the business logic changes.
Personally, I think it is fine to do what I suggest; a good CI/CD pipeline should mitigate that. But are there any other cons I might have missed?
For some background I would suggest watching DDD & Microservices: At Last, Some Boundaries! by Eric Evans.
A bounded context is the microservice. How you surface it is another matter. What you describe seems to be what I actually do quite frequently. I have an Identity & Access open-source project that I'm working on (so depending on when you read this, it may be in a different state) that demonstrates this structure.
Internal to an organization, one may access the BC either via a service bus or via the Web API. External parties would use only the Web API, as messaging should not be exposed.
The Web API either returns data from the query layer or sends commands via the service bus (messaging) to the BC functional endpoint. Depending on the complexity of the system, I may introduce an orchestration concern that interacts with multiple BCs; it is probably a BC in its own right, much along the lines of a reporting BC.
I have two microservices in Docker and I want to connect one to the other, but I don't know how to do it. The two (and the future apps) are REST APIs built with Spring Boot. I have been searching for info and tutorials, but I haven't found anything. My idea is to have a main app that is able to connect to the other microservices (which are REST APIs) and then publish, and I want to have all of this inside containers (Docker).
Is it possible?
Does anyone know of a tutorial that explains this?
Thanks so much!
What you are describing could be an API Gateway. Here is a great tutorial explaining this pattern.
Implement an API gateway that is the single entry point for all clients. The API gateway handles requests in one of two ways. Some requests are simply proxied/routed to the appropriate service. It handles other requests by fanning out to multiple services.
A variation of this pattern is the Backends for Frontends pattern. It defines a separate API gateway for each kind of client.
Using an API gateway has the following benefits:
Insulates the clients from how the application is partitioned into microservices
Insulates the clients from the problem of determining the locations of service instances
Provides the optimal API for each client
Reduces the number of requests/roundtrips. For example, the API gateway enables clients to retrieve data from multiple services with a single round-trip. Fewer requests also means less overhead and improves the user experience. An API gateway is essential for mobile applications.
Simplifies the client by moving logic for calling multiple services from the client to API gateway
Translates from a “standard” public web-friendly API protocol to whatever protocols are used internally
The API gateway pattern has some drawbacks:
Increased complexity - the API gateway is yet another moving part that must be developed, deployed and managed
Increased response time due to the additional network hop through the API gateway - however, for most applications the cost of an extra roundtrip is insignificant.
How to implement the API gateway?
An event-driven/reactive approach is best if the gateway must scale to handle high loads. On the JVM, NIO-based libraries such as Netty, Spring Reactor, etc. make sense. Node.js is another option.
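As a concrete illustration, here is a minimal route configuration using Spring Cloud Gateway, which is built on Reactor and Netty; the route ids, paths, and backend hostnames are assumptions, not taken from the question:

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Plain proxying: forward /orders/** to the order service.
                .route("orders", r -> r.path("/orders/**")
                        .uri("http://order-service:8080"))
                // A second backend behind the same single entry point.
                .route("customers", r -> r.path("/customers/**")
                        .uri("http://customer-service:8080"))
                .build();
    }
}
```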
To give you the simplest answer:
In general, containers can communicate with each other using any protocol (HTTP, FTP, TCP, UDP), not only REST (HTTP/S):
using the internal/external IPs and ports
using the internal/external names (DNS):
if your microservices are in the same cluster on multiple hosts, you should be able to have your Spring Boot program call http://{{container service name}}; this is a built-in feature of containers (see the sketch after this list)
if you have more microservices in different clusters or hosts, or on the internet, you can use API management (APIM) or a reverse proxy (NGINX, HAProxy) to manage the service names, e.g.
microservice1.yourdomain.com -> container1 or service1 (cluster)
microservice2.yourdomain.com -> container2 or service2 (cluster)
yourdomain.com/microservice1 -> container2 or service2 (cluster)
yourdomain.com/microservice2 -> container1 or service1 (cluster)
P.S. There are more sophisticated techniques out there, but they fundamentally come down to the approaches above.
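To make the same-cluster case concrete, here is a minimal sketch of one Spring Boot service calling another by its container/service name (the name "microservice2" and the /api/items path are assumptions; Docker's embedded DNS resolves the service name to the container's IP):

```java
import org.springframework.web.client.RestTemplate;

public class MainAppClient {
    public static void main(String[] args) {
        // "microservice2" is the service name given to the other container in
        // docker-compose / Kubernetes; no hard-coded IP address is needed.
        RestTemplate rest = new RestTemplate();
        String body = rest.getForObject("http://microservice2:8080/api/items",
                String.class);
        System.out.println(body);
    }
}
```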
Here is the background:
We have a cluster of 3 different services deployed on various containers (like Tomcat, TomEE, JBoss, etc.). Each of the services does one thing: one service manages a common DB and provides REST services to CRUD it; one service puts some data into a JMS queue; another service reads from the queue and updates the DB. There is a client app that makes a REST call to one of the services, which sets off creating a row in the DB, pushing that row into a queue, and so on.
Question: we need to implement the client app so that we know, at any given point in time, where the processing is. How do I implement this in RxJava 2 / Java 9?
First, you need to determine what functionality in RxJava 2 will benefit you:
Coordination between asynchronous sources. Since you have (a) event-driven requests from one side and (b) network queries on the other sides, this is a good fit so far.
Managing a stream of data, transforming and combining from one or more sources. You have given no indication that this is required.
Second, you need to determine what RxJava 2 does not provide:
Network connections. This is provided by your existing libraries.
Database management. Again, this is provided in your existing solutions.
Now, you have to decide whether the items in the first list add up to something you can benefit from, given the up-front cost of learning a new library.
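If you do adopt it, the "coordination" point might look something like this minimal sketch; the checkDbStatus()/checkQueueStatus() wrappers and the 1-second polling interval are assumptions standing in for your real REST clients:

```java
import java.util.concurrent.TimeUnit;

import io.reactivex.Observable;

public class StatusTracker {

    // Hypothetical wrappers around the existing REST clients; in a real
    // system these would ask each service how far processing has got.
    private String checkDbStatus()    { return "db: row created"; }
    private String checkQueueStatus() { return "queue: 3 pending"; }

    // Poll both sources and merge them into one stream the client app can
    // subscribe to, dropping consecutive duplicate statuses.
    public Observable<String> track() {
        Observable<String> db = Observable.interval(1, TimeUnit.SECONDS)
                .map(tick -> checkDbStatus());
        Observable<String> queue = Observable.interval(1, TimeUnit.SECONDS)
                .map(tick -> checkQueueStatus());
        return Observable.merge(db, queue).distinctUntilChanged();
    }

    public static void main(String[] args) throws InterruptedException {
        new StatusTracker().track().subscribe(System.out::println);
        Thread.sleep(5_000); // keep the JVM alive long enough to see output
    }
}
```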