API gateway for event-driven architecture - REST

We're trying to split our monolithic core into microservices and add some new ones, connected to each other using a message system (e.g. Kafka).
The next stage is to create API endpoints for communication between mobile apps and the microservices through an API gateway.
What would be a good solution for building the API gateway to transmit data to/from the microservices?
1. Use the message system in a request-reply fashion (transform requests at the API gateway into message commands, then wait for a response from the message system with a status or the necessary data)?
2. Create REST endpoints on the necessary microservices (e.g. using REST.li) to send or get data through the gateway, and use the message system to keep data consistent based on the events produced by the microservices?
Thanks for any advice and ideas.

This depends on the architecture you are adopting.
If I understood the question correctly, you already have a broker in place with the Kafka message server.
You can use a publish/subscribe architecture for asynchronous messaging.
If the backend architecture includes legacy systems that have to meet an SLA, you can expose the REST endpoints needed for that integration.
That is the gain of using the API Gateway pattern in the architecture.
Thanks a lot.

I would say that the second option sounds more reasonable for many cases.
An event-driven solution mainly fits cases where there are several follow-up processes: the entity can be created via a REST endpoint, while the subsequent processing of that entity runs asynchronously via events.
To illustrate, a payment flow could look like this:
1-) API GW -> Payment Rest Controller -> Payment Service - Create Payment
The Payment Service creates a payment entity and then publishes a "payment.created" event.
2-) Queue -> Payment Stream Controller -> Payment Service - Update Payment
The Payment Stream Controller consumes the "payment.created" event, checks balances, and updates the payment entity as Confirmed. After updating the entity, it can publish a "payment.confirmed" event.
...
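A minimal sketch of steps 1-) and 2-), assuming Python with the kafka-python client; the topic names, fields and the balance check are illustrative, not something from the original question:

    # Sketch only: assumes kafka-python (pip install kafka-python) and a local broker.
    import json
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def create_payment(order_id, amount):
        # 1-) Payment Service: persist the entity, then publish "payment.created".
        payment = {"order_id": order_id, "amount": amount, "status": "CREATED"}
        # ... save the payment entity to the database here ...
        producer.send("payment.created", payment)
        producer.flush()
        return payment

    def run_payment_stream_controller():
        # 2-) Payment Stream Controller: consume "payment.created", check balances,
        # mark the payment as Confirmed and publish the follow-up event.
        consumer = KafkaConsumer(
            "payment.created",
            bootstrap_servers="localhost:9092",
            group_id="payment-stream-controller",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        )
        for record in consumer:
            payment = record.value
            payment["status"] = "CONFIRMED"   # after the balance check succeeds
            producer.send("payment.confirmed", payment)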
With the first option, on the other hand, it would be very hard to keep the system decoupled, as the gateway needs to know about all exchanges or queues.
However, combining the two solutions can be better in some cases. For example, if your API is called by a client with very high traffic and the task of the API is well defined, using the message queue as a buffer for that API works very well.

Related

REST API vs Enterprise Service Bus to integrate several services (3-5 services)

I need to integrate 3-5 existing and ready services that are developed by different teams. That is something like integrating several independent monolithic applications.
The most wanted feature is a central communication component where all requests can be logged (or partially logged, in the case of big payloads), so that when something goes wrong it can be quickly seen which service sent which request, with what payload and when.
The second task is security: inter-service communication needs to be protected.
I have been researching this topic for several days. That is what I've come up with so far:
Use Enterprise Service Bus (ESB)
Use a message broker (ActiveMQ, RabbitMQ, Kafka, etc.)
Simply use REST API communication
I've read about ESB and I am not sure whether this solution is OK to use.
The problem with an ESB is that it does not only implement communication between separate, independent services. It also changes the communication itself from a synchronous request-response model to an asynchronous messaging style. Currently we don't need asynchronous messaging (maybe we will in the future, but not right now).
An ESB does allow request-response communication, but in a rather inconvenient and complex way: generating correlation IDs, creating a temporary reply topic and a consumer for it. Given this, I have doubts whether using an ESB with its very complex request-response messaging style has any advantage over plain old REST API calls (RPC). However, with a REST API it is not possible to have a centralized component that can log all communication between services. A similar problem exists with a message broker, because it also implies asynchronous messaging.
Is there any ready-made solution/pattern for integrating several services (not microservices) with centralized logging, security configuration and a simple-to-implement synchronous request-response model (with the possibility of adding messaging later, if needed)?
Make your services talk to each other via a proxy that takes care of these concerns (back in 2007 I called this an "edge component", but today it is known as the sidecar pattern).
A common way to do that would be to containerize your services, deploy them to Kubernetes and use a service mesh (like Consul or Istio).
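If a full service mesh is more than you need, the same "edge component" idea can be prototyped as a small logging proxy that every service calls instead of calling its peers directly. A rough sketch, assuming Python with Flask and requests; the service registry, ports and routes below are invented for illustration:

    # Rough sketch of a central proxy that logs every inter-service request.
    import logging
    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    logging.basicConfig(level=logging.INFO)

    # Invented registry: downstream base URLs keyed by service name.
    SERVICES = {"users": "http://users:8080", "orders": "http://orders:8080"}

    @app.route("/<service>/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(service, path):
        body = request.get_data()
        # Central log of who called what, with a truncated payload.
        logging.info("%s %s/%s payload=%s", request.method, service, path, body[:500])
        upstream = requests.request(
            request.method,
            f"{SERVICES[service]}/{path}",
            data=body,
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            timeout=10,
        )
        return Response(upstream.content, status=upstream.status_code)

    if __name__ == "__main__":
        app.run(port=9000)

Security concerns (mutual TLS, auth tokens) would be enforced in the same place, which is exactly the job a service mesh sidecar does for you.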

Event-based-microservices: MQTT with Broker OR HTTP with API GATEWAY?

I'm trying to develop a project with microservices.
I have some questions on this topic (something is not clear):
1) How to implement microservices communication?
A) HTTP: every microservice exposes an HTTP API, and an API gateway forwards requests.
B) MQTT: every microservice publishes/subscribes to a broker.
C) BOTH: but how do I understand when one is better than the other?
Do I have to use a pub/sub protocol as the standard even for classic operations usually performed over HTTP? For example, I have two microservices:
web-management and product-service. web-management is a panel that lets the administrator add, modify, ... products in its e-commerce digital shop. Let's say we want to implement a createProduct operation. It's a command (according to the event/command distinction), a one-to-one communication.
I can expose an API in product-service, say (POST, "/product"), that adds the new product. I could also implement this by transforming the command into a productCreationRequest event. In that case web-management publishes this event; product-service listens for productCreationRequest events (and also productUpdateRequest, productGetEvents, ...), and once notified it performs the operation and emits a productCreated event.
I find this case borderline. For example, a last-occasion-service may listen for productCreated and immediately send a message (email or push notification) to customers. What do you think about this use case?
2) What would be a valid broker (I will use docker-compose or Kubernetes to orchestrate containerized microservices; the languages adopted will probably be Java, JavaScript and Python)?
Both is definitely a possibility! Choose a broker that allows you to easily mix-and-match between HTTP (synchronous) communication, and more async event-driven pub/sub. It should allow you to migrate your microservices between the two options as required.
HTTP APIs are great at the edge of your distributed application, where a customer wants to submit an order or something, and block waiting for a response (200 OK).
But internally within your application between microservices, a lot of them don't need a response... async, eventually consistent. And using pub/sub (like MQTT) allows for multiple downstream consumers easily. Another great use for MQTT is streaming updates to downstream consumers... like a data-feed from a bus or airline company or something, rather than having to poll a REST API for updates.
For your use-case and similar ones, I would almost always recommend using pub/sub communication, even if today it's a simple request-reply interaction with a single backend process. REST over HTTP is point-to-point, and perhaps in the future you want another process to be able to see/consume/monitor that event or interaction. If you're already using publish-subscribe, adding that 2nd (or more) consumer of that data flow is trivial. Harder with REST/HTTP.
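To make "adding that 2nd (or more) consumer is trivial" concrete, here is a small sketch with the paho-mqtt Python client; the broker address and topic name are illustrative. Any additional consumer just subscribes to the same topic, with no change to the publisher:

    # Sketch: pub/sub over MQTT with paho-mqtt (pip install paho-mqtt).
    import json
    import paho.mqtt.client as mqtt
    import paho.mqtt.publish as publish

    BROKER = "localhost"
    TOPIC = "shop/product/created"   # illustrative topic name

    # Publisher side (e.g. product-service, after it has stored the product).
    def publish_product_created(product):
        publish.single(TOPIC, json.dumps(product), qos=1, hostname=BROKER)

    # Any consumer (e.g. a last-occasion-service) subscribes to the same topic.
    def run_consumer(client_id):
        def on_message(client, userdata, msg):
            product = json.loads(msg.payload)
            print(f"{client_id} saw productCreated: {product}")

        # paho-mqtt 1.x style constructor; paho-mqtt 2.x additionally takes a
        # mqtt.CallbackAPIVersion as the first argument.
        client = mqtt.Client(client_id=client_id)
        client.on_message = on_message
        client.connect(BROKER, 1883)
        client.subscribe(TOPIC, qos=1)
        client.loop_forever()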
In terms of performance, I would highly doubt that a blocking protocol like HTTP is going to outperform something asynchronous and bidirectional, like MQTT, which can also run over WebSockets for web communication.
As for a broker to glue all this together, check out the standard edition Solace PubSub+ event broker... can do both (and translate between) MQTT and HTTP. I even wrote a CodeLab for this (almost) exact use case haha!
(BTW, I work for Solace! FYI.)
Consider using the SMF framework for JavaScript/Node.js; it helps prototype pub/sub communication between microservices via a message broker (RabbitMQ) out of the box:
https://medium.com/#krawa76/bootstrap-node-js-microservice-stack-4a348db38e51
As for the message broker routes, use an event-driven naming convention, e.g. post a "web.new-product" event, where "web" is the sub-system name and "new-product" is the event name.

Wrap event-based system with REST API

I'm designing a system that uses a microservices architecture with event-based communication (using Google Cloud Pub/Sub).
Each of the services listens for and publishes messages, so between the services everything works well.
On top of that, I want to provide a REST API that users can use without breaking the event-based approach. However, if I have an endpoint that triggers event X, how will I send the response to the user? Does it make sense to create a subscriber for a "ProcessXComplete" event and then return 200 OK?
For example:
I have the following microservices:
Service A
Service B
Frontend Service - REST Endpoints
I want to send the request "POST /posts"; this request is sent to the frontend service.
The frontend service should trigger a "NewPostEvent."
Both Service A and Service B will listen to this event and do something.
So far, so good, but here is where things are starting to get messy for me.
Now I want to return a valid response to the user who made the request, saying that the operation completed.
How can I know that all the services have finished their tasks, and how do I create the handler that returns this response?
Does it even make sense to go this way, or is there a better design that provides both event-based communication between services and a REST API?
What you're describing is absolutely one of the challenges of event-based programming and how eventual-consistency (and lack of atomicity) coordinates with essentially synchronous UI/UX.
It generally does make sense to have an EventXComplete event. Our microservices publish events on completion of anything that could potentially fail. So there are lots of ServiceA.EventXSuccess events flowing through the queues. I'm not familiar with Google Cloud Pub/Sub specifically, but in general, in messaging systems, publishing messages that have few (or no) subscribers costs little extra compute. So we tend to over-articulate service status by default; it's easy to come back later and tone the messaging down as needed. In fact, some of our newer services have messaging verbosity configurable via an Admin API.
The Frontend Service (which here is probably considered a Gateway Service or Facade Layer) has taken on the responsibility of being a responsive backing for your UI, so it needs to, in fact, BE responsive. In this example, I'd expect it to persist the User's POST request, return a 200 response and then update its local copy of the request based on events it's subscribed to from ServiceA and ServiceB. It also needs to provide a mechanism (events, email, webhook, gRPC, etc.) to communicate from the Frontend Service back to any UI if failure happens (maybe even if success happens). Which communication you use depends on how important and time-sensitive the notification is. A good example of this is getting an email from Amazon saying billing has failed on an Order you placed. They let you know via email within a few minutes, but they don't make you wait for the ExecuteOrderBilling message to get processed in the UI.
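A rough sketch of that Frontend Service behaviour, with the Pub/Sub wiring reduced to plain functions so the shape of the flow is visible; Flask, the endpoint paths, the in-memory store and the 202 status are assumptions for illustration (a plain 200, as described above, works the same way):

    # Sketch: accept the POST, store it as PENDING, answer immediately, and flip
    # the status later when completion events from Service A / Service B arrive.
    import uuid
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    requests_store = {}   # request_id -> state; a real service would persist this

    @app.route("/posts", methods=["POST"])
    def create_post():
        request_id = str(uuid.uuid4())
        requests_store[request_id] = {"status": "PENDING", "payload": request.get_json()}
        publish("NewPostEvent", {"request_id": request_id})   # hand off to Service A/B
        # Respond right away; 202 Accepted signals "received, still processing".
        return jsonify({"request_id": request_id, "status": "PENDING"}), 202

    def on_completion_event(event):
        # Subscribed to e.g. ServiceA/ServiceB "EventXSuccess" events; update the
        # local copy so the status endpoint below can report progress.
        entry = requests_store.get(event["request_id"])
        if entry:
            entry["status"] = "COMPLETE"

    @app.route("/posts/<request_id>/status")
    def post_status(request_id):
        return jsonify(requests_store.get(request_id, {"status": "UNKNOWN"}))

    def publish(event_name, payload):
        # Placeholder: wire this to Google Cloud Pub/Sub (or any broker) in practice.
        pass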
Connecting Microservices to the UI has been one of the most challenging aspects of our particular journey; avoiding tight coupling of models/data structures, UI workflows that are independent of microservice process flows, and perhaps the toughest one for us: authorization. These are the hidden dark-sides of this distributed architecture pattern, but they too can be overcome. Some experimentation with your particular system is likely required.
It really depends on your business case. If the REST service drops a message into a message queue, then after dropping the message it simply returns a reference ID that the client can poll to check progress.
E.g. a flight search, where your system has to call hundreds of backend services to show you flight deals. The search API drops the message into the queue, saves it in the database with some reference ID, and returns that same ID to the client. Once the workers are done with the message, they update the reference in the DB with the results, and meanwhile your client polls (or, preferably, uses web sockets) to update the UI with the results.
The idea is that you don't block the request and keep everything async; this makes the system scalable.
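The worker half of that flight-search flow might look roughly like this; kafka-python is assumed, and the queue name, reference_id field and db.update_search call are invented for illustration:

    # Sketch: consume the queued search request, fan out to the backend fare
    # services, and attach the results to the reference ID so the client's
    # polling (or web socket push) can pick them up.
    import json
    from kafka import KafkaConsumer

    def run_search_worker(db):
        consumer = KafkaConsumer(
            "flight.search.requested",
            bootstrap_servers="localhost:9092",
            group_id="search-workers",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        )
        for record in consumer:
            search = record.value                        # contains the reference_id
            deals = call_backend_fare_services(search)   # fan out to the backend services
            db.update_search(search["reference_id"], status="DONE", results=deals)

    def call_backend_fare_services(search):
        # Placeholder for the calls to airline/fare services.
        return []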

Microservices: API Call Vs Messaging. When to Use?

I know that a messaging system is non-blocking and scalable and should be used in a microservices environment.
The use case I am questioning is this:
Imagine an admin dashboard client responsible for sending an API request to create an Item object. There is one microservice that provides the API endpoint and uses a MySQL database where the Item should be stored, and another microservice that uses Elasticsearch for text-search purposes.
Should this admin dashboard client :
A. Send 2 API calls: one call to the MySQL service and another to the Elasticsearch service
or
B. Send a message to a topic that is consumed by both the MySQL service and the Elasticsearch service?
What are the pros and cons when considering A or B?
I'm thinking that it's a little overkill when only two microservices are consuming this topic. Also, the frequency with which the admin creates Item objects is very low.
Like many things in software architecture, it depends. Your requirements, SLAs and business needs should make it clearer.
As you noted, a messaging system is non-blocking and much more scalable, but API communication has its pluses as well.
In general, REST APIs are best suited to request/response interactions where the client application sends a request to the API backend over HTTP.
Message streaming is best suited for notifications when new data or events occur that you may want to take action upon.
In your specific case, I would go with a messaging system, which is more scalable and non-blocking.
Your approach A couples the "routing" logic into your application. Suppose you need to perform an API call to audit your requests: then you will need to change the code and add another call to your application logic. As you said, the approach is synchronous, and unless you provide threading logic your calls will be lined up and won't scale, i.e., call MySQL --> wait for the response, then call Elasticsearch --> wait for the response, ...
In any case, you may prefer this approach if you need immediate consistency, i.e., the result of one call feeding the second call.
Approach B decouples that routing logic, so any other service interested in the event can subscribe to the topic and perform the expected action. Totally asynchronous and scalable. Here you will have eventual consistency, and you have to recover from any possible failure.
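As a sketch of approach B: with Kafka-style topics, both services can consume the same "item.created" event independently simply by using different consumer groups. kafka-python is assumed, and the topic, group and function names are invented; note that a later consumer (such as the auditing service mentioned above) plugs in the same way without touching the dashboard:

    # Sketch: one "item.created" event, two independent consumers.
    # Each service uses its own group_id, so every service receives every event.
    import json
    from kafka import KafkaConsumer

    def consume(group_id, handle):
        consumer = KafkaConsumer(
            "item.created",
            bootstrap_servers="localhost:9092",
            group_id=group_id,
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        )
        for record in consumer:
            handle(record.value)

    def save_item_to_mysql(item):
        ...   # INSERT into the items table

    def index_item_in_elasticsearch(item):
        ...   # index the item for text search

    # Run one of these in each service:
    # consume("mysql-item-service", save_item_to_mysql)
    # consume("elasticsearch-item-service", index_item_in_elasticsearch)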

Communication between microservices - request data

I am dealing with communication between microservices.
For example (a fictitious example, just for illustration):
Microservice A - Store Users (getUser, etc.)
Microservice B - Store Orders (createOrder, etc.)
Now, if I want to add a new Order from the client app, I need to know the user's address. So the request flow would be like this:
Client -> Microservice B (createOrder for userId 5) -> Microservice A (getUser with id 5)
Microservice B will create the order with details (the address) from the user microservice.
PROBLEM TO SOLVE: How do we deal effectively with communication between microservice A and microservice B, given that we have to wait until the response comes back?
OPTIONS:
Use a REST API,
Use AMQP, like RabbitMQ and deal with this issue via RPC. (https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html)
I don't know which will perform better. Is a call faster via RabbitMQ or via a REST API? What is the best solution for a microservice architecture?
In your case using direct REST calls should be fine.
Option 1: Use a REST API
When you need synchronous communication. For example, your case; this option is suitable.
Option 2: Use AMQP
When you need asynchronous communication. For example, when your order service creates an order, you may want to notify the product service to reduce the product quantity, or notify the user service that the order for the user has been placed successfully.
I highly recommend having a look at http://microservices.io/patterns/index.html
Choosing between REST APIs, an event-based design, or both depends entirely on your services' communication behaviour.
Based on your requirements, you can choose REST APIs where you see synchronous behaviour between services,
and go with an event-based design where services need asynchronous behaviour; there is no harm in combining both either.
Ideally, for inter-process communication it is better to go with messaging, while for client-to-service communication REST APIs are the best fit.
Check the communication styles on microservices.io.
REST-based architecture
Advantages
Request/response is easy and best suited when you need synchronous environments.
A simpler system, since there is no intermediate broker.
Promotes orchestration, i.e. a service can take action based on the response of another service.
Drawbacks
Services need to discover the locations of service instances.
One-to-one mapping between services.
REST uses HTTP, a general-purpose protocol built on top of TCP/IP, which adds an enormous amount of overhead when used to pass messages.
Event-driven architecture
Advantages
Event-driven architectures are appealing to API developers because they function very well in asynchronous environments.
Loose coupling, since services are decoupled: on an event from one service, multiple services can take action based on application requirements, and it is easy to plug a new consumer into the producer.
Improved availability, since the message broker buffers messages until the consumer is able to process them.
Drawbacks
Additional complexity of the message broker, which must be highly available.
Debugging an event request is not that easy.
Personally I am not a fan of using a message broker for RPC. It adds unnecessary complexity and overhead.
How do you host your long-lived RabbitMQ consumer in your Users web service? If you make it some static singleton, how do you deal with scaling and concurrency in your web service? Or do you make it a stand-alone daemon process? Now you have two Users applications instead of one. And what happens if your Users consumer slows down? By the time it consumes the request message, the Orders service context might have timed out and sent another message, or given up.
For RPC I would suggest simple HTTP.
There is a pattern involving a message broker that can avoid the need for a synchronous network call. The pattern is for services to consume events from other services and store that data locally in their own database. Then, when the Orders service needs a user record, it can read it from its own database.
In your case, your Users app doesn't need to know anything about orders, but your Orders app needs to know some details about your users. So every time a user is added, modified, removed etc, the Users service emits an event (UserCreated, UserModified, UserRemoved). The Orders service can subscribe to those events and store only the data it needs, such as the user address.
The benefit is that, at request time, your Orders service has one less synchronous dependency on another service. Testing the service is easier, as you have fewer request-time dependencies. There are also drawbacks, however, such as some latency between user record changes occurring and being received by the Orders app. Something to consider.
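A sketch of that local-copy approach, with the event names from above and the storage simplified to a dict; the event payload fields are assumptions:

    # Sketch: the Orders service keeps its own copy of the user data it needs,
    # fed by UserCreated/UserModified/UserRemoved events, so createOrder never
    # makes a synchronous call to the Users service.
    user_addresses = {}   # user_id -> address; a real service would persist this

    def on_user_event(event):
        # Subscribed to the Users service's events via the broker of your choice.
        if event["type"] in ("UserCreated", "UserModified"):
            user_addresses[event["user_id"]] = event["address"]
        elif event["type"] == "UserRemoved":
            user_addresses.pop(event["user_id"], None)

    def create_order(user_id, items):
        # Request time: no synchronous dependency on the Users service.
        address = user_addresses.get(user_id)
        if address is None:
            raise LookupError("user not replicated yet (eventual consistency)")
        return {"user_id": user_id, "items": items, "ship_to": address}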
UPDATE
If you do go with RabbitMQ for RPC, then remember to make use of the message TTL feature. If the client will time out, set the message expiration to that period. This helps avoid wasted work on the part of the consumer and avoids a queue getting backed up under load. One issue with RPC over a message broker is that once a queue fills up, it can add long latencies that take a while to recover from. Setting your message expiration to your client timeout helps avoid that.
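With pika, that client-side expiration (together with the correlation ID and reply queue an RPC call over RabbitMQ needs anyway) looks roughly like this; the queue name, timeout and payload are illustrative:

    # Sketch: RPC over RabbitMQ with pika, setting the message expiration to the
    # client timeout so stale requests are dropped instead of piling up in the queue.
    import uuid
    import pika

    CLIENT_TIMEOUT_SECONDS = 5

    def rpc_get_user(user_id):
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()

        # Temporary, exclusive reply queue for this client.
        callback_queue = channel.queue_declare(queue="", exclusive=True).method.queue
        corr_id = str(uuid.uuid4())
        response = {}

        def on_reply(ch, method, props, body):
            if props.correlation_id == corr_id:      # ignore replies to other calls
                response["body"] = body

        channel.basic_consume(queue=callback_queue, on_message_callback=on_reply,
                              auto_ack=True)
        channel.basic_publish(
            exchange="",
            routing_key="users.rpc",                 # illustrative request queue name
            properties=pika.BasicProperties(
                reply_to=callback_queue,
                correlation_id=corr_id,
                # TTL in milliseconds, matched to the client timeout as suggested above.
                expiration=str(CLIENT_TIMEOUT_SECONDS * 1000),
            ),
            body=str(user_id),
        )
        # Wait (up to the timeout) for the reply to arrive.
        connection.process_data_events(time_limit=CLIENT_TIMEOUT_SECONDS)
        connection.close()
        return response.get("body")   # None means the call timed out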
Regarding RabbitMQ for RPC: normally we use a message broker for decoupling and durability. Seeing as RPC is synchronous communication, that is, we are waiting for a response, durability is not a consideration. That leaves us with decoupling. The question is: does that decoupling buy you anything over the decoupling you can get with HTTP via a gateway or Docker service names?