Microservice Communication using reactive programming - apache-kafka

I have two microservices (A and B).
Service B receives HTTP requests from the UI. Based on some conditions, service B requires data from a DB which only service A has access to. So I need some communication mechanism between services B and A: service B would internally call service A, retrieve some fields from service A's response, and eventually send the final response to the client.
I'm used to the Spring Boot framework and AWS cloud resources. I'm new to reactive programming. The services are built using the Micronaut framework and utilise reactive programming. Kafka is also used as a messaging system.
In Spring Boot, I would expose a REST API and use WebClient to make async calls from service B to service A. But with a REST API, I'll have to handle security and authentication as well.
With reactive programming in Micronaut and Kafka available, is there a better way for these microservices to communicate?
Update 1:
If a message bus is used in an event-driven way, service B can't receive the response from service A, right? Unless service B notes the message ID, and service A publishes a message back with the required data from its DB, tagging it with the appropriate message ID for that data (something like the sketch below).
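That is, a request/reply correlation along these lines, sketched with the plain Java Kafka clients; the topic names, header name and payloads here are all made up for illustration:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.UUID;

public class ServiceBRequester {

    public static void main(String[] args) {
        String correlationId = UUID.randomUUID().toString();

        // 1. Service B publishes a request tagged with a correlation ID header
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            ProducerRecord<String, String> request =
                    new ProducerRecord<>("service-a-requests", "customer-42"); // placeholder payload
            request.headers().add("correlationId", correlationId.getBytes(StandardCharsets.UTF_8));
            producer.send(request);
        }

        // 2. Service B watches the reply topic for the message carrying the same ID
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "service-b");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("service-a-replies"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    Header header = record.headers().lastHeader("correlationId");
                    if (header != null && correlationId.equals(
                            new String(header.value(), StandardCharsets.UTF_8))) {
                        System.out.println("Reply from service A: " + record.value());
                        return;
                    }
                }
            }
        }
    }
}
```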

Micronaut can also use both HTTP and Kafka.
Service-to-service communication simply requires a network link. Message buses / brokers are completely optional, but offer a way to buffer events and/or handle downtime.
Reactive programming doesn't really change this. It doesn't change the security model, either. Kafka and REST clients can still use TLS, and have authz restrictions.
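For the plain HTTP route, Micronaut's declarative client keeps the call non-blocking end to end, much like WebClient in Spring. A minimal sketch, assuming a recent JDK and Reactor on the classpath; the service ID, path and DTO are placeholders:

```java
import io.micronaut.http.annotation.Get;
import io.micronaut.http.client.annotation.Client;
import reactor.core.publisher.Mono;

// Placeholder DTO for the fields service B needs from service A's response
record CustomerData(String id, String name) {}

// Declarative, reactive HTTP client; "http://service-a" could instead be a service-discovery ID
@Client("http://service-a")
interface ServiceAClient {

    @Get("/customers/{id}")
    Mono<CustomerData> fetchCustomer(String id);
}
```

Service B's controller can then inject ServiceAClient, map the Mono into its own response type and return it without blocking; TLS or token-based auth is configured on the client just as it would be for any other HTTP call.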
DB which only service A has access to
You could use Debezium to pull the data from the database into a Kafka topic (if the database is supported), then build a local, queryable KTable within "service B", rather than needing "service A"'s API at all.
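Roughly, the service B side of that could look like the Kafka Streams sketch below; the application ID, CDC topic name, store name and String serdes are all placeholder assumptions:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import java.util.Properties;

public class ServiceBLocalView {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "service-b-db-view");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // "dbserver.public.customers" stands in for whatever topic Debezium writes the CDC events to
        builder.table("dbserver.public.customers", Materialized.as("customers-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Query the local materialized store (a real service would wait for the RUNNING state first)
        ReadOnlyKeyValueStore<String, String> store = streams.store(
                StoreQueryParameters.fromNameAndType("customers-store",
                        QueryableStoreTypes.keyValueStore()));
        System.out.println(store.get("customer-42"));
    }
}
```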

Related

Connecting to topics using Rest proxy

I am new to Kafka. I have implemented my consumer as a normal Java Spring Boot application. I need to connect to a topic deployed on a remote broker using the Kafka REST proxy.
I am not able to understand how it will function differently if I use the Kafka REST proxy. Where should I change my code to include the REST proxy? Do I need to structure my code completely differently, as I didn't think about the REST proxy while creating it?
I may be wrong with the terminologies.
Any help or guidance would be of great help.
The REST proxy would be used with any HTTP client, not a Kafka consumer (so create a WebClient bean rather than a ConsumerFactory, etc.).
You can refer to its documentation for how to consume records over HTTP, but, simply put, the code will be completely different up until the point where you parse the data.
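As a rough sketch of what that HTTP-based code looks like against the Confluent REST Proxy v2 API, here is the consume flow with the JDK's built-in HTTP client; the proxy address, group, instance and topic names are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyConsumerExample {

    static final String PROXY = "http://localhost:8082"; // placeholder proxy address

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Create a consumer instance inside the proxy
        post(http, PROXY + "/consumers/my_group",
             "{\"name\":\"my_consumer\",\"format\":\"json\",\"auto.offset.reset\":\"earliest\"}");

        // 2. Subscribe that instance to a topic
        post(http, PROXY + "/consumers/my_group/instances/my_consumer/subscription",
             "{\"topics\":[\"my_topic\"]}");

        // 3. Poll records over HTTP; the response is a JSON array you parse yourself
        HttpRequest poll = HttpRequest.newBuilder(
                URI.create(PROXY + "/consumers/my_group/instances/my_consumer/records"))
                .header("Accept", "application/vnd.kafka.json.v2+json")
                .GET().build();
        HttpResponse<String> records = http.send(poll, HttpResponse.BodyHandlers.ofString());
        System.out.println(records.body());
    }

    static void post(HttpClient http, String url, String json) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/vnd.kafka.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        http.send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```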

Is a web frontend producing directly to a Kafka broker a viable idea?

I have just started learning Kafka, and am trying to build a social media web application. I am fairly clear on how to use Kafka for my backend (communicating from the backend to databases and other services).
However, I am not sure how the frontend should communicate with the backend. I was considering an architecture like: Frontend -> Kafka -> Backend.
The frontend acts as producer and the backend as consumer. In this case, the frontend would supposedly have all the resources required to publish to the Kafka broker (even if I implement security on Kafka). Now, is this scenario possible:
Let's say I impersonate the frontend and send absurd/invalid messages to my Kafka broker. I can handle and filter these messages when they reach my backend, but I know that Kafka stores these messages temporarily. Wouldn't my Kafka server face DDoS problems if such "fake" messages are published to it in high volume, since it is going to store them anyway, as they don't get filtered out until they actually get consumed by the backend?
If so, how can I prevent this?
Or is this not a good option? I could also use REST for frontend/backend communication, and then Kafka would be used from the backend to communicate with the database(s) and other services.
Or I could have middleware (again, REST) that detects and filters out such messages.
The easiest way is to have the front end produce to the Kafka REST Proxy.
See details here: https://docs.confluent.io/1.0/kafka-rest/docs/intro.html
That way there is no Kafka client code required in your front end, and you can use HTTP(S) with standard off-the-shelf load balancers and API management tools.
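Producing through the REST Proxy is just an HTTP POST; whichever HTTP stack the front end uses, the request shape is roughly the following (proxy address, topic name and payload are placeholders), shown here with the JDK HTTP client:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProduceExample {

    public static void main(String[] args) throws Exception {
        // The JSON body wraps one or more records to publish to the topic
        String body = "{\"records\":[{\"value\":{\"user\":\"alice\",\"text\":\"hello\"}}]}";

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8082/topics/posts")) // placeholder proxy and topic
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // partition/offset assignments per record
    }
}
```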
Could you not consider the other direction, using Kafka as a transport system for updating assets available to the frontend? This has been proposed for hybrid React / NodeJS / Express solutions.

Microservice consuming Kafka events through Zuul

I am new to Microservices architecture.
I want to create a microservice using Netflix OSS.
I want my architecture to look something like the one described here.
http://callistaenterprise.se/blogg/teknik/2017/09/13/building-microservices-part-8-logging-with-ELK/
However, I want one of my microservices (which is behind the Zuul reverse proxy) to consume events from Kafka (which is owned by some other team).
I am not sure if this is a good idea, since it will expose my microservice, which is supposed to be abstracted from the outside world behind my Zuul wall.
Is there any other way? Can I use Zuul to consume event streams from Kafka and push them to my microservice? If yes, how do I stream from Zuul to my microservice?
Zuul will redirect your request to service A on HTTP port XXXX at /api/v1/input. That microservice, acting as a producer, puts the message on a Kafka topic. A Kafka consumer then picks the message up and stores or analyzes it. Another microservice can read from the database and return a response to the frontend request, or push updates using Server-Sent Events or the Vert.x event bus...
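As an illustration of that last step, a service behind Zuul could consume the other team's topic internally and re-expose the events to callers over Server-Sent Events. A minimal Spring sketch, assuming spring-kafka with a String deserializer; the topic, group and path names are placeholders:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

import java.io.IOException;
import java.util.concurrent.CopyOnWriteArrayList;

@RestController
public class EventBridgeController {

    private final CopyOnWriteArrayList<SseEmitter> emitters = new CopyOnWriteArrayList<>();

    // Routed through Zuul like any other endpoint; clients keep this connection open
    @GetMapping("/api/v1/events")
    public SseEmitter stream() {
        SseEmitter emitter = new SseEmitter(Long.MAX_VALUE); // effectively no timeout
        emitters.add(emitter);
        emitter.onCompletion(() -> emitters.remove(emitter));
        return emitter;
    }

    // Consumes the other team's topic internally and forwards each event to connected clients
    @KafkaListener(topics = "other-team-topic", groupId = "my-service")
    public void onEvent(String payload) {
        for (SseEmitter emitter : emitters) {
            try {
                emitter.send(payload);
            } catch (IOException e) {
                emitters.remove(emitter);
            }
        }
    }
}
```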

Use Kafka in AWS EBS microservice docker environment to avoid losing user requests and handle more concurrent hits

Currently, I am using an AWS EBS microservice Docker environment to deploy the microservices, which are written in Scala and Akka. If any one of the microservice Docker containers crashes and is restarted, we will lose the user requests and the service will not return any response for those cases. My current architecture can handle up to 1000 concurrent requests without any issues. To avoid this issue, I am planning to store and retrieve all the requests and responses using Kafka.
So I want to use Kafka to manage the requests and responses of all my web services, and include a separate service (or web socket) to process all the requests and store the responses back to Kafka. In this case, if my core processing Docker container crashes or is restarted, it won't lose any requests or responses at any point in time; it will start reading the requests from Kafka again and process them.
All the web services will store the request in the relevant Kafka topic, get the response from the relevant response topic, and return it as the API response. I have found the following library to use Kafka in Scala web services.
https://github.com/akka/reactive-kafka/
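To make the idea concrete, the core processing side could look roughly like the sketch below with the plain Java Kafka clients (the topic names and the handle() logic are placeholders; the real services would use reactive-kafka):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class RequestProcessor {

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "request-processor");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("requests")); // placeholder topic names
            while (true) {
                ConsumerRecords<String, String> requests = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> request : requests) {
                    String response = handle(request.value()); // placeholder business logic
                    // Key the response with the request key so the web service can correlate it
                    producer.send(new ProducerRecord<>("responses", request.key(), response));
                }
                // Offsets are auto-committed; if this container crashes before the commit,
                // the unprocessed requests are re-read after restart instead of being lost.
            }
        }
    }

    static String handle(String request) {
        return "processed: " + request;
    }
}
```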
Please check the attached architecture diagram, which I am going to use to efficiently handle a large number of concurrent requests from client apps. Is it a good approach to proceed with? Do I need to change anything in my architecture?
I have created this architecture after doing more research on Kafka and microservice Docker containers. Please let me know if anything is wrong with this architecture.
This is Kafka's bread and butter, so I don't think you're going to run into any architectural issues with this. Just be aware that there is a pretty large amount of operational overhead with Kafka. A good resource for getting started is Kafka: The Definitive Guide written by Confluent (https://www.confluent.io/wp-content/uploads/confluent-kafka-definitive-guide-complete.pdf). It goes into detail on a lot of common operational issues that they don't mention in the documentation.

What are the benefits of Apache Kafka's native binary TCP protocol over its RESTful API?

As per Apache Kafka's documentation, Kafka uses a binary TCP protocol for its native API's communication, but a URL-based RESTful API is also provided for languages which don't support Apache Kafka's native API. I was just wondering whether there is any benefit of the native binary TCP protocol (supported by the native API) over RESTful URL-based communication with a broker node? And I was also wondering whether the RESTful API will still maintain the exactly-once property?
Edit:
The RESTful API guide is here: https://www.confluent.io/blog/a-comprehensive-open-source-rest-proxy-for-kafka which explains how to produce and consume Kafka messages over the RESTful API.
There is no REST API included in Apache Kafka for producing or consuming messages as with the native Kafka protocol client implemented in Java.
There is a REST API in Apache Kafka for configuring Kafka Connect.
There are a number of third-party REST Proxy implementations (such as the Confluent Kafka REST Proxy) which allow pub/sub over a REST interface, but these are separate open source projects outside of Apache Kafka.
If you mean to ask what the advantages are of using the native Kafka Java Producer/Consumer API rather than these third-party REST/HTTP proxy implementations, then these are some reasons:
One benefit is greater parallelism. A Kafka client will typically open up TCP connections to multiple brokers in the cluster and send or fetch data in parallel across multiple partitions of the same topic.
Another benefit is better network utilization as HTTP headers can add a lot of size to otherwise small messages while Kafka’s wire protocol is a compact binary protocol.
Kafka clients handle load balancing, failover, and cluster expansion or contraction automatically while REST clients typically require a third party load balancer to achieve the same functionality.
Kafka clients can send their own authentication credentials for access control and bandwidth throttling (quotas), while all REST clients appear to the Kafka cluster as one Kafka client and therefore share common ACL privileges.
Kafka client libraries buffer and batch messages together into a smaller number of Kafka produce or fetch requests, while HTTP can only batch data if the programmer thought to publish them as a single batch.
The native Kafka protocol supports more than just what the Producer/Consumer API exposes. There is also an Admin API for creating topics and modifying topic configurations. These functions are not (yet) exposed through the most popular REST Proxy implementations; topic creation through the native Admin API looks roughly like the sketch below.
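A minimal sketch, assuming a local broker; the topic name, partition count and replication factor are placeholder values:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicExample {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions, replication factor 3 -- placeholder settings
            NewTopic topic = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```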