Create kafka transactional.id for every connection using new GUID - apache-kafka

I have an API Gateway that produces a message to Kafka for each request. Some requests are too big to store, so they are split into smaller ones. This is done in a transaction. For now we create the transactional.id using a new GUID for each request. Is that a good idea? Are there other solutions for this?
Currently I create the transactional.id with the Guid.NewGuid() method. Another option is to maintain a pool of such IDs, but that does not seem scalable.
UPDATE
The API Gateway runs in a Docker container on a physical machine (not a Lambda or a virtual machine). Clients send HTTP requests to it, and the API Gateway then produces a message to Kafka.
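As a minimal Java sketch of the per-request GUID approach (the question uses C#'s Guid.NewGuid(); java.util.UUID.randomUUID() is the equivalent, and the "gateway-" prefix and method name here are illustrative):

```java
import java.util.Properties;
import java.util.UUID;

class TransactionalConfig {

    // Builds producer properties with a fresh transactional.id per gateway
    // request. A stable prefix plus a GUID keeps ids unique across containers.
    // An org.apache.kafka.clients.producer.KafkaProducer would be constructed
    // from these properties and then call initTransactions()/beginTransaction();
    // that part is omitted since it needs a running broker.
    static Properties forRequest(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("enable.idempotence", "true"); // required for transactions
        props.put("transactional.id", "gateway-" + UUID.randomUUID());
        return props;
    }
}
```

Note the trade-off this question is circling: a fresh transactional.id per request means the broker cannot fence zombie producers from a previous incarnation of the same logical producer, which is the main thing a stable id buys you.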

Related

Microservice Communication using reactive programming

I have two microservices (A and B).
Service B receives HTTP requests from the UI. Based on some conditions, service B requires data from a DB that only service A has access to, so I need some communication mechanism between service B and A. Service B would internally call service A, retrieve some fields from service A's response, and eventually send the final response to the client.
I'm used to the Spring Boot framework and AWS cloud resources. I'm new to reactive programming. The services are built using the Micronaut framework and use reactive programming. Kafka is also used as a messaging system.
In Spring Boot, I would expose a REST API and use WebClient to make async calls from service B to service A. But with a REST API, I'll have to handle security and authentication as well.
With reactive programming in Micronaut and Kafka available, is there a better way for these microservices to communicate?
Update 1:
If a message bus is used in an event-driven way, service B can't receive the response from service A, right? Unless service B notes the message ID, and service A publishes a message back with the required data from its DB, referencing the appropriate message ID for the data sent.
Micronaut can also use both HTTP and Kafka.
Service-to-service communication simply requires a network link. Message buses / brokers are completely optional, but offer a way to buffer events and/or handle downtime.
Reactive programming doesn't really change this. It doesn't change the security model, either. Kafka and REST clients can still use TLS, and have authz restrictions.
DB which only service A has access to
You could use Debezium to pull the data from the database into a Kafka topic (if the database is supported), then build a local, queryable KTable within "service B", rather than needing "service A"'s API at all.
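The correlation-ID pattern from Update 1 can be sketched in plain Java (the broker interaction is replaced by direct method calls, and all names here are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of request/reply over a message bus: service B records the message ID
// it published, and completes the matching future when service A's reply arrives.
class ReplyCorrelator {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Service B calls this before publishing: the returned id travels with the
    // request (e.g. as a Kafka header) so service A can echo it on the reply.
    String register(CompletableFuture<String> future) {
        String correlationId = UUID.randomUUID().toString();
        pending.put(correlationId, future);
        return correlationId;
    }

    // Invoked by service B's consumer on the reply topic; false means the id
    // is unknown or was already answered.
    boolean onReply(String correlationId, String payload) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future == null) return false;
        future.complete(payload);
        return true;
    }
}
```

A real implementation also needs a timeout per pending future, otherwise a lost reply leaks a map entry and leaves the HTTP caller hanging.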

Connecting to topics using Rest proxy

I am new to Kafka. I have implemented my consumer as a normal Java Spring Boot application, and I need to connect to a topic deployed on a remote broker using the Kafka REST Proxy.
I am not able to understand how it will function differently if I use the Kafka REST Proxy, or where I should change my code to include it. Do I need to structure my code completely differently, since I didn't think about the REST proxy when writing it?
I may be wrong with the terminology.
Any help or guidance would be of great help.
The REST Proxy is used with an HTTP client, not a Kafka consumer (so create a WebClient bean rather than a ConsumerFactory, etc.).
You can refer to its documentation for how to consume records over HTTP, but, simply put, the code will be completely different up until the point where you parse the data.
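To make "completely different" concrete, here is a sketch of the HTTP side using the JDK's built-in client. The path and media type follow the REST Proxy v2 consumer API (after a consumer instance has been created via a separate POST); the base URL, group, and instance names are placeholders, so check the documentation for your proxy version:

```java
import java.net.URI;
import java.net.http.HttpRequest;

class RestProxyConsumer {
    // Builds the poll request for an already-created consumer instance.
    // Sending it (HttpClient.send) is omitted, since it needs a running proxy.
    static HttpRequest pollRecords(String proxyUrl, String group, String instance) {
        return HttpRequest.newBuilder()
                .uri(URI.create(proxyUrl + "/consumers/" + group
                        + "/instances/" + instance + "/records"))
                .header("Accept", "application/vnd.kafka.json.v2+json")
                .GET()
                .build();
    }
}
```

The response is a JSON array of records, which you parse yourself; none of your existing ConsumerFactory/listener code carries over.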

Is a web frontend producing directly to a Kafka broker a viable idea?

I have just started learning Kafka, and I'm trying to build a social media web application. I am fairly clear on how to use Kafka for my backend (communicating from the backend to databases and other services).
However, I am not sure how should frontend communicate with backend. I was considering an architecture as: Frontend -> Kafka -> Backend.
The frontend acts as the producer and the backend as the consumer. In this case, the frontend would necessarily hold all the credentials required to publish to the Kafka broker (even if I implement security on Kafka). Now, is this scenario possible:
Let's say I impersonate the frontend and send absurd/invalid messages to my Kafka broker. I can handle and filter these messages when they reach my backend, but I know that Kafka stores these messages temporarily. Wouldn't my Kafka server face a DDoS problem if such "fake" messages were published to it in high volume, since it stores them anyway and they don't get filtered out until the backend actually consumes them?
If so, how can I prevent this?
Or is this not a good option? I could also use REST for frontend/backend communication, and then use Kafka from the backend to communicate with the database(s) and other services.
Or I can have a middleware (again, REST) that detects and filters out such messages.
The easiest way is to have the front end produce to the Kafka REST Proxy.
See the details here: https://docs.confluent.io/1.0/kafka-rest/docs/intro.html
That way there is no Kafka client code required in your front end, and you can use HTTP(S) with standard off-the-shelf load balancers and API management tools.
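As a sketch, producing through the REST Proxy is just an HTTP POST with a JSON envelope. The path and media type follow the proxy's v2 produce API; the base URL and topic name are placeholders:

```java
import java.net.URI;
import java.net.http.HttpRequest;

class RestProxyProducer {
    // JSON envelope expected by the v2 produce endpoint (the value is kept as
    // a pre-serialized JSON string for brevity).
    static String envelope(String valueJson) {
        return "{\"records\":[{\"value\":" + valueJson + "}]}";
    }

    // POST /topics/{topic} -- the proxy produces on the caller's behalf, so
    // the front end only ever speaks HTTP(S) and holds no broker credentials.
    static HttpRequest produce(String proxyUrl, String topic, String valueJson) {
        return HttpRequest.newBuilder()
                .uri(URI.create(proxyUrl + "/topics/" + topic))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(envelope(valueJson)))
                .build();
    }
}
```

Because this is plain HTTP, the validation and rate limiting the question asks about can live in an API gateway in front of the proxy, before anything reaches the broker.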
Could you not consider the other direction: using Kafka as a transport system for updating assets available to the frontend? This has been proposed for hybrid React / NodeJS/Express solutions.

Microservice consuming Kafka events through Zuul

I am new to Microservices architecture.
I want to create a microservice using Netflix OSS.
I want my architecture to look something like the one described here:
http://callistaenterprise.se/blogg/teknik/2017/09/13/building-microservices-part-8-logging-with-ELK/
However, I want one of my microservices (which is behind the Zuul reverse proxy) to consume events from a Kafka topic (which belongs to some other team).
I am not sure if this is a good idea, since it would expose my microservice, which is supposed to be abstracted from the outside world behind my Zuul wall.
Is there any other way? Can I use Zuul to consume event streams from Kafka and push them to my microservice? If yes, how do I stream from Zuul to the microservice?
Zuul will redirect your request to service A on HTTP port XXXX at /api/v1/input. This microservice, as a producer, puts the message on a Kafka topic. A Kafka consumer then gets the message and stores or analyzes it. Another microservice can read from the database and return a response to the frontend request, or push it using Server-Sent Events or the Vert.x event bus.
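For the push leg of that flow, a Server-Sent Events response is just text frames on a text/event-stream connection. A minimal sketch of the wire format (event and payload names are illustrative):

```java
class SsePush {
    // Formats one SSE frame as written to a text/event-stream response body.
    // The downstream service described above would write one such frame per
    // processed Kafka message, and the browser's EventSource would receive it.
    static String frame(String event, String data) {
        return "event: " + event + "\n" + "data: " + data + "\n\n";
    }
}
```

The blank line (the trailing "\n\n") is what terminates a frame; a framework like Spring's SseEmitter produces the same format for you.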

Can I use Kafka queue in my Rest WEBSERVICE

I have a REST-based application deployed on a server (Tomcat).
Every request that comes to the server takes 1 second to serve. Sometimes the server receives more requests than it is capable of serving, which makes it unresponsive. I was thinking that if I could store the requests in a queue, the server could pull requests and serve them, handling the peak-time issue.
Can Kafka be helpful for this? If yes, any pointers on where to start?
You can use Kafka (or any other messaging system, e.g. ActiveMQ, RabbitMQ, etc.) for this.
When the web service receives a request, add the request (with all the details required to process it) to a Kafka topic using a Kafka producer.
A separate service (holding the Kafka consumer details) will read from the topic (queue) and process it.
If you need to notify the client when the request has been processed, the server can push the information to the client using a WebSocket (or the client can poll for the request status, though that requires a status endpoint and puts load on it).
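The decoupling described above can be sketched with an in-memory queue standing in for the Kafka topic (which loses durability; with Kafka, submit() becomes producer.send() and the worker becomes a consumer poll loop; all names here are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// The web layer enqueues work and returns immediately; a separate worker
// drains the queue at its own pace, so request peaks no longer stall Tomcat.
class RequestBuffer {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

    // Called from the servlet/REST handler; false means "overloaded, reject",
    // which is the backpressure signal a bounded buffer gives you.
    boolean submit(String requestPayload) {
        return queue.offer(requestPayload);
    }

    // Called by the worker; non-blocking here (returns null when empty).
    // A real worker would block on take() or on a Kafka consumer poll.
    String next() {
        return queue.poll();
    }
}
```

The client-facing consequence is the same either way: the HTTP handler can only return "accepted", so status delivery has to go through the WebSocket push or polling endpoint mentioned above.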
Apache Kafka would be helpful in your case. If you use a Kafka broker, it will allow you to absorb a peak of requests. The requests will be stored in a queue, as you mentioned, and processed by your server at its own speed.
As you are using Tomcat, I guess you developed your server in Java. Apache Kafka provides a Java API which is quite easy to use.