NestJS Kafka send event from server to all clients - apache-kafka

How do I send an event from the server to the client or to all clients? How do I get a server instance in the controller at all?
I need to send an event to the client from the server, without a request from the client side. The client listens, I send.
The point is that I have already implemented the architecture: the API gateway sends a request to a microservice and gets a response. Now I need to implement sending an event from the microservice to the API gateway. In the API gateway I got a client instance in the controller; how do I get the same instance in the microservice? The controller in my microservice only listens for messages and sends responses. How can I do the same in the API gateway, so that the API gateway listens for the event and the microservice sends messages to it?

It depends. Although you didn't provide enough details in your question, I'll assume, based on your question tags, that you're trying to use NestJS microservices with Kafka.
First of all, take a look at Nest's microservices documentation; you should understand how it works before trying to use it:
https://docs.nestjs.com/microservices/basics
You might also want to create a hybrid application if you're not going to exclusively use Kafka as an entrypoint:
https://docs.nestjs.com/faq/hybrid-application
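For instance, a hybrid bootstrap looks roughly like the sketch below; the broker address, port, and consumer group id are placeholder assumptions, not values from your setup.

```typescript
// Minimal hybrid-application sketch: one process serving HTTP while also
// listening to Kafka. Broker address and groupId are placeholders.
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Attach a Kafka microservice listener alongside the HTTP server.
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.KAFKA,
    options: {
      client: { brokers: ['localhost:9092'] }, // assumed broker address
      consumer: { groupId: 'api-gateway' },    // assumed consumer group
    },
  });

  await app.startAllMicroservices();
  await app.listen(3000);
}
bootstrap();
```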
That said, still based on your tags, here's a brief guide on implementing Kafka in NestJS using the same knowledge you acquired reading about microservices and hybrid applications: https://docs.nestjs.com/microservices/kafka
You should be able to implement a simple but functional application with the documentation mentioned above, but for anything beyond that you should provide more details in your question (what you need, what you tried, some code, etc.).

Kafka send event ... To all clients
How do I send an event from the server
Kafka doesn't send. Clients poll.
If you deploy each Kafka consumer client application with a unique group.id for the set of topics you're producing to, each one will independently consume all of the data. Otherwise, if the deployments share a group.id, they divide the topic's partitions among themselves, and no single event is delivered to all the clients.
NestJS uses KafkaJS - https://kafka.js.org/docs/consuming
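To make the "all clients" behaviour concrete, here's a minimal KafkaJS consumer sketch; the topic name and the INSTANCE_ID environment variable are assumptions:

```typescript
// Each deployment gets its own group.id, so every instance independently
// consumes the full topic. A shared group.id would instead split the
// topic's partitions among the instances.
import { Kafka } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'my-app',          // assumed client id
  brokers: ['localhost:9092'], // assumed bootstrap server
});

const consumer = kafka.consumer({
  groupId: `client-${process.env.INSTANCE_ID}`, // unique per deployment
});

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'events', fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log(`${topic}[${partition}]: ${message.value?.toString()}`);
    },
  });
}

run().catch(console.error);
```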
without a request from the client side
Unclear what this means. Even if the NestJS server is randomly generating data, you'll still need to use a Kafka producer client to send that data to the Kafka cluster.
get a server instance in the controller
The Kafka server? You wouldn't. You'd pass the bootstrap server string (via external config) into a producer or consumer instance, which can then be given to a controller.
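A minimal NestJS sketch of that wiring (the injection token, topic name, and environment variable are assumptions):

```typescript
// The broker address comes from external config and is passed into a
// Kafka client, which is then injected into the controller.
import { Body, Controller, Inject, Module, Post } from '@nestjs/common';
import { ClientKafka, ClientsModule, Transport } from '@nestjs/microservices';

@Controller()
class EventsController {
  constructor(@Inject('KAFKA_CLIENT') private readonly client: ClientKafka) {}

  @Post('emit')
  emit(@Body() payload: Record<string, unknown>) {
    // Fire-and-forget event; no reply is expected.
    this.client.emit('item_events', payload);
    return { status: 'queued' };
  }
}

@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'KAFKA_CLIENT',
        transport: Transport.KAFKA,
        options: {
          client: {
            brokers: [process.env.KAFKA_BROKERS ?? 'localhost:9092'],
          },
        },
      },
    ]),
  ],
  controllers: [EventsController],
})
export class AppModule {}
```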

Related

Looping through REST API with pagination, getting data and sending to Kafka

I'm new to Apache Kafka, building some applications, and I got stuck dealing with a specific problem.
I'll try my best to explain my use case.
I have an external application, a kind of ticket manager, that I would like to pull data from. It has a paginated REST API where I can get ticket data per client. I would like to loop through this API until the last page and send the data to Kafka, where my sink connectors would forward it to three DBs.
Q) Is my best option to create some kind of Python script to get the data and POST it to the Kafka REST Proxy API?
I don't think you really have any good option here.
Pages imply ordering; if you have N pages and attempt to send N requests, any one of your producer requests could fail and be retried, causing lost messages and out-of-order data.
Two options to fix that:
send a "page count" and "current page" along with each message and reshuffle the data in some downstream system (see the sketch below)
don't produce any messages until you've iterated over all pages, keeping in mind that Kafka has a maximum request size
The problem with either approach: what happens if another page is added to the API while you're producing or writing to the database? Or if existing pages change? How will you detect which pages you need to request again and overwrite?
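A sketch of the first option, assuming a hypothetical paginated ticket API and topic name; each record carries its page metadata so a downstream consumer can detect gaps and restore order:

```typescript
// Tag every record with { page, totalPages } so a consumer can reorder
// and spot missing pages. API URL, topic, and field names are assumptions.
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['localhost:9092'] }); // assumed broker
const producer = kafka.producer();

async function syncTickets() {
  await producer.connect();
  let page = 1;
  let totalPages = 1;
  do {
    const res = await fetch(`https://tickets.example.com/api/tickets?page=${page}`);
    const body = await res.json();
    totalPages = body.totalPages;
    await producer.send({
      topic: 'tickets',
      messages: body.items.map((item: unknown) => ({
        // Keying by page keeps a page's records together in one partition.
        key: String(page),
        value: JSON.stringify({ page, totalPages, item }),
      })),
    });
    page += 1;
  } while (page <= totalPages);
  await producer.disconnect();
}

syncTickets().catch(console.error);
```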
POST them to Kafka REST Proxy API?
If the REST Proxy is the only way you can get data into the cluster, then sure, but it will be less performant than a native producer client.
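For completeness, producing through the REST Proxy is just an HTTP POST in the v2 JSON format; the proxy host and topic name here are assumptions:

```typescript
// Publish JSON records via the Confluent REST Proxy (v2 API).
async function postToRestProxy() {
  const res = await fetch('http://rest-proxy:8082/topics/tickets', {
    method: 'POST',
    headers: { 'Content-Type': 'application/vnd.kafka.json.v2+json' },
    body: JSON.stringify({
      records: [{ value: { id: 1, status: 'open' } }],
    }),
  });
  if (!res.ok) throw new Error(`REST Proxy returned ${res.status}`);
}
```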

Need to bring the data home/On-Prem from vendor's Kafka Topic

The vendor publishes frequent async responses to a Kafka topic that resides in the vendor's DC.
We need to get that info into our company.
What we don't want to do is write a (polling) Kafka consumer service to read off their Kafka topic.
Instead, we want them to call our REST API (a callback URL) to publish the information.
Are there any options to configure a trigger on (their) Kafka topic that calls an (external) REST API as and when a message is written to the topic?
We would like them to call our API so we can route it through our API Gateway and handle all the cross-cutting concerns.
If you want them to call your service, or have them write a secondary event anywhere else, then you need to ask them to do that... there's no real alternative.
If you have access to their Kafka service, I see no reason why embedding a consumer into your API would be an issue.
Or you could use MirrorMaker/Replicator to copy their Kafka topics into your own local cluster, but you'd still need some consumer to turn the data into a REST action.
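For illustration, the embedded-consumer route is a small bridge like this sketch (broker address, topic, and callback URL are all assumptions):

```typescript
// Bridge the vendor topic to an internal REST callback; this is
// effectively the "trigger" being asked for, implemented client-side.
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['vendor-broker:9092'] }); // assumed vendor address
const consumer = kafka.consumer({ groupId: 'onprem-bridge' });

async function bridge() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'vendor-responses' }); // assumed topic
  await consumer.run({
    eachMessage: async ({ message }) => {
      // Forward each record to the internal API gateway callback.
      await fetch('https://gateway.internal/callback', { // hypothetical URL
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: message.value?.toString() ?? '{}',
      });
    },
  });
}

bridge().catch(console.error);
```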

Microservices - handling asynchronous data creation in frontend application

I have a problem that is pretty crucial, but I haven't been able to find a good answer to it for a while.
I have a microservice-based backend with the gateway, a few other microservices, and Kafka brokers.
Gateway offers synchronous REST API for reads/queries and asynchronous for writes.
The write scenario looks as follows: the gateway returns a 202 Accepted status and publishes an event, e.g. CreateItem, to Kafka. The item service subscribes to this kind of event, creates an item, and emits an ItemCreated event.
My problem is how to handle such a scenario on the frontend side.
The most basic approach I thought about is to route to the items list page and poll for items, so the newly created item shows up there eventually (maybe with some kind of indicator that item creation is being processed), but it's kinda stupid.
I also thought about pushing writes from the frontend over a WebSocket to the gateway, and on the ItemCreated event the gateway would push that info back to the client, but that doesn't solve the problem of what to show the user in the meantime.
On the other hand, I could use the WebSocket solution and show a loading screen with an indeterminate progress bar while waiting for a response over the socket, but that would make the write effectively synchronous, at least on the frontend side. Just as well, I could make the write HTTP POST endpoint synchronous on the gateway side and return the response only after the ItemCreated event has been received.
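For illustration, a rough NestJS sketch of that WebSocket variant; the class names, event names, and correlation-id scheme are all assumptions:

```typescript
// The gateway accepts the write, answers with a correlation id (202),
// and pushes ItemCreated to clients over socket.io once the item
// service reports back via Kafka.
import { Body, Controller, Post } from '@nestjs/common';
import { EventPattern } from '@nestjs/microservices';
import { WebSocketGateway, WebSocketServer } from '@nestjs/websockets';
import { Server } from 'socket.io';
import { randomUUID } from 'crypto';

@WebSocketGateway()
export class ItemsSocket {
  @WebSocketServer() server: Server;

  notifyCreated(payload: { correlationId: string; item: unknown }) {
    // Broadcasts to all clients; per-user rooms would narrow this down.
    this.server.emit('ItemCreated', payload);
  }
}

@Controller('items')
export class ItemsController {
  constructor(private readonly socket: ItemsSocket) {}

  @Post()
  create(@Body() dto: Record<string, unknown>) {
    const correlationId = randomUUID();
    // ...publish CreateItem { correlationId, ...dto } to Kafka here...
    return { correlationId }; // returned alongside a 202 Accepted
  }

  // Fired when the item service emits ItemCreated back to the gateway.
  @EventPattern('ItemCreated')
  onItemCreated(payload: { correlationId: string; item: unknown }) {
    this.socket.notifyCreated(payload);
  }
}
```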
So, what would be the best solution to this problem? Are any of the options I listed any good?

Two channels for one API

We have a SaaS. It consists of a Single Page Application (the client), a Gateway, Data Service 1, Data Service 2, and a Notification Service.
The client talks to the Gateway (using REST), and the Gateway routes the request to the appropriate Data Service (1 or 2) or does its own calculations.
One request from the client can be split into multiple requests at the Gateway. The result is an aggregation of the responses from the sub-services.
The Notification Service pushes information about changes made by other users to the client, using MQ and a WebSocket connection. Notifications can be published by any service.
With our engineers, we discussed how the process could be optimized.
Currently, the problem is that the Gateway spends a lot of time just waiting for responses from the Data Services.
One of the proposals is to let the Gateway respond 200 OK as soon as the message is pushed to the Data Service, and let the client wait for the operation's progress through the Notification channel (the WebSocket connection).
This means the client always sends an HTTP request for an operation and gets confirmation that the operation was executed via a WebSocket from a different endpoint.
This scheme could be hidden by providing a JS client library that conceals all this internal complexity.
I think something is wrong with this approach. I have never seen such a design, but I don't have strong arguments against it, except the complexity and two points of failure (instead of one).
What do you think about this design approach?
Do you see any potential problems with it?
Do you know any public solutions with such an approach?
Since your service is slow, it might make sense to treat it more like a batch job.
The client sends a job request to the Gateway.
The Gateway returns a job ID immediately after accepting the request from the client.
The client periodically polls the Gateway for the result for that job ID.
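A minimal sketch of that flow (Express-style; all routes and the in-memory job store are hypothetical):

```typescript
// The gateway hands back a job id immediately and the client polls
// GET /jobs/:id until the aggregation finishes.
import express from 'express';
import { randomUUID } from 'crypto';

const app = express();
app.use(express.json());

const jobs = new Map<string, { status: 'pending' | 'done'; result?: unknown }>();

app.post('/jobs', (req, res) => {
  const id = randomUUID();
  jobs.set(id, { status: 'pending' });
  // Kick off the slow fan-out to the data services (simulated here).
  setTimeout(() => jobs.set(id, { status: 'done', result: { echo: req.body } }), 5000);
  res.status(202).json({ jobId: id });
});

app.get('/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).end();
  res.json(job);
});

app.listen(3000);
```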

WSO2 ESB Topic Subscription for REST url

We are successfully using WSO2 ESB for basic mediation, using REST for both the caller and the service (via a message queue).
We are now trying to use the topic and subscription model. However, unlike other parts of the ESB where you can change the message format to POX, the subscription interface offers no way to define the format of the payload sent to the URL. The ESB always sends SOAP to the URL, even though we want it to send POX. We don't want to have to write SOAP services.
Is there a way to change the format that the subscriber gets? I know we could set up a proxy which then sends the message, etc., but this is cumbersome and cannot be automated for new services.
If you use WS-Eventing, the subscriber endpoints will get SOAP messages. You can use a JMS-based pub-sub scenario instead to implement what you want. Please have a look at [1].
[1] http://wso2.org/library/articles/2011/12/wso2-esb-example-pubsub-soa