Quick question: is there a way to send a message from one microservice using Kafka and receive it in another microservice? I've seen some articles on GitHub, DZone, etc., but everyone is using Docker, which is not supported on my PC (I'm a Windows 10 Home loser :)
and Docker Toolbox is ...
Thanks for the help.
Regards.
"Microservice" != Docker
Yes, you can use Kafka to communicate between any two applications, provided compatible Kafka clients exist for the languages involved.
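Purely as an illustration (the topic name "orders", the group id "service-b", and localhost:9092 are placeholders, not anything from your setup), here is roughly what the two sides look like with the plain Java client: one service publishes to a topic, the other subscribes and processes whatever arrives. No Docker is involved; any reachable broker works, and on Windows you can even start the broker from the Kafka binary download using the scripts under bin\windows.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaSketch {

    // Service A: publish an event to the "orders" topic (name made up for this example).
    static void produce() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "{\"status\":\"created\"}"));
        }
    }

    // Service B: subscribe to the same topic and process each record as it arrives.
    static void consume() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "service-b");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.printf("key=%s value=%s%n", r.key(), r.value()));
            }
        }
    }
}
```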
Related
Looking to come up with a solution that would mirror or replicate one Kafka environment to another without needing Kafka Connect. Having a hard time coming up with any possible solutions or workarounds. Very new to Kafka, would appreciate any thoughts and/or guidance!
MirrorMaker2 is based on Kafka Connect. The original MirrorMaker is not; however, it is no longer recommended because it's not very fault tolerant.
Most Kafka replication solutions are built on Kafka Connect (Confluent Replicator as another example)
Uber's uReplicator, mentioned in the comments, is built on Apache Helix and requires a ZooKeeper connection, which Kafka Connect does not, so it ultimately depends on what access and infrastructure you have available.
Since Kafka ships with the Connect API and MirrorMaker2 out of the box, there should be little reason to look for alternatives unless it absolutely doesn't work for your use case (which is...?)
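For reference, a minimal MirrorMaker2 setup is just a properties file passed to the connect-mirror-maker.sh script that ships with Kafka, so you don't have to operate a separate Connect cluster yourself. The cluster aliases, broker addresses, and topic pattern below are placeholders:

```properties
# mm2.properties - illustrative sketch; aliases and bootstrap servers are placeholders
clusters = source, target
source.bootstrap.servers = source-broker:9092
target.bootstrap.servers = target-broker:9092

# replicate all topics from "source" into "target"
source->target.enabled = true
source->target.topics = .*
```

It is started with bin/connect-mirror-maker.sh mm2.properties.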
We have a microservice architecture that communicates over Kafka on Confluent, where each service is assigned its own consumer group in order to balance message delivery across its multiple instances.
For example:
SERVICE_A_INSTANCE_1 (CONSUMER_GROUP_A)
SERVICE_A_INSTANCE_2 (CONSUMER_GROUP_A)
SERVICE_A_INSTANCE_3 (CONSUMER_GROUP_A)
SERVICE_B_INSTANCE_1 (CONSUMER_GROUP_B)
SERVICE_B_INSTANCE_2 (CONSUMER_GROUP_B)
When a message is emitted it should only be consumed by one instance of each consumer group.
This worked fine until two days ago. All of a sudden, each message is being delivered to all the instances, so each message is processed multiple times. Basically, the consumer groups stopped working and messages are no longer being distributed.
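For reference, every instance of a service subscribes with that service's group id, which is what should make Kafka hand each record to only one instance per group. A sketch of the configuration with the plain Java client, just to show the group semantics (the broker address and topic name are placeholders; our real services use Node clients):

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.List;
import java.util.Properties;

public class ServiceAInstance {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        // Every instance of SERVICE_A uses the same group id, so each record
        // should be processed by exactly one of the three instances.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "CONSUMER_GROUP_A");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // topic name is a placeholder
            // poll loop omitted; it is identical for every instance
        }
    }
}
```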
Important points:
We use Kafka as a managed service (PaaS) from Confluent on GCP.
We tested this in a different environment and everything worked as expected
No changes have been made to our consumers.
No changes have been made to the cluster on our part (we can't know whether Confluent changed something).
We suspect it might be a problem on Confluent's side or an update that is not compatible with our current configuration. Kafka 2.2.0 was recently released and it includes some changes to consumer group behavior.
We are currently working on migrating to AWS MSK to see whether the issue persists.
Any ideas on what could be causing this?
TL;DR: We solved the issue by moving away from Confluent into our own Kafka cluster on GCP.
I will answer my own question since it's been a while and we have already solved this. Also, my insights might help others make more informed decisions on where to deploy their Kafka infrastructure.
Unfortunately we could not get to the bottom of the problem with Confluent. It is most likely something on their side because we simply migrated to our own self managed instances on GCP and everything went back to normal.
Some important clarifications before my final thoughts and warnings about using Confluent as a managed Kafka service:
We think this is related to something that affected Node.js in particular. We tested client libraries in languages other than Node and the behavior was as expected. When testing with several of the most popular Node libraries, the problem persisted.
We did not have premium support with Confluent.
I cannot confirm that this issue is not our fault.
With all of those points in mind, our conclusion is that companies deciding on a managed service with Confluent should calculate costs with premium support included. Otherwise, Kafka turns into a completely closed black box, making it impossible to diagnose issues. In my personal opinion, the dependency on the Confluent team during a problem is so large that not having them ready to help when needed renders the service non-production-ready.
From the official NestJS docs we can see that:
We don't support streaming platforms with log-based persistence, such as Kafka or NATS Streaming, because they have been created to solve a different range of issues.
However, you can see here how to create a microservice with NATS https://docs.nestjs.com/microservices/nats which, theoretically, is not supported as stated above.
I would like to use Kafka with Nest. Is it / Will it be supported?
Thank you in advance!
Kafka is not just a message broker; it is an ecosystem that can encompass your whole codebase and the other actors of your project.
All the microservice transports mentioned in the Basics (TCP, Redis (in-memory cache), MQTT, NATS, gRPC) can be abstracted within the Nest.js core scaffold.
If you want to use Kafka or RabbitMQ, you may need to restructure your project and split it into separate applications that work together through messages and a message queue.
We are trying to start the Kafka server (ZooKeeper and broker) programmatically in our application. Is there any API/library available for this?
Yes, you can use embedded Kafka, which will run ZooKeeper and the Kafka server for you. It is generally used for testing Kafka producers/consumers when there is no need to run a broker explicitly.
For more detail, refer to the embedded Kafka library's documentation.
To run it, we call EmbeddedKafka.start() at the start and EmbeddedKafka.stop() at the end.
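If you are on the JVM with Spring, spring-kafka-test offers the same idea through EmbeddedKafkaBroker. A minimal sketch, assuming spring-kafka-test 2.x, with the port and topic name chosen arbitrarily for the example:

```java
import org.springframework.kafka.test.EmbeddedKafkaBroker;

public class EmbeddedKafkaDemo {
    public static void main(String[] args) throws Exception {
        // Starts an in-process ZooKeeper and a single Kafka broker on port 9092
        // (port and topic name are arbitrary choices for this sketch).
        EmbeddedKafkaBroker broker = new EmbeddedKafkaBroker(1, true, "demo-topic")
                .kafkaPorts(9092);
        broker.afterPropertiesSet();   // start ZooKeeper and the broker

        System.out.println("Brokers: " + broker.getBrokersAsString());

        // ... run your producers/consumers against getBrokersAsString() ...

        broker.destroy();              // stop the broker and ZooKeeper
    }
}
```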
I'd like to host apps on OpenShift that use a queue to communicate with each other.
One kind of app, the producers, will put some data on the queue, and another kind, the consumers, will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ, and the consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and the consumer, and make JBoss's HornetQ available to producers. This sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it would work if there are multiple consumer instances on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides pointing you at how to use Wildfly. You can just use the Wildfly cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide some extra context, I can try to improve the answer. I need to know what your problem is and what's not working.
If you just want to know how to configure Wildfly with HornetQ, the Wildfly cartridge I posted is the way to go.
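For the producer side, talking to a remote HornetQ queue is plain JMS. A rough sketch where the JNDI names follow Wildfly's usual remote-naming conventions, and the host, queue name, and credentials are placeholders you would replace with your application's values:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;
import java.util.Properties;

public class QueueProducer {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // Typical Wildfly remote JNDI settings; host, port, and credentials are placeholders.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://your-app.example.com:8080");
        env.put(Context.SECURITY_PRINCIPAL, "appuser");
        env.put(Context.SECURITY_CREDENTIALS, "password");

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/tasks"); // queue name is a placeholder

        Connection connection = cf.createConnection("appuser", "password");
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("work item payload"));
        } finally {
            connection.close();
        }
    }
}
```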