We have a backend module that listens to ActiveMQ. After changing the backend architecture, we are now using Mesos, Marathon and ZooKeeper, and we want to listen for ZooKeeper events whenever an update arrives in ZooKeeper.
Is there a client, or anything similar, for connecting to ZooKeeper and listening for its queues/events?
Thanks in advance.
There are ZooKeeper bindings (client libraries) for a variety of languages; you can use those libraries to interact with ZooKeeper. The ZooKeeper Programmer's Guide is a great place to start, and here is the link to Curator.
We're using the Apache Curator framework for this purpose. It has a watch mechanism that lets you subscribe to a specific path and listen for the events raised on it: node created, updated, deleted, children changed, and so on.
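As a minimal sketch of that approach (the connection string localhost:2181 and the watched path /my-app are placeholders; adjust both), a Curator TreeCache can surface exactly those event types:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.TreeCache;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkWatcher {
    public static void main(String[] args) throws Exception {
        // Connect to ZooKeeper with a retry policy
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Watch an entire subtree and react to node lifecycle events
        TreeCache cache = new TreeCache(client, "/my-app");
        cache.getListenable().addListener((c, event) -> {
            switch (event.getType()) {
                case NODE_ADDED:
                    System.out.println("created: " + event.getData().getPath());
                    break;
                case NODE_UPDATED:
                    System.out.println("updated: " + event.getData().getPath());
                    break;
                case NODE_REMOVED:
                    System.out.println("removed: " + event.getData().getPath());
                    break;
                default:
                    break; // connection events etc.
            }
        });
        cache.start();

        Thread.sleep(Long.MAX_VALUE); // keep the watcher alive
    }
}
```

This needs a running ZooKeeper ensemble to do anything, and in newer Curator releases (5.x) CuratorCache is the preferred replacement for TreeCache.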
I have a critical Kafka application that needs to be up and running all the time. The source topics are created by a Debezium Kafka Connect connector for the MySQL binlog. Unfortunately, many things can go wrong with this setup: the Debezium connectors often fail and need to be restarted, and so does my app (without throwing any exception, it simply hangs and stops consuming). My manual way of detecting the failure is to check the Kibana logs and then consume the suspicious topic through a terminal. I could mimic this in code, but that is obviously far from best practice. Is there anything in the Kafka Streams API that would let me run such a health check, as well as check other parts of the Kafka cluster?
Another point that bothers me is whether I can keep the stream alive and rejoin the topics once the connectors are up again.
You can check the Kafka Streams state to see whether it is rebalancing or running, which would indicate healthy operation. However, if no data is reaching the Topology, no errors will surface there either, so you also need to look up the health of your upstream dependencies.
Overall, it sounds like you might want to invest some time in monitoring tools such as Consul or Sensu, which can run local service health checks and send out alerts when services go down, or at the very least Elasticsearch alerting.
As far as Kafka health checking goes, you can do it in several ways:
Are the broker and ZooKeeper processes running? (SSH to the node and check the processes.)
Are the broker and ZooKeeper ports open? (Use a socket connection.)
Are there important JMX metrics you can track? (Metricbeat.)
Can you find an active controller broker? (Use AdminClient#describeCluster.)
Do a required minimum number of brokers respond as part of the controller metadata? (This can also be obtained from AdminClient.)
Do the topics you use have the proper configuration (retention, min-ISR, replication factor, partition count, etc.)? (Again, use AdminClient.)
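The AdminClient-based checks above can be sketched as follows (localhost:9092 and the topic name my-topic are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.clients.admin.TopicDescription;

public class KafkaHealthCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Is there an active controller, and how many brokers responded?
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("controller: " + cluster.controller().get());
            System.out.println("broker count: " + cluster.nodes().get().size());

            // Does the topic have the expected partition/replica layout?
            TopicDescription topic = admin
                    .describeTopics(Collections.singletonList("my-topic"))
                    .all().get().get("my-topic");
            System.out.println("partitions: " + topic.partitions().size());
            System.out.println("replicas of p0: "
                    + topic.partitions().get(0).replicas().size());
        }
    }
}
```

Retention and min-ISR settings can be fetched the same way via AdminClient#describeConfigs; a health endpoint would compare these values against your expected configuration instead of printing them.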
I am trying to use the https://github.com/nestjs/cqrs Nest.JS module. It seems impossible to back the CommandBus and QueryBus with an external transport such as Kafka or RabbitMQ under the hood and share command handling between many microservices. Am I wrong?
Thanks.
Kafka support was added in 6.7.0:
https://github.com/nestjs/nest/issues/2361
As documented, the command bus is observable, so you can consume the events and forward them to a Kafka producer or consumer as needed.
I have installed kafka_2.11-1.1.0 and set the advertised listener to advertised.listeners=PLAINTEXT://<my-ip>:9092 (in $KAFKA_HOME/config/server.properties).
I can connect and write to my Kafka using Java code, and I can see my cluster via Kafka Tool from another server, but I can't write messages to my topic from my local machine (the one the Kafka cluster is installed on).
I have also tried setting the listeners value to listeners=PLAINTEXT://:9092, but there is no change. What should I do to make my Kafka reachable and writable from both outside and inside localhost?
In server.properties, use the two following properties. listeners controls the interface the broker binds to (0.0.0.0 means all interfaces), while advertised.listeners is the address handed back to clients in the metadata they use to connect:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<your ip>:9092
I finally solved the issue by upgrading my code's org.apache.kafka library from version 1.1.0 to version 2.1.0.
Note that all of these libraries were imported (downloaded) and used via mvnrepository.com.
Also, our Kafka producer and consumer code was written following this article:
https://dzone.com/articles/kafka-producer-and-consumer-example.
Have a look at the following links; they may be helpful for your scenario:
Kafka access inside and outside docker
Kafka Listeners - Explained
We are trying to start the Kafka server (ZooKeeper and broker) programmatically in our application. Is there any API / library available for this?
Yes, you can use Embedded Kafka, which will run ZooKeeper and a Kafka server for you. It is generally used for testing a Kafka producer/consumer when there is no need to run them explicitly.
For more detail, refer to the Embedded Kafka documentation.
To run it, call EmbeddedKafka.start() at the beginning and EmbeddedKafka.stop() at the end.
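The EmbeddedKafka.start()/EmbeddedKafka.stop() API above comes from the Scala embedded-kafka library; if your application is in Java, spring-kafka-test offers a similar in-process setup. A sketch (port 9092 is an arbitrary choice):

```java
import org.springframework.kafka.test.EmbeddedKafkaBroker;

public class EmbeddedKafkaDemo {
    public static void main(String[] args) throws Exception {
        // One in-process broker on port 9092; an embedded ZooKeeper is started automatically
        EmbeddedKafkaBroker broker = new EmbeddedKafkaBroker(1).kafkaPorts(9092);
        broker.afterPropertiesSet(); // starts ZooKeeper and the broker

        System.out.println("bootstrap servers: " + broker.getBrokersAsString());

        // ... run producers/consumers against the embedded broker ...

        broker.destroy(); // shuts everything down
    }
}
```

Both libraries are intended for tests; running an embedded broker as a production dependency of your application is generally discouraged.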
I'd like to host apps that use a queue to communicate with each other on OpenShift.
One kind of app, the producers, will put data on the queue, and the other kind, the consumers, will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ and the consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and the consumer, and make JBoss's HornetQ available to producers. That sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it would work with multiple consumer instances on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides showing you a link on how to use WildFly. You can just use the WildFly cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide some extra context, I can try to enrich the answer a bit; I need to know what your problem is and what's not working.
If you just want to know how to configure WildFly with HornetQ, the WildFly cartridge I posted is the way to go.