Cluster in ActiveMQ Artemis

I am new to ActiveMQ Artemis, and I am trying to understand symmetric clusters in ActiveMQ Artemis.
Here is the example I am trying to understand.
I am getting the list of topic messages and queue messages that are consumed from a cluster node. How can I get information about the node, i.e. which node is returning this information (queue message/topic message)?

Artemis doesn't add any meta-data to the message to indicate which cluster node it's coming from. Typically a cluster is comprised of interchangeable/indistinguishable nodes so it doesn't actually matter where the message comes from.
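If you genuinely need the origin, a common workaround is to stamp it at the application level. Here is a minimal JMS 2.0 sketch, assuming a NODE_NAME environment variable and an exampleQueue destination (both placeholders): the producer sets an originNode property that consumers can read back with getStringProperty("originNode").

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class OriginTaggingProducer {
    public static void main(String[] args) {
        // NODE_NAME and exampleQueue are assumptions for this sketch.
        String nodeName = System.getenv().getOrDefault("NODE_NAME", "unknown");
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = cf.createContext()) {
            JMSProducer producer = context.createProducer();
            // Application-level property; the broker itself adds nothing like this.
            producer.setProperty("originNode", nodeName);
            producer.send(context.createQueue("exampleQueue"), "hello");
        }
    }
}
```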

Related

Consume directly from ActiveMQ Artemis replica

In a cluster scenario using HA/Data replication feature is there a way for consumers to consume/fetch data from a slave node instead of always reaching out to the master node (master of that particular queue)?
If you think about scalability, having all consumers call a single node responsible to be the master of a specific queue means all traffic goes to a single node.
Kafka allows consumers to fetch data from the closest node if that node contains a replica of the leader, is there something similar on ActiveMQ?
In short, no. Consumers can only consume from an active broker, and slave brokers are not active; they are passive.
If you want to increase scalability you can add additional brokers (or HA broker pairs) to the cluster. That said, I would recommend careful benchmarking to confirm that you actually need additional capacity before increasing your cluster size. A single ActiveMQ Artemis broker can handle millions of messages per second depending on the use-case.
As I understand it, Kafka's semantics are quite different from a "traditional" message broker like ActiveMQ Artemis so the comparison isn't particularly apt.
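For completeness, here is a hedged sketch of how a JMS client can be pointed at several cluster members at once using the Artemis multi-host URL syntax (the broker host names and queue name are placeholders). The client picks one listed member for its initial connection and can reconnect to another, so adding brokers (or HA pairs) spreads consumer connections across the cluster:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ClusterConsumer {
    public static void main(String[] args) {
        // List the active brokers of the cluster; reconnectAttempts=-1 retries forever.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "(tcp://broker1:61616,tcp://broker2:61616)?reconnectAttempts=-1");
        try (JMSContext context = cf.createContext()) {
            JMSConsumer consumer = context.createConsumer(context.createQueue("exampleQueue"));
            System.out.println("Received: " + consumer.receiveBody(String.class, 5000));
        }
    }
}
```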

Which JMX metrics give the health of a Kafka cluster?

I want to know whether a Kafka broker is up or not. I have enabled JMX in Kafka but couldn't find any MBean that provides the status of Kafka. Any ideas?
There are a couple of things to check for the health of a cluster, each of which is an individual MBean and will need to be aggregated over the entire cluster:
Is there exactly one active controller?
Are there any out-of-sync (under-replicated) replicas?
You may also want to externally port-check the brokers from the environments you want to produce and consume from. A minimal JMX check against these two MBeans is sketched below.
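In this sketch, kafka-broker:9999 is a placeholder JMX endpoint; the MBean names are the standard Kafka ones. ActiveControllerCount should sum to exactly 1 across the cluster, and UnderReplicatedPartitions should be 0 on every broker.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KafkaHealthCheck {
    public static void main(String[] args) throws Exception {
        // Adjust host/port to your broker's JMX endpoint.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://kafka-broker:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // 1 on the current controller, 0 on every other broker.
            Object controllerCount = mbs.getAttribute(new ObjectName(
                    "kafka.controller:type=KafkaController,name=ActiveControllerCount"),
                    "Value");

            // Should be 0 on a healthy broker.
            Object underReplicated = mbs.getAttribute(new ObjectName(
                    "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions"),
                    "Value");

            System.out.println("ActiveControllerCount: " + controllerCount);
            System.out.println("UnderReplicatedPartitions: " + underReplicated);
        }
    }
}
```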

How do I set up Elastic Node APM distributed tracing to work with Kafka and multiple Node services?

I'm using Kafka as a queue, with Node services producing messages to and consuming messages from Kafka topics using Kafka-Node.
I've been using a home-brewed distributed tracing solution, but now we are moving to the Elastic APM.
This seems to be tailored to HTTP servers, but how do I configure it to work with Kafka?
I want to be able to track transactions like the following: Service A sends an HTTP request to Service B, which produces it to Kafka Topic C, from which it is consumed by Service D, which puts some data into Kafka Topic E, from which it is consumed by Service B.
I worked with the Elastic APM team, who had just rolled out this package: https://www.npmjs.com/package/elastic-apm-node
The directions are pretty self-explanatory, and it works like a charm.
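For readers curious what the agent does under the hood: the general pattern (not specific to the Node package above) is to carry the trace context across the broker in a message header, so the consumer's agent can stitch the two sides into one distributed transaction. A hand-rolled illustration with the plain Java kafka-clients API, using a sample W3C traceparent value and a placeholder topic name:

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TracedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("topic-c", "payload");
            // What an APM agent does for you, done by hand here: propagate the
            // current trace context in a header so the consumer can continue
            // the same distributed trace. Sample value from the W3C spec.
            record.headers().add("traceparent",
                    "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
                            .getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}
```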

How to permanently remove a broker from Kafka cluster?

How do I permanently remove a broker from a Kafka cluster?
Scenario:
I have a stable cluster of 3 brokers.
I temporarily added a fourth broker that successfully joined the cluster. The controller returned metadata indicating this broker was part of the cluster.
However, I never rebalanced partitions onto this broker, so this broker #4 was never actually used.
I later decided to remove this unused broker from the cluster. I shut down the broker successfully, and ZooKeeper /brokers/ids no longer lists broker #4.
However, when our application code connects to any Kafka broker and fetches metadata, we get a broker list that includes this deleted broker.
How do I indicate to the cluster that this broker has been permanently removed from the cluster and not just a transient downtime?
Additionally, what's happening under the covers that causes this?
I'm guessing that when I connect to a broker and ask for metadata, the broker checks its local cache for the controller ID, contacts the controller, and asks it for the list of all brokers. Then the controller checks its cached list of brokers and returns the list of all brokers known to have belonged to the cluster at any point in time.
I'm guessing this happens because it's not certain whether the dead broker is permanently removed or just experiencing transient downtime. So I'm thinking we just need to indicate to the controller that it needs to reset its list of known cluster brokers to the live brokers in ZooKeeper. But I would not be surprised if something in my mental model is incorrect.
This is for Kafka 0.8.2. I am planning to upgrade to 0.10 soon, so if 0.10 handles this differently, I'd also love to know that.
It looks like this is most likely due to this bug in Kafka 0.8, which was fixed in Kafka 0.9.
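On newer versions, a quick way to check which brokers the cluster currently reports is the AdminClient (available from Kafka 0.11 onward, so not applicable to 0.8.2 itself, but handy after the planned upgrade). A minimal sketch, with broker1:9092 as a placeholder bootstrap address:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ListBrokers {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Lists the brokers the cluster metadata currently advertises.
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.printf("id=%d host=%s port=%d%n",
                        node.id(), node.host(), node.port());
            }
        }
    }
}
```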

How to obtain the number of messages in a clustered queue in JBoss EAP6/HornetQ

I am trying to count messages in a HornetQ clustered queue on JBoss EAP 6.4 (domain mode).
Obtaining the number of messages in a particular HornetQ instance is not a problem (here is the way I do it), but what I actually want is the cumulative/total number of messages in a given queue across the whole cluster.
Right now, when I send 24604 messages to a given queue, they are nicely distributed across the 3 nodes:
Node A: 8201 messages
Node B: 8202 messages
Node C: 8201 messages
Is there a way to count all messages of a given queue in a cluster?
I finally found a solution: obtain the total number of messages in the cluster by invoking a broadcast EJB call on all cluster members, where each member gets its local message count from an in-VM JMS sender (a sketch of that per-node step follows the links below).
I have described it here:
http://jeefix.com/how-to-invoke-broadcast-ejb-at-all-jboss-eap6-ejb-cluster-members/
http://jeefix.com/managing-hornetq-queues-via-jms-api/
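For reference, here is a hedged sketch of the per-node counting step using HornetQ's JMS management API, assuming a queue deployed as MyQueue (a placeholder) and the default hornetq.management management address. Each cluster member runs this locally (e.g. over an in-VM connection), and the broadcast EJB call sums the results:

```java
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueRequestor;
import javax.jms.QueueSession;
import javax.jms.Session;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.management.JMSManagementHelper;

public class LocalQueueCounter {
    // Returns the message count of MyQueue on the node this connection targets.
    public static long countLocalMessages(QueueConnection connection) throws Exception {
        QueueSession session =
                connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        // Default management address; adjust if reconfigured for your server.
        Queue managementQueue = HornetQJMSClient.createQueue("hornetq.management");
        QueueRequestor requestor = new QueueRequestor(session, managementQueue);
        connection.start();
        Message request = session.createMessage();
        // Query the messageCount attribute of the jms.queue.MyQueue resource.
        JMSManagementHelper.putAttribute(request, "jms.queue.MyQueue", "messageCount");
        Message reply = requestor.request(request);
        return ((Number) JMSManagementHelper.getResult(reply)).longValue();
    }
}
```

Summing the value returned by each member then gives the cluster-wide total (8201 + 8202 + 8201 = 24604 in the example above).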