Diverts deleted when restarting ActiveMQ Artemis - activemq-artemis

I have searched and haven't found anything about this online or in the manual.
I have set up addresses and use both multicast to several queues and anycast to a single queue (all durable queues). To these I have connected diverts created via the management API at runtime.
The diverts work great when sending messages, BUT when I restart the ActiveMQ Artemis instance the diverts are deleted. Everything else is in place; just the diverts are gone.
Any ideas on how to keep diverts after a restart?

Diverts created via the management API at runtime are volatile. If you wish to have a divert which survives a broker restart, you should add the desired divert configuration to broker.xml.
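For reference, a minimal sketch of what such a durable divert could look like; the divert name, address, and forwarding address below are placeholders, and the divert element belongs in the core section of broker.xml:

    <!-- Sketch only: all names here are placeholders for your own addresses. -->
    <divert name="my-divert">
       <address>source-address</address>
       <forwarding-address>target-address</forwarding-address>
       <exclusive>false</exclusive>
    </divert>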
Of course, the current behavior may not work for your use-case. If that's true then I would encourage you to open a "Feature Request" JIRA at the Artemis JIRA project. Furthermore, if you're really committed to seeing the behavior change you can download the code, make the necessary changes, and submit a pull-request (or attach a patch to the JIRA). Check out the Artemis Hacking Guide for help getting started.

Related

Resource announcement - Message queuing registry on Kubernetes

I have an application which makes use of RabbitMQ messages - it sends messages. Other applications can react to these messages, but they need to know which messages are available on the system and what they semantically mean.
My message queuing system is RabbitMQ, and both RabbitMQ and the applications are hosted and administered using Kubernetes.
I am aware of:
https://kubemq.io/: That seems to be an alternative to RabbitMQ?
https://knative.dev/docs/eventing/event-registry/: also an alternative to RabbitMQ, but with a meta-layer approach to integrate existing event sources? The documentation is not clear to me.
Is there a general-purpose "MQ-interface" service available, a solution where I can register which messages are sent by an application, how the payload is technically and semantically structured, which serialization format is used, and under what circumstances errors will be sent?
Can I do this in Kubernetes YAML files?
RabbitMQ does not have this in any kind of generic fashion, so you would have to write it yourself. Rabbit messages are just a series of bytes; there is no schema registry.

How to add a health check for topics in the KafkaStreams API

I have a critical Kafka application that needs to be up and running all the time. The source topics are created by Debezium Kafka Connect for the MySQL binlog. Unfortunately, many things can go wrong with this setup. A lot of the time the Debezium connectors fail and need to be restarted, and so do my apps (because they just hang and stop consuming without throwing any exception). My manual way of detecting the failure is to check the Kibana logs and then consume the suspicious topic through the terminal. I can mimic this in code, but it is obviously far from best practice. I wonder if the KafkaStreams API offers the ability to do such health checks, and to check other parts of the Kafka cluster?
Another point that bothers me is whether I can keep the stream alive and rejoin the topics when the connectors are up again.
You can check the Kafka Streams state to see if it is rebalancing/running, which would indicate healthy operation. Although, if no data is getting into the topology, I would assume there would be no errors happening, so you would then need to look up the health of your upstream dependencies.
Overall, it sounds like you might want to invest some time into monitoring tools like Consul or Sensu, which can run local service health checks and send out alerts when services go down, or at the very least Elasticsearch alerting.
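For the Streams side, a minimal health-check sketch (assuming 'streams' is your KafkaStreams instance; it only uses the public state() and setStateListener() methods):

    import org.apache.kafka.streams.KafkaStreams;

    // Sketch only: 'streams' is assumed to be your KafkaStreams instance built from your topology.
    // Register a listener before calling streams.start() to be notified of state changes:
    streams.setStateListener((newState, oldState) -> {
        if (newState == KafkaStreams.State.ERROR) {
            // alert, restart, or flip a liveness/health flag here
        }
    });

    // At any point you can also poll the current state; RUNNING and REBALANCING
    // indicate healthy operation:
    KafkaStreams.State state = streams.state();
    boolean healthy = state == KafkaStreams.State.RUNNING
            || state == KafkaStreams.State.REBALANCING;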
As far as Kafka health checking goes, you can do that in several ways:
Are the broker and ZooKeeper processes running? (SSH to the node, check the processes)
Are the broker and ZooKeeper ports open? (use a socket connection)
Are there important JMX metrics you can track? (Metricbeat)
Can you find an active controller broker? (use AdminClient#describeCluster)
Is the required minimum number of brokers responding as part of the controller metadata? (which can also be obtained from the AdminClient)
Do the topics you use have the proper configuration (retention, min-isr, replication factor, partition count, etc.)? (again, use AdminClient; a sketch follows this list)
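A rough sketch of the AdminClient-based checks above (active controller and broker count); the bootstrap address is a placeholder:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.DescribeClusterResult;
    import org.apache.kafka.common.Node;
    import java.util.Collection;
    import java.util.Properties;

    public class ClusterHealthCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9092"); // placeholder address
            try (AdminClient admin = AdminClient.create(props)) {
                DescribeClusterResult cluster = admin.describeCluster();
                Node controller = cluster.controller().get();     // is there an active controller?
                Collection<Node> brokers = cluster.nodes().get();  // how many brokers responded?
                System.out.println("Controller: " + controller + ", brokers: " + brokers.size());
            }
        }
    }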

Kafka consumer groups suddenly stopped balancing messages among instances

We have a microservice architecture that communicates over Kafka on Confluent, where each service is assigned its own consumer group in order to balance message delivery between its multiple instances.
For example:
SERVICE_A_INSTANCE_1 (CONSUMER_GROUP_A)
SERVICE_A_INSTANCE_2 (CONSUMER_GROUP_A)
SERVICE_A_INSTANCE_3 (CONSUMER_GROUP_A)
SERVICE_B_INSTANCE_1 (CONSUMER_GROUP_B)
SERVICE_B_INSTANCE_2 (CONSUMER_GROUP_B)
When a message is emitted, it should be consumed by only one instance of each consumer group.
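For context, a minimal sketch of that setup with the plain Java client; every instance of SERVICE_A uses the same group.id, so each record should go to only one of them (the bootstrap address and topic name are placeholders):

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.util.Collections;
    import java.util.Properties;

    // Sketch only: placeholder broker address and topic name.
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker:9092");  // placeholder address
    props.put("group.id", "CONSUMER_GROUP_A");      // identical on every SERVICE_A instance
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("some-topic")); // placeholder topic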
This worked fine until two days ago. All of a sudden, each message is being delivered to all the instances, so each message is processed multiple times. Basically, the consumer groups stopped working and messages are no longer being distributed.
Important points:
We use Kafka as a PaaS from Confluent on GCP.
We tested this in a different environment and everything worked as expected.
No changes have been made to our consumers.
No changes have been made to the cluster on our part (we can't know whether Confluent changed something).
We suspect it might be a problem on Confluent's side or an update that is not compatible with our current configuration. Kafka 2.2.0 was recently released and it has some changes to consumer-group behavior.
We are currently working on migrating to AWS MSK to see if the issue persists.
Any ideas on what could be causing this?
TL;DR: We solved the issue by moving away from Confluent to our own Kafka cluster on GCP.
I will answer my own question since it's been a while and we have already solved this. Also, my insights might help others make more informed decisions on where to deploy their Kafka infrastructure.
Unfortunately, we could not get to the bottom of the problem with Confluent. It is most likely something on their side, because we simply migrated to our own self-managed instances on GCP and everything went back to normal.
Some important clarifications before my final thoughts and warnings about using Confluent as a managed Kafka service:
We think this is related to something that affected Node.js in particular. We tested external libraries in languages other than Node and the behavior was as expected. When testing with several of the most popular Node libraries, the problem persisted.
We did not have premium support with Confluent.
I cannot confirm that this issue is not our fault.
With all of those points in mind, our conclusion is that for companies that decide to use a managed service with Confluent, it's best to calculate costs with premium support included. Otherwise, Kafka turns into a completely closed black box, making it impossible to diagnose issues. In my personal opinion, the dependency on the Confluent team during a problem is so great that not having them ready to help when needed renders the service non-production-ready.

Messages delivered twice at same instant to AMQP message consumer (ActiveMQ Artemis)

I recently had to rollback from WF13 to WF11 due to a regression in one of the dependencies.
Now I am trying to get the AMQP protocol to work on WildFly 11's messaging system. I am running a high-availability setup with two nodes. Each of the nodes has a local message consumer. This message consumer connects via AMQP 1.0. I've added io.netty as a dependency to the org/apache/activemq/artemis/protocol/amqp module and updated org/apache/qpid to get the AMQP protocol to work (see also WFLY-7823). Now my AMQP message consumer works fine, but it seems to receive every message exactly twice, apparently even in the same frame. This happens on the same node (the other node receives messages through the bridge if the message isn't handled locally in the first place). So on one node and one queue consumer, I receive every message exactly twice, at the very same instant, before I even get to send an ACK/NACK for the first message I received.
I don't remember seeing this issue on WildFly 13.
Are there any known regressions regarding how messages are sent through the remote connectors? Perhaps an issue in the AMQP protocol? Or could it be a compatibility issue with the updated version of qpid?
For what it's worth, WildFly only uses ActiveMQ Artemis to fulfill its need for a JMS implementation (i.e. a traditional part of Java EE). I don't believe WildFly has any real interest in other protocols or APIs aside from JMS.
My understanding here is that if you need to support multi-protocol messaging use-cases you should likely be using a standalone broker. I believe this is why none of the other protocols which Artemis supports (e.g. AMQP, STOMP, MQTT, OpenWire) are enabled in WildFly or have documentation on how to enable them (aside from the occasional forum post).
It's also worth noting that WildFly 11 ships Artemis 1.5.5, which is 10 or so releases behind the latest version (i.e. 2.6.2). Much work has been done on the AMQP implementation in those releases, so I think you're best served by using a standalone version of Artemis rather than Artemis embedded in WildFly.

OpenShift message queue

I'd like to host apps on OpenShift that use a queue to communicate with each other.
One kind of app, producers, will put some data on the queue, and another type, consumers, will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ, and a consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and a consumer, and make JBoss's HornetQ available to producers. This sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it works if there are multiple consumer instances on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides showing you a link on how to use Wildfly. You can just use the Wildfly Cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide me some extra context I can try to enrich the answer a bit. I need to know what your problem is and what's not working.
If you just want to know how to configure Wildfly with HornetQ, the Wildfly cartridge I posted is the way to go.