How to provide a custom broker.xml to an ActiveMQ Artemis broker instance

How can I provide a custom broker.xml to my instance of ActiveMQ Artemis? The standard setup generates the configuration files in the broker instance's etc directory, but I wish to mount my own config files.

If you're starting the broker with the artemis run command, you can pass the location of a custom bootstrap.xml, e.g.:
$ ./artemis run xml:/path/to/myBootstrap.xml
The bootstrap.xml will then indicate where to find a custom broker.xml.
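For reference, the element in bootstrap.xml that points at broker.xml is the server element's configuration attribute; a minimal sketch (the path is a placeholder, and a real instance's bootstrap.xml carries additional elements):
<broker xmlns="http://activemq.apache.org/schema">
   <jaas-security domain="activemq"/>
   <server configuration="file:/path/to/myBroker.xml"/>
</broker>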
You can also use --broker to override the location of broker.xml read from bootstrap.xml, e.g.:
$ ./artemis run --broker /path/to/myBroker.xml

Related

How to configure ActiveMQ Artemis as a broker in JBoss 4?

I configured a standalone ActiveMQ 5.16.1 to be used as a broker from a JBoss 4.2.3 server, and my Camel routes are able to receive messages sent to this broker from another component, a ServiceMix adapter.
I replaced the standalone ActiveMQ 5.16.1 with an ActiveMQ Artemis 2.17.0 broker, and besides a few changes to the subscriptions the whole system is working.
I am aware that ActiveMQ 5.16.1 and ActiveMQ Artemis 2.17.0 are quite different, and I am trying to replace the activemq-rar-5.4.3.rar downloaded from here with the artemis-rar-2.17.0.rar downloaded from here. This .rar file is deployed under $JBOSS_HOME/server/default/deploy together with an *-ds.xml file.
In the *-ds.xml file two connection factories are declared, one for queues and one for topics. This file references META-INF/ra.xml inside activemq-rar-5.4.3.rar, which is where the connection factories are defined.
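For illustration, the queue half of the *-ds.xml looks roughly like this (the JNDI name and broker URL are placeholders, not my exact values):
<connection-factories>
   <tx-connection-factory>
      <jndi-name>activemq/QueueConnectionFactory</jndi-name>
      <xa-transaction/>
      <rar-name>activemq-rar-5.4.3.rar</rar-name>
      <connection-definition>javax.jms.QueueConnectionFactory</connection-definition>
      <config-property name="ServerUrl" type="java.lang.String">tcp://localhost:61616</config-property>
   </tx-connection-factory>
</connection-factories>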
The problem is that even though artemis-rar-2.17.0.rar also contains a META-INF/ra.xml file, it's neither obvious nor documented what to configure and how to configure it.
I couldn't find an example or any documentation about this.

Add an acceptor and start it without rebooting the broker

I have an embedded Artemis broker, version 2.16.0.
Is there a way to add an acceptor and start it without having to reboot the broker?
For example, it is possible to create a queue or address via ActiveMQServerControl.
Or maybe I can add it to broker.xml and then restart some service so that the acceptor starts?
Yes, you can add an acceptor to an embedded broker at runtime and start it. Use something like this:
ActiveMQServer server = ...; // your already-running embedded broker
// createAcceptor registers the acceptor with the remoting service,
// and start() opens the socket immediately, with no broker restart
server.getRemotingService().createAcceptor("myAcceptor", "tcp://127.0.0.1:61617").start();
It is possible to add/change certain things in broker.xml at runtime, but an acceptor is not one of them. See the documentation for more details on that.
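For completeness, a self-contained sketch assuming the broker was embedded via EmbeddedActiveMQ (the acceptor name and URL here are illustrative):
import org.apache.activemq.artemis.core.server.ActiveMQServer;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class AddAcceptorAtRuntime {
    public static void main(String[] args) throws Exception {
        // Boot an embedded broker from the broker.xml found on the classpath
        EmbeddedActiveMQ embedded = new EmbeddedActiveMQ();
        embedded.start();

        ActiveMQServer server = embedded.getActiveMQServer();

        // Add a second acceptor while the broker is running; no restart needed
        server.getRemotingService().createAcceptor("runtimeAcceptor", "tcp://127.0.0.1:61617").start();

        // ... later, stop the broker cleanly
        embedded.stop();
    }
}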

How to run Kafka-connect-replicator in distributed mode?

I want to replicate data from one system to another using Confluent's Replicator. I am using two Ubuntu 18.04 systems, where one acts as the source and the other as the destination.
I tried to run kafka-connect-replicator in distributed mode, changing the following configurations:
In confluent/etc/kafka/server.properties I made the following changes
SOURCE
> advertised.listeners=PLAINTEXT://source.ip:9092
DESTINATION
> advertised.listeners=PLAINTEXT://destination.ip:9092
In confluent/etc/kafka-connect-replicator/replicator.connect.distributed.properties I made the following changes
- group.id=connect-replicator
The group.id is the same on the source and destination systems.
SOURCE
- bootstrap.servers=destination.ip:9092, source.ip:9092
DESTINATION
- bootstrap.servers=destination.ip:9092, source.ip:9092
In confluent/etc/kafka-connect-replicator/quickstart-replicator.properties I changed the following configurations
SOURCE
name=replicator-source
connector.class=io.confluent.connect.replicator.ReplicatorSourceConnector
# source cluster connection info
src.kafka.bootstrap.servers=source.ip:9092
# Set to use direct connection to Zookeeper by Replicator on the source
src.zookeeper.connect=localhost:2181
# destination cluster connection info
dest.kafka.bootstrap.servers=destination.ip:9092
# Set to use direct connection to Zookeeper by Replicator on the destination
dest.zookeeper.connect=destination.ip:2181
# configure topics to replicate
topic.whitelist= test-topic
topic.rename.format=${topic}.replica
DESTINATION
name=replicator-source
connector.class=io.confluent.connect.replicator.ReplicatorSourceConnector
# source cluster connection info
src.kafka.bootstrap.servers=source.ip:9092
# Set to use direct connection to Zookeeper by Replicator on the source
src.zookeeper.connect=source.ip:2181
# destination cluster connection info
dest.kafka.bootstrap.servers=destination.ip:9092
# Set to use direct connection to Zookeeper by Replicator on the destination
dest.zookeeper.connect=destination.ip:2181
# configure topics to replicate
topic.whitelist= test-topic
topic.rename.format=${topic}.replica
Then I created the topic on the source system and launched the connector using the command below:
PATH_TO_CONFLUENT> sudo ./bin/connect-distributed ./etc/kafka-connect-replicator/replicator-connect-distributed.properties ./etc/kafka-connect-replicator/quickstart-replicator.properties
After this I produced data to the topic on the source system and tried to consume it on the destination system under the renamed topic ({topic}.replica), but there is no such topic to consume from.
It's not clear what errors you're having, but here are some notes.
connect-distributed takes only one properties file, not two. In distributed mode you HTTP POST the connector properties to the Connect cluster as JSON; you don't load a connector properties file at cluster startup. The quickstart file is meant to be used with connect-standalone.
The JSON would look like
{"name": "your-replicator-name", "config": {
"src.kafka.bootstrap.servers": "...",
...
}
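You'd then submit it to the Connect worker's REST API, e.g. assuming the default port 8083 and the JSON saved as replicator.json (both assumptions):
$ curl -X POST -H "Content-Type: application/json" --data @replicator.json http://localhost:8083/connectors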
./etc/kafka/connect-distributed.properties should be the starting point for running any Connect or Replicator cluster in distributed mode, although there may be similar configurations in replicator-connect-distributed.properties.
bootstrap.servers should only ever point to a single cluster. The source and destination are specified separately via src.kafka.bootstrap.servers and dest.kafka.bootstrap.servers.
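As a sketch of the corrected split (host addresses follow the placeholders used in the question):
# Connect worker config (connect-distributed.properties): one cluster only,
# typically the destination
bootstrap.servers=destination.ip:9092
# Replicator connector config (the JSON you POST): both clusters, split by role
"src.kafka.bootstrap.servers": "source.ip:9092",
"dest.kafka.bootstrap.servers": "destination.ip:9092"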

How do I change the logging at runtime in the ActiveMQ Artemis broker

When triaging network or other issues, I would like to enable more verbose logging in the ActiveMQ Artemis broker (2.6.1) without restarting it.
Currently I'm restarting the broker after enabling/disabling logging in logging.properties:
logger.level=DEBUG
In ActiveMQ 5.x there is a JMX operation exposed for this (as mentioned in https://activemq.apache.org/how-do-i-change-the-logging). I couldn't find a similar one for Artemis.
The version of ActiveMQ Artemis you're using doesn't support reloading logging configuration at runtime. This functionality was implemented via ARTEMIS-2121 so you'll need to move to 2.6.4 or 2.7.0 to get it.
Once you update your logging.properties the broker will reload it and log a message indicating that the logging configuration was reloaded. By default the reload may take up to 5 seconds, based on configuration-file-refresh-period in broker.xml, which defaults to 5000 milliseconds.
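For reference, that refresh period lives in the core section of broker.xml; a minimal sketch (the value shown is the default):
<core xmlns="urn:activemq:core">
   <!-- how often, in milliseconds, the broker re-checks its configuration files -->
   <configuration-file-refresh-period>5000</configuration-file-refresh-period>
   ...
</core>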

Kafka broker is not available from localhost

I have installed kafka_2.11-1.1.0 and set advertised listener to advertised.listeners=PLAINTEXT://<my-ip>:9092 (in $KAFKA_HOME/config/server.properties).
I can connect and write to my Kafka using Java code, and I can see my cluster via Kafka Tool from another server, but I can't write messages to my topic from the local machine (the one the Kafka cluster is installed on).
I have also tried setting the listeners value to listeners=PLAINTEXT://:9092, but there was no change. What should I do to make Kafka reachable and writable from both outside and inside the host?
In server.properties, use the following two properties. listeners is the socket the broker binds to, while advertised.listeners is the address the broker hands back to clients in its metadata:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://<your ip>:9092
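To sanity-check connectivity from both inside and outside the host, a minimal producer sketch (the broker address placeholder matches the property above; the topic name is illustrative):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Use the advertised address rather than localhost so the test
        // exercises the same route external clients take
        props.put("bootstrap.servers", "<your ip>:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Block on the send so connection problems surface as exceptions
            producer.send(new ProducerRecord<>("test-topic", "key", "hello")).get();
        }
    }
}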
I finally solved the issue by changing my code's org.apache.kafka library from version 1.1.0 to version 2.1.0.
Note that all of these libraries were pulled in from mvnrepository.com.
Also, our Kafka producer and consumer code was patterned on this article:
https://dzone.com/articles/kafka-producer-and-consumer-example.
Have a look at the following links; they may be helpful for your scenario:
Kafka access inside and outside docker
Kafka Listeners - Explained