I configured a standalone ActiveMQ 5.16.1 broker to be used from a JBoss 4.2.3 server, and my Camel routes are able to get messages sent from another component (a ServiceMix adapter) to this broker.
I then replaced the standalone ActiveMQ 5.16.1 broker with an ActiveMQ Artemis 2.17.0 broker, and apart from a few changes to the subscriptions the whole system is working.
I am aware that ActiveMQ 5.16.1 and ActiveMQ Artemis 2.17.0 are quite different, and I am trying to replace the activemq-rar-5.4.3.rar downloaded from here with the artemis-rar-2.17.0.rar downloaded from here. This .rar file is deployed under $JBOSS_HOME/server/default/deploy together with an *-ds.xml file.
In the *-ds.xml file two connection factories are declared, one for queues and one for topics. This file references the META-INF/ra.xml file contained in activemq-rar-5.4.3.rar, and it is in that ra.xml that the connection factories are defined.
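For context, the ActiveMQ 5.x *-ds.xml looks roughly like this (the JNDI names, broker URL, and property names here are illustrative; the connection-definition values have to match the connection definitions declared in the RAR's META-INF/ra.xml):

    <?xml version="1.0" encoding="UTF-8"?>
    <connection-factories>
      <!-- pooled, XA-capable factory for queues -->
      <tx-connection-factory>
        <jndi-name>activemq/QueueConnectionFactory</jndi-name>
        <xa-transaction/>
        <rar-name>activemq-rar-5.4.3.rar</rar-name>
        <connection-definition>javax.jms.QueueConnectionFactory</connection-definition>
        <config-property name="ServerUrl" type="java.lang.String">tcp://localhost:61616</config-property>
      </tx-connection-factory>
      <!-- same idea for topics -->
      <tx-connection-factory>
        <jndi-name>activemq/TopicConnectionFactory</jndi-name>
        <xa-transaction/>
        <rar-name>activemq-rar-5.4.3.rar</rar-name>
        <connection-definition>javax.jms.TopicConnectionFactory</connection-definition>
        <config-property name="ServerUrl" type="java.lang.String">tcp://localhost:61616</config-property>
      </tx-connection-factory>
    </connection-factories>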
The problem is that even though the artemis-rar-2.17.0.rar also contains a META-INF/ra.xml file, it is neither obvious nor documented what to configure and how to configure it.
I couldn't find an example or any documentation about this.
Related
I am trying to install ActiveMQ Browser, and I want to connect it to my ActiveMQ Artemis server. How do we configure that?
I assume you're talking about this ActiveMQ Browser GUI tool.
If that assumption is correct then there's no way to integrate it with ActiveMQ Artemis as it's hard-coded to use the specific JMX management beans from ActiveMQ 5.x.
I recommend you use the ActiveMQ Artemis web console. It has a rich set of functionality that should cover most of the use-cases you're interested in. Among other things, it will allow you to:
Send new messages to addresses.
Delete messages.
Move messages to another address.
Create or delete addresses & queues.
Shut down the broker.
etc.
On the Kafka-manager GitHub page it is written that:
The minimum configuration is the zookeeper hosts which are to be used for kafka manager state. This can be found in the application.conf file in conf directory. The same file will be packaged in the distribution zip file; you may modify settings after unzipping the file on the desired server.

    kafka-manager.zkhosts="my.zookeeper.host.com:2181"

You can specify multiple zookeeper hosts by comma delimiting them, like so:

    kafka-manager.zkhosts="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"

Alternatively, use the environment variable ZK_HOSTS if you don't want to hardcode any values.

    ZK_HOSTS="my.zookeeper.host.com:2181"
So my questions are:
Does Kafka-manager already contain Zookeeper when I download it?
Should I install Zookeeper for Kafka-Manager separately, or use the Zookeeper already installed for Apache Kafka?
How many Zookeeper instances are required for Kafka-Manager?
If I should install a Zookeeper dedicated to Kafka-Manager, is it okay to install it on the same machine where Kafka-Manager is installed, or should I create another Zookeeper cluster on different machines?
I wonder what the best practice is.
Does Kafka-manager already contain Zookeeper when I download it?
No. It's just a web application. You can use the Zookeeper that's used by Kafka, though.
That should answer the rest of your question...
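For example, assuming Kafka's existing Zookeeper ensemble runs on zk1/zk2 (hypothetical hostnames), you could point Kafka-manager at that same ensemble:

    # conf/application.conf -- reuse the ensemble Kafka already uses
    kafka-manager.zkhosts="zk1.example.com:2181,zk2.example.com:2181"

    # or via the environment, without editing the file
    export ZK_HOSTS="zk1.example.com:2181,zk2.example.com:2181"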
In case of network issues, or while triaging some other issue, I would like to enable more verbose logging in the ActiveMQ Artemis broker (2.6.1) without restarting the broker.
Currently I'm restarting the broker after enabling/disabling logging in logging.properties:
logger.level=DEBUG
In ActiveMQ 5.x there is a JMX operation exposed for this (as mentioned in https://activemq.apache.org/how-do-i-change-the-logging). I couldn't find a similar one for Artemis.
The version of ActiveMQ Artemis you're using doesn't support reloading logging configuration at runtime. This functionality was implemented via ARTEMIS-2121 so you'll need to move to 2.6.4 or 2.7.0 to get it.
Once you update your logging.properties the broker will reload it and a message will be logged that the logging configuration was reloaded. By default it may take up to 5 seconds to reload (based on the configuration-file-refresh-period in broker.xml which defaults to 5000 milliseconds).
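As a sketch (assuming the stock logger names shipped in the broker's etc/logging.properties), after upgrading you could raise the level of the core server logger while the broker is running:

    # etc/logging.properties -- edited in place, no restart needed on 2.6.4+
    logger.org.apache.activemq.artemis.core.server.level=DEBUG

and, if you want faster pickup, tune the refresh period in broker.xml:

    <core xmlns="urn:activemq:core">
      <!-- how often reloadable files like logging.properties are re-checked (ms) -->
      <configuration-file-refresh-period>5000</configuration-file-refresh-period>
      ...
    </core>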
When I create a source or sink connector using Confluent Control Center, where does it save the settings related to that connector? Are there files I can browse? We are planning to create 50+ connectors, and at some point we'll need to copy them from one environment to another, so I was wondering if there is an easy way to do that.
Kafka Connect in distributed mode uses Kafka topics for storing configuration.
Kafka Connect supports a REST API. You can use this for viewing existing connector configurations, creating new ones (including programmatically/automatically for 50+ new connectors), starting/stopping connectors, etc.
The REST API is documented here.
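For instance (hypothetical hostnames and connector name; the default REST port is 8083), you could export a connector's configuration from one environment and recreate it in another:

    # list the connectors on the source cluster
    curl http://connect-src:8083/connectors

    # dump one connector's configuration to a file
    curl http://connect-src:8083/connectors/my-jdbc-source/config > my-jdbc-source.json

    # create or update the same connector on the target cluster
    curl -X PUT -H "Content-Type: application/json" \
         --data @my-jdbc-source.json \
         http://connect-dst:8083/connectors/my-jdbc-source/config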
Kafka Connect distributed mode is started with a property file. That property file defines a "config topic".
The connectors you're able to load, however, are not stored there - that's only for the running source/sink configurations.
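As a sketch, the relevant part of a connect-distributed.properties might look like this (the topic names here are just common conventions):

    bootstrap.servers=kafka1:9092
    group.id=connect-cluster
    # topics where the Connect cluster persists connector configs, offsets, and status
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status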
The classes themselves are bundled as JAR files in the classpaths of the individual Connect workers, and Control Center currently has no way of provisioning new Connect classes. In other words, you must use something like Ansible, or manually connect to each worker, download the connector you want, and extract it next to the other connectors.
For example, let's pretend you wanted the Syslog connector.
You'd already have folders for these under /usr/share/java in the Confluent installation:
kafka-connect-hdfs
kafka-connect-jdbc
...
So, you download or build that Syslog connector, make a kafka-connect-syslog folder, and drop all necessary jar libraries there.
Once you do this for all Connect instances, you'll also need to restart the Kafka Connect process on those machines.
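A sketch of those steps on one worker (paths and the service name are illustrative and vary by installation):

    # drop the connector jars next to the existing ones
    mkdir -p /usr/share/java/kafka-connect-syslog
    cp kafka-connect-syslog/*.jar /usr/share/java/kafka-connect-syslog/

    # restart the worker so it picks up the new plugin
    systemctl restart confluent-kafka-connect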
Once Control Center connects back to the Connect server, you'll be able to configure your new Connect classes.
From what I can see in the documentation, a Redis instance is always needed for Spring Cloud Dataflow to work.
Is it also possible to work with different message broker, e.g. RabbitMQ?
How would one specify a different message broker during startup?
With the recent 1.0.0.M3 release, when using the Local server, we load Kafka-based OOTB applications by default.
If you'd like to switch from kafka to rabbit, you can do so by passing --binder=rabbit as a command-line argument when starting the spring-cloud-dataflow-server-local JAR (and be sure to start the RabbitMQ server).
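For example (JAR name per the release you downloaded):

    # start the local Data Flow server against RabbitMQ instead of Kafka
    java -jar spring-cloud-dataflow-server-local-1.0.0.M3.jar --binder=rabbit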