I want to use ActiveMQ to create a broker that connects to another Mosquitto broker, so that ActiveMQ can receive messages from the Mosquitto broker.
What I have done so far:
integrated ActiveMQ with JBoss EAP 6.3
created an MQTT broker in ActiveMQ: http://activemq.apache.org/mqtt.html
But after I add a NetworkConnector in broker-config.xml:
<transportConnectors>
<transportConnector name="openwire" uri="tcp://localhost:61616"/>
<transportConnector name="mqtt" uri="mqtt://localhost:1883"/>
</transportConnectors>
<networkConnectors>
<networkConnector uri="static:(tcp://mosquitto_server_ip:1883)"/>
</networkConnectors>
the server throws an exception after starting:
"Network connection between vm://localhost#8 and
tcp:///mosquitto_server_ip:1883#42688 shutdown due to a remote error:
java.util.concurrent.TimeoutException"
I also tried connecting with "mqtt://...", but it still fails:
java.lang.IllegalArgumentException: Invalid connect parameters:
{wireFormat.host=0.0.0.0}
Does anyone know how to use JBoss ActiveMQ to connect to a Mosquitto broker?
This is not supported. The ActiveMQ network connector only works between ActiveMQ brokers using the native OpenWire protocol; MQTT is not supported. You would need to use something like Camel or some other bridging mechanism to support cross-broker communication between ActiveMQ and Mosquitto.
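If you go the Camel route, a minimal sketch might look like the following. This is only illustrative: the topic names and the Mosquitto address are placeholders, it assumes the camel-mqtt and activemq components are available in the broker's Camel context, and option names can differ between Camel versions.

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route id="mosquitto-to-activemq">
    <!-- subscribe to the remote Mosquitto broker over MQTT -->
    <from uri="mqtt:mosquittoIn?host=tcp://mosquitto_server_ip:1883&amp;subscribeTopicNames=sensors/#"/>
    <!-- republish into the local ActiveMQ broker -->
    <to uri="activemq:topic:sensors.in"/>
  </route>
</camelContext>
```

The route simply consumes from Mosquitto and forwards into the local broker, which sidesteps the OpenWire-only restriction on network connectors.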
Related
I wonder if it is possible to configure message redelivery on the client side. I have read the ActiveMQ Artemis docs and have not found any information about this feature, so I concluded that there is no way to configure message redelivery on the client side and that the only place to configure it is the broker.xml file. Am I right about that?
By the way, I can configure the connection to ActiveMQ Artemis using broker URL params or application.yml, since I am using Spring Boot 2.x.
ActiveMQ Artemis supports AMQP, STOMP, MQTT, OpenWire, etc. Many clients exist for these protocols written in lots of different languages across all kinds of platforms. Whether or not a given client supports client-side redelivery is really up to the client itself. You don't specify which client you're using so it's impossible to give you a specific yes/no answer.
However, I can say that ActiveMQ Artemis ships a JMS client implementation which uses the core protocol. That client does not support client-side redelivery. However, the OpenWire JMS client shipped with ActiveMQ "Classic" does support client-side redelivery, and it can be used with ActiveMQ Artemis as well.
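For example, the ActiveMQ "Classic" OpenWire client exposes its redelivery policy as `jms.redeliveryPolicy.*` options on the connection URL. A sketch of what that could look like in a Spring Boot 2.x application.yml using spring-boot-starter-activemq (host, port, and the specific values are placeholders):

```yaml
spring:
  activemq:
    # OpenWire URL pointing at the Artemis acceptor; the jms.redeliveryPolicy.*
    # options configure client-side redelivery in the ActiveMQ "Classic" client
    broker-url: failover:(tcp://localhost:61616)?jms.redeliveryPolicy.maximumRedeliveries=5&jms.redeliveryPolicy.initialRedeliveryDelay=1000&jms.redeliveryPolicy.useExponentialBackOff=true
```

Note this applies to the OpenWire JMS client, not to the core-protocol client that the Artemis Spring Boot starter uses by default.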
I am trying to use the Lenses MQTT source connector [https://docs.lenses.io/connectors/source/mqtt.html] with Confluent Kafka v5.4.
Following is my MQTT source connector properties file:
connector.class=com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector
connect.mqtt.clean=false
key.converter.schemas.enable=false
connect.mqtt.timeout=1000
value.converter.schemas.enable=false
name=kmd-source-4
connect.mqtt.kcql=INSERT INTO kafka-source-topic-2 SELECT * FROM ctt/+/+/location WITHCONVERTER=`com.datamountaineer.streamreactor.connect.converters.source.JsonSimpleConverter` WITHKEY(id)
value.converter=org.apache.kafka.connect.json.JsonConverter
connect.mqtt.service.quality=1
key.converter=org.apache.kafka.connect.json.JsonConverter
connect.mqtt.hosts=tcp://ip:1883
connect.mqtt.converter.throw.on.error=true
connect.mqtt.username=username
connect.mqtt.password=password
errors.log.include.messages=true
errors.log.enable=true
I am publishing messages from the UI-based MQTT client MQTT.fx to the MQTT topic 'ctt/+/+/location' and subscribing to those messages on the Kafka topic 'kafka-source-topic-2'. I am using RabbitMQ as my MQTT broker, and my Confluent Platform and RabbitMQ are on different VMs. I do not think using RabbitMQ instead of Mosquitto as the MQTT broker should be a problem. Whatever and whenever I publish from MQTT.fx, I successfully see the messages in MQTT.fx upon subscription. I had also set up the Confluent MongoDB source connector, and it works seamlessly.
But my problem is that the messages published on the MQTT topic only arrive on the mapped Kafka topic intermittently. What could be the reason? I do not see any error messages in the Kafka Connect logs. Are there any connection-related properties with respect to the MQTT broker that I need to specify in my MQTT source properties file? Are there any properties that must be included on the RabbitMQ broker side? Has anyone used the Lenses MQTT source and sink connectors and would like to suggest anything about them?
Your connect.mqtt.timeout is only 1 second?!? Intermittent messages suggest to me that your connector is timing out and has to re-establish its connection, and while it's busy doing that, MQTT messages are coming in but not making it to the connector because it is not subscribed to the broker at that moment. Try increasing your timeout to something like 60000 (1 minute) and see what happens. Is there any reason you need it to time out? RabbitMQ can handle connections that stay open for long periods of time with no traffic.
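Concretely, the change in the connector properties file would be along these lines (the values are illustrative, and the keep-alive property is an assumption that depends on your connector version):

```properties
# allow the MQTT connection to stay up instead of timing out every second
connect.mqtt.timeout=60000
# if your stream-reactor version supports it, a longer keep-alive helps too
connect.mqtt.keep.alive=60000
```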
Is it possible to connect to HornetQ server using ActiveMQ Artemis client libraries (1.5.x or 2.x)?
ActiveMQ Artemis has kept compatibility with HornetQ in that HornetQ clients can connect to an ActiveMQ Artemis broker. Compatibility has also been maintained so that newer ActiveMQ Artemis clients can connect to older ActiveMQ Artemis brokers. However, there are no tests that cover ActiveMQ Artemis clients connecting to a HornetQ broker. It may work, but there's no guarantee. The recommendation would be to simply continue using HornetQ clients to connect to HornetQ brokers.
I am trying to connect an Apache Artemis broker with an Amazon MQ broker to create a hybrid architecture. I have tried connecting ActiveMQ with Amazon MQ, and I could achieve that by using "network connectors" in the broker.xml file; it worked fine.
To connect the Amazon MQ and Artemis brokers, I have added the "bridge configuration" and the "connector" shown below to the Artemis broker.xml file:
<bridges>
<bridge name="my-bridge">
<queue-name>factory</queue-name>
<forwarding-address>machine</forwarding-address>
<filter string="name='rotor'"/>
<reconnect-attempts>-1</reconnect-attempts>
<user>admin</user>
<password>12345678</password>
<static-connectors>
<connector-ref>netty-ssl-connector</connector-ref>
</static-connectors>
</bridge>
</bridges>
<connectors>
<connector name="netty-ssl-connector">ssl://b-...c-1.mq.us-west-2.amazonaws.com:61617?sslEnabled=true;</connector>
</connectors>
I'm getting an exception: ssl schema not found.
So I'm trying to understand whether connecting the Artemis and Amazon MQ brokers works the same way as connecting ActiveMQ and Amazon MQ brokers (i.e. by changing the configuration in the broker.xml file). If so, what changes do I need to make to the configuration shown above?
ActiveMQ Classic (i.e. 5.x) and Amazon MQ use the OpenWire protocol to establish connections in a network of brokers. ActiveMQ Artemis supports clients using the OpenWire protocol. However, ActiveMQ Artemis uses its own "core" protocol for bridges and clustering. Therefore you won't be able to create a bridge from ActiveMQ Artemis to ActiveMQ Classic or Amazon MQ since those brokers don't understand the Artemis "core" protocol.
The ssl schema is used by OpenWire clients, not "core" clients. That is why you can't create an Artemis bridge using it.
If you want to integrate Artemis and Amazon MQ I'd recommend something like Camel or possibly the JMS bridge that ships with Artemis. You can see examples of both among the examples that ship with Artemis.
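As a sketch of the Camel option (the queue names match the bridge config above, but everything else is a placeholder: "artemis" and "amazonmq" would be JMS components wired to an Artemis connection factory and an OpenWire ActiveMQConnectionFactory for Amazon MQ, respectively):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route id="artemis-to-amazonmq">
    <!-- consume from the local Artemis queue -->
    <from uri="artemis:queue:factory"/>
    <!-- forward over OpenWire/SSL to the Amazon MQ broker -->
    <to uri="amazonmq:queue:machine"/>
  </route>
</camelContext>
```

Because each endpoint uses its own connection factory, each side speaks its native protocol and no cross-protocol bridge is required.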
I am using Kafka version 0.10.1. I connected the Kafka brokers and their clients via SSL and it's working fine. Now I have a question, given some constraints.
My constraints are:
No plaintext communication allowed.
The connection between Kafka brokers and their clients must be SSL.
The connection between Kafka brokers and ZooKeeper must use SASL (since ZooKeeper doesn't support SSL).
Since all inter-broker communication is set to SSL, my question is whether a SASL connection between ZooKeeper and the Kafka brokers is possible without enabling plaintext on the Kafka brokers.
Thanks in advance.
Yes, it is possible to set up a Kafka cluster with ZooKeeper meeting all the requirements you listed.
You'll need two listeners, SASL_SSL and SSL (no PLAINTEXT), in your Kafka config:
listeners=SASL_SSL://host.name:port,SSL://host.name:port
Set the inter-broker protocol to SSL:
security.inter.broker.protocol=SSL
I suggest you check the Security section of the Kafka documentation to see exactly what you need to do to get this working, including how to configure clients so they connect over SASL_SSL: http://kafka.apache.org/documentation/#security
It also contains a section about securing Zookeeper:
http://kafka.apache.org/documentation/#zk_authz
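Putting the pieces together, a server.properties sketch for this setup (hostnames, ports, paths, passwords, and the SASL mechanism are placeholders to adapt to your environment):

```properties
# client + inter-broker listeners: no PLAINTEXT anywhere
listeners=SASL_SSL://host.name:9093,SSL://host.name:9094
security.inter.broker.protocol=SSL

# broker keystore/truststore backing the TLS listeners
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit

# SASL mechanism for the SASL_SSL listener (credentials still travel over TLS)
sasl.enabled.mechanisms=PLAIN

# The SASL connection to ZooKeeper is configured separately, via a JAAS
# "Client" section passed to the broker JVM, e.g.
#   -Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf
```

The ZooKeeper SASL credentials live in the JAAS file rather than in server.properties, which is why no plaintext listener is needed on the broker side.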