Understanding Connectors in ActiveMQ Artemis

I am new to ActiveMQ Artemis.
I have read the documentation and found that connectors are used by a client to define how it connects to a server.
I have a broker.xml file which has the following piece of code:
<connectors>
   <connector name="netty-connector">tcp://0.0.0.0:61616</connector>
   <!-- connector to the server1 -->
   <connector name="server1-connector">tcp://0.0.0.0:9616</connector>
</connectors>
<!-- Acceptors -->
<acceptors>
   <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
</acceptors>
So here the acceptor is saying, "Hey, you can connect to me on port 61616, I am listening on it" (which makes sense to me).
But what about the role of the connector in this broker.xml?
The connector targets the same port (tcp://0.0.0.0:61616) as the acceptor.
I want to understand what the port mentioned in the connector means. Can someone please explain it?

Did you happen to read the documentation on this subject? There is a section titled "Understanding Connectors" which should answer most, if not all, of your questions. I'll quote the most salient parts:
Whereas acceptors are used on the server to define how we accept connections, connectors are used to define how to connect to a server.
A connector is used when the server acts as a client itself, e.g.:
When one server is bridged to another
When a server takes part in a cluster
In these cases the server needs to know how to connect to other servers. That's defined by connectors.
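To make this concrete, here is a sketch of how a cluster connection in broker.xml references connectors by name so the broker, acting as a client, knows how to reach its peers. The names mirror the ones in your config, but the hosts and the cluster-connection itself are illustrative, not taken from your file:

```xml
<connectors>
   <!-- this broker's own connector, which it tells other nodes about -->
   <connector name="netty-connector">tcp://localhost:61616</connector>
   <!-- how this broker reaches the other node -->
   <connector name="server1-connector">tcp://server1:9616</connector>
</connectors>

<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <static-connectors>
         <connector-ref>server1-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```

The acceptor answers "how do others connect to me," while the connectors answer "how do I (or others) connect out," which is why a connector often mirrors an acceptor on the target broker.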

Related

Configure ActiveMQ Artemis message redelivery on the client side

I wonder if it is possible to configure message redelivery on the client side. I have read the ActiveMQ Artemis docs and have not found any information about this feature, so I concluded that there is no way to configure message redelivery on the client side and that the only place to configure it is the broker.xml file. Am I right about that?
By the way, I can configure the connection to ActiveMQ Artemis by using broker URL params or via application.yml, since I am using Spring Boot 2.x.
ActiveMQ Artemis supports AMQP, STOMP, MQTT, OpenWire, etc. Many clients exist for these protocols written in lots of different languages across all kinds of platforms. Whether or not a given client supports client-side redelivery is really up to the client itself. You don't specify which client you're using so it's impossible to give you a specific yes/no answer.
However, I can say that ActiveMQ Artemis ships a JMS client implementation which uses the core protocol. That client does not support client-side redelivery. However, the OpenWire JMS client shipped with ActiveMQ "Classic" does support client-side redelivery, and it can be used with ActiveMQ Artemis as well.
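For example, if you switched to the OpenWire JMS client, the redelivery policy can be set via parameters on the broker URL. This is a sketch: it assumes the ActiveMQ "Classic" (OpenWire) Spring Boot starter rather than the Artemis one, and the values are illustrative:

```yaml
# application.yml - assumes the ActiveMQ "Classic" (OpenWire) client,
# not the Artemis core client; jms.redeliveryPolicy.* are applied
# client-side by the OpenWire connection factory
spring:
  activemq:
    broker-url: failover:(tcp://localhost:61616)?jms.redeliveryPolicy.maximumRedeliveries=5&jms.redeliveryPolicy.initialRedeliveryDelay=1000
```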

Add an acceptor and start it without rebooting the broker

I have embedded Artemis broker version 2.16.0.
Is there a way to add an acceptor and run it without having to reboot the broker?
For example, it is possible to create a queue or address in ActiveMQServerControl.
Or maybe I can add it to the broker.xml and then restart some services and the acceptor starts.
Yes, you can add an acceptor to an embedded broker at runtime and start it. Use something like this:
ActiveMQServer server;
...
server.getRemotingService().createAcceptor("myAcceptor", "tcp://127.0.0.1:61617").start();
It is possible to add/change certain things in broker.xml at runtime but an acceptor is not one of them. See the documentation for more details on that.

Connecting Artemis and Amazon MQ brokers

I am trying to connect an Apache Artemis broker with an Amazon MQ broker to create a hybrid architecture. I have tried connecting ActiveMQ with Amazon MQ, and I could achieve it by using "network connectors" in the broker.xml file, and it worked fine.
For connecting the Amazon MQ and Artemis brokers I have added the "bridge configuration" and the "connector" shown below to the Artemis broker.xml file:
<bridges>
   <bridge name="my-bridge">
      <queue-name>factory</queue-name>
      <forwarding-address>machine</forwarding-address>
      <filter string="name='rotor'"/>
      <reconnect-attempts>-1</reconnect-attempts>
      <user>admin</user>
      <password>12345678</password>
      <static-connectors>
         <connector-ref>netty-ssl-connector</connector-ref>
      </static-connectors>
   </bridge>
</bridges>
<connectors>
   <connector name="netty-ssl-connector">ssl://b-...c-1.mq.us-west-2.amazonaws.com:61617?sslEnabled=true;</connector>
</connectors>
I'm getting an exception: ssl schema not found.
So I'm trying to understand whether connecting the Artemis and Amazon MQ brokers is the same as connecting the ActiveMQ and Amazon MQ brokers (i.e. by changing the configuration in the broker.xml file). If so, what changes do I need to make to the configuration shown above?
ActiveMQ Classic (i.e. 5.x) and Amazon MQ use the OpenWire protocol to establish connections in a network of brokers. ActiveMQ Artemis supports clients using the OpenWire protocol. However, ActiveMQ Artemis uses its own "core" protocol for bridges and clustering. Therefore you won't be able to create a bridge from ActiveMQ Artemis to ActiveMQ Classic or Amazon MQ since those brokers don't understand the Artemis "core" protocol.
The ssl schema is used by OpenWire clients, not "core" clients. That is why you can't create an Artemis bridge using it.
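For reference, a "core" connector enables SSL with the tcp scheme plus the sslEnabled parameter rather than an ssl:// scheme. A sketch (the trust store values are illustrative, and as noted above this still won't let a core bridge talk to a broker that only speaks OpenWire):

```xml
<connectors>
   <!-- core connectors use tcp:// with sslEnabled=true, not ssl:// -->
   <connector name="netty-ssl-connector">tcp://b-...c-1.mq.us-west-2.amazonaws.com:61617?sslEnabled=true;trustStorePath=/path/to/truststore.jks;trustStorePassword=changeit</connector>
</connectors>
```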
If you want to integrate Artemis and Amazon MQ I'd recommend something like Camel or even possibly the JMS bridge that ships with Artemis. You can see examples of both in this example which ships with Artemis.

ArtemisMQ Connector

I'm new to ArtemisMQ and absolutely don't understand the point of connectors.
Why is a connector essential, since we already specify the acceptor of the broker server in broker.xml -> we know which port (the acceptor port) to send a request to if we want to connect to this server? Even if this server is part of a cluster, what is the role of a connector? There is also information in another part of the documentation about "Clusters", with these words about cluster connections:
The cluster is formed by each node declaring cluster connections to other nodes in the core configuration file broker.xml. When a node forms a cluster connection to another node, internally it creates a core bridge (as described in Core Bridges) connection between it and the other node, this is done transparently behind the scenes - you don't have to declare an explicit bridge for each node. These cluster connections allow messages to flow between the nodes of the cluster to balance load.
From documentation "Understanding Connectors":
connectors are used by a client to define how it connects to a server.
What does it mean "define how"?
I've already read another question about connectors, but it doesn't help me.
Additional questions:
Is a connector always the same as an acceptor? (I've downloaded some official examples, and all of them that I've seen have both the same acceptor and connector.)
What information does a connector encapsulate, if it only consists of host+port (and it is the same as the acceptor's, if we ignore that the acceptor host can be 0.0.0.0 or localhost)?
Why does a stand-alone broker have a connector, for example one created by default with ./artemis create?
What should we write in a connector?
Can you give a simple example where the acceptor and connector are different?
Two important points to note:
A connector is not essential depending on your use-case. You'll find that the default broker.xml doesn't have any connector elements defined. For example, if you just run ./artemis create the generated broker.xml will not have any connector elements.
The documentation you cited is quite old (from the very first release of Artemis). You may benefit from reading the latest documentation which has been updated for clarity in many places.
As noted in both the documentation and the other Stack Overflow answer you cited, certain components in the broker need to connect to other brokers (e.g. core bridges, cluster-connections, etc.). A connector encapsulates the information necessary for these other components to make the connections they need. It's really as simple as that.
Now regarding your individual questions...
Even if this server is part of cluster, what is a role of connector?
In the case of a cluster using a broadcast-group and a discovery-group each node in the cluster needs to broadcast to all the other nodes in the cluster how the other nodes can connect to itself. It does this by broadcasting a connector which is referenced in the cluster-connection configuration. When the other nodes in the cluster receive this broadcast they take the connector information and use it to connect back to the node which broadcast it originally. In this way nodes can dynamically discover and connect to each other. It's also worth noting that in this case the connector configuration will essentially mirror one of the broker's acceptor configurations (since the connector will be used by other nodes to connect to the broadcasting node's acceptor). This is discussed further in the cluster documentation.
...connectors are used by a client to define how it connects to a server...
This bit of documentation you quoted is accurate but may be a bit confusing. Keep in mind that a client can run anywhere, even within the broker itself. In the case of core bridges and cluster connections there is a client running in the broker which uses the connector to determine how to connect to another broker. For what it's worth, the updated documentation doesn't have this specific wording.
What does it mean "define how"?
A connector is the URL that the client needs to connect to the broker. The URL can simply include the host and port or it can contain lots of configuration details for the connection (e.g. SSL config).
Is connector always the same as acceptor..?
No, not always. In the case of a cluster they will be the same (or very close) for the reasons I already outlined, but in the case of a bridge they won't be the same.
What information does connector encapsulates..?
See above.
Why does stand-alone Broker have connector, for example by default creation ./artemis create?
It doesn't. See above.
What should we write in connector?
The URL needed to connect.
Can you give a simple example when acceptor and connector are different?
As mentioned previously, bridging is an example where different acceptors and connectors are used. ActiveMQ Artemis ships with a "core-bridge" example in the examples/features/standard directory which demonstrates different acceptors and connectors. The example involves 2 different brokers, with one broker having a core bridge configured to send messages to the other broker. Here's the broker.xml with the bridge defined. You can see the acceptor listening on localhost:61616 and the connector for localhost:61617. This connector points to the other broker, which is listening on localhost:61617.
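A minimal sketch of that pairing, with the first broker's acceptor on 61616 and a connector pointing at the second broker on 61617 (the queue and address names here are illustrative, not the example's actual names):

```xml
<!-- the first broker listens here -->
<acceptors>
   <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor>
</acceptors>

<!-- the first broker uses this connector to reach the second broker -->
<connectors>
   <connector name="remote-connector">tcp://localhost:61617</connector>
</connectors>

<bridges>
   <bridge name="my-bridge">
      <queue-name>source-queue</queue-name>
      <forwarding-address>target-address</forwarding-address>
      <static-connectors>
         <connector-ref>remote-connector</connector-ref>
      </static-connectors>
   </bridge>
</bridges>
```

Here the acceptor and connector clearly differ: the acceptor is where this broker listens, while the connector describes a different broker entirely.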

Filtering in ActiveMQ Artemis. Reload of config in a cluster

A question about Filtering in ActiveMQ Artemis.
If I have a queue named MyQueue.IN and a filter only accepting messages with a certain JMS header value, let's say ORDER.
In broker.xml, under the tag:
<core>
   <configuration-file-refresh-period>5000</configuration-file-refresh-period>
   <queues>
      <queue name="MyQueue.IN">
         <address>MyQueue.IN</address>
         <filter string="TOSTATUS='ORDER'"/>
         <durable>true</durable>
      </queue>
   </queues>
</core>
As I read the manual, the broker should now reload the configuration in broker.xml every 5 seconds.
But when I change the filter to
<filter string="TOSTATUS='ORDERPICKUP'"/>
The config is not changed in ActiveMQ Artemis.
Not even if I restart the node.
It is in a cluster but I have changed Broker.xml on both sides.
Any ideas on how to change a filter on a queue? Preferably by changing the Broker.xml
/Zeddy
You are seeing the expected behavior. Although this behavior may not be intuitive or particularly user friendly it is meant to protect data integrity. Queues are immutable so once they are created they can't be changed. Therefore, to "change" a queue it has to be deleted and re-created. Of course deleting a queue means losing all the messages in the queue which is potentially catastrophic. In general, there are 2 ways to delete the queue and have it re-created:
Set <config-delete-queues>FORCE</config-delete-queues> in a matching <address-setting>. However, there is currently a problem with this approach which will be resolved via ARTEMIS-2076.
Delete the queue via management while the broker is running. This can be done via JMX (e.g. using JConsole), the web console, the Artemis CLI, etc. Then stop the broker, update the XML, and restart the broker.
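For the first option, the address-setting would look something like this sketch (the match value is illustrative):

```xml
<address-settings>
   <address-setting match="MyQueue.IN">
      <!-- allow a broker.xml reload to delete and re-create the queue -->
      <config-delete-queues>FORCE</config-delete-queues>
   </address-setting>
</address-settings>
```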