Synchronization mode of HA replication - activemq-artemis

Version : ActiveMQ Artemis 2.10.1
When we use ha-policy with replication, is the synchronization mode between the master and the slave fully synchronous? Can we choose between synchronous and asynchronous replication?

I'm not 100% certain what you mean by "full synchronization" so I'll just explain how the brokers behave...
When a master broker receives a durable (i.e. persistent) message it will write the message to disk and send the message to the slave in parallel. The broker will then wait both for the local disk write to complete and for the slave to acknowledge that it accepted the message before responding to the client that originally sent the message.
This behavior is not configurable.
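For reference, this is a minimal broker.xml sketch of the ha-policy on each side of such a replicated pair (cluster-connection and connector details are omitted here):

   <!-- master broker.xml -->
   <ha-policy>
      <replication>
         <master/>
      </replication>
   </ha-policy>

   <!-- slave broker.xml -->
   <ha-policy>
      <replication>
         <slave/>
      </replication>
   </ha-policy>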

Related

What happens when a Kafka Broker runs out of space before the configured Retention time/bytes?

I understand that most systems should have monitoring to make sure this doesn't happen (and that we should set the retention policies properly), but I'm just curious what happens if the Kafka Broker does indeed run out of disk space (for example, if we set retention time to 30 days, but the Broker runs out of disk space by the 1st day)?
In a single-Broker scenario, does the Broker simply stop receiving any new messages and return an exception to the Producer? Or does it delete old messages to make space for the new ones?
In a multi-Broker scenario, assuming we have Broker A (leader of the partition but with no more disk space) and Broker B (follower of the partition and still with disk space), will leadership move to Broker B? What happens when both Brokers run out of space? Does it also return an exception to the Producer?
Assuming the main data directory is not on a separate volume, the OS processes themselves will start locking up because there's no free space left on the device.
Otherwise, if the log directories are isolated to Kafka, you can expect any producer acks to stop working. I'm not sure if a specific error message is returned to clients, though. From what I remember, the brokers just stop responding to Kafka client requests, and we had to SSH to them, stop the Kafka services, and manually clean up files rather than wait for retention policies. No, the brokers don't preempt old data in favor of new records.
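For reference, these are the broker-side retention settings the question alludes to (a sketch with illustrative values; per-topic overrides such as retention.ms exist as well):

   # server.properties
   log.retention.hours=720          # time-based retention, roughly 30 days
   log.retention.bytes=1073741824   # size cap per partition; -1 disables it
   log.segment.bytes=1073741824     # retention only ever deletes closed segments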

Consuming # subscription via MQTT causes huge queue on lost connection

I am using Artemis 2.14 and Java 14.0.2 on two Ubuntu 18.04 VMs with 4 cores and 16 GB RAM. My producers send approximately 2,000 messages per second to approximately 5,500 different topics.
When I connect via the MQTT.FX client with certificate-based authorization and subscribe to # the MQTT.FX client dies after some time, and in the web console I see a queue under # with my client id that won't be cleared by Artemis. It seems that this queue grows until the RAM is 100% used. After some time my Artemis broker restarts itself.
Is this behaviour of Artemis normal? How can I tell Artemis to clean up "zombie" queues after some time?
I already tried these configuration parameters in different ways, but nothing worked:
confirmationWindowSize=0
clientFailureCheckPeriod=30000
consumerWindowSize=0
Auto-created queues are automatically removed by default when:
consumer-count is 0
message-count is 0
This is done so that no messages are inadvertently deleted.
However, you can change the corresponding auto-delete-queues-message-count address-setting in broker.xml to -1 to skip the message-count check. You can also adjust auto-delete-queues-delay to configure a delay if needed.
It's worth noting that if you create a subscription like # (which is fairly dangerous) you need to be prepared to consume the messages as quickly as they are produced to avoid accumulation of messages in the queue. If accumulation is unavoidable then you should configure the max-size-bytes and address-full-policy according to your needs so the broker doesn't get overwhelmed. See the documentation for more details on that.
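A rough broker.xml sketch combining these settings (the match of # covers every address; the byte and millisecond values are just illustrative):

   <address-settings>
      <address-setting match="#">
         <auto-delete-queues-message-count>-1</auto-delete-queues-message-count>
         <auto-delete-queues-delay>60000</auto-delete-queues-delay> <!-- 1 minute -->
         <max-size-bytes>104857600</max-size-bytes>                 <!-- 100 MiB, then... -->
         <address-full-policy>PAGE</address-full-policy>            <!-- ...page to disk -->
      </address-setting>
   </address-settings>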

Client Local Queue in Red Hat AMQ

We have a network of Red Hat AMQ 7.2 brokers with a master/slave configuration. The client applications publish/subscribe to topics on the broker cluster.
How do we handle the situation wherein the network connectivity between the client application and the broker cluster goes down? Does Red Hat AMQ have a native solution like a client-local queue and a JMS-to-JMS bridge between the local queue and the remote broker, so that a network connectivity failure will not result in loss of messages?
It would be possible for you to craft a solution where your clients use a local broker and that local broker bridges messages to the remote broker. The local broker will, of course, never lose network connectivity with the local clients since everything is local. However, if the local broker loses connectivity with the remote broker it will act as a buffer and store messages until connectivity with the remote broker is restored. Once connectivity is restored then the local broker will forward the stored messages to the remote broker. This will allow the producers to keep working as if nothing has actually failed. However, you would need to configure all this manually.
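A rough sketch of such a core bridge in the local broker's broker.xml (the connector address, queue name, and forwarding address are hypothetical placeholders):

   <connectors>
      <connector name="remote-broker">tcp://remote-host:61616</connector>
   </connectors>
   ...
   <bridges>
      <bridge name="local-to-remote">
         <queue-name>local.outbound</queue-name>
         <forwarding-address>orders</forwarding-address>
         <reconnect-attempts>-1</reconnect-attempts> <!-- keep retrying forever -->
         <static-connectors>
            <connector-ref>remote-broker</connector-ref>
         </static-connectors>
      </bridge>
   </bridges>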
That said, even if you don't implement such a solution there is absolutely no need for any message loss even when clients encounter a loss of network connectivity. If you send durable (i.e. persistent) messages then by default the client will wait for a response from the broker telling the client that the broker successfully received and persisted the message to disk. More complex interactions might require local JMS transactions and even more complex interactions may require XA transactions. In any event, there are ways to eliminate the possibility of message loss without implementing some kind of local broker solution.
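For example, a minimal JMS sketch of such a durable send (the URL and queue name are placeholders; with the Artemis JMS client, a durable send blocks by default until the broker acknowledges it):

   import javax.jms.*;
   import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

   public class DurableSend {
       public static void main(String[] args) throws JMSException {
           ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
           try (Connection connection = cf.createConnection()) {
               Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
               MessageProducer producer = session.createProducer(session.createQueue("orders"));
               // PERSISTENT is the JMS default; shown explicitly for clarity. Because
               // durable sends are blocking by default, send() returning normally means
               // the broker has received and persisted the message.
               producer.setDeliveryMode(DeliveryMode.PERSISTENT);
               producer.send(session.createTextMessage("hello"));
           }
       }
   }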

How can I avoid missing messages in Kafka?

I'm a newbie with Kafka. I've been testing Kafka for sending messages.
This is my situation now:
add.java in my local VM is regularly sending messages to the Kafka in my local VM.
relay.java on another server is polling from the Kafka in my local VM and producing to the Kafka on that server.
While messages were flowing from the Kafka in my local VM to the Kafka on the other server, I pulled the LAN cable out of my laptop. A few seconds later, I plugged it back in.
I then found that some messages had been lost while the LAN cable was disconnected.
When the network is reconnected, I want to receive all messages produced during the disconnection, without missing any.
Are there any suggestions?
Any help would be highly appreciated.
First of all, I suggest you use MirrorMaker (1 or 2) because it supports exactly this use case of consuming and producing to another cluster.
Secondly, add.java should not be dropping messages if your LAN is disconnected, since it produces to the Kafka in the same VM.
Whether you end up with dropped messages on the way from relay.java depends on its consumer and producer settings. For example, you should definitely disable automatic offset commits and only commit a record's offset after its producer send has completed and been acknowledged. This results in at-least-once delivery, as sketched below.
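A minimal sketch of relay.java along those lines (the bootstrap servers, group id, and topic name are hypothetical placeholders):

   import java.time.Duration;
   import java.util.List;
   import java.util.Properties;
   import org.apache.kafka.clients.consumer.*;
   import org.apache.kafka.clients.producer.*;

   public class Relay {
       public static void main(String[] args) {
           Properties cProps = new Properties();
           cProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "source-cluster:9092");
           cProps.put(ConsumerConfig.GROUP_ID_CONFIG, "relay");
           cProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit manually
           cProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                   "org.apache.kafka.common.serialization.StringDeserializer");
           cProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                   "org.apache.kafka.common.serialization.StringDeserializer");

           Properties pProps = new Properties();
           pProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "target-cluster:9092");
           pProps.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas
           pProps.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
           pProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                   "org.apache.kafka.common.serialization.StringSerializer");
           pProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                   "org.apache.kafka.common.serialization.StringSerializer");

           try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
                KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
               consumer.subscribe(List.of("events"));
               while (true) {
                   ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                   for (ConsumerRecord<String, String> record : records) {
                       producer.send(new ProducerRecord<>("events", record.key(), record.value()));
                   }
                   producer.flush();      // block until every send above is acknowledged
                   consumer.commitSync(); // only now mark these records as consumed
               }
           }
       }
   }

If the process dies between flush() and commitSync(), the uncommitted records are re-read and re-sent on restart, which is why this gives at-least-once rather than exactly-once delivery.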
You can find multiple posts about processing guarantees in Kafka.

ActiveMQ Artemis Error - AMQ224088: Timeout (10 seconds) while handshaking has occurred

In ActiveMQ Artemis, I occasionally receive the connection error below. I can't see any obvious impact on the brokers or message queues. Can anyone advise exactly what it means or what impact it could be having?
My current action is to either restart the brokers or check that they're still connected to the cluster. Is either of these actions necessary?
The ActiveMQ Artemis version currently deployed is 2.7.0.
// error log line received at least once a month
2019-05-02 07:28:14,238 ERROR [org.apache.activemq.artemis.core.server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.
This error indicates that something on the network is connecting to the ActiveMQ Artemis broker, but it's not completing any protocol handshake. This is commonly seen with, for example, load balancers that do a health check by creating a socket connection without sending any real data just to see if the port is open on the target machine.
The timeout is configurable so that these ERROR messages aren't logged, but disabling it also disables the associated clean-up, which may or may not be a problem in your use-case. You should just be able to set handshake-timeout=0 on the relevant acceptor URL in broker.xml.
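For example (a sketch; the acceptor name, port, and protocol list are placeholders from a typical default broker.xml, and handshake-timeout=0 is the relevant change):

   <acceptors>
      <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP,MQTT;handshake-timeout=0</acceptor>
   </acceptors>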
When you see this message there should be no need to restart the broker.
In the next ActiveMQ Artemis release the IP address of the remote client where the connection originated will be included as part of the message.