We have several ActiveMQ Artemis 2.17.0 clusters set up to replicate between data centres using mirroring.
Our previous failover had been an emergency, and it's likely the state had fallen out of sync. When we next performed our scheduled failover tests, weeks-old messages were sent to the consumers. I know that mirroring is asynchronous, so it is expected that synchronization may not be 100% all the time. However, these messages were well outside the time frame of normal synchronization delays. It is worth noting that we've had several events which I expect might have thrown mirroring off: we had hit the NFS split-brain issue as well as the past emergency failover.
As such, we are looking for a way to purge (or sync) all messages on the standby server after we know that there have been problems with the mirroring, to prevent a similar scenario from happening. There are over 5,000 queues, so preferably the operation wouldn't need to be run on a queue-by-queue basis.
Is there any way to accomplish this, either in ActiveMQ Artemis 2.17.0 or a later version?
There's no programmatic way to simply delete all the data from every queue on the broker. However, you can combine a few management operations (e.g. in a script) to get the same result. You can use the getQueueNames method to get the name of every queue and then pass those names to the destroyQueue(String) method.
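For example, a rough sketch of that kind of script over JMX might look like the code below. This is only an illustration with assumed values: the JMX service URL and the broker name in the ObjectName are placeholders you'd have to adjust, and you'd want to add error handling (e.g. destroyQueue can fail if a queue still has consumers).

```java
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;

public class PurgeAllQueues {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX URL and broker name -- adjust for your environment.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName brokerName = new ObjectName("org.apache.activemq.artemis:broker=\"myBroker\"");
            ActiveMQServerControl serverControl = MBeanServerInvocationHandler.newProxyInstance(
                    connection, brokerName, ActiveMQServerControl.class, false);

            // getQueueNames() returns the name of every queue on the broker;
            // destroyQueue(String) removes the queue along with its messages.
            for (String queueName : serverControl.getQueueNames()) {
                serverControl.destroyQueue(queueName);
            }
        } finally {
            connector.close();
        }
    }
}
```

Keep in mind that destroyQueue removes the queue definition itself, not just the messages, so the queues would need to be re-created (or auto-created) afterwards.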
However, the simplest way to clear all the data would probably be to simply stop the broker, clear the data directory, and then restart the broker.
Setup:
We have a Spring Boot application that is reading messages from an ActiveMQ Artemis JMS queue.
The messages are processed in a JPA transaction.
When there is an exception triggering a rollback in JPA, it also triggers a JMS rollback in Artemis, which is set up with a redelivery delay.
Our app runs in multiple instances in parallel, and this causes optimistic locking issues when processing multiple messages that share common data.
Issue: When X messages are processed in parallel and there is an optimistic locking issue, only 1 message goes through and all the others are re-scheduled with the delay. When redelivery happens, then, just as before, the X-1 messages will arrive at the same time (since the delay is the same) and cause the same issue, with only one going through.
Question: Does anyone know a way to add variance to the redelivery delay time of ActiveMQ Artemis?
Note: I know that there is an option for that in ActiveMQ 5.x called collisionAvoidanceFactor, but it is missing from ActiveMQ Artemis.
As you note, there is no equivalent for collisionAvoidanceFactor in ActiveMQ Artemis. I know of no way to modify the redelivery delay in a similar manner. There is the redelivery-delay-multiplier, but that is enforced consistently across redeliveries and would not provide the variance that you're looking for.
You may consider using message grouping so that "messages that share common data" are consumed serially by the same consumer and therefore avoid the locking issues in the first place.
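If you go that route, the producer side can be as simple as setting the JMSXGroupID property when sending. Here is a minimal sketch; the broker URL and queue name are made up:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class GroupedSender {
    public static void main(String[] args) throws Exception {
        // Made-up broker URL and queue name -- adjust for your environment.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("orders"));

            TextMessage message = session.createTextMessage("update for entity 42");
            // Every message carrying the same JMSXGroupID is delivered to the same
            // consumer, so messages that share common data are processed serially.
            message.setStringProperty("JMSXGroupID", "entity-42");
            producer.send(message);
        }
    }
}
```

Messages carrying the same group ID are routed to the same consumer, so updates to the same shared data never race with each other.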
After looking at what it would take to implement a collision avoidance feature, I opened ARTEMIS-2364. I'll be sending a pull-request soon, so it will likely be in the next version of Artemis (i.e. 2.10).
I want to create a distributed system that can support around 10,000 different types of jobs. One single machine can host only 500 such jobs, as each job needs some data to be pre-loaded into memory, which can't be kept in a cache. Each job must have redundancy for availability.
I have explored open-source libraries like ZooKeeper and Hadoop, but none of them solves my problem.
The easiest solution that I can think of is to maintain a map of each job type to its hosting machine. But how can I support dynamic allocation of job types across my fleet? How do I handle machine failures so that each job type is available on at least one machine at any point in time?
Based on the answers that you mentioned in the comments, I propose that you go for an MQ-based (message queue) architecture. What I propose in this answer is to:
Get the input from users and push it into a distributed message queue. This means that you should set up a message queue (such as ActiveMQ or RabbitMQ) on several servers. The MQ technology helps you replicate the input requests for fault tolerance, and it gives you a fully asynchronous end-to-end system.
After preparing this MQ layer, you can set up your computing server layer. This means that a number of computing servers (~20 servers in your case) will read requests from the message queue and start a job based on each request. Because the MQ is distributed, you get a good level of load balancing across your computing servers. In addition, each server can run as many jobs as you want (~500 in your case) based on the requests it reads from the MQ.
Regarding failures, a computing server should only remove a job from the MQ once the job is completed. If a server crashes, the job is still in the MQ and another server can work on it. If the job saves state somewhere or updates something, you will need to handle the possibility of it running twice.
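As a rough illustration of that "only remove the job once it's done" behaviour, here is a sketch of a worker using JMS client acknowledgement. The broker URL and queue name are made up, and RabbitMQ would give you the same semantics with manual acks:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class JobWorker {
    public static void main(String[] args) throws Exception {
        // Made-up broker URL and queue name.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://mq-host:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            // CLIENT_ACKNOWLEDGE: the message is only removed from the queue
            // once the worker explicitly acknowledges it after finishing the job.
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("jobs"));

            while (true) {
                Message request = consumer.receive();
                processJob(request);    // if the worker crashes here, the message
                request.acknowledge();  // stays unacknowledged and is redelivered
            }
        }
    }

    private static void processJob(Message request) {
        // placeholder for the actual job logic
    }
}
```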
The good point about this approach is that it is very scalable. If in the future you have more jobs to handle, you can add computing servers and connect them to the MQ, processing more requests without any change to the rest of the system. In addition, MQ features like priority-based queuing help you prioritize requests and process them based on job type.
P.S. Your question does not provide any details about the type and parameters of the system. This is a draft solution that I can propose; if you provide more details, maybe the community can help you more.
How can I get historical messages from ActiveMQ, from the beginning, the way a Kafka consumer group can?
No. ActiveMQ does not store messages longer than needed for delivery to complete. If you need to keep track of history, you should implement something for that purpose yourself. ActiveMQ offers mirrored queues so that all queues can be wire-tapped at once, but the actual storage has to be built by hand.
From my own experience, it's easy to implement archiving to a database, but if you want to run decent search queries, handle replay of older messages, etc., it takes some thought.
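For what it's worth, a bare-bones archiver along those lines could look like the sketch below. The mirror topic name, JDBC URL and table are all hypothetical, and a real version would need batching, reconnection handling and a schema rich enough to make search and replay practical.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MessageArchiver {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL, mirror topic name, JDBC URL and table.
        Connection jms = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        jms.start();
        Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createTopic("Mirror.ORDERS"));

        java.sql.Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost/archive", "user", "pass");
        PreparedStatement insert = db.prepareStatement(
                "INSERT INTO message_archive (message_id, body) VALUES (?, ?)");

        while (true) {
            // Copy every message wire-tapped onto the mirror topic into the database.
            TextMessage message = (TextMessage) consumer.receive();
            insert.setString(1, message.getJMSMessageID());
            insert.setString(2, message.getText());
            insert.executeUpdate();
        }
    }
}
```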
I'm setting up two Kafka v0.10.1.0 clusters in different DCs and planning to use MirrorMaker to keep one as the source and the other as the target. What I'm not sure about is how to ensure high availability when my source/main cluster goes down (i.e. the whole DC hosting the source Kafka cluster goes down). Do I need to make my application switch to producing messages to the target Kafka? What will happen when the source Kafka is back? How do I bring it back in sync with the possibly lost messages?
Thanks
From reading your question I don't think that MirrorMaker will be a suitable tool for your needs, I'm afraid.
Basically MirrorMaker is simply a Consumer and a Producer tied together to replicate messages from one cluster to another. It is not a tool to tie two Kafka clusters together in an active-active configuration, which sounds a lot like what you are looking for.
But to answer your questions in order:
Do I need to make my application switch to produce messages to the target Kafka?
Yes, there is currently no failover function; you would need to implement logic in your producers to try the target cluster after X failed messages, or no messages sent in Y minutes, or something like that.
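A very rough sketch of that kind of producer-side failover logic is shown below. The endpoints, topic and threshold are made up, and the failure counting is deliberately simplistic (it is not thread-safe and ignores the fact that sends to a dead cluster may block rather than fail fast):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FailoverProducer {
    // Hypothetical endpoints and threshold -- tune for your environment.
    private static final String SOURCE_CLUSTER = "source-dc-broker:9092";
    private static final String TARGET_CLUSTER = "target-dc-broker:9092";
    private static final int FAILOVER_THRESHOLD = 10;

    private Producer<String, String> producer = newProducer(SOURCE_CLUSTER);
    private volatile int consecutiveFailures = 0;
    private boolean failedOver = false;

    public void send(String topic, String value) {
        if (!failedOver && consecutiveFailures >= FAILOVER_THRESHOLD) {
            // Too many failed sends in a row: assume the source DC is down
            // and direct all subsequent traffic to the target cluster.
            producer.close();
            producer = newProducer(TARGET_CLUSTER);
            failedOver = true;
        }
        producer.send(new ProducerRecord<>(topic, value), (metadata, exception) ->
                consecutiveFailures = (exception == null) ? 0 : consecutiveFailures + 1);
    }

    private Producer<String, String> newProducer(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }
}
```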
What will happen when source kafka is back?
Pretty much nothing that you don't implement yourself :)
MirrorMaker will start replicating data from your source cluster to your target cluster again, but since your producers have now switched over to the target cluster, the source cluster is not receiving any data, so MirrorMaker will just idle along.
Your producers will keep producing into the target cluster unless you implement a regular check for whether the source has come back online and have them switch back.
How to bring it back in sync with the possible lost messages?
When your source cluster is back online, and assuming everything I mentioned above has happened, you have effectively switched your clusters around. Depending on whether you want the source to remain the primary cluster that gets written to, or are happy to reverse roles when this happens, you have two options that I can come up with off the top of my head:
reverse the direction of MirrorMaker and set the consumer group offsets manually so that it picks up at the point where the source cluster died (a sketch of this follows below)
stop producing new data for a while, recover missing data to the source cluster, switch back your producers and start everything up again.
Both options require you to figure out manually what data is missing on the source cluster, though; I don't think there is a way around this.
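For the first option, one way to set the consumer group offsets manually is a small program that commits the desired offset under the MirrorMaker group id. This is only a sketch with placeholder values (bootstrap servers, group id, topic, partition and offset), and the group has to be inactive while you do it:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResetMirrorOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder values: the cluster to mirror from and the MirrorMaker
        // consumer group whose offsets you want to rewind.
        props.put("bootstrap.servers", "target-dc-broker:9092");
        props.put("group.id", "mirrormaker-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("my-topic", 0);
            long resumeOffset = 123456L; // the offset where the dead cluster left off

            consumer.assign(Collections.singletonList(partition));
            // Commit the desired starting offset for the MirrorMaker group so that
            // when it is restarted in the reverse direction it resumes from here.
            consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(resumeOffset)));
        }
    }
}
```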
The bottom line is that this is not an easy thing to do with MirrorMaker, and it might be worth having another think about whether you really want to switch producers over to the target cluster if the source goes down.
You could also have a look at Confluent's Replicator, which might better suit what you are looking for and is part of their corporate offering. Information on it is a bit sparse; let me know if you are interested and I can make an introduction to someone who can tell you more about it (or of course just send a mail to Confluent, that'll reach the right person as well).
I've been working on a project of mine using Akka to create a real-time processing system which takes in the Twitter stream (for now) and uses actors to process said messages in various ways. I've been reading about similar architectures that others have built using Akka and this particular blog post caught my eye:
http://blog.goconspire.com/post/64901258135/akka-at-conspire-part-5-the-importance-of-pulling
Here they explain different issues that arise when pushing work (i.e. messages) to actors vs. having the actors pull work. To paraphrase the article, by pushing messages there is no built-in way to know which units of work were received by which worker, nor can that be reliably tracked. In addition, if a worker suddenly receives a large number of messages where each message is quite large, it might end up overwhelmed and the machine could run out of memory. Or, if the processing is CPU intensive, you could render your node unresponsive due to CPU thrashing. Furthermore, if the JVM crashes, you will lose all the messages that the actor(s) had in their mailboxes.
Pulling messages largely eliminates these problems. Since a specific actor must pull work from a coordinator, the coordinator always knows which unit of work each worker has; if a worker dies, the coordinator knows which unit of work to re-process. Messages also don’t sit in the workers’ mailboxes (since it's pulling a single message and processing it before pulling another one) so the loss of those mailboxes if the actor crashes isn't an issue. Furthermore, since each worker will only request more work once it completes its current task, there are no concerns about a worker receiving or starting more work than it can handle concurrently. Obviously there are also issues with this solution like what happens when the coordinator itself crashes but for now let's assume this is a non-issue. More about this pulling pattern can also be found at the "Let It Crash" website which the blog references:
http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
This got me thinking about a possible alternative to doing this pulling pattern which is to do pushing but with durable mailboxes. An example I was thinking of was implementing a mailbox that used RabbitMQ (other data stores like Redis, MongoDB, Kafka, etc would also work here) and then having each router of actors (all of which would be used for the same purpose) share the same message queue (or the same DB/collection/etc...depending on the data store used). In other words each router would have its own queue in RabbitMQ serving as a mailbox. This way, if one of the routees goes down, those that are still up can simply keep retrieving from RabbitMQ without too much worry that the queue will overflow since they are no longer using typical in-memory mailboxes. Also since their mailbox isn't implemented in-memory, if a routee crashes, the most messages that it could lose would just be the single one it was processing before the crash. If the whole router goes down then you could expect RabbitMQ (or whatever data store is being used) to handle an increased load until the router is able to recover and start processing messages again.
In terms of durable mailboxes, it seems that back in version 2.0, Akka was gravitating towards supporting these more actively since they had implemented a few that could work with MongoDB, ZooKeeper, etc. However, it seems that for whatever reason they abandoned the idea at some point since the latest version (2.3.2 as of the writing of this post) makes no mention of them. You're still able to implement your own mailbox by implementing the MessageQueue interface which gives you methods like enqueue(), dequeue(), etc... so making one that works with RabbitMQ, MongoDB, Redis, etc wouldn't seem to be a problem.
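To give a feel for what that involves, here is a bare-bones sketch of the MessageQueue side. The in-memory queue is just a stand-in for the RabbitMQ channel, and a real implementation would also need a matching MailboxType plus serialization of the envelopes:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import akka.actor.ActorRef;
import akka.dispatch.Envelope;
import akka.dispatch.MessageQueue;

// Sketch of a custom mailbox backed by an external store. The ConcurrentLinkedQueue
// stands in for the RabbitMQ channel; a real version would serialize the Envelope's
// message and publish/consume it instead of keeping it in memory.
public class DurableMessageQueue implements MessageQueue {
    private final Queue<Envelope> delegate = new ConcurrentLinkedQueue<>();

    @Override
    public void enqueue(ActorRef receiver, Envelope handle) {
        // real version: publish the serialized message to the broker
        delegate.offer(handle);
    }

    @Override
    public Envelope dequeue() {
        // real version: fetch the next message from the broker and rebuild the Envelope
        return delegate.poll();
    }

    @Override
    public int numberOfMessages() { return delegate.size(); }

    @Override
    public boolean hasMessages() { return !delegate.isEmpty(); }

    @Override
    public void cleanUp(ActorRef owner, MessageQueue deadLetters) {
        // drain anything left into dead letters when the actor stops
        for (Envelope e; (e = delegate.poll()) != null; ) {
            deadLetters.enqueue(owner, e);
        }
    }
}
```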
Anyways, just wanted to get your guys' and gals' thoughts on this. Does this seem like a viable alternative to doing pulling?
This question also spawned a rather long and informative thread on akka-user. In summary it is best to explicitly manage the work items to be processed by a (persistent) actor from which a variable number of worker actors pull new jobs, since that allows better resource management and explicit control over what gets processed and how retries are handled.
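For illustration, a stripped-down version of that pull pattern in the plain Java API (2.3-era UntypedActor) might look like the sketch below. The message classes are placeholders, and a real coordinator would also have to remember workers that asked while no work was pending and send them jobs once work arrives:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import akka.actor.ActorRef;
import akka.actor.UntypedActor;

// Placeholder message types for the protocol between coordinator and workers.
class GimmeWork {}
class Work {
    final String payload;
    Work(String payload) { this.payload = payload; }
}

// Holds the pending work items; workers are never pushed more than they ask for.
class Coordinator extends UntypedActor {
    private final Deque<String> pending = new ArrayDeque<>();

    @Override
    public void onReceive(Object message) {
        if (message instanceof String) {
            pending.add((String) message);          // new work arrives, just queue it
        } else if (message instanceof GimmeWork) {
            // Hand over at most one item; a fuller coordinator would also track
            // idle workers and push work to them when new items show up.
            if (!pending.isEmpty()) {
                getSender().tell(new Work(pending.poll()), getSelf());
            }
        } else {
            unhandled(message);
        }
    }
}

class Worker extends UntypedActor {
    private final ActorRef coordinator;

    Worker(ActorRef coordinator) { this.coordinator = coordinator; }

    @Override
    public void preStart() {
        coordinator.tell(new GimmeWork(), getSelf()); // ask for the first job
    }

    @Override
    public void onReceive(Object message) {
        if (message instanceof Work) {
            process(((Work) message).payload);
            // Pull the next job only after this one is done, so the mailbox
            // never holds more work than the worker can handle.
            coordinator.tell(new GimmeWork(), getSelf());
        } else {
            unhandled(message);
        }
    }

    private void process(String payload) { /* actual job logic goes here */ }
}
```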