JBoss HornetQ/ActiveMQ Artemis saving queue messages to file system

Can someone please help me understand the impact of saving HornetQ/ActiveMQ Artemis messages to the file system and bypassing the queue every time?
The message is more than 2GB, and I run into the "Maximum size 2GB exceeded" exception in HornetQ described here. So I was planning not to add the message to the queue, but to write it manually to disk, pass the path of the file in a header, and have the consumer read the message from the file. I really don't know the performance impact, so I'm asking: if I do this even for messages less than 2GB, will there be any performance impact?

Given the information you've provided I don't think anybody but you can determine the "performance impact" of manually writing the file to disk vs. sending the file to the broker.
Generally speaking, you will save the time required to send the file to the broker in the first place, but you don't indicate how fast the hard-drives are. If the hard-drive on the broker is much faster than the hard-drive on the client then it may take longer overall to write the file to disk manually.
Also, if the network is slow between the clients and the broker and fast between the clients and the shared drive where you're going to write the file then it may be overall faster to write the file to disk manually.
Ultimately it's going to be up to you to test the performance impact of your changes.
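If it helps, here is a minimal sketch of the approach you describe (sometimes called the "claim check" pattern), assuming an Artemis JMS client, a shared filesystem that both producer and consumer can reach, and an arbitrary property name (filePath) that I've made up for illustration:
```java
import java.nio.file.Path;
import java.nio.file.Paths;

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ClaimCheckProducer {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload file, already written to a location that
        // both the producer and the consumer can reach.
        Path payloadFile = Paths.get("/mnt/shared/payloads/order-12345.bin");

        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("exampleQueue"));

            // The broker only ever sees a tiny message carrying the file location;
            // the consumer reads the header and loads the payload from disk itself.
            TextMessage message = session.createTextMessage("payload stored externally");
            message.setStringProperty("filePath", payloadFile.toString());
            producer.send(message);
        }
    }
}
```
The consumer would read the filePath property and load the payload from disk itself, and something would still need to clean up the files once they have been processed.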

Related

How does persistence work in ActiveMQ Artemis?

In ActiveMQ 5.x, when using KahaDB for persistence, all messages are stored in a single database. This can have serious consequences.
I have hundreds of queues that see millions of messages per day. If a consumer of a queue is temporarily stopped for maintenance, the other queues continue to fill and empty, while the one whose consumer is suspended sees its messages accumulate. On disk, however, it is different: KahaDB does mark deleted (consumed) messages, but it cannot free the space as long as a live message is still kept in the same data file. This is the case with the messages that accumulate in the suspended queue.
Very quickly the disk fills up.
To remedy this, you have to change the configuration and use mKahaDB. In that case there is one database per queue, so on disk only the suspended queue takes up extra space.
I am considering switching to Artemis. But the persistence has been completely redesigned. So what happens in terms of disk occupancy when suspending a consumer?
This question is pretty broad, but I'll take a crack at it...
By default ActiveMQ Artemis uses a file-based journal. The journal consists of a pool of files that can grow and shrink based on configuration (see journal-min-files and journal-pool-files in the documentation). The size of each file is also configurable (i.e. via journal-file-size).
An initial pool of files will be created when the broker starts and as messages are stored and the initial pool of files fills up then additional files will be created. As messages are consumed the pool can shrink through a process called "compaction" which is also configurable (see journal-compact-min-files and journal-compact-percentage in the documentation). As long as 1 record in a journal file is considered "live" (e.g. an unconsumed message) then the whole journal file will remain. However, you can tune the impact of this to fit your environment (e.g. by lowering the journal-file-size, making compaction more aggressive, etc.). To be clear, if compaction runs and there is a journal file with only 1 "live" record that means all the other journal files are "full" and at most you will only ever have 1 journal file like that.
Also, you can configure max-disk-usage to block producers from sending more messages once disk utilization reaches a certain point.
Ultimately, if a consumer becomes inactive (for whatever reason) then the messages that consumer was supposed to receive will accumulate in the queues (and potentially on disk). If you want to prevent messages from accumulating in the first place you could implement flow control or blocking for producers. However, even if they do accumulate the file-based journal should be able to grow and shrink as needed.
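For reference, here is a rough sketch of those settings using the embedded-broker API (the same parameters normally live in broker.xml as journal-min-files, journal-pool-files, journal-file-size, journal-compact-min-files, journal-compact-percentage, and max-disk-usage); the values are purely illustrative and the exact setter names may vary slightly between Artemis versions:
```java
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class JournalTuningExample {
    public static void main(String[] args) throws Exception {
        Configuration config = new ConfigurationImpl()
            .setPersistenceEnabled(true)
            .setJournalMinFiles(2)                // initial pool of journal files
            .setJournalPoolFiles(10)              // files the pool may keep around
            .setJournalFileSize(10 * 1024 * 1024) // 10 MB per journal file (illustrative)
            .setJournalCompactMinFiles(5)         // start compacting once 5 files exist...
            .setJournalCompactPercentage(30)      // ...and less than 30% of the data is live
            .setMaxDiskUsage(90);                 // block producers at 90% disk utilization

        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
        // ... run the broker, then broker.stop() on shutdown
    }
}
```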
If I understood correctly, there is no way to guarantee that only the payload is kept (as with mKahaDB), but we can limit the size of the pages and fix their number.
Considering the very large number of queues I have to manage, I think the best approach is to split this across a cluster. But I am worried, because when one application is down for maintenance (and I have 10,000 of them), the messages of the others cannot be erased, since messages keep accumulating in one queue. It is clear that, whatever the configuration, within a few seconds I will crash or block.
I am surprised to see communication between two applications stop because two other applications no longer communicate with each other. This is a strong limitation compared to ActiveMQ.
This will limit the problem but not solve it.
With mKahaDB, if I have 2 queues A and B, where A receives one message per second and B receives 5000/s and the consumers of B consume them immediately, then queue B is always (or almost always) empty and occupies very little disk. If the consumer of A is stopped, queue A grows but queue B does not occupy more disk.
With Artemis, if I reduce the journal size to 5000, then every second a journal file fills up and is deleted. If A stops, there is one message from A in each journal file, so we keep 5000 messages on disk for every second that passes, even though queue B is almost always empty. If I reduce the journal size to 500 I keep fewer messages, but it still grows 500 times faster than with mKahaDB. And if I reduce the journal size to 1 to get the same result as with mKahaDB, I force Artemis to handle millions of files, which collapses the performance.
I have the impression that Artemis is not made to handle very large numbers of queues, unlike ActiveMQ.
Thanks.

How to modify the configuration of Kafka to process large amounts of data

I am using kafka_2.10-0.10.0.1. I have two questions:
- I want to know how I can modify the default configuration of Kafka to process a large amount of data with good performance.
- Is it possible to configure Kafka to process the records in memory without storing them on disk?
Thank you.
Is it possible to configure Kafka to process the records in memory without storing them on disk?
No. Kafka is all about storing records reliably on disk, and then reading them back quickly off of disk. In fact, its documentation says:
As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
You can read more about its design here: https://kafka.apache.org/documentation/#design. The implementation section is also quite interesting: https://kafka.apache.org/documentation/#implementation.
That said, Kafka is also all about processing large amounts of data with good performance. In 2014 it could handle 2 million writes per second on three cheap instances: https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines. More links about performance:
https://docs.confluent.io/current/kafka/deployment.html
https://www.confluent.io/blog/optimizing-apache-kafka-deployment/
https://community.hortonworks.com/articles/80813/kafka-best-practices-1.html
https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html
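As a rough illustration of the kind of producer-side tuning those links discuss, here is a hedged sketch; the values are arbitrary starting points rather than recommendations, and the right numbers depend entirely on your message sizes and durability requirements:
```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ThroughputTunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Throughput-oriented settings: batch more records per request,
        // wait a few milliseconds for batches to fill, and compress them.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);             // 64 KB batches
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);                      // wait up to 5 ms
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64 * 1024 * 1024L);  // 64 MB send buffer
        props.put(ProducerConfig.ACKS_CONFIG, "1");                         // trades durability for speed

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
```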

How does Kafka guarantee sequential disk access?

I'm a newbie to Kafka. When I read the Kafka documentation, I saw that Kafka performs well because of sequential disk access.
But how is that possible? In Java (or any other language), if I use file I/O, the OS will handle it appropriately. However, I can't know whether the OS stores the files I want to write in multiple sectors or in contiguous sectors. So, in my opinion, Kafka cannot always claim that disk access is sequential.
Am I right or not?
Kafka does not always access the disk sequentially, but it does some things that make it much more likely that disk access is often sequential. All Kafka messages are stored in larger segment files (1GB each by default), and since Kafka messages are not deleted when consumed (unlike in other message brokers), Kafka will not end up creating a fragmented filesystem over time by continuously creating and deleting many variable-length files. Instead it creates segment files and appends to each one until it reaches 1GB (a configurable limit). Only when all messages in a segment expire will it delete the entire 1GB segment. This means that these 1GB sections of disk are often actually laid out as contiguous blocks.
It is a recommended best practice to keep these Kafka commit log files on a dedicated filesystem so they do not get fragmented by other apps reading and writing variable-length files into the same filesystem. More importantly, most reading and writing to these segment files is sequential and goes through the OS page cache, which reduces disk I/O even further by caching the most often accessed pages in memory. This is why it is recommended to tune the kernel's swappiness to 1, to reduce the likelihood that these cached pages get swapped out of memory.
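As a small illustration of the segment sizing described above, here is a hedged sketch of creating a topic with explicit segment and retention settings via the Java admin client (the topic name and values are placeholders, and the AdminClient API requires a reasonably recent Kafka client library):
```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class SegmentConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        Map<String, String> topicConfig = new HashMap<>();
        // Each partition is written as append-only segment files of this size...
        topicConfig.put(TopicConfig.SEGMENT_BYTES_CONFIG, String.valueOf(1024L * 1024 * 1024)); // 1 GB
        // ...and whole segments are deleted only once their messages expire.
        topicConfig.put(TopicConfig.RETENTION_MS_CONFIG, String.valueOf(7L * 24 * 60 * 60 * 1000)); // 7 days

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("example-topic", 3, (short) 1).configs(topicConfig);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```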

Fluentd persistence and reliability

Since Fluentd does not use Redis but supposedly has better built-in reliability, how does that solve the problem of the instance going down before it has a chance to send the logs to Elasticsearch? Is this something not significant enough to worry about? For example, you could set the streaming of logs to a high frequency, so if you ever lose the instance, only a few lines would not have been transferred over.
Fluentd uses a buffering mechanism: once it receives a set of events, they are stored either in memory or on the file system, the latter being what is used for reliability. The events are stored in chunks, and after a certain period of time the chunks are flushed to the destination. If flushing a chunk fails, it will be retried later.
You can read more about buffering in the official documentation:
http://docs.fluentd.org/articles/buffer-plugin-overview

How to increase number of messages that can be stored in MSMQ

We have a number of MSMQ queues throughout our system, both private and public queues. Sometimes a Windows service that reads from a queue will crash, and so messages will build up in that queue. Once the queue gets to a certain size (maybe 60K messages), all queues on that server will stop working, throwing errors about insufficient resources.
My question is, how are the queues really working behind the scenes, are they storing messages in RAM or on the hard drive? Does it run out of resources and crash when the server runs out of RAM? If it's using some allocated space on the hard drive, is there a way to increase the allowable size? If it's using RAM, can I simply add RAM to the servers and then that will increase the allowable size?
I need to make sure that when a service goes down, we can handle storing 100K or 200K messages in that queue while we work on fixing the service, as those messages are critical to our business.
Here is an article on MSDN that seems to address your question (as John points out below, this only applies to Windows Server 2000 so should probably be ignored by most people): Resource management in MSMQ applications. Specifically:
For MSMQ 1.0 and MSMQ 2.0, the combined size of messages capable of being stored on one machine is not limited to the amount of RAM in the machine or the size of the hard disk, but to the amount of virtual address space provided to the MSMQ service by the operating system (this limitation has been lifted in MSMQ 3.0). Each process in an x86 machine is allotted a virtual 4 GB of addressable memory. 2GB is reserved for use in kernel mode and 2GB for user mode. The MSMQ Queue Manager operates in user mode and therefore has an addressable 2GB of virtual address space to work with. Each message's data is stored in RAM, which is backed up by the system's paging file or memory mapped files. MSMQ uses memory mapped files to store both express and recoverable messages. Since we are limited to 2GB of addressable memory, we are limited to 2GB worth of messages on a disk. When you take into account the memory utilized by MSMQ code and its internal data structures, as well as file allocation to store message files on disk, we end up with between 1.4GB and 1.6GB worth of messages that can be stored on disk.
Note   This limitation of 1.6GB can be raised to approximately 2.6GB by enabling 3GB tuning on the MSMQ Service. See Q171793 for more information on how to enable 3GB tuning.
Edit: the tuning link seems to be broken. I believe it should be pointing here.
In terms of later versions of MSMQ, John discusses the issue in a blog post.
Maximum number of messages
This one is not as simple to work out. From my Insufficient Resources post we know that each message needs 75 bytes of kernel memory for indexing so, for example, 2 million messages would require roughly 150 megabytes. It would seem, therefore, that all you need to do is add more RAM. After looking at a comparison of 32-bit and 64-bit memory architectures, though, you will quickly have to move to the 64-bit platform to take advantage of your investment as 32-bit machines max out at 450 MB of paged pool memory regardless of the amount of RAM fitted.
But, again, if you are trying to work out what amount of RAM will generate the paged pool memory required to accommodate a billion MSMQ messages, your design spec is up for some serious reviewing.
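For a rough sense of scale against the 100K-200K backlog mentioned in the question, and taking that 75-bytes-per-message figure at face value:
$$
200{,}000 \text{ messages} \times 75 \text{ bytes/message} = 15{,}000{,}000 \text{ bytes} \approx 15 \text{ MB of paged pool}
$$
which on its own is well below the 450 MB paged-pool ceiling quoted above for 32-bit machines; the 1.6 GB message-storage limit from the older MSDN article is a separate constraint that only applies to MSMQ 1.0/2.0.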
Not sure about the in-depth answer, but on a surface level anyhow, a non-transactional queue stores messages in memory, whereas a transactional queue stores messages on disk.
UPDATE
As John states below, all messages are held on disk whether durable or non-durable queues are used.