Artemis: AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers - activemq-artemis

I'm sending a message from Application A to Artemis but I'm getting this error from Application A:
AMQ212054: Destination address=my-service is blocked. If the system is configured to block make sure you consume messages on this configuration.
Looking at the logs of artemis starting up this is what I see which I believe is the cause:
AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers
I've looked at the documentation here and found nothing that helps. I've also logged into the running container and changed 'max-disk-usage' to 100, as my Google research suggested, but so far nothing has helped.
I'm running artemis using the following command:
docker run -it --rm -e ARTEMIS_USERNAME=artemis -e ARTEMIS_PASSWORD=artemis -p 8161:8161 -p 61616:61616 vromero/activemq-artemis
Any help is appreciated. Thank you!

You are receiving this message because your computer's disk is over 90% full, and Artemis blocks producers once that happens. To solve the problem you can either:
1. Clear up disk space on your computer so that usage is below 90%.
2. Increase how full your disk may be before Artemis blocks producers. To do this, modify the broker configuration file located at:
path-to-broker\artemis\etc\broker.xml
In this file there is a tag named max-disk-usage, which defaults to 90. Simply increase it to 100 (or whatever value you are comfortable with).
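For reference, the relevant part of broker.xml looks roughly like this (a sketch, not your exact file; 90 is the shipped default, and disk-scan-period controls how often usage is sampled):

```xml
<core xmlns="urn:activemq:core">
  <!-- percentage of disk usage above which producers are blocked (default 90) -->
  <max-disk-usage>100</max-disk-usage>
  <!-- how often, in milliseconds, disk usage is sampled (default 5000) -->
  <disk-scan-period>5000</disk-scan-period>
</core>
```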
Note that Artemis configures brokers to start blocking producers once your computer's disk usage reaches 90% in order to avoid exhausting the disk entirely in the case of a message backlog.

I downloaded a different version and this issue hasn't occurred since.

Related

Command confluent local services start gives an error : Starting ZooKeeper Error: ZooKeeper failed to start

I'm trying to run this command : confluent local services start
I don't know why it gives me an error each time before passing to the next step, so I had to run it again over and over until it passed all the steps.
What is the reason for the error, and how can I solve the problem?
You need to open the log files to inspect any errors that may be happening.
But, it's possible the services are having a race condition. Schema Registry requires Kafka, REST Proxy and Connect require the Schema Registry... Maybe they are not waiting for the previous components to start.
Or maybe your machine does not have enough resources to start all services. E.g. I believe at least 6GB of RAM are necessary. If you have 8GB on the machine, and Chrome and lots of other services are running, for example, then you wouldn't have 6GB readily available.
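As a quick way to test the resource theory, you can check available memory before starting the stack. This is only a sketch: the 6 GB figure is the rough estimate above, not an official requirement.

```shell
#!/bin/sh
# Pre-flight check: compare available memory against a rough 6 GB estimate.
# (6144 MB is the ballpark from the answer above, not an official figure.)
required_mb=6144
available_mb=$(free -m | awk '/^Mem:/ {print $7}')   # the "available" column

if [ "$available_mb" -lt "$required_mb" ]; then
    echo "only ${available_mb} MB available; services may fail to start"
else
    echo "ok: ${available_mb} MB available"
fi
```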

Debugging What Process Most Consumed Memory on Pods

I have an issue where my running application almost hit its memory limit of 1Gi. I have checked:
the describe pods output, but no events show up
the htop process list through exec, but it shows nothing heavy running in the background
the memory.stat file, which shows this
How can I debug which process consumes most of my memory? I don't know much about memory.stat; I've already read the memory.stat documentation in the kernel docs and some Stack Overflow posts, but I'm still puzzled. Could you give me a suggestion?
htop is a good approach for finding relative memory utilization. We can see in the screenshot that only apache2 is running inside the pod. Knowing Apache, I would guess that it has big log files. Can you check with kubectl describe pod whether it uses emptyDir volumes?
Another approach is to run du -sh /var/log/apache2/* from inside the pod (check the log location in the config file if no logs are there). If there are big files, truncate them with cat /dev/null > /var/log/apache2/[name_of_file] and then check memory usage; if the volume is backed by RAM, you should see a decrease in memory usage.
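A small sketch of that workflow (the apache2 path is the usual Debian/Ubuntu location; adjust to wherever your logs actually live). Truncating with `: > file` keeps the file descriptor apache2 holds open valid, unlike deleting the file:

```shell
#!/bin/sh
# List the biggest files under the log directory (typical Debian/Ubuntu
# apache2 location; adjust for your image).
du -sh /var/log/apache2/* 2>/dev/null | sort -rh | head -n 5

# Truncate a log in place. Unlike deleting the file, this keeps the open
# file descriptor valid for the running process and frees space at once.
: > /var/log/apache2/access.log
```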

Confluent Kafka services (local) do not start properly on wsl2 and seems to timeout communicating their status

I am seeing various different issues while trying to start Kafka services on wsl2. Details/symptoms below:
Confluent Kafka (7.0.0) platform
wsl2 - ubuntu 20.04LTS
When I use the command:
confluent local services start
Typically the system will take a long time and then exit with service failed (e.g. zookeeper, as that is the first service to start).
If I check the logs, the service has actually started. So I type the command again, and sure enough it immediately reports zookeeper as up, then proceeds to try to start kafka, which again after a minute is reported as failed to start (but it really has started).
I suspect that after starting the service (which is quite fast), the system is unable to communicate the status back and thus times out; I am not sure where the logs related to this are.
You can see this in the screenshot below.
This means that starting the whole stack (zookeeper/kafka/schema-registry/kafka-rest/kafka-connect/etc.) takes forever, and in between I start getting other errors (sometimes schema-registry cannot find the cluster id, sometimes it's a log-file-related error), which means I need to destroy everything and start again.
I have tried this over a couple of days and can't get it to work. Is Confluent Kafka that unstable on Windows, or am I missing some config change?
In terms of setup, I have not done any change in the config and am using the default config/ports.

What is the best way to monitor Heroku Postgres memory and cpu

We're on Heroku and trying to understand if it's time to upgrade our Postgres database or not. I have two questions:
Are there any tools you know of that track Heroku Postgres logs to record their memory and CPU usage stats over time?
Are those (Memory and CPU usage) even the best metrics to look at to determine if we should upgrade to a larger instance or not?
The most useful tool I've found for monitoring Heroku Postgres instances is the logs associated with the database's dyno, which you can monitor using heroku logs -t -d heroku-postgres. This spits out some useful stats every 5 minutes, so if your logs fill up quickly, it might not output anything right away; use -t to wait for the next log line.
Output will look something like this:
2022-06-27T16:34:49.000000+00:00 app[heroku-postgres]: source=HEROKU_POSTGRESQL_SILVER addon=postgresql-fluffy-55941 sample#current_transaction=81770844 sample#db_size=44008084127bytes sample#tables=1988 sample#active-connections=27 sample#waiting-connections=0 sample#index-cache-hit-rate=0.99818 sample#table-cache-hit-rate=0.9647 sample#load-avg-1m=0.03 sample#load-avg-5m=0.205 sample#load-avg-15m=0.21 sample#read-iops=14.328 sample#write-iops=15.336 sample#tmp-disk-used=543633408 sample#tmp-disk-available=72435159040 sample#memory-total=16085852kB sample#memory-free=236104kB sample#memory-cached=15075900kB sample#memory-postgres=223120kB sample#wal-percentage-used=0.0692420374380985
The main stats I pay attention to are table-cache-hit-rate, which is a good proxy for how much of your active dataset fits in memory, and load-avg-1m, which tells you how much load per CPU the server is experiencing.
You can read more about all these metrics here.
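If you want to watch those two numbers over time without a separate add-on, the sample lines are easy to parse. A minimal sketch (the line here is a shortened copy of the sample output above; in practice you would pipe heroku logs -t -d heroku-postgres into the same filter):

```shell
#!/bin/sh
# Pull table-cache-hit-rate and load-avg-1m out of a heroku-postgres log line.
line='source=HEROKU_POSTGRESQL_SILVER sample#table-cache-hit-rate=0.9647 sample#load-avg-1m=0.03 sample#load-avg-5m=0.205'

# Split the line into key=value tokens, then match the two keys we care about.
echo "$line" | tr ' ' '\n' | awk -F= '
  /table-cache-hit-rate/ { print "cache hit rate:", $2 }
  /load-avg-1m/          { print "load avg (1m):", $2 }'
# prints:
#   cache hit rate: 0.9647
#   load avg (1m): 0.03
```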

mongod main process killed by KILL signal

One of the mongo nodes in the replica set went down today. I couldn't find out what happened, but when I checked the logs on the server, I saw this message: 'mongod main process killed by KILL signal'. I tried googling for more information but failed. Basically I would like to know what the KILL signal is, who triggered it, and possible causes/fixes.
Mongo version 3.2.10 on Ubuntu.
The KILL signal (SIGKILL) means the process is terminated instantly, with no chance to exit cleanly. It can be issued by an administrator (e.g. kill -9) or by the kernel when something goes very wrong.
If this is the only log entry left, the process was killed abruptly. Most likely your system ran out of memory and the kernel's OOM killer terminated mongod (I've had this problem with other processes before). You could check whether swap is configured on your machine (using swapon -s), but consider adding more memory to the server; swap would only keep it from breaking, as it is very slow.
Other things worth looking at are the free disk space left and the syslog (/var/log/syslog).
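To test the out-of-memory theory concretely, you can search the kernel log for OOM-killer entries that name mongod. A sketch, assuming Ubuntu's default log paths (dmesg also works where the syslog has rotated away):

```shell
#!/bin/sh
# Look for OOM-killer evidence naming mongod. /var/log/syslog is the
# Ubuntu default; the loop skips any file that does not exist.
for log in /var/log/syslog /var/log/kern.log; do
    [ -r "$log" ] && grep -i "out of memory" "$log" | grep -i mongod
done

# The kernel ring buffer often still holds the kill record.
dmesg 2>/dev/null | grep -i "killed process"

# Check whether any swap is configured at all.
swapon -s 2>/dev/null
```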