log4j2 does not seem to be following my log rolling policies - jboss

We are using log4j2 to handle our application logs, and the application is running under JBoss EAP 7.1.
In my log4j2.xml I am trying to keep a maximum of 5 log files of 10 MB each, but something strange is going on: I am ending up with 5 or more log files per day (depending on the appender). I don't know how or why the date is being added to the file names for most of my logs, nor why I have more than 5 logs (per day) in some cases.
Here is part of my log4j2.xml:
<RollingFile name="springAppender" fileName="/opt/jboss-eap-7.1/standalone/log/spring.log"
filePattern="/opt/jboss-eap-7.1/standalone/log/spring-%i.log">
<PatternLayout> <pattern>%d${ISO8601} [%t] [%X{sessionId}] [%X{orgName}] %-9p %c - %enc{%m}%n</pattern></PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="10 MB"/>
</Policies>
<DefaultRolloverStrategy max="5"/>
</RollingFile>
<RollingFile name="SqlAppender" fileName="/opt/jboss-eap-7.1/standalone/log/sql.log"
filePattern="/opt/jboss-eap-7.1/standalone/log/sql-%i.log">
<PatternLayout> <pattern>%d{ISO8601} [%t] [%X{sessionId}] [%X{orgName}] %-9p %c - %enc{%m}%n</pattern></PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="10 MB"/>
</Policies>
<DefaultRolloverStrategy max="5"/>
</RollingFile>
And here is an excerpt of my log directory listing:
-rw-r--r--. 1 jboss jboss 10485857 May 30 05:26 spring-05-30-2019-1.log
-rw-r--r--. 1 jboss jboss 10485924 May 30 05:34 spring-05-30-2019-2.log
-rw-r--r--. 1 jboss jboss 10485861 May 30 05:40 spring-05-30-2019-3.log
-rw-r--r--. 1 jboss jboss 10485796 May 30 05:49 spring-05-30-2019-4.log
-rw-r--r--. 1 jboss jboss 10485946 May 30 05:56 spring-05-30-2019-5.log
-rw-r--r--. 1 jboss jboss 10485950 May 30 07:43 spring-05-30-2019-6.log
-rw-r--r--. 1 jboss jboss 10485808 May 30 13:13 spring-05-30-2019-7.log
-rw-r--r--. 1 jboss jboss 2302827 Jun 25 17:51 spring.log
-rw-r--r--. 1 jboss jboss 10485766 Jun 25 04:51 sql-1.log
-rw-r--r--. 1 jboss jboss 10485896 Jun 25 04:52 sql-2.log
-rw-r--r--. 1 jboss jboss 10485990 Jun 25 04:54 sql-3.log
-rw-r--r--. 1 jboss jboss 10485874 Jun 25 04:56 sql-4.log
-rw-r--r--. 1 jboss jboss 10485967 Jun 25 04:58 sql-5.log
-rw-r--r--. 1 jboss jboss 5782246 Jun 25 14:55 sql.log
As far as I can see, the SqlAppender and the springAppender are structurally the same. Yet sql.log rolls as desired, while spring.log both includes the date in the rolled file names and keeps more than the maximum of 5 files. This is slowly filling my hard drive.
I have grepped within my JBoss directory for patterns like "\-yyyy\-" that might tip me off to another place that could be controlling how the logs roll, but to no avail.
We run in standalone mode, and there is a periodic-rotating-file-handler in our standalone.xml, but that controls server.log. There is also a logging.properties file that I believe controls logging until the application fully starts and hands off the logging to log4j2, but I don't see anything in there that seems to contain a pattern that would insert a date into my file names.
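For reference, a filePattern that would produce rolled names like spring-05-30-2019-1.log would look roughly like this (purely illustrative; this appender is not in any config I can find):
<RollingFile name="hypotheticalAppender" fileName="/opt/jboss-eap-7.1/standalone/log/spring.log"
             filePattern="/opt/jboss-eap-7.1/standalone/log/spring-%d{MM-dd-yyyy}-%i.log">
    <!-- the %d{MM-dd-yyyy} in the filePattern is what inserts the date into the rolled file names -->
    ...
</RollingFile>
So something, somewhere, appears to be applying a date-based file pattern to this appender.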
How do I stop this behavior?

Related

need to compress daily rotated jboss server logs and remove files older than 15 days

I have daily rotated JBoss server log files, and I need to compress them on a daily basis and remove all zipped files older than 15 days. Please help me with the exact configuration to put under /etc/logrotate.d.
This is how my logs look now:
-rw-rwxr--+ 1 jboss jboss 725462 Sep 9 14:14 server.log.2020-09-09
-rw-rwxr--+ 1 jboss jboss 1106353 Sep 10 12:44 server.log.2020-09-10
-rw-rwxr--+ 1 jboss jboss 114741 Sep 15 04:15 server.log.2020-09-15
-rw-rwxr--+ 1 jboss jboss 670181 Sep 22 15:46 server.log.2020-09-22
-rw-rwxr--+ 1 jboss jboss 115057 Sep 29 05:14 server.log.2020-09-29
-rw-rwxr--+ 1 jboss jboss 553603 Sep 30 14:13 server.log.2020-09-30
-rw-rwxr--+ 1 jboss jboss 113213 Oct 6 04:57 server.log
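Since JBoss itself already rotates server.log daily, one hedged option, instead of (or alongside) an /etc/logrotate.d entry, is a small daily cron script that compresses the already-rotated files and deletes anything older than 15 days. The paths below are assumptions; adjust them to your installation:
#!/bin/sh
# hypothetical /etc/cron.daily/compress-jboss-logs
LOGDIR=/path/to/jboss/standalone/log

# gzip rotated server logs that are at least a day old and not yet compressed
find "$LOGDIR" -name 'server.log.20*' ! -name '*.gz' -mtime +0 -exec gzip {} \;

# remove compressed copies older than 15 days
find "$LOGDIR" -name 'server.log.20*.gz' -mtime +15 -delete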

kafka + which files should be created under kafka-logs

Usually, after a Kafka cluster scratch installation, I see these files under /data/kafka-logs (the Kafka broker log directory, where all topics should be located):
ls -ltr
-rw-r--r-- 1 kafka hadoop 0 Jan 9 10:07 cleaner-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 57 Jan 9 10:07 meta.properties
drwxr-xr-x 2 kafka hadoop 4096 Jan 9 10:51 _schemas-0
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 replication-offset-checkpoint
But on some other Kafka scratch installations we saw that the folder /data/kafka-logs is empty.
Does this indicate a problem?
Note: we have not created the topics yet.
I'm not sure exactly when each checkpoint file is created (they track log-cleaner and replication offsets), but I assume meta.properties is created at broker startup.
Otherwise, you would see one folder per topic-partition; for example, it looks like you had one topic created, _schemas.
If you only see one partition folder across multiple brokers, then the replication factor for that topic is set to 1.
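As a rough illustration (the topic name and partition count here are invented), creating a topic should populate the log dir with one folder per partition:
# newer Kafka versions; older ones need --zookeeper localhost:2181 instead of --bootstrap-server
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 3 --topic test

ls /data/kafka-logs
# roughly: _schemas-0  test-0  test-1  test-2  meta.properties  plus the checkpoint files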

Monitor kafka with Prometheus and Grafana

I have followed the steps below to monitor Kafka with Prometheus and Grafana, but the JMX agent port does not get opened.
wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
tar -xzf kafka_*.tgz
cd kafka_*
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.6/jmx_prometheus_javaagent-0.6.jar
wget https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-0-8-2.yml
./bin/zookeeper-server-start.sh config/zookeeper.properties &
KAFKA_OPTS="$KAFKA_OPTS -javaagent:$PWD/jmx_prometheus_javaagent-0.6.jar=7071:$PWD/kafka-0-8-2.yml"
./bin/kafka-server-start.sh config/server.properties &
Then I checked with curl http://localhost:7071/metrics in the terminal, and it reports:
curl: (7) Failed connect to localhost:7071; Connection refused
Currently I have opened all ports on the server to my network. When I check with netstat -tupln | grep LISTEN, port 7071 is not listed in the output.
Below are the contents of the Kafka directory:
drwxr-xr-x. 3 root root 4096 Aug 23 12:22 bin
drwxr-xr-x. 2 root root 4096 Oct 15 2016 config
-rw-r--r--. 1 root root 20356 Aug 21 10:50 hs_err_pid1496.log
-rw-r--r--. 1 root root 19432 Aug 21 10:55 hs_err_pid2447.log
-rw-r--r--. 1 root root 1225418 Feb 5 2016 jmx_prometheus_javaagent-0.6.jar
-rw-r--r--. 1 root root 2824 Aug 21 10:48 kafka-0-8-2.yml
drwxr-xr-x. 2 root root 4096 Aug 21 10:48 libs
-rw-r--r--. 1 root root 28824 Oct 5 2016 LICENSE
drwxr-xr-x. 2 root root 4096 Oct 11 15:05 logs
-rw-------. 1 root root 8453 Aug 23 12:08 nohup.out
-rw-r--r--. 1 root root 336 Oct 5 2016 NOTICE
drwxr-xr-x. 2 root root 46 Oct 15 2016 site-docs
Kafka is running on port 2181, and ZooKeeper is also running.
If you do not mind opening up the jmx port, you can also do it like this:
export JMX_PORT=9999
export KAFKA_JMX_OPTS='-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=9999'
./bin/kafka-server-start.sh config/server.properties &
java -jar jmx_prometheus_httpserver-0.10-jar-with-dependencies.jar 9300 kafka-0-8-2.yaml &
You build the jar-with-dependencies from the jmx_exporter source with mvn package.
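To sanity-check the exporter afterwards, a quick curl against the HTTP server started above should return Prometheus-format metrics:
curl -s http://localhost:9300/metrics | head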
I had the same problem when setting the KAFKA_OPTS environment variable in my shell. It is even worse if you add the variable to your ~/.profile: KAFKA_OPTS is used by both kafka-server-start.sh and zookeeper-server-start.sh, so when you start ZooKeeper, port 7071 is taken by ZooKeeper for exporting its metrics, and when you then start Kafka you get a "port 7071 is in use" error.
I solved the problem by setting the environment variable in the systemd service file. I described it in my article:
[Unit]
...
[Service]
...
Restart=no
Environment=KAFKA_OPTS=-javaagent:/home/morteza/myworks/jmx_prometheus_javaagent-0.9.jar=7071:/home/morteza/myworks/kafka-2_0_0.yml
[Install]
...
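After editing the unit file, something like the following (the unit name kafka.service is an assumption) reloads systemd, restarts the broker, and verifies that the agent is listening on 7071:
sudo systemctl daemon-reload
sudo systemctl restart kafka
curl -s http://localhost:7071/metrics | head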

How to monitor ActiveMQ Artemis on WildFly with Hawt.io

I have ActiveMQ Artemis embedded in WildFly 10 (as it ships) and want to monitor it via Hawt.io.
What I did:
ActiveMQ Artemis is configured and running
I dropped hawtio.war into the deployment directory.
I dropped the artemis-plugin built by Maven, with the name changed to just artemis-plugin.war.
I even dropped in jolokia.war.
I also tried with the standalone jar: I made a plugin directory and put artemis-plugin.war there.
When I connect to Jolokia I get CPU usage (and the like) for WildFly, and I can see the queue via JMX, but still no success with Artemis.
hawtio does not recognize the plugin, even though it got loaded:
[main] INFO org.eclipse.jetty.webapp.WebAppContext - An Artemis plugin at http://0.0.0.0:8081/artemis-plugin
[main] INFO jetty - Added 3rd party plugin with context-path: /artemis-plugin
I do not have an 'Artemis' tab, and going to http://0.0.0.0:8081/artemis-plugin just shows me the directory listing:
Directory: /artemis-plugin/
META-INF/ 4096 bytes Aug 4, 2016 10:41:10 AM
WEB-INF/ 4096 bytes Aug 4, 2016 10:41:10 AM
log4j.properties 215 bytes Aug 3, 2016 3:39:10 PM
plugin/ 4096 bytes Aug 4, 2016 10:41:09 AM
Any ideas?
JMX management has to be switched on:
<management jmx-enabled="true"/>
source
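Once JMX management is enabled, one way to confirm that the broker's MBeans are actually exposed (assuming jolokia.war is deployed under /jolokia on port 8080; adjust host, port and context path to your setup) is Jolokia's search endpoint:
# list ActiveMQ Artemis MBeans through Jolokia
curl -s 'http://localhost:8080/jolokia/search/org.apache.activemq.artemis:*'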

How to purge zookeeper logs with PurgeTxnLog?

Zookeeper's rapidly pooping its internal binary files all over our production environment.
According to: http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html
and
http://dougchang333.blogspot.com/2013/02/zookeeper-cleaning-logs-snapshots.html
this is expected behavior and you must call org.apache.zookeeper.server.PurgeTxnLog
regularly to rotate its poop.
So:
% ls -l1rt /tmp/zookeeper/version-2/
total 314432
-rw-r--r-- 1 root root 67108880 Jun 26 18:00 log.1
-rw-r--r-- 1 root root 947092 Jun 26 18:00 snapshot.e99b
-rw-r--r-- 1 root root 67108880 Jun 27 05:00 log.e99d
-rw-r--r-- 1 root root 1620918 Jun 27 05:00 snapshot.1e266
... many more
% sudo java -cp zookeeper-3.4.6.jar:lib/jline-0.9.94.jar:lib/log4j-1.2.16.jar:lib/netty-3.7.0.Final.jar:lib/slf4j-api-1.6.1.jar:lib/slf4j-log4j12-1.6.1.jar:conf \
org.apache.zookeeper.server.PurgeTxnLog \
/tmp/zookeeper/version-2 /tmp/zookeeper/version-2 -n 3
but I get:
% ls -l1rt /tmp/zookeeper/version-2/
... all the existing logs plus a new directory
/tmp/zookeeper/version-2/version-2
Am I doing something wrong? (This is with zookeeper-3.4.6.)
ZooKeeper now has an Autopurge feature as of 3.4.0. Take a look at https://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html
It says you can use autopurge.snapRetainCount and autopurge.purgeInterval:
autopurge.snapRetainCount
New in 3.4.0: When enabled, ZooKeeper auto purge feature retains the autopurge.snapRetainCount most recent snapshots and the corresponding transaction logs in the dataDir and dataLogDir respectively and deletes the rest. Defaults to 3. Minimum value is 3.
autopurge.purgeInterval
New in 3.4.0: The time interval in hours for which the purge task has to be triggered. Set to a positive integer (1 and above) to enable the auto purging. Defaults to 0.
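A minimal zoo.cfg fragment using these two settings (the values are just examples) looks like this:
# keep the 3 most recent snapshots plus their transaction logs
autopurge.snapRetainCount=3
# run the purge task every 24 hours (0, the default, disables auto purging)
autopurge.purgeInterval=24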
Since I'm not hearing a fix via Zookeeper, this was an easy workaround:
COUNT=6                               # number of newest files to keep
DATADIR=/tmp/zookeeper/version-2/
# list oldest first, drop the newest $COUNT entries, delete the rest
ls -1drt ${DATADIR}/* | head --lines=-${COUNT} | xargs sudo rm -f
Run it once a day from a cron job or Jenkins to prevent ZooKeeper from exploding.
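For example, a crontab entry along these lines (the script path is hypothetical) runs the cleanup nightly:
# run the cleanup at 03:00 every night
0 3 * * * /usr/local/bin/zk-cleanup.sh >> /var/log/zk-cleanup.log 2>&1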
You need to pass the dataDir and snapDir parameters with the value that is configured as dataDir in your ZooKeeper configuration file.
If your configuration looks like the following:
dataDir=/data/zookeeper
then you need to call PurgeTxnLog (version 3.5.9) like the following if you want to keep the last 10 logs/snapshots:
java -cp zookeeper.jar:lib/slf4j-api-1.7.5.jar:lib/slf4j-log4j12-1.7.5.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.PurgeTxnLog /data/zookeeper /data/zookeeper -n 10
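Alternatively, the ZooKeeper distribution ships a wrapper script that invokes PurgeTxnLog for you and reads the data directories from your config (hedged: check the bin/ directory of your version):
# equivalent cleanup using the bundled wrapper
./bin/zkCleanup.sh -n 10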