HornetQ restart overwrites the log files

A HornetQ restart overwrites the log files, although log file rotation itself works fine. I am running HornetQ in standalone clustered mode with the following configuration:
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,fileName,suffix,append
handler.FILE.autoFlush=true
handler.FILE.fileName=../logs/hornetq.log
handler.FILE.suffix=.yyyy-MM-dd
handler.FILE.append=true
handler.FILE.formatter=PATTERN

Found the issue: the order of the properties matters! The properties are applied in the order listed, so (as far as I can tell) append must be set before fileName, otherwise the handler opens the file before it knows to append and truncates it.
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=autoFlush,append,fileName,suffix
handler.FILE.autoFlush=true
handler.FILE.append=true
handler.FILE.fileName=../logs/hornetq.log
handler.FILE.suffix=.yyyy-MM-dd
handler.FILE.formatter=PATTERN
https://community.jboss.org/message/742699

Related

Can't turn audit logging off in Artemis

We're getting spammed in our artemis.log with log messages from org.apache.activemq.audit.message and org.apache.activemq.audit.base, like the following:
2020-06-04 12:02:26,151 INFO [org.apache.activemq.audit.message] AMQ601500: User xxx is sending a core message on target resource: ...
and
2020-06-04 12:02:26,081 INFO [org.apache.activemq.audit.base] AMQ601019: User amq|xxx| is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.AddressControlImpl#60975100 []
We've added the following lines to our logging.properties with no luck:
logger.org.apache.activemq.audit.base.level=ERROR
logger.org.apache.activemq.audit.message.level=ERROR
What's going on here? How do we turn these off?
It looks like you haven't configured your logging.properties appropriately to ignore messages from those loggers. You've added lines to set the level for those loggers, but have you added those loggers to the loggers list?
For example, this is the default logging.properties shipped with ActiveMQ Artemis 2.13.0:
loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message,org.apache.activemq.audit.resource
# Root logger level
logger.level=INFO
# ActiveMQ Artemis logger levels
logger.org.apache.activemq.artemis.core.server.level=INFO
logger.org.apache.activemq.artemis.journal.level=INFO
logger.org.apache.activemq.artemis.utils.level=INFO
logger.org.apache.activemq.artemis.jms.level=INFO
logger.org.apache.activemq.artemis.integration.bootstrap.level=INFO
logger.org.eclipse.jetty.level=WARN
# Root logger handlers
logger.handlers=FILE,CONSOLE
# to enable audit change the level to INFO
logger.org.apache.activemq.audit.base.level=ERROR
logger.org.apache.activemq.audit.base.handlers=AUDIT_FILE
logger.org.apache.activemq.audit.base.useParentHandlers=false
logger.org.apache.activemq.audit.resource.level=ERROR
logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE
logger.org.apache.activemq.audit.resource.useParentHandlers=false
logger.org.apache.activemq.audit.message.level=ERROR
logger.org.apache.activemq.audit.message.handlers=AUDIT_FILE
logger.org.apache.activemq.audit.message.useParentHandlers=false
# Console handler configuration
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.properties=autoFlush
handler.CONSOLE.level=DEBUG
handler.CONSOLE.autoFlush=true
handler.CONSOLE.formatter=PATTERN
# File handler configuration
handler.FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.FILE.level=DEBUG
handler.FILE.properties=suffix,append,autoFlush,fileName
handler.FILE.suffix=.yyyy-MM-dd
handler.FILE.append=true
handler.FILE.autoFlush=true
handler.FILE.fileName=${artemis.instance}/log/artemis.log
handler.FILE.formatter=PATTERN
# Formatter pattern configuration
formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.PATTERN.properties=pattern
formatter.PATTERN.pattern=%d %-5p [%c] %s%E%n
#Audit logger
handler.AUDIT_FILE=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.AUDIT_FILE.level=INFO
handler.AUDIT_FILE.properties=suffix,append,autoFlush,fileName
handler.AUDIT_FILE.suffix=.yyyy-MM-dd
handler.AUDIT_FILE.append=true
handler.AUDIT_FILE.autoFlush=true
handler.AUDIT_FILE.fileName=${artemis.instance}/log/audit.log
handler.AUDIT_FILE.formatter=AUDIT_PATTERN
formatter.AUDIT_PATTERN=org.jboss.logmanager.formatters.PatternFormatter
formatter.AUDIT_PATTERN.properties=pattern
formatter.AUDIT_PATTERN.pattern=%d [AUDIT](%t) %s%E%n
Notice that the first line defines the loggers list and includes org.apache.activemq.audit.base, org.apache.activemq.audit.message, & org.apache.activemq.audit.resource.
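In your case, a minimal fix (a sketch against your existing logging.properties; the ... stands for whatever loggers you already list there) is to append the audit loggers to the loggers entry, in addition to the level lines you already added:
loggers=...,org.apache.activemq.audit.base,org.apache.activemq.audit.message
logger.org.apache.activemq.audit.base.level=ERROR
logger.org.apache.activemq.audit.message.level=ERROR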

Kafka Server - Could not find a 'KafkaServer' in JAAS

I have a standalone Kafka broker that I'm trying to set up SASL_PLAIN authentication on; configurations are below.
My understanding is that with the listener.name... configuration in the server.properties, I shouldn't need the jaas file. But I've experimented with one to see if that might be a better approach.
I have experimented with each of these commands, but both result in the same exception.
sudo bin/kafka-server-start etc/kafka/server.properties
sudo -Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf bin/kafka-server-start etc/kafka/server.properties
The exception displayed is:
Fatal error during KafkaServer startup. Prepare to shutdown... Could
not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the
JAAS configuration. System property 'java.security.auth.login.config'
is not set
server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map: SASL_PLAINTEXT:SASL_PLAINTEXT
listener.name.SASL_PLAINTEXT.plain.sasl.jaas.config:
org.apache.kafka.common.security.plain.PlainLoginModule required /
username="username" /
password="Password" /
user_username="Password";
advertised.listeners=SASL_PLAINTEXT://[ipaddress]:9092
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="username"
password="Password"
user_username="Password";
};
I've spent a day looking at this already - has anyone else had experience with this problem?
You need to export an environment variable, not pass the config in-line to kafka-server-start (or sudo).
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start /path/to/server.properties
Ref. Confluent's sections on Kafka security
Putting my mistakes here for posterity:
Don't run your startup commands straight from the CLI; put them in a .sh file and run from there.
For example, something like this:
zkstart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/zookeeper_jaas.conf"
bin/zookeeper-server-start etc/kafka/zookeeper.properties &

kafkastart:
export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start etc/kafka/server.properties
If you still encounter an error related to the configs, check your *_jaas.conf files to ensure every configuration section named in the error message is present. If they are, the format is likely not quite right: check for the two semicolons in each section, and if that fails, try recreating the file entirely from scratch (or from a copy-paste of the documentation).
Edit:
So, the final solution for me was to add the export ... lines to the beginning of the corresponding kafka-server-start and zookeeper-server-start files. It took me a while before "everything is a file" finally clicked and I realized the script files were the actual basis for the services.
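For illustration, the change amounts to something like this near the top of the kafka-server-start script (the JAAS path here is a placeholder; point it at wherever your file actually lives):
#!/bin/bash
# added so the broker JVM always picks up the JAAS config,
# no matter how the service is launched
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
# ... rest of the original script unchanged ...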

mosquitto.db file does not get created

In the process of testing Mosquitto persistence, I removed mosquitto.db from the persistence location to force a fresh start. But, to my chagrin, the file does not get created even after I restart the broker.
Did I get it wrong that the broker creates the .db file as per the config? Any pointers on how to get a fresh mosquitto.db file would be appreciated.
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
max_inflight_messages 1
persistence true
persistence_file mosquitto.db
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
password_file /etc/mosquitto/passwd
allow_anonymous false
max_queued_messages 1000000
autosave_interval 30
# autosave_on_changes false
If you delete the file while the broker is running, it is likely not to be recreated because the broker will already hold an open file handle.
Deleting a file while it is open by a process does not actually remove the file, just its entry in the directory; the process will continue to read and write the file until the handle is closed.
If you restart mosquitto after deleting the file, it won't write the file until it actually has some data to persist, e.g.:
1. have a subscribed client (at QoS 1 or 2)
2. send some messages
3. disconnect the subscriber
4. send more messages
5. shut down mosquitto
The file should now be written containing the messages that were published while the client was disconnected.
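As a concrete sketch with the stock command-line clients (topic, client id, and credentials are placeholders; -c asks for a persistent session so messages are queued while the subscriber is away):
# 1. subscribe at QoS 1 with a persistent session, then Ctrl-C it after step 2
mosquitto_sub -t test/persistence -q 1 -i sub1 -c -u user -P pass
# 2. and 4. publish a few messages (run again after disconnecting the subscriber)
mosquitto_pub -t test/persistence -q 1 -m "hello" -u user -P pass
# 5. stop the broker; mosquitto.db should now appear in /var/lib/mosquitto/
sudo systemctl stop mosquitto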

IntelliJ IDEA - Disable INFO messages when running a Spark application

I'm getting a flood of messages when running an application that uses the Apache Spark and HBase/Hadoop libraries. For example:
0 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation #org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
How do I disable these, so I just get straight-to-the-point output like println(varABC) only?
What you are seeing is log output produced by Spark through log4j, which by default prints quite a lot to stderr. You can configure it the way you usually configure log4j behavior, e.g. through a log4j.properties configuration file. Refer to http://spark.apache.org/docs/latest/configuration.html#configuring-logging
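Since you are running from IntelliJ IDEA, one way to make the JVM pick up such a file (a sketch; the path is a placeholder) is to add it to the VM options of your run configuration:
-Dlog4j.configuration=file:/path/to/log4j.properties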
In the /spark-2.0.0-bin-hadoop2.6/conf folder you have the file log4j.properties.template.
Rename it from log4j.properties.template to log4j.properties
and make the following change in log4j.properties:
from: log4j.rootCategory=INFO, console
to: log4j.rootCategory=ERROR, console
Hope this helps!
Under the $SPARK_HOME/conf dir, modify the log4j.properties file, changing the INFO values to ERROR as below:
log4j.rootLogger=${root.logger}
root.logger=ERROR,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.spark.repl.Main=WARN
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.spark-project.jetty=WARN
log4j.logger.org.spark-project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ERROR
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ERROR
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
This will disable all INFO log messages and only print ERROR or FATAL log messages. You can change these values according to your requirements.

How can I set the logger level with Quartz Scheduler and Cocoon?

I have a project with an old version of Cocoon. There are two cron jobs.
The project has the following log4j config:
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.conversionPattern=%d %-5p - %-26.26c{1} - %m\n
log4j.rootLogger=WARN,CONSOLE
In the logs folder there is a file cron.log, but it contains some INFO entries. How can I set up the log level for this?
You can try adding the following line to set the log level of the org.quartz package:
log4j.logger.org.quartz=WARN,CONSOLE
BTW, you probably have something else that configures this file appender (cron.log), because by default Quartz (2.x) does not provide such a configuration.
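If that something is a dedicated file appender, the relevant part might look roughly like this (the CRON appender name and file path are assumptions, since that part of your configuration isn't shown):
# hypothetical appender behind cron.log; adjust names and paths to your actual setup
log4j.appender.CRON=org.apache.log4j.FileAppender
log4j.appender.CRON.File=logs/cron.log
log4j.appender.CRON.layout=org.apache.log4j.PatternLayout
log4j.appender.CRON.layout.conversionPattern=%d %-5p - %-26.26c{1} - %m%n
# route Quartz at WARN and above to it, and stop it propagating to the root console
log4j.logger.org.quartz=WARN,CRON
log4j.additivity.org.quartz=false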
Hope it helps!