I am using log4j 1.2.17 with the following configuration:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
log4j.logger.org.apache.kafka=OFF
With the configuration above, I expected not to see DEBUG-level logs from the Kafka 2.4.0 libraries we are using. However, I still see logs like the ones below. I have also tried log4j2 with the same properties file in my application, with the same result. How can I disable DEBUG-level logging from the Kafka client libraries?
06:59:40.995 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency
06:59:40.995 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency
The Kafka client libraries log through SLF4J, and the format of the lines you posted is Logback's default console output, which suggests Logback (not log4j) is the binding actually active on your classpath, so your log4j.properties is never consulted. Create a logback.xml with the contents below (in the resources folder):
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>[%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<root level="info">
<appender-ref ref="STDOUT" />
</root>
</configuration>
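With the root logger at info, the Kafka DEBUG output is already suppressed. If you want the Kafka client quiet even when you later lower the root level, an explicit logger element can be added to the same file, using the logger prefix visible in your own output:
<logger name="org.apache.kafka" level="WARN"/>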
I'm trying to implement Logback for an existing EAP 7.2 app.
JBoss EAP 7.2.8.GA (WildFly Core 6.0.27.Final-redhat-00001)
When I run gradle clean build, it creates the log in the expected location and logs all of the test results. But when I deploy the app, it doesn't use the logback.xml at all and the logs aren't created; only server.log is active, because that's the default JBoss setup.
How do I get the app to use Logback when it is deployed? I've checked the WAR, and logback.xml is created under the proper WEB-INF/classes/ directory.
EAR build.gradle
dependencies {
implementation 'org.slf4j:slf4j-api:1.7.30'
implementation 'ch.qos.logback:logback-classic:1.2.3'
implementation 'ch.qos.logback:logback-core:1.2.3'
}
jboss exclusions
<exclusions>
<!-- don't want to integrate with server logging yet -->
<module name="org.jboss.logging"/>
<module name="org.slf4j"/>
<module name="org.slf4j.impl"/>
</exclusions>
server.log
2020-09-25 16:28:05,188 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.187 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'STATE_ACCESS' given to: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
2020-09-25 16:28:05,188 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.188 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is desired by: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
2020-09-25 16:28:05,189 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.188 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is being obtained: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
2020-09-25 16:28:05,189 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.189 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' given to: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
logback.xml
<configuration debug="true" scan="true">
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
</pattern>
</encoder>
</appender>
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${jboss.server.log.dir}/logs.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${jboss.server.log.dir}/logs.log.%d{yyyy-MM-dd}.gz</FileNamePattern>
<maxHistory>10</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [username:%X{username}][%thread] %logger [%file:%line] %msg%n
</pattern>
</encoder>
</appender>
<logger name="org.quartz" level="INFO"/>
<root level="ALL">
<appender-ref ref="FILE"/>
<appender-ref ref="STDOUT"/>
</root>
</configuration>
You should exclude the logging subsystem in jboss-deployment-structure.xml and also set the org.jboss.logging.provider system property.
You can follow this Red Hat article on the steps to configure Logback with JBoss EAP 7.
Also have a look at the various options for configuring external logging.
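For reference, a minimal sketch of the deployment descriptor (placed in WEB-INF for a WAR, META-INF for an EAR) could look like the following; the system property is typically passed to the server as -Dorg.jboss.logging.provider=slf4j:
<jboss-deployment-structure>
    <deployment>
        <!-- keep the server's logging subsystem away from this deployment
             so the bundled Logback is used instead -->
        <exclude-subsystems>
            <subsystem name="logging"/>
        </exclude-subsystems>
    </deployment>
</jboss-deployment-structure>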
I am using the library scalikejdbc in my application, and when I read the logs that scalikejdbc itself generates, I see something like this:
06:02:16.891 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:sqlserver://foo.bar:8080;databaseName=foobar;user=user;password=**PASSWORD**;...
So scalike itself writes my database user's password into the logs, which is inappropriate.
I was thinking about filtering those log entries with something like https://logback.qos.ch/manual/filters.html , but I do need the other information in those scalike logs, so I cannot filter them out entirely.
What I have now:
06:02:16.891 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:sqlserver://foo.bar:8080;databaseName=foobar;user=user;password=somepassword;
What I am trying to get:
06:02:16.891 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:sqlserver://foo.bar:8080;databaseName=foobar;user=user;password=CENSORED;
Consider using the replace conversion word in your logback.xml configuration, for example:
%replace(%msg){"password=.*", "password=CENSORED"}
which given
logger.debug("password=123456")
should output something like
[debug] application - password=CENSORED
The docs state:
replace(p){r, t} Replaces occurrences of 'r', a regex, with its replacement 't' in the string produced by the sub-pattern 'p'. For example, "%replace(%msg){'\s', ''}" will remove all spaces contained in the event message.
Here is an example of my logback.xml:
<configuration>
<conversionRule conversionWord="coloredLevel" converterClass="play.api.libs.logback.ColoredLevel" />
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%coloredLevel %logger{15} - %replace(%msg){"password=.*", "password=CENSORED"}%n %xException{10}</pattern>
</encoder>
</appender>
<logger name="play" level="INFO" />
<logger name="application" level="DEBUG" />
<logger name="com.gargoylesoftware.htmlunit.javascript" level="OFF" />
<root level="WARN">
<appender-ref ref="STDOUT" />
</root>
</configuration>
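One caveat: password=.* is greedy, so everything after password= up to the end of the message gets replaced. If you want to keep the rest of the connection string, a narrower regex should work, assuming the password itself cannot contain a semicolon:
%replace(%msg){"password=[^;]*", "password=CENSORED"}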
I have a Docker image for a Spring Boot app that is started with --logging.config=/conf/logs/logback.xml; the Logback file is shown below.
I am able to get the logs with
kubectl logs POD_NAME
but I am unable to find the log file when I log in to the pod. Is there a default location where the log file is placed, given that I haven't specified a logging location in the logback.xml file?
Logback file:
<?xml version="1.0" ?>
<configuration>
<property name="server.encoder.pattern"
value="%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} %-5level : loggerName="%logger{36}" threadName="%thread" txnId="%X{txnId}" %msg%n" />
<property name="metrics.encoder.pattern"
value="%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} %-5level : %msg%n" />
<!-- Enable LevelChangePropagator for jul-to-slf4j optimization -->
<contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator" />
<appender name="METRICS" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>${metrics.encoder.pattern}</pattern>
</encoder>
</appender>
<logger name="appengAluminumMetricsLogger" additivity="false">
<appender-ref ref="METRICS" />
</logger>
<appender name="SERVER" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>${server.encoder.pattern}</pattern>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="SERVER" />
</root>
</configuration>
What you see from kubectl logs is the console output of your service; only console output is exposed that way (it comes through Docker's logging support). Your configuration defines only ConsoleAppenders, so no log file is written inside the pod.
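If you do want a log file inside the container as well, a file appender has to be added explicitly. A minimal sketch reusing your server pattern (the path /conf/logs/app.log is only an assumption; use any directory that is writable in your container):
<appender name="SERVER_FILE" class="ch.qos.logback.core.FileAppender">
    <!-- assumed path inside the container; adjust to a writable location -->
    <file>/conf/logs/app.log</file>
    <encoder>
        <pattern>${server.encoder.pattern}</pattern>
    </encoder>
</appender>
Then reference it from the root logger with an additional <appender-ref ref="SERVER_FILE"/>.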
I am trying to setup my logger instances to send mails whenever an error log is generated by the application. I thought of using gmail for this purpose. My logger.xml is as below:
<configuration>
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/path-to-application/application.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- rollover daily -->
<fileNamePattern>/path-to-application/application-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<!-- or whenever the file size reaches 250MB -->
<maxFileSize>250MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date - %thread [%level] - %message%n%xException%n</pattern>
</encoder>
</appender>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%date - %thread [%level] - %message%n%xException%n</pattern>
</encoder>
</appender>
<appender name="EMAIL" class="ch.qos.logback.classic.net.SMTPAppender">
<smtpHost>smtp.gmail.com</smtpHost>
<smtpPort>465</smtpPort>
<ssl>true</ssl>
<username>abc@gmail.com</username>
<password>abc's password</password>
<asynchronousSending>false</asynchronousSending>
<to>xyz@gmail.com</to>
<!-- <to>ANOTHER_EMAIL_DESTINATION</to> additional destinations are possible -->
<from>abc@gmail.com</from>
<subject>TESTING: %logger{20} - %m</subject>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%date %-5level %logger{35} - %message%n</pattern>
</layout>
<cyclicBufferTracker class="ch.qos.logback.core.spi.CyclicBufferTracker">
<!-- send just one log entry per email -->
<bufferSize>1</bufferSize>
</cyclicBufferTracker>
</appender>
<logger name="play" level="WARN"/>
<logger name="application" level="INFO"/>
<root level="ERROR">
<!-- <appender-ref ref="STDOUT" /> -->
<appender-ref ref="EMAIL"/>
</root>
</configuration>
I added javax.mail and activation as dependencies in my SBT file. The rolling file appender and console appender are working fine, but the SMTP appender does not seem to work with these settings. Error logs are written to my rolling file but are not being sent as mail, and no exceptions are logged that I could use to investigate further.
Is there something wrong with these settings, or could the problem be on the Gmail SMTP server's end?
I had the same problem, and finally figured it out via debugging.
Logback cannot authenticate you to Gmail if you have set up 2-step verification, for instance. Try generating an app password for Logback and use it in place of your normal password in the config.
You can follow these instructions:
https://stackoverflow.com/a/25238515/1540818
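Independently of the Gmail settings, note that appender failures such as SMTP authentication errors are reported through Logback's internal status system rather than through the loggers, which is why nothing shows up in your log file. Adding a status listener inside <configuration> (or setting debug="true" on the configuration element) makes those errors visible on the console:
<statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener"/>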
In my Scala project I'm using the phantom-sbt plugin to start an embedded Cassandra. The problem is that this plugin is pretty verbose: all of Cassandra's logs are written to stdout.
I've seen on the phantom GitHub page that they use log4j to configure all loggers, but it doesn't seem to work (at least for me). I've set all loggers in log4j.xml to ERROR, and it has no effect.
How do I change the log level for all Cassandra loggers?
You need a logback-test.xml inside /src/test/resources wherever you are running the embedded Cassandra. Then you can easily turn off individual loggers or set them to the appropriate levels.
Take this as an example configuration:
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<!-- encoders are assigned the type
ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="STDOUT" />
</root>
<logger name="com.datastax.driver.core" level="ERROR"/>
<logger name="io.netty" level="ERROR"/>
<logger name="org.cassandraunit" level="ERROR"/>