Logback XML not being picked up in JBoss

I'm trying to implement Logback for an existing EAP 7.2 app.
JBoss EAP 7.2.8.GA (WildFly Core 6.0.27.Final-redhat-00001)
When I run gradle clean build, the log file is created in the proper location and all of the test results are logged. But when I deploy the app, it doesn't use the logback.xml at all and the logs aren't created. Only the server.log is active when the app is deployed, because that's the default JBoss setup.
How do I implement Logback so that the app knows to use it when deployed? I've checked the WAR, and the logback.xml is created under the proper WEB-INF/classes/ directory.
EAR build.gradle
dependencies {
    implementation 'org.slf4j:slf4j-api:1.7.30'
    implementation 'ch.qos.logback:logback-classic:1.2.3'
    implementation 'ch.qos.logback:logback-core:1.2.3'
}
JBoss module exclusions (jboss-deployment-structure.xml)
<exclusions>
    <!-- don't want to integrate with server logging yet -->
    <module name="org.jboss.logging"/>
    <module name="org.slf4j"/>
    <module name="org.slf4j.impl"/>
</exclusions>
server.log
2020-09-25 16:28:05,188 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.187 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'STATE_ACCESS' given to: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
2020-09-25 16:28:05,188 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.188 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is desired by: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
2020-09-25 16:28:05,189 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.188 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' is being obtained: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
2020-09-25 16:28:05,189 INFO [stdout] (QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager) 16:28:05.189 [QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager] DEBUG org.quartz.impl.jdbcjobstore.StdRowLockSemaphore - Lock 'TRIGGER_ACCESS' given to: QuartzScheduler_AppScheduler-<server>11601051265073_ClusterManager
logback.xml
<configuration debug="true" scan="true">
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${jboss.server.log.dir}/logs.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${jboss.server.log.dir}/logs.log.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date %level [username:%X{username}][%thread] %logger [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="org.quartz" level="INFO"/>
    <root level="ALL">
        <appender-ref ref="FILE"/>
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

You should exclude the logging subsystem in a jboss-deployment-structure.xml and also set the org.jboss.logging.provider system property.
You can follow the Red Hat article on configuring Logback with JBoss EAP 7.
Also have a look at the various options for configuring external logging.
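A minimal sketch of what that descriptor might look like, assuming a WAR deployment (the file goes in WEB-INF/, or in META-INF/ at the top level of an EAR; the exact module list can vary by EAP version):

```xml
<!-- WEB-INF/jboss-deployment-structure.xml -->
<jboss-deployment-structure>
    <deployment>
        <!-- stop the server's logging subsystem from managing this deployment -->
        <exclude-subsystems>
            <subsystem name="logging"/>
        </exclude-subsystems>
        <!-- prefer the SLF4J/Logback jars bundled in the deployment -->
        <exclusions>
            <module name="org.jboss.logging"/>
            <module name="org.slf4j"/>
            <module name="org.slf4j.impl"/>
        </exclusions>
    </deployment>
</jboss-deployment-structure>
```

With the logging subsystem excluded, the server no longer overrides the deployment's log configuration; setting -Dorg.jboss.logging.provider=slf4j (for example in standalone.conf) then routes JBoss Logging through SLF4J and on to Logback.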

Related

cannot disable DEBUG level log from org.apache.kafka.common

I am using log4j 1.2.17 with the following configuration:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
log4j.logger.org.apache.kafka=OFF
With the configuration above, I expected not to see DEBUG-level logs from the Kafka 2.4.0 libraries we are using. However, I still see logs like the ones below. I have also tried log4j2 with the same properties file in my application, with the same result. How can we disable DEBUG-level logging from the Kafka client libraries?
06:59:40.995 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency
06:59:40.995 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency
Create a logback.xml with the contents below (in the resources folder):
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="info">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
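The root level of info is what filters out the DEBUG lines. If you want to mirror the original log4j setting and silence the Kafka client entirely, a hypothetical extra logger entry can be added inside the same <configuration> element:

```xml
<!-- optional: equivalent of log4j.logger.org.apache.kafka=OFF -->
<logger name="org.apache.kafka" level="OFF"/>
```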

Logback SMTPAppender not working with gmail configuration

I am trying to set up my logger instances to send mails whenever an error log is generated by the application. I thought of using Gmail for this purpose. My logger.xml is as below:
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/path-to-application/application.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- rollover daily -->
            <fileNamePattern>/path-to-application/application-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <!-- or whenever the file size reaches 250MB -->
                <maxFileSize>250MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date - %thread [%level] - %message%n%xException%n</pattern>
        </encoder>
    </appender>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%date - %thread [%level] - %message%n%xException%n</pattern>
        </encoder>
    </appender>
    <appender name="EMAIL" class="ch.qos.logback.classic.net.SMTPAppender">
        <smtpHost>smtp.gmail.com</smtpHost>
        <smtpPort>465</smtpPort>
        <ssl>true</ssl>
        <username>abc@gmail.com</username>
        <password>abc's password</password>
        <asynchronousSending>false</asynchronousSending>
        <to>xyz@gmail.com</to>
        <!-- additional <to> destinations are possible -->
        <from>abc@gmail.com</from>
        <subject>TESTING: %logger{20} - %m</subject>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%date %-5level %logger{35} - %message%n</pattern>
        </layout>
        <cyclicBufferTracker class="ch.qos.logback.core.spi.CyclicBufferTracker">
            <!-- send just one log entry per email -->
            <bufferSize>1</bufferSize>
        </cyclicBufferTracker>
    </appender>
    <logger name="play" level="WARN"/>
    <logger name="application" level="INFO"/>
    <root level="ERROR">
        <!-- <appender-ref ref="STDOUT" /> -->
        <appender-ref ref="EMAIL"/>
    </root>
</configuration>
I added javax.mail and activation as dependencies in my SBT file. The rolling file appender and console appender are working fine, but the SMTP appender doesn't seem to work with these settings. Error logs are being written to my rolling file but are not being sent as mail. No exceptions are being logged, so I can't investigate further.
Is there something wrong with these settings, or might the problem be on the Gmail SMTP server's end?
I had the same problem, and finally found the cause via debugging.
Logback cannot authenticate you to Gmail if, for instance, you have set up 2-step verification. Try generating an app password for Logback and use it in place of your normal password in the config.
You can follow these instructions:
https://stackoverflow.com/a/25238515/1540818
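Since the SMTPAppender tends to fail silently, it also helps to turn on Logback's internal status output while investigating, a small tweak to the existing file:

```xml
<!-- debug="true" makes Logback print its internal status messages
     (including javax.mail/SMTP errors) to the console at startup -->
<configuration debug="true">
    <!-- ... existing appenders, loggers and root as before ... -->
</configuration>
```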

Change log level for phantom embedded Cassandra

In my Scala project I'm using the phantom-sbt plugin to start embedded Cassandra. The problem is that this plugin is pretty verbose: all Cassandra logs are written to stdout.
I've seen on the phantom GitHub page that they use log4j to configure all loggers, but it doesn't seem to work (at least for me). I've set all loggers in log4j.xml to ERROR, but it has no effect.
How should I change the log level for all Cassandra loggers?
You need a logback-test.xml inside src/test/resources wherever you run the embedded Cassandra. Then you can easily turn off individual loggers or set them to the appropriate levels.
Take this as an example configuration:
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type
             ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="com.datastax.driver.core" level="ERROR"/>
    <logger name="io.netty" level="ERROR"/>
    <logger name="org.cassandraunit" level="ERROR"/>
    <root level="debug">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Changing the log level for a running actor system?

My resources/application.conf looks like the following.
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
}
I'm creating a fat jar using sbt assembly that contains this application.conf, and deploying this jar to run my actors. Can I change the log level (from DEBUG to INFO) of my program at runtime without bringing down my actor system? If yes, how?
My logback.xml looks like the following:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%X{akkaTimestamp} %-5level[%thread] %logger{0} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/myjobs.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>logs/myjobs.%i.log.zip</fileNamePattern>
            <minIndex>1</minIndex>
            <maxIndex>5</maxIndex>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <maxFileSize>100MB</maxFileSize>
        </triggeringPolicy>
        <encoder>
            <pattern>%date{yyyy-MM-dd} %X{akkaTimestamp} %-5level[%thread] %logger{1} - %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="akka" level="INFO" />
    <root level="DEBUG">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
You can set the Akka loglevel using system.eventStream.setLogLevel() (within an Actor you would use context.system as the starting point).
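A minimal sketch using the classic Akka API (assumes akka-actor and akka-slf4j on the classpath; the object and system names here are illustrative):

```scala
import akka.actor.ActorSystem
import akka.event.Logging

object LogLevelSwitch extends App {
  val system = ActorSystem("demo")
  // raise the runtime threshold from DEBUG to INFO without restarting the
  // system; events below InfoLevel stop being published to the event stream
  system.eventStream.setLogLevel(Logging.InfoLevel)
  system.terminate()
}
```

Note that this changes Akka's own threshold; if Logback's logger levels are stricter, the more restrictive of the two wins.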

Set logging level in Akka

I have developed a financial data distribution server with Akka, and I want to set the logging level for the application. The documentation at akka.io is sketchy at best; it says there is no more "logging" in Akka and that logging is now defined through event handlers. There is also an example of event handler configuration, including the logging level:
akka {
  event-handlers = ["akka.event.EventHandler$DefaultListener"]
  event-handler-level = "INFO"
}
I did that, but although akka.conf is successfully loaded, logging still appears to be at the DEBUG level. What could the problem be?
It appears that Akka uses slf4j/logback logging with a default configuration. So the (never documented) solution would be to put, for example, the following logback.xml on your classpath:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="false" debug="false">
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[%4p] [%d{ISO8601}] [%t] %c{1}: %m%n</pattern>
        </encoder>
    </appender>
    <!-- you can also drop it completely -->
    <logger name="se.scalablesolutions" level="DEBUG"/>
    <root level="INFO">
        <appender-ref ref="stdout"/>
    </root>
</configuration>