JBoss EAP 6.2 and Log4j2: stops writing logs after some time

I am using Log4j2 (RollingFile with routes) in my web application to write application-specific logs to a few separate log files. The log4j2.xml file is bundled within the WAR file.
The log files are created and logging works fine to start with. After some time, Log4j2 stops writing to the existing files and also fails to create new folders/files.
After a restart everything works again, but only for a while.
I have tried monitoring the application but couldn't identify any specific pattern or steps to reproduce the problem.
<Configuration status="error" name="logger">
    <Properties>
        <Property name="logpath">path_to_log_file</Property>
    </Properties>
    <Appenders>
        <Routing name="RoutingUserLogFile">
            <Routes pattern="$${ctx:user}/">
                <Route>
                    <RollingFile name="UserLogFile" fileName="${logpath}/${ctx:user}/MyLogFile.log"
                                 filePattern="${logpath}/${ctx:user}/%d{dd-MM-yyyy}-MyLogFile-%i.log.gz">
                        <PatternLayout>
                            <Pattern>%d %p %-40C{1.} %m%n</Pattern>
                        </PatternLayout>
                        <Policies>
                            <TimeBasedTriggeringPolicy interval="1" modulate="true" />
                            <SizeBasedTriggeringPolicy size="4 MB" />
                        </Policies>
                    </RollingFile>
                </Route>
            </Routes>
        </Routing>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="RoutingUserLogFile" level="debug" />
        </Root>
    </Loggers>
</Configuration>
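For the routing to work, each logging thread needs the "user" key in Log4j2's ThreadContext, since both the Routes pattern and the fileName resolve ${ctx:user} from it. A minimal sketch of how that key might be populated (the class, method, and the way the user id is obtained are assumptions, not part of the original setup):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class UserLogging {
    private static final Logger LOG = LogManager.getLogger(UserLogging.class);

    // Hypothetical per-request entry point: the "user" key must be set on the
    // same thread that logs, because the Routing appender resolves ${ctx:user}
    // from the ThreadContext of the logging thread.
    public static void handleRequestFor(String userId) {
        ThreadContext.put("user", userId);   // matches the $${ctx:user} Routes pattern
        try {
            LOG.debug("handling request");   // ends up in <logpath>/<userId>/MyLogFile.log
        } finally {
            ThreadContext.remove("user");    // pooled threads would otherwise keep the old value
        }
    }
}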

Related

Duplicate events in Splunk with Kafka and log4j2

We are using log4j2 with a KafkaAppender to send logs to a topic that is consumed by Splunk.
The Kafka topic has 5 partitions and 24-hour retention. All of our services (~14) send log events to the same topic.
Below is our log4j2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="debug">
    <Appenders>
        <Console name="consolelog" target="SYSTEM_OUT">
            <PatternLayout pattern="" charset="UTF-8"/>
        </Console>
        <Kafka name="kafka" topic="topic1" ignoreExceptions="false" syncSend="false">
            <PatternLayout pattern="" charset="UTF-8"/>
            <Property name="bootstrap.servers">hostname:port</Property>
            <Property name="security.protocol">SSL</Property>
            <Property name="ssl.truststore.location">abc.jks</Property>
            <Property name="ssl.truststore.password">abcd</Property>
            <Property name="ssl.keystore.location">def.jks</Property>
            <Property name="ssl.keystore.password">defg</Property>
            <Property name="ssl.key.password">defg</Property>
        </Kafka>
        <Async name="Async">
            <AppenderRef ref="kafka"/>
        </Async>
        <Async name="console-log">
            <AppenderRef ref="consolelog"/>
        </Async>
    </Appenders>
    <Loggers>
        <AsyncLogger name="com.abc" level="debug" additivity="false">
            <AppenderRef ref="Async"/>
        </AsyncLogger>
        <AsyncLogger name="org.springframework" level="debug" additivity="false">
            <AppenderRef ref="Async" level="ERROR"/>
            <AppenderRef ref="console-log" level="ERROR"/>
        </AsyncLogger>
        <AsyncLogger name="com.def" level="warn" additivity="false">
            <AppenderRef ref="Async"/>
        </AsyncLogger>
        <Root level="info">
            <AppenderRef ref="Async"/>
        </Root>
        <Logger name="org.apache.kafka" level="WARN"/>
    </Loggers>
</Configuration>
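For reference, a sketch of how these logger declarations resolve in application code (the class names are made up; only the package prefixes and levels come from the configuration above):

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoggerResolutionDemo {
    public static void main(String[] args) {
        // Matches the "com.abc" AsyncLogger: debug is enabled, events go only to Async -> Kafka
        // (additivity="false", so nothing is also sent to the root logger).
        Logger abc = LogManager.getLogger("com.abc.SomeService");
        abc.debug("goes to the kafka appender");

        // Matches the "com.def" AsyncLogger: debug and info are filtered out, warn and above pass.
        Logger def = LogManager.getLogger("com.def.OtherService");
        def.info("dropped");
        def.warn("goes to the kafka appender");

        // No matching logger: handled by the root logger at level info.
        Logger other = LogManager.getLogger("com.xyz.Elsewhere");
        other.info("goes to the kafka appender via the root logger");
    }
}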
We are seeing duplicate events in Splunk.
The Splunk team has told us that they see the same event in different partitions with different offsets.
Is it possible that the above log4j2 configuration is causing this issue?
How should I troubleshoot this further? Has anyone faced such an issue with a similar setup, and what was the root cause and resolution in your case?

Kubernetes log location inside the pod

I have a Docker image for a Spring Boot app that is started with --logging.config=/conf/logs/logback.xml, and the Logback configuration is shown below.
I am able to get the logs with
kubectl logs POD_NAME
but I am unable to find a log file when I log in to the pod. Is there any default location where a log file is placed, given that I haven't set a file location in the logback.xml?
Logback file:
<?xml version="1.0" ?>
<configuration>
    <property name="server.encoder.pattern"
        value="%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} %-5level : loggerName=&quot;%logger{36}&quot; threadName=&quot;%thread&quot; txnId=&quot;%X{txnId}&quot; %msg%n" />
    <property name="metrics.encoder.pattern"
        value="%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} %-5level : %msg%n" />
    <!-- Enable LevelChangePropagator for jul-to-slf4j optimization -->
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator" />
    <appender name="METRICS" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${metrics.encoder.pattern}</pattern>
        </encoder>
    </appender>
    <logger name="appengAluminumMetricsLogger" additivity="false">
        <appender-ref ref="METRICS" />
    </logger>
    <appender name="SERVER" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${server.encoder.pattern}</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="SERVER" />
    </root>
</configuration>
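As a side note, the METRICS appender is only reached through the named logger, so code has to ask for that logger explicitly. A small sketch (the class and messages are illustrative; only the logger name comes from the file above):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MetricsEmitter {
    // Named logger from logback.xml: additivity="false", so these events only reach METRICS.
    private static final Logger METRICS =
            LoggerFactory.getLogger("appengAluminumMetricsLogger");

    // Everything else uses the normal per-class logger and ends up on the SERVER appender.
    private static final Logger LOG = LoggerFactory.getLogger(MetricsEmitter.class);

    public void report(long elapsedMillis) {
        METRICS.info("request.elapsed={}", elapsedMillis); // formatted with metrics.encoder.pattern
        LOG.info("request handled");                       // formatted with server.encoder.pattern
    }
}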
What you see from kubectl logs is the console output of your service; only console output is exposed that way, via Docker's logging support. Since both appenders in this configuration are ConsoleAppenders, nothing is ever written to a file inside the pod.

Log4J2 : How to Pass Application Name in TCPSocketServer implementation from client to server

I am running a Log4J2 TCPSocketServer on an edge node in a cluster. All the data nodes send log events to the TCPSocketServer on the edge node and also log locally on the data node, using the log4j2.xml configuration shown below. The application name is stored as a system property and is accessible in the data node's (client's) log4j2.xml via ${sys:ABC.appname}. How can I send the same appname to the edge node where the TCPSocketServer is running, using log4j2.xml? I would like to use the application name in log4j2-server.xml to log events into separate log files, just as I do locally on the data node.
Sample snippet from the data node (client) log4j2.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" name="MyApp">
    <Appenders>
        <Socket name="socket" host="localhost" port="12345">
            <SerializedLayout />
        </Socket>
        <File name="MyFile" fileName="/var/log/${sys:ABC.appname}.log">
            <PatternLayout>
                <Pattern>%d{ISO8601} %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
        </File>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="socket"/>
            <AppenderRef ref="MyFile"/>
        </Root>
    </Loggers>
</Configuration>
Sample snippet from the edge node (server) log4j2-server.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <File name="MyFile" fileName="/var/log/data/${hostName}-<**This is where I would like to see the appname from data node**>.log">
            <PatternLayout>
                <Pattern>%d{ISO8601} %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
        </File>
        <Async name="AsyncFile">
            <AppenderRef ref="MyFile" />
        </Async>
    </Appenders>
    <Loggers>
        <Root level="WARN">
            <AppenderRef ref="AsyncFile"/>
        </Root>
    </Loggers>
</Configuration>
I used ThreadContext to resolve this. It's pretty easy to add a ThreadContext entry to the codebase and then use a Routing appender to segregate the log events based on the ThreadContext value. I followed the example at https://logging.apache.org/log4j/2.x/faq.html#separate_log_files to do the same.
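A minimal sketch of that approach on the client (the context key name "appname" is my choice; the ABC.appname system property is the one from the question). The ThreadContext map is carried inside each serialized LogEvent, so the value set on the data node is visible to the server configuration:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class ClientLogging {
    private static final Logger LOG = LogManager.getLogger(ClientLogging.class);

    public static void main(String[] args) {
        // Copy the system property into the ThreadContext (once per thread that logs).
        ThreadContext.put("appname", System.getProperty("ABC.appname"));
        LOG.info("application started");  // the context map travels with the event to the socket server
    }
}

On the server side, log4j2-server.xml can then wrap MyFile in a Routing appender whose Routes pattern is $${ctx:appname} and use ${ctx:appname} in the fileName, as in the FAQ example linked above.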

Chainsaw v2 SocketReceiver not working with log4j2 SocketAppender

I'm trying to use Chainsaw v2 from http://people.apache.org/~sdeboy
I don't want to use zero configuration, just a simple SocketAppender/SocketReceiver combo.
I'm using log4j2 with the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="CONSOLE" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
        </Console>
        <Socket name="SharathZeroConf" host="localhost" port="4445" />
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="SharathZeroConf" />
            <AppenderRef ref="CONSOLE" />
        </Root>
    </Loggers>
</Configuration>
In Chainsaw, I'm selecting the option "Receive events from network" with port 4445.
However, Chainsaw doesn't log anything.
I've verified that the appender configuration is correct on the log4j side by using the built-in socket server:
java -cp "$HOME/.m2/repository/org/apache/logging/log4j/log4j-api/2.0.2/log4j-api-2.0.2.jar:$HOME/.m2/repository/org/apache/logging/log4j/log4j-core/2.0.2/log4j-core-2.0.2.jar" org.apache.logging.log4j.core.net.server.TcpSocketServer 4445
So the bug must be on the Chainsaw side. Any pointers, #Scott?
You're right, I got the same issue. I tried LogMX instead, and it works like a charm:
I just had to copy the Log4j JARs into LogMX's lib/ directory (i.e. log4j-api-2.xx.jar and log4j-core-2.xx.jar).

dotnetopenauth Provider WebConfig Error

I am required to create a provider using DotNetOpenAuth (DNOA). I have downloaded the libraries from the DNOA site and attempted to load the OAuthServiceProvider example. I couldn't load it because it is looking for
\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets
which is part of Visual Studio 2010; I, however, am on 2008. Despite changing the version to v9.0 it still looked for v10.0. Never mind.
I opened the project as a website and tried to run it, but now I get the error "Error 1 Unrecognized configuration section uri."
Any ideas as to what is going on here? Below is the web.config:
<?xml version="1.0"?>
<!-- The uri section is necessary to turn on .NET 3.5 support for IDN (international domain names),
     which is necessary for OpenID urls with unicode characters in the domain/host name.
     It is also required to put the Uri class into RFC 3986 escaping mode, which OpenID and OAuth require. -->
<uri>
    <idn enabled="All"/>
    <iriParsing enabled="true"/>
</uri>
<system.net>
    <defaultProxy enabled="true" />
    <settings>
        <!-- This setting causes .NET to check certificate revocation lists (CRL)
             before trusting HTTPS certificates. But this setting tends to not
             be allowed in shared hosting environments. -->
        <!--<servicePointManager checkCertificateRevocationList="true"/>-->
    </settings>
</system.net>
<!-- this is an optional configuration section where aspects of dotnetopenauth can be customized -->
<dotNetOpenAuth>
    <!-- Allow DotNetOpenAuth to publish usage statistics to library authors to improve the library. -->
    <reporting enabled="true" />
    <messaging>
        <untrustedWebRequest>
            <whitelistHosts>
                <add name="localhost"/>
            </whitelistHosts>
        </untrustedWebRequest>
    </messaging>
</dotNetOpenAuth>
<appSettings/>
<connectionStrings/>
<system.web>
    <!--
        Set compilation debug="true" to insert debugging
        symbols into the compiled page. Because this
        affects performance, set this value to true only
        during development.
    -->
    <compilation debug="true" targetFramework="4.0">
        <assemblies>
            <remove assembly="DotNetOpenAuth.Contracts"/>
            <add assembly="System.Data.Linq, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        </assemblies>
    </compilation>
    <authentication mode="Forms">
        <forms name="oauthSP" />
    </authentication>
    <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID"/>
</system.web>
<!--
    The system.webServer section is required for running ASP.NET AJAX under Internet
    Information Services 7.0. It is not necessary for previous version of IIS.
-->
<log4net>
    <appender name="TracePageAppender" type="OAuthServiceProvider.Code.TracePageAppender, OAuthServiceProvider">
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date (GMT%date{%z}) [%thread] %-5level %logger - %message%newline"/>
        </layout>
    </appender>
    <!-- Setup the root category, add the appenders and set the default level -->
    <root>
        <level value="INFO"/>
        <!--<appender-ref ref="RollingFileAppender" />-->
        <appender-ref ref="TracePageAppender"/>
    </root>
    <!-- Specify the level for some specific categories -->
    <logger name="DotNetOpenAuth">
        <level value="ALL"/>
    </logger>
</log4net>
<system.serviceModel>
    <behaviors>
        <serviceBehaviors>
            <behavior name="DataApiBehavior">
                <serviceMetadata httpGetEnabled="true"/>
                <serviceDebug includeExceptionDetailInFaults="true"/>
                <serviceAuthorization serviceAuthorizationManagerType="OAuthServiceProvider.Code.OAuthAuthorizationManager, OAuthServiceProvider" principalPermissionMode="Custom"/>
            </behavior>
        </serviceBehaviors>
    </behaviors>
    <services>
        <service behaviorConfiguration="DataApiBehavior" name="OAuthServiceProvider.DataApi">
            <endpoint address="" binding="wsHttpBinding" contract="OAuthServiceProvider.Code.IDataApi">
                <identity>
                    <dns value="localhost"/>
                </identity>
            </endpoint>
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
    </services>
</system.serviceModel>
<system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
</system.webServer>