Suppress exceptions in the WildFly log

I want some Exceptions to not appear in the server's console log.
I tried to set a filter in standalone.xml but the exception still shows up.
<console-handler name="CONSOLE">
    <level name="INFO"/>
    <filter-spec value="not(match(&quot;.*java.lang.RuntimeException.*&quot;))"/>
    <formatter>
        <named-formatter name="COLOR-PATTERN"/>
    </formatter>
</console-handler>
What am I missing?

Unfortunately, at this point messages cannot be filtered by the exception; only the message being logged can be filtered. There is a JIRA to create an exception filter.
If you just don't want to see exceptions on the console, you could remove the %e or %E from the format pattern and then only look for exceptions in server.log.
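For example, assuming your COLOR-PATTERN formatter in standalone.xml is close to the WildFly default (the exact pattern varies by version, so treat this as a sketch), dropping the %e converter would look something like this:
<formatter name="COLOR-PATTERN">
    <!-- Same as the default pattern, but without the trailing %e exception converter -->
    <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%n"/>
</formatter>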

Send email alerts from NLog for specific exceptions

I am working on an NLog configuration which works perfectly for logging to text files. On top of this, I now want to set up a conditional Mail target which should fire only for a specific set of exceptions. For example, if there is a PaymentFailedException or CardExpiredException, then NLog should write to the Mail target.
I have checked the NLog documentation, but I could not find any way to restrict a target to a specific set of exceptions; NLog only seems to allow routing to the Mail target by log level.
You could use a <when> filter for this.
e.g.
<logger name="*" writeTo="myMailTarget">
<filters>
<when condition="not contains('${exception:format=type}', 'PaymentFailedException')" action="Ignore" />
</filters>
</logger>
See filtering log messages and <when> docs
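To cover both exception types from the question, a sketch of a combined condition could look like this (myMailTarget is just a placeholder name):
<logger name="*" writeTo="myMailTarget">
    <filters>
        <!-- Ignore anything whose exception type is neither of the two we care about -->
        <when condition="not (contains('${exception:format=type}', 'PaymentFailedException') or contains('${exception:format=type}', 'CardExpiredException'))" action="Ignore" />
    </filters>
</logger>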

I have a failing case when making an IdP connection using PingFederate

The server.log says "signature_status": "UNVERIFIED". Is this a certificate issue?
Also, what are the best ways to read the PingFederate logs on a Windows machine?
That sounds like an issue with signature verification, which could be the cert itself but is more likely a configuration issue. More information is really needed to know which it is.
I assume the issue you are having with reading logs on Windows machines is that the files are large or moving quickly. If the files are too big, you can modify the log4j2.xml config file at appdir/pingfed*/pingfed*/server/default/conf/log4j2.xml to reduce the log size to something easier to read in Notepad. Here is an example rolling file appender that should leave you with easily manageable files.
<RollingFile name="FILE" fileName="${sys:pf.log.dir}/server.log"
             filePattern="${sys:pf.log.dir}/server.log.%i" ignoreExceptions="false">
    <PatternLayout>
        <!-- Uncomment this if you want to use UTF-8 encoding instead
             of the system's default encoding.
        <charset>UTF-8</charset> -->
        <pattern>%d %X{trackingid} %-5p [%c] %m%n</pattern>
    </PatternLayout>
    <Policies>
        <SizeBasedTriggeringPolicy size="20000 KB"/>
    </Policies>
    <DefaultRolloverStrategy max="5"/>
</RollingFile>
If your issue is that the files are moving too fast to read, then you might consider using something like BareTail, or Get-Content in PowerShell now that it has a -Tail switch.
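For example, to follow the log from PowerShell (the path below is only illustrative; adjust it to your install):
# Show the last 50 lines and keep following the file as new lines are written
Get-Content -Path "C:\pingfederate\log\server.log" -Tail 50 -Wait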

AMQPConnector on Mule - SocketException: Too many open files

I've been working with a Mule application and the AMQPConnector for a long time.
A few days ago I saw the error "Too many open files"; the application crashed and could not send requests until I restarted it.
The Mule code looks something like this (critical parts):
<amqp:connector name="AMQPConnector" validateConnections="true"
doc:name="AMQPConnector" host="x.x.x.x" port="5672"
password="xxxxx" username="xxx">
<reconnect-forever frequency="1000" blocking="false" />
From flows:
<flow name="first">
<http:listener config-ref="HTTP_Listener_Configuration" path="order/submit" doc:name="HTTP"/>
<byte-array-to-object-transformer doc:name="Byte Array to Object"/>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<json:json-to-object-transformer returnClass="java.lang.Object" doc:name="JSON to Object"/>
<set-session-variable value="#[payload]" variableName="order" doc:name="Session Variable" />
<set-payload doc:name="order" value="#[payload.info]" />
<amqp:outbound-endpoint queueName="xxxx"
responseTimeout="100000" exchange-pattern="request-response"
connector-ref="AMQPConnector" doc:name="XXXX" />
<object-to-string-transformer doc:name="Object to String"/>
<logger message="response from queue #[payload]"
level="INFO" doc:name="Logger" />
</flow>
<flow name="second">
<amqp:inbound-endpoint queueName="xxxx"
responseTimeout="10000" exchange-pattern="request-response"
connector-ref="AMQPConnector" doc:name="AMQP-0-9" />
<byte-array-to-object-transformer doc:name="Byte Array to Object" />
<json:object-to-json-transformer doc:name="Object to JSON" />
<http:request config-ref="HTTP_Request_XXXX" host="#[sessionVars['partUrl']]" path="#[sessionVars['path']]" method="#[sessionVars['method']]" doc:name="HTTP">
<http:request-builder>
<http:header headerName="#[sessionVars['key']]" value="#[sessionVars['value']]"/>
</http:request-builder>
</http:request>
</flow>
The exception in the error log is:
2016-11-27 04:30:12,272 [amqpReceiver.1018] ERROR org.mule.exception.CatchMessagingExceptionStrategy -
Message : Error sending HTTP request. Message payload is of type: String
Code : MULE_ERROR--2
Exception stack is:
1. Too many open files (java.net.SocketException) sun.nio.ch.Net:-2 (null)
2. java.net.SocketException: Too many open files (java.util.concurrent.ExecutionException)
org.glassfish.grizzly.impl.SafeFutureImpl$Sync:349 (null)
3. java.util.concurrent.ExecutionException: java.net.SocketException: Too many open files (java.io.IOException)
org.mule.module.http.internal.request.grizzly.GrizzlyHttpClient:223 (null)
4. Error sending HTTP request. Message payload is of type: String (org.mule.api.MessagingException)
org.mule.module.http.internal.request.DefaultHttpRequester:287 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html)
Root Exception stack trace:
java.net.SocketException: Too many open files
at sun.nio.ch.Net.socket0(Native Method)
at sun.nio.ch.Net.socket(Net.java:411)
at sun.nio.ch.Net.socket(Net.java:404)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
The request was accepted by the http-listener and then the system stopped.
Maybe I am missing some configuration, or need to close a connection somewhere.
(I saw similar questions with solutions that increase the system limit or add code in a Java class, etc. That did not help me; I don't have any Java classes.)
Can someone help me?
This is an issue with Anypoint Studio. Unfortunately your operating system can only open so many files at once; the limit varies by OS, but I experienced this same issue on both Ubuntu and macOS using recent Anypoint Studio versions (5 and 6).
The best thing I can suggest is to restart and to keep the number of open files to a minimum at any one point while using Anypoint. When I'm only working on 4-5 files at a time I can usually go an entire day without needing to restart.
With heavy usage I end up needing to restart within 5 hours.
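If the root cause turns out to be the OS per-process file-descriptor limit rather than Studio itself (the question mentions "increase system" style fixes), a quick check and session-level workaround on Linux/macOS looks roughly like this (the value is only illustrative):
# Show the current per-process open-file limit
ulimit -n
# Raise it for the current shell session before starting Mule / Anypoint Studio
ulimit -n 65536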

How to configure message format to syslog in WildFly

When using JBoss 5.1 with this appender:
<appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
    <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
    <param name="Facility" value="LOCAL7"/>
    <param name="FacilityPrinting" value="true"/>
    <param name="SyslogHost" value="localhost"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="[%d{ABSOLUTE},%c{1}] %m%n"/>
    </layout>
</appender>
I see the following message for a log record (in EventLog Analyzer):
local7:[13:32:45,763,SendingPool] Sending pool task executed. Pool size is [0
In WildFly 8.2.1 I have the following configuration for the handler:
<syslog-handler name="SYSLOG">
<level name="DEBUG"/>
<server-address value="localhost"/>
<facility value="local-use-7"/>
</syslog-handler>
Message:
1 2016-07-08T13:30:34.943+03:00 - java 910 com.mycompany.component.p - Sending pool task executed. Pool size is [0
How can I change the message format for syslog?
Unfortunately there isn't a way to format the message using the syslog-handler. This was an oversight when it was created, and there is a long-standing JIRA to fix it.
However, you can use a custom-handler with a formatter.
/subsystem=logging/pattern-formatter=syslog-formatter:add(pattern="local7: [%d{hh:mm:ss,SSS},%c{1}] %s")
/subsystem=logging/custom-handler=syslog:add(class=org.jboss.logmanager.handlers.SyslogHandler, module=org.jboss.logmanager, named-formatter=syslog-formatter, properties={hostname="localhost", facility="LOCAL_USE_7", protocol="TCP", useCountingFraming=true})
/subsystem=logging/root-logger=ROOT:add-handler(name=syslog)
Note that if you want the local7: prefix to be printed, it needs to be part of the format. There is no way to prepend the facility name to the message.
Looking at:
1 2016-07-08T13:30:34.943+03:00 - java 910 com.mycompany.component.p - Sending pool task executed. Pool size is [0
That looks like the raw message minus the priority. By default the syslog handler uses the RFC 5424 format. If you want to use RFC 3164, add syslogType=RFC3164 to the properties attribute in the above custom-handler add operation and remove the useCountingFraming=true. Note that the useCountingFraming=true may need to be removed anyway; it depends on your syslog server setup.
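Putting that together, the add operation for RFC 3164 would look roughly like this (the same command as above with syslogType added and useCountingFraming dropped):
/subsystem=logging/custom-handler=syslog:add(class=org.jboss.logmanager.handlers.SyslogHandler, module=org.jboss.logmanager, named-formatter=syslog-formatter, properties={hostname="localhost", facility="LOCAL_USE_7", protocol="TCP", syslogType="RFC3164"})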

Mule Persistent storage

I am trying to use persistent queue storage to recover from an unexpected failure. My Mule version is 3.3.1.
I pick up messages from a queue and enter an "until successful" loop. If Mule stops for some reason, I would like the message to be persisted.
Here is my relevant code:
<spring:bean id="outboundStore" class="org.mule.util.store.QueuePersistenceObjectStore" />
<until-successful objectStore-ref="outboundStore"
I do not see the messages in the .mule directory. What am I doing wrong?
Sorry if the question is not clear.
Adding the flows as requested:
<flow name="InitialFlow" processingStrategy="synchronous">
<inbound-endpoint ref="firstQueue"/>
<until-successful objectStore-ref="outboundStore" maxRetries="6" secondsBetweenRetries="5" deadLetterQueue-ref="secondQueue" failureExpression="groovy:message.getInvocationProperty('soapResponse') == 'BAD'">
<flow-ref name="someSubFlow" />
</until-successful>
</flow>
<sub-flow name="someSubFlow">
<http:outbound-endpoint ref="someEndpoint" exchange-pattern="request-response" method="GET" />
</sub-flow>
Please do let me know if you need more information.
Using a configuration very similar to yours, I can totally see messages pending delivery written in the .mule/queuestore/queuestore directory.
The only thing I can think of is an issue with this expression groovy:message.getInvocationProperty('soapResponse') == 'BAD' that would somehow mess with the processing.
Is this expression correct? Why not use MEL?
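For example, assuming soapResponse is set as an invocation (flow) variable, the MEL equivalent of that failure expression would be something like:
failureExpression="#[flowVars['soapResponse'] == 'BAD']"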