Use log4j to log messages to the Liberty console - Kubernetes

Our log server consumes our log messages from the stdout of our Kubernetes pods, expects them formatted as JSON, and indexes the JSON fields.
We need to include some predefined fields in each message so that we can track transactions across pods.
For one of our pods we use the Liberty profile, and we are having trouble configuring its logging for these needs.
One idea was to use log4j to send customized JSON messages to the console. But all messages get corrupted by Liberty's logging system, which intercepts and modifies everything written to the console. I failed to configure Liberty's logging parameters (copySystemStreams = false, console log level = NO) for my needs, and Liberty always modifies my output and interleaves non-JSON messages.
To work around all that I used Liberty's consoleFormat="json" logging parameter, but this introduces unnecessary fields and also does not let me specify my custom fields.
Is it possible to control Liberty's logging and console output?
What is the best way to implement my use case with Liberty (and, if possible, log4j)?

As you mentioned, Liberty can log to the console in JSON format [1]. The two problems you mentioned with that, for your use case, are 1) unnecessary fields, and 2) no way to specify your custom fields.
Regarding unnecessary fields: Liberty has a fixed set of fields in its JSON schema, which you cannot customize. If you find you don't want some of the fields, I can think of a few options:
use Logstash.
Some log handling tools, like Logstash, allow you to remove [2] or mutate [3] fields. If you are sending your logs to Logstash you could adjust the JSON to your needs that way.
change the JSON format Liberty sends to stdout using jq.
The default CMD (from the websphere-liberty:kernel Dockerfile) is:
CMD ["/opt/ibm/wlp/bin/server", "run", "defaultServer"]
You can add your own CMD to your Dockerfile to override that as follows (adjust jq command as needed):
CMD /opt/ibm/wlp/bin/server run defaultServer | grep --line-buffered "}" | jq -c '{ibm_datetime, message}'
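For example, a single Liberty JSON record (abbreviated here; the field names come from Liberty's JSON schema) would be reduced by the jq step like this:
echo '{"type":"liberty_message","ibm_datetime":"2019-01-01T00:00:00.000+0000","loglevel":"INFO","message":"hello"}' | jq -c '{ibm_datetime, message}'
which prints {"ibm_datetime":"2019-01-01T00:00:00.000+0000","message":"hello"}.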
If your use case also requires sending log4j output to stdout, I would suggest changing the Dockerfile CMD to run a script you add to the image. In that script, tail your log4j log file as follows (this can be combined with the jq advice above):
#!/bin/sh
# stream the log4j file to stdout in the background, then run the server in the foreground
tail -F myLog.json &
/opt/ibm/wlp/bin/server run defaultServer
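Then copy the script into the image and make it the container command (the name start.sh is just a placeholder):
COPY start.sh /opt/start.sh
RUN chmod +x /opt/start.sh
CMD ["/opt/start.sh"]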
[1] https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
[2] https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html
[3] https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html

Just in case it helps, I ran into the same issue and the best solution I found was:
Convert the app to use java.util.logging (JUL)
In server.xml add <logging consoleSource="message,trace" consoleFormat="json" traceSpecification="{package}={level}"/> (swap package and level as required).
Add a bootstrap.properties that contains com.ibm.ws.logging.console.format=json.
This will give you consistent server and application logging in JSON. A couple of lines at server boot are not JSON, but in my case that was just one empty line and a "Launching defaultServer..." line.
I too wanted the JSON structure to be consistent with other containers using Log4j2, so I followed the advice from dbourne above and added jq to the CMD in my Dockerfile to reformat the JSON:
CMD /opt/ol/wlp/bin/server run defaultServer | stdbuf -o0 -i0 -e0 jq -crR '. as $line | try (fromjson | {level: .loglevel, message: .message, loggerName: .module, thread: .ext_thread}) catch $line'
The stdbuf -o0 -i0 -e0 prefix stops jq from buffering its output, so log lines appear immediately instead of being held up in the pipe.
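For example (sample record abbreviated; field names per Liberty's JSON schema), a JSON line gets reshaped while a non-JSON boot line falls through the catch unchanged:
echo '{"loglevel":"INFO","message":"hi","module":"com.example.App","ext_thread":"Default Executor-thread-1"}' | jq -crR '. as $line | try (fromjson | {level: .loglevel, message: .message, loggerName: .module, thread: .ext_thread}) catch $line'
prints {"level":"INFO","message":"hi","loggerName":"com.example.App","thread":"Default Executor-thread-1"}, whereas piping the plain-text line "Launching defaultServer..." through the same filter prints it back verbatim.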
This strips out the Liberty-specific JSON attributes, which is either good or bad depending on your perspective. I don't need to add new values, so I don't have a good recommendation for that.
Although the JUL API is not quite as nice as Log4j2 or SLF4J, it takes very little code to wrap the JUL API in something closer to Log4j2, e.g. to get varargs rather than an Object[].
Open Liberty will also pick up logging changes dynamically if you edit server.xml, so it pretty much has all the necessary bits, IMHO.

Overriding a JMeter property in an Azure pipeline Run Taurus task is not working

I am running JMeter from Taurus and I need the output kpi.jtl file to include the URL column.
I have tried passing the parameters -o modules.jmeter.properties.save.saveservice.url='true' and
-o modules.jmeter.properties="{'jmeter.save.saveservice.url':'true'}". The pipeline runs successfully, but the kpi.jtl doesn't have the URL. Please help.
I have tried a few more options, like editing jmeter.properties via the pipeline (which broke the pipeline and left it expecting input from the user) and
user.properties (which was ineffective).
I am expecting a kpi.jtl file with all the possible fields logged, especially the URL.
I believe you're using the wrong property; you should pass this one instead:
modules.jmeter.csv-jtl-flags.url=true
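For example, on the Taurus command line (my-test.yml is a placeholder for your own scenario file):
bzt -o modules.jmeter.csv-jtl-flags.url=true my-test.yml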
More information: CSV file content configuration
However, be aware that having a .jtl file "with all possible logs" is a form of performance anti-pattern, as it creates massive disk I/O and may ruin your test. More information: 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure

WildFly - logging into one file

It seems my WildFly server produces a separate log file for each day, like
server.log.2017-06-30 and server.log.2017-07-06. Is it possible to make it log into one (always the same) file?
By default WildFly is configured to use a periodic-rotating-file-handler which rotates every day. If you don't want log rotation you can use a file-handler instead.
The following CLI commands will make the change to using a file-handler.
batch
/subsystem=logging/root-logger=ROOT:remove-handler(name=FILE)
/subsystem=logging/periodic-rotating-file-handler=FILE:remove
/subsystem=logging/file-handler=FILE:add(named-formatter=PATTERN, append=true, autoflush=true, file={relative-to=jboss.server.log.dir, path=server.log})
/subsystem=logging/root-logger=ROOT:add-handler(name=FILE)
run-batch
One attribute to note is the append attribute. I've set it to true so that you won't lose any log messages on a reboot or when this command is executed. If you're not concerned about losing log messages you could set it to false.
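If you'd rather script this than type it into an interactive session, you can save the commands above to a file and run it with the CLI in non-interactive mode (the file name is just an example):
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=file-handler.cli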

Fastest way to get OpenAM attribute names for ssoadm

I am trying to script an OpenAM deployment using ssoadm, and want to know the fastest and most foolproof way to get the attribute names for ssoadm.
Right now, I log in to the console and "view HTML source" for the attribute I am interested in, then use that name via ssoadm. But this approach is time-consuming, and with OpenAM 13 the attribute names are no longer available in the source.
Are you interested in any configuration or service in particular?
For most configurations and services (such as datastores, auth modules, server properties, etc.) there is an ssoadm command that will give you the current values, from which you can grab the property names and use them in your script.
For example if you have a Datastore called OpenDJ in your top-level realm you can get the current configuration values using the following command:
ssoadm show-datastore -u amadmin -f /tmp/amadmin.pwd -e / -m OpenDJ
Typically it's just a matter of finding the right ssoadm command. Another option would be to look at the service definitions. All these definitions are kept in XML format in your configuration store inside ou=Services.
Hope this helps.
I think the easiest approach is probably to look at the service XML files. At configuration time, the service XML files are all copied to the ~/<OPENAM_HOME>/config/xml folder, so normally you can just grep for certain strings (like "dynamic"), but even then that may not work well.
If you know which service you are dealing with, things get a bit easier. Are you trying to change an authentication configuration? Then it must be defined in amAuth.xml. The service name to use with the ssoadm command is defined in the "name" attribute of the <Service> element. The service attribute names are defined in the "name" attributes of the <AttributeSchema> elements.
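For example, to pull the attribute names out of amAuth.xml with a quick grep (the path follows the configuration folder mentioned above; substitute your actual OpenAM home):
grep -o 'AttributeSchema name="[^"]*"' ~/<OPENAM_HOME>/config/xml/amAuth.xml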
Yet another alternative would be to just read the documentation as most of the property names are already documented:
http://openam.forgerock.org/doc/bootstrap/admin-guide/index.html#auth-core-realm-attributes

HOCON not substituting environment variables

I have read the documentation concerning falling back to environment variables at https://github.com/typesafehub/config/blob/master/HOCON.md#substitution-fallback-to-environment-variables. My understanding was that it would pick up any env vars. So, for instance, if from the shell I can do echo $HOSTNAME and see a non-empty response, then HOCON should see it as well.
In my application.conf I have a line
akka.remote.netty.tcp.hostname = ${HOSTNAME}
However, my app is not happy with this and fails to start with:
/conf/application.conf: 9: Could not resolve substitution to a value: ${HOSTNAME}
Is this a user issue? A shell issue? I am able to log in as the user and echo $HOSTNAME.
Tagging this scala and akka since that user base probably has the most exposure to HOCON.
The reason HOCON is not picking up the env var is that my app runs as a Linux service (CentOS 6.5), which clears away most environment variables.
See https://unix.stackexchange.com/questions/44370/how-to-make-unix-service-see-environment-variables for a relevant description of the issue.
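One fix, assuming you can edit the service's init/start script, is to export the variable explicitly before the JVM starts (bash sets HOSTNAME for interactive shells but does not export it):
export HOSTNAME="$(hostname)"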
This is a shot in the dark, but are you using an older version of typesafe-config? Maybe it's a newer-ish feature? The feature seems to be advertised as you describe, but if you are pulling in typesafe-config as a transitive dependency (say, from Akka), maybe you are getting an older version.
What happens if you remove the substitution from your .conf file (so parsing succeeds) and then print out the contents of ConfigFactory.systemEnvironment()? For reference: http://typesafehub.github.io/config/latest/api/com/typesafe/config/ConfigFactory.html#systemEnvironment--
HOSTNAME isn't an environment variable. It's a bash internal variable. See https://superuser.com/questions/132489/hostname-environment-variable-on-linux for more details.

Logging JBoss request/response ONLY using log4j

How can JBoss requests/responses ONLY be logged using log4j?
For my 3-tiered application (client, web-service and database), I'm trying to gather request/response times.
For instance, timestamps before/after:
Client sends request
WS receives request
WS sends query to database
Currently, my log displays several thousand lines of text (DEBUG mode), but I'm looking only for request/response information.
I suppose I could choose a different log level, but I'm not able to find the log4j.xml that most solutions refer to (server/xxx/conf/jboss-log4j.xml). The log4j.properties file in my Eclipse for some reason does not allow edits.
I'm new to JBoss; in fact, I inherited the current setup from somebody else, so I'm a little clueless about the entire JBoss thing.
Edit 1
Examples of log4j.properties can be found here.
Edit 2
My log4j.properties:
log4j.rootLogger=TRACE, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=C\:log4j.log
log4j.appender.file.MaxFileSize=1MB
log4j.appender.file.MaxBackupIndex=1
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.category.org.springframework.beans.factory=DEBUG
It's probably best to configure logging through the logging subsystem. If you're looking to use your own log4j configuration file, see the instructions here.
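For example, with the JBoss CLI you could raise the level of just the logger that produces your request/response lines, leaving everything else quiet (the category name below is hypothetical; substitute whichever logger your request/response messages actually come from):
/subsystem=logging/logger=com.example.rest.requests:add(level=DEBUG)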