It seems my WildFly server produces a separate log file for each day, like server.log.2017-06-30 and server.log.2017-07-06. Is it possible to make it log to one (always the same) file?
By default WildFly is configured to use a periodic-rotating-file-handler, which rotates the log every day. If you don't want log rotation, you can use a file-handler instead.
The following CLI commands make the switch to a file-handler:
batch
/subsystem=logging/root-logger=ROOT:remove-handler(name=FILE)
/subsystem=logging/periodic-rotating-file-handler=FILE:remove
/subsystem=logging/file-handler=FILE:add(named-formatter=PATTERN, append=true, autoflush=true, file={relative-to=jboss.server.log.dir, path=server.log})
/subsystem=logging/root-logger=ROOT:add-handler(name=FILE)
run-batch
One attribute to note is append. I've set it to true so that you won't lose any existing log messages on a reboot or when this command is executed. If you're not concerned about losing log messages, you could set it to false.
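For reference, after the batch runs, the logging subsystem in standalone.xml should end up with something like the following (a sketch; the exact element layout can vary slightly between WildFly versions):

```xml
<file-handler name="FILE" autoflush="true">
    <formatter>
        <named-formatter name="PATTERN"/>
    </formatter>
    <file relative-to="jboss.server.log.dir" path="server.log"/>
    <append value="true"/>
</file-handler>
```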
I am writing a RESTful API with Yii, but I am getting an SQL error in the create function. My goal is to add new data to the news table, but it asks me for the author_id. How can I do this without breaking the default create method?
Solution 1. Run the query below in MySQL/phpMyAdmin and restart the server:
SET GLOBAL sql_mode = 'NO_ENGINE_SUBSTITUTION';
Solution 2.
Open the my.ini or my.cnf file for editing (which file you have depends on whether you are running Windows or Linux).
Find the following line:
sql_mode = "STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
Replace it with the line below. If the line is not found, insert it under the [mysqld] section (if there is no [mysqld] section, create it):
sql_mode = ""
Restart the MySQL service for the change to take effect.
If restarting is not feasible at the moment, you can log into the database server and execute the command below for the change to take effect immediately. However, the change will be discarded the next time the MySQL service restarts unless the config file is also updated as described above.
set global sql_mode='';
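To see exactly what you are changing, you can inspect the variable before and after (a sketch; the exact mode string depends on your MySQL version and distribution):

```sql
-- Show the current global and session modes
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;

-- Clear it for the running server only (lost on restart unless my.ini/my.cnf is updated)
SET GLOBAL sql_mode = '';

-- New connections now pick up the empty mode; already-open sessions keep theirs
SELECT @@GLOBAL.sql_mode;
```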
Our log server consumes our log messages through the Kubernetes pods' stdout, formatted in JSON, and indexes the JSON fields.
We need to specify some predefined fields in the messages so that we can track transactions across pods.
For one of our pods we use Liberty profile and are having trouble configuring its logging for these needs.
One idea was to use Log4j to send customized JSON messages to the console. But all messages are corrupted by the Liberty log system, which intercepts and modifies everything written to the console. I failed to configure the Liberty logging parameters (copySystemStreams = false, console log level = NO) for my needs; Liberty always modifies my output and interleaves non-JSON messages.
To work around all that I used Liberty's consoleFormat="json" logging parameter, but this introduced unnecessary fields and still did not let me specify my custom fields.
Is it possible to control Liberty's logging and console?
What is the best way to implement my use case with Liberty (and, if possible, Log4j)?
As you mentioned, Liberty can log to the console in JSON format [1]. The two problems you mentioned with that, for your use case, are (1) unnecessary fields and (2) no way to specify your custom fields.
Regarding unnecessary fields: Liberty has a fixed set of fields in its JSON schema, which you cannot customize. If you don't want some of the fields, I can think of a few options:
use Logstash.
Some log handling tools, like Logstash, allow you to remove [2] or mutate [3] fields. If you are sending your logs to Logstash you could adjust the JSON to your needs that way.
change the JSON format Liberty sends to stdout using jq.
The default CMD (from the websphere-liberty:kernel Dockerfile) is:
CMD ["/opt/ibm/wlp/bin/server", "run", "defaultServer"]
You can add your own CMD to your Dockerfile to override that as follows (adjust jq command as needed):
CMD /opt/ibm/wlp/bin/server run defaultServer | grep --line-buffered "}" | jq -c '{ibm_datetime, message}'
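To illustrate what that jq filter does: it keeps only the ibm_datetime and message fields of each JSON line. A rough Python equivalent (the field names come from Liberty's JSON schema; the sample line is made up):

```python
import json

def project(line: str) -> str:
    """Keep only the two fields the jq filter '{ibm_datetime, message}' selects."""
    record = json.loads(line)
    kept = {key: record.get(key) for key in ("ibm_datetime", "message")}
    return json.dumps(kept)

sample = '{"ibm_datetime": "2017-01-01T00:00:00+0000", "message": "started", "loglevel": "INFO"}'
print(project(sample))
```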
If your use case also requires sending Log4J output to stdout, I would suggest changing the Dockerfile CMD to run a script you add to the image. In that script, tail your Log4J log file as follows (this can be combined with the jq advice above):
tail -F myLog.json &
/opt/ibm/wlp/bin/server run defaultServer
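Putting those two lines together, the Dockerfile change might look like this (the /logs/myLog.json path is hypothetical; point it at wherever your Log4J appender writes):

```dockerfile
FROM websphere-liberty:kernel
# Tail the Log4J JSON file in the background, then run Liberty in the
# foreground so the container stays up and both streams reach stdout.
CMD tail -F /logs/myLog.json & exec /opt/ibm/wlp/bin/server run defaultServer
```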
[1] https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
[2] https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html
[3] https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html
Just in case it helps, I ran into the same issue and the best solution I found was:
Convert the app to use java.util.logging (JUL)
In server.xml add <logging consoleSource="message,trace" consoleFormat="json" traceSpecification="{package}={level}"/> (swap package and level as required).
Add a bootstrap.properties that contains com.ibm.ws.logging.console.format=json.
This will give you consistent server and application logging in JSON. A couple of lines at server boot are not JSON, but in my case that was just one empty line and a "Launching defaultServer..." line.
I too wanted the JSON structure to be consistent with other containers using Log4j2, so I followed the advice from dbourne above and added jq to the CMD in my Dockerfile to reformat the JSON:
CMD /opt/ol/wlp/bin/server run defaultServer | stdbuf -o0 -i0 -e0 jq -crR '. as $line | try (fromjson | {level: .loglevel, message: .message, loggerName: .module, thread: .ext_thread}) catch $line'
The stdbuf -o0 -i0 -e0 prevents the piped ("|") jq from buffering its output.
This strips out the Liberty-specific JSON attributes, which is either good or bad depending on your perspective. I don't need the extra values, so I don't have a good recommendation for keeping them.
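The try ... catch $line part of that jq program is what keeps the few non-JSON boot lines intact. In Python terms the filter behaves roughly like this (field mapping copied from the jq program above; the sample lines are made up):

```python
import json

def reshape(line: str) -> str:
    """Re-map a Liberty JSON log line to Log4j2-style field names; pass
    non-JSON lines (e.g. boot messages) through unchanged, mirroring
    jq's 'try (fromjson | ...) catch $line'."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return line
    return json.dumps({
        "level": record.get("loglevel"),
        "message": record.get("message"),
        "loggerName": record.get("module"),
        "thread": record.get("ext_thread"),
    })

print(reshape('{"loglevel": "INFO", "message": "up", "module": "com.example", "ext_thread": "main"}'))
print(reshape("Launching defaultServer..."))
```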
Although the JUL API is not quite as nice as Log4j2 or SLF4J, it takes very little code to wrap the JUL API in something closer to Log4j2, e.g. to have varargs rather than an Object[].
OpenLiberty will also dynamically pick up logging changes if you edit server.xml, so it pretty much has all the necessary bits, IMHO.
How can I change the directory where Capistrano puts its log files? I could not find it in the docs.
Currently the logs appear in myapp/log/... on my dev machine. However, since I am using Laravel, and there is already a log directory myapp/storage/logs, I would like Capistrano's logs to appear there as well.
Do you mean the capistrano.log file that is created and appended to whenever you deploy?
You can specify the location by adding the following to deploy.rb:
set :format_options, log_file: "storage/logs/capistrano.log"
This tells Airbrussh (the default logging implementation in Capistrano 3.5.0+) where to place the log file. More information here: https://github.com/mattbrictson/airbrussh#configuration
I have an issue with the script: basically I don't use log4net or anything similar and I'm not planning to, but some resource which I access during my script apparently has references to log4net, so I get these messages:
log4net:ERROR XmlConfigurator: Failed to find configuration section
'log4net' in the application's .config file. Check your .config file
for the <log4net> and <configSections> elements. The configuration
section should look like:
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
I don't really care about this, as it is not a real error; I would prefer to somehow hide these messages from the prompt window. Is this possible?
How can I suppress this information without too much hassle?
This message comes from log4net's internal debugging and means that no log4net configuration information was found in the config file. What I find strange is that this kind of info is usually opt-in:
There are 2 different ways to enable internal debugging in log4net.
These are listed below. The preferred method is to specify the
log4net.Internal.Debug option in the application's config file.
Internal debugging can also be enabled by setting a value in the application's configuration file (not the log4net configuration file,
unless the log4net config data is embedded in the application's config
file). The log4net.Internal.Debug application setting must be set to
the value true. For example:
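The example the quote refers to is the appSettings entry; from the log4net documentation it looks like:

```xml
<configuration>
  <appSettings>
    <add key="log4net.Internal.Debug" value="true"/>
  </appSettings>
</configuration>
```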
This setting is read immediately on startup and will cause all internal debugging messages to be emitted.
To enable log4net's internal debug programmatically you need to set the log4net.Util.LogLog.InternalDebugging property to true.
Obviously the sooner this is set the more debug will be produced.
So either the code of one component uses the code approach, or there is a configuration value set to true. Your options are:
look through the configuration files for a reference to the log4net.Internal.Debug config key; if you find one set to true, set it to false.
add an empty log4net section in the configuration file to satisfy the configurator and prevent it from complaining
if the internal debugging is set through code, you may be able to redirect console output and the trace appenders (see the link for where the internal debugging writes to), but this really depends on your environment, so you'll need to dig a bit more to find out how to catch all the output. Not really simple.
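For the first two options, the relevant parts of the application's .config file would look roughly like this (a sketch; merge this into your existing configuration rather than replacing it):

```xml
<configuration>
  <configSections>
    <!-- Declares the log4net section so the configurator stops complaining -->
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
  </configSections>
  <appSettings>
    <!-- Turns internal debugging off in case something switched it on -->
    <add key="log4net.Internal.Debug" value="false" />
  </appSettings>
  <log4net>
    <!-- Intentionally empty: no appenders configured -->
  </log4net>
</configuration>
```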
I want to run Spring XD with Oracle (11g), which I already have in my environment. Currently my first concern is the jobs UI (my database has existing data of job executions that were performed by Spring Batch, and I simply want to display the details of those executions).
I'm using spring-xd-1.0.0.M5. I followed the instructions in the reference guide and changed application.yml to contain the following:
spring:
  datasource:
    url: jdbc:oracle:oci:MY_USERNAME/MYPWD#//orarmydomain.com:1521/myservice
    username: MY_USERNAME
    password: MYPWD
    driverClassName: oracle.jdbc.OracleDriver
  profiles:
    active: default,oracle
I also modified batch-jdbc.properties to have a database configuration similar to the above.
Yet, when I start xd-singlenode.bat (or xd-admin.bat), it seems to ignore my Oracle configuration and still uses the default hsqldb.
What am I doing wrong?
Thanks
The likely reason is that we did not upgrade the Windows .bat scripts to take advantage of the property overriding via xd-config.yml. If you look at the Unix script for xd-singlenode, you will see that when java is invoked there is an option
-Dspring.config.location=$XD_CONFIG
For now you can hardcode your location of that file; use file: as the prefix.
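For example, in xd-singlenode.bat you could add the system property to the java invocation along these lines (the path is hypothetical; point it at your actual xd-config.yml, and keep the rest of the existing arguments):

```bat
rem Hypothetical edit inside xd-singlenode.bat: pass the config location to the JVM
java -Dspring.config.location=file:C:/spring-xd/xd/config/xd-config.yml %JAVA_OPTS% ...
```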
Also, the UI right now is very primitive; you will not be able to see many details about the job execution. There are, however, many job-related commands you can execute in the shell, and there is only one gap regarding step execution information compared to what is available via spring-batch-admin.
The issue to watch for this is https://jira.springsource.org/browse/XD-1209, and it is scheduled for the next milestone release.
Let me know how it goes, thanks!
Cheers,
Mark