WildFly: how to programmatically force a flush of the server log file when autoflush = false

For performance reasons I have set the property autoflush=false in the urn:jboss:domain:logging subsystem:
<periodic-size-rotating-file-handler name="FILE" autoflush="false">
WildFly does not immediately flush the server.log file but does so according to some policy of its own.
In some scenarios, however, it is necessary to have the latest information available in server.log without waiting for WildFly to decide to flush.
I have seen that it is possible to change the autoflush flag 'on the fly' using WildFly's management console: this causes an immediate flush of the latest information to server.log, but it also changes the application's XML configuration file.
The best option would be to force the flush on demand without having to change the application configuration file, but after several searches I have not found any way to do this.
I only found a less-than-elegant workaround: running a bunch of useless System.out calls in order to force JBoss to flush.
I also tried putting in an application:
System.out.flush();
But no flush is executed.
I am using WildFly 26.
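One direction I am considering but have not verified: WildFly's logging is built on JBoss LogManager, which implements the standard java.util.logging API, so in principle one can ask the handlers attached to the root logger to flush. Whether a deployment actually sees the server's log context (and therefore the FILE handler) depends on the logging configuration, so the class below is only a sketch and its name is made up:
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public final class ServerLogFlusher {

    // Flush every handler attached to the root logger. If this code runs in the
    // same log context as the "FILE" periodic-size-rotating-file-handler,
    // flush() should push the buffered records out to server.log.
    public static void flushRootHandlers() {
        Logger root = LogManager.getLogManager().getLogger("");
        if (root == null) {
            return;
        }
        for (Handler handler : root.getHandlers()) {
            handler.flush();
        }
    }
}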

Related

WildFly: detecting database connection leaks

I have an application deployed in WildFly, and sometimes DB connection leaks occur in it. I really cannot find them with the debugger, but they show up in the WildFly Management Console on the datasource statistics page: InUseCount is sometimes incremented.
So, questions:
Is it possible to create a handler that fires when a connection is created and closed, to find globally who does not close connections?
Is there a connection-leak troubleshooting approach more effective than plain debugging?
I found this article:
http://www.mastertheboss.com/jbossas/wildfly9/detecting-connection-leaks-in-wildfly-9
But it is not accurate for modern versions of WildFly (for example 19 and higher). The problem is that modern WildFly versions no longer use the ironjacamar.mcp startup parameter; instead, the mcp option of the datasource must be used.
Docs about datasource options:
https://docs.wildfly.org/19.1/wildscribe/subsystem/datasources/xa-data-source/index.html
After adding the mcp option, the file leaks.txt appears when the datasources are flushed.
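For reference, this is roughly what the datasource definition looks like with the option in place (the jndi-name/pool-name are placeholders, the leak-dumper pool class is the one the linked article uses, and the exact schema attribute may differ between WildFly versions, so verify it against your installation):
<datasource jndi-name="java:/jdbc/MyDS" pool-name="MyDS"
            mcp="org.jboss.jca.core.connectionmanager.pool.mcp.LeakDumperManagedConnectionPool">
    <!-- connection-url, driver and pool settings as usual -->
</datasource>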

JBoss fails after receiving a lot of queries using JMeter

During a stress/load test, after sending many queries to my JBoss server using JMeter, the server becomes unresponsive/unreachable. I want to know if there is any mechanism that makes JBoss unstable.
This might be an issue with the threads: some thread may be blocking or taking too long. In that case you will need to take a thread dump and verify where it is stuck/unresponsive. From the description it might also be a thread on the JMeter side that is using the resources and destabilizing JBoss. The server log could show some issue as well.
Recommendations
Take thread dumps at the moment it happens (see the sketch after this list)
Analyze them with fastthread.io or another thread-dump analyzer, e.g. TDA
Check server.log for any issue
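Outside the JVM, jstack <pid> or kill -3 <pid> produce the dump. A dump can also be captured from inside the JVM with the standard ThreadMXBean API, for example from a diagnostic servlet; a rough sketch (the class name is just for illustration):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class ThreadDumper {

    // Write a thread dump to stdout, which normally ends up in the server log.
    public static void dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            sb.append('"').append(info.getThreadName()).append("\" ")
              .append(info.getThreadState()).append('\n');
            for (StackTraceElement frame : info.getStackTrace()) {
                sb.append("    at ").append(frame).append('\n');
            }
            sb.append('\n');
        }
        System.out.println(sb);
    }
}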
Observation
When opening issues with JBoss 5/6/7, please upload the logs and configuration files; this will make debugging easier.

Set up rsyslog to send logs to a remote syslog server but not to messages/syslog

I am running an ELK stack as a central syslog server and I have set up rsyslog to send it log files that are not written to /var/log/messages by default.
The setup works very well, but since I made this configuration the external log files also show up in the messages file, which blows it out of proportion and makes debugging normal system logs difficult.
I want the logs to be sent to the syslog server but not into the messages file.
This is my current configuration:
111-elk-syslog.conf:
*.* @@IP_OF_THE_SYSLOGSERVER:514
101-external-log.conf:
$ModLoad imfile
$InputFileName PATH_TO_LOGFILE
$InputFileTag FILE_TAG
$InputFileStateFile FILE_TAG
$InputFileFacility local3
$InputRunFileMonitor
I know I could work around this by using Filebeat, but rsyslog is working very well in my environment, and this application is the only one logging so much that this is an actual problem.
I don't understand your setup in detail, but it may help to know that, in general, if you don't want any further handling of a message you have matched in rsyslogd, you simply repeat the filter on the next line with & and apply the stop action. For example, you might try:
*.* @@IP_OF_THE_SYSLOGSERVER:514
& stop
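Since the imfile input in your question is assigned facility local3, a narrower variation of the same idea (untested, and it must be processed before the rule that writes to the messages file) would forward and stop only that facility, leaving normal system logs alone:
local3.* @@IP_OF_THE_SYSLOGSERVER:514
& stop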

Catalyst: Log4perl and Apache

I host my Catalyst web application with Apache2 and mod_perl. The web application uses the Log4perl module to generate log files.
The problem is that log entries are only generated when the Apache service starts; afterwards no new entries are written.
If I use Catalyst's integrated development server instead, log entries are generated normally.
I have already checked the access rights and they seem fine: the Apache process is the owner and can write.
Does anyone have an idea what causes this problem?
This is my log4perl config:
log4perl.logger.myapp=INFO, LOGFILE
log4perl.appender.LOGFILE=Log::Log4perl::Appender::File
log4perl.appender.LOGFILE.filename=myapp.log
log4perl.appender.LOGFILE.mode=append
log4perl.appender.LOGFILE.layout=PatternLayout
log4perl.appender.LOGFILE.layout.ConversionPattern=[%d] [%p] %m%n
I set up a test application running on Apache2 and mod_perl and got this to work. Here are the notes I took about it.
I used Log::Log4perl::Catalyst to do the logging within Catalyst. You mentioned using Log4perl, but I didn't know if you were using the Catalyst extension or not. In my main package, I had these lines:
use Log::Log4perl::Catalyst;
...
__PACKAGE__->log(Log::Log4perl::Catalyst->new('/full/path/to/l4p.conf'));
I did have to specify the full path to the log configuration file. I added a few logging statements to make sure that worked.
I used your sample above, but I did change one thing. I had to specify a full path to the log location again:
log4perl.appender.LOGFILE.filename=/full/path/to/myapp.log
Once I did those things, hitting the main site updated the log file.

PostgreSQL: Where does the output of ereport() go?

I am in the middle of developing a C extension library for PostgreSQL. I am using a lot of ereport() calls to help with future debugging.
A typical example of usage in my code would be something like this:
ereport(NOTICE, (errmsg("[%s]: Returned nonzero result (%d).", (const char*)__FUNCTION__, ret)) );
However, when I look in /var/log/postgresql/postgresql-8.4-main.log my messages do not seem to be appearing there - only messages from what I assume is the db server daemon.
So, where are my log messages being stored?
BTW, I am running PG 8.4 on Ubuntu Linux (10.04)
By default, logging of non-critical messages is not enabled on fresh installs. You configure it by setting log_destination and logging_collector.
PostgreSQL has several logging levels, and by default the NOTICE level is not saved to the log files (even when they are enabled); this is controlled by the log_min_messages setting. However, NOTICE is emitted to the client by default; this is controlled by the client_min_messages setting.
So if you want these to be stored in log files, you will have to either change NOTICE to WARNING in your code, or set log_min_messages = notice.
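For reference, the relevant postgresql.conf settings would look roughly like this (a sketch; some of them require a reload or restart, and exact defaults vary between versions):
logging_collector = on          # start the collector process that captures stderr into log files
log_destination = 'stderr'      # what the collector captures
log_min_messages = notice       # keep NOTICE-level ereport() output in the server log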
See http://www.postgresql.org/docs/8.4/static/runtime-config-logging.html
and maybe http://www.depesz.com/index.php/2011/05/06/understanding-postgresql-conf-log/