Why does Liferay test for isWarnEnabled before logging a warning? - liferay-6

I often see the following in liferay source code:
if (_log.isWarnEnabled()) {
    _log.warn(message);
}
What is the rationale behind the isWarnEnabled test? Isn't that check performed by _log.warn itself?
Alain

This is for performance reasons.
The argument to _log.warn(message) is evaluated before the call is made, so any work needed to build the message happens even when the WARN level is disabled.
The _log.isWarnEnabled() guard avoids this "expensive" operation.
See the Apache Commons Logging documentation.
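For illustration, a sketch of the kind of call the guard pays off for (the entry object and its methods here are hypothetical):
// Hypothetical example: without the guard, the concatenation and the
// entry.describe() call would run even when WARN is disabled.
if (_log.isWarnEnabled()) {
    _log.warn("Unable to import entry " + entry.getId() + ": " + entry.describe());
}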

Related

Why can OptaPlanner not find a feasible solution when I use the setScoreDrlFileList API?

If I call setScoreDrlFileList to set a dynamically generated DRL file, the solution is never feasible no matter how long I solve. What am I doing wrong with the API? My code looks like this:
File file = new File(drl);
List<File> fileList = new ArrayList<File>(1);
fileList.add(file);
sessionSolverFactory.getSolverConfig().getScoreDirectorFactoryConfig().setScoreDrlFileList(fileList);
If I remove the setScoreDrlFileList call and use the classpath resource "org/optaplanner/examples/curriculumcourse/solver/curriculumCourseScoreRules.drl" instead, my app finds a feasible solution in a second! What's wrong?
My application is a clone of the curriculum course web example. The OptaPlanner version is 6.5 and the JVM is Oracle 1.8.
Please help!

Logstash Scala log parsing

I've got a problem with Logstash. I use Logback, Logstash, Kibana and Elasticsearch (Docker as the Logstash input source).
The problem is that I have no idea how to write a correct Logstash config file to extract the interesting information.
The single scala log looks like this:
[INFO] [05/06/2016 13:58:31.789] [integration-akka.actor.default-dispatcher-14] [akka://integration/user/InstanceSupervisor/methodRouter/outDispatcher] sending msg: PublishMessage(instance,vouchers,Map(filePath -> /home/mateusz/JETBLUETESTING.csv, importedFiles -> Map(JETBLUETESTING.csv -> Map(status -> DoneStatus, processed -> 1, rows -> 5))),#contentHeader(content-type=application/octet-stream, content-encoding=null, headers=null, delivery-mode=2, priority=0, correlation-id=null, reply-to=null, expiration=null, message-id=null, timestamp=null, type=null, user-id=null, app-id=null, cluster-id=null)
I'd like to get something like the [INFO] tag, the timestamp and of course the whole log line in a single Kibana result.
As of now I don't even know exactly what the log looks like (because it's formatted by Logback). Any information you can provide would be greatly appreciated, because I've been stuck on this problem for a few days.
When learning Logstash it's best to find a debugger to help experiment with grok patterns. The standard one appears to be hosted here. The site allows you to paste a snippet from your logs and then experiment with either pre-defined or custom patterns. The pre-defined patterns can be found here.
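For example, a rough, untested grok sketch for the sample line above (the field names level, ts, thread, actor and log_message are my own choices):
filter {
  grok {
    # Pull the bracketed prefix fields apart; keep the rest as log_message
    match => { "message" => "\[%{LOGLEVEL:level}\] \[%{DATA:ts}\] \[%{DATA:thread}\] \[%{DATA:actor}\] %{GREEDYDATA:log_message}" }
  }
  date {
    # Parse the akka timestamp into @timestamp
    match => ["ts", "MM/dd/yyyy HH:mm:ss.SSS"]
  }
}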
I had the same issue recently when trying to find out what Logback was sending to Logstash. I found that Logback was able to convert the logs to JSON. A snippet I found useful is:
filter {
  json {
    source => "message"
  }
}
I found it in this related SO post.
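On the Logback side, the JSON conversion is commonly done with the logstash-logback-encoder library; a minimal logback.xml sketch (the host and port are placeholders):
<!-- Assumes the logstash-logback-encoder dependency is on the classpath -->
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>localhost:5000</destination>
  <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="INFO">
  <appender-ref ref="logstash"/>
</root>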
Once you can see the logs, it makes it much easier to experiment with patterns.
Hope this is useful.

BIRT multi-sheet report using SpudSoft

I'm having some problems creating reports on my server without the default export engine.
I'm using SpudSoft to create them. I have the following configuration:
Tomcat 7
Birt 4.2.2
uk.co.spudsoft.birt.emitters.excel_0.8.0.201310230652.jar
And I followed this tutorial:
spudsoft-birt-excel-emitters
I haven't included this file:
lib/slf4j-api-1.6.2.jar
because it's not included in the *.jar file, and I haven't added this code either:
if ("XLS".equalsIgnoreCase(outputFileFormat)) {
    renderOption.setEmitterID("uk.co.spudsoft.birt.emitters.excel.XlsEmitter");
} else if ("XLSX".equalsIgnoreCase(outputFileFormat)) {
    renderOption.setEmitterID("uk.co.spudsoft.birt.emitters.excel.XlsxEmitter");
}
because I don't really know where to put it.
To run my report I use the following URL:
http://127.0.0.1:8090/birt-viewer/frameset?__format=xls&__report=informes/myReport.rptdesign&__emitterid=uk.co.spudsoft.birt.emitters.excel.XlsEmitter
and I get the following message:
org.eclipse.birt.report.service.api.ReportServiceException: EmitterID uk.co.spudsoft.birt.emitters.excel for render option is invalid.
What can I do to run the SpudSoft report? I've been reading for a week and I haven't found any solution!
Thanks a lot!
#Dominique,
I recommend upgrading from the emitter included with BIRT 4.3 (and given the lack of responses from the BIRT team I regret letting them put it in there).
Also, you don't need to use a specific IRenderOption type - they are all the same really anyway.
#Jota,
If you are getting that error it means that BIRT hasn't picked up the emitter correctly (you do have the correct emitter ID).
I don't use the BIRT war file, so my instructions aren't aimed at that approach (I just use the report engine in my own service).
The code snippet is no use for you, it's just a way to specify the emitter ID, which you are doing on the query string.
slf4j shouldn't be needed with the version of the emitter that you have - it uses JUL instead (I hate JUL, but it's one fewer dependency).
Can you post a complete listing of the jar files in your war?
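For reference, a minimal sketch of using the report engine directly in code and setting the emitter ID there (paths and output names are placeholders; untested):
import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.*;

public class RunReport {
    public static void main(String[] args) throws Exception {
        EngineConfig config = new EngineConfig();
        Platform.startup(config);
        IReportEngineFactory factory = (IReportEngineFactory)
            Platform.createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
        IReportEngine engine = factory.createReportEngine(config);

        IReportRunnable design = engine.openReportDesign("informes/myReport.rptdesign");
        IRunAndRenderTask task = engine.createRunAndRenderTask(design);

        RenderOption options = new RenderOption();
        options.setOutputFormat("xls");
        options.setOutputFileName("myReport.xls");
        // The SpudSoft emitter ID, as used on the query string above
        options.setEmitterID("uk.co.spudsoft.birt.emitters.excel.XlsEmitter");
        task.setRenderOption(options);

        task.run();
        task.close();
        engine.destroy();
        Platform.shutdown();
    }
}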
It seems to be because you use a generic IRenderOption. With the SpudSoft emitter you should instantiate your render options like this:
EXCELRenderOption excelOptions = new EXCELRenderOption();
Note that if you upgrade to BIRT 4.3 you don't have to set the emitter any more; it is embedded.

Perl parsing a log4j log [duplicate]

We have several applications that use log4j for logging. I need to get a log4j parser working so we can combine multiple log files and run automated analysis on them. I'm not looking to reinvent the wheel, so can someone point me to a decent pre-existing parser? I do have the log4j conversion pattern if that helps.
If not, we'll have to roll our own.
I didn't realize that Log4j ships with an XML appender.
The solution was: specify an XML appender in the logging configuration file, include the output XML file as an entity in a well-formed XML file, then parse the XML using your favorite technique.
The other methods had the following limitations:
Apache Chainsaw - not automated enough
JDBC - poor performance in a high-performance distributed app
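For illustration, the wrapper trick looks roughly like this (app.log.xml is a placeholder for the XMLLayout output, which is not well-formed XML on its own):
<?xml version="1.0"?>
<!DOCTYPE log4j:eventSet [<!ENTITY data SYSTEM "app.log.xml">]>
<log4j:eventSet xmlns:log4j="http://jakarta.apache.org/log4j/">
  &data;
</log4j:eventSet>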
You can use OtrosLogViewer with batch processing. You have to:
- Define your log format; you can use the Log4j pattern layout parser or Log4j XmlLayout.
- Create a Java class that implements LogDataParsedListener (see the sketch after this list). The method public void logDataParsed(LogData data, BatchProcessingContext context) will be called for every parsed log event.
- Create a jar.
- Run OtrosLogViewer, specifying your log-processing jar, the LogDataParsedListener implementation, and the log files.
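A rough sketch of the listener step, built only from the signature quoted above (the import paths are my guess at OtrosLogViewer's package layout; check the project sources):
// NOTE: import paths are assumptions, not verified against OtrosLogViewer.
import pl.otros.logview.LogData;
import pl.otros.logview.batch.BatchProcessingContext;
import pl.otros.logview.batch.LogDataParsedListener;

public class EventCounter implements LogDataParsedListener {
    private int events = 0;

    // Called once for every log event parsed from the input files.
    public void logDataParsed(LogData data, BatchProcessingContext context) {
        events++; // replace with your own analysis of the event
    }
}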
What you are looking for is called SawMill, or something like it.
Log4j log files aren't really suitable for parsing; they're too complex and unstructured. There are third-party tools that can do it, I believe (e.g. Sawmill).
If you need to perform automated, custom analysis of the logs, you should consider logging to a database and analysing that. Log4j ships with a JDBCAppender which appends all messages to a database of your choice, but it has performance implications and it's a bit flaky. There are other, similar alternatives on the interweb, though (like this one).
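For example, a rough log4j 1.x properties sketch of that appender (the connection details and table layout are placeholders):
# Untested sketch; adjust the driver, URL, credentials and table to your setup
log4j.rootLogger=INFO, DB
log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.DB.URL=jdbc:mysql://localhost/logdb
log4j.appender.DB.driver=com.mysql.jdbc.Driver
log4j.appender.DB.user=loguser
log4j.appender.DB.password=secret
log4j.appender.DB.sql=INSERT INTO logs (level, logger, message) VALUES ('%p', '%c', '%m')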
You can use Log4j's Chainsaw V2 to process the various log files and collect them into one table, and either output those events as XML or use Chainsaw's built-in expression-based filtering, searching & colorizing support to slice & dice the logs.
Steps:
- Start Chainsaw V2
- Create a chainsaw configuration file by copying the example configuration file available from the Welcome tab - define one LogFilePatternReceiver 'plugin' entry for each log file that you want to process
- Start Chainsaw with that configuration
- Each log file will end up as a separate tab in the UI
- Pause the chainsaw-log tab and clear the events from that tab
- Create a new tab which aggregates the events from the various tabs by going to the 'view, create custom expression logpanel' menu item and entering 'level >= DEBUG' in the box. It will create a new tab containing events from all of the tabs with level >= DEBUG (which is why you cleared the chainsaw-log tab).
You can get an overview of the expression syntax used to filter, colorize and search from the tutorial (available from the Help menu).
If you don't want to use Chainsaw, you can do something similar - start a simple app that doesn't log but loads a log4j.xml config file with the 'plugin' entries you defined for the Chainsaw configuration, but also define a FileAppender with an XMLLayout - all of the events received by the 'receivers' will be sent to that single appender.
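A rough, untested sketch of such a log4j.xml (the file paths, logFormat and timestampFormat are placeholders you would adapt to your conversion pattern):
<?xml version="1.0" encoding="UTF-8"?>
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- One receiver per input log file; logFormat must mirror your pattern -->
  <plugin name="appLogReceiver" class="org.apache.log4j.varia.LogFilePatternReceiver">
    <param name="fileURL" value="file:///path/to/app.log"/>
    <param name="logFormat" value="TIMESTAMP LEVEL [LOGGER] MESSAGE"/>
    <param name="timestampFormat" value="yyyy-MM-dd HH:mm:ss,SSS"/>
    <param name="tailing" value="false"/>
  </plugin>
  <!-- Everything the receiver picks up goes to one XML-formatted file -->
  <appender name="xmlOut" class="org.apache.log4j.FileAppender">
    <param name="File" value="combined.xml"/>
    <layout class="org.apache.log4j.xml.XMLLayout"/>
  </appender>
  <root>
    <level value="debug"/>
    <appender-ref ref="xmlOut"/>
  </root>
</log4j:configuration>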

Replace éàçè... with the equivalent "eace" in GWT

I tried
s=Normalizer.normalize(s, Normalizer.Form.NFD).replaceAll("[^\\p{ASCII}]", "");
But it seems the GWT API doesn't provide such a function.
I also tried:
s = s.replace("é", "e");
But that doesn't work either.
The scenario is that I'm trying to generate a token from the clicked Widget's text for history management.
You can take the ASCII folding filter from Lucene and add it to your project. You can just take the foldToASCII() method from ASCIIFoldingFilter (the method does not have any dependencies). There is also a patch in Jira that has a full class for that without any dependencies - see here. It should compile under GWT without any problems. The license should also be OK, since it is the Apache License, but don't quote me on that - ask a real lawyer.
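For illustration, a minimal usage sketch (untested; the Lucene package of ASCIIFoldingFilter varies between versions, and for GWT client code you would copy the method's source rather than depend on the jar):
// Package name varies by Lucene version; this is the newer location.
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;

public class Folding {
    // Fold accented characters ("éàçè") to their ASCII equivalents ("eace").
    public static String fold(String s) {
        char[] input = s.toCharArray();
        char[] output = new char[4 * input.length]; // folding can expand up to 4x
        int length = ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, input.length);
        return new String(output, 0, length);
    }
}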
#okrasz, the foldToASCII() worked, but I found a shorter one: Transform a String to URL standard String in Java