Can Perl's Log::Log4perl log levels be changed dynamically without updating the config?

I have a Mason template running under mod_perl, which is using Log::Log4perl.
I want to change the log level of a particular appender, but changing the config is too awkward, as it would have to pass through our deployment process to go live.
Is there a way to change the log level of an appender at run-time, after Apache has started, without changing the config file, and then have that change affect any new Apache threads?

If you've imported the log level constants from Log::Log4perl::Level, then you can do things like:
use Log::Log4perl::Level;

$logger->level($ERROR);        # set to one of $DEBUG, $INFO, $WARN, $ERROR, $FATAL
$logger->more_logging($delta); # increase log level by $delta levels, a positive integer
$logger->less_logging($delta); # decrease log level by $delta levels
This is covered in the "Changing the Log Level on a Logger" section of the Log::Log4perl docs.

It seems kinda hacky to me, but it works (again assuming the constants from Log::Log4perl::Level are imported):
$Log::Log4perl::Logger::APPENDER_BY_NAME{SCREEN}->threshold($DEBUG);
And to make it more dynamic, you could pass in variables for the appender name and level:
my %LOG4PERL_LEVELS = (
    OFF   => $OFF,
    FATAL => $FATAL,
    ERROR => $ERROR,
    WARN  => $WARN,
    INFO  => $INFO,
    DEBUG => $DEBUG,
    TRACE => $TRACE,
    ALL   => $ALL,
);

$Log::Log4perl::Logger::APPENDER_BY_NAME{$appender_name}
    ->threshold($LOG4PERL_LEVELS{$new_level});

Related

How to do effective logging in Spark application

I have a Spark application written in Scala that runs a series of Spark SQL statements. The results are computed by calling a single 'count' action at the end against the final DataFrame. I would like to know what the best way is to do logging from within a Spark/Scala application job. Since all the DataFrames (around 20 of them) are computed by a single action at the end, what are my options for logging the outputs/sequence/success of particular statements?
The question is somewhat generic in nature. Since Spark uses lazy evaluation, the execution plan is decided by Spark, and I want to know up to what point the application statements ran successfully and what the intermediate results were at that stage.
The intention here is to monitor the long-running task, see up to which point it was fine, and where the problems crept in.
If we try to put logging before/after transformations, it gets printed when the code is read, not when it runs. So the logging has to produce custom messages during the actual execution (when the action is called at the end of the Scala code). If I put count/take/first etc. in between, the execution of the job slows down a lot.
I understand the problem that you are facing. Let me put out a simple solution for this.
You need to make use of org.apache.log4j.Logger. Use the following lines of code to generate logger messages (shown here in Scala, since the job is a Spark/Scala application):
import org.apache.log4j.Logger

val logger = Logger.getRootLogger
logger.error(errorMessage)
logger.info(infoMessage)
logger.debug(debugMessage)
Now, in order to redirect these messages to a log file, you need to create a log4j properties file with the contents below.
# Root logger option
# Log INFO and above to both the console and the rolling file appender (defined below)
log4j.rootLogger=INFO, console, rolling
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=OFF
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=OFF
log4j.logger.org.spark-project.jetty.servlet.ServletHandler=OFF
log4j.logger.org.spark-project.jetty.server=OFF
log4j.logger.org.spark-project.jetty=OFF
log4j.category.org.spark_project.jetty=OFF
log4j.logger.Remoting=OFF
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# Setting properties to have logger logs in local file system
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.encoding=UTF-8
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.rolling.maxBackupIndex=5
log4j.appender.rolling.maxFileSize=50MB
log4j.logger.org.apache.spark=OFF
log4j.logger.org.spark-project=OFF
log4j.logger.org.apache.hadoop=OFF
log4j.logger.io.netty=OFF
log4j.logger.org.apache.zookeeper=OFF
# Log file location
log4j.appender.rolling.file=/tmp/logs/application.log
You can change the log file path in the last line. Ensure the folder exists on every node with appropriate permissions.
Now, we need to pass this configuration when submitting the Spark job, as follows.
--conf spark.executor.extraJavaOptions=-Dlog4j.configuration=spark-log4j.properties --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=spark-log4j.properties
And,
--files "location of spark-log4j.properties file"
Hope this helps!
You can pull the Log4j 2 library in from Maven:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>${log4j.version}</version>
</dependency>
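Note that log4j-api only contains the interfaces; you will typically also need the log4j-core implementation on the classpath at runtime (assuming the same version property):
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>${log4j.version}</version>
    <scope>runtime</scope>
</dependency>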
For logging, first you need to create a logger object, and then you can log at different levels such as info, error, and warn. Below is an example of logging at the info level in Spark/Scala using Log4j 2:
import org.apache.logging.log4j.LogManager
val logger = LogManager.getLogger(this.getClass.getName)
logger.info("logging message")
So, to record progress at particular points, call logger.info("logging message") at those points.
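If the messages need to appear during the actual execution (the asker's point above), one option is to log from inside a transformation, so the message is emitted on the executors as each partition is processed. A minimal sketch of that idea; ExecutorLog and countWithLogging are illustrative names, and it assumes the Log4j 2 dependencies above:
import org.apache.logging.log4j.{LogManager, Logger}
import org.apache.spark.sql.DataFrame

// Loggers are not serializable; referencing a top-level object with a
// transient lazy val means each executor JVM creates its own logger on
// first use instead of capturing one in the closure.
object ExecutorLog extends Serializable {
  @transient lazy val logger: Logger = LogManager.getLogger(getClass.getName)
}

// Log while the stage actually executes on the executors, not when the
// transformation is defined on the driver.
def countWithLogging(df: DataFrame, label: String): Long = {
  val n = df.rdd
    .mapPartitionsWithIndex { (idx, rows) =>
      ExecutorLog.logger.info(s"$label: starting partition $idx") // executor-side
      rows
    }
    .count()
  ExecutorLog.logger.info(s"$label: finished, $n rows") // driver-side
  n
}
Note that the per-partition messages land in each executor's own log (its stderr, or the configured file appender on that node), not in the driver's log file.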

How to log useful error if scala play config files missing?

Is it possible to give a better error message if the config filename is mistyped when using Scala's play.api.Configuration?
E.g. if I run my application with sbt run -J-Dconfig.file=conf/my-config.conf but the file is actually called my_config.conf, no error is raised about the file not being found. Instead, the first error appears only when applicationConfig.has(configPath) is called, at which point there is no clear way to determine programmatically whether a config value is missing from the file or the config file itself is missing.
Here is what I do:
Wrap the configuration in a Config class.
Initialize that class on startup.
Log all property values.
This will surface (and log) exceptions at startup. Here is an example: AdaptersContext.scala
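In the same spirit, a minimal sketch of such a wrapper (Play 2.6+ style; the key names are made up):
import play.api.Configuration

// Hypothetical wrapper: read every expected key once at startup, so a missing
// file or key fails (and can be logged) immediately rather than on first access.
class AppConfig(conf: Configuration) {
  val dbUrl: String  = conf.get[String]("app.db.url")
  val dbUser: String = conf.get[String]("app.db.user")
}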
As a remark:
If you have your config file in the conf directory (on the classpath), use:
config.resource=demo.conf
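If you specifically want to tell a missing file apart from a missing key, you can also probe the file yourself at startup with the underlying Typesafe Config API. A rough sketch (it only handles the config.file system property, not config.resource):
import java.io.File
import com.typesafe.config.{ConfigFactory, ConfigParseOptions}

// With setAllowMissing(false), parseFile throws ConfigException.IO when the
// file does not exist, instead of silently yielding an empty config.
sys.props.get("config.file").foreach { path =>
  val opts = ConfigParseOptions.defaults().setAllowMissing(false)
  ConfigFactory.parseFile(new File(path), opts)
}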

Running junit tests in maven ignores programmatic log4j2 setup

I have a particular JUnit test which processes a big data file and when the logging level is left on TRACE, it kills Eclipse - something to do with the console handling, which is not relevant to this question.
I often switch between running all my tests using m2e, in which case I don't need any debugging log output, and running individual tests, where I often want to see the TRACE output.
To avoid having to edit my log4j2.xml config every time, I set the logging level to INFO programmatically in this particular test, like this, from programmatically-change-log-level-in-log4j2:
@Before
public void beforeTest() {
    LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
    Configuration config = ctx.getConfiguration();
    LoggerConfig loggerConfig = config.getLoggerConfig(LogManager.ROOT_LOGGER_NAME);
    initialLogLevel = loggerConfig.getLevel();
    loggerConfig.setLevel(Level.INFO);
}
But it has no effect.
If the "ROOT_LOGGER" that I am manipulating here represents the same logger as the <root> in my log4j2.xml, then this is not going to work, is it? I need to override all the other loggers, or shut it down completely, but how?
Could it be influenced by my use of slf4j as the log4j2 wrapper in all of my other classes?
I have tried getting hold of the appenders and calling appender.stop(), but that doesn't work.
You can put a log4j2 config file in the src/test/resources/ directory. During unit tests that file (on the test classpath) will be used in preference to the one shipped with your main code.
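A minimal sketch of such a file: name it log4j2-test.xml, since Log4j 2's automatic configuration looks for the -test variant on the classpath before plain log4j2.xml.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss} %-5level %logger{36} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- INFO here suppresses the TRACE flood during test runs -->
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>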

How to change the Eclipse OSGi service timeout

I need to debug through the start of an OSGi dynamic service in an Eclipse RCP application, but the timeout is too short (or I'm too slow at debugging!).
!ENTRY org.eclipse.equinox.ds 2 0 2015-02-25 21:46:26.374
!MESSAGE Getting a lock required more than 10000 ms. There might be a synchronization problem in this callstack or just the build/dispose process of some components took too long!
Is there a way to set the timeout to something longer than the default 10000 ms?
Looks like this can be configured in the debugging '.options' file for 'org.eclipse.equinox.ds' plugin:
# Debugging options for the org.eclipse.equinox.ds plugin
# Turns on/off debugging of SCR
org.eclipse.equinox.ds/debug=true
# Specifies that logged entries should be printed to the framework runtime console
org.eclipse.equinox.ds/print_on_console=false
# Enables generating and printing logs about the time performance of the operations executed by the SCR
org.eclipse.equinox.ds/performance=false
# Instantiates every component, whether or not it is declared "immediate"
org.eclipse.equinox.ds/instantiate_all=false
#Advanced options
# Enables caching of the parsed XML documents of the component descriptions
#org.eclipse.equinox.ds/cache_descriptions=false
# Specifies the maximum time in milliseconds, which is allowed to a user component's activate or bind method to take. If the method invocation has not finished, a new dispatcher thread will be launched to process the pending work of SCR
org.eclipse.equinox.ds/block_timeout=30000
To use this file, specify -debug <path to options file> on the Eclipse command line.
You could also set a VM argument if you start the Eclipse instance with a run configuration:
-Dequinox.ds.block_timeout=300000000
To increase the time, have a look at Equinox/RuntimeOptions:
http://wiki.eclipse.org/Equinox/RuntimeOptions
For example:
-Dequinox.scr.waitTimeOnBlock=15000

Eclipse Rhostudio not showing error messages

I have just started using the RhoStudio plugin for Eclipse 3.7.2 on Windows.
I have erroneous code but the console never seems to output error messages.
When I launch the simulator I do get some logs but as soon as it hits a bad line of code it stops logging and the simulator window goes blank.
Can anybody help?
You can increase the amount of log output by changing the log level in:
rhoconfig.txt
in your project's root folder and setting MinSeverity as follows:
# Rhodes log properties
# log level
# 0-trace, 1-info(app level), 3-warnings, 4-errors
# for production set to 3
MinSeverity = 0
# enable copy log messages to standard output, useful for debugging
LogToOutput = 1