I want to adopt logging within several utility classes, e.g. DBI. What is the best practice for doing this with Log::Log4perl?
I think it is OK to subclass DBI (say, MyDBI) and override some methods there to make them do the logging. But there's a problem with categories. If you create a logger with
Log::Log4perl->get_logger(ref $self || $self)
then all log entries belong to MyDBI and it would be hard to filter them. So it seems better to me to pass a logger to MyDBI from the calling module (say, MyModule), so that the category would be semantically right. The first question: is this OK in general? I mean, are there any hidden pitfalls in such an approach?
The second question: how do I pass the logger to MyDBI? I have an idea to declare a global variable, e.g. $MyDBI::logger, and set it in the calling method:
local $MyDBI::logger = Log::Log4perl->get_logger(ref $self || $self);
There's a traditional dislike for global variables. Can you think of a better way?
EDIT: Of course, the best code is no code. caller would suffice, if it took inheritance into account.
The third question: is it possible to log to both categories, MyDBI and MyModule, with Log::Log4perl, if they are hierarchically unrelated?
I would strongly encourage you to log independently of the caller, with a separate logger per function or per module, so that your module can run regardless of whether its caller uses Log4perl.
Each module will create its own logger with Log::Log4perl->get_logger("module name"). If the caller does not create any appender, the program will simply not log anything, and the Log4perl calls in the modules are functionally ignored. Log4perl implements a singleton pattern for creating a logger, which is similar to a global variable.
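A minimal sketch of that pattern (the MyDBI package name and the do_query method are assumptions for illustration):

package MyDBI;
use Log::Log4perl ();

# Logger in this module's own category; per the point above, these
# calls log nothing if the caller never configures an appender.
my $logger = Log::Log4perl->get_logger(__PACKAGE__);

sub do_query {
    my ($self, $sql) = @_;
    $logger->debug("running query: $sql");
    # ... actual DBI work here ...
}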
Your logging should be as fine-grained as possible, and as a rule of thumb I log every input parameter and every result of a function/method at debug level. If really necessary, you can also use the stack trace to find the caller that led to the error condition. Passing the logger around as a parameter just adds complexity.
The following recipes might give you some more ideas about the flexibility of Log4perl's configuration: Log4perl Recipes. The whole idea, for me, is to keep the code unchanged and to change the logging configuration depending on my actual logging/bug-tracing requirements (which might change in the future). Keeping the code unchanged is even more important with modules, because you want to avoid retesting all calling programs.
To answer your questions briefly:
1.) Each module should have its own logger.
2.) Thus, do not add loggers to the interface.
3.) Log4perl will log at all levels, depending on your appender configuration. This way you control what you will and will not see: the normal level will usually be INFO, with specific modules set to DEBUG as needed. In bad cases, the PatternLayout even lets you add the stack trace to the log purely through configuration.
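As an illustration of point 3, a configuration along these lines (the MyDBI category is an assumption carried over from the question) keeps the root at INFO while turning on DEBUG for one module only:

log4perl.rootLogger                = INFO, Screen
log4perl.logger.MyDBI              = DEBUG
log4perl.appender.Screen           = Log::Log4perl::Appender::Screen
log4perl.appender.Screen.layout    = Log::Log4perl::Layout::PatternLayout
log4perl.appender.Screen.layout.ConversionPattern = %d %p %c - %m%n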
Please help me resolve an issue I am facing with disabling DUT instances.
My DUT top module has many instances in it, but my test does not need them.
Is there any way to disable these instances from the test-bench?
For example, this is my DUT module prototype:
module top (…….);
// instances that need to be disabled
module1 #(16) inst1 (.CLK(clk_100),.PAD_RSTN(ext_reset_n),.RSTN(global_reset_n));
module2 #(16) inst2 (.CLK(clk_100),.PAD_RSTN(ext_reset_n),.RSTN(pcie_reset_n));
pcie_module #(…) inst_pci (…..);
// main test target instances
target_testmodule #(…) test_inst(…);
child1_of_target_testmodule #(…) test_inst_child1(…);
child2_of_target_testmodule #(…) test_inst_child2(…);
endmodule
so my test-bench will only test the target_testmodule and its child modules.
I am using bind to connect the interface to target_testmodule, and then I start driving the pins of target_testmodule. The target_testmodule in turn drives its child modules' pins.
So for this test I don't need the pcie_module instance or the other instances, because they are big instances that take a lot of time, produce lots of warnings, and also drive some of the target_testmodule ports, which I don't need.
My question: is there some mechanism to disable the pcie_module instance from the test-bench? I don't have write permission on the top module to comment out the instances or put them inside `ifdefs.
Your first mechanism is to ask the person who locked the file to change it so you can get your job done more efficiently. They can put in generate or ifdef statements for you.
If you had separate clock or enable signals, you could force them to an inactive state.
Copy and modify a local copy of the top-level file and have that file used instead. There are a number of ways to substitute the local module.
Beyond getting write permission, the next easiest way would be to make your own top.
Verilog (since IEEE 1364-2001) and SystemVerilog have a way to compile different modules of the same name into different libraries, then use a configuration to decide which one will be used during elaboration. You could use this technique to swap the module instances you don't want with simplified or dummy versions. Depending on how your testing environment is configured, implementing these configurations can be tricky. If you are up for the challenge, read IEEE Std 1800-2012 § 33, "Configuring the contents of a design".
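A minimal sketch of such a configuration (the library names worklib/stublib and the empty stub module are assumptions; the actual library mapping depends on your tool):

// Compile an empty pcie_module stub into stublib and the real design
// into worklib, then select the stub for the inst_pci instance only.
config test_cfg;
    design worklib.top;
    default liblist worklib;
    instance top.inst_pci use stublib.pcie_module;
endconfig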
As much as I can (mostly for clarity/documentation), I've been trying to say
use Some::Module;
use Another::Module qw( some namespaces );
in my Perl modules that use other modules.
I've been cleaning up some old code and see some places where I reference modules in my code without ever having used them:
my $example = Yet::Another::Module->AFunction($data); # EXAMPLE 1
my $demo = Whats::The::Difference::Here($data); # EXAMPLE 2
So my questions are:
Is there a performance impact (I'm thinking compile time) by not stating use x and simply referencing it in the code?
I assume that I shouldn't use modules that aren't utilized in the code - I'm telling the compiler to compile code that is unnecessary.
What's the difference between calling functions in example 1's style versus example 2's style?
I would say that this falls firmly into the category of premature optimisation, and if you're not sure, then leave it in. You would have to be including some vast unused libraries for removing them to help at all.
It is typical of Perl to hide a complex issue behind a simple mechanism that will generally do what you mean without too much thought.
The simple mechanisms are these:
use My::Module 'function' is the same as writing
BEGIN {
require My::Module;
My::Module->import( 'function' );
}
The first time perl successfully executes a require statement, it adds an element to the global %INC hash, which has the "pathified" module name (in this case, My/Module.pm) as its key and the absolute location where it found the source as its value.
If another require for the same module is encountered (that is, it already exists in the %INC hash), then require does nothing.
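A small sketch of that behaviour (My::Module and the printed path are assumptions):

require My::Module;
print $INC{'My/Module.pm'};   # e.g. /usr/lib/perl5/My/Module.pm
require My::Module;           # does nothing: the %INC key already exists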
So your question
What happens if I reference a package but don't use/require it?
We're going to have a problem with use, utilise, include and reference here, so I'm code-quoting only use and require when I mean the Perl language words.
Keeping things simple, these are the three possibilities:
As above, if require is seen more than once for the same module source, then it is ignored after the first time. The only overhead is checking to see whether there is a corresponding element in %INC.
Clearly, if you use source files that aren't needed then you are doing unnecessary compilation. But Perl is damn fast, and you will be able to shave only fractions of a second from the build time unless you have a program that uses enormous libraries and looks like use Catalyst; print "Hello, world!\n";
We know what happens if you make method calls to a class library that has never been compiled. We get
Can't locate object method "new" via package "My::Class" (perhaps you forgot to load "My::Class"?)
If you're using a function library, then what matters is the part of use that says
My::Module->import( 'function' );
because the first part is require, and we already know that require never does anything twice. Calling import is usually a simple function call, and you would be saving nothing significant by avoiding it.
What is perhaps less obvious is that big modules may include multiple subsidiary modules. For instance, if I write just
use LWP::UserAgent;
then it knows what it is likely to need, and these modules will also be compiled:
Carp
Config
Exporter
Exporter::Heavy
Fcntl
HTTP::Date
HTTP::Headers
HTTP::Message
HTTP::Request
HTTP::Response
HTTP::Status
LWP
LWP::MemberMixin
LWP::Protocol
LWP::UserAgent
Storable
Time::Local
URI
URI::Escape
and that's ignoring the pragmas!
Did you ever feel like you were kicking your heels, waiting for an LWP program to compile?
I would say that, in the interests of keeping your Perl code clear and tidy, it may be an idea to remove unnecessary modules from the compilation phase. But don't agonise over it, and benchmark your build times before doing any pre-handover tidy. No one will thank you for reducing the build time by 20ms and then causing them hours of work because you removed a non-obvious requirement.
You actually have a bunch of questions.
Is there a performance impact (thinking compile time) by not stating use x and simply referencing it in the code?
No, there is no performance impact, because you can't do that. Every namespace you are using in a working program gets defined somewhere. Either you used or required it before the point where it's called, or one of your dependencies did, or another way [1] was used to make Perl aware of it.
Perl keeps track of those things in symbol tables. They hold all the knowledge about namespaces and variable names. So if your Some::Module is not in the referenced symbol table, Perl will complain.
I assume that I shouldn't use modules that aren't utilized in the code - I'm telling the compiler to compile code that is unnecessary.
There is no question here. But yes, you should not do that.
It's hard to say if this is a performance impact. If you have a large Catalyst application that just runs and runs for months it doesn't really matter. Startup cost is usually not relevant in that case. But if this is a cronjob that runs every minute and processes a huge pile of data, then an additional module might well be a performance impact.
That's actually also a reason why all use and require statements should be at the top. So it's easy to find them if you need to add or remove some.
What's the difference between calling functions in example 1's style versus example 2's style?
Those are for different purposes mostly.
my $example = Yet::Another::Module->AFunction($data); # EXAMPLE 1
This syntax is very similar to the following:
my $e = Yet::Another::Module::AFunction('Yet::Another::Module', $data);
It's used for class methods in OOP. The most well-known one would be new, as in Foo->new. It passes the thing in front of the -> as the first argument to the function named AFunction in the corresponding package (the class the reference is blessed into, or the package the identifier names). But it does more: because it's a method call, it also takes inheritance into account.
package A::First::Module;
sub AFunction { ... }

package Yet::Another::Module;
# -norequire because the parent is defined in this same file
use parent -norequire, 'A::First::Module';
1;
In this case, your example would also call AFunction, because it's inherited from A::First::Module. In addition to the symbol table referenced above, Perl uses @ISA to keep track of who inherits from whom. See perlobj for more details.
my $demo = Whats::The:Difference::Here($data); # EXAMPLE 2
This has a syntax error. There is a : missing after The.
my $demo = Whats::The::Difference::Here($data); # EXAMPLE 2
This is a function call. It calls the function Here in the package Whats::The::Difference and passes $data and nothing else.
Note that as Borodin points out in a comment, your function names are very atypical and confusing. Usually functions in Perl are written with all lowercase and with underscores _ instead of camel case. So AFunction should be a_function, and Here should be here.
[1] For example, you can have multiple package definitions in one file, which you should not normally do, or you could assign stuff into a namespace directly with syntax like *Some::Namespace::frobnicate = sub {...}. There are other ways, but that's a bit out of scope for this answer.
Using the standard GWT logger, is it possible to disable the log statements for a complete package in the module XML?
Or at least for individual classes?
At the moment, in the XML I can disable/enable all logging with:
<set-property name="gwt.logging.enabled" value="TRUE"/>
This seems to set it globally for all classes in all packages. Is it possible to do this selectively in the XML?
The guide here, http://www.gwtproject.org/doc/latest/DevGuideLogging.html, seems to only cover configuring the root logger, which covers every log in the project. I am probably missing the obvious, but I don't "get" how to apply this selectively.
Given that all my loggers are created like this:
protected static Logger Log = Logger.getLogger("parent.childc");
It seems like selectively turning them on/off should be possible.
Thanks,
Thomas
If all of your logger instances really are initialized as Logger.getLogger("Example"), you can't selectively turn them off and on, since they are all one instance called 'Example'; so I'm assuming that this isn't actually the case.
For the most part, this is normal JUL (java.util.logging): the standard mechanisms that work on the JVM will work here. This is doubly true in dev mode, which actually uses the standard java.util.logging.Logger implementation. So if you are just looking for cleaned-up logging in dev mode, treat it as you would any other logger: from the root instance, find the package or class that you want to disable/enable, and set its log level to where you want it.
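For example, using the logger names from the question (the levels are just illustrative choices):

// silence everything under "parent", but keep one class verbose
Logger.getLogger("parent").setLevel(Level.OFF);
Logger.getLogger("parent.childc").setLevel(Level.FINE);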
In compiled code this behaves a little differently; you may need to experiment a bit to see exactly how, as I don't recall at the moment. However, you can still use LogManager and walk all existing loggers and packages:
LogManager manager = LogManager.getLogManager();
Enumeration<String> allLoggers = manager.getLoggerNames();
while (allLoggers.hasMoreElements()) {
    Logger logger = manager.getLogger(allLoggers.nextElement()); // may be null
    if (logger != null) logger.setLevel(Level.OFF); // or any level you need
}
The advantage of this, as opposed to Logger.getLogger, is that it will only tell you about loggers that already exist, instead of creating them as you go. This way you could have a config file or URL parameter that specifies which details you wish to see logged, and disable the rest.
EDIT: I ran an old sample I had made again, and it appears that the chief difference between dev mode and prod mode is that in prod mode you get a logger per package, with root at the top and the specific named loggers as leaf nodes, while in dev mode all loggers seem to be placed in the root logger directly, with no intermediate packages. It is possible that if you requested a logger for a particular package it would then be created, but I didn't test to find out.
How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also processes the imports. This is sometimes nice, but not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment: the check fails because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
There are two problems with this:
1. How do you keep -c from failing if the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production.
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here); a minimal sketch follows below. This obviously has the problem that the module will NOT fail at real production runtime if the libraries are missing - DoublePlusNotGood in my book.
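That catch-all hook might look like this (a sketch; a real one should probably fall through for modules you do expect to be installed):

# Last-resort @INC hook: serve an empty, always-true source for any
# module that normal lookup could not find, so require never fails.
push @INC, sub {
    my ($coderef, $filename) = @_;
    my $empty = "1;\n";
    open my $fh, '<', \$empty or return;
    return $fh;
};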
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to option A and use a fake stub module, but this time one that declares and (as needed) exports an identifier for every public interface, e.g. do-nothing subs or dummy variables.
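For the Custom::Project::Lib example from the question, such a stub might look like this (a sketch; the real export list is whatever the live module provides):

# Stub for the syntax-check environment only; never deploy this.
package Custom::Project::Lib;
use Exporter 'import';
our @EXPORT_OK = qw(foo bar baz);
sub foo { }
sub bar { }
sub baz { }
1;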
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes even which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.
I'm writing a series of related mod_perl handlers for various login-related functions in Apache, so my Apache config file looks like this (for example):
PerlAccessHandler MyApache::MyAccess
PerlAuthenHandler MyApache::MyAuthen
PerlAuthzHandler MyApache::MyAuthz
Each of the modules (MyAccess, MyAuthen, MyAuthz) defines a
sub handler() {}
which mod_perl calls at the relevant point in the processing of the request.
What I'd like to know is whether there is a way of doing this with one Perl module rather than three (it's just tidier and less work for users to install one module instead of 3)?
Is there some way to define the name of the handler method, perhaps? Or is there a way of detecting, from within the handler() code, which sort of handling I'm supposed to be doing?
It appears from the mod_perl 2.0 docs that you can use the "method" syntax to do what you're wanting (I've not tested this):
PerlAccessHandler MyApache::MyLoginModule->access_handler
PerlAuthenHandler MyApache::MyLoginModule->authen_handler
PerlAuthzHandler MyApache::MyLoginModule->authz_handler
I believe this will cause mod_perl to call each of the named methods in a static way on your MyApache::MyLoginModule class.
You can also create an object to be used when calling a handler method if you want to:
<Perl>
use MyApache::MyLoginModule;
$MyApache::MyLoginModule::access = MyApache::MyLoginModule->new(phase => 'access');
$MyApache::MyLoginModule::authen = MyApache::MyLoginModule->new(phase => 'authen');
$MyApache::MyLoginModule::authz = MyApache::MyLoginModule->new(phase => 'authz');
</Perl>
PerlAccessHandler $MyApache::MyLoginModule::access->handler
PerlAuthenHandler $MyApache::MyLoginModule::authen->handler
PerlAuthzHandler $MyApache::MyLoginModule::authz->handler
This approach would allow you to have a single handler method whose behavior differs based on the properties of the object set up at creation time.
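A sketch of what that single method might look like (the phase attribute, the handle_* helpers, and the dispatch are assumptions building on the configuration above):

package MyApache::MyLoginModule;
use strict;
use warnings;
use Apache2::Const -compile => qw(OK);

sub new {
    my ($class, %args) = @_;
    return bless { phase => $args{phase} }, $class;
}

# The ':method' attribute tells mod_perl to invoke this as a method,
# so $self is the object configured in the <Perl> section.
sub handler : method {
    my ($self, $r) = @_;
    my $method = 'handle_' . $self->{phase};  # e.g. handle_access
    return $self->$method($r);
}

sub handle_access { return Apache2::Const::OK }
sub handle_authen { return Apache2::Const::OK }
sub handle_authz  { return Apache2::Const::OK }

1;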
Disclaimer: It's been a while since I've worked with this part of mod_perl configuration so your results may vary!
It looks like one possibility might be to use the push_handlers() call and set up the handlers in code rather than in the Apache conf file.
See here: http://tinyurl.com/bwdeew
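A hedged sketch of that approach in a mod_perl startup script (the per-phase handler names are assumptions; Apache2::ServerUtil provides the server object to push onto):

# Register all three phases from one module in code,
# instead of three separate handler lines in httpd.conf.
use Apache2::ServerUtil ();
use MyApache::MyLoginModule ();

my $s = Apache2::ServerUtil->server;
$s->push_handlers(PerlAccessHandler => \&MyApache::MyLoginModule::access_handler);
$s->push_handlers(PerlAuthenHandler => \&MyApache::MyLoginModule::authen_handler);
$s->push_handlers(PerlAuthzHandler  => \&MyApache::MyLoginModule::authz_handler);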