Why is Elixir Logger composed of Macros?

Had to use Logger in one of my applications today, and remembered that I needed to call require Logger first. So I decided to look at the Logger source code to see why debug, info, error, etc. are macros and not simple functions.
From the code, the macros for debug, info, etc. (and even their underlying functions) look very simple. Wasn't it possible to simply export them as functions instead of macros?
From the Logger code:
defmacro log(level, chardata_or_fn, metadata \\ []) do
  macro_log(level, chardata_or_fn, metadata, __CALLER__)
end

defp macro_log(level, data, metadata, caller) do
  %{module: module, function: fun, file: file, line: line} = caller

  caller =
    compile_time_application ++
      [module: module, function: form_fa(fun), file: file, line: line]

  quote do
    Logger.bare_log(unquote(level), unquote(data), unquote(caller) ++ unquote(metadata))
  end
end
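For context, here's roughly what a call site looks like. This is a minimal sketch (MyApp.Worker and the message are made up, not taken from the Logger docs):

defmodule MyApp.Worker do
  require Logger  # needed because Logger.info is a macro, not a plain function

  def run do
    # At compile time this expands into a Logger.bare_log call that already
    # carries module/function/file/line metadata captured from __CALLER__.
    Logger.info("worker started")
  end
end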
From what I can see, it would've been simpler to just make them functions instead of macros.

It's probably because of the __CALLER__ special form, which provides information about the calling context (including the file and line), but is only available inside macros.
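A minimal sketch of what __CALLER__ gives you inside a macro (the Where module is made up for illustration):

defmodule Where do
  # __CALLER__ is a Macro.Env struct and only exists inside defmacro bodies.
  defmacro here do
    %{file: file, line: line} = __CALLER__

    quote do
      {unquote(file), unquote(line)}
    end
  end
end

# After `require Where`, calling Where.here() returns the caller's {file, line}.

A plain function has no access to the caller's compile-time environment, so it couldn't capture this information automatically.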

Turns out, another reason is so that the logging calls can be stripped out at compile time for the log levels that the application doesn't want.
From the ElixirForum discussion:
It is to allow the code to just entirely not exist for logging levels that you do not want, so the code is never even called and incurs no speed penalty, thus you only pay for the logging levels that you are actively logging.
and
It's so the entire Logger call can be stripped out of the code at compile time if you use :compile_time_purge_level
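In practice that means a config roughly like the following (a sketch using the older :compile_time_purge_level option mentioned above; newer Elixir versions replaced it with :compile_time_purge_matching):

# config/config.exs
use Mix.Config

config :logger,
  compile_time_purge_level: :info

With this, Logger.debug calls in the application compile away entirely: the debug message is never built and no runtime call is made, which is exactly what a plain function could not do.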

Related

How to avoid unused functions being wiped out by the optimizer

While compiling an Xcode Swift project for macOS, unused functions are removed from the binary (removed by the optimizer, I guess). Is there a way to tell the compiler not to remove unused functions, perhaps with a compiler option (--force-attribute?), so that even with optimization enabled those functions remain in the binary?
I know that if a global function is declared as public (public func test()) then it's not removed even if unused (since it can be used by other modules), but I can't use public since that would export the symbol for that function.
Any suggestion?
If this is indeed removed by the optimiser, then the answer is two-fold.
The optimiser removes only things that it can prove are safely removable. So to make it not remove something, you remove something that the optimiser uses to prove removability.
For example, you can change the visibility of a function in the .bc file by running a pass that makes the functions externally callable. When a function is private to the .bc file (also called module) and not used, then the compiler can prove that nothing will ever call it. If it's visible beyond the .bc file, then the compiler must assume that something can come along later and call it, so the function has to be left alive.
This is really the generic answer to how to prevent an optimisation: Prevent the compiler from inferring that the optimisation is safe.
Writing and invoking a suitable pass is left as an exercise for the reader. Writing should be perhaps 20 lines of code, invoking… might be simple, or not, it depends on your setting. Quite often you can add a call to opt to your build system.
As I discovered, the solution here is to use a magic compiler flag -enable-private-imports (described here: "...Allows this module's internal and private API to be accessed"), which basically adds the function name to the @llvm.used list that you can see in the bitcode. The purpose of the list is:
If a symbol appears in the @llvm.used list, then the compiler, assembler, and linker are required to treat the symbol as if there is a reference to the symbol that it cannot see (which is why they have to be named)
(cit.) As described here.
This solves my original problem: the private function, even though it is not used anywhere and is not public, is no longer stripped out by the optimiser during compilation.

Turn off Scala lint warnings at a less-than-global scope?

I'd like to be able to turn off a particular compiler warning, but only for a single file in my project. Is this possible?
The context is that I have a single source file that makes calls to an external macro library that produces adapted-args warnings. I found that I can eliminate these warnings by changing my build.sbt file:
scalacOptions ++= Seq("-Xlint:-adapted-args,_" /*, ... */)
However, this turns off the warning globally, and I only want it off for the single file that raises them.
I haven't had any luck searching for any of the following possible solutions I thought might exist:
Specifying separate compilation options for different files in my project in my build.sbt file
Providing some pragma-like comment in my source file to change the warnings generated by the compiler, similar to the special scalastyle:on/off comments recognized by Scalastyle
Some annotation for smaller regions of code, like #unchecked
So, is there any way to have different linting options in effect for different files, or even for limited regions of code?
The expectation was that folks would write a custom Reporter that would filter out undesirable warnings.
It's easy to write one that filters by file name, perhaps given a whitelist.
The error/warn API supplies the textual Position. It would also be easy to parse the position.lineContent for a magic comment token like IGNORE or SUPPRESS, which is not as convenient as a SuppressWarnings annotation but is easy to implement.
The compiler asks the reporter if there were errors.
The custom reporter is specified with -Xreporter myclass; it wouldn't surprise me if someone has written an sbt plugin for this.

Does Scala have "macro" definitions ready to use -- like __LINE__, __FILE__?

When you get a stack trace from exception you get files and line numbers. I need something like this for my reporting, so I could get to the cause very fast.
I am looking in particular for the __LINE__ and __FILE__ macros. Is there anything like this in Scala?
There is no such macro in either Scala or Java. Of course the line number information is stored in the bytecode (also for debugging purposes), but there is no API to obtain it.
Stack traces with class names and line numbers are generated via native Throwable.fillInStackTrace(). Logging libraries might also use Thread.getStackTrace().
In both cases it boils down to parsing the stack trace and finding our current location. Note that generating a stack trace is time-consuming and should be avoided.
The sourcecode library is based on macros and provides metadata at compile-time.
Example (taken from their page):
object Main extends App {
  def log(message: String)(implicit line: sourcecode.Line, file: sourcecode.File) =
    println(s"${file.value}:${line.value} $message")

  log("foo")
}
This will print:
/Users/jhoffmann/Development/sourcecode/src/main/scala/Main.scala:5 foo
You can use the implicits anywhere in your code.
There is a project to provide macro capabilities in Scala. Perhaps you could approach the project team to discuss what you need.

Perl shallow syntax check? i.e. do not check syntax of imports

How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it checks the syntax of imports. This is sometimes nice but not great when you work in a code repository, push to a running environment, and have a function defined in the repository but not yet pushed to the running environment. The check fails because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
There are two problems with this:
1. How to keep -c from failing if the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production.
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has a problem of having the module NOT fail in real production runtime if the libraries are missing - DoublePlusNotGood in my book.
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to #1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.

Logging within utility classes

I want to adopt logging within several utility classes, e.g. DBI. What is the best practice for doing this with Log::Log4perl?
I think it is OK to subclass DBI (say, MyDBI) and override some methods there to make them do the logging. But there's a problem with categories. If you create a logger with
Log::Log4perl->get_logger(ref $self || $self)
then all log entries belong to MyDBI and it would be hard to filter them. So it seems better to me to pass a logger to MyDBI from the calling module (say, MyModule), so that the category would be semantically right. The first question: is this OK in general? I mean, are there any hidden pitfalls with such an approach?
The second question: how to pass the logger to MyDBI? One idea is to declare a global variable, e.g. $MyDBI::logger, and set it in the calling method:
local $MyDBI::logger = Log::Log4perl->get_logger(ref $self || $self);
There's a traditional dislike for global variables. Can you think of a better way?
EDIT: Of course, the best code is no code. caller would suffice, if it took inheritance into account.
The third question: is it possible to log to both categories, MyDBI and MyModule, with Log::Log4perl, if they are hierarchically unrelated?
I would strongly encourage you to log independently of the caller, in a separate logger per function or per module, so that you can run your module independently of the log4perl setup used in your caller.
Each module will create its own logger with Log::Log4perl->get_logger("module name"). If the caller does not create any appender, the program will simply not log anything, and the log4perl calls in the modules will be ignored from a functional standpoint. Log4perl implements a singleton pattern for creating a logger, which is similar to a global variable.
Your logging should be as fine-grained as possible, and as a rule of thumb I log, at debug level, every input parameter and every result of a function/method. If really necessary, you can also use the stack trace to find out the caller that led to the error condition. Passing the logger in as a parameter just adds complexity.
The following recipes might give you some more ideas about the flexibility on the configuration side of log4perl: Log4perl Recipes. The whole idea for me is to keep the code unchanged and to change the logging configuration depending on my actual logging/bug-tracing requirements (which might change in the future). Keeping the code unchanged is even more important with modules, as you want to avoid retesting all the calling programs.
To answer your questions in brief.
1.) Each module should have its own logger.
2.) Thus, do not add the loggers to the interface.
3.) Log4perl will log on all levels, depending on your appender configuration. This way you control what you will and will not see - the normal level will usually be INFO, with specific modules set to DEBUG. In bad cases the pattern layout will allow you to add the stack trace to the logging purely through configuration.