Using lstlisting environment within macros?

I tried to define a macro to let me quickly create a listing environment. The definition looked like this:
\def \lstlistingcode[#1] { \begin{lstlisting}#1\end{lstlisting} }
which ends in an Error:
job aborted, no legal \end found
Is it possible to use environments within macros, and if yes - how?

Some environments can be used this way, e.g. alltt; lstlisting, however, can't, because of how it is written (it has to do with the various catcode changes it performs while reading its body).
Using \def as you do is somewhat outdated; \newcommand is preferred:
\newcommand{\allttcode}[1]{\begin{alltt}#1\end{alltt}}
...
\allttcode{test}
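If you do need a wrapper for lstlisting specifically, the listings package provides \lstnewenvironment for exactly this purpose, since a plain macro can never work (lstlisting has to read its body with modified catcodes). A minimal sketch; the environment name lstlistingcode and the \lstset options are just example choices:

\documentclass{article}
\usepackage{listings}

% Define a wrapper environment; the two brace groups are executed
% before and after the listing, respectively.
\lstnewenvironment{lstlistingcode}
  {\lstset{basicstyle=\ttfamily}}
  {}

\begin{document}
\begin{lstlistingcode}
int main(void) { return 0; }
\end{lstlistingcode}
\end{document}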


Pybind11: Release GIL by default

I have a C++ library with many functions exported to Python using PyBind11. I am sure that these functions are thread-safe and would like to maximize multi-threading performance in Python. Normally we have to release the GIL explicitly, like:
PYBIND11_MODULE(MyModule, m) {
    m.def("foo", &foo, py::call_guard<py::gil_scoped_release>());
    m.def("bar", &bar, py::call_guard<py::gil_scoped_release>());
    ...
}
for all functions, including my class methods.
Is there a way to make releasing the GIL the library/application default, so that I don't have to write py::call_guard<...> all the time, and possibly gain some performance, since I am sure there is no hidden function on the code path holding the GIL?
You can write macros to replace def or try to patch pybind11 yourself, but there's no official way I'm aware of.
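A minimal sketch of the macro route, assuming every wrapped function really is thread-safe; the macro name DEF_NOGIL and the example function foo are made up:

#include <pybind11/pybind11.h>

namespace py = pybind11;

// Hypothetical helper: like m.def(), but with the GIL released by default.
#define DEF_NOGIL(mod, name, fn) \
    (mod).def((name), (fn), py::call_guard<py::gil_scoped_release>())

int foo(int x) { return x + 1; }  // stand-in for a thread-safe function

PYBIND11_MODULE(MyModule, m) {
    DEF_NOGIL(m, "foo", &foo);
}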

Using $crate in Rust's procedural macros?

I know what the $crate variable is, but as far as I can tell, it can't be used inside procedural macros. Is there another way to achieve a similar effect?
I have an example that roughly requires me to write something like this using quote and nightly Rust:
quote!(
    struct Foo {
        bar: [SomeTrait; #len]
    }
)
I need to make sure SomeTrait is in scope (#len refers to an integer outside the scope of the snippet).
I am using procedural macros 2.0 on nightly, with quote and syn, because proc-macro-hack didn't work for me. This is the example I'm trying to generalize.
Since Rust 1.34, you can use extern crate self as my_crate, and use my_crate::Foo instead of $crate::Foo.
https://github.com/rust-lang/rust/issues/54647
https://github.com/rust-lang/rust/pull/57407
(Credit: Neptunepink ##rust irc.freenode.net)
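A minimal sketch of the self-alias trick; my_crate stands in for the defining crate's real name and Foo for one of its items:

// In the defining crate's lib.rs (the crate really is named my_crate):
extern crate self as my_crate; // lets the crate refer to itself by name

pub struct Foo;

// A proc macro can now always emit paths like `my_crate::Foo`; they
// resolve both inside my_crate itself (via the alias above) and in
// downstream crates that depend on my_crate.
fn _demo() {
    let _x: my_crate::Foo = my_crate::Foo;
}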
Based on replies from https://github.com/rust-lang/rust/issues/38356#issuecomment-412920528, it looks like there is no way to do this (as of 2018-08), neither to refer to the proc-macro crate nor to refer to any other crate unambiguously.
In Edition 2015 (classic Rust), you can do this (but it's hacky):
- use ::defining_crate::SomeTrait in the macro
- within third-party crates depending on defining_crate, the above works fine
- within defining_crate itself, add a module in the root:
  mod defining_crate { pub use super::*; }
In Edition 2018 even more hacky solutions are required (see this issue), though #55275 may give us a simple workaround.
You can wrap your proc-macro inside a declarative macro, and pass the $crate identifier to your proc-macro for reuse (see this commit for example). It will create a proc_macro::Ident value with the special $crate identifier.
Note that you cannot manually create such an identifier, since $ is normally invalid inside identifiers.
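A minimal sketch of the wrapping trick; the names my_macro and my_proc_macro are hypothetical, and the proc macro is assumed to be re-exported at the defining crate's root:

// Declarative wrapper exported by the defining crate. It forwards the
// caller-side `$crate` path to the proc macro as its first token:
#[macro_export]
macro_rules! my_macro {
    ($($body:tt)*) => {
        $crate::my_proc_macro! { $crate, $($body)* }
    };
}

// Inside my_proc_macro, that first token arrives as a proc_macro::Ident
// spelled `$crate`; splicing it back into the output yields paths such
// as `$crate::SomeTrait` that resolve from the caller's crate.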

How to prefix/suffix identifiers within a macro? [duplicate]

This question already has answers here: Is it possible to declare variables procedurally using Rust macros?
When using a macro that defines a function, is it possible to add a prefix to the function?
macro_rules! my_test {
    ($id:ident, $arg:expr) => {
        #[test]
        fn $id() {
            my_test_impl(stringify!($id), $arg);
        }
    }
}
For example, fn my_test_$id() {
I'm defining tests using an identifier which may begin with numbers, and I would like to use a common prefix.
Currently this is not supported on stable. However, there is a nightly feature called concat_idents:
concat_idents!(my_test_, $id)
See the Rust docs and the tracking issue.
Update: it seems there are no near-term plans to bring this to stable releases; see the issue.
[...] is it possible to add a prefix to the function?
No. Really, really no. Super totally not at all even in the slightest.
I would like to use a common prefix.
Put them all in a mod instead.
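A minimal sketch of the module approach; the macro name my_tests is made up, and my_test_impl is the helper from the question:

// The common prefix becomes a module name instead of part of each
// function name; the tests then show up as my_test::foo, my_test::bar.
macro_rules! my_tests {
    ($($id:ident: $arg:expr),* $(,)?) => {
        mod my_test {
            use super::my_test_impl;
            $(
                #[test]
                fn $id() {
                    my_test_impl(stringify!($id), $arg);
                }
            )*
        }
    };
}

// Usage (note: $arg expressions that name items from the enclosing
// module need super:: paths, since they are pasted inside the mod):
// my_tests! {
//     addition: 2 + 2,
//     negation: 0 - 1,
// }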
As mentioned, you should use submodules for this, but remember that macros can create submodules, submodules can be nested allowing their names to overlap, submodules can provide impls, and the tests submodule is not magic.
I once submitted a pull request that avoids numerous "boilerplate names" by refactoring the code using these tricks, although the #[no_mangle] exports make it harder.

Perl shallow syntax check? i.e. do not check syntax of imports

How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it also checks the syntax of imports. That is sometimes nice, but not great when you work in a code repository and push to a running environment: a function may be defined in the repository but not yet pushed to the running environment, and the check then fails because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
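A small illustration of that parse-time effect, using the strict example:

# Without strictures, an unknown bareword parses as the string "Foo":
my $x = Foo;   # $x is the string "Foo"

use strict;
my $y = Foo;   # compile-time error: Bareword "Foo" not allowed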
There are two problems with this:
1. How do you keep -c from failing if the required modules are missing? There are two solutions:
   A. Add a fake/stub module in production.
   B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has the problem of the module NOT failing at real production runtime if the libraries are missing - DoublePlusNotGood in my book.
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to 1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes even which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.

Lua - capturing variable assignments

Ruby has this very interesting functionality: when you create a class with Class.new and assign it to a constant (uppercase), the language "magically" sets the name of the class so it matches the constant.
# This is ruby code
MyRubyClass = Class.new(SuperClass)
puts MyRubyClass.name # "MyRubyClass"
It seems Ruby "captures" the assignment and sets the name on the anonymous class.
I'd like to know if there's a way to do something similar in Lua.
I've implemented my own class system, but for it to work I've got to specify the same name twice:
-- This is Lua code
MyLuaClass = class('MyLuaClass', SuperClass)
print(MyLuaClass.name) -- MyLuaClass
I'd like to get rid of that 'MyLuaClass' string. Is there any way to do this in Lua?
When assigning to global variables you can set a __newindex metamethod for the table of globals to catch assignments of class variables and do whatever is needed.
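A minimal sketch of that, assuming class tables are tagged so __newindex can recognize them (the is_class field is a made-up convention):

-- class() no longer takes a name; __newindex on _G fills it in.
local function class(superclass)
  return { superclass = superclass, is_class = true }
end

setmetatable(_G, {
  __newindex = function(t, key, value)
    if type(value) == "table" and value.is_class then
      value.name = key      -- capture the variable name as the class name
    end
    rawset(t, key, value)   -- perform the actual assignment
  end,
})

MyLuaClass = class(nil)
print(MyLuaClass.name)      --> MyLuaClass

Note that __newindex only fires for keys not already present in _G, which is exactly the gotcha described in the final answer below.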
You can eliminate one of the mentions of MyLuaClass...
> function class(name,superclass) _G[name] = {superclass=superclass} end
> class('MyLuaClass',33)
> =MyLuaClass
table: 0x10010b900
> =MyLuaClass.superclass
33
>
Not really. Lua is not an object-oriented language. It can behave like one sometimes, but far from always. Classes are not special values in Lua; a table has the value you put in it, no more. The best you can do is set the key in _G from the class function manually, eliminating the need to take the return value.
I guess that if it REALLY, REALLY bothers you, you could use debug.traceback(), get a stack trace, find the calling file, and parse it to find the variable name. Then set that. But that's more than a little overkill.
With respect at least to Lua 5.2: you can capture assignments to A) the global table of a Lua state, as mentioned in a previous reply, and also B) any other Lua object whose __index and __newindex metamethods have been substituted (by replacing the metatable). I can confirm this, as I'm currently using both techniques to hook and redirect assignments made by Lua scripts to external C/C++ resource management.
There is a gotcha with regard to reading the values back, though: the trick is to NOT let the values actually be stored in the Lua state. As soon as they exist there, your hooks will no longer be called, so if you want to go down this path, you need to capture ALL get/set attempts and NEVER store the values in the Lua state.