I know what the $crate variable is, but as far as I can tell, it can't be used inside procedural macros. Is there another way to achieve a similar effect?
I have an example that roughly requires me to write something like this using quote on nightly Rust:
quote!(
    struct Foo {
        bar: [SomeTrait; #len]
    }
)
I need to make sure SomeTrait is in scope (#len refers to an integer defined outside the snippet).
I am using procedural macros 2.0 on nightly with quote and syn, because proc-macro-hack didn't work for me. This is the example I'm trying to generalize.
Since Rust 1.34, you can write extern crate self as my_crate, and then use my_crate::Foo instead of $crate::Foo (see the sketch below).
https://github.com/rust-lang/rust/issues/54647
https://github.com/rust-lang/rust/pull/57407
(Credit: Neptunepink ##rust irc.freenode.net)
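A minimal sketch of how that looks, assuming the defining crate is literally named my_crate and the proc-macro emits paths through that name (the names here are placeholders):

// lib.rs of the crate named my_crate. Downstream crates can already resolve
// my_crate::SomeTrait through their dependency; this alias (Rust 1.34+) makes
// the same path work inside my_crate itself, playing the role $crate plays
// in macro_rules!.
extern crate self as my_crate;

pub trait SomeTrait {}

// The proc-macro then always emits paths through the crate name, e.g.:
//     quote!( impl my_crate::SomeTrait for Foo { /* ... */ } )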
Based on replies from https://github.com/rust-lang/rust/issues/38356#issuecomment-412920528, it looks like there is no way to do this (as of 2018-08), neither to refer to the proc-macro crate nor to refer to any other crate unambiguously.
In Edition 2015 (classic Rust), you can do this (but it's hacky):
use ::defining_crate::SomeTrait in the macro
within third-party crates depending on defining_crate, the above works fine
within defining_crate itself, add a module in the root:
mod defining_crate { pub use super::*; }
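Put together, the Edition 2015 workaround looks roughly like this (defining_crate and SomeTrait are the placeholder names from above):

// lib.rs of defining_crate (Edition 2015): a root module that shadows the
// crate's own name, so the absolute path ::defining_crate::SomeTrait
// resolves both inside this crate and in third-party crates depending on it.
pub trait SomeTrait {}

mod defining_crate {
    pub use super::*;
}

// The macro-generated code then always starts from the crate root:
//     use ::defining_crate::SomeTrait;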
In Edition 2018 even more hacky solutions are required (see this issue), though #55275 may give us a simple workaround.
You can wrap your proc-macro inside a declarative macro, and pass the $crate identifier to your proc-macro for reuse (see this commit for example). It will create a proc_macro::Ident value with the special $crate identifier.
Note that you cannot manually create such an identifier, since $ is normally invalid inside identifiers.
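A rough sketch of that wrapper pattern (the macro names here are made up; the real mechanics are in the linked commit):

// Declarative front-end exported by the defining crate. It forwards $crate
// as the first token to the real proc-macro (assumed to be re-exported from
// this crate as __make_foo_impl), which receives it as an Ident spelled
// `$crate` and splices it in front of every path it emits.
#[macro_export]
macro_rules! make_foo {
    ($($input:tt)*) => {
        $crate::__make_foo_impl! { $crate $($input)* }
    };
}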
When using a macro that defines a function, is it possible to add a prefix to the function name?
macro_rules! my_test {
    ($id:ident, $arg:expr) => {
        #[test]
        fn $id() {
            my_test_impl(stringify!($id), $arg);
        }
    };
}
For example, fn my_test_$id() {
I'm defining tests using identifiers which may begin with numbers, and I would like to use a common prefix.
Currently this is not supported in stable Rust.
However, there is a nightly feature called concat_idents:
concat_idents!(my_test_, $id)
See
Rust docs
Issue
Update: it seems there are no near-term plans to add this to stable releases; see the issue.
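For reference, a minimal nightly sketch of that usage; note that concat_idents! only works in expression position, so it can call a prefixed name but cannot be used to define one:

// Nightly only: requires the unstable concat_idents feature.
#![feature(concat_idents)]

fn my_test_foo() {
    println!("ran my_test_foo");
}

fn main() {
    // Builds the identifier my_test_foo and calls it; this works in
    // expression position, not where a new item name is being declared.
    let f = concat_idents!(my_test_, foo);
    f();
}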
[...] is it possible to add a prefix to the function?
No. Really, really no. Super totally not at all even in the slightest.
I would like to use a common prefix.
Put them all in a mod instead.
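A sketch of that suggestion: group the generated tests under one module instead of prefixing each name (my_test_impl here is a stand-in for the helper from the question):

#[cfg(test)]
mod my_tests {
    // Stand-in for the question's my_test_impl helper.
    fn my_test_impl(name: &str, arg: u32) {
        assert!(!name.is_empty());
        let _ = arg;
    }

    macro_rules! my_test {
        ($id:ident, $arg:expr) => {
            #[test]
            fn $id() {
                my_test_impl(stringify!($id), $arg);
            }
        };
    }

    my_test!(case_one, 1);
    my_test!(case_two, 2);
}

The generated tests then show up as my_tests::case_one and my_tests::case_two, which gives you the common prefix without touching the identifiers themselves.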
As mentioned, you should use submodules for this, but remember that macros can create submodules, submodules can be nested allowing their names to overlap, submodules can provide impls, and the tests submodule is not magic.
I once submitted a pull request that avoids numerous "boilerplate names" by refactoring the code using these tricks, although the #[no_mangle] exports make it harder.
I implement MyClass, which contains the method method(), and I store an instance in $_ENV['key'] in test.php. Also in test.php, code completion works when I type $_ENV['key']->.
In test2.php I include test.php and the code completion does not work any more for $_ENV['key']->.
Does anyone know how to enable this in PhpStorm?
AFAIK type tracking for arrays works within the same file only.
You can bypass it with an intermediate variable (yes, it's not the nicest solution) and a small PHPDoc comment, like this:
/** @var MyClass $myVar */
$myVar = $_ENV['key'];
$myVar->
P.S.
In general, I'd suggest not using global arrays this way (or even not using global variables at all -- only very basic stuff during bootstrap, if possible). Instead (based on your code), I may suggest using a static class (as one of the alternatives, sketched below) with a dedicated field, where you can easily give a type hint (via PHPDoc) to the class field -- this way the IDE will always know what type it is. Current PHP versions (5.5 and especially 5.6) work with objects nearly as fast as with arrays, and objects can even use less memory.
Obviously, such a suggestion does not really apply if this code is not yours.
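For what it's worth, a rough sketch of that alternative (Registry and $instance are made-up names):

<?php
// Hypothetical static holder used instead of $_ENV['key'].
class Registry
{
    /** @var MyClass */
    public static $instance;
}

// During bootstrap:
Registry::$instance = new MyClass();

// Anywhere else: the IDE knows the type from the @var annotation above.
Registry::$instance->method();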
I wanted to write something like ((IStringer)object).ToString() (in C#) in Erlang. After some studying I've learnt that Elixir has something called Protocols, which pretty much resembles the C# feature (in an inside-out manner). Then I came up with this idea/code in Erlang, which looks nice enough to me:
?stringer(my_val):to_string().
And it either returns the expected value or the not_implemented atom!
But two questions:
1 - Why does nobody use this or promote things based on stateful modules in Erlang? (OTP aside - from talking to some Erlangers, they did not know that OTP is actually built around this! So there may really be a need to change how Erlang is taught and promoted. It's possible that I am confused.)
2 - Why do I get this warning? That call actually never fails.
The warning:
stringer.erl:18: Warning: invalid module and/or function name; this call will always fail
stringer.erl:19: Warning: invalid module and/or function name; this call will always fail
stringer.erl:20: Warning: invalid module and/or function name; this call will always fail
The code:
-module(stringer).
-export([to_string/1, sample/0]).

-define(stringer(V), {stringer, V}).

to_string({stringer, V}) when is_list(V) ->
    to_string(V, nop);
to_string({stringer, V}) when is_atom(V) ->
    to_string(V, nop);
to_string({stringer, _V}) ->
    not_implemented.

to_string(V, _Nop) ->
    Buffer = io_lib:format("~p", [V]),
    lists:flatten(Buffer).

sample() ->
    io:format("~p~n", [?stringer([1,2]):to_string()]),
    io:format("~p~n", [?stringer(cute_atom):to_string()]),
    io:format("~p~n", [?stringer(13):to_string()]).
And the output is:
"[1,2]"
"cute_atom"
not_implemented
I am doing this on Erlang R16 B2 (V5.10.3) 32 bit on Windows 8 64 bit.
The warning you see is an Erlang bug. If Erlang sees you are invoking a function on a literal tuple, it shows the warning. I have seen this while working with Elixir; I silenced it in Elixir's compiler but forgot to report it to the Erlang team as a bug. Sorry.
The stateful module thing is actually avoided in Erlang by the majority of developers. They were added to support a feature called "parameterized modules", which has since been removed, but the underlying dispatching mechanism still exists. If you search the Erlang Questions mailing list you can find plenty of discussion on the topic. Note that protocols in Elixir are not implemented like that though.
In fact, your implementation does not seem to add anything compared to a regular function. For example, you could have simply written:
to_string(V) when is_list(V); is_atom(V) ->
    Buffer = io_lib:format("~p", [V]),
    lists:flatten(Buffer);
to_string(_V) ->
    not_implemented.
and called the function directly. Your implementation is simply using the classic ad-hoc polymorphism provided by Erlang at the end of the day. The limitation of this approach is that, since dispatch is hardcoded to ?stringer, the only way to extend the to_string/1 behaviour to work with a new data type is to reimplement and replace the whole stringer module.
Here is an example of an issue that helps you ponder about this: if application A defines a "protocol" named stringer, how can applications B and C extend this protocol to their own data types and all be used by application D without loss of functionality?
In very simple words, the way protocols work in Elixir is by making the stringer module an intermediate dispatch module. So the stringer module actually works like this:
to_string(V) when is_list(V) ->
    string_list:to_string(V);
to_string(A) when is_atom(A) ->
    string_atom:to_string(A);
%% ...
to_string(A) when is_tuple(A) ->
    string_tuple:to_string(A).
and imagine that code is wrapped around something that checks if the module exists and fails accordingly if not. Of course, all of that is defined automatically for you by simply defining the protocol. There is also a mechanism (called consolidation) to compile protocols down to a fast dispatch mechanism on releases.
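A rough Erlang sketch of that intermediate dispatch idea (the string_* implementation modules and the fallback clause are illustrative, not how Elixir actually generates the code):

%% Illustrative dispatcher: route to a per-type implementation module and
%% fall back to not_implemented when no such module can be loaded.
-module(stringer_dispatch).
-export([to_string/1]).

to_string(V) ->
    Impl = impl_for(V),
    case code:ensure_loaded(Impl) of
        {module, Impl} -> Impl:to_string(V);
        {error, _}     -> not_implemented
    end.

impl_for(V) when is_list(V)  -> string_list;
impl_for(V) when is_atom(V)  -> string_atom;
impl_for(V) when is_tuple(V) -> string_tuple;
impl_for(_)                  -> string_any.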
Ruby has this very interesting functionality in which when you create a class with 'Class.new' and assign it to a constant (uppercase), the language "magically" sets up the name of the class so it matches the constant.
# This is ruby code
MyRubyClass = Class.new(SuperClass)
puts MyRubyClass.name # "MyRubyClass"
It seems Ruby "captures" the assignment and sets the name on the anonymous class.
I'd like to know if there's a way to do something similar in Lua.
I've implemented my own class system, but for it to work I've got to specify the same name twice:
-- This is Lua code
MyLuaClass = class('MyLuaClass', SuperClass)
print(MyLuaClass.name) -- MyLuaClass
I'd like to get rid of that 'MyLuaClass' string. Is there any way to do this in Lua?
When assigning to global variables you can set a __newindex metamethod for the table of globals to catch assignments of class variables and do whatever is needed.
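A rough Lua sketch of that idea (the class function here is a stand-in for whatever your class system provides):

-- Catch global assignments and, when the value looks like an unnamed class,
-- record the variable name on it.
setmetatable(_G, {
  __newindex = function(t, key, value)
    if type(value) == "table" and value.__is_class and value.name == nil then
      value.name = key
    end
    rawset(t, key, value)
  end
})

-- Hypothetical class constructor that no longer needs a name argument.
local function class(superclass)
  return { __is_class = true, superclass = superclass }
end

SuperClass = class(nil)
MyLuaClass = class(SuperClass)
print(MyLuaClass.name) --> MyLuaClass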
You can eliminate one of the mentions of MyLuaClass...
> function class(name,superclass) _G[name] = {superclass=superclass} end
> class('MyLuaClass',33)
> =MyLuaClass
table: 0x10010b900
> =MyLuaClass.superclass
33
>
Not really. Lua is not an object-oriented language. It can sometimes behave like one, but far from always. Classes are not special values in Lua. A table has the value you put in it, no more. The best you can do is manually set the key in _G from the class function and eliminate having to take the return value.
I guess that if it REALLY, REALLY bothers you, you could use debug.traceback(), get a stack trace, find the calling file, and parse it to find the variable name. Then set that. But that's more than a little overkill.
With respect at least to Lua 5.2: you can capture assignments to A) the global table of a Lua state, as mentioned in a previous reply, and also B) any other Lua object whose __index and __newindex metamethods have been substituted (by replacing the metatable). I can confirm this, as I'm currently using both techniques to hook and redirect assignments made by Lua scripts to external C/C++ resource management.
There is a gotcha with regard to reading them back, though: the trick is to NOT let the values actually be set in the Lua state.
As soon as they exist there, your hooks will fail to be called, so if you want to go down this path, you need to capture ALL get/set attempts, and NEVER store the values in a Lua State.
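A minimal Lua sketch of that pattern (the backing table stands in for the external C/C++ resource manager):

-- A proxy whose metatable intercepts every read and write. The real data
-- lives in backing, never in the proxy itself, so the hooks keep firing.
local backing = {}
local proxy = setmetatable({}, {
  __index = function(_, key)
    -- read hook: this is where you would call into C/C++
    return backing[key]
  end,
  __newindex = function(_, key, value)
    -- write hook: likewise
    backing[key] = value
  end,
})

proxy.answer = 42     -- goes through __newindex
print(proxy.answer)   -- goes through __index --> 42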
I'm embedding Python into a C++ application. I plan to use PyEval_EvalCode to execute Python code, but instead of providing the locals and globals as dictionaries, I'm looking for a way to have my program resolve symbol references dynamically.
For example, let's say my Python code consists of the following expression:
bear + lion * bunny
Instead of placing bear, lion and bunny and their associated objects into the dictionaries that I'm passing to PyEval_EvalCode, I'd like the Python interpreter to call back my program and request these named objects.
Is there a way to accomplish this?
By providing the locals and globals dictionaries, you are providing the environment in which the evaled code is executed. That effectively provides you with an interface to map names to objects defined in the C++ app.
Can you clarify why you do not want to use the dictionaries?
Another thing you could do is process the string in C++ and do string substitution before you eval the code....
Possibly. I've never tried this but in theory you might be able to implement a small extension class in C++ that overrides the __getattr__ method (probably via the tp_as_mapping or tp_getattro function pointers of PyTypeObject). Pass an instance of this as locals and/or globals to PyEval_EvalCode and your C++ method should be asked to resolve your lions, tigers, & bears for you.
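As a pure-Python illustration of that mechanism (eval() standing in for PyEval_EvalCode, and the hard-coded lookup standing in for the callback into the host application):

# A mapping passed as locals gets asked for the names the evaluated code
# references; __missing__ is only called for names it does not already hold.
class ResolvingNamespace(dict):
    def __missing__(self, name):
        # In the embedding scenario, call back into the C++ host here.
        print(f"host asked to resolve {name!r}")
        return {"bear": 1, "lion": 2, "bunny": 3}[name]

namespace = ResolvingNamespace()
print(eval("bear + lion * bunny", {}, namespace))  # prints 7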