What does @pragma("vm:prefer-inline") mean in Flutter? - flutter

I have seen a lot of code like the following in frameworks.
What does the @pragma annotation do?
@pragma("vm:entry-point", "call")
@pragma("vm:entry-point", "set")
@pragma("vm:entry-point", "get")
@pragma("vm:prefer-inline")
......

For the general meaning of @pragma, see NKSM's answer above.
I will cite two important specific use cases:
@pragma('vm:prefer-inline') to inline functions
(I mean compile-time inlining, which has nothing to do with 'inline function' used as a synonym for closure/lambda, as it sometimes is)
This annotation is similar to the inline keyword in Kotlin.
@pragma("vm:entry-point") to mark a function (or other entities, such as classes) to indicate to the compiler that it will be used from native code. Without this annotation, the Dart compiler could strip out unused functions, inline them, shrink names, etc., and the native code would fail to call them.
A very good doc (written much more clearly than usual) about entry-point is https://github.com/dart-lang/sdk/blob/master/runtime/docs/compiler/aot/entry_point_pragma.md
If you want to do a first experiment with inlining in Dart, I suggest compiling with dart2js, which outputs fairly readable JavaScript code (at least until you increase the shrinking level beyond the default one; and readability is obviously decent only in minimal programs). However, inlining in dart2js requires a slightly different annotation: @pragma('dart2js:tryInline').
An interesting discussion about inlining in Dart can be found in
dart-lang issue #40522 - Annotation for method inlining
In general, I suggest mraleph's blog. His latest article is on benchmarking in Dart, and it also shows the use of @pragma('vm:entry-point'). mraleph is a Dart SDK developer (he is also an author of the official doc cited above), and his blog is a very precious source on Dart VM related topics.

Flutter uses the Dart programming language.
The pragma class is a hint to tools.
Tools that work with Dart programs may accept hints to guide their behavior as pragma annotations on declarations. Each tool decides which hints it accepts, what they mean, and whether and how they apply to sub-parts of the annotated entity.
Tools that recognize pragma hints should pick a pragma prefix to identify the tool. They should recognize any hint with a name starting with their prefix followed by : as if it was intended for that tool. A hint with a prefix for another tool should be ignored (unless compatibility with that other tool is a goal).
A tool may recognize unprefixed names as well, if they would recognize that name with their own prefix in front.
If the hint can be parameterized, an extra options object can be added as well.
For example:
@pragma('Tool:pragma-name', [param1, param2, ...])
class Foo { }
@pragma('OtherTool:other-pragma')
void foo() { }
Here class Foo is annotated with a Tool specific pragma 'pragma-name'
and function foo is annotated with a pragma 'other-pragma' specific to
OtherTool.
The above can be found on dart.dev documentation.
The @pragma('vm:entry-point') annotation is about tree shaking. In AOT (ahead-of-time) compilation, anything that cannot be reached from the application's main entry point is discarded as dead code. The injection logic of AOP code is non-invasive, so it is obviously not called from the main entry; this annotation is therefore required to tell the compiler not to discard that logic.

Why is "using namespace std;" considered bad practice? - c++

I have heard using namespace std; is bad practice, and that I should use std::cout and std::cin directly instead.
Why is this? Does it risk declaring variables that share the same name as something in the std namespace?
Consider two libraries called Foo and Bar:
using namespace foo;
using namespace bar;
Everything works fine, and you can call Blah() from Foo and Quux() from Bar without problems. But one day you upgrade to a new version of Foo 2.0, which now offers a function called Quux(). Now you've got a conflict: Both Foo 2.0 and Bar import Quux() into your global namespace. This is going to take some effort to fix, especially if the function parameters happen to match.
If you had used foo::Blah() and bar::Quux(), then the introduction of foo::Quux() would have been a non-event.
It can get worse than what Greg wrote!
Library Foo 2.0 could introduce a function, Quux(), that is an unambiguously better match for some of your calls to Quux() than the bar::Quux() your code called for years. Then your code still compiles, but it silently calls the wrong function and does god-knows-what. That's about as bad as things can get.
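A minimal sketch of that silent hijack (all library and function names are hypothetical):
namespace bar { void Quux(long) { /* what your code has called for years */ } }
using namespace bar;

void before() { Quux(42); }   // converts 42 to long; bar::Quux is the only candidate

// Foo 2.0 later adds an exact match for int:
namespace foo { void Quux(int) { /* does something else entirely */ } }
using namespace foo;

void after() { Quux(42); }    // now silently resolves to foo::Quux -- no error, no warning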
Keep in mind that the std namespace has tons of identifiers, many of which are very common ones (think list, sort, string, iterator, etc.) which are very likely to appear in other code, too.
If you consider this unlikely: There was a question asked here on Stack Overflow where pretty much exactly this happened (wrong function called due to omitted std:: prefix) about half a year after I gave this answer. Here is another, more recent example of such a question.
So this is a real problem.
Here's one more data point: Many, many years ago, I also used to find it annoying having to prefix everything from the standard library with std::. Then I worked in a project where it was decided at the start that both using directives and declarations are banned except for function scopes. Guess what? It took most of us very few weeks to get used to writing the prefix, and after a few more weeks most of us even agreed that it actually made the code more readable. There's a reason for that: Whether you like shorter or longer prose is subjective, but the prefixes objectively add clarity to the code. Not only the compiler, but you, too, find it easier to see which identifier is referred to.
In a decade, that project grew to have several million lines of code. Since these discussions come up again and again, I once was curious how often the (allowed) function-scope using actually was used in the project. I grep'd the sources for it and only found one or two dozen places where it was used. To me this indicates that, once tried, developers don't find std:: painful enough to employ using directives even once every 100 kLoC even where it was allowed to be used.
Bottom line: Explicitly prefixing everything doesn't do any harm, takes very little getting used to, and has objective advantages. In particular, it makes the code easier to interpret by the compiler and by human readers — and that should probably be the main goal when writing code.
The problem with putting using namespace in the header files of your classes is that it forces anyone who wants to use your classes (by including your header files) to also be 'using' (i.e. seeing everything in) those other namespaces.
However, you may feel free to put a using statement in your (private) *.cpp files.
Beware that some people disagree with my saying "feel free" like this -- because although a using statement in a cpp file is better than in a header (because it doesn't affect people who include your header file), they think it's still not good (because depending on the code it could make the implementation of the class more difficult to maintain). This C++ Super-FAQ entry says,
The using-directive exists for legacy C++ code and to ease the transition to namespaces, but you probably shouldn’t use it on a regular basis, at least not in your new C++ code.
The FAQ suggests two alternatives:
A using-declaration:
using std::cout; // a using-declaration lets you use cout without qualification
cout << "Values:";
Just typing std::
std::cout << "Values:";
I recently ran into a complaint about Visual Studio 2010. It turned out that pretty much all the source files had these two lines:
using namespace std;
using namespace boost;
A lot of Boost features are going into the C++0x standard, and Visual Studio 2010 has a lot of C++0x features, so suddenly these programs were not compiling.
Therefore, avoiding using namespace X; is a form of future-proofing, a way of making sure a change to the libraries and/or header files in use is not going to break a program.
Short version: don't use global using declarations or directives in header files. Feel free to use them in implementation files. Here's what Herb Sutter and Andrei Alexandrescu have to say about this issue in C++ Coding Standards (bolding for emphasis is mine):
Summary
Namespace usings are for your convenience, not for you to inflict on others: Never write a using declaration or a using directive before an #include directive.
Corollary: In header files, don’t write namespace-level using directives or using declarations; instead, explicitly namespace-qualify all names. (The second rule follows from the first, because headers can never know what other header #includes might appear after them.)
Discussion
In short: You can and should use namespace using declarations and directives liberally in your implementation files after #include directives and feel good about it. Despite repeated assertions to the contrary, namespace using declarations and directives are not evil and they do not defeat the purpose of namespaces. Rather, they are what make namespaces usable.
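A minimal sketch of the hazard the first rule guards against (logger.h and its contents are hypothetical):
// logger.h -- relies, by accident, on someone else's using-directive
inline void log_line(string msg);   // unqualified 'string'

// a.cpp
#include <string>
using namespace std;   // written before the next #include
#include "logger.h"    // happens to compile here

// b.cpp
#include "logger.h"    // breaks here: 'string' does not name a type
The header author can never know which using-directives precede the #include, which is exactly why the corollary tells headers to fully qualify everything.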
One shouldn't use the using directive at the global scope, especially in headers. However, there are situations where it is appropriate even in a header file:
template <typename FloatType> inline
FloatType compute_something(FloatType x)
{
    using namespace std; // No problem since scope is limited
    return exp(x) * (sin(x) - cos(x * 2) + sin(x * 3) - cos(x * 4));
}
This is better than explicit qualification (std::sin, std::cos...), because it is shorter and has the ability to work with user defined floating point types (via argument-dependent lookup (ADL)).
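To see the ADL half of that claim in action, here is a sketch with a hypothetical fixed-point type num::Fix that ships its own exp/sin/cos; compute_something then picks them up automatically, while built-in floating-point types keep using std::exp and friends:
#include <cmath>

namespace num {
    // Hypothetical fixed-point type carrying its value scaled by 1000.
    struct Fix {
        long long v;
        double to_d() const { return v / 1000.0; }
        static Fix from_d(double d) { return Fix{(long long)(d * 1000)}; }
    };
    inline Fix exp(Fix x) { return Fix::from_d(std::exp(x.to_d())); }
    inline Fix sin(Fix x) { return Fix::from_d(std::sin(x.to_d())); }
    inline Fix cos(Fix x) { return Fix::from_d(std::cos(x.to_d())); }
    inline Fix operator*(Fix a, Fix b) { return Fix::from_d(a.to_d() * b.to_d()); }
    inline Fix operator+(Fix a, Fix b) { return Fix{a.v + b.v}; }
    inline Fix operator-(Fix a, Fix b) { return Fix{a.v - b.v}; }
    inline Fix operator*(Fix a, int k) { return Fix{a.v * k}; }
}

// compute_something(num::Fix{1000}) resolves exp/sin/cos to num::* via ADL;
// compute_something(0.5) keeps resolving them to std::exp, std::sin, std::cos.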
Do not use it globally
It is considered "bad" only when used globally. Because:
You clutter the namespace you are programming in.
Readers will have difficulty seeing where a particular identifier comes from, when you use many using namespace xyz;.
Whatever is true for other readers of your source code is even more true for the most frequent reader of it: yourself. Come back in a year or two and take a look...
If you only talk about using namespace std; you might not be aware of all the stuff you grab -- and when you add another #include or move to a new C++ revision you might get name conflicts you were not aware of.
You may use it locally
Go ahead and use it locally (almost) freely. This, of course, saves you from repeating std:: -- and repetition is also bad.
An idiom for using it locally
In C++03 there was an idiom -- boilerplate code -- for implementing a swap function for your classes. It was suggested that you actually use a local using namespace std; -- or at least using std::swap;:
class Thing {
    int value_;
    Child child_;
public:
    // ...
    friend void swap(Thing &a, Thing &b);
};

void swap(Thing &a, Thing &b) {
    using namespace std; // make `std::swap` available
    // swap all members
    swap(a.value_, b.value_); // `std::swap(int, int)`
    swap(a.child_, b.child_); // `swap(Child&, Child&)` or `std::swap(...)`
}
This does the following magic:
The compiler will choose the std::swap for value_, i.e. void std::swap(int, int).
If you have an overload void swap(Child&, Child&) implemented the compiler will choose it.
If you do not have that overload the compiler will use void std::swap(Child&,Child&) and try its best swapping these.
Note that even with C++11 this pattern is still relevant: std::swap itself was not changed to look for user-provided overloads, so generic code should keep writing using std::swap; swap(a, b); to let ADL pick the best match. (C++20's std::ranges::swap finally builds this dispatch in.)
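A minimal sketch of the same two-step in generic code:
#include <utility>

template <typename T>
void sort2(T &a, T &b) {
    using std::swap;   // fallback when T ships no swap of its own
    if (b < a)
        swap(a, b);    // unqualified: ADL finds T's swap first, else std::swap
}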
If you import the right header files you suddenly have names like hex, left, plus or count in your global scope. This might be surprising if you are not aware that std:: contains these names. If you also try to use these names locally it can lead to quite some confusion.
If all the standard stuff is in its own namespace you don't have to worry about name collisions with your code or other libraries.
Another reason is surprise.
If I see cout << blah, instead of std::cout << blah I think: What is this cout? Is it the normal cout? Is it something special?
Experienced programmers use whatever solves their problems and avoid whatever creates new problems, and they avoid header-file-level using-directives for this exact reason.
Experienced programmers also try to avoid full qualification of names inside their source files. A minor reason for this is that it's not elegant to write more code when less code is sufficient unless there are good reasons. A major reason for this is turning off argument-dependent lookup (ADL).
What are these good reasons? Sometimes programmers explicitly want to turn off ADL, other times they want to disambiguate.
So the following are OK:
Function-level using-directives and using-declarations inside functions' implementations
Source-file-level using-declarations inside source files
(Sometimes) source-file-level using-directives
I agree that it should not be used globally, but it's not so evil to use locally, like in a namespace. Here's an example from "The C++ Programming Language":
namespace My_lib {
    using namespace His_lib; // Everything from His_lib
    using namespace Her_lib; // Everything from Her_lib
    using His_lib::String;   // Resolve potential clash in favor of His_lib
    using Her_lib::Vector;   // Resolve potential clash in favor of Her_lib
}
In this example, we resolved potential name clashes and ambiguities arising from their composition.
Names explicitly declared there (including names declared by using-declarations like His_lib::String) take priority over names made accessible in another scope by a using-directive (using namespace Her_lib).
I also consider it a bad practice. Why? One day I realized that the function of a namespace is to divide stuff, so I shouldn't spoil it by throwing everything into one global bag.
However, if I often use 'cout' and 'cin', I write: using std::cout; using std::cin; in the .cpp file (never in the header file as it propagates with #include). I think that no one sane will ever name a stream cout or cin. ;)
It's nice to see code and know what it does. If I see std::cout I know that's the cout stream of the std library. If I see cout then I don't know. It could be the cout stream of the std library. Or there could be an int cout = 0; ten lines higher in the same function. Or a static variable named cout in that file. It could be anything.
Now take a million line code base, which isn't particularly big, and you're searching for a bug, which means you know there is one line in this one million lines that doesn't do what it is supposed to do. cout << 1; could read a static int named cout, shift it to the left by one bit, and throw away the result. Looking for a bug, I'd have to check that. Can you see how I really really prefer to see std::cout?
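A contrived sketch of the kind of false lead you would have to rule out (the int is hypothetical, but perfectly legal):
#include <iostream>

static int cout = 0;   // unfortunate but legal file-scope name

void trace() {
    cout << 1;         // reads the int, shifts it left one bit, discards the result
    std::cout << 1;    // unambiguously the standard output stream
}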
It's one of these things that seem a really good idea if you are a teacher and never had to write and maintain any code for a living. I love seeing code where (1) I know what it does; and, (2) I'm confident that the person writing it knew what it does.
It's all about managing complexity. Using the namespace will pull things in that you don't want, and thus possibly make it harder to debug (I say possibly). Using std:: all over the place is harder to read (more text and all that).
Horses for courses - manage your complexity how you best can and feel able.
A concrete example to clarify the concern. Imagine you have a situation where you have two libraries, foo and bar, each with their own namespace:
namespace foo {
    void a(float) { /* Does something */ }
}
namespace bar {
    ...
}
Now let's say you use foo and bar together in your own program as follows:
using namespace foo;
using namespace bar;

int main() {
    a(42);
}
At this point everything is fine. When you run your program it 'Does something'. But later you update bar and let's say it has changed to be like:
namespace bar {
    void a(float) { /* Does something completely different */ }
}
At this point you'll get a compiler error:
using namespace foo;
using namespace bar;

int main() {
    a(42); // error: call to 'a' is ambiguous; should be foo::a(42)
}
So you'll need to do some maintenance to clarify that 'a' meant foo::a. That's undesirable, but fortunately it is pretty easy (just add foo:: in front of all calls to a that the compiler marks as ambiguous).
But imagine an alternative scenario where bar changed to look like this instead:
namespace bar {
    void a(int) { /* Does something completely different */ }
}
At this point your call to a(42) suddenly binds to bar::a instead of foo::a and instead of doing 'something' it does 'something completely different'. No compiler warning or anything. Your program just silently starts doing something completely different than before.
When you use a namespace you're risking a scenario like this, which is why people are uncomfortable using namespaces. The more things in a namespace, the greater the risk of conflict, so people might be even more uncomfortable using namespace std (due to the number of things in that namespace) than other namespaces.
Ultimately this is a trade-off between writability and reliability/maintainability. Readability may factor in as well, but I could see arguments for that going either way. Normally I would say reliability and maintainability are more important, but in this case you constantly pay the writability cost for a fairly rare reliability/maintainability impact. The 'best' trade-off depends on your project and your priorities.
Consider
// myHeader.h
#include <sstream>
using namespace std;
// someoneElses.cpp/h
#include "myHeader.h"
class stringstream { // Uh oh
};
Note that this is a simple example. If you have files with 20 includes and other imports, you'll have a ton of dependencies to go through to figure out the problem. The worst part is that you can get unrelated errors in other modules depending on which definitions conflict.
It's not horrible, but you'll save yourself headaches by not using it in header files or the global namespace. It's probably all right to do it in very limited scopes, but I've never had a problem typing the extra five characters to clarify where my functions are coming from.
You need to be able to read code written by people who have different style and best practices opinions than you.
If you're only using cout, nobody gets confused. But when you have lots of namespaces flying around and you see this class and you aren't exactly sure what it does, having the namespace explicit acts as a comment of sorts. You can see at first glance, "oh, this is a filesystem operation" or "that's doing network stuff".
Using many namespaces at the same time is obviously a recipe for disaster, but using JUST namespace std and only namespace std is not that big of a deal in my opinion because redefinition can only occur by your own code...
So just treat those functions as reserved names, like "int" or "class", and that is it.
People should stop being so anal about it. Your teacher was right all along. Just use ONE namespace; that is the whole point of using namespaces in the first place. You are not supposed to use more than one at the same time, unless it is your own. So again, redefinition will not happen.
I agree with the others here, but I would like to address the concerns regarding readability - you can avoid all of that by simply using typedefs at the top of your file, function or class declaration.
I usually use it in my class declaration as methods in a class tend to deal with similar data types (the members) and a typedef is an opportunity to assign a name that is meaningful in the context of the class. This actually aids readability in the definitions of the class methods.
// Header
class File
{
public:
    typedef std::vector<std::string> Lines;
    Lines ReadLines();
};
and in the implementation:
// .cpp
File::Lines File::ReadLines() // return type needs File::, since it is seen before member scope
{
    Lines lines; // inside the member, the short name suffices
    // Get them...
    return lines;
}
as opposed to:
// .cpp
vector<string> File::ReadLines()
{
    vector<string> lines;
    // Get them...
    return lines;
}
or:
// .cpp
std::vector<std::string> File::ReadLines()
{
    std::vector<std::string> lines;
    // Get them...
    return lines;
}
A namespace is a named scope. Namespaces are used to group related declarations and to keep separate
items separate. For example, two separately developed libraries may use the same name to refer to different
items, but a user can still use both:
namespace Mylib {
    template<class T> class Stack { /* ... */ };
    // ...
}

namespace Yourlib {
    class Stack { /* ... */ };
    // ...
}

void f(int max) {
    Mylib::Stack<int> s1(max); // Use my stack
    Yourlib::Stack s2(max);    // Use your stack
    // ...
}
Repeating a namespace name can be a distraction for both readers and writers. Consequently, it is possible
to state that names from a particular namespace are available without explicit qualification. For example:
void f(int max) {
    using namespace Mylib;  // Make names from Mylib accessible
    Stack<int> s1(max);     // Use my stack
    Yourlib::Stack s2(max); // Use your stack
    // ...
}
Namespaces provide a powerful tool for the management of different libraries and of different versions of code. In particular, they offer the programmer alternatives of how explicit to make a reference to a nonlocal name.
Source: An Overview of the C++ Programming Language
by Bjarne Stroustrup
An example where using namespace std causes a compilation error because of the ambiguity of count, which is also a function in the <algorithm> header:
#include <iostream>
#include <algorithm>
using namespace std;

int count = 1;

int main() {
    cout << count << endl;
}
It doesn't make your software's performance worse. The inclusion of the namespace at the beginning of your source code isn't bad in itself; whether to include the using namespace std instruction depends on your needs and on the way you are developing the software or project.
The namespace std contains the C++ standard library's functions and variables. This namespace is useful when you would often use the C++ standard functions.
As mentioned on this page:
The statement using namespace std is generally considered bad
practice. The alternative to this statement is to specify the
namespace to which the identifier belongs using the scope operator (::)
each time we declare a type.
And see this opinion:
There is no problem using "using namespace std" in your source file
when you make heavy use of the namespace and know for sure that
nothing will collide.
Some people say that it is bad practice to include using namespace std in your source files, because you are then invoking all the functions and variables from that namespace. If you were to define a new function with the same name as a function contained in namespace std, you would overload it, and it could produce problems at compile time or run time: the program will not compile or execute as you expect.
As mentioned on this page:
Although the statement saves us from typing std:: whenever
we wish to access a class or type defined in the std namespace, it
imports the entirety of the std namespace into the current namespace
of the program. Let us take a few examples to understand why this
might not be such a good thing
...
Now at a later stage of development, we wish to use another version of
cout that is custom implemented in some library called “foo” (for
example)
...
Notice how there is an ambiguity, to which library does cout point to?
The compiler may detect this and not compile the program. In the worst
case, the program may still compile but call the wrong function, since
we never specified to which namespace the identifier belonged.
It's case by case. We want to minimize the "total cost of ownership" of the software over its lifespan. Stating "using namespace std" has some costs, but not using it also has a cost in legibility.
People correctly point out that when using it, if the standard library introduces new symbols and definitions, your code can cease to compile, and you may be forced to rename variables. And yet this is probably good long-term, since future maintainers will be momentarily confused or distracted if you're using a keyword for some surprising purpose.
You don't want to have a template called vector, say, which isn't the vector known by everyone else. And the number of new definitions thus introduced in the C++ library is small enough it may simply not come up. There is a cost to having to do this kind of change, but the cost is not high and is offset by the clarity gained by not using std symbol names for other purposes.
Given the number of classes, variables, and functions, stating std:: on every one might fluff up your code by 50% and make it harder to get your head around. An algorithm or step in a method that could be taken in on one screenful of code now requires scrolling back and forth to follow. This is a real cost. Arguably it may not be a high cost, but people who deny it even exists are inexperienced, dogmatic, or simply wrong.
I'd offer the following rules:
std is different from all other libraries. It is the one library everyone basically needs to know, and in my view is best thought of as part of the language. Generally speaking there is an excellent case for using namespace std even if there isn't for other libraries.
Never force the decision onto the author of a compilation unit (a .cpp file) by putting this using in a header. Always defer the decision to the compilation unit author. Even a project that has decided to use using namespace std everywhere may find a few modules that are best handled as exceptions to that rule.
Even though the namespace feature lets you have many modules with symbols defined the same, it's going to be confusing to do so. Keep the names different to the extent possible. Even if not using the namespace feature, if you have a class named foo and std introduces a class named foo, it's probably better long-run to rename your class anyway.
An alternative to using namespaces is to manually namespace symbols by prefixing them. I have two libraries I've used for decades, both starting as C libraries, actually, where every symbol is prefixed with "AK" or "SCWin". Generally speaking, this is like avoiding the "using" construct, but you don't write the twin colons. AK::foo() is instead AKFoo(). It makes code 5-10% denser and less verbose, and the only downside is that you'll be in big trouble if you have to use two such libraries that have the same prefixing. Note the X Window libraries are excellent in this regard, except they forgot to do so with a few #defines: TRUE and FALSE should have been XTRUE and XFALSE, and this set up a namespace clash with Sybase or Oracle that likewise used TRUE and FALSE with different values! (ASCII 0 and 1 in the case of the database!) One special advantage of this is that it applies seamlessly to preprocessor definitions, whereas the C++ using/namespace system doesn't handle them. A nice benefit of this is that it gives an organic slope from being part of a project to eventually being a library. In a large application of mine, all window classes are prefixed Win, all signal-processing modules Mod, and so on. There's little chance of any of these being reused, so there's no practical benefit to making each group into a library, but it makes it obvious in a few seconds how the project breaks into sub-projects.
I agree with others – it is asking for name clashes, ambiguities and then the fact is it is less explicit. While I can see the use of using, my personal preference is to limit it. I would also strongly consider what some others pointed out:
If you want to find a function name that might be a fairly common name, but you only want to find it in the std namespace (or the reverse – you want to change all calls that are not in namespace std, namespace X, ...), then how do you propose to do this?
You could write a program to do it, but wouldn't it be better to spend time working on your project itself rather than writing a program to maintain your project?
Personally, I actually don't mind the std:: prefix. I like the look more than not having it. I don't know if that is because it is explicit and says to me "this isn't my code... I am using the standard library" or if it is something else, but I think it looks nicer. This might be odd given that I only recently got into C++ (used and still do C and other languages for much longer and C is my favourite language of all time, right above assembly).
There is one other thing, although it is somewhat related to the above and to what others point out. While this might be bad practice, I sometimes reserve std::name for the standard library version and name for a program-specific implementation. Yes, indeed this could bite you and bite you hard, but it all comes down to the fact that I started this project from scratch and I'm the only programmer for it. Example: I extend std::string and call it string, with helpful additions. I did it in part because of my C and Unix (+ Linux) tendency towards lower-case names.
Besides that, you can have namespace aliases. Here is an example of where they are useful that might not have been mentioned yet. I use the C++11 standard, specifically with libstdc++. Well, it doesn't have complete std::regex support. Sure, the code compiles, but it throws an exception along the lines of it being an error on the programmer's end, when really it is a lack of implementation.
So here's how I solved it. Install Boost's regex, and link it in. Then, I do the following so that when libstdc++ has it implemented entirely, I need only remove this block and the code remains the same:
namespace std
{
    using boost::regex;
    using boost::regex_error;
    using boost::regex_replace;
    using boost::regex_search;
    using boost::regex_match;
    using boost::smatch;
    namespace regex_constants = boost::regex_constants;
}
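With that block in effect, call sites can already be written against the std spelling. A sketch (note that adding names to namespace std like this is formally not sanctioned by the standard, as discussed next):
#include <string>
#include <boost/regex.hpp>   // the aliasing block above follows this include

bool has_digits(const std::string &s)
{
    std::regex re("[0-9]+");           // actually boost::regex, via the using-declaration
    return std::regex_search(s, re);   // actually boost::regex_search
}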
I won't argue on whether that is a bad idea or not. I will however argue that it keeps it clean for my project and at the same time makes it specific: True, I have to use Boost, but I'm using it like the libstdc++ will eventually have it. Yes, starting your own project and starting with a standard (...) at the very beginning goes a very long way with helping maintenance, development and everything involved with the project!
Just to clarify something: I don't actually think it is a good idea to use a name of a class/whatever in the STL deliberately and more specifically in place of. The string is the exception (ignore the first, above, or second here, pun if you must) for me as I didn't like the idea of 'String'.
As it is, I am still very biased towards C and biased against C++. Sparing the details, much of what I work on fits C better (but it was a good exercise and a good way to make myself a. learn another language and b. try to be less biased against objects/classes/etc., which is maybe better stated as less closed-minded, less arrogant, and more accepting). But what is useful is what some already suggested: I do indeed use list (it is fairly generic, is it not?) and sort (same thing), to name two that would cause a name clash if I were to write using namespace std;. To that end I prefer being specific, in control, and knowing that if I intend a use to be the standard one then I will have to specify it. Put simply: no assuming allowed.
And as for making Boost's regex part of std: I do that for future integration and – again, I admit fully that this is bias – I don't think it is as ugly as boost::regex::.... Indeed, that is another thing for me. There are many things in C++ that I have yet to fully accept in looks and methods (another example: variadic templates versus var args [though I admit variadic templates are very, very useful!]). Even those that I do accept were difficult, and I still have issues with them.
From my experience, if you have multiple libraries that use, say, cout, but for a different purpose, you may use the wrong cout.
For example, if I type using namespace std; and using namespace otherlib; and then type just cout (which happens to be in both), rather than std::cout (or otherlib::cout), you might use the wrong one and get errors. It's much more effective and efficient to use std::cout.
I do not think it is necessarily bad practice under all conditions, but you need to be careful when you use it. If you're writing a library, you probably should use the scope resolution operators with the namespace to keep your library from butting heads with other libraries. For application level code, I don't see anything wrong with it.
With unqualified imported identifiers you need external search tools like grep to find out where identifiers are declared. This makes reasoning about program correctness harder.
This is bad practice, often known as global namespace pollution. Problems may occur when more than one namespace has a function with the same name and signature; then it is ambiguous for the compiler which one to call, and this can all be avoided when you specify the namespace with your function call, e.g. std::cout. Hope this helps. :)
"Why is 'using namespace std;' considered a bad practice in C++?"
I put it the other way around: Why is typing five extra characters considered cumbersome by some?
Consider e.g. writing a piece of numerical software. Why would I even consider polluting my global namespace by cutting general "std::vector" down to "vector" when "vector" is one of the problem domain's most important concepts?
To answer your question, I look at it this way practically: a lot of programmers (though not all) invoke namespace std. Therefore, one should be in the habit of NOT using things that clash with, or use the same names as, what is in namespace std. That is a lot to rule out, granted, but not so much compared to the number of coherent words and pseudonyms that can, strictly speaking, be come up with.
I mean really... saying "don't rely on this being present" is just setting you up to rely on it NOT being present. You are constantly going to have issues borrowing code snippets and constantly repairing them. Just keep your user-defined and borrowed stuff in limited scope, as it should be, and be VERY sparing with globals (honestly, globals should almost always be a last resort for purposes of "compile now, sanity later"). Truly I think it is bad advice from your teacher, because using std will work for both "cout" and "std::cout", but NOT using std will only work for "std::cout". You will not always be fortunate enough to write all your own code.
NOTE: Don't focus too much on efficiency issues until you actually learn a little about how compilers work. With a little experience coding, you don't have to learn that much about them before you realize how much they are able to generalize good code into something simple. Every bit as simple as if you wrote the whole thing in C. Good code is only as complex as it needs to be.

Is there an alternative to the deprecated enclosingClass method in the Scala reflection library?

I am writing a macro to get the enclosing val/var definition. I can get the enclosing val/var symbol, but I can not get the defining tree. One solution here suggested using enclosingClass:
https://stackoverflow.com/a/18451114/11989864
But all the enclosing-tree style API is deprecated:
https://www.scala-lang.org/api/2.13.0/scala-reflect/scala/reflect/macros/blackbox/Context.html
Is there a way to implement the functionality of enclosingClass? Or to get a tree from a symbol?
The reasons for the deprecation are:
Starting from Scala 2.11.0, the APIs to get the trees enclosing by
the current macro application are deprecated, and the reasons for that
are two-fold. Firstly, we would like to move towards the philosophy of
locally-expanded macros, as it has proven to be important for
understanding of code. Secondly, within the current architecture of
scalac, we are unable to have c.enclosingTree-style APIs working
robustly. Required changes to the typechecker would greatly exceed the
effort that we would like to expend on this feature given the
existence of more pressing concerns at the moment. This is somewhat
aligned with the overall evolution of macros during the 2.11
development cycle, where we played with c.introduceTopLevel and
c.introduceMember, but at the end of the day decided to reject them.
If you're relying on the now deprecated APIs, consider using the new
c.internal.enclosingOwner method that can be used to obtain the names
of enclosing definitions. Alternatively try reformulating your macros
in terms of completely local expansion...
https://www.scala-lang.org/api/2.13.0/scala-reflect/scala/reflect/macros/Enclosures.html
Regarding getting a tree from a symbol
there's no standard way to go from a symbol to a defining tree
https://stackoverflow.com/a/13768595/5249621
Why do you need a def macro to get the enclosing val/var definition?
Maybe macro annotations can be enough:
https://docs.scala-lang.org/overviews/macros/annotations.html

Idiomatic Rust plugin system

I want to factor some code out into a plugin system. Inside my project, I have a trait called Provider which is the core of my plugin system. If you activate the feature "consumer" you can use plugins; if you don't, you are an author of plugins.
I want authors of plugins to get their code into my program by compiling it to a shared library. Is a shared library a good design decision? Plugins are limited to being written in Rust anyway.
Does the plugin host have to go the C way for loading the shared library: loading an unmangled function?
I just want authors to use the trait Provider for implementing their plugins and that's it.
After taking a look at sharedlib and libloading, it seems impossible to load plugins in an idiomatic Rust way.
I'd just like to load trait objects into my ProviderLoader:
// lib.rs
pub struct Sample { ... }

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

pub struct ProviderLoader {
    plugins: Vec<Box<dyn Provider>>
}
When the program is shipped, the file tree would look like:
.
├── fancy_program.exe
└── providers
├── fp_awesomedude.dll
└── fp_niceplugin.dll
Is that possible if plugins are compiled to shared libs? This would also affect the decision of the plugins' crate-type.
Do you have other ideas? Maybe I'm on the wrong path so that shared libs aren't the holy grail.
I first posted this on the Rust forum. A friend advised me to give it a try on Stack Overflow.
UPDATE 3/27/2018:
After using plugins this way for some time, I have to caution that in my experience things do get out of sync, and it can be very frustrating to debug (strange segfaults, weird OS errors). Even in cases where my team independently verified the dependencies were in sync, passing non-primitive structs between the dynamic library binaries tended to fail on OS X for some reason. I'd like to revisit this, find what cases it happens in, and perhaps open an issue with Rust, but I'm going to advise caution with this going forward.
LLDB and valgrind are near-essential to debug these issues.
Intro
I've been investigating things along these lines myself, and I've found there's little official documentation for this, so I decided to play around!
First let me note, as there is little official word on these properties please do not rely on any code here if you're trying to keep planes in the air or nuclear missiles from errantly launching, at least not without doing far more comprehensive testing than I've done. I'm not responsible if the code here deletes your OS and emails an erroneous tearful confession of committing the Zodiac killings to your local police; we're on the fringes of Rust here and things could change from one release or toolchain to another.
I have personally tested this on Rust 1.20 stable, in both debug and release configurations, on Windows 10 (stable-x86_64-pc-windows-msvc) and CentOS 7 (stable-x86_64-unknown-linux-gnu).
Approach
The approach I took was a shared common crate that both other crates list as a dependency, defining the common struct and trait definitions. At first, I was also going to test having a struct with the same layout, or a trait with the same definition, declared independently in both libraries, but I opted against it because it's too fragile and you wouldn't want to do it in a real design. That said, if anybody wants to test this, feel free to do a PR on the repository above and I will update this answer.
In addition, the Rust plugin was declared dylib. I'm not sure how compiling as cdylib would interact, since I think it would mean that upon loading the plugin there are two versions of the Rust standard library hanging around (since I believe cdylib statically links the Rust stdlib into the shared object).
Tests
General Notes
The structs I tested were not declared #[repr(C)]. This could provide an extra layer of safety by guaranteeing a layout, but I was most curious about writing "pure" Rust plugins with as little "treating Rust like C" fiddling as possible. We already know you can use Rust via FFI by wrapping things in opaque pointers, manually dropping, and such, so it's not very enlightening to test this.
The function signature I used was pub fn foo(args) -> output with the #[no_mangle] attribute. It turns out that rustfmt automatically changes extern "Rust" fn to simply fn. I'm not sure I agree with this in this case, since they are most certainly "extern" functions here, but I will choose to abide by rustfmt.
Remember that even though this is Rust, this has elements of unsafety, because libloading (or the unstable DynamicLib functionality) will not type-check the symbols for you. At first I thought my Vec test was proving you couldn't pass Vecs between host and plugin, until I realized that on one end I had Vec<i32> and on the other I had Vec<usize>.
Interestingly, there were a few times I pointed an optimized test build at an unoptimized plugin and vice versa, and it still worked. However, I still can't in good faith recommend building plugins and host applications with different toolchains, and even if you do, I can't promise that for some reason rustc/LLVM won't decide to do certain optimizations on one version of a struct and not another. In addition, I'm not sure whether passing types through FFI prevents certain optimizations, such as null-pointer optimizations, from occurring.
You're still limited to calling bare functions, no Foo::bar because of the lack of name mangling. In addition, due to the fact that functions with trait bounds are monomorphized, generic functions and structs are also out. The compiler can't know you're going to call foo<i32> so no foo<i32> is going to be generated. Any functions over the plugin boundary must take only concrete types and return only concrete types.
Similarly, you have to be careful with lifetimes for similar reasons, since there's no static lifetime checking Rust is forced to believe you when you say a function returns &'a when it's really &'b.
Native Rust
The first tests I performed used no custom structures; just pure, native Rust types. This would give a baseline for whether this is even possible. I chose three baseline types: &mut i32, &mut Vec, and Option<i32> -> Option<i32>. These were all chosen for very specific reasons: the &mut i32 because it tests a reference, the &mut Vec because it tests growing heap memory allocated in the host application, and the Option for the dual purpose of testing passing by move and matching a simple enum.
All three work as expected. Mutating the reference mutates the value, pushing to a Vec works properly, and the Option works properly whether Some or None.
Shared Struct Definition
This was meant to test whether you could pass a non-builtin struct with a common definition on both sides between plugin and host. This works as expected, but as mentioned in the "General Notes" section, there is no promise that Rust won't optimize the structure layout on one side and not the other. Always test your specific use case and use CI in case it changes.
Boxed Trait Object
This test uses a struct whose definition is only defined on the plugin side, but implements a trait defined in a common crate, and returns a Box<Trait>. This works as expected. Calling trait_obj.fun() works properly.
At first I actually anticipated there would be issues with dropping without making the trait explicitly have Drop as a bound, but it turns out Drop is properly called as well (this was verified by setting the value of a variable declared on the test stack via raw pointer from the struct's drop function). (Naturally I'm aware drop is always called even with trait objects in Rust, but I wasn't sure if dynamic libraries would complicate it).
NOTE:
I did not test what would happen if you load a plugin, create a trait object, then drop the plugin (which would likely close it). I can only assume this is potentially catastrophic. I recommend keeping the plugin open as long as the trait object persists.
Remarks
Plugins work exactly as you'd expect from linking a crate naturally, albeit with some restrictions and pitfalls. As long as you test, I think this is a very natural way to go. It makes symbol loading more bearable, for instance, if you only need to load a new function and then receive a trait object implementing an interface. It also avoids nasty C memory leaks caused by being unable to, or forgetting to, load a drop/free function. That said, be careful, and always test!
There is no official plugin system, and you cannot do plugins loaded at runtime in pure Rust. I saw some discussions about doing a native plugin system, but nothing is decided for now, and maybe there will never be any such thing. You can use one of these solutions:
You can extend your code with native dynamic libraries using FFI. To use the C ABI, you have to use repr(C), the no_mangle attribute, extern, etc. You will find more information by searching for Rust FFI on the internet. With this solution, you must use raw pointers, which come with no safety guarantees (i.e. you must use unsafe code).
Of course, you can write your dynamic library in Rust, but to load it and call its functions, you must go through the C ABI. This means that the safety guarantees of Rust do not apply there. Furthermore, you cannot use higher-level Rust features such as traits, enums, etc. between the library and the binary.
If you do not want this complexity, you can use a language designed to extend Rust, with which you can dynamically add functions to your code and execute them with the same guarantees as in Rust. This is, in my opinion, the easier way to go: if you have the choice, and if execution speed is not critical, use this to avoid tricky C/Rust interfaces.
Here is a (not exhaustive) list of languages that can easily extend Rust:
Gluon, a functional language like Haskell
Dyon, a small but powerful scripting language intended for video games
Lua with rlua or hlua
You can also use Python or JavaScript, or see the list in awesome-rust.

What types of Macros/Syntax Extensions/Compiler Plugins are there?

I am very confused by the many terms used for several macro-like things in the Rust ecosystem. Could someone clarify what macros/syntax extensions/compiler plugins there are as well as explain the relationship between those terms?
You are right: it is confusing. Especially, because most of those features are unstable and change fairly often. But I'll try to summarize the current situation (December 2016).
Let's start with the Syntax Extension: it's something that has to be "called" or annotated manually in order to have any effect. There are three kinds of syntax extensions, which differ in the way you annotate them:
function-like syntax extensions: these are probably the most common syntax extensions, also called "macros". The syntax for invoking them is foo!(…) or (and this is pretty rare) foo! some_ident (…), where foo is the macro's name. Note that the () parentheses can be replaced by [] or {}. Function-like syntax extensions can be defined either as a "macro by example" or as a "procedural macro".
attribute-like syntax extensions: these are invoked like #[foo(…)], where the parentheses are not necessary and, again, foo is the name of the syntax extension. The item the attribute is attached to can then be modified or extended with additional items (decorator).
custom derives: most Rust-programmers have already used the #[derive(…)] attribute. Of course, derive itself can be seen as attribute-like syntax extension. But it can also be extended, which is then invoked like #[derive(Foo)], where Foo is the name of the custom derive.
Most of these syntax extensions are also "compiler plugins". The only exception are function-like syntax extensions which are defined via "macro by example" (meaning macro_rules! syntax). Macros by example can be defined in your source code without writing a compiler plugin whatsoever.
But there are also compiler plugins that aren't syntax extensions. Those types of compiler plugins are linters or other plugins which run some code at some stage of the compiling process. They don't need to be invoked manually: once loaded, the compiler will call them at certain points during compilation.
All compiler plugins need to be loaded – either via #![plugin(foo)] at the crate root or with the -Zextra-plugins=foo,bar command line parameter – before they can have any effect!
Compiler plugins are currently unstable, therefore you need a nightly compiler to use them. But the "Macros 1.1" RFC will probably be stabilized soon, which means that a small subset of compiler plugins can then be used with the stable compiler.
Useful links:
Documentation about registering compiler plugins
Book Chapter about compiler plugins

How to use CoffeeScript together with Google Closure

Recently I have started to use Google Closure Tools for my JavaScript development. Until now, I used to write my code in CoffeeScript; however, the JavaScript generated by CoffeeScript seems to be incompatible with Google Closure Compiler's advanced mode.
Is there any extension to the CoffeeScript compiler adding Google Closure support?
There are various tools aiming to make CoffeeScript usable with Google Closure Tools. I will describe three of them:
Bolinfest's CoffeeScript fork
Features:
Fixed function binding, loops, comprehensions, in operator and various other incompatibilities
Fixed classes syntax for Google Closure
Automatic generation of @constructor and @extends annotations
Automatically inserts goog.provide statement for each class declared
Python-like include namespace as alias support, translated to goog.require and goog.scope
Drawbacks:
Constructor has to be the very first statement in the class
Cannot use short aliases for classes inside the class (i.e. class My.Long.Named.Car cannot be referred to as Car in the class definition, as pure CoffeeScript allows)
User written JsDoc comments don't get merged with compiler generated ones
Missing provide equivalent for include
No support for type casting; this can be done only by inserting pure JavaScript code inside backticks "`"
Based on outdated CoffeeScript 1.0
Read more at http://bolinfest.com/coffee/
My CoffeeScript fork
Disclaimer: I am the author of this solution
This solution is inspired by Bolinfest's work and extends it in these ways:
Constructor can be placed anywhere inside the class
Short aliases for classes work using goog.scope
User-written JsDoc comments get merged with compiler-generated ones; user-written @constructor and @extends annotations are replaced by generated ones
Each namespace is provided or included at most once; a namespace that is provided is never included. You can provide a namespace with the keyword provide
Support for typecasting using cast<typeToCastTo>(valueToBeCast) syntax
Based on CoffeeScript 1.6
Read more at https://github.com/hleumas/coffee-script/wiki
Steida's Coffee2Closure
Unlike the two solutions above, Steida's Coffee2Closure is a post-processor of the JavaScript generated by the upstream, untweaked CoffeeScript compiler. This approach has one major advantage: it will need no updates, or only slight ones, as CoffeeScript development continues, and it will stay current. However, by the very nature of this approach, some of the features cannot be delivered. Currently it fixes only classes and bindings, loops, the in operator and a few other incompatibilities. It has no support for automatic annotation generation, type casting or custom keywords.
https://github.com/Steida/coffee2closure